Microsoft Azure in Action MEAP v06
Copyright 2023 Manning Publications
welcome
1 What is Microsoft Azure?
2 Using Azure: Azure Functions and Image Processing
3 Using Virtual Machines
4 Networking in Azure
5 Storage
6 Security
7 Serverless
8 Optimizing Storage
MEAP Edition
Version 6
https://livebook.manning.com/book/microsoft-azure-in-action/discussion
manning.com
welcome
You are pretty special, you know that? You are one of the very first amazing
people to have a peek at my upcoming book, Microsoft Azure in Action. This
book isn’t like any other tech book. At least not entirely. It is a journey into
cloud computing where you are at the center. I am here to convey over 10
years of Microsoft Azure experience to you in a way that is free of jargon and
fluff.
The book has four parts. Part 1 introduces Azure to your world, covers the
benefits and advantages of cloud computing, and goes straight into building
a photo resizing app with serverless. Yes, really. Part 2
is all about those fundamentals, and getting them nailed down, as the rest of
the book builds on them. Part 3 is managing data in all the various ways that
Azure offers (there are many flavors). Finally, part 4 is about DevOps,
security, and performance.
Because your feedback is essential to creating the best book possible, I hope
you’ll be leaving comments in the liveBook Discussion forum. After all, I
may already know how to do all this stuff, but I need to know if my
explanations are working for you! In particular, I want to know if you are
enjoying yourself as you read through the chapters.
- Lars Klint
In this book
Azure is available globally, which means Microsoft has built data centers in
many regions around the world. When you create an application, it doesn’t
matter whether you create it in Australia or Norway[1]; the approach and the
commands you use are the same. This makes it very simple to create products
that are close to your customers but still scale globally.
At the time of writing this book, Azure has more than 65 regions, each
containing one to four data centers, for a total of over 160 data centers. And it is
growing all the time. Some regions are restricted to government bodies and
their contractors, while operating in China has very special conditions. Figure
1.1 shows some of the current Azure regions available.
Figure 1.1: A selection of some of the many Azure Regions spanning the globe.
To place Azure in the context of other cloud platforms, there are three main
types of clouds:
Public – Anyone can set up an account and use the services offered on a
public cloud. The common offerings are Azure, AWS and Google Cloud
Platform (GCP), among others.
Private – A company can create a private cloud, which is accessible only
to that company. They host their own hardware and abstraction layers.
Hybrid – Many companies, and especially government departments, are
not ready to move to the cloud 100% and they will have some services
on-premises and some in the cloud. This is called a hybrid cloud
approach and mixes public and private clouds.
Azure is a public cloud offering, providing IaaS, PaaS and some SaaS
products, although you can also use Azure as part of a hybrid cloud setup.
Let me show you some examples of using Azure that make a lot of sense in
the real world.
Mr. Wayne has a small website that includes an online shop selling various
items, mostly black in color. The current setup for the website is hosted on
premises by Mr. Wayne, as shown in Figure 1.3.
An example of how Mr Wayne can improve his business and web shop using
Azure services is shown in Figure 1.4.
This will partly solve the latency issue using Azure DNS (by resolving the
location of your site faster). Peaks in demand can be managed by scaling the
App Service Plan horizontally (creating more computing instances), there is
no hardware to maintain, and regular backups are made of the SQL
databases and the App Service.
Using Azure, all of the above issues have been addressed, and on top of that
B. Wayne saves the costs of owning hardware, managing internet connections
and more. As his business grows, scaling the cloud architecture with it is vastly
easier. You will learn more about optimizing performance in Azure later in
the book as well.
As a result, Natasha has more time to focus on other projects. She is saving
part of her company budget, as she has avoided having to buy extra hardware
and instead only pays for the storage as it grows in size. She can now say with
confidence that the company’s intellectual property is backed up and secured.
With the solution in hand, Clark can now fly off like a bird. Or a plane.
She starts using Azure Resource Manager (ARM) templates, which describe
resources using JSON syntax.
{
  "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "tagValues": {
      "type": "object",
      "defaultValue": {
        "Environment": "<Prod>",
        "Projectid": "<1234>",
        "Projectname": "<some name>",
        "Subcontractor": "<some vendor>",
        "Routineid": "<some number>",
        "Routinename": "<some name>",
        "Applicationname": "<some application name>"
      }
    },
    "GalleryImageSKU": {
      "type": "string",
      "metadata": {
        "description": "Image SKU."
      },
      "defaultValue": "2016-Datacenter"
    },
    "GalleryImagePublisher": {
      "type": "string",
      "metadata": {
        "description": "."
      },
      "defaultValue": "MicrosoftWindowsServer"
    },
… Much more
Using ARM templates like the one above, Carol can fine-tune the deployment of the
VMs, which not only achieves her goals of fewer errors, more speed and more time
free for other things, but also saves money on running unnecessary VMs, lets
her use source control to keep track of version history, and lets her integrate the VM
creation workflow with other functions in the business. Win.
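As a rough illustration (and not part of Carol's actual setup), a template like this could also be deployed from .NET code with the Azure.ResourceManager SDK; the resource group name, deployment name and file path below are placeholders, and the Azure CLI or PowerShell could do the same job. A minimal sketch:

// Minimal sketch: deploy an ARM template programmatically with Azure.ResourceManager.
// Requires the Azure.Identity and Azure.ResourceManager NuGet packages.
// The resource group name, deployment name and file path are placeholders.
using System;
using System.IO;
using Azure;
using Azure.Identity;
using Azure.ResourceManager;
using Azure.ResourceManager.Resources;
using Azure.ResourceManager.Resources.Models;

var arm = new ArmClient(new DefaultAzureCredential());
SubscriptionResource subscription = await arm.GetDefaultSubscriptionAsync();
ResourceGroupResource resourceGroup =
    (await subscription.GetResourceGroupAsync("vm-project-rg")).Value;

// Wrap the template file in a deployment request; parameters could be supplied here too.
var deployment = new ArmDeploymentContent(
    new ArmDeploymentProperties(ArmDeploymentMode.Incremental)
    {
        Template = BinaryData.FromString(File.ReadAllText("azuredeploy.json"))
    });

// Hand the template to Azure Resource Manager and wait for the deployment to finish.
await resourceGroup.GetArmDeployments()
    .CreateOrUpdateAsync(WaitUntil.Completed, "vm-deployment-001", deployment);

Whichever tool submits it, the template itself stays the single source of truth for what gets deployed, which is exactly what makes it version-control friendly.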
1.3.1 Scalability
Imagine you have a website that runs on your on-premises web server. The
hardware is designed to handle a specific maximum capacity of traffic to the
website. This capacity can be limited by the CPU power, the amount of
RAM, how fast a network connection you have and much else. When the
server is maxed out, how do you ensure users of your Nicholas Cage meme
generator service can still use it? How can you manage seasonal patterns in
your traffic?
1.3.2 Reliability
Nothing can erode trust in your online services as much as reliability issues.
If your services are not responding, run out of storage space or lose network
connection, your users’ trust in your service will disappear faster than ice
cream on a hot summer day. When you use Azure to host your services,
reliability is part of the package you pay for. In this book you will learn how
to take full advantage of the cloud reliability using VMs, virtual networks,
caching data to speed up your applications, drilling into service logs and so
much more.
Azure services are at their core built for reliability. Azure regions are paired
to ensure business continuity and disaster recovery by being able to switch
from one to the other at short notice. Services are fault tolerant by using
availability zones for hardware failures, power outages and any other
gremlins that might rear their ugly heads. Azure Storage is resilient by design
and is at a minimum replicated three times within each region, even though
you just have one Storage account, but much more about storage and
availability zones later in the book. In a few chapters’ time you will also learn
about Cosmos DB and its geographic replication and reliability. That is one
of my favorite services.
Azure is expanding all the time and as the cloud platform grows, gets more
users and streamlines products and services, you will benefit from economies
of scale. Azure computing services, such as Virtual Machines, keep getting
cheaper, and it is the same with storage. This makes keeping up with your
product growth much more manageable.
1.3.4 Support
If a service on Azure isn’t performing, there is a bug in a feature, or you need
help with setting up a service, there are multiple ways you can get support.
Timely and well-articulated support can often make the difference between
success and mediocrity. If you are stuck on a problem and waiting for help,
you want that support to be both quick and accurate. This book can form part
of the support network for your project by teaching you the best practices for
both individual services and complex architecture.
1.3.5 Compliance
1.4 Costs
Costs in Azure, and in cloud computing in general, have a reputation for “blowing
out without you knowing”. You might have heard stories of Virtual
Machines racking up thousands of dollars in cost overnight, or some service
left running that ruins the company budget. Yes, it happens. No, it is not
common.
Throughout this book costs and pricing will be treated as a “fourth pillar” of
cloud computing, with compute, network and storage being the first three.
While you won’t learn about every single price and pricing tier, because
watching paint dry is more interesting, you will learn how to keep costs down
as we go through the wonderful world of cloud computing with Azure.
Cost saving is a definite plus for using Azure, so let’s go over some of the
ways Azure can be very cost effective and benefit you, your company and the
evil overlords.
You get USD200 to spend on any Azure service you want, such as
Azure Kubernetes Services (AKS) or Azure Cognitive Services.
There are services that are free for twelve months, such as a 250GB
instance of SQL Server or 750 hours of use on certain Virtual Machine instances.
And then there are services that are always free, which include Cosmos
DB free tier, 1 million requests every month for Azure Functions and 50
Virtual Networks.
With a free account you have access to enough services and resources that
you can build a prototype, test an unfamiliar service or try out some of the
examples in this book. Later in this chapter you will learn how to create a free
Azure account.
1.4.2 PAYG
Pay as You Go (PAYG) is the most common, and most expensive, billing
model. As the name implies you pay for what you use, whether that is 24
minutes on a Virtual Machine, 300,000 RU/s on Cosmos DB or another App
Service Plan.
PAYG usage is billed monthly, and while it embodies the idea of cloud
computing and only paying for what you use, it is also the most expensive
way, when looking at cost per unit. For a lot of Azure services, PAYG is the
way you are billed for use, but some services have ways to make them
cheaper. Much cheaper. Read on.
It’s like renting a car vs. leasing a car. If you rent a car, you pay a higher
daily price, but you only have it for a few days. The rental company needs to
set the price higher, as they might not know when the car will be rented
again, they have to clean it, and staff needs to manage it all. On the other
hand, if you lease a car, you commit to a much longer period of using the car,
often several years. Your daily cost is greatly reduced, and the car company
doesn’t need to clean it, it doesn’t sit idle, and fewer staff can manage it. The
same logic applies to Azure reserved VM instances.
Spot VMs are excellent for interruptible processes, for testing and for
development. Azure will even tell you how likely you are to be evicted at
what times, to let you plan your use even better. Oh yeah, don’t use them for
any production purposes whatsoever. It’ll end in tears.
1.4.5 Billing
Billing of Azure services comes in three delicious flavors:
How does all this work together to form your invoice then? That is almost a
topic for a whole other book, as cloud computing invoices can be notoriously
difficult to decipher. Throughout the book we will continue to make billing
and pricing part of the discussion, as your manager will want to know what it
costs.
1.5.1 On premises
While not a topic for this book, having on-premises infrastructure is a
solution that is right for many and has worked for decades. On-premises
infrastructure can provide a number of benefits due to the greater control you
have, especially over the physical premises and hardware.
Sovereignty of data can be very specific. You know exactly where your
data is stored, which is critical for legislation in some countries.
Security of hardware and software can conform more easily to certain
company-specific requirements.
Existing investment in infrastructure could warrant using it, as the
cost is low for new applications.
The move to cloud computing for more and more businesses means growth
for on-premises infrastructure is slowing down, especially brand-new,
fresh-smelling setups. Depending on where you get your statistics from, more than
90% of companies use a cloud platform in some way.
Multi-cloud most often means “two clouds”. Every time a company decides
to use an additional cloud, they effectively double their investment.
You need to train engineers to use the new features, services, APIs and
technology.
You have to train architects in what the best practices are.
Managers must learn how the billing side of things work.
Quality assurance must understand the intricacies of services to test for
implementation flaws.
Security engineers need to understand vulnerabilities and weak points in
the cloud platform.
One of the issues with multi-cloud can be that there isn’t a single entity inside
the organisation that is controlling what the strategy is and how to implement
it. Various departments can sometimes decide independently which cloud to
use for a project. Often this leads to a pile of cloud spaghetti that becomes
increasingly difficult to unravel and consume.
This isn’t to say there isn’t value in multi-cloud when managed and
implemented with care and thoughtful process. Many organisations
successfully implement a multi-cloud strategy, and benefit from a broader
selection of cloud services, cost benefits and innovation opportunities.
The Portal lets you investigate logs, set up alerts, monitor your security
posture, implement policies, manage users and so much more. Almost
everything you can do in Azure you can do in the Portal. The few exceptions
are services in preview, and specific features in non-native Azure services
such as Azure Container Instances and Azure Kubernetes Services.
A lot of the examples and tutorials in this book will be using the Azure Portal
too. It is an important tool, and Azure professionals use it. It isn’t the only
tool though.
The Azure command line interface is just that: an interface for your favorite
command line tool, such as Bash. As the name implies, there are no visual
interactions or buttons you can click on. It is all white text on black
background. Or pink on white, or whatever color you choose, but no buttons!
Where the Azure Portal is visual and you click through sections of screens, or
panes as they are called, and forms, the CLI is direct and fast. Because of
these benefits, many Azure professionals will use the CLI day-to-day.
If you think the Azure CLI is appealing, but want a sprinkling of scripting
and configuration, PowerShell is here. While you can use PowerShell as an
ordinary command line tool, the real power is two-fold.
PowerShell isn’t only for Azure though. It is a tool and platform that can be
used to manage Windows in general, and it runs on macOS and Linux
platforms too. You’ll get to use PowerShell throughout this book as well.
1.6.4 SDKs
Using the Azure CLI or PowerShell can fall short if you want to
integrate Azure services and APIs with your own applications. While those
two tools are the preference of systems administrators for managing
infrastructure and spinning up new systems, developers prefer to use a
software development kit, commonly known as an SDK.
Writing code for an application or service that uses Azure cloud computing
means using the Azure SDK, which is available for .NET, Java, Python and
JavaScript to name the most common programming languages. The SDK is a
collection of libraries designed to make using Azure services in your
programs easy and smooth sailing. Cloud sailing. We will dig into the Azure
SDK and its use later in the book.
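To give you a first taste, here is a minimal, hedged sketch using the Azure.Storage.Blobs and Azure.Identity libraries to upload a file to Blob storage; the storage account URL, container name and file name are placeholders, not anything from a later chapter.

// Minimal sketch: upload a local file to Blob storage with the Azure SDK for .NET.
// Requires the Azure.Storage.Blobs and Azure.Identity NuGet packages.
// The account URL, container name and file name are placeholders.
using System;
using Azure.Identity;
using Azure.Storage.Blobs;

var service = new BlobServiceClient(
    new Uri("https://mystorageaccount.blob.core.windows.net"),
    new DefaultAzureCredential());               // signs in with your Azure identity

BlobContainerClient container = service.GetBlobContainerClient("photos");
await container.CreateIfNotExistsAsync();        // create the container if it is missing

BlobClient blob = container.GetBlobClient("holiday.jpg");
await blob.UploadAsync("holiday.jpg", overwrite: true);   // upload the local file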
1.8 Summary
Azure is applicable to real-world scenarios and can be effective and
desirable for a variety of projects and companies.
Scalability and reliability are two of the main factors that define Azure
and cloud computing in general.
Cost can be managed by using a combination of PAYG services,
reserved instances of VMs and spot pricing VMs.
Azure has services that are always free, such as Cosmos DB free tier, a
certain number of Azure Function executions and some Azure App
Service Plans.
The three types of computing relevant to Azure are cloud, on-premises
and multi-cloud.
The main ways to interact with Azure are using the Azure Portal, the
Azure CLI, PowerShell, or integrating with an Azure software
development kit.
You can create a free Azure account to get started trying out services
and features.
[1] Not all services are available in all regions.
[2]Any apparently useless activity which, by allowing you to overcome
intermediate difficulties, allows you to solve a larger problem.
[3] https://docs.microsoft.com/en-us/azure/
[4] https://docs.microsoft.com/en-us/answers/products/azure?product=all
[5] 16 February 2021
2 Using Azure: Azure Functions and
Image Processing
This chapter covers
Creating cloud infrastructure for an application
Exploring and creating integrated logic work flows
Interconnecting Azure services
Exploring a serverless architecture
Now that you have an idea why Azure is so popular and why it is a great
choice for computing infrastructure, let us dive into a practical example of
using Azure to solve a real problem of storing images from emails and
compressing them. We will keep it simple, but that doesn’t mean it will lack
in power and application options.
Note:
This example can be done entirely with the free subscription tier in Azure. It
is recommended to use a new Azure account so that the free credits, hours and
services are still available.
To solve this problem, you have been tasked with creating a process or
application that gets the images from the email account and gets them ready
for display on the website. This will include storing the images in both their
original and web-optimized formats. There are several project outcomes that
need to be achieved:
And while there are project goals to achieve, there are also some parts we
explicitly leave out. It is as important to know what not to include, as it is
what to include.
Attachments, which should be images, are then stored in Azure Blob Storage.
Blob storage is a versatile, ever-expanding storage mechanism for storing
pretty much anything. It is a native part of an Azure storage account, and this
is where the images will live and where the album website will get them from.
Once there are new images in Blob storage, an Azure Function will be
triggered and process them, compress them, and store the compressed copy
back in Blob storage. That way the original image is kept, as well as a
version that is more efficient to show on the website.
All of these Azure services will live in a resource group, which is a logical
container for any Azure service. Make sense? Alright, let’s start building with
cloud.
Go through the wizard as shown in Figure 2.5 and fill in the values as you see
best. Follow a naming convention that works for you as well to get into the
habit of good naming of resources.[1]
Figure 2.5: Enter values for the resource group.
It doesn’t matter what name you give a resource group; they are all valid.
However, it is recommended to decide on a naming strategy before you start. I
like using the format <project name>RG. This makes it instantly clear which
project I am dealing with and the kind of resource, which is a resource group
in this case. Often companies and teams will have a naming strategy they
follow for just these reasons.
The wizard experience is a taste of how resources are created through the
Azure Portal. As you will learn later in the book, whichever way you create
resources in Azure it is always done through the Azure Resource Manager.
You get the same result regardless of the tool you choose. For now, we’ll use
the Azure Portal wizard experience and click Review + Create when you
have filled in the details.
Figure 2.6: When validation has passed, you can create the resource group.
Finally, click the Create button as shown in Figure 2.6. This is the first step
done for pretty much any new project or logical group in Azure. The resource
group is a fundamental part of your Azure life, and you will create a ton of
them. As long as a ton is less than 800 resource groups, of course.
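To underline the point that every tool ends up at Azure Resource Manager, here is a minimal sketch of the same resource group being created from C# with the Azure.ResourceManager SDK instead of the Portal wizard. Treat it as an illustration rather than a step in this chapter; the region is just an example.

// Minimal sketch: create the ActionRG resource group programmatically.
// Portal, CLI or SDK, the request always goes to Azure Resource Manager.
// Requires the Azure.Identity and Azure.ResourceManager NuGet packages.
using Azure;
using Azure.Core;
using Azure.Identity;
using Azure.ResourceManager;
using Azure.ResourceManager.Resources;

var arm = new ArmClient(new DefaultAzureCredential());
SubscriptionResource subscription = await arm.GetDefaultSubscriptionAsync();

// "ActionRG" matches the naming convention used in this chapter; pick your own region.
await subscription.GetResourceGroups().CreateOrUpdateAsync(
    WaitUntil.Completed,
    "ActionRG",
    new ResourceGroupData(AzureLocation.AustraliaEast));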
Figure 2.8: Click Create to create a new container within the storage account
While clicking the “+ Create” button is a simple act, a lot goes on behind the
scenes throughout the wizard experiences. Part of the appeal of cloud
computing with Azure is the abstraction of “yak shaving”, which I mentioned
in the previous chapter: tasks that have to be done before you get to the tasks
you really want to do.
Alright, time to go through another wizard and fill in the necessary values to
create a storage account as shown in Figure 2.9. Go on, it’ll be a beauty.
Figure 2.9: Fill in the basic values for the storage account.
As you go through creating more and more resources on Azure, this wizard
experience will become familiar. While they all follow the same format of
“Basics” -> “Networking” -> other steps, the amount of data needed to create
a resource varies a lot. A resource group, like the one you created a minute ago,
requires very few bits of information to get created. Others require a
significant amount. Often resources require other resources to be created first,
such as a resource group for our storage account.
Choose the values you want, but make sure you choose the resource group
you created before. You can leave the rest of the values as default for now.
Later in the book we go through what most of them mean in detail.
The lesson for today: Don’t accept default values as correct, as they may
affect the entire project down the line. Having said that, we are indeed going
to accept the default values in this instance. Don’t argue. I’m writing this
book.
And again, leave the default values (after reading what they do carefully of
course), then click the big blue button “Review + create”. Yes, we will skip
the Advanced and Tags steps in this example.
The Create step also has a review built in, which checks that all the values
and options you have entered are valid. Once confirmed, you can create the
storage account by clicking the “Create” button. Surprise surprise.
You have now gone through an entire experience for an Azure Portal Wizard.
This one was for creating a storage account, but all the Azure Portal wizards
are very similar. The wizard will make sure you fill in all the needed values,
but it is still your responsibility to make sure those values are accurate and
make sense for your scenario.
We are going to use containers, which can in turn store blobs, or binary large
objects. A container organizes a set of blobs, similar to a directory in a file
system. A storage account can include an unlimited number of containers,
and a container can store an unlimited number of blobs. Storage account,
container, blob (Figure 2.13).
Figure 2.13: Storage account hierarchy: blobs inside containers inside a storage account.
When you select a tier that allows some public access, a warning (Figure
2.16) will help you be completely sure that is the right level for you. For this
application you can choose Blob to allow access to the blobs from the Azure
Function we will build soon.
For this project, we need a Logic App that can interrogate an Outlook online
account and get any attachment that is in an email (Figure 2.18). After all, we
are asking customers to email us photos, so we need to collect them.
Figure 2.18: Add the Logic App and email account connection to the project.
2.2.9 Creating a Logic App
Let’s start by finding the Logic Apps section, as shown in Figure 2.19. Again,
there are multiple ways to create any resource in Azure, and this is one way.
Throughout the book you will learn more ways to create resources apart from
the Portal.
Figure 2.19: Create a new Logic App using the Azure Portal.
Using the search bar in the Azure Portal is an excellent way of finding
resources, third party services, documentation, and other relevant
information. In this case we use it to quickly navigate to Logic Apps. You’ll
see a list of all your Logic Apps like in Figure 2.20, but chances are you
won’t have any yet.
Add a new Logic App by clicking “Add”. In this case you get a dropdown,
because at the time of writing there are two ways to create Logic Apps.
Consumption, which means you pay for what you use, such as per
execution and per connector. This is the one we’ll use in this chapter.
Preview, which is a newer way that allows you to create stateful and
stateless applications. It is also single tenant, so workflows in the same
logic app and a single tenant share the same processing (compute),
storage, network, and more. This is outside the scope of this chapter.
However, as Confucius says “Friends don’t let Friends put anything
Preview into Production”.[2]
Creating a new consumption Logic App will open the familiar Azure Portal
wizard experience (Figure 2.21), where you specify all the details of your
new resource.
Figure 2.21: Fill in the details for the Logic App.
Again, you have to fill in the Basics, such as subscription, resource group,
name of the resource and region. Most services will ask for this info, whether
it is created through the Portal, the CLI or some other tool. Fill it in and then
click “Create + review”. We’ll skip the Tags section.
Azure will perform a check of your input before the resource is created, just
to make sure it all makes sense.
2.2.10 Logic App – triggers and connectors
However, clicking “Create” is only the start of bringing the Logic App to
life. The resources needed to run the Logic App have now been allocated in
Azure, but the App doesn’t do anything. Of course, that is about to change.
When the Logic App is created, you get presented with a Logic App welcome
screen (Figure 2.22).
Each step of the Logic App will look like the Outlook step in Figure 2.23.
You can clearly see what the sequence of steps is, and how each is linked.
First you need to connect your Outlook.com account, in order to receive any
emails we can work with. If you already have connections configured, you
can choose an existing one, or you can create a new connection. Either way,
click the three dots on the trigger to set it up. If you are creating a new
connection, you will be asked to log into your account and approve the
connection for use in the Logic App.
This is an important step to get right. We are now configuring the parameters
we need to select the emails we want, which in this case are the ones that
have attachments. There are many ways to select the required emails for any
scenario, so getting this right is critical to creating our image service.
Make sure you choose the following settings for the parameters
If you see parameters other than those above, then use the Add new
parameter dropdown at the bottom of the dialog to select the correct ones.
The right values are now available for us to store the image received via
email to our newly created storage container. Which brings us to adding the
storage step in the Logic App, shown in Figure 2.25.
Figure 2.25: Choose the action “Create blob” which will create a blob for us and fill it with the
email attachment.
2.2.12 Logic App operation – storing data from email to blob
storage
We want to add a step to pass the attachment data from the email step into the
container. Rather than a trigger, which is what we added to collect the email,
we are now adding an operation. There are a LOT of operations available for
Logic Apps. We are interested in creating a blob in the storage container, and
the easiest way to find that is searching for “blob” as shown in Figure 2.25.
As you might have expected there are many options for working with blobs,
and the operation we’re interested in is simply Create block blob. Click on
it, and the step is added to our Logic App as shown in Figure 2.26.
Figure 2.26: Connect the action in the Logic App to the storage account.
It isn’t enough to just add the step though. It needs to be configured both to
collect the right data and to store it in the right place. Azure does most of the
hard lifting for you and pre-populates a list of the available storage accounts
in the current Azure subscription that the user creating the Logic App has
access to. You then select the appropriate storage account, which is the one
we created earlier in the chapter. Azure now creates an authenticated
connection from the Logic App to the storage account, and you don’t have to
keep track of passwords, tokens, connection strings or any other
configuration value. Neat.
Now that we know where to put the data, it’s time to select what data to store
there, which brings us to Figure 2.29, and an excellent example of the power
and ease of Azure Logic Apps.
Figure 2.27: Choose the Attachments Content dynamic field to iterate through the email
attachments.
First, choose the container within the storage account to use by clicking the
folder icon in the Folder path field. The Add dynamic content dialog will
show and when you choose the “Attachment Content” field as input, Logic
apps will change the operation to a “For each” automatically (Figure 2.28), as
there might be more than one attachment in the email. Clever.
Figure 2.28: The operation changes to a ‘For each’ when choosing a collection of input, such as
Attachments.
Figure 2.29: Select the email properties to use.
Second, select the email properties we want to map to the blob value in order
to create a valid blob entry, shown in Figure 2.29. Click on each field and
select the following dynamic content entries from the dialog shown:
I mentioned before the power of Logic Apps, and Azure in general. With a
few clicks, you have just created a workflow that connects to an email
account, interrogates and filters those emails to only get the ones with
attachments. The data is then mapped from the email directly into a container
on a storage account, making it possible to store all the attachments for later
use. You didn’t have to write any code, manage any hardware or virtual
machines, or set up any new users and passwords. That is the power of cloud.
It lets you focus on the problem you are trying to solve, rather than all the
“yak shaving”. I love it.
Figure 2.30: The complete Logic App has run for the first time.
From now on the Logic App will continue to run until you delete or disable
it. Azure will make sure it is running and performing. If you want to be
notified of any issues or failures, you can set up alerts, but we will get to
alerts later in the book.
When you have a successful run of the Logic App, navigate to the storage
account and confirm you see the email attachment there (Figure 2.31).
Figure 2.31: The image file received on email and now stored in the storage account.
You can see all of the properties of the file, such as its public URL for
retrieval, type, size and much more. Almost there! Let’s move on to the last
part of our app infrastructure.
We are using an Azure Function for our online photo upload application,
specifically for the image resizing part as shown in Figure 2.32. It is a single
unit of work needed to complete the application.
Figure 2.32: We are now at the “resizing images with Azure Functions” part of the application.
Figure 2.33: Decide on the container name and the access level.
Figure 2.34: Find the Function App section in the Azure Portal.
Click through to the Azure Function App overview, which looks like most
other Azure service overviews. There are differences between the services,
but the experience is consistent and if you haven’t already, getting used to the
Azure Portal doesn’t take long.
To create a new Azure Function, click on the “+ Add” button and enter the
properties for the function as shown in Figure 2.35.
Subscription: Choose the same subscription used for the other services
in this chapter.
Resource Group: Choose the same resource group as for the other
services in this chapter.
Function App Name: ActionImageResizer
Publish: Code
Runtime stack: .NET
Version: 3.1
Region: Australia East. This needs to be the same region as you created
the storage account in.
Once you have the properties filled out, click on “Next: Hosting” to configure
the mechanics of the Function App (Figure 2.36).
Figure 2.36: Set up the hosting for the Azure Function App.
Every Function App needs to use a storage account for various tasks. These
can include:
Maintaining bindings for states and triggers, which is metadata for using
the features.
A file share for storing and running the function app code, as well as
logs and other metadata. After all, the code files themselves need to live
somewhere.
Queue and table storage for task hubs in Durable Functions.
Click “Next: Monitoring” to get to the next step in the wizard, which,
surprisingly, has to do with monitoring and looks like Figure 2.37.
You will see the customary sanity check before you are allowed to create the
resource. At times you can get an error here if you have chosen an incorrect
option or service, but most of the time the wizard (shall we call him
Gandalf?) keeps you out of trouble. All there is left to do is click “Create”
then go to the Function App dashboard (Figure 2.38).
Figure 2.39: The web frontend for a brand new Azure Function. Pretty, right?
As I mentioned before, the Azure Function App is the container that the
Azure Functions live inside. You get all the tools for managing the app,
which is a similar toolset to what you get to work with in a standard App
Service.
Figure 2.40: Create a new function and use the blob storage trigger template.
When you add a new function through the Azure Portal you can choose from
a range of templates that represent common ways of using Azure Functions.
From this list choose the Azure Blob Storage Trigger, and then fill in the
template details:
Click “Add” and you get a brand spanking new shiny Azure Function, ready
to go (Figure 2.41).
Figure 2.43: Create an output for the function that targets the “imageoutputcontainer” blob
storage.
Click on the Add output link in the Outputs box, which will open the Create
Output sidebar. Enter the following values for the Output properties.
When you choose Azure Blob Storage as the Binding Type, the rest of the
output parameters are updated. The Blob parameter name is the variable
name that will be returned from the function. Just like before, the Path is the
relative path in the storage account where the resized image will be
placed. The variable {rand-guid} is the name of the resized image; in this
case it will just get a random GUID[4] as the file name.
Click “OK” to create the output, and then we are ready to look at the function
itself.
2.2.20 Writing code in a function
Figure 2.44: Now that we have the input and output configured, open up the Function code.
This is just one of the many ways you can get to the Azure Function. There is
now a path for the data to go from trigger to function to output. We are going
to massage the image a bit and reduce its size for the web album. This is the
part where we first look at the code of an Azure Function. If you haven’t
been exposed to the C# programming language before, follow along for now.
You can also use other languages like Java, Python and Node.js. It is okay
not to understand all the parts, as we will get back to more code further on in
the book. For now, let’s have a first look at the code (Figure 2.45).
All this does is output a log of the name and size of the blob that was inserted
and thus triggered the function to run. Not very exciting, not very
productive. We need to update the code to resize the myBlob input parameter.
Replace the template code in the file run.csx with the delicious code shown in
Figure 2.46.
Once you have replaced the code with that in Figure 2.46, click “Save”.
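The actual listing lives in Figure 2.46, which isn’t reproduced here, so what follows is only a rough sketch of what a run.csx that resizes the incoming blob with SixLabors.ImageSharp could look like. The myBlob and name parameters come from the blob trigger template; the outputBlob parameter name and the 800-pixel width are assumptions, and outputBlob has to match the Blob parameter name you entered for the output binding.

// Rough sketch of run.csx: resize the triggering blob and write it to the output binding.
// "myBlob" and "name" come from the blob trigger template; "outputBlob" is an assumed name
// that must match the Blob parameter name configured on the output binding.
using Microsoft.Extensions.Logging;
using SixLabors.ImageSharp;
using SixLabors.ImageSharp.Formats.Jpeg;
using SixLabors.ImageSharp.Processing;

public static void Run(Stream myBlob, string name, Stream outputBlob, ILogger log)
{
    log.LogInformation($"Resizing blob: {name}, {myBlob.Length} bytes");

    using (Image image = Image.Load(myBlob))
    {
        // 800 pixels wide is an arbitrary choice; a height of 0 keeps the aspect ratio.
        image.Mutate(ctx => ctx.Resize(800, 0));
        image.Save(outputBlob, new JpegEncoder());
    }
}

Whatever the real code in the figure does, the shape is the same: read from the trigger stream, transform the image, and write to the output stream.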
You can upload any file you want to your function, but often it isn’t needed.
The whole point of functions is to keep them light and simple. Adding lots of
files defeats that purpose. However, we do need to tell the function where to
find the SixLabors.ImageSharp library, which is done in a file called
function.proj as shown below.
<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <TargetFramework>netstandard2.0</TargetFramework>
  </PropertyGroup>
  <ItemGroup>
    <PackageReference Include="SixLabors.ImageSharp" Version="1.0.3" />
  </ItemGroup>
</Project>
You can only have one target framework for a function, but you can have
multiple package references, in case you have to include multiple libraries.
This is all that is needed for the function to compile successfully (Figure
2.48).
Figure 2.48: Saving a file in the editor triggers compilation of the function.
Every time you save a new or updated file, the whole function is compiled,
and the output window at the bottom of the screen shows you any
compilation errors there might be. The function will then run whenever
triggered. For this application, the function is triggered by a file being
inserted into blob storage.
Grab your favorite email client and send off an email with an image attached.
Make sure you send it to the email address you configured in the Logic App
back in Figure 2.23. When the Logic App inserts the email attachment into
the blob storage, the blob storage trigger fires and the function runs (Figure
2.49).
Figure 2.49: The function is triggered when a new image is inserted into the storage account.
When the trigger fires off the function, a reference to the newly inserted blob
is passed to the function as well. And that is why the function can then create
a resized copy of the image and insert it into the second blob storage we
created (Figure 2.50).
Figure 2.50: The resized image in the blob container for output.
We made it to the end! That was quite the journey through Azure and various
services. You now have a fully automated application that will collect emails,
copy attachments, resize them and add them to a storage for further use.
Neat!
2.3.1 Scaling
When the service takes off and you need more computing power to handle all
the incoming email attachments, Azure will ensure any Logic App can handle
up to 300,000 requests every 5 minutes. There are other limits that come into
effect as well, such as data throughput and concurrent inbound calls, but at no
point do you have to buy more hardware or provision more virtual machines.
The scaling of Logic Apps is Azure’s responsibility, not yours. This is the
same story with Azure Functions, which will continue to spin up more
instances of the function to cope with the increased number of images
needing processing.
As you learned in chapter 1, being able to scale an application is one of the
primary benefits of using cloud computing. Resources are there when you
need them, and when you don’t, they scale right back. Where traditional data
centers and company infrastructure only scales to the capacity of the
hardware you have bought, cloud computing has almost infinite resources. Of
course, it’s not “infinite”, but it might as well be. And if you do create “The
Devourer of Resources” application that would bring Azure to its knees, that
is why there are limits on most resources. If you hit those limits, you most
likely have other problems with your application.
2.3.2 Performance
Performance used to be directly linked to the power of the hardware, how
much memory you had on the server and how fast the internet connection
was. Not with cloud computing. Performance isn’t a direct indication of your
hardware budget. Performance is both a fundamental cloud component, as
well as something that can be purchased when needed and for the time
needed. This allows even small teams to have the computing power they need
at any given time.
Applications are only valuable when they run and perform the tasks they
were made for. When hardware fails, newly deployed code has critical bugs, or
John presses the wrong switch on the admin tool (it’s always John) and the
application stops, the consequences can be expensive.
Expensive in terms of money, but reputation, trust and momentum can also
be greatly impacted.
What would happen if the image service we built in this chapter suddenly
stopped working? The best-case scenario would be for the emails to stay in
the inbox and not be processed until the app is ready again. Users of the web
album service would have to wait for their photos to appear. However, using
Azure services we get built-in tolerance of outages and faults. If part of the
underlying hardware fails for the Logic App or Azure Function, Azure will
automatically and instantly use one of several other identical pieces of
hardware in the Azure infrastructure. You wouldn’t even notice any outage.
Of course, if you mess up your own code or configuration, then Azure can
only do so much.
Perhaps by now you have noticed a few “lacking features” in the application
you have built in this chapter. And you are right. While the application works
exactly as intended by collecting email attachments and resizing photos, we
have skipped a few details, such as:
I have left a bunch of deeper-level details out on purpose, as building the
application is meant to give you an insight into the power of Azure in the real
world, rather than being bogged down with a million nuances and details.
Don’t worry if you are here for nuances and details though. There are plenty
of both in the rest of the chapters.
Before we get into the Azure details and foundational knowledge, let’s look
at one of the most enticing aspects of cloud computing: cost. But first, a
disclaimer. Cloud computing costs are notorious for being difficult to figure
out. There are lots of tools to help you both calculate costs and monitor costs,
but it is still difficult because of the nature of “compute by the minute” and
“compute by feature” pricing. Hence, the following cost analysis is an
example of how the cost of the image resizing application could turn out.
Table 2.1 outlines the cost for each of the three Azure services we have used:
Logic App, Function and Storage Account.
Service type | Region | Description | Estimated monthly cost | Estimated upfront cost
All up, this will cost you $36.94 per month. That’s it. Imagine you’d have to
buy hardware to run the code, then connect it to a network, write the
application code, and then maintain all the hardware. You’d very quickly end
up with a bill a lot larger than $36.94.
Remember that resource group we created right at the start of the chapter?
Because we placed all resources inside the resource group, all we have to do
is delete that. All the resources inside will be deleted too. Follow the steps in
Figure 2.51 to delete all the resources we just created.
Go to the resource group ActionRG in the Azure Portal, then do the following:
1. Click on “Delete resource group”, which will open the side panel on the
right.
2. Enter the name of the resource group to confirm you know what you are
doing.
3. Click “Delete” and all your sins are forgiven. Well, the resources are
gone at least.
That is all there is to it. Because we added all the resources in the same
resource group, managing them is much easier.
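If you prefer to script the cleanup, the same delete can be done with the Azure SDK for .NET. The sketch below is a minimal example, assuming the Azure.Identity and Azure.ResourceManager packages; the Portal steps above achieve exactly the same result.

// Minimal sketch: delete the ActionRG resource group and everything inside it.
// Requires the Azure.Identity and Azure.ResourceManager NuGet packages.
using Azure;
using Azure.Identity;
using Azure.ResourceManager;
using Azure.ResourceManager.Resources;

var arm = new ArmClient(new DefaultAzureCredential());
SubscriptionResource subscription = await arm.GetDefaultSubscriptionAsync();
ResourceGroupResource actionRg = (await subscription.GetResourceGroupAsync("ActionRG")).Value;

// One call removes the Logic App, the Function App, the storage account and the group itself.
await actionRg.DeleteAsync(WaitUntil.Completed);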
2.5 Summary
Every Azure resource has to be in a resource group, which is one of the
first parts to create for any project on Azure.
Storage accounts can have blob containers that can hold blobs, which
are pretty much any kind of binary object, such as images.
Logic apps are workflows consisting of connectors and triggers.
Connectors let you hook into third party services such as email clients to
manipulate their data. Triggers make the workflow start when a certain
event happens.
Azure services such as storage, Functions and Logic Apps can be
connected to create more powerful applications that can add value to
your business.
Azure Functions are single-purpose compute processes that run entirely
within Azure. They’re called functions because that is all they do: a
single function.
It is not uncommon to build entire applications from hundreds or even
thousands of Azure Functions, each responsible for a single purpose.
This makes it simple to build, debug and maintain.
Azure application lifecycles can be much easier to manage than
traditional applications, as you can focus only on the services you
interact with and not worry about hardware and other infrastructure.
Azure Services can be extremely cost effective, such as using free Azure
Function executions, only paying for the storage you are using (and not
the whole allocation), and only paying for the actual executions of a
Logic App.
One of the pillars of cloud computing, virtual machines are the backbone of
almost any application, whether explicitly or implicitly. There are few
company IT infrastructure ecosystems that don’t require a virtual machine in
some capacity to get the job done, and because they are a “raw” machine that
you can use for almost any computing task, they are very popular and there
is a vast array of types, sizes and flavors.
In this chapter we will cover one of the fundamental parts of not just cloud
infrastructure, but any application, big or small. Compute. And the specific
type of compute in Azure that is used by the most projects and organizations
is virtual machines (VMs).
This might seem like any other IT project that you have ever come across,
and for good reason. Most traditional IT projects exhibit these requirements
and behaviors. In fact, it would seem unusual if they didn’t. The cool thing is
that cloud computing loves exactly this scenario and one of the ways it can be
solved is using virtual machines. While you might have heard of a virtual
machine and possibly even used one or more of them, what does it actually
mean? What is a VM, as they are commonly referred to, and why is it one of
the most critical services on Azure? The short answer: VMs give you all the
power of owning hardware without, well, owning hardware, as shown in Figure
3.1. Azure will abstract all the hardware layers away, so you can focus on just
managing and configuring the virtual machine.
Figure 3.1: Azure abstracts all the hardware layers when using a VM.
If we take the infrastructure scenario from the start of this chapter and place it
into the world of cloud, a VM makes a lot of sense.
You can install any piece of software on a VM, with full control of any
custom integration features and registration processes.
When there is an increase in demand, there are multiple options for
ensuring your application has enough resources to cope. Scale up the
VM, use multiple VMs, increase disk sizes and more. And you do this
without ever buying any hardware.
You can choose from several different operating systems, such as
Windows Server, Red Hat Enterprise Linux, CentOS, CoreOS, Debian,
Oracle Linux, SUSE Linux Enterprise, openSUSE, and Ubuntu.
And of course, there are no hardware costs. It is Infrastructure-as-a-
Service (IaaS), and you only pay for the time you use the VM. This is
the essence of pay-as-you-go (PAYG) pricing that we covered in chapter
1 as well. If you don’t use the VM all the time, you can switch it off,
you can upgrade it, you can cuddle it to keep warm at night. Well,
almost.
3.1.1 Infrastructure-as-a-Service
A virtual machine is one of the three main services on the Infrastructure-as-a-
Service (IaaS) platform, which we briefly touched on in chapter 1. IaaS is the
lowest level of service you can get on Azure, meaning the closest you get to
the physical hardware, such as disks and CPUs. You provision only the
hardware resources you need, but through virtualisation.
Virtual machines on Azure all come ready to use RDP from the start. To
connect, go to the Connect menu for the VM in the Azure Portal as shown in
Figure 3.5.
As you might have noticed there are two other ways to Connect to a virtual
machine in Figure 3.5: Secure Shell, commonly known as SSH, and Bastion
(Figure 3.6).
Part of the choice of using a virtual machine, and IaaS in general, is the
understanding that you are only provided the infrastructure to work with.
Which services and software run on it, and how they are maintained, is up to
you. This is part of the appeal of using a VM: You get a full machine you can
control completely, but you don’t pay for the actual hardware. If you want to
run a piece of legacy software that three people in the company use, that is
totally cool. If you want to run a backend process that relies on three different
databases and is triggered twice every second, no worries. You are in control
of the software and decide on the maintenance schedule and lifecycle of the
product.
To help you manage the virtual machine itself and adhere to Azure best
practices, there is Azure Automanage. This service can remove some of the
headaches of onboarding Azure services and maintaining them over time. As
illustrated in Figure 3.7 you can choose to fully self-manage the VM or you
can dial it all the way to auto-manage and get Azure to help you by applying
patches and updating policies, security and configurations to industry best
practices.
Figure 3.7: Self-manage your VM and be in charge of policies and configurations, or use Azure
Automanage to let Azure apply update patches and security best practices.
The point is that even though a VM is as close as you get to the actual
hardware in Azure, and offers you the most control, you aren’t left
completely to your own devices, unless you choose to be. Managing your
own business critical software and applications on a VM is often enough
work without having to also factor in the maintenance of the VM itself and
the Azure ecosystem it lives in. Azure Automanage can help with backups,
logs, security, monitoring, updates and much more with just a single service.
3.2 Creating a VM
Okay, enough talk. Let’s create a VM now that you know the best scenarios
for them, how to connect to one, and that you aren’t left completely alone.
There are several ways that you can create a VM on Azure, such as using
PowerShell, the Azure CLI, ARM templates and more. In this chapter we will
do it using the Azure Portal, as it provides the best way to get your head
around the nuances and features of a virtual machine. I’ll sprinkle the chapter
with references to PowerShell[1], as you start to get more and more into the
fluffy world of cloud.
Go to the Virtual machines section of the Azure Portal and click “Create” as
shown in Figure 3.9.
There are quite a few things going on in the following screen (Figure 3.10),
as well as the subsequent steps, so we will use the next several sections to go
through it. No, you shouldn’t skip it just because it is long. Yes, you should
try and follow along on your own Azure instance. We can do it together,
right? It’ll be fun.
Figure 3.10: The first part of the wizard for creating a new VM.
The first part of the Basics should not cause any concern by now. Fill in the
Subscription, Resource Group and VM name as shown in Figure 3.11. Easy
peasy.
Figure 3.11: Set resource group and name for the VM.
3.2.1 Location
Now we are starting to get into the more interesting part: region. The region,
or location, where you choose to create your VM might seem trivial.
However, it can have a big impact on performance, pricing, features, size and
more. Choosing the right location for a VM is critical and once you have
selected a region, it can be a chore to move it. Choose a region that you
prefer but pay attention shortly to how the choice of region affects various
factors. Set the availability options to No infrastructure redundancy required
as shown in Figure 3.12.
West US 2: 0.57
East US 2: 0.58
West US 3: 0.71
UK West: 0.77
South Africa North: 0.77
If you do create a virtual machine (or one of a number of other compute and
network IaaS resources) in a location you regret, all hope is not lost. If you
sacrifice your favorite stuffed toy to the cloud gods and chant the secret
passphrase three times, you can move it. Or you can just use the Azure
Resource Mover. This excellent service will let you move your VM to
another region in a validated and managed way.
Another important consideration for choosing a location is the need for
redundancy of the VMs, also called availability. As you can see in Figure
3.12, Australia East region has Availability Zones and Availability Sets.
Availability zones (Figure 3.13) are three distinct physical and logical
locations inside some Azure regions, such as Australia East. Each zone
is made up of one or more datacenters and is equipped with independent
power, cooling and networking infrastructure. All data and applications
are synchronized between the availability zones.
This means you will eliminate any single point of failure and the risk of
a “crash and burn” scenario is much smaller.
Figure 3.13: Azure availability zones with separate cooling, power, and network.
Availability sets are groups of VMs that are each assigned to a fault
domain and an update domain (Figure 3.14). A fault domain is a logical
group of underlying hardware that share a common power source and
network switch, similar to a rack within an on-premises datacenter.
Update domains are only ever updated one at a time; two would never
be updated at the same time, and this is how service outages are avoided.
These domains inform Azure which VMs can be rebooted at the same
time in case of an outage (fault domain) or platform updates (update
domains).
Figure 3.14: An availability set with 9 VMs, 6 update domains and 3 fault domains.
If you choose to use an availability zone, you will then also choose which
zone to place your VM in. This is the physical placement of the VM within
the region, although you don’t actually know where availability zones 1, 2
and 3 are. If you want to use an availability set, you must set one up before
you create the VM. For now, you can choose to not have any infrastructure
redundancy, but it is important to know what both availability zone and
availability set can do to improve the fault tolerance and high availability of
your virtual machines.
Figure 3.17: Balance the need for compute resources with the cost.
I can’t tell you what size you need, as each case for a VM should be
evaluated individually. All I know is that the path to your manager’s heart
goes through the budget in his spreadsheet. For the exercise in this chapter,
choose something small and inexpensive like a Standard_D2s_v3 size
(Figure 3.18).
Figure 3.19: Use the SSH key pair as authentication to improve security.
A general recommendation is to use an SSH key pair, as a standard password
can be more easily acquired and abused to gain access. You can choose to
generate a new key pair or use an existing one you have saved for just this
occasion. Once you have given it a name, that is it. Follow the prompts on the
screen to create and store the SSH key pair.
The last step on the Basics tab of the wizard is to choose which inbound ports
are open from the start (Figure 3.20). Up to you, but in this case, you could
go with SSH, which is port 22. Port 80 is for unencrypted web traffic, and
port 443 is for encrypted web traffic.
Figure 3.20: Select the ports you want to open on the new VM.
Alright, that is it for the basics. Now you may click the intriguing “Next”
button. Go on. You earnt it.
3.2.5 Disks
A computer isn’t worth much without a hard disk, and the same goes for a
VM. We need to create a disk, or at least attach an existing one, as part of the
VM creation process. As shown in Figure 3.21, you need to choose a primary
OS disk that your main VM image will be deployed to, and then any
additional data disks.
The second choice is the type of encryption. All managed disks on Azure are
encrypted at rest, so the choice is whether you want to manage the key
yourself, and if you want double encryption! Double must be better, right? It
is encryption with both the platform-managed key and your own key.
Finally, attach any data disks you need. You can attach an existing disk you
have or create a brand spanking new one. We will look a lot more at storage
in the chapter on…well, storage. And now, click that “Next” button.
3.3 Networking
The VM can’t do much without a connection to it. We need to get all that
delicious data in and out of the machine, and that is what networking is for
(Figure 3.22).
Figure 3.22: VMs need to connect to a network to access the Internet and other Azure services.
Unless you have a network that Azure resources, such as a VM, can connect
to, they are completely isolated. For that reason, you can’t create a VM
without a virtual network attached to it. The networking tab is where this
happens (Figure 3.23).
If you want to access your VM from the public internet, you will need a
public IP address. This could be to let users access a web server, host an API,
or just be able to connect remotely to the VM using RDP. Whatever the
reason, the one thing to keep in mind is that a public facing IP address means
the VM is open to the whole Internet. Of course, we have many ways to
protect from attacks and unauthorized access, but a public facing network
connection still offers up a target. More on connecting to a VM using RDP
later in this chapter too.
Create a public IP address for the VM in this case though. We will use it to
connect to the VM in a minute.
To filter the traffic to and from a VM, you use a network security group
(NSG). This is a critical part of keeping your VM, as well as your greater
cloud infrastructure, safe from intruders and unauthorized players.
None: Don’t set one up, which is what you would normally do. No
kidding, but I will get to why shortly.
Basic: You can choose from several preset port configurations such as
RDP, HTTP, HTTPS, SSH and more, to allow access to the VM.
Advanced: This lets you pretty much set up the NSG as you want.
And that is about as much as we will go into NSGs at this time, for two
reasons. Number one: we will go through NSGs in the networking chapter in
much more detail. Number two: best practice suggests not creating an NSG
attached to the network interface card (NIC) of a specific VM. Instead,
NSGs are associated with a subnet in a virtual network, which the VMs are
then attached to. This is both more maintainable and can save cost. For now,
leave it as Basic.
There are two services you can use to balance the load for a VM, or group of
VMs (Figure 3.24).
The Application Gateway is a service that lets you route traffic to your
application based on the web address it is requesting. For example, all traffic
going to https://llamadrama.com/image can be routed to a specific VM that
manages images, and all traffic requesting https://llamadrama.com/video can
be routed to a VM that processes video.
The Azure Load Balancer doesn’t route based on the web address, but rather
the type of protocol being used for the traffic, such as TCP and UDP. This
has higher performance and lower latency than the Application Gateway, but
you can only do load balancing (distributing the traffic to multiple VMs) and
none of the fancy URL magic routing.
For either load balancing service, you need to have created the load balancing
service itself before creating the VM. We won’t do that now, and we will return to
load balancing later in the book too. It’s a great feature and one that is used
all the time to run and manage millions of applications. Finally, click the
“Management” button to go to the next tab of the wizard.
3.4 Managing a VM
We are almost at the end of creating our VM. I am aware it has taken a while
to get through only three tabs of a single wizard; however, I hope you appreciate
the many nuances and considerations that go into creating a VM for your project.
We are almost at the end, and at the point of creating and provisioning the
VM, so let’s go through the management part of the wizard (Figure 3.25).
I’ll skip over the Monitoring section for now, but we will return to the topic
later in the book. It is a critical topic for maintenance and performance, but is
a part of a much bigger picture. That single checkbox under Identity holds a
lot more power than you might think at first glance. Imagine seeing Clark
Kent on the street in his suit and glasses. Not exactly an intimidating
superhero, but give him a phone box and suddenly he is flying and deploying
lasers. If you assign a managed identity to your VM, you open up the world
of Azure cloud superpowers.
Figure 3.26: Using a managed identity to authenticate access between Azure resources.
Next, there are spot instances. You might have noticed back when we chose
the size of the VM, you could check “Azure Spot instance”. If you need a
development VM or have a transient workload that can be interrupted at any
time, Spot instances are a cheap way of getting compute time. Be aware,
though, that Azure can evict the VM any time it needs the capacity back or
when the price of the compute power goes above a threshold you have set. According
to Microsoft you can save as much as 90% on costs. More money for Lego.
Or hats.
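For context, if you were creating the VM with PowerShell instead of the Portal, the spot settings boil down to a couple of parameters on New-AzVMConfig. Here is a minimal sketch (the VM name, size, and eviction policy are placeholder choices):
PS C:\> $vmConfig = New-AzVMConfig -VMName 'devSpotVM' -VMSize 'Standard_D2s_v3' -Priority 'Spot' -MaxPrice -1 -EvictionPolicy 'Deallocate'
# -MaxPrice -1 means "charge up to the pay-as-you-go price"; set a dollar figure to cap it lower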
Finally, if you are in for the long haul with your VM usage, use reserved
instances. If you can commit to 1 or 3 years of use of the VM, you can get it
much cheaper. Microsoft knows you are committed to it long term, and in
turn you get a huge discount. And more Lego hats.
Now you can finally click “Review + Create” and set that VM to work. Once
the validation has passed you will be shown the price of the VM (Figure
3.28).
Figure 3.28: Always double-check the VM price is right.
Always check the price at this point. It is easy to accidentally choose a larger
VM without noticing and then your costs will go up a lot faster. Other
services like scale sets, backups, choice of disk types and more will also
affect price, but that is covered elsewhere in the book. Once you are ready,
click the “Create” button and let the VM come to life.
From this point on in the book we will continue to use both Azure CLI and
PowerShell commands more frequently, as they are a big part of using Azure,
day to day. As new services, concepts and implementations come up, we will
use a mix of the Portal, the CLI and PowerShell where it makes the most
sense. If you want to catch up on the basics of the CLI, make sure to check
out the appendices.
Figure 3.29: The Portal page for changing VM size and type.
When you change the size of your VM, that is known as vertical scaling.
Changing to a larger size is scaling up; changing to a smaller size is scaling down.
You can also use horizontal scaling, which is adding or removing VMs to a
pool of compute power. This is most often done using a scale set, which is a
group of VMs that can be put into action when needed and switched off again
just as easily.
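As a rough sketch of vertical scaling from PowerShell (the resource group, VM name, and target size are placeholders, and a resize typically restarts the VM):
PS C:\> $vm = Get-AzVM -ResourceGroupName 'VMDemoRG' -Name 'myFirstVM'
PS C:\> $vm.HardwareProfile.VmSize = 'Standard_D4s_v3'
PS C:\> Update-AzVM -ResourceGroupName 'VMDemoRG' -VM $vm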
Figure 3.30: An Azure availability set with 3 fault domains and 5 update domains.
An update domain is a group of VMs that can all be updated and/or rebooted
at the same time without any downtime to the greater infrastructure and users
won’t notice it at all. Only one update domain is ever rebooted at the same
time, and you can have up to twenty update domains for an availability set, if
you need it. A fault domain is similar, but the VMs in it share power, cooling
and networking. In Figure 3.30 a fault domain is represented as a rack. In a
single datacenter these racks would be located physically away from each
other. Having more than one fault domain ensures that VMs on separate fault
domains aren’t affected by disruptions to the power, cooling and networking.
Using availability sets, update domains, and fault domains helps ensure you don’t
have any downtime during infrastructure faults or planned maintenance,
which can be critical. Of course, it does cost more, as you need to have
multiple VM instances running, but that might still be worth it depending on
how critical the infrastructure is.
3.5.3 Policies
To keep a whole team, department, or even company working as one
seamless unit, make sure the policies and guidelines for the company reflect
chosen best practices wherever possible. A policy is a collection of rules and
guidelines such as what type of VM you can create, the backup policy and
auditing whether managed disks are used or not. When a new VM is created a
number of implicit policies can be automatically added. The VM we created
in this chapter had a single policy (Figure 3.31), which is already marked as
non-compliant.
Azure has a lot of built-in policies you can use, and you can create your own
custom ones too, by using specific syntax in JSON notation.
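As a hedged example, the following sketch assigns the built-in “Audit VMs that do not use managed disks” policy to a resource group with PowerShell (the assignment name and resource group are placeholders, and depending on your Az module version the property layout on the definition object may differ slightly):
PS C:\> $definition = Get-AzPolicyDefinition | Where-Object { $_.Properties.DisplayName -eq 'Audit VMs that do not use managed disks' }
PS C:\> New-AzPolicyAssignment -Name 'auditManagedDisks' -PolicyDefinition $definition -Scope (Get-AzResourceGroup -Name 'VMDemoRG').ResourceId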
Figure 3.32: Connect to the VM only using the Microsoft Azure network.
This will open an RDP connection to the VM in the browser and you can then
use remote desktop like you would normally. The only missing feature is
copying files directly to the VM from your local machine, but using a cloud
storage service such as OneDrive can easily mitigate that.
Had enough of hearing and learning about VMs? Good, because that is all I
have for now. A quick summary and then on to the next chapter. Don’t forget
to delete your VM, or at least turn it off.
3.6 Summary
A virtual machine is one of the three main pillars in infrastructure-as-a-
service (IaaS) services on Azure, along with networking and storage.
You use the remote desktop protocol (RDP) or SSH to connect to a
virtual machine to configure and manage it.
Creating a virtual machine requires planning of location/region,
operating system, size, disk types, networking, and security. Any of
these properties can significantly impact performance, cost, and overall
application success.
Without a network connection, there is no use of a VM. Creating or
using the right virtual network is critical to both the performance and
security of the VM.
There are several ways to save costs on using a VM, such as auto-
shutdown, reserved instances and managing the size and power of VMs.
The Azure CLI or PowerShell can be a much more efficient way of
creating and managing VMs, once you understand how they work and
which features influence the performance and cost of them.
If you need to change the VM type or location, you can do this at any
time.
You can use fault domains and update domains inside an availability set
to mitigate the risk of outages for your application.
[1]
See Appendix 1 for an introduction to the Azure CLI and Azure
PowerShell
4 Networking in Azure
This chapter covers
Creating and using virtual networks, subnets, and network security groups
Using best practices for creating and managing virtual networks
Distributing traffic from an outside network to internal resources
Where compute is the brains of a cloud project, the networks that connect
them are the arteries and blood lines. Just like computing, you need
networking for anything you do with cloud computing on Azure. There is no
choice. At times the network parts of your infrastructure will be abstracted
away from you, but they are still there.
In this chapter we are going to focus on the virtual networks in Azure that
you can control, manage, and make jump through hoops made of IP address
ranges. When managed effectively, virtual networks in Azure can secure
against intruders, optimize data efficiency, help with traffic authorization, and
form part of your application’s internet footprint.
The company wants to expand its product line to include a new range
of alpaca conditioners and hairstyling products. This will complement the
current range, and the plan is to use Microsoft Azure for this. In this chapter
we will go through various steps and features to enable Alpaca-Ma-Bags to
thrive in the cloud.
First, let’s introduce a new tool, or at least new in this book, to create a
virtual network: PowerShell. I won’t go through the setup and connection to
Azure with PowerShell though. If you are not sure how to install or get
started with PowerShell as well as talking to Azure through it, check out the
Appendices in the book. It will explain the basic syntax and composition of
cmdlets, as well as introduce you to PowerShell as a tool. With that out of the
way, the first step for creating any resource on Azure, as you learned in the
previous chapters, is creating a resource group. I’ll show you how to in
PowerShell this time, but then I will assume you have a resource group ready
for the examples in the rest of the book. Deal? Awesome. Let’s power some
shell. Once you are logged into Azure with PowerShell, run this command to
create a new resource group:
New-AzResourceGroup -Name 'AlpacaRG' -Location 'Australia Southeast'
Once you have a resource group, let’s add a virtual network to it, also using
PowerShell.
New-AzVirtualNetwork -Name AlpacaVNet -ResourceGroupName 'AlpacaRG' -Location 'Australia Southeast' -AddressPrefix '10.0.0.0/8'
This will spit out the output shown in Figure 4.2, which shows a few
properties for the VNet, including Id and name.
Aren’t you happy now that we don’t have to click click click through the
Azure Portal wizard? With just a couple of commands in PowerShell, we
have a virtual network at our disposal. The Azure Portal and the PowerShell
cmdlets talk to the same API endpoint, meaning the exact same information
is passed on to Azure whichever tool you use. If you aren’t sure, go check the
Azure Portal and inspect the VNet there.
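If you would rather stay in the shell, a quick query confirms the same details (a sketch; Select-Object just trims the output to a few properties):
PS C:\> Get-AzVirtualNetwork -Name AlpacaVNet -ResourceGroupName 'AlpacaRG' | Select-Object Name, Location, ResourceGroupName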
As you might have heard before, choosing the right tool for the job is often a
critical step, and PowerShell can often be that tool. Anything you can do in
Azure, you can do with PowerShell, including creating and managing virtual
networks. Next, we do just that, and delve into subnets.
4.2 Subnets
Any device that wishes to connect to the internet or any other network, must
have an IP address. When you connect your smartphone to the public
Internet, that device is assigned an IP address, which is how data can be sent
to the phone. However, there is only a finite number of IP addresses (roughly
4.3 billion)[1] and there are way more devices than there are public IP
addresses. This problem is in part solved by using local networks that have
their own IP address range, and if a device needs an Internet connection, only
a single public IP address, or very few, is needed for the entire local network. In
Figure 4.3 the Alpaca-Ma-Bags devices are assigned local IP addresses in the
IP range 10.0.0.0, which is the local address space on the local network.
When, and if, the devices need to send traffic through the public Internet,
they will use the public IP address assigned to the VNet.
Figure 4.3: Devices on a VNet get local addresses, and traffic to the public internet get a public IP
address.
A subnet is a logical division of the virtual network, where a subnet gets a
range of IP addresses allocated, and those IP addresses fall within the total
address range of the VNet, such as 10.0.0.0 in Figure 4.3. If you think of the
entire VNet as the road network of a city (Figure 4.4), then a subnet is the
road network in a suburb. It is a logically defined part of the road network
(roads within a suburb), and it is still part of the entire road network (the
VNet).
C | 192.168.0.0 - 192.168.255.255 | 256 networks | 65,536 addresses
From this table you can deduce that it is important to plan how many subnets
and which network class you are going to use. In our example in Figure 4.2
we are using a class A network of 10.0.0.0, which only offers us a single
network choice, but you can associate more than 16 million devices with it. If
you then split up the network into logical subnets, you can have a huge network,
which can be a nightmare to manage but offers plenty of options. Again,
planning how you intend to use the network is crucial before choosing a
network class.
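The subnet itself is added to the in-memory VNet object with the Add-AzVirtualNetworkSubnetConfig cmdlet. Here is a minimal sketch (the hairstyleSubnet name and the 10.1.0.0/16 prefix match the ones used later in this chapter, but treat the exact values as assumptions):
PS C:\> $vnet = Get-AzVirtualNetwork -Name AlpacaVNet -ResourceGroupName 'AlpacaRG'
PS C:\> Add-AzVirtualNetworkSubnetConfig -Name hairstyleSubnet -VirtualNetwork $vnet -AddressPrefix '10.1.0.0/16'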
At this point the subnet is only defined in memory on the instance running
PowerShell. The subnet is not yet created on Azure. For that change to be
provisioned, use this cmdlet
PS C:\> $vnet | Set-AzVirtualNetwork
Operation Result
0 AND 0 0
0 AND 1 0
1 AND 0 0
1 AND 1 1
Figure 4.5: Applying the mask defined in the CIDR notation.
Applying the AND binary operation to each bit pair in the IP address and
mask (Figure 4.5), you end up with a possible IP address range from 10.1.0.0
to 10.1.255.255, or 65,536 different hosts. To put it into context, for a virtual
network with a subnet of 10.1.0.0/16, you can have 65,536 different devices
on that subnet. Well, almost. Azure reserves five IP addresses within each
subnet: the first four addresses and the last one.
These reserved IP addresses are used for internal Azure purposes to manage
and run the VNet.
Segmenting the VNet into smaller subnets can help organise devices and
services logically, but also improve data transfers. Don’t make the subnets
too small though, as most organisations always end up adding more resources
than they initially planned on. Re-allocating addresses in a VNet is hard work
and takes forever.
4.2.3 Routing optimization
The other, equally important, reason for using subnets is the Azure routing
optimization that comes with them. Although you don’t have to explicitly set
anything up for this to happen, it is a fundamental part of how Azure
networking works.
Azure automatically adds system routes for each subnet in a VNet to a route
table, as well as adds IP addresses to each device. You can’t create system
routes, nor can you delete them, but you can override them using custom
routes and subnet specific route tables. What are routes, you might ask? They
are the optimal path from one endpoint to another across a network. In Figure
4.6 subnet A has a route defined to subnet P, so when VM 2 needs to talk to
VM 4, the route ensures that happens using the most efficient path.
Imagine going to a specific address in your city with no map, GPS, or road
signs. How would you ever find the optimal route across the city? Network
routing is similar and a route in a route table (which is where routes live)
describes the optimal path to an endpoint, which can be a device or network.
The implicit system routes for any subnet you create ensure traffic gets to
that subnet via the optimal route, thus optimizing your network efficiency. A
custom route is when you need to stop off at a certain shop for every trip
across the city, or, for Azure, when you need to go through a specific VM
that runs a firewall when using the Internet. In Figure 4.7, the custom route
overrides the system route to make sure traffic goes through the firewall VM.
Figure 4.7: A custom route makes sure the traffic goes through the firewall VM.
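As a hedged sketch, a custom route like the one in Figure 4.7 could be defined in PowerShell along these lines (the route table name, the 0.0.0.0/0 prefix for internet-bound traffic, and the firewall VM IP are assumptions):
PS C:\> $route = New-AzRouteConfig -Name 'internetViaFirewall' -AddressPrefix '0.0.0.0/0' -NextHopType 'VirtualAppliance' -NextHopIpAddress '10.1.0.4'
PS C:\> New-AzRouteTable -Name 'AlpacaRouteTable' -ResourceGroupName 'AlpacaRG' -Location 'Australia Southeast' -Route $route
The route table only takes effect once it is associated with a subnet, for example via Set-AzVirtualNetworkSubnetConfig and its -RouteTable parameter.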
It isn’t enough to have optimal network routes though. You also have to
make sure they are adequately secure for your purposes.
Figure 4.8: NSG1 applies to all VMs in the subnet, but NSG2 only applies to VM4.
At the core of an NSG is a list of rules that filters what kind of traffic comes
in and out of the network or device the NSG is attached to (Table 4.3). Where
the Azure route tables define how traffic gets from one end to the other,
NSGs define what traffic is allowed to arrive at or leave the endpoints of that
route.
Priority | Name | Port | Protocol | Source | Destination | Action
65000 | AllowVnetInBound | Any | Any | VirtualNetwork | VirtualNetwork | Allow
65001 | AllowAzureLoadBalancerInBound | Any | Any | AzureLoadBalancer | Any | Allow
The tables of inbound and outbound security rules are what NSGs are all
about. Before we create our own NSG, let’s inspect what a rule consists of,
and the relationship between rules.
Priority: a value between 100 and 4096. Rules are processed one at a
time from the lowest number (highest priority) to the highest number (lowest
priority). When a rule matches the traffic and an action is taken, no
other rules are processed for that data packet. The default rules with
priority 65000 and above can’t be removed.
Name: The name of the rule. While you can name a rule whatever you
like, it is highly recommended the name reflects the rule such as
AllowVnetInbound.
Port: The port of the traffic. This is between 0 and 65535, or Any.
Protocol: Use TCP, UDP, or Any.
Source & Destination: Any, an individual IP address, a CIDR range of
addresses or a service tag. An Azure service tag defines a group of
Azure services, such as AppService, AzureLoadBalancer or Sql, and
they are managed by Microsoft. This avoids hard coding IP addresses,
which may change later.
Action: Allow or Deny.
You can have zero rules in an NSG or as many as you’d like. Caution: Be
very mindful of the number of rules, how they overlap, and how they are
prioritized. It is exceptionally easy to make a complete mess of it, where you
lose track of which rules are invoked when and if your network or device is
indeed secured. Some tips on how to manage this are coming up shortly, but first
let’s create a new NSG for the subnet we created before.
PS C:\> $nsg = New-AzNetworkSecurityGroup -Name "nsgAlpaca" -ResourceGroupName 'AlpacaRG' -Location 'Australia Southeast'
This will create our new NSG with a bunch of output in PowerShell. This is
common for most PowerShell commands, where you are shown the
properties for a new or updated resource. In this case, most of the output is
relating to the security rules that are created by default. However, we want to
attach the NSG to our subnet hairstyleSubnet to put it to work. That is done
with this beautiful piece of PowerShell code, which may look slightly
familiar.
PS C:\> Set-AzVirtualNetworkSubnetConfig -Name hairstyleSubnet -VirtualNetwork $vnet -AddressPrefix '10.1.0.0/16' -NetworkSecurityGroup $nsg
PS C:\> Set-AzVirtualNetwork -VirtualNetwork $vnet
The main difference in this piece of PowerShell code from when we created
the subnet above, is that we use the Set variant of the
AzVirtualNetworkSubnetConfig cmdlet, rather than the Add. This is because it
is an update to an existing subnet, rather than creating a new one. Apart from
that we are associating the network security group, and then provisioning the
changes.
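To make the rule properties listed earlier a little more concrete, here is a hedged sketch that adds a single inbound rule to the nsgAlpaca group we just created, allowing SSH from within the VNet (the rule name and priority are arbitrary choices):
PS C:\> $nsg | Add-AzNetworkSecurityRuleConfig -Name 'AllowSshInbound' -Priority 300 -Direction Inbound -Access Allow -Protocol Tcp -SourceAddressPrefix 'VirtualNetwork' -SourcePortRange '*' -DestinationAddressPrefix '*' -DestinationPortRange 22 | Set-AzNetworkSecurityGroup
Piping the updated group into Set-AzNetworkSecurityGroup provisions the change on Azure, the same pattern we used for the VNet.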
Consider creating subnets that align with the network security groups you
want to design. If you have a group of VMs that can only be connected to via
RDP and another that can only be used with Bastion, then create two subnets.
The NSG for each group can be clearly defined and associated with a subnet
each.
As I mentioned before, NSG rules are one of the most important parts of your
network to get right, and at the same time they are easy to mess up. However,
there are some best practices you can follow to make it much more
manageable and stay on top of the rules.
Having firm guidelines for your NSG rules will pay off exponentially and
make your life much easier and more secure. Now that the subnet, and the VNet, are
secured, it is time to make them talk to other networks.
4.4 Connecting Networks
What is better than one VNet? Many of course! It is unlikely you will have
just a single network in your Azure infrastructure, and as soon as you have
more than one, they’ll want to talk to each other. They will be like teenagers
on the phone, discussing the latest pop music all night long, and there is no
stopping them, but I digress.
There are two main new products at Alpaca-Ma-Bags, which are hosted on
Azure. One manages the fleece length that alpacas can be shorn to, and one
sells alpaca shearing scissors. Each product has a virtual network with VMs,
databases and other services to make the products function. Currently, if the
shearing scissors product needs data from the fleece product, a connection is
made through public endpoints over the internet using a public IP address on
either network as shown in Figure 4.9.
Figure 4.10: Peering two VNets to get lower latency and higher throughput.
To set up a VNet peering using PowerShell, use the following cmdlets and
parameters. First, we need to create a second VNet to connect to though.
PS C:\> $vnet2 = New-AzVirtualNetwork -Name ShearingVNet -ResourceGroupName
And now we can then connect the two VNets using peering. We have to
create the peering in both directions, so there are two parts to the process.
PS C:\> Add-AzVirtualNetworkPeering -Name AlpacaShearingPeering -VirtualNetwork $vnet -RemoteVirtualNetworkId $vnet2.Id
PS C:\> Add-AzVirtualNetworkPeering -Name ShearingAlpacaPeering -VirtualNetwork $vnet2 -RemoteVirtualNetworkId $vnet.Id
Once both directions have been peered, the output in PowerShell will show
PeeringState to be Connected, which means you have a successful peering
(Figure 4.11).
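If you prefer to verify from PowerShell rather than squinting at the output, a quick query shows the state of each peering (a sketch using the VNet created earlier):
PS C:\> Get-AzVirtualNetworkPeering -VirtualNetworkName AlpacaVNet -ResourceGroupName 'AlpacaRG' | Select-Object Name, PeeringState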
And we can now finally create the VPN Gateway itself. There are several
SKUs, or models and sizes of Gateways to choose from. They define the
number of concurrent connections and throughput.
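Creating the gateway takes a few steps: a dedicated subnet named GatewaySubnet, a public IP address, and the gateway resource itself. The following is a minimal sketch only; the names, the /27 address prefix, and the VpnGw1 SKU are assumptions, your region may expect a different public IP configuration, and provisioning the gateway can take 30 to 45 minutes.
PS C:\> $vnet = Get-AzVirtualNetwork -Name AlpacaVNet -ResourceGroupName 'AlpacaRG'
PS C:\> Add-AzVirtualNetworkSubnetConfig -Name 'GatewaySubnet' -VirtualNetwork $vnet -AddressPrefix '10.255.0.0/27'
PS C:\> $vnet = $vnet | Set-AzVirtualNetwork
PS C:\> $gwip = New-AzPublicIpAddress -Name 'AlpacaGatewayIP' -ResourceGroupName 'AlpacaRG' -Location 'Australia Southeast' -AllocationMethod Dynamic
PS C:\> $gwsubnet = Get-AzVirtualNetworkSubnetConfig -Name 'GatewaySubnet' -VirtualNetwork $vnet
PS C:\> $gwconfig = New-AzVirtualNetworkGatewayIpConfig -Name 'AlpacaGatewayConfig' -SubnetId $gwsubnet.Id -PublicIpAddressId $gwip.Id
PS C:\> New-AzVirtualNetworkGateway -Name 'AlpacaVpnGateway' -ResourceGroupName 'AlpacaRG' -Location 'Australia Southeast' -IpConfigurations $gwconfig -GatewayType Vpn -VpnType RouteBased -GatewaySku VpnGw1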
Warning
You can only have a single VPN gateway per virtual network in Azure.
However, you can create multiple connections to that single VPN gateway to
allow as many other networks to connect as you need. If you connect the
VPN gateway to an on-premises VPN device it is called site-to-site
networking, and if you create a connection between two Azure VPN
gateways it is called VNet-to-VNet.
4.4.3 ExpressRoute
Figure 4.13: ExpressRoute network diagram. Traffic never enters the public internet and flows
directly into Azure.
We have now been introduced to three different ways of connecting from one
network to a virtual network, but when do you use each? If you take costs out
of the picture, it boils down to whether you want to use the public internet or
not, and if you have to connect to Azure from outside of Azure. Table 4.4 has
all the details.
Connection | Uses the public internet | Connects from outside Azure
Peering | No | No
ExpressRoute | No | Yes
It might sound odd to tell you to avoid Internet traffic in the cloud, which is,
by definition, on the Internet somewhere. However, the Internet is also where
the biggest risk lies when it comes to intruders, attackers, and super villains.
Whenever possible, avoid sending data over the public Internet. You can use
ExpressRoute to avoid this by creating a connection that only goes through a
dedicated private fiber network. You can also use site-to-site VPN
connections to secure the traffic between Azure and your on-premises network, and
finally use Bastion to only connect to VMs through the Azure Portal.
Figure 4.14: A load balancer distributing traffic from the Internet to three VMs.
The Internet traffic requests that arrive for your application don’t know they
are hitting a load balancer instead of the application itself. The load balancer
takes that traffic and uses rules to determine which VM in the backend pool
that will receive the request. There are a few moving parts that make all this
possible in what seems like a choreographed dance of bits and bytes, so
before we jump into creating a load balancer, let’s look at the other parts of
the service.
The IP address is the public face of your app. It is the destination for all the
traffic from the internet. For use with a load balancer in Azure, the IP address
needs to be packaged up in a frontend configuration using this cmdlet
PS C:/> $frontendconfig = New-AzLoadBalancerFrontendIpConfig -Name 'woolFron
The backend pool config is where the VMs are placed and removed from.
Often the backend pool is organized in a VM scale set. You use a scale set to
allow the number of VM instances within it to automatically decrease or
increase as demand dictates or according to a schedule. Scale sets provide both
high availability and redundancy without you having to manage it.
In a scale set all the VMs are created equal, meaning from the same VM
image. All the configuration and applications are the same across all VM
instances, which is key to having a consistent experience no matter which
VM serves the request from the load balancer.
Protocol: This specifies the protocol to use for the probe and can be tcp,
http or https.
Port: The port to connect to the service on the VM that is being load
balanced.
IntervalInSeconds: The number of seconds between each check the
health probe does. Minimum is 5.
ProbeCount: The number of times in a row the probe has to fail before a
VM is considered ‘sick’. Minimum is 2.
The two latter parameters are what you adjust depending on how critical and
susceptible to errors the service is. Multiplying the interval between probes by
the number of probes that have to fail gives you the time it will take for an
endpoint to be determined as ‘sick’.
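Putting those parameters together, a health probe configuration could look like this sketch (the probe name, port, and thresholds are example values):
PS C:/> $probe = New-AzLoadBalancerProbeConfig -Name 'woolHealthProbe' -Protocol Tcp -Port 80 -IntervalInSeconds 15 -ProbeCount 2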
The health probe informs the load balancer which in turn determines which
VM in the backend pool receives any given request. When a health probe
fails, the load balancer will stop sending traffic to that sick VM.
Once you have an IP, a backend pool configuration and a health probe, we
need to have at least one rule for distributing traffic from the frontend, the IP
address, to the backend pool. Let’s create a rule.
PS C:/> $rule = New-AzLoadBalancerRuleConfig -Name 'alpacaTrafficRule' -Prot
You can have several rules, each routing traffic based on protocol and port to
the backend pool and health probe that should handle that traffic.
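To tie the pieces together, here is a hedged sketch of assembling the load balancer itself. $frontendconfig, $probe, and $rule come from the snippets above; the backend pool is created inline here, and all names are placeholders:
PS C:/> $backendpool = New-AzLoadBalancerBackendAddressPoolConfig -Name 'woolBackendPool'
PS C:/> New-AzLoadBalancer -Name 'alpacaLoadBalancer' -ResourceGroupName 'AlpacaRG' -Location 'Australia Southeast' -FrontendIpConfiguration $frontendconfig -BackendAddressPool $backendpool -Probe $probe -LoadBalancingRule $rule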
4.6 Summary
You create a virtual network (VNet) in Azure to connect other Azure
services to, such as a VM. A VNet has a local range of IP addresses that
it can assign to devices that connect to it.
Subnets are logical separations of a VNet that get a portion of the
available IP addresses in the VNet.
You use subnets to logically separate resources and apply specific
network security rules to them.
Classless Inter-Domain Routing (CIDR) notation is used to accurately
and efficiently describe an IP address range for a network.
A Network security group (NSG) defines a set of rules that are applied
to the resource it is attached to, such as a VM or subnet.
The rules in an NSG are evaluated according to priority, and evaluation
stops once a rule is hit. Rules determine whether certain types of network
traffic are allowed or denied.
You can connect VNets on Azure with each other using peering, or with
on-premises networks using VPN gateways and ExpressRoute.
When a single VM can’t manage the traffic to it, you can use a load
balancer to distribute the traffic to more VMs that are identical and live
in a backend pool.
[1]This is the number of IP v4 addresses, which is commonly used at the time
of writing.
5 Storage
This chapter covers
Creating a storage account and using its features
Creating blob, file, queue and table storage, uploading data, and storing
it cost-effectively
Understanding some best practices for using storage on Azure
This is the key lesson of using storage on Azure: choosing the right service
will have an impact on the performance, the cost and ultimately the viability
and success of the application using that storage. Yes, it really is that
important. No, it isn’t always straightforward to pick the best storage option.
More on that conundrum later in this chapter.
Storage accounts are simple to create and can be incredibly cheap. You only
pay for what you use and depending on the access frequency and redundancy
model of the data, the cost can go way down.
Banning Books want to achieve all these objectives using Azure storage, as
they want to reap the benefit of this cheap, accessible, efficient and highly
scalable service. Read on to see how Azure can help them out.
Figure 5.2: The four types of storage that can live inside a storage account.
That means a single storage account can hold all four types of data storage at
the same time, which makes it simpler to logically separate your data storages
by project, client or department. Banning Books need a new storage account
to get started moving all their assets to Azure. Let’s start by creating a storage
account for them using PowerShell. As you have learnt in the previous
chapters, the first step is, as for any resource on Azure, to create a resource
group. The PowerShell cmdlet New-AzResourceGroup is how you create a
new resource group.
PS C:\> New-AzResourceGroup -Name 'BanningRG' -Location westus
That should be a command you recognize from the previous chapter. Next,
we can then create the actual storage account in the resource group with a
new cmdlet New-AzStorageAccount, which defines the properties for a new
storage account.
PS C:\> New-AzStorageAccount -ResourceGroupName 'BanningRG' -Name 'banningst
Once you have run the cmdlet in PowerShell, you should see an output like
Figure 5.3, which shows the most important properties of the banningstorage
storage account, such as storage account name, location, SKU Name and
access tier.
Figure 5.3: Output in the PowerShell command prompt when creating a storage account (edited
for clarity).
In the PowerShell code snippet above there are a few parameters we need to
be familiar with for the cmdlet New-AzStorageAccount we are using.
Resource group should be familiar by now, and the location is where you
want to place the storage account physically.
While the Name parameter seems straightforward, there are a couple of
considerations. The name must be all lowercase. The name is used as part of
the public domain name for the storage account, and therefore must be unique
across all of Azure. While domain names aren’t case sensitive, this is a
decision Azure has made to ensure all storage accounts comprise valid
domain names. In the case of the Banning Books storage account above, the
public URL would be
https://banningstorage.<storage type>.core.windows.net
This also means that the storage account name is tied to all elements in it,
such as blobs, files, queues and tables. More on that later in this chapter.
You have surely noticed by now, and yes, I know you want to know what this
Standard/Premium malarkey is all about. And for that we need Table 5.1 to
outline some of the main differences.
Table 5.1: Comparison of Standard vs. Premium storage account type.
 | Standard | Premium
Storage Type | Blob, file, queue, table | Blob and file only
Sharing | You share the physical storage with other users | You get a physical SSD that isn't shared
We aren’t done with the PowerShell command from before though. There is
still the kind parameter. The possible values are StorageV2,
BlockBlobStorage and FileStorage. It isn’t quite as simple as just choosing
one, as they depend on your choice of SKUName and which type of storage
you are after. And yes, you guessed it. We need another table, Table 5.2,
to show what the valid combinations of storage type and kind are.
Table 5.2: Valid combinations of Kind and SKUName parameters for a specific storage type.
Storage Type | Kind | SKUName
Standard V2 | StorageV2 | Standard_LRS / Standard_GRS / Standard_RAGRS / Standard_ZRS / Standard_GZRS / Standard_RAGZRS
Premium block blobs | BlockBlobStorage | Premium_LRS / Premium_ZRS
Premium file shares | FileStorage | Premium_LRS / Premium_ZRS
Premium page blobs | StorageV2 | Premium_LRS
The storage type is the more specific type of storage you want to create. The
combination of Kind and SKUName defines which type of storage you create.
For example, if we had chosen FileStorage for Kind and Premium_LRS as
SKUName, then the storage account would be of type Premium File Share.
Wow, that was a fair bit wrapped up in that one bit of PowerShell code,
right? As I mentioned earlier in the book, it is often simple to create resources
on Azure, but understanding when to use a feature, or how a parameter can
influence a service is both harder and much more valuable. To create an
efficient and cost-effective storage account, it is critical to get these selections right. To ensure the right
storage quality for your customers, you must consider appropriate
redundancy and type. For example, if you have a simple low usage solution
for a few customers, and it isn’t critical, then there is no need to pay for an
expensive multi-region redundant solution. You might want to choose
premium storage though to ensure immediate access to the data using a solid
state drive.
Now it is time to move on and learn about what lives inside a storage account.
Repeat after me: blooooooob.
5.3 Blob
While blob undeniably is a fun word to say a lot, it does have an actual
meaning too. It is short for binary large object, which is another way of
saying “store pretty much anything”. It was one of the first types of cloud
storage on Azure and remains one of the most popular today. Blob storage is
one of the four types of data you can have in a storage account, elegantly
depicted in Figure 5.4 with excellent iconography. I hope you like it.
Figure 5.4: The blob section of a storage account, which is the most versatile.
5.3.1 Access Tiers
Before we get into the ins and outs of blob storage, there is one more thing
you need to know about. If we revisit the PowerShell output from before
(Figure 5.5), there is a column name AccessTier.
Figure 5.5: The access tier is set by default to “hot” when creating a storage account.
The access tier can take one of three values depending on how often you need
the data: cold, hot or archive. The less urgent a piece of data is, the less it can
cost to store on Azure, if you use the correct access tier.
Hot is for files that need to be readily available. This could be files that
are either used frequently or needed straight away when requested. For
example, this could be customer order files for Banning books. When
they are requested, they need to show up straight away.
Cold is used when files are accessed infrequently and aren’t urgent when
they are accessed. It is also used for larger files that need to be stored more
cost-effectively. For Banning Books this might be staff images used for the
monthly newsletter. They aren’t urgent and are rarely used at all.
Finally, the Archive tier is an offline tier for rarely accessed files. It is
dirt cheap. Really cheap. It is designed for large backups, original raw
data, data to comply with legislation and other datasets that are unlikely
to ever be used again, but are required to be kept. It can often take hours
to get the files out of archive, and you can’t do it too often.
The three tiers are important for both price and performance. Get the
combination wrong and you will either be paying way too much for storage,
or your application depending on the blob storage items will not perform
optimally. Or probably both. However, there is more to blob storage than just
performance tiers.
5.3.2 Containers
All blobs are stored within a parent container within your storage account
(Figure 5.6). Think of the container as a folder in a traditional file system. It
organizes the blobs logically, and you can have an unlimited number of
containers in your storage account.
Figure 5.6: A storage account can have an unlimited number of blob service containers.
Creating a container within the Banning storage account can be done with the
following PowerShell. First, we need to get the context of the storage
account, so we can add the container inside of it. Remember this bit, as we
will continue to use the storage account context throughout this chapter.
PS C:\> $storageContext = Get-AzStorageAccount -ResourceGroupName BanningRG -Name banningstorage
Then simply add the storage container inside that context using the New-
AzStorageContainer cmdlet.
PS C:\> New-AzStorageContainer -Name bannedbooks -Context $storageContext.Context -Permission Off
The -Permission parameter sets the public access level of the container and takes one of three values:
Off: Only the owner of the storage account can access the container.
Blob: Allows anonymous requests to access any blob within the
container, but only if you have the direct address. We’ll get to how those
addresses work shortly.
Container: Provides full read access to the container and all the blobs
inside it.
In this case we choose Off as the books being stored here should not be
accessible publicly. This is critical to get right, as choosing the wrong
Permission parameter value could expose all the blobs to the public internet.
This is also why Permission is set to Off by default. Let’s get some blob
action happening.
Figure 5.7: Blobs are in a container, which are in a storage account. Blobs can be anything in a
binary format.
There is a bit more complexity to storing blobs though, as there are three
different types of blobs you can insert into a container.
Block: The most common type of blob is the block blob. They hold both
text and binary data, and can be up to 190.7 TiB[1] in size, which is a lot.
Uploading a single file to a container will create a block blob.
Append: Very similar to block blobs, but optimized for append
operations, such as a continuous log. As the name indicates, when you
modify an append blob, blocks are added to the end of the blob only.
Each block can be up to 4 MiB[2] in size, and you can perform a maximum of
50,000 append operations on a blob.
Page: These are your random-access files, and are split up into pages of
512 bytes, hence the name. Most commonly page blobs store disks for
VMs, thus they are optimized for frequent read/write operations. For
example, Azure SQL DB, which is covered later in the book, uses
page blobs as the underlying persistent storage for its databases.
Finally, Banning Books want to upload their books in PDF format to blob
storage for use later. They use the below PowerShell command for each PDF,
where the PDF filename bannedbook.pdf is the file being uploaded. We can
use the same context $storageContext that we used before to create the
container, to designate where to upload the blob.
PS C:\> Set-AzStorageBlobContent -File "C:\books\bannedbook.pdf" -Container bannedbooks -Blob bannedbook.pdf -Context $storageContext.Context -StandardBlobTier Hot
Adjust the file path to a valid one in your environment, and ideally make it a
variable such as $blobfilePath to enable automation of the upload of many
files. The -Blob parameter defines the blob’s name in the storage container,
equivalent to the destination file name. This is the name you’ll use to
reference the blob in the container. The StandardBlobTier is the hot, cold or
archive tier. If we inspect the container in the Azure Portal, you can see the
blob inside it now (Figure 5.8).
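You can also list the blobs in the container straight from PowerShell and trim the output to just the names (a minimal sketch reusing the same container and context):
PS C:\> Get-AzStorageBlob -Container bannedbooks -Context $storageContext.Context | select Name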
The interesting part in the command above is not the cmdlet Get-AzStorageBlob
but the “pipe” character with select Name after it, which is a way
to chain multiple commands together. In this case the cmdlet, container and
context produce a result containing all the blobs in the container. That result
is then fed into the next part of the pipeline, defined by the “pipe” character
as shown in Figure 5.9.
Figure 5.9: The anatomy of a PowerShell pipeline command.
The pipe command is a very powerful way to manipulate the data to create a
custom dataset for either output or further processing.
If we then put that into context of the blob container bannedbooks, you get
the following URL:
https://banningstorage.blob.core.windows.net/bannedbooks/<blob name>
The container bannedbooks is added to the URI similarly to how a folder path
is structured in a file system. Of course, access is managed through the
permissions on the storage account or container. There are other ways to
access blobs in Azure too, which we will cover in later chapters of the book
as you build up your Azure knowledge. For now, it is time for another type of
storage that might be more familiar.
5.4 File
File storage on Azure is a managed service, meaning it works like, well, a file
system. If you have used Windows Explorer on almost any version of
Windows, you will be familiar with the system of folders and files it presents.
Figure 5.10 illustrates how Azure File Storage resembles the traditional file
system.
One of the common use cases for file shares on Azure is to attach it to a
computer, such as a VM, so the files are accessible directly from within the
host’s file system. This is not for running the main operating system disk, but
rather for the files that are stored on the file share. Just like plugging in that
external hard drive with all your movies on it at your friend’s house, you can
attach a file share to a VM.
To get a file storage service to use, let’s start by creating a File storage
section in our storage account for Banning Books and then discover some of
the details. Banning wants to store their administration files, such as time
sheets, lists of banned customers, and lists of top banned books on Azure too.
A file share in an Azure storage account is perfect for this. Using the
following PowerShell command we create a file share in the existing
banningstorage storage account.
PS C:\> New-AzRmStorageShare -ResourceGroupName BanningRG -StorageAccountName banningstorage -Name bannedfiles -QuotaGiB 1024
While you get an HDD-backed service when you create a cold, hot or
transaction optimized storage, you only pay for the storage space you
actually use. And that brings me to the last parameter in the PowerShell
command.
QuotaGiB is the provisioned size of the file share in gibibytes (GiB), which in this
case is 1024. You need to define the size of the file share you want, so Azure
can make sure it is there for you when you need it.
If you store infrequently accessed files in the transaction optimized tier, you
will pay almost nothing for the few times in a month that you request those
files, but you will pay a high amount for the data storage costs. If you were to
move these same files to the cool tier, you would still pay almost nothing for
the transaction costs, because you hardly use them, but the cool tier has a
much cheaper data storage price. Selecting the right tier for your scenario
allows you to considerably reduce your costs.
As with blob storage, you can also access the file share through a specific end
point URI, which for file shares is structured like this:
https://banningstorage.file.core.windows.net/bannedfiles/
The key here is the file part of the URI, which identifies the file storage service
within the banningstorage account, and the bannedfiles path that identifies
the specific file share.
Premium file shares support 100 TiB from inception, so this option isn’t
relevant for the premium tier.
5.5 Queue
At first glance, a queue doesn’t seem to be related to storage. A queue is a
line of items or objects that progress from the back to the front of the queue,
then disappear as shown in Figure 5.12.
Figure 5.12: Queue storage with messages coming in one end and out the other end, in order.
Storage on the other hand is a persistent service that keeps hold of your data
for as long as you need it. Queues and storage thus seem to be somewhat
opposites, but Azure queue storage serves several important purposes. Before
we get to those, let’s understand what a queue is in Azure computing terms.
A queue holds messages, and those messages are just data objects up to 64 KB
in size. The unique property of a queue is that you push new messages into
the end of the queue and retrieve messages already queued, from the front,
just like you saw before in Figure 5.12.
The first of those important purposes I mentioned before is volume. You can
store millions of messages in a queue storage up to the capacity limit of the
storage account itself. And it is very cost effective as well, especially if you
already pay for a storage account. Now to that pressing question I can sense
you have: How does queue storage work and how do I create one? Okay, that
was two questions, but let’s answer those. First, we will create a queue
storage inside our existing storage account. Banning Books will use the
queue to store incoming book orders before they are processed. Use the
PowerShell cmdlet New-AzStorageQueue with the storage account context
we used to create the blob storage.
PS C:\>$queue = New-AzStorageQueue -Name bannedorderqueue -Context $storageContext.Context
That is pretty simple, and there isn’t much to explain, other than we now
have a queue storage. Yay! As with blob and file storage, you also get a URI
for direct HTTP/HTTPS access over the public Internet. It looks like this:
https://banningstorage.queue.core.windows.net
Remember, the storage account can only be accessed if you set the
permissions to allow it, as we did at the start of the chapter. Time to insert
some messages into the queue, which requires a new bit of PowerShell that
defines the message to insert, then inserts it.
# Create a new message using a constructor of the CloudQueueMessage class
PS C:\>$message = [Microsoft.Azure.Storage.Queue.CloudQueueMessage]::new("Ba
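# Queue the message on the storage queue (a sketch; assumes the $queue and $message objects created above)
PS C:\>$queue.CloudQueue.AddMessageAsync($message)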
The most obvious new part is the message itself, which has the syntax for
creating an object of type CloudQueueMessage in the namespace of
Microsoft.Azure.Storage.Queue, where the ::new part tells PowerShell that
you want to create a new object. We then add the message as the parameter
for that object constructor. Finally, we queue the new message on the storage
queue using the AddMessageAsync function. The keyword here is Async,
which is short for asynchronous and means PowerShell won’t wait for the
call to complete but can finish or continue without “hanging”.
This would be critical if you were to upload millions of messages, for
example.
At this point you might be thinking that you wouldn’t use PowerShell to
upload a single message like this, and you’d be right. We have to start
somewhere though, and now that you understand what queue storage is and
how a message is stored, we can move a bit further up the food chain.
Banning Books has a website that generates orders, which are eventually
stored in a database. Banning has concerns about the processing capacity
when there are sales on and during other high activity events. For each order
a message will be added to the storage queue for asynchronous processing by
a separate process that can continue to work at a constant pace regardless of
the input rate to the queue as shown in Figure 5.13.
Figure 5.13: Using an Azure storage queue for holding orders without overwhelming the order processing.
And that brings us to another important purpose for queue storage:
decoupling. If services and applications that depend on each other are too
tightly coupled, it makes scalability difficult. Instead, by using queue storage
you can decouple services that add messages to the queue and services that
consume the messages, and the services don’t even have to know about each
other.
And speaking of consuming, let’s end this section with reading the message
from the queue that we just added to it.
PS C:\> $queueMessage = $queue.CloudQueue.GetMessageAsync()
PS C:\> $queueMessage.Result
If we inspect the PowerShell code, we can see that you don’t define which
message you want. GetMessageAsync() gets the next message in the queue,
whatever that message might be. That is the whole point of a queue though.
Messages go in one end, and you process them in order from the other end. In
Figure 5.14 the message details are displayed when the message is
successfully retrieved from the queue.
If the GetMessageAsync() call failed, the message would remain on the queue
and the next process requesting a message from the queue would then get that
message again. This guarantees that queue messages are only de-queued once
they have been processed, or more precisely, once the process reading the
message explicitly removes them. Alright, only one more core type of storage to go.
5.6 Table
The last of the four types of storage you can have within a storage account is
table storage. This is not a database of tables like Microsoft SQL or Cosmos
DB (covered later in the book), but instead structured data that is placed into
buckets called tables as shown in Figure 5.15. In this example you can see
four tables inside the table storage account, which again is inside the storage
account itself.
Banning Books wants to create a searchable list of all the tech books ever
made to keep on top of their inventory and which books have been banned.
Banned books are their core business and they need to find simple book
information really fast to deliver the data to applications and customers.
Table storage is perfect for this, so they use the PowerShell cmdlet New-
AzStorageTable to create their first table storage.
PS C:\>New-AzStorageTable -Name 'Books' -Context $storageContext.Context
The return value from this script is the table uniform resource identifier, or
URI, endpoint to access the data in the table storage. The URI will look
something like this.
https://banningstorage.table.core.windows.net/Books
Does that look familiar? The URI is very similar to the one used for blob, file
and queue storage, which, of course, is by design, so it is easy to use and
remember. Just like folders in a file share that holds files and data, tables in
table storage are where the data lives. Each table has entities, which are like a
row in a traditional database. These entities are a set of properties, which are
key/value pairs of data, with the key being the specific property name as
shown in Figure 5.16.
Figure 5.16: The entity, property, and key/value pairs of a table in table storage.
5.6.1 Schema-less
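Because the table has no fixed schema, an entity is simply a hashtable of properties passed to Add-AzTableRow, a cmdlet from the separate AzTable module (install it with Install-Module AzTable). The following is a minimal sketch; the partition key, title, and bookid values are made up for illustration:
PS C:\>Add-AzTableRow -Table (Get-AzStorageTable -Name Books -Context $storageContext.Context).CloudTable -PartitionKey 'techbooks' -RowKey (New-Guid).Guid -property @{'title'='A Banned Tech Book';'bookid'='1'}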
We are using a new kind of syntax this time too, which is really a
combination of two PowerShell statements into one. Everything inside the
brackets Get-AzStorageTable -Name Books -Context
$storageContext.Context returns a reference to the Books table. We could
have stored that in a variable such as $tableref and used
$tableref.CloudTable. Instead, we enclose it in brackets and reference the
CloudTable property directly from the object that Get-AzStorageTable
returns. This is easier to read and more concise.
Note:
The partition key is the first half of a unique identifier for the row, or entity.
The partition key is also an indication to which physical partition you want
the data to be stored in. If a partition is experiencing high load, it can be
moved to another physical machine in Azure, or if it grows too big it can be
moved to a physical machine with more space. With careful planning you can
optimize how the data is accessed if you use adequate partitions for the right
data in the table. More on that later in the book too.
The row key is the other half of the primary key for the entity. This is a
unique value, which is why we use a GUID value in the PowerShell above.
Each row in the table must have a unique row key. Think of the row key as
the unique identifier within a partition and the partition key + row key as the
unique identifier within the entire table.
Finally, the property parameter is the data the row holds. The syntax is a long
list of key/value pairs with a maximum of 252 keys, or columns (Azure
reserves a few to make it 255). In this case we have two columns, title and
bookid. The max size of the data in a single row is 1MB, so we aren’t talking
about storing your collection of images from “stuff on my cat”, nor your
collection of 1980s Japanese anime. Azure Table storage is cheap, flexible,
and massively scalable. It is a foundational service of Azure, and one you
will be using all the time.
And that brings us to the end of the four core types of storage inside an Azure
Storage account. As you have learnt so far, each type of storage serves a
basic purpose, and each comes with their own strengths as outlined in Figure
5.17.
Durability: Just like Azure storage, managed disks have at least three
replicas of your data. This ensures persistence of the data and gives a
99.999% availability guarantee.
You can create up to 50,000 managed disks per Azure subscription,
which in conjunction with VM scale sets provides massive scalability.
Using availability sets, managed disks are isolated from each other to
eliminate any single point of failure.
Your managed disk will be in the same availability zone as your VM.
And finally, there is full backup support, just like for your VM.
Managed disks are important when creating a virtual machine, as there is a lot
less overhead and maintenance involved in keeping the disk healthy. You can
create managed disks at any time for a VM, and the data on these disks is
persisted, even if you turn the VM off.
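As a quick hedged sketch (the disk name, size, SKU, and VM name are placeholders), creating and attaching a managed data disk to an existing VM looks like this:
PS C:\> $diskConfig = New-AzDiskConfig -Location westus -CreateOption Empty -DiskSizeGB 128 -SkuName Standard_LRS
PS C:\> $disk = New-AzDisk -ResourceGroupName 'BanningRG' -DiskName 'banningDataDisk' -Disk $diskConfig
PS C:\> $vm = Get-AzVM -ResourceGroupName 'BanningRG' -Name 'banningVM'
PS C:\> $vm = Add-AzVMDataDisk -VM $vm -Name 'banningDataDisk' -CreateOption Attach -ManagedDiskId $disk.Id -Lun 1
PS C:\> Update-AzVM -ResourceGroupName 'BanningRG' -VM $vm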
Managed disks are a foundational part of running Azure VMs, storing data,
and the wider storage story in Azure, and we will go into more detail about
them later in the book.
5.8 Summary
A storage account is one of the three main pillars in infrastructure-as-a-
service (IaaS) services on Azure, along with virtual machines and
networking.
Each storage account will have a defined redundancy, which can be
local, geo, zone or geo-zone.
A single storage account can hold all four types of data storage at the
same time, which makes it simpler to logically separate data storages by
project, client or department.
Each storage account has a unique URI that identifies access to it either
from within Azure or the Internet if the permissions allow.
Blob storage can hold any kind of data objects, up to 190.7 TiB in size
for block blobs.
File shares are like a traditional file system and are excellent as
detached, centralised storage.
Queue storage can hold millions of messages while each message is
processed one at a time.
Table storage is a schema-less super-fast way of storing data in logical
tables. Each table consists of entities, properties and key/value pairs.
Each entity in a table in table storage has a unique identifier made up of
the partition key and the row key.
[1] Pronounced tebibyte; 1 TiB is 2^40 bytes = 1,099,511,627,776 bytes.
[2] Pronounced mebibyte; 1 MiB is 2^20 bytes = 1,048,576 bytes.
6 Security
This chapter covers:
Understanding Azure security fundamental concepts including defense
in depth, shared responsibility and zero-trust approach
Creating new identities in Azure Active Directory
Improving your security posture using Microsoft Defender for Cloud
Enabling multi-factor authentication for Azure Active Directory
Every aspect of Azure involves security. After all, we are trusting someone
else with all our data, processing, transfers and top-secret business logic.
Luckily, Azure is designed and built from the ground up with security as a
top priority, which means we can trust it, right?
This chapter focuses on that built-in security, all the things that you get for
free when you use the cloud services that Azure offers. However, we will
also dive into the parts that you can control, configure, and, of course, mess
up. As security is in a sense another foundational part of Azure, this chapter
won’t be delving into each product and how to make it the most secure for
your project or application. Instead, we focus on the security concepts that
run through all the products like a shining vein of titanium-plated armor and
secure your architectural wonders.
As I mentioned at the top of the chapter, Azure has security built in from the
very foundation of every single service. In fact, the physical infrastructure of
Azure data centres has state-of-the-art security measures as well, which make
sure no one tampers with the servers, disks and network cables. Starting with
this foundation, security is then at the forefront of every part of Azure. Let
me give you an example.
Once the VM is in use, Microsoft Defender for Cloud will watch over the machine and surface any alerts about best practices that can make your VM even more secure, as well as detect potential vulnerabilities. If you want intelligent security analytics on top, Azure Sentinel lets you do that too, which we cover later in the book. Using artificial intelligence, Azure can detect patterns in user behavior that are out of the normal routine, flag them, and let you investigate.
All of this happens without you having to write any code or configure any
scripts. Some features need to be switched on, and some will require basic
input, but it is all part of the secure foundation that Azure provides. As I
mentioned before, you need to do some of the work too but compare the
effort to “rolling your own” and the advantages are obvious. We won’t
actually do that comparison now, but you get the idea. All of what I described
just then is part of why this chapter is so important for getting a solid
understanding of Azure and the foundations of cloud computing. Now, let’s
dive into the depth of Azure defenses.
When a medieval king was sitting in his castle, he didn’t rely on just a few
guards to defend himself, the princess and all the treasures in the vault. Did
they have vaults in castles? Anyway, he needed several methods of protection
both for backup and for redundancy. As shown in Figure 6-2 the castle has an
outer gatehouse, which is a first check on anyone approaching the castle.
Beyond that is a moat surrounding the entire castle. Perhaps the moat is even
filled with ninja crocodiles. Who knows? To get over the moat, there is a
drawbridge, which of course can be drawn up in case of an attack or
emergency. The drawbridge leads to an outer wall, which is manned with
knights and archers. The outer wall also has towers and turrets to provide
cover. Past the outer wall is a piece of no-man's-land before you get to the
inner wall. That is again manned by knights, archers and other defenses. Past
the inner wall is the actual castle, which holds the keep. The keep is where
the king is sitting, but he is surrounded by a last ring of navy seals. Oh wait…
knights. Yes, knights!
In other words, the king is behind the gatehouse, moat, drawbridge, outer wall, inner wall, keep and special forces knights. That makes seven layers of
defense for his majesty. Fast forward to present day, and Azure adopts the
same approach of defense in depth, but of course the king is now your data.
Azure also has seven layers of defense that are all designed to protect your
king… I mean data, as shown in Figure 6-3.
Physical: The first line of defense is the physical datacenter, the building itself. Only authorized personnel are allowed in, and they are authenticated upon entry.
Identity: Being able to log in to Azure using the identity of the user.
Each user on Azure has their own identity that includes credentials such
as username and password, multi-factor authentication (MFA), and
audits of user access. This is all managed through Azure Active
Directory. We’ll cover MFA later in the chapter in more detail.
Perimeter: This is what Azure puts around their entire platform. A
perimeter of services that protects the entire platform against large scale
attacks, such as a distributed denial of service (DDoS) attack. It also
includes services specifically for your network, such as a firewall.
Network: By default, services on Azure can’t connect over the network,
or at least the communication is very limited. Traffic to and from the
public Internet is restricted and monitored to ensure no bad actors are
finding their way in. This defense layer also includes the facility to
secure any connections from on-premises to Azure.
Compute: Whenever a VM is fired up and you start using it, it becomes
a security risk. Nothing against you and your security efforts, but VMs
are prime targets for attacks and exploitations. Azure knows this and
provides secure infrastructure to run the VMs on and services for you to
connect securely to them, such as with Bastion. Azure also ensures that
security patches and updates are always applied to the entire platform.
Application: Your application is at the heart of why you use Azure. To
solve a business problem and provide a service to your customers. Azure
knows this and provides application specific services such as the
Application Gateway to manage the traffic to and from the application.
Data: Finally, the king of the layer cake. Yeah, that analogy totally
works. Data is what every business collects, protects, sells and manages.
Every product has data in it in some way, and often this is what attackers
are after. They want your users’ passwords and data, they want the
competitive information for your products, they want the source code for
your application. Data is stored in databases, on disks, in files and many
other places. Azure employs a range of security measures to protect your
data, including encryption, secure hardware, and access policies.
Figure 6-4: The shared responsibility for security and integrity, between Azure and the customer.
6.2.1 On-premises
Starting with on-premises infrastructure, you are responsible for…
everything. Microsoft isn’t going to come and patch your servers or watch
your back for DDoS attacks. You are on your own. Of course, that is part of
the value proposition of cloud computing, that you don’t have to do a lot of
the heavy lifting.
6.2.2 IaaS
Once you move into the world of cloud and start with IaaS, it is a different
scenario though. First, Microsoft doesn’t want you to have access to their
hardware, and you don’t want that either. The whole idea is to use the
hardware only when you need it, and not worry about where it is or how it
works. With IaaS the physical servers, network and storage are all managed
by Microsoft on the platform. You are guaranteed by Azure that you have
access to the VMs, the disks are maintained and replicated, network cables
are plugged in properly, and the infrastructure is working optimally.
Creating the core services that we went through in the first chapters of this
book, such as VMs, Azure Storage and Virtual Networks, means that the
responsibility of the security and maintenance of those services in part lies
with Azure. When you create a VM, a default security posture is applied to
the VM. This includes ensuring that all hardware complies with the ISO/IEC 27001
standard for legal, physical, and technical controls involved in an
organization’s information risk management processes. This is a very
stringent standard that ensures Azure complies with numerous regulatory and
legal requirements that relate to information security.
With IaaS you still have a responsibility to look after the security and
maintenance when it comes to the operating system on the VM, your
applications, network access, user management, the data access and any
devices connected to your infrastructure. Of course there is always an
exception to the rule, and Azure offers automatic patching for VMs, which is
almost a PaaS product.
No matter which service and which hosting type you use on Azure, the
responsibility of the physical hardware is always Azure’s. No more staying
up until 3am building that new server you need because Dave fried the
motherboard. Again.
6.2.3 PaaS
When you use PaaS services, your responsibility becomes less, and Azure's goes
up. However, it gets a bit less well defined, as the security responsibility falls
into the category of “it depends”. PaaS products and services vary in nature
and depending on how you implement your solution, the responsibility of
security and maintenance can be either Azure’s or yours. As I said, “it
depends”.
There are three areas, though, where it depends on your scenario whether Azure or you are responsible for the security of the platform.
6.2.4 SaaS
The last of the three hosting options, SaaS, has the least responsibility,
handing over most of the security reins to Azure. Just like with PaaS, the
identity is a shared responsibility, as you still need to manage users for your
application.
The second area for SaaS applications where responsibility is shared, is when
it comes to client devices and endpoints. Azure can provide tools for
managing some of these parts of your application, but it isn’t responsible for
what users get up to and how that affects your SaaS application.
Figure 6-5: Security on-premises can be challenging and under-resourced compared to leveraging
the Azure platform.
Not only does that mean you have all of Microsoft’s Azure engineers
working for you, but they are specialized in certain areas and will apply best
practices to areas you will never worry about. This means you have much
more time, energy and budget to focus on the areas that really make a
difference to your business or project. I am completely aware that this isn’t
going to give you any hands-on examples to take away, but it is very
important that you have a good grasp of what security in Azure gives you,
and what the platform doesn’t.
The areas you are responsible for can now be focused on, knowing the rest of
the infrastructure is secure. Furthermore, Azure will provide tools, such as
Sentinel, which can help predict and prevent security incidents in real time.
We will cover Sentinel in more detail later in the book too. Now, it is time to
talk about how you can’t trust your users. Zero trust for them.
Most of the services and features mentioned in this chapter are paid services.
You won’t be able to use them with a free Azure subscription. There are two
reasons for this: 1) Azure knows you are keen on being as secure as possible
by using the services already offered on Azure, so they will charge for them,
and 2) security services require a large number of engineers, product owners and other staff to keep up to date and effective. Unlike a VM or website, security is a constantly moving target which requires large investments from
Microsoft. Hence, you need to pay a bit to tap into that large effort.
One of the best, and most recognized, ways to build an Azure environment
that is more secure is to adopt a zero-trust approach, which is exactly what it
sounds like. Don’t trust anyone. Not your grandma, not your dog, not that
new guy that needs access to your Azure infrastructure. Zero trust means you
deny access to everything until the user is verified to access the resources in
question. The idea is that it is much more secure to deny everyone access,
and then slowly open up to each user when needed, instead of trying to guess
what each user needs access to. It also stops anyone from either accidentally breaking stuff they shouldn't have touched, or being tempted to access and manipulate resources they shouldn't have. If you don't have access, you
aren’t going to make a mess. In practical terms, implementing a zero-trust
infrastructure has several parts and in the middle of it is Azure Active
Directory.
AAD is key for a zero-trust approach, because we want to manage users. The
users we don’t trust. AAD is in the category of an identity management
system (IMS), and that is where each identity lives. We use the term identity
rather than user because an application or process can also have an identity.
We aren’t just talking about human users. To create a zero-trust
infrastructure, we start with creating a user, or identity, in AAD as shown in
Figure 6-6. Any identity that wants access to Azure resources must be in
AAD to have credentials assigned to it.
Creating a new user is trivial, and initially you can leave pretty much all the
options as default. As shown in Figure 6-7 a new user only needs a user name
and a name. The rest of the options (not visible in the figure) such as
password, group membership and permissions, can all be added later.
Figure 6-7: Creating a new user only needs a user name and a name to start off with.
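If you prefer the command line, a user can also be created with the Azure CLI. This is a minimal sketch, where the display name, user principal name, and password are placeholder values, and the domain must be one verified in your AAD tenant:
az ad user create --display-name "Alpaca Admin" --user-principal-name "[email protected]" --password "<a-strong-temporary-password>"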
Click Create when you have had a look at the various options and put in a
name and user name for the user. A bit later in the chapter we will go through
how to set up multi-factor authentication to make sure users are safe (and
who they say they are), and then also assign permissions to a user using role-
based access control. Stay tuned for those goodies, but first, let’s go through
how Azure can help us stay safe and let us know how to improve our security
posture.
Figure 6-8: Azure Defender gets data from resources and provides security alerts and
recommendations.
First, let’s orient ourselves on where Defender lives in Azure and what it
looks like. Find the Defender for Cloud service using one of the Azure search
tools, such as the Portal search bar shown in Figure 6-9. This will take you to
the main Defender blade in the Azure Portal.
Figure 6-9: Use the Azure Portal search bar to find Azure Defender.
The overview of all things Defender (Figure 6-10) is a compressed view of
all the various parts Defender monitors for you, such as the secure score,
compliance numbers and more. Notice the secure score here being 20. We
will find ways to improve that in this chapter, while looking at parts of the
Azure security continuum.
The file server for Alpaca-Ma-Bags has been left as it was the day it was
created. Nothing has been secured any further than what comes out of the
Azure box from the start. We need to improve on that. On the file server, let’s
go to the security menu (Figure 6-11).
Figure 6-11: The security menu is where Defender lives for a VM.
The security menu for the VM resource is not unique to the VM as a resource
type. By far the majority of Azure services will have a similar security blade
that will provide recommendations for how to improve the security for that
resource, as well as the severity of each of those recommendations as shown
in Figure 6-12.
Figure 6-12: Each of the recommended security fixes are described and rated for severity.
Each of the recommendations will have details of why the issue is raised, such as the one for the encryption of temporary disks and data caches shown in Figure 6-13.
Figure 6-13: The details of a specific security issue raised by Microsoft Defender.
Finally, at the bottom of the security issue blade there is a Take Action
button. Click it. Go on. It takes you to the disk blade for the VM, where we
can now perform the necessary changes to fix the security issue. Isn’t this
cool? Azure tells you where there are problems with your system. The
platform then goes on to give details on why it is an issue, so you can learn
from it, rather than just accept it blindly. Finally, there is a single button that
can either fix the issue for you, if you choose, or at least take you to the right
place to fix it if Azure can’t. Suddenly, keeping to the right end of the
Security Continuum is made much easier.
6.4.2 Secure score
The secure score is a one stop shop for how well you are doing on Azure,
security wise. It’s a single number that tells you in an instant where on the
security continuum your Azure environment is. Before I continue, I can tell
what you are thinking right now though. “Lars, this score seems like a silly
gimmick to please managers that just want a number saying how secure they
are.” You are kinda right, and kinda not right. Let me explain.
Yes, your manager will love that there is a number they can monitor, which
gives an indication of your security posture on Azure. However, there is more
to it than that. The score is calculated based on the individual security posture
for each service in Azure you are using. This creates a number of
recommendations from Azure to improve the secure score, shown in the
secure score recommendations dashboard in Figure 6-14. Each of these recommendations is then broken down into its current score as a
contribution to the overall secure score, and then, here is the really cool part,
how much the score will improve for each recommendation when you fix the
issue.
To get a better view of a single alert, click on it and a detail pane opens on
the right (Figure 6-17), which provides more information about the chosen
alert.
Note
Security alerts are not a free service within Microsoft Defender for Cloud.
Consider creating alerts in circumstances where resources are both exposed to
vulnerabilities and critical for your business processes.
Alerts form a great defense against threats, along with the secure score and the security recommendations. Microsoft Defender has a lot more features than these three key components, and I encourage you to explore them.
However, the score, the recommendations, and the alerts are the foundation
of Defender for Cloud. Use them. They will save your bacon. Let’s change
our focus a little bit and look at how to make the authentication process more
robust.
Alpaca-Ma-Bags need to upgrade their security posture for their users, and
part of that is using MFA through Azure. As shown in Figure 6-18 they want
to make sure users need at least one more form of authentication before they
are let into the wonderful world of plush luggage.
Figure 6-18: You need at least two of the three types of authentication to satisfy the MFA flow.
As the figure shows, you need at least two of the three types of authentication: something you know (such as a password), something you have (such as a phone or hardware key), and something you are (such as a fingerprint).
In the Azure Portal, let’s head over to the Active Directory section and then
click on the security menu, as shown in Figure 6-19.
Figure 6-19: Access the MFA settings in the security part of AAD.
Note
You won’t be able to configure MFA for a free version of Azure AD. You
will need Premium P1 or equivalent to take advantage of MFA. Luckily, you
can take advantage of one of the free trials of the tier, which Azure always
seems to offer. Just don't forget to cancel it before the first payment is due.
Once in the security section of AAD, use the Conditional Access menu item,
which opens the blade shown in Figure 6-20. From there you want to create a
new policy, which is a concept we briefly touched on in chapter 3. A policy is
a defined set of rules that determines an outcome. In this case the rule is the
MFA function for one or more users, or one or more groups. Whenever you
see the word policy in Azure, think rules that must be obeyed.
Figure 6-20: The security blade in AAD, where you can create a new MFA policy.
Alright, click that New Policy button and let’s create some MFA goodness for
Alpaca-Ma-Bags. That will show you the wizard for creating a new policy as
shown in Figure 6-21. You will have to give the policy a name and then
choose some users or groups. These groups and users are the accounts in
AAD that will be affected by the policy. Normally, an Azure AAD tenant
will have a group that includes all users. This group will have other groups in
it, such as administrators, standard users and other main users.
Figure 6-21: Defining users and groups for a conditional access policy.
There is one more step to do before our policy can be created. We have to
add, or enforce, an access grant. This grant says only to grant access if a
certain condition is met, which in this case is to require multi-factor
authentication as shown in Figure 6-22. You can see there are other grants
that can be selected for users to be authenticated, such as for specific devices
or apps. We won’t go into the details of those grants but just know there are
others to use besides MFA.
Figure 6-22: Add at least one grant or session, which in this case is to only grant access to MFA
authenticated users.
As set out in Figure 6-22 select the grant, click on Select, then Create. You
are now creating an MFA policy for Azure AAD. Exciting! Although, before
we can call it a day and go celebrate with chicken wings and lemonade, there
is a bit more to do. So far, we have just created a policy requiring MFA.
Nothing is using the policy yet.
A trigger for the policy could be your custom application, a management tool
or something else. For Alpaca-Ma-Bags they want to make sure that users
logging into the Azure Portal to manage any resources are using MFA,
exactly like we saw in the security recommendations in Figure 6-12. To do
this, open up the policy we created before, then select Cloud apps or actions
and search for Azure management as shown in Figure 6-23. Check the
Microsoft Azure Management app, then click Select again.
Figure 6-23: Adding a target application to the MFA policy.
The final part of finishing the policy is to enable it. You can see in Figure 6-
23 there are three options for enabling the policy:
Report only: This lets you monitor and evaluate how a policy would
impact the users it applies to. You get a report that outlines how your
policy works for your users without them noticing any change.
On: Turn the policy on. Camera. Lights. Action.
Off: Turn the policy off. Yeah, you knew that already.
Choose the option that suits you best, but if you want to use it for real, choose
“On”. Then log out of the Azure Portal and in again with an account that you
set the new conditional policy for, and you will be presented with a MFA
screen like Figure 6-24 or asked to set up MFA if you haven’t already.
Figure 6-24: The MFA check for my account asking for a text message or phone call
confirmation.
6.5.1 Passwordless
An emerging way to authenticate users is to use no password at all. Humans
are in general quite rubbish at remembering random words and phrases,
which is why we have a single password that we add a number or character to
at the end. You know what I am talking about. Don’t deny it.
Azure has recognized this and come up with a variation of the MFA process,
which eliminates the need for a password, called passwordless. The name
kinda gives it away, but it uses Windows Hello (face recognition, fingerprint
or PIN), an authenticator app, or a hardware security key. Once set up, these
methods are much more secure than a password that can be leaked, forgotten
or reused. To set this up, head back to the security blade of AAD, just like
you did before when setting up MFA, and click on the Authentication
methods menu (Figure 6-25).
Figure 6-25: To set up passwordless authentication, use the authentication methods menu option
in AAD Security.
If it isn’t already, you need to enable the combined security info experience
by clicking on that nice blue ribbon as shown in Figure 6-26. Go on, click it.
Figure 6-26: To enable users to use passwordless, enable the combined security info.
And then choose either Selected to only select certain users that can use
passwordless authentication or All for every user in the AAD tenant as shown
in Figure 6-27.
Figure 6-27: Choose “Selected” or “All” for combined security information registration.
Then hit the Save button. The last step is to enable the passwordless methods
of authentication you want to allow. At a minimum I would recommend
enabling Microsoft Authenticator and considering a FIDO2 security key. Those
are currently the two strongest authentication mechanisms in Azure. In the
Authentication methods blade, click on Microsoft Authenticator and enable it
as shown in Figure 6-28.
Figure 6-28: Enable Microsoft Authenticator for use with Azure AAD authentication.
Do the same for any other methods you want to enable. Users will now have
to download the Microsoft Authenticator app on their Android or iOS device
to use passwordless authentication or use any other methods you have
enabled.
Next, let’s dive into how the newfound authentication for users lead to
authorization to use resources with role-based access control.
6.6 Role Based Access Control
Arguably one of the most important ways to manage users and the resources
they can access is role-based access control, generally known as RBAC. This
is the key feature of Azure when it comes to avoiding losing track of what
users can access and keeping your entire security posture to the right on the
security continuum. RBAC determines who has access to a resource, what
they can do with that resource, and what areas in general they have access to.
As shown in Figure 6-29 RBAC works through the combination of three
elements: a security principal, a role definition and a scope.
Anyway, Azure provides a whole bunch of built-in roles that are already
defined for you, such as owner, contributor and reader. In fact, there are
hundreds of specific built-in roles for all sorts of purposes, including DNS
Zone Contributor, Azure Event Hubs Data Owner, and Cognitive Services
Metrics Advisor Administrator. As you can tell, they can get incredibly
specific, and chances are that you can use a built-in role for defining your
RBAC implementation.
If you can’t find a role from the Azure provided list, you can also create your
custom role definition. Avoid this if you can though, as it quickly becomes
difficult to keep track of what the roles do, and thus maintaining the
permissions in them, and who are using them.
6.6.3 Scope
Finally, the scope is the where of the RBAC equation. You define the scope
of where the who and the what apply. This is really useful, if you for example
want a user to be a database contributor, but only for a certain resource
group.
There are four levels of scope you can target, being management group,
subscription, resource group, and resource. Using these effectively can
dramatically improve how you manage authorization of users.
Together those three parts make up a role assignment, which is what you call
the process of attaching permissions to an identity. For example, you could
take the user we created in Figure 6-7 and give them access to the AlpacaRG
resource group. Go to the resource group and then click on the Access
Control (IAM) menu, then add role assignment as shown in Figure 6-30.
You might think that a lot of roles would be standard, such as allowing read,
but not write, or allowing read and write but not delete. You’d be right! There
are a whole lot of standard roles that we use on Azure all the time, which is
why Azure provides a bunch of built-in roles you can use. I always
recommend using those roles wherever possible, as creating your own brings
a lot of challenges. You have to maintain the roles, make sure they don’t give
too few or too many permissions, not to mention naming them! We all know
that naming things is the hardest thing in technology. For this assignment to
the Alpaca-Ma-Bags identity, we will use the Contributor role as shown in
Figure 6-31. This particular role is commonly used where an identity needs
access to fully manage all applicable resources, but not pass that capability on
to any other users.
Figure 6-31: Choosing a built-in role, if possible, is the best choice for RBAC.
The built-in roles have a pre-defined role definition. You have already chosen
the scope when we went to the resource group and selected the Access
Control (IAM) menu. Finally, we need to select the security principals for the
role assignment. Click Next and then it is time to choose some members.
Figure 6-32 shows the next part of the role assignment wizard where you
choose the identities that are included in the assignment.
Figure 6-32: Select the members of the role you are assigning.
Hit Select and then Review + assign, after which you can save that role
assignment to the identity. The Alpaca-Ma-Bags identity now has access to all resources in the AlpacaRG resource group and can even create new
resources as well. That is the brief introduction to RBAC, as there are a lot of
nuances to it, which are out of the scope of the book.
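The same role assignment can also be scripted with the Azure CLI. This is a sketch, where the user principal name and subscription ID are placeholders:
az role assignment create --assignee "[email protected]" --role "Contributor" --scope "/subscriptions/<subscription-id>/resourceGroups/AlpacaRG"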
6.7 Summary
Azure uses 7 layers of defense to provide defense in depth. They are
physical, identity, perimeter, network, compute, application, and data.
Azure Active Directory is where you manage all your users’
authentication and authorization by using MFA and RBAC.
Use the shared responsibility model to understand when and where you
need to take responsibility for your security on Azure.
A zero-trust approach is the best way to ensure only needed access is
granted to Azure identities. Assume from the start, no one needs access
to anything until proven otherwise.
Leveraging the security foundation in Azure means you can focus on
adding business value and do the fun stuff, as thousands of security
engineers work on improving your security posture every day.
All hardware used by Azure complies with the ISO/IEC 27001 standard
for legal, physical, and technical controls involved in an organization’s
information risk management processes. This is a very stringent
standard that ensures Azure complies with numerous regulatory and
legal requirements that relate to information security.
Microsoft Defender for Cloud security recommendations are a to-do list
of the security improvements you can (and should) do for a particular
service.
The Microsoft Defender for Cloud secure score directs you to the
security improvements that will add the most value to your Azure
infrastructure.
Security alerts are raised when Defender detects a threat to one or more
of your cloud resources.
Multi-factor authentication is enabled through Azure Active Directory
and then assigned to groups and identities.
Passwordless authentication is a special type of MFA that increases the
security posture by removing passwords the user has to remember, and
instead relying on hardware keys or biometrics for authentication.
RBAC defines identity authorization by using a combination of a
security principal, a role definition and a scope.
If you have been wondering, since the start of the chapter, what a ninja
crocodile looks like, I have good news. I managed to get a closeup of one.
You’re welcome.
7 Serverless
This chapter covers:
Creating a function app and function from Visual Studio Code.
Creating Logic Apps to integrate with other services.
Introducing API Management and creating a public facing API.
Understanding what serverless cloud services are good at.
Serverless is one of those terms that your manager loves and which has come
to signify using and implementing cloud computing like the cool kids. There
is a certain expectation of efficiency and cost saving if you start describing
your solution as serverless. While that can certainly be true when approached
correctly, let’s get one thing clear right from the start: there is no such thing
as server-less. You can’t do any kind of computing without a server of some
kind, whether that is your laptop, an on-premises datacentre, or cloud
computing. What is referred to is that you are using a server that someone
else manages. You and your project are serverless, but the computation you
perform is not.
The best way to look at serverless is like chapter 6 on security in the sense
that serverless is a spectrum, rather than a yes/no decision. As shown in
Figure 7-1 you are somewhere between a server-based architecture and fully
serverless. It is perfectly okay to have a mix of services relying on servers
and some relying on serverless services. As always, use the right tool for the
job.
Non-Fungible tokens
Also, this book does not presume to know anything about NFT trading,
valuation, or any other aspect of this blockchain enabled digital trading
world. I am not suggesting you invest, acquire or do anything with these
assets. It is merely in this book as part of a serverless scenario.
These unique digital assets can make their customers feel special and loved,
but Alpaca-Ma-Bags also want their partners to get some benefit from them,
and to have access to their customers’ NFTs. Up until now the web shop data
flow has been a record inserted into a database every time an order is placed for one of their products, such as the popular alpaca hairstyling gel. Now, when
an NFT is created by a blockchain process, a reference to the NFT and the
NFT image itself will be stored too.
The images will then be accessible through a public facing API, which
partners can be granted access to. The API architecture and endpoints will be
created in a serverless setup which makes it scalable and low cost. There was
that word again: Endpoints. I have mentioned it a few times in the book, but
now is the time to understand exactly what that word encapsulates. An API is
a collection of functions that a user of the API can call, or access. For
example, an API might have a function that calculates the power of one
number to another, such as 2⁴. This function requires two inputs, being the
two numbers. For an API, this is exposed through an endpoint, which is most
often called using HTTP via a unique URL as shown in Figure 7-2. In short,
an endpoint is a web address that can be accessed over a network, usually the
Internet, which you can pass a number of arguments to, and get a result.
Figure 7-2: An API has endpoints, which are often HTTP URLs that can be accessed over the
Internet.
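To make that concrete, a hypothetical power endpoint could look like the following, with the two numbers passed as query string arguments and the result returned in the response body. The host name and parameter names are made up for illustration:
GET https://api.example.com/api/power?base=2&exponent=4

16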
While the current flow is adequate for the shop as it operates right now,
Alpaca-Ma-Bags wants to make sure the shop can grow and integrate with
other services in the future. For example, if a customer places an order, it
could trigger a marketing email workflow to recommend other related
products. You know you want more marketing emails about alpacas. For this
reason, they are planning on expanding their cloud footprint with serverless
additions. The parts we will need for the initial architecture are shown in
Figure 7-3 and include:
API Management: This is the service that binds all the API endpoints,
of which we will have two initially.
Azure Function: The function will retrieve the NFT from the blob
container.
Logic app: Post a copy of the image to Twitter. Because, why not?
Blob storage: Blob storage is where the images are stored. We will also
send output from the Azure Function to here. Blob containers are within
a storage account, which we went over in chapter 5.
Figure 7-3: The serverless architecture we will use. Look at that fancy alpaca.
Of course, this scenario is a tiny bit contrived and made up, but it will give us
a chance to learn about three major serverless services in Azure: API
Management, Functions, and Logic Apps. And if you can accept alpaca hair
products, I am sure we can agree on using this business venture into NFTs
too. Right?
7.1.1 What is an API?
Figure 7-4: Two external apps are communicating with the API to access internal data.
You might be thinking that you can do all this with a standard website, or
indeed your tried and tested desktop application. The difference between
those scenarios and an API, is that there is no user interface. An API is
exclusively for pieces of software to connect and communicate with each
other in a managed way.
As you might have noticed from Figure 7-4 one purpose of APIs is to hide
the internal details of how a system works, exposing only those parts a
programmer will find useful. This has the added benefit that future changes to the internal system don't necessarily mean the API endpoint and usage have to change. APIs are what bind together a LOT of software solutions.
And with that, let’s start building our own serverless API.
As you learnt in chapter 2, when we built the image resizing app, functions
live inside a function app. This function app can be loosely translated to the
VM that will run your function, which is why serverless simply means
someone else’s computer. Let’s create a function app, but this time we will
use the Azure CLI instead of the Portal. As mentioned before in chapter 3 of
this book when creating VMs, the CLI is a common tool for creating and
managing Azure resources, once you start working with them frequently. It is
just faster and more convenient.
C:/> az functionapp create --name ActionFunctionApp --storage-account alpaca
Figure 7-5: Use a storage account in the same region as the function app to minimize latency
when running your function.
Because we are creating a function app of type consumption, the parameter
consumption-plan-location is needed to determine which region to host the
function app in. A whole lot more on hosting options and constraints is
coming up in a little bit. Finally, the functions-version refers to the major
version of the function app, which at the time of writing goes up to 4. Fire
that command off in the CLI and you have a shiny new function app. You
should see a ton of JSON output once the command succeeds. However, the
function app doesn’t do anything on its own because we need to have one or
more functions inside it. How many functions you can have in a function app
depends on the type of function app and the scenario you are using it in.
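For reference, here is a complete sketch of the create command with all the parameters discussed here filled in. The resource group, storage account name, region, and runtime are placeholder values you would adjust to your own setup:
az functionapp create --name ActionFunctionApp --resource-group AlpacaRG --storage-account alpacastorageaccount --consumption-plan-location australiaeast --runtime dotnet --functions-version 4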
Before we get into the details of what makes up a function, let’s create one
first. In chapter 2 this happened using the Azure Portal, but that isn’t the most
common way of creating a function. We do that using a much more efficient
tool: Visual Studio Code. If you aren’t familiar with VS Code, go check out
the appendices in this book to get started.
Once you have the Azure extension installed in VS Code and logged in, you
should be able to find the function app we created a minute ago, as shown in
Figure 7-6. In fact, you can also see the function app we created in chapter 2.
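Next, create a new Functions project. One way to do this from the terminal, using the Azure Functions Core Tools (the VS Code Azure Functions extension walks you through the same thing with a wizard), is the following, where the project name matches the folder mentioned below:
func init AlpacaNFTProject --dotnet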
This will create a folder called AlpacaNFTProject inside the folder you
navigated to. This new folder will have various files for the dotnet project as
shown in Figure 7-7.
Table 7-1: Overview of the files created for a new Azure function (columns: File, Purpose).
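With the project in place, a new function is added with the func new command. This is a sketch of the command, assuming an HTTP-triggered function named ProcessNFT, which is the name used later in the chapter:
func new --name ProcessNFT --template "HTTP trigger" --authlevel anonymous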
While this command is short and relatively simple, there is a lot going on.
First, the new parameter indicates we are creating a new function. Nothing
surprising about that, but then we get to template, which is a whole story in
itself. The template for the function describes which trigger the function has.
We will dive into triggers in just a second. To see a list of available triggers
for your current system use the command
func templates list
We are using the HTTP trigger template for the function, which triggers the
function whenever its HTTP endpoint is called. Finally, the authlevel
parameter can have one of three values, anonymous, function, or admin. This
determines what sort of authentication is required to access the function, as
outlined in Table 7-2.
We now have a function in VS Code, but it is only on our local machine for
now. That is okay. We will get into Azure soon enough. However, as we now
have a function to look at, let’s go through more of the details and features of
functions. And we start with triggers.
7.2.1 Triggers
Triggers are what cause a function to run. A trigger defines how a function is
invoked and a function must have exactly one trigger. If there wasn’t a
trigger, a function would never run, as it wouldn’t know when to do so. In the
function we just created, we used the HTTP trigger, which lets you invoke
the function with a call to a web address. A blob storage trigger would
invoke the function when a new or updated blob in a container is detected.
There are many other triggers as well, such as a queue trigger with the queue
item as data. These additional types of triggers are outside the scope of this
book though.
Triggers have associated data, which is often provided as the payload of the
function. If you look at the signature for the Run method in our function:
public static async Task<IActionResult> Run(
    [HttpTrigger(AuthorizationLevel.Anonymous, "get", "post", Route = "nft/{nftid}")] HttpRequest req,
    ILogger log)
The data coming from the trigger is the req parameter of type HttpRequest.
This data object will have data that is common to an HTTP request, such as
the host and referrer of the request.
7.2.2 Bindings
You know how to start a function using a trigger, but how do you interact
with other resources that the function needs to do its work? That is where
bindings come in. Bindings let us connect other resources to the function
using input and output parameters. The data from the parameters is then made
available to the function from the bound service.
For our function, let’s add a binding to the input as well as the output.
Eventually, this function is being connected to our Alpaca API (Apipaca?
), which will in turn trigger the function. The function will retrieve the
NFT image using a binding to blob storage, and then also create a smaller
version of the image, store it in blob storage and then return it to the API
caller. This smaller image is both smaller in dimensions as well as file size, to
make it faster to retrieve it for previews or thumbnails. I created Figure 7-8 to
make that a bit clearer.
Figure 7-8: The flow we are creating for the Azure function.
As you can see in Figure 7-8 we need to extract the image from a blob
container, then also create a copy of the NFT image, which is inserted into
the second blob container. We are going to do all this in the function, but let’s
start with a look at the first part of the code of the function as it stands from
when we created it.
[FunctionName("ProcessNFT")]
public static async Task<IActionResult> Run(
    [HttpTrigger(AuthorizationLevel.Anonymous, "get", "post", Route = "nft/{nftid}")] HttpRequest req,
    ILogger log)
{
There are two more things we need to do, to make this function work. First,
we need to add a reference to the SixLabors image manipulation library,
which we also used in chapter 2. Using Visual Studio Code, open the project
file, such as AlpacaNFTProject.csproj, and add the following line with the
other package references.
<PackageReference Include="SixLabors.ImageSharp" Version="2.1.3" />
You might have to use a different version, but 2.1.3 is current at the time of
writing. You then need to add the using references to the top of your function
file to let your function know how to manipulate our NFT image.
using SixLabors.ImageSharp;
using SixLabors.ImageSharp.PixelFormats;
using SixLabors.ImageSharp.Processing;
using SixLabors.ImageSharp.Formats;
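Putting the trigger, the blob bindings, and the ImageSharp pieces together, here is a minimal sketch of what the finished function could look like. The route, the container names originalnfts and copynfts, and the 200-pixel preview width follow the flow described in this section, but they are assumptions rather than the exact code:
using System.IO;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Http;
using Microsoft.Extensions.Logging;
using SixLabors.ImageSharp;
using SixLabors.ImageSharp.Processing;

public static class ProcessNFTFunction
{
    [FunctionName("ProcessNFT")]
    public static async Task<IActionResult> Run(
        // HTTP trigger: runs when /api/nft/{nftid} is called.
        [HttpTrigger(AuthorizationLevel.Anonymous, "get", "post", Route = "nft/{nftid}")] HttpRequest req,
        // Input binding: reads the original NFT image from blob storage.
        [Blob("originalnfts/{nftid}", FileAccess.Read)] Stream originalNft,
        // Output binding: writes the smaller preview copy to a second container.
        [Blob("copynfts/{nftid}", FileAccess.Write)] Stream nftCopy,
        string nftid,
        ILogger log)
    {
        log.LogInformation($"Processing NFT image {nftid}");

        if (originalNft == null)
        {
            return new NotFoundObjectResult($"No blob found for {nftid}");
        }

        // Load the original image, resize it to a 200-pixel-wide preview
        // (a height of 0 keeps the aspect ratio), and save it to the output blob.
        using var image = await Image.LoadAsync(originalNft);
        image.Mutate(x => x.Resize(200, 0));
        await image.SaveAsPngAsync(nftCopy);

        return new OkObjectResult($"Created a preview copy of {nftid}");
    }
}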
Note:
If you are getting errors for the Blob bindings, it might be because you have
to install the Storage extension in VS Code. Run the command dotnet add
package Microsoft.Azure.WebJobs.Extensions.Storage in the terminal
window to add the reference library.
The second thing is to add some images to the blob container defined in the
input binding originalnfts/{nftid}. You can add whatever you like, but in this
case, I have used the local Azurite open-source emulator, which will allow
you to run everything locally. If you haven’t used the Azurite open-source
emulator before, check out the appendices. Once you have one or more
images in your blob container, press F5 in VS Code to run the function. This
will build the code for the function, start up a small local web server, and host
your new shiny function ready for use. And use it we will.
In the Terminal window, you’ll see a line like the following at the very end.
Functions:
This shows the functions that have been built and are now running on your
local host. In this case the ProcessNFT function is being hosted on localhost
port 5004. Your port might be different. One way to call the function is to
open your web browser and enter the URL specified above, replacing the
{nftid} with the name of the blob you want to process.
http://localhost:5004/api/nft/nft-1-2022.png
Once the function has run, a second image has been generated and inserted
into the path in the output binding, such as copynfts/{nftid}. You may
have noticed that the input and output bindings are very similar. A binding is
a binding in general, and it depends on how you use it, which determines if it
is input or output. In this example, the input binding is set to FileAccess.Read
and the output is FileAccess.Write, which makes sure I don’t accidentally
change the original NFT.
The final question is how do we get the local function deployed to Azure
proper? In VS Code, it is quite simple. Make sure you are logged in to Azure
in the IDE, then choose your function app and click the deploy icon as shown
in Figure 7-9.
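As an alternative to the deploy icon, the Core Tools can publish the project from the terminal, using the function app name we created earlier:
func azure functionapp publish ActionFunctionApp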
Alright, now that we have a function app, it is a good time to dive a bit more
into hosting your code.
7.2.3 Hosting
While I love to look at the code and learn how to create the function, let’s for
a minute pause to talk about the hosting of the function. You might think
“wait, hosting? I thought this was serverless?” As we discussed at the
beginning of the chapter, serverless just means it runs on someone else’s
server. We need a host to run the function. While you only do this once when
setting it up, it can make a big difference how you approach it. There are
three different ways to host your function on Azure: consumption, premium,
and on an app service plan. The pricing of each model is very different and is outlined in Table 7-3.
Table 7-3: Function hosting options (columns: Hosting option, Cost model, Description).
While pricing is certainly a significant factor in how you decide the right
hosting for your Azure function, there are other factors that should be
considered carefully too.
Let’s start with the consumption option. This is the default option, and it will
suit a large part of your function needs. Whenever your function is triggered,
the consumption hosting will create a VM to run your function, load the code,
and execute it. If there is high demand for the function, the consumption plan
will automatically scale up to meet that demand by creating VMs to meet it.
The premium plan has one main feature: pre-warmed VM instances. We’re
not talking about cooking here of course, but the reference is to having at least
one VM running (or spun up) and ready to trigger the function at any time.
On the consumption plan, the VM running your function will be recycled at
some point. This might be when the resource is needed or when your function
is not being triggered for a period. When your function is triggered again, the
VM has to be created, started and your code loaded onto it. This takes time
and is referred to as “cold start”. Sometimes, the few seconds this can take
can be critical in your workflow, and this cold start has to be avoided. That is
what pre-warmed instances are there to prevent. With the premium hosting
you never have a cold start, as at least one more VM than currently needed to
run your function workload is always ready as shown in Figure 7-10, which
is probably the best designed figure in this whole book. Right?
Figure 7-10: The compute resources provided in a premium function app is always more than
needed to prevent cold starts.
There are other benefits of the premium hosting as well. The timeout for the
function is increased from 5 to 30 minutes by default, you get more powerful
VMs for computing, and you can integrate the function app with your virtual
networks. The last point deserves a bit more information though. Remember
back in the chapter on networking that everything on Azure is connected to a
network? If so, you might wonder why this is a premium feature for
functions. Virtual network integration allows your function app to access
resources inside a specific virtual network, which will provide the benefits of
gateways, subnet access, and peering.
Finally, you can choose a new or existing app service plan. I briefly
mentioned app service plans in chapter 2 but let me recap. They are measured
in compute units and are really VMs that you provision to run your web apps,
being websites, APIs, docker containers and more. A very cool feature of app
service plans is that you can host your Azure functions apps on them too.
Why is this cool? First, you are already paying for the app service plan, so the
functions are essentially “free”, as you don’t pay extra to host a function app
on it as well. Second, you have some insight into the health and current
environment of the VM your function runs on. This can sometimes be the
difference between fixing a problem and sitting in a corner rocking back and
forth.
To summarise the hosting options for Azure functions, it can make a big
difference which you choose. As the comparison in Table 7-4 shows, you
need to consider your requirements for the Azure function you are creating.
Table 7-4 (excerpt), Scaling: consumption and premium hosting scale automatically, while an app service plan scales as per the service plan.
Coming back to the question at the start of the section. Do you think
functions are kind of magic? You have almost unlimited compute power with
only a few lines of code, and you can integrate with other Azure services to
create a complex workflow with very little code. I love functions. Let’s move
on to one of my other top 3 serverless services.
While you can create a Logic App through the Azure CLI, as well as define
the triggers, actions, integrations and flows through JSON notation, we will
use the Azure Portal in this case. It is easier to visualise and follow that way.
Open up the Azure portal and go to the Logic Apps section, where you can
create a new instance. Click on Add, then go through the standard Azure
wizard to create the Logic App by providing your subscription, resource
group, name, and region. Those properties are by now a known part of your
Azure services, as they need to be defined for everything. Let’s dive a bit
deeper into the Plan type though. You have two options, being Standard and
Consumption as shown in Figure 7-11.
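For reference, the CLI route mentioned at the start of this section looks roughly like the following. It requires the logic CLI extension and a JSON workflow definition file, and the resource group, name, location, and file name are all placeholders:
az extension add --name logic
az logic workflow create --resource-group AlpacaRG --name AlpacaTweeter --location australiaeast --definition workflow.json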
Figure 7-12: The Logic App landing page in the Azure Portal.
What you have created now doesn't do anything yet. Just like an Azure function app is needed to host the functions you create, a logic app is needed to host the workflow, the logic flow, that you define inside it.
7.3.1 Connectors
Connectors in logic apps are the abstraction for using hundreds of services to
help you access data, events, and resources in other apps, services, systems,
protocols, and platforms. It is a prebuilt part of the logic app ecosystem
which Azure provides to you. Often a connector will ask you for credentials
to connect your data to the logic app, but then you can use it as a native step
in your workflow. Connectors come as both triggers and actions, which are covered in the next two sections.
Next, we need to create that logic flow of the logic app, which will tweet the
NFT images, like there is no tomorrow. The first step of the process for any
logic app is a trigger, which is also a connector.
7.3.2 Triggers
A trigger is what starts the logic app, what makes it come alive. You can’t
have a logic app without a trigger, and there are many triggers to choose
from. In general, there are two categories of triggers (Figure 7-13).
Poll: This is when the trigger periodically checks the service endpoint
to see if there is an event. This could be an HTTP trigger checking if
there is new data on an API endpoint.
Push: The trigger subscribes to an endpoint with a callback URL. The
endpoint then pushes a notification to that callback URL when there is
an event.
Once you select the trigger, you are asked to create a connection to the
storage account you want to use. The following values are needed to create
the connection.
Figure 7-15: Copy the key from the storage account Access keys to use in the logic app
connection.
A connection in a logic app is an authenticated interface to a service such as
the blob storage. The connection is defined within the logic app and is
available to any trigger or action in it.
Once you have created the connection to the storage account, we can start
defining what data we are going to retrieve from it as part of the trigger. We
now have to define a few more values (Figure 7-16), which will configure the
trigger.
Storage account name: Yes, I know we only just added this, but now it
is a drop down, and the connection is defined for use.
Container: This is the specific blob container we will get the NFT
images from. You can either enter the path manually or use the very
handy folder icon to select the container.
Number of blobs to return: Every time the storage account is polled by
the logic app, how many blobs do you want? We can leave this as 10 for
now, but it could be more or less depending on how often you check.
Interval: Finally, you can choose how often the logic app checks for
new items. The default is every 3 minutes, but you can choose anywhere
from every second, to once a month.
Figure 7-16: Configuring the blob trigger for the logic app.
Click the Save button if you haven’t already, and now we have a trigger for
the logic app. Of course, a trigger is only useful if there are one or more
actions that follow, and now that we are retrieving all the blobs we insert, it is
time for some action.
7.3.3 Actions
The action we want to perform on the blobs coming from the trigger is
tweeting. I know you think that is extremely exciting too, so let’s get right to
it. Before we can get to the actual tweeting, we need to first get the blob data
itself. The trigger only passes a reference to the blob, and not the data itself,
but we need that to add to our tweet, so it’s time for some blob action.
Click the + New step button underneath the trigger, and search for blob again.
This time choose the Get blob content (V2) action and fill in the parameters
as shown in Figure 7-17. The storage account drop down should contain the
connection to the storage account we set up before, the blob is the dynamic
field to the path of the blobs within the storage, and finally, leave the
content type as default. This will make the blob data available to the next step
in the flow, which will be the tweet action. Exciting!
Figure 7-17: Retrieve the blob content for the inserted blob that triggered the logic app.
Again, click on the + New step button to open up the operation selection box.
In the search field, type in Twitter and then find the “Post a tweet” operation
as shown in Figure 7-18. Also, have a look through the list of available
twitter actions to get a sense for the kinds of operations you can do in a logic
app.
Figure 7-18: Find the “Post a tweet” operation for the step.
When you select the action, you will be asked to create a connection to a
twitter account (Figure 7-19), which is where the tweet will be posted. To
allow this interaction, Twitter requires an application which is a predefined
entity within the Twitter ecosystem. You can create your own application
using Twitter’s developer portal, or you can use the application that
Microsoft has already made for Azure. We are going to use the latter. The
reason you would use your own Twitter application is to have more control
over the permissions and statistics for its use. We don’t need that in this case.
Figure 7-19: Connect the logic app action to Twitter by authenticating with an application.
When you click the big blue Sign in button, you will be asked to log into
Twitter with a valid account, which will then authenticate the Azure Twitter
application to tweet on that account’s behalf. In other words, the logic app
will have permission to send tweets out from the account you just logged in
with.
Once authenticated with Twitter, you have to select the parameters you want
to include in the tweeting stage. There are only two, Tweet text and Media,
and we will need both (Figure 7-20). The tweet text is what you want the text
in the tweet to say, and the media is the NFT image, which is the blob data
we now have available from the previous step.
Figure 7-20 Choose both the available parameters for the imminent tweet.
Fill in the tweet text and notice when you click on it and the Media field that
the dynamic fields dialog opens again (Figure 7-21). This will happen on
most interactions within logic apps, as that is really the key to their power.
The whole idea of logic apps is the flow of data between seemingly disparate
entities, such as our blob storage and tweets. If you were to code all of this
yourself, which of course is possible, you’d be at it for days, maybe even
weeks. While I appreciate the excitement and satisfaction in getting
applications to work together in code, I also appreciate my time to deliver a
solution. A business problem isn’t about solving it in the most interesting and
clever way, but about doing it fast and with confidence in the stability and
security of that solution. That is what logic apps give us. I hear you saying, “I
like to write code”, because I do too, but a service like logic apps not only
gives you a faster and more tested solution, but also serves the business in
most cases. You are a better team player, thus adding more value to a
business, by using the best tool for the job. Logic apps can often be that best tool, even if they don't touch actual code.
What logic apps do touch on is, well, logic. And that is a core part of any
application development and solution. The dynamic fields of the various
steps in a logic app are where your tech brain needs to shine. The dynamic
fields are your parameters, inputs and outputs you need to manage and
massage to get the data flow you need. And that is why I find them so
exciting. They are quick, simple, yet deliciously logical as well.
Figure 7-21: The dynamic content dialog is the power of logic apps.
Back to the task at hand. Fill in the tweet text and choose the File Content for the media (it is the only dynamic field available), then save the logic app using the Save button. And then it's time to test. Upload an image file into
your blob container by using the upload pane and selecting your file (Figure
7-22), then either wait for the logic app to trigger or go and trigger it
manually yourself.
Figure 7-22: Upload an image to blob storage to trigger the logic app.
Phew! That was a lot of triggers and actions to get our tweet out to the world.
If we take a step back, this approach can be used for more than just sharing
vain alpaca pictures on social media. Every workflow you create for a logic
app will follow this pattern of a trigger to start the process, then any number
of actions and logic constructs to manipulate the data and achieve the desired
outcome for your business or project. This could be sending an email when a
specific type of record is inserted into a database, running an Azure function
when a message is received on a queue, and much more. Try it out for
yourself too. Go press all the buttons and see what you can make happen in
your own workflow with a new logic app.
If you have set up everything correctly, the logic app will now have run and
you have a tweet on the account you linked. Wondering which image I used?
Check out Figure 7-23. You’re welcome.
Figure 7-23: The actual tweet I created at the time of writing. It is still there.
Only one more step in creating our serverless architecture for Alpaca-Ma-
Bags: API Management.
The part of the Alpaca-Ma-Bags serverless project we are creating now is the
API Management service (Figure 7-24). This will glue together the function
and logic app we created earlier and make them available to third parties in
an authenticated and managed way using, of course, serverless.
Figure 7-24: The API Management service glues the serverless project together.
Let’s create an API Management instance and then delve into the main parts
of the service. We are going to create the service using the Azure CLI, but further on we will also use the Azure Portal, as the visuals are often easier to understand and some of the features only make sense there. Use the following command template to create a new instance, but
don’t execute it just yet. You’ll have to change a couple of things.
C: /> az apim create --name alpacaapim --resource-group AlpacaRG --publisher
By default, the API Management pricing tier is Developer, which is the most economical but not meant for production use. You can choose Consumption
too, which is the typical serverless tier that comes with 1 million calls for
free.
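To give you a sense of the full shape of the command, here is a sketch with the Consumption tier chosen; the publisher name and email are hypothetical placeholders you should replace with your own details.
C:/> rem sketch only - the publisher name and email below are placeholders
C:/> az apim create --name alpacaapim --resource-group AlpacaRG --publisher-name "Alpaca-Ma-Bags" --publisher-email owner@alpacamabags.example --sku-name Consumption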
While you are waiting for the API Management instance to be provisioned, it is a good time to explain the main parts of the service: the API gateway, the management plane, and the developer portal, as depicted in Figure 7-25. The service gives developers access to their subscription level and API documentation through the developer portal, and it lets developers connect their own applications through the API gateway. The management plane is what binds it all together.
The developer portal, management plane, and API gateway are all equally
important and together provide all the tools you need to create a robust API,
as we are about to do. They are all fully managed and hosted by Azure.
Let’s start with the API Gateway.
Open the Azure Portal to the newly created API Management service with the
name you gave it. Then go to the APIs menu as shown in Figure 7-26, and
scroll down on the API templates on the right. These are all different ways
that you can create APIs for your API Management service. Find the Azure
Function API template, which we will use in a moment.
Figure 7-27: The vanilla freshly created developer portal. There are tools for updating the design
and features, as well as testing it.
It is a public website, which developers can use to sign up for and acquire
API keys for use in their own applications. They can find the API
documentation, test API endpoints and access usage statistics for their
account.
At this point you might ask “Lars, why are you showing me this developer
portal? Are we going to configure it or press some buttons?” No, sorry. The
point here is that you know it gets created and it is the main way developers
interact with your API. We just don’t have enough pages to go into details
about how you design and configure the developer portal. I do invite you to
access it now that you have it and press all the buttons. Get a sense for it.
Okay, now let’s connect some services to make some API endpoints.
Enough talk about what API Management consists of. Let’s get into hooking
up some endpoints to let the users of Alpaca-Ma-Bags use the services we
have created. We’ll start with integrating the function that processes the NFT.
In the Azure portal, go to your API management instance, then go to the API
section, and then choose the Function App as shown in Figure 7-28.
Figure 7-28: Choosing the Function app to start the integration with the API Management
service.
This will prompt you to fill in the function app name, a display name, a name, and an API URL suffix as shown in Figure 7-29. Enter values that make sense
for you, then click Create.
Note:
All functions within the function app will be exposed in the API. If you have
6 functions hosted inside a single function app, then you will get 6 API
endpoints in API Management when using this integration template.
Figure 7-29: Find the function app to use for an endpoint, then give it a name and URL suffix.
This will create the endpoints for the API based on the functions in the
function app. We just have the one function we created before, which is then
added as shown in Figure 7-30. If you select the GET method for example,
you can see the URL for the endpoint, as well as which parameters are
needed.
Figure 7-30: Once the function app has been linked to the API management service, it shows
under APIs in the portal.
How simple was that!? To make sure the endpoint works, let’s test it by
going to the test tab for the API, as shown in Figure 7-31. All you have to do
is enter a value in the template parameter, which in this example is the file
name, and then click send. If you scroll down a bit further in the portal
window, you will see the request URL is shown as well. It would be
something like https://alpacaapim.azure-api.net/nftcopy/nft/nft-1-
2022.png.
Figure 7-31: Testing an API directly through API management.
This will trigger the function we built earlier and create a copy of the image
in your bound blob container.
NOTE:
If you are getting an error, it could be because you haven’t created the blob
containers on the linked Azure storage account. In the configuration settings
for the function app, you can see which storage account is being used by
inspecting the AzureWebJobsStorage setting.
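If you prefer the CLI, a quick way to inspect that setting is a command along these lines; <function-app-name> is a placeholder for the name of your function app, and I am assuming it lives in the AlpacaRG resource group.
C:/> rem lists only the AzureWebJobsStorage app setting for the function app
C:/> az functionapp config appsettings list --name <function-app-name> --resource-group AlpacaRG --query "[?name=='AzureWebJobsStorage']"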
That is the first of our two API endpoints done for the serverless Alpaca-Ma-
Bags API. Now for the second one using logic apps.
The second part of the API is hooking up our logic app tweet machine. As
shown in Figure 7-32 this is the last part of the Alpaca-Ma-Bags architecture
that we defined at the start of the chapter.
Figure 7-32: The last part of the architecture is connecting the logic app to API Management.
You might have guessed it, and yes, this is equally simple to do. However,
there is a catch. To connect a logic app to API Management, it must have an
HTTP endpoint. If you think about it, that makes sense. An API is a collection
of HTTP endpoints exposed to users over a network, specifically with no user
interface. It stands to reason that any endpoint must be accessible over HTTP,
and that then trickles down to the kind of integrations you can do within the
API Management API gateway. Our logic app from earlier doesn’t have an
HTTP trigger, but rather a blob trigger. So, as an exercise, build a new logic app that is:
Triggered by HTTP
Receives the NFT id via a query string parameter
Retrieves the NFT image from blob storage using the id
Posts a tweet with the image and a description of your choosing
Connect the logic app to the API Management API gateway
Bonus points: Use some form of authentication for the incoming request
to the API
Go ahead and have a go. I will wait here until you are done. I promise.
That last bonus point brings us on to a small elephant in the room. Sort of
Dumbo sized. I am talking about authentication of course. A lot of the
chapter has elegantly avoided the topic, but that doesn’t mean it isn’t part of
the story. However, for this part of your Azure journey, we focus on the three
services, and authentication in various forms is coming up in the book. We
aren’t avoiding or neglecting it, but it isn’t quite a part of the story just yet.
API Management is a very large service on Azure, and you have only got the
smallest of introductions to it in this chapter. At this point it is worth your
time to “press more buttons” in the service to get a better understanding of
what it does in more detail. Diving deeper into API Management is out of scope for this book; the goal was to give you a functional introduction to the service, and that has been achieved. We are almost at the end of the chapter, but first a slightly
contemplative piece to finish.
You are right that there are certain similarities between the two models, the main one being that they both save developers time. Both serverless and PaaS are
designed to remove yak shaving and just let developers get on with the job of
writing code and solving business problems. However, there are at least three
distinct differences between the two approaches as outlined in Table 7-5:
scaling, time to compute, and control.
Table 7-5: Comparing the main three differences between PaaS and serverless services.
Both serverless and PaaS have their strengths and weaknesses and, as always,
the important thing is that you use the right tool for the right job.
7.6 Summary
Serverless architecture is a spectrum ranging from using many servers to using fewer servers. There is no such thing as "serverless" where no
servers are used.
An API is the connection that allows two or more applications to
communicate with each other using a collection of defined endpoints.
An API has no graphical interface and is exclusively for pieces of
software to connect and communicate with each other in a managed
way.
Azure functions are units of compute that perform a single task.
Azure functions live within a function app, which defines the tier such
as premium or standard, as well as the compute power for the functions.
You can create an Azure function using various methods, including the
CLI and Visual Studio Code.
Every function must have exactly one trigger, which is the entry point
for the function. This is how the function starts every single time.
Triggers can be via HTTP, from a new blob file, a new queued message,
and many other ways.
A binding in a function is a connection to another Azure resource, such
as blob storage. Bindings are a shortcut to manipulating the connected
resource within the function at run-time.
Using VS Code to develop Azure functions will provide you with a
complete local environment, so you don’t accidentally publish anything
to production.
The three most common hosting methods for functions are consumption, premium, and an Azure App Service plan. Each has its advantages and drawbacks.
Logic apps can combine disparate third-party services into a single
workflow to provide very quick and valuable additions to your
infrastructure.
Every logic app has exactly one trigger, which starts it. These triggers can be anything from a new blob being inserted to a tweet being posted with a specific keyword.
Triggers in logic apps can be polling, using a simple HTTP call, or push, which uses a callback (webhook) from the trigger service.
Actions in logic apps manipulate the data from the trigger and provide the business value. Some workflows have one action, some have many.
API Management provides a serverless way to create an API in Azure
using existing services and resources. The services don't necessarily have to be hosted on Azure.
Serverless and PaaS architecture approaches are similar but have
differences, including scaling, time to compute and control over the
development environment.
8 Optimizing Storage
This chapter covers:
Creating a free static website hosted in your storage account
Managing data inside your storage account using AzCopy
Sharing access to data securely and with limitations
Using lifecycle management to save costs and optimize data use
Let me start this chapter by saying this: no, you aren’t going mad, this really
is another chapter on storage. While chapter 5 provided the fundamentals of
storage on Azure by delving into the storage accounts and the types of data
you can store there, we only scratched the surface of what you can do, and
what you should do. To use storage on Azure efficiently, we need to go beyond the foundations; they are important, but not enough on their own. It is like learning to ride a bike and never taking
the training wheels off. Yes, you can get along and eventually you will get
there, but it is slow and much harder to go around the obstacles efficiently.
In this chapter we dive into ways that you can save on cost, securely share
access to your data, keep track of data versions, manage data life cycles and
much more. These are all important parts of Azure storage usage and
maintenance. Join me as we visit the weird world of banned technical
literature again.
All these features are configured within the Azure storage account, which keeps maintenance simple and costs down. As you will see, Azure Storage has much more to offer than first meets the eye, and first up is setting up a website that requires almost no maintenance and is free. Yep, free.
8.2.1 AzCopy
We want to move the files from our local computer to their hosting destination in Azure Storage. We are going to do this using the tool AzCopy,
authenticate the commands with a Shared Access Signature (SAS) token,
then create a static website with a custom domain all hosted within Azure
Storage. It’ll be a work of wonder.
I have mentioned a few times before in this book that the Azure command
line interface, or CLI, is the preferred way to do a lot of tasks in Azure, as it
is more concise, rarely changes, and is much more efficient. Bear in mind that
automation is the ideal way to deploy anything in tech, as you remove the
“human error” element. For now though, we don’t cover automation, so the
CLI is the best “human” interface. Make sense? A commonly used CLI tool
for moving files is AzCopy, which is what we’ll use to copy the static
website files to Azure storage. If you haven’t set up AzCopy on your current
machine, see the appendices to get started. Once you have AzCopy ready to
go, the first step is to create a blob container in the storage account, which
will be the home for the static website. Open up your command shell of
choice, such as the Windows Command Shell (cmd.exe), and start typing in
this command, replacing the storage account name (banningstorage) with the
one you are using. Don’t press enter yet though.
C:/> azcopy make "https://banningstorage.blob.core.windows.net/$web"
Windows Command Shell Tip
If you're using a Windows Command Shell (cmd.exe), use double quotes ("")
for your path/URL instead of single quotes (''). Most other command shells
use a single quote for strings.
We are not quite done yet with the command. The keen reader may have
noticed that if this was a valid command, then anyone could create a
container in the storage account. We need to have authentication of the
command to not get into trouble. There are two ways of doing this in Azure,
either through Azure Active Directory (AAD) or using a shared access
signature (SAS) token. We aren’t covering AAD in this book, so in this case
we will use a SAS token, which will do the job of authenticating AzCopy
very nicely, just like one of those small umbrellas you get in your cocktail to
make it just right.
Before we create a SAS (Shared Access Signature) token, let’s answer the, I
assume, obvious question: “what exactly is a SAS token?” A SAS token is a
type of secure token that is used to grant access to specific Azure resources. It
can be used as an alternative to using a username and password to
authenticate requests to Azure resources, and provides a way to grant
granular, time-limited access to specific resources or resource groups. You
can grant very detailed authentication indeed.
Figure 8-3: The SAS URI consists of the resource URI and then the SAS token as a query
parameter.
Now you know what it looks like, but how do we generate the SAS token
itself? Open the Azure Portal, go to the storage account, in this case
banningstorage, and then to the Shared Access Signature menu item. That
will show a screen similar to Figure 8-4, which lays out all the parameters
you have for creating a SAS token.
Figure 8-4: Creating a new SAS token for the storage account using the Azure Portal
Let’s understand some of those parameters you can set for a SAS token.
Once you have chosen the parameters you want for your SAS token, press the
button that says Generate SAS and connection string, which will give you
three values as output just below the button.
Wow, it took a while to get to the point where you can press Enter on the command above, just to create a container for our website, but we are almost there. We can now take the SAS token part of the output from Figure 8-5 and append it to the command.
C:/> azcopy make "https://banningstorage.blob.core.windows.net/$web"
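With the SAS token appended as a query string, the command ends up looking something like this, where <sas-token> stands in for the token you generated.
C:/> rem replace <sas-token> with the SAS token generated in the portal
C:/> azcopy make "https://banningstorage.blob.core.windows.net/$web?<sas-token>"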
Run this in your command shell, and you should see an output saying
Successfully created the resource. We now have a container in blob storage called $web, which is where we will upload the Banning Books website. And finally, to upload the entire website to the storage account
using azcopy, we use this command.
C:/> azcopy copy "C:\Repos\Banning Books\banning-books" "https://banningstor
This command copies all the files from the first path C:\Repos\Banning
Books\banning-books to the second path, which is the full service URL from
Figure 8-5. You will need to have the files from either the Banning Books
website GitHub repository, or another static website of your choosing, in the
first path of the command. Finally, we do the copy action recursively with the
parameter --recursive=true, which means that any subfolders and files are
copied across too.
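Put together, the full command could look roughly like this; the wildcard copies the contents of the folder (rather than the folder itself) into the $web container, and <sas-token> is again a placeholder for your token.
C:/> rem copies the website files, including subfolders, into the $web container
C:/> azcopy copy "C:\Repos\Banning Books\banning-books\*" "https://banningstorage.blob.core.windows.net/$web?<sas-token>" --recursive=true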
Okay, time to execute!! Once the command has run (it is usually very quick)
you now have all the files for the website in your Azure storage account,
ready to become a website just like the cool files are.
Once you have your files in the $web folder it is time to activate this static
masterpiece! You will need two pieces of information to get it happening:
an error document, or page, which is what users will see when they
request a page that doesn’t exist, or something else goes wrong. If you
are using the Banning Books website provided, then that file is
error.html
an index file, which is the static website itself. If you are using the
Banning Books website provided, then that file is index.html
Bundling all that together we use this Azure CLI command to enable the
static website.
C:/> az storage blob service-properties update --account-name banningstorage
The storage and blob keywords denote that we are going to do something
with the blob storage. The service-properties update keywords then specify
that we want to update the service properties of the storage account, which is
where the static website feature lives. We then choose the static-website service property to explicitly address this feature of the Azure storage account, and finally we set the error page file name and the index (home page) file.
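Pieced together, the full command could look something like the sketch below. Depending on how you are authenticated, you may also need to supply credentials, for example a SAS token.
C:/> rem enables static website hosting and sets the index and error documents
C:/> az storage blob service-properties update --account-name banningstorage --static-website --404-document error.html --index-document index.html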
Once you successfully run the command, you will get a JSON output which
outlines some of the current service properties for the storage account. One of
the sections should read like this, confirming static website is enabled.
"staticWebsite": {
"defaultIndexDocumentPath": null,
"enabled": true,
"errorDocument_404Path": "error.html",
"indexDocument": "index.html"
},
To confirm the static website is up and running, let’s do two things. First,
check the Azure Portal to make sure the static website is running (Figure 8-6). Go to the static website blade of your storage account, and make sure it is enabled and that the index and error documents are configured.
Figure 8-6: Confirm the static website and documents have been configured correctly.
Second, let’s check that the website works, and to do that we need the URL to enter into the browser. There are two ways to find the URL, one being
looking it up in the Azure Portal, which is what Figure 8-6 shows as the
Primary endpoint, in this case
https://banningstorage.z22.web.core.windows.net/. Another way to find that
address is using the Azure CLI (of course). The following command will
query the storage account for any primary endpoints for the web, which is
another way of saying website address.
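A sketch of that query, using the banningstorage account from before:
C:/> rem prints the primary static website endpoint for the storage account
C:/> az storage account show --name banningstorage --query "primaryEndpoints.web" --output tsv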
How cool is that! Without having to pay any extra, Banning Books now has a
simple static website live on the internet, and they don’t have to worry about
hosting, traffic, security, or any of the other maintenance tasks that often
come with having your own website.
Let’s go through these steps together. I am not going to tell you a specific
company where you can buy a domain. Either choose one you know or use
your favorite search engine to search for a domain name provider.
Next, you need to create a CNAME, or Canonical Name, record for your
domain. This will point your new shiny domain to the static website domain
Azure provided above. The CNAME record management section location
will vary depending on which domain name provider you use. The important
part is that you enter a subdomain such as www or books, making the full
final domain www.banningbooks.net or books.banningbooks.net as shown in
Figure 8-8.
The third step is to connect the new domain and CNAME record to the Azure
storage and static website (Figure 8-9). On the networking menu for the
storage account in the Azure portal, go to the custom domain tab.
You may get an error accessing the website over http if the storage account is configured for https/SSL only. You can put "https://" in front of your custom domain, in which case the browser will warn you before letting you continue. For testing purposes this is fine, but for production it is advised to obtain an SSL certificate or use a service such as Azure CDN to secure the site traffic properly.
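Next up is protecting the data itself, starting with blob version tracking. A sketch of the command to enable versioning on the banningstorage account, assuming it sits in the BanningRG resource group, looks like this.
C:/> rem turns on blob versioning for all containers in the storage account
C:/> az storage account blob-service-properties update --account-name banningstorage --resource-group BanningRG --enable-versioning true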
There are two parts of this command that are new and which you should take
note of. First, we are updating the properties of the storage account using
blob-service-properties update which is how you update specific properties of
an existing Azure storage account. Second, we define which property we
want to update, in this case --enable-versioning, then set it to true. Once
this command is executed, versioning will be enabled for all blob containers
within the storage account.
After version tracking is enabled, you can interact with and manage blob
versions using various Azure CLI commands. For example, you can list all
versions of a specific blob by running:
C:/> az storage blob version list --account-name banningstorage --container-
Tracking is simple to set up and can get you out of trouble more than once.
This could be when someone accidentally deletes or modifies a file, and you
can quickly restore the file and its specific version. Perhaps you have an audit
request for detailed logs and metrics of all the operations on your data, or
maybe you have a corrupted file you want to quickly restore to a previous
version.
Let's shift our focus to another valuable feature that can further protect your
resources: implementing a Resource Management lock in Azure.
Figure 8-10: Resource locks can be placed on the resource group, storage account or individual
objects.
There are two types of ARM locks: 'CanNotDelete' and 'ReadOnly.'
To apply an ARM lock to the 'banningstorage' storage account, you can use
the Azure Command-Line Interface (CLI) using the following command.
C:/> az lock create --name banningLock --resource-group BanningRG --resource
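A sketch of the full command, assuming the CanNotDelete lock type:
C:/> rem prevents the storage account from being deleted, while still allowing changes
C:/> az lock create --name banningLock --resource-group BanningRG --lock-type CanNotDelete --resource-type Microsoft.Storage/storageAccounts --resource-name banningstorage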
To enable Soft Delete for the 'banningstorage' storage account use this CLI
command
C:/> az storage account blob-service-properties update --account-name bannin
Look familiar? We are using the same blob-service-properties update command to change the soft delete setting, or delete retention as the parameter is called. We also set the retention period to 7 days, which means any data that is deleted will be kept for 7 days before it is permanently removed.
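Written out in full, the command could look roughly like this.
C:/> rem keeps deleted blobs recoverable for 7 days
C:/> az storage account blob-service-properties update --account-name banningstorage --resource-group BanningRG --enable-delete-retention true --delete-retention-days 7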
If you do accidentally (or otherwise) delete a blob and need to restore it, you
can use the Azure CLI command:
C:/> az storage blob undelete --account-name banningstorage --container-name
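The full form could look something like this, where <container> and <blob-name> are placeholders for the container and file you want to bring back (plus whatever credentials you normally pass, such as a SAS token).
C:/> rem restores a soft-deleted blob; <container> and <blob-name> are placeholders
C:/> az storage blob undelete --account-name banningstorage --container-name <container> --name <blob-name>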
Figure 8-11: Azure storage object replication between two storage accounts in different regions.
Before we dive into the potential benefits of object replication, let’s configure
it in practice using the Azure CLI. As you might have guessed, we need a
second storage account as the destination for the replication, so let’s create
that first.
C:/> az storage account create --name banningstoragereplica --resource-group
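A sketch of that command; the location, SKU, and kind below are assumptions you should adjust to your own setup.
C:/> rem creates the destination account for replication in a region of your choice
C:/> az storage account create --name banningstoragereplica --resource-group BanningRG --location <region> --sku Standard_LRS --kind StorageV2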
The destination storage account then needs to have versioning enabled so the
replication knows if a version of a file is up to date or not.
C:/> az storage account blob-service-properties update --resource-group Bann
Then it is time to create the object replication policy which will manage the
replication. This involves specifying the source and destination storage
accounts and containers. Use this last CLI command to set up the replication
between Banning Book’s two storage accounts.
C:/> az storage account or-policy create --account-name banningstoragereplic
The or-policy command is short for object replication, and the rest of the
parameters define the source and destination storage accounts and containers.
One point worth noting is that the policy is created for the destination storage
account, not the source.
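Spelled out, the policy creation could look roughly like this, with <source-container> and <destination-container> as placeholders for the container names on each side.
C:/> rem the policy lives on the destination account and points back at the source
C:/> az storage account or-policy create --account-name banningstoragereplica --resource-group BanningRG --source-account banningstorage --destination-account banningstoragereplica --source-container <source-container> --destination-container <destination-container>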
Note:
It is possible to set up a policy for object replication across Azure tenants too, but you will need to add the relevant parameter, and you will need to supply a full resource ID such as /subscriptions/<subscriptionId>/resourceGroups/<resource-group>/providers/Microsoft.Storage/storageAccounts/<storage-account> rather than just the storage account name.
Now that object replication is set up for Banning Books, let’s explore for a
moment a couple of the immediate benefits it brings.
Figure 8-12: The trade-off: the access costs go up for the cool and archive tiers, while the storage costs go up for the cool and hot tiers.
To create a lifecycle management rule using the Azure CLI that transitions blobs to the cool tier if they haven't been modified in 30 days, moves them all the way to archive if they haven't been modified for 90 days, and deletes them after a year without modification, you can use the following policy represented in JSON.
{
"rules": [
{
"name": "TransitionHotToColdToArchive",
"enabled": true,
"type": "Lifecycle",
"definition": {
"actions": {
"baseBlob": {
"tierToCool": {
"daysAfterModificationGreaterThan": 30
},
"tierToArchive": {
"daysAfterModificationGreaterThan": 90
},
"delete": {
"daysAfterModificationGreaterThan": 365
}
}
},
"filters": {
"blobTypes": [
"blockBlob"
]
}
}
}
]
}
There is a lot going on here, but using JSON makes it very readable and well defined. In fact, almost all templates, policies, definitions, and anything else
on Azure can be defined in JSON. Save the JSON in a file called
policy.json. For this specific policy let’s go through each part of the
definition.
"rules": This is the top-level array that contains all the rules in the
policy. Each rule is represented as an object within the array. So, yes,
there can be more than one rule in the policy.
"name": "TransitionHotToColdToArchive": This is the unique name
assigned to the rule, which helps in identifying and managing the rule.
"enabled": true: This is a boolean value that indicates whether the rule
is active. If set to true, the rule will be executed by the Azure Storage
Lifecycle Management system. If set to false, the rule will be ignored.
"type": "Lifecycle": This specifies the type of rule. In this case, it's a
"Lifecycle" rule, which means it manages the lifecycle of the blobs based
on the specified actions.
"definition": This object contains the actions and filters that define the
rule's behavior.
"actions": This object contains actions to be performed on the baseBlob.
"baseBlob": This object contains the actions to be performed on the
base blobs, such as tier transitions and deletion.
"tierToCool": {"daysAfterModificationGreaterThan": 30}: This
action specifies that the blob should transition from the Hot tier to the
Cool tier if it hasn't been modified for more than 30 days.
"tierToArchive": {"daysAfterModificationGreaterThan": 90}: This
action specifies that the blob should transition from the Cool tier to the
Archive tier if it hasn't been modified for more than 90 days.
"delete": {"daysAfterModificationGreaterThan": 365}: This action
specifies that the blob should be deleted if it hasn't been modified for
more than 365 days.
"filters": This object contains filters that determine which blobs the rule
should apply to.
"blobTypes": ["blockBlob"]: This array specifies the types of blobs
the rule should apply to. In this case, the rule applies to "blockBlob"
type blobs.
You might ask, “don’t we need to define when the data is being moved up to hot again?” Not necessarily; if you enable last-access tracking, a lifecycle rule can move blobs from the cool tier back to hot automatically once you start using the files again, and blobs in the archive tier can be rehydrated on demand. Also, I added the delete part to the policy to show a full flow of lifecycle management. However, as Banning Books are unlikely to want to delete their books, this part can be removed.
With the policy defined, we now need to implement it using the Azure CLI with this command.
C:/> az storage account management-policy create --account-name banningstora
We are creating a management policy on the storage account, using the file we created before that defines the policy. Depending on which implementation of the Azure CLI you use (standalone, Cloud Shell in the Portal, etc.), you may need to upload the file to the CLI instance first and execute the command from there.
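For reference, the full command could look like this, using the policy.json file we just saved.
C:/> rem applies the lifecycle policy defined in policy.json to the storage account
C:/> az storage account management-policy create --account-name banningstorage --resource-group BanningRG --policy @policy.json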
8.5.2 Cleaning up
Even though you just saw that files can be deleted using a lifecycle
management policy, there are still more areas we can clean up and save
storage space, hence cold hard cash. Earlier we set up version tracking for the
blobs, but that means we are potentially keeping many more versions of a file
than we need to. Well, color me purple, there is a service for that too. In fact,
it is another lifecycle management policy defined in JSON.
{
"rules": [
{
"name": "CleanUpOldBlobVersions",
"enabled": true,
"type": "Lifecycle",
"definition": {
"actions": {
"version": {
"delete": {
"daysAfterCreationGreaterThan": 90
}
}
},
"filters": {
"blobTypes": [
"blockBlob"
]
}
}
}
]
}
You might recognize quite a few sections from the first lifecycle policy we created, so let’s go through the new parts. Instead of acting on the baseBlob, the actions object now targets "version", meaning previous versions of a blob, and its "delete" action uses "daysAfterCreationGreaterThan": 90, so any blob version older than 90 days is deleted.
See how much easier and less frightening it was the second time? Well, I
hope that was the case. While the word “policy” might make you fall asleep,
on Azure it is a word that saves you time and money. Policies are your
friends. Save the JSON into a file called deletepolicy.json, and then we
can almost reuse the CLI command from before.
C:/> az storage account management-policy create --account-name banningstora
However, when you do that, you overwrite the existing lifecycle policy we
created, so instead we need to combine the two JSON files into a single file.
This means copying everything inside the rules section and placing it into
the first file’s rules section.
{
"rules": [
{
"name": "TransitionHotToColdToArchive",
"enabled": true,
"type": "Lifecycle",
"definition": {
"actions": {
"baseBlob": {
"tierToCool": {
"daysAfterModificationGreaterThan": 30
},
"tierToArchive": {
"daysAfterModificationGreaterThan": 90
},
"delete": {
"daysAfterModificationGreaterThan": 365
}
}
},
"filters": {
"blobTypes": [
"blockBlob"
]
}
}
},
{
"name": "CleanUpOldBlobVersions",
"enabled": true,
"type": "Lifecycle",
"definition": {
"actions": {
"version": {
"delete": {
"daysAfterCreationGreaterThan": 90
}
}
},
"filters": {
"blobTypes": [
"blockBlob"
]
}
}
}
]
}
Save this JSON as policies.json, then use the CLI command from before
again with the new file.
C:/> az storage account management-policy create --account-name banningstora
Finally, to confirm we have the policies in place, let’s get the CLI to show the
current lifecycle management policies for the storage account.
C:/> az storage account management-policy show --account-name banningstorage
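Spelled out, those two commands could look like this.
C:/> rem applies the combined policy file, replacing the previous policy
C:/> az storage account management-policy create --account-name banningstorage --resource-group BanningRG --policy @policies.json
C:/> rem shows the lifecycle management policy currently applied to the account
C:/> az storage account management-policy show --account-name banningstorage --resource-group BanningRG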
That is it. Banning Books now has a much more efficient storage account on
Azure, getting the most value for their money with the least effort.
8.6 Summary
You can host a simple static website using Azure storage by uploading
CSS, HTML and JavaScript files to the $web folder.
AzCopy is a commonly used tool for moving files from on-premises to
Azure storage accounts, or between Azure storage accounts.
You can authenticate AzCopy commands with either Azure Active
Directory or using a Shared Access Signature (SAS) token.
A SAS token is a type of secure token that is used to grant access to
specific Azure resources. It can be used as an alternative to using a
username and password to authenticate requests to Azure resources, and
provides a way to grant granular, time-limited access to specific
resources or resource groups.
A SAS token can be configured and generated using the SAS token
menu for the storage account in the Azure portal.
Files for a static website on Azure storage must reside in the $web
folder.
Protecting data isn’t just to keep out unwanted people and processes, but
also to increase customer trust, protect intellectual property and ensure
business continuity.
With Blob version tracking enabled, each time a blob is modified or
deleted, Azure Blob Storage automatically creates and maintains a new
version of the blob.
Azure locks can be applied to an entire resource group, a storage
account, or an individual object, such as a blob.
There are two types of ARM locks: 'CanNotDelete' and 'ReadOnly.' The
'CanNotDelete' lock ensures that the locked resource cannot be deleted
but allows for modifications. The 'ReadOnly' lock prevents both deletion
and modification of the locked resource.
Versioning of the storage account has to be enabled to use object
replication.
Data transitions automatically between the hot, cool, and archive tiers according to your lifecycle management policies.
Use lifecycle management policies to ensure data is moved to the
appropriate tier or deleted to save costs and optimize the storage.
Policies are defined in JSON format, which also allows the Azure CLI to
implement them using plain text files.
Lifecycle management policies are your best friend when it comes to
managing versioning, data retention, object replication, and sanity.