Exam Ref 70-533 Implementing Microsoft Azure Infrastructure Solutions
Rick Rainey
Michael Washam
Dan Patrick
Steve Ross
Exam Ref 70-533 Implementing Microsoft Azure Infrastructure Solutions,
2nd Edition
Published with the authorization of Microsoft Corporation by:
Pearson Education, Inc.
Copyright © 2018 by Pearson Education
All rights reserved. Printed in the United States of America. This publication is
protected by copyright, and permission must be obtained from the publisher
prior to any prohibited reproduction, storage in a retrieval system, or
transmission in any form or by any means, electronic, mechanical,
photocopying, recording, or likewise. For information regarding permissions,
request forms, and the appropriate contacts within the Pearson Education Global
Rights & Permissions Department, please visit
www.pearsoned.com/permissions/. No patent liability is assumed with respect to
the use of the information contained herein. Although every precaution has been
taken in the preparation of this book, the publisher and author assume no
responsibility for errors or omissions. Nor is any liability assumed for damages
resulting from the use of the information contained herein.
ISBN-13: 978-1-5093-0648-0
ISBN-10: 1-5093-0648-X
Library of Congress Control Number: TK
Trademarks
Microsoft and the trademarks listed at https://www.microsoft.com on the
“Trademarks” webpage are trademarks of the Microsoft group of companies. All
other marks are property of their respective owners.
Warning and Disclaimer
Every effort has been made to make this book as complete and as accurate as
possible, but no warranty or fitness is implied. The information provided is on an
“as is” basis. The authors, the publisher, and Microsoft Corporation shall have
neither liability nor responsibility to any person or entity with respect to any loss
or damages arising from the information contained in this book or programs
accompanying it.
Special Sales
For information about buying this title in bulk quantities, or for special sales
opportunities (which may include electronic versions; custom cover designs; and
content particular to your business, training goals, marketing focus, or branding
interests), please contact our corporate sales department at
[email protected] or (800) 382-3419.
For government sales inquiries, please contact
[email protected].
For questions about sales outside the U.S., please contact [email protected].
Editor-in-Chief
Greg Wiegand
Senior Acquisitions Editor
Laura Norman
Development Editor
Troy Mott
Managing Editor
Sandra Schroeder
Senior Project Editor
Tracey Croom
Editorial Production
Backstop Media
Copy Editor
Christina Rudloff
Indexer
Julie Grady
Proofreader
Christina Rudloff
Technical Editor
Tim Warner
Cover Designer
Twist Creative, Seattle
Contents at a glance
Introduction
Preparing for the exam
CHAPTER 1 Design and implement Azure App Service Web Apps
CHAPTER 2 Create and manage Compute Resources
CHAPTER 3 Design and implement a storage strategy
CHAPTER 4 Implement Virtual Networks
CHAPTER 5 Design and deploy ARM templates
CHAPTER 6 Manage Azure Security and Recovery Services
CHAPTER 7 Manage Azure Operations
CHAPTER 8 Manage Azure Identities
Index
Contents
Introduction
Organization of this book
Microsoft certifications
Acknowledgments
Microsoft Virtual Academy
Quick access to online references
Errata, updates, & book support
We want to hear from you
Stay in touch
Preparing for the exam
Chapter 1 Design and implement Azure App Service Web Apps
Skill 1.1: Deploy web apps
Create an App Service Plan
Create a web app
Define deployment slots
Swap deployment slots
Deploy an application
Migrate a web app to separate App Service Plan
Skill 1.2: Configure web apps
Configuring application settings
Configure a custom domain for a web app
Configure SSL certificates
Configuring handler mappings
Configuring virtual applications and directories
Skill 1.3: Configure diagnostics, monitoring, and analytics
Enabling application and web server diagnostics
Retrieving diagnostic logs
Viewing streaming logs
Monitor web app resources
Monitor App Service Plan resources
Monitor availability, performance, and usage
Monitor Azure services
Configure backup
Skill 1.4: Configure web apps for scale and resilience
Scale up or down an app service plan
Scale app service instances manually
Scale app service instances using Autoscale
Configure Azure Traffic Manager
Thought experiment
Thought experiment answers
Chapter summary
Chapter 2 Create and manage Compute Resources
Skill 2.1: Deploy workloads on Azure Resource Manager (ARM) virtual
machines (VMs)
Identify and run workloads in VMs
Create virtual machines
Connecting to virtual machines
Skill 2.2: Perform configuration management
PowerShell Desired State Configuration
Using the custom script extension
Enable remote debugging
Skill 2.3: Design and implement VM Storage
Virtual machine storage overview
Operating system images
Virtual machine disk caching
Planning for storage capacity
Disk encryption
Using the Azure File Service
Skill 2.4: Monitor ARM VMs
Monitoring options in Azure
Configuring Azure diagnostics
Configuring alerts
Skill 2.5: Manage ARM VM availability
Configure availability zones
Configure availability sets
Skill 2.6: Scale ARM VMs
Change VM sizes
Deploy and configure VM scale sets (VMSS)
Skill 2.7: Manage containers with Azure Container Services (ACS)
Configure for open-source tooling
Create and manage container images
Implement Azure Container Registry
Deploy a Kubernetes cluster in ACS
Manage containers with Azure Container Services (ACS)
Scale applications using Docker Swarm, DC/OS, or Kubernetes
Migrate container workloads to and from Azure
Monitor Kubernetes by using Microsoft Operations Management Suite
(OMS)
Thought experiment
Thought experiment answers
Chapter summary
Chapter 3 Design and implement a storage strategy
Skill 3.1: Implement Azure Storage blobs and files
Manage blob storage
Using the async blob copy service
Configuring the Content Delivery Network
Configuring custom domains for storage and CDN
Skill 3.2: Manage access
Manage storage account keys
Creating, and using, shared access signatures
Using a stored access policy
Virtual Network Service Endpoints
Skill 3.3: Configure diagnostics, monitoring, and analytics
Configuring Azure Storage Diagnostics
Analyzing diagnostic data
Enabling monitoring and alerts
Skill 3.4: Implement storage encryption
Encrypt data using Azure Storage Service Encryption (SSE)
Implement encryption and role based access control with Azure Data
Lake Store
Thought experiment
Thought experiment answers
Chapter summary
Chapter 4 Implement Virtual Networks
Skill 4.1: Configure Virtual Networks
Create a Virtual Network (VNet)
Design subnets
Gateway subnets
Setup DNS at the Virtual Network level
User Defined Routes (UDRs)
Connect VNets using VNet peering
Implement Application Gateway
Skill 4.2: Design and implement multi-site or hybrid network connectivity
Choose the appropriate solution between ExpressRoute, Site-to-Site
and Point-to-Site
Choose the appropriate gateway
Identify network prerequisites
Implement Virtual Network peering service chaining
Configure Virtual Network and Multi-Site Virtual Networks
Skill 4.3: Configure ARM VM Networking
Configure Private Static IP Addresses
Public IP Address
DNS at the Network Interface (NIC) Level
Network Security Groups (NSGs)
User Defined Routes (UDR) with IP Forwarding
External and Internal load balancing with HTTP and TCP health
probes
Direct Server Return
Design and Implement Application Gateway (App Gateway)
Skill 4.4: Design and implement a communication strategy
Leverage Site-to-Site (S2S) VPN to connect to an on-premises
infrastructure
Implement Hybrid Connections to access data sources on-premises
Thought experiment
Thought experiment answers
Chapter summary
Chapter 5 Design and deploy ARM templates
Skill 5.1: Implement ARM templates
Author ARM templates
Deploy an ARM template
Skill 5.2: Control access
Leverage service principals with ARM authentication
Set management policies
Lock resources
Skill 5.3: Design role-based access control (RBAC)
Implement Azure RBAC standard roles
Design Azure RBAC custom roles
Thought experiment
Thought experiment answers
Chapter summary
Chapter 6 Manage Azure Security and Recovery Services
Skill 6.1: Manage data protection and security compliance
Create and import encryption keys with Key Vault
Automate tasks for SSL/TLS Certificates
Prevent and respond to security threats with Azure Security Center
Configure single sign-on with SaaS applications using federation and
password based
Add users and groups to applications
Revoke access to SaaS applications
Configure federation with public consumer identity providers such as
Facebook and Google
Skill 6.2: Implement recovery services
Create a Recovery Services vault
Backup and restore data
Use of snapshots
Geo-replication for recovery
Implement DR as service, Deploy ASR agent, ASR Configuration &
best practices
Thought experiment
Thought experiment answer
Chapter summary
Chapter 7 Manage Azure Operations
Skill 7.1: Enhance cloud management with automation
Implement PowerShell runbooks
Integrate Azure Automation with Web Apps
Create and manage PowerShell Desired State Configurations (DSC)
Import DSC resources
Generate DSC node configurations
Monitor and automatically update machine configurations with Azure
Automation DSC
Skill 7.2: Collect and analyze data generated by resources in cloud and on-
premises environments
Collect and search across data sources from multiple systems
Build custom visualizations
Visualize Azure resources across multiple subscriptions
Transform Azure activity data and managed resource data into an
insight with flexible search queries
Monitor system updates and malware status
Track server configuration changes by using Azure Log Analytics
Thought experiment
Thought experiment answers
Chapter summary
Chapter 8 Manage Azure Identities
Skill 8.1: Monitor On-Premises Identity Infrastructure and Synchronization
Services with Azure AD Connect Health
Monitor Sync Engine & Replication
Monitor Domain Controllers
Setup Email Notifications for Critical Alerts
Monitor ADFS proxy and Web Application proxy Servers
Generate Utilization Reports
Skill 8.2: Manage Domains with Active Directory Domain Services
Implement Azure AD Domain Services
Join Azure virtual machines to a Domain
Securely Administer Domain-joined virtual machines by using Group
Policy
Migrate On-premises Apps to Azure
Handle Traditional Directory-aware Apps along with SaaS Apps
Skill 8.3: Integrate with Azure Active Directory (Azure AD)
Add Custom Domains
Implement Azure AD Connect and Single Sign-on with On-premises
Windows Server
Multi-Factor Authentication (MFA)
Configure Windows 10 with Azure AD Domain Join
Implement Azure AD Integration in Web and Desktop Applications
Leverage Microsoft Graph API
Skill 8.4: Implement Azure AD B2C and Azure AD B2B
Create an Azure AD B2C Directory
Register an Application
Implement Social Identity Provider Authentication
Enable Multi-Factor Authentication (MFA)
Set up Self-Service Password Reset
Implement B2B Collaboration and Configure Partner Users
Integrate with Applications
Thought experiment
Thought experiment answers
Chapter summary
Microsoft certifications
Microsoft certifications distinguish you by proving your command of a broad set
of skills and experience with current Microsoft products and technologies. The
exams and corresponding certifications are developed to validate your mastery
of critical competencies as you design and develop, or implement and support,
solutions with Microsoft products and technologies both on-premises and in the
cloud. Certification brings a variety of benefits to the individual and to
employers and organizations.
Acknowledgments
Rick Rainey. It is a privilege to be a contributing author to such a valuable
resource for the IT professional working in Azure. To the reader, it is my hope
that the information in this text provides a rich learning experience. Thank you
to everyone who has contributed to this second edition. To my family and
dearest friends, thank you for your patience and support during this journey.
Michael Washam. Helping others learn about the cloud is always a great
experience, and I hope this second edition helps readers learn more about Azure,
and of course ultimately help them prepare for passing the exam! I would like to
thank my wife Becky for being very patient with me when I take on projects like
this, and my co-authors for making this book excellent by passing on their
immense technical expertise. In addition to the tech gurus, I would like to thank
James Burleson at Opsgility for editing assistance and the rest of the folks at the
Opsgility team for being patient during the authoring and editing process.
Finally, the editors and reviewers from Pearson provided fantastic support and
feedback throughout the process.
Dan Patrick. Writing this book has taught me much more than probably anyone
who reads it will ever learn. To Michael Washam, thank you for taking a chance
on me. Finally, I want to thank my two girls Stella and Elizabeth, and the love of
my life Michelle for being the reason why I continue to learn.
Steve Ross. This was my first foray into authoring a book, and it was quite a
learning experience. Many thanks to Michael Washam and Dan Patrick for
patiently answering a lot of questions during the process. I’m also thankful for
the folks who accomplished the editing and technical reviews, as the work is
much improved due to their diligence. Endeavors like this, done “off the side of
the desk” require a lot of afterhours work, so thanks to my beautiful wife and
kids for putting up with my late nights in the office. Finally, I’m thankful to God
for a wonderful career in IT, and for many other kindnesses too numerous to
mention. May He be honored in all I do.
Stay in touch
Let’s keep the conversation going! We’re on Twitter:
http://twitter.com/MicrosoftPress.
Preparing for the exam
Microsoft certification exams are a great way to build your resume and let the
world know about your level of expertise. Certification exams validate your on-
the-job experience and product knowledge. Although there is no substitute for
on-the-job experience, preparation through study and hands-on practice can help
you prepare for the exam. We recommend that you augment your exam
preparation plan by using a combination of available study materials and
courses. For example, you might use this Exam Ref and another study guide for
your “at home” preparation, and take a Microsoft Official Curriculum course for
the classroom experience. Choose the combination that you think works best for
you.
Note that this Exam Ref is based on publicly available information about the
exam and the authors’ experience. To safeguard the integrity of the exam,
authors do not have access to the live exam.
CHAPTER 1
Design and implement Azure App Service Web Apps
Microsoft Azure Web Apps is a fully managed Platform as a Service (PaaS) that
enables you to build, deploy, and scale enterprise-grade web applications in
seconds. Whether your organization requires a global web presence for the
organization’s .com site, a solution to a line-of-business intranet application that
is secure and highly available, or a site for a digital marketing campaign, Web
Apps is the fastest way to create these web applications in Azure. Of all the
Azure compute options, Web Apps is among the simplest to implement for
scalability and manageability, and for capitalizing on the elasticity of cloud
computing.
This chapter covers Azure Web Apps through the lens of an IT professional
responsible for deploying, configuring, monitoring, and managing Web Apps.
As such, the tools that will be used to demonstrate these functions will be as
follows:
Azure Portal
Azure PowerShell Cmdlets, v4.2.1
Azure CLI 2.0, v2.0.12
EXAM TIP
The Basic, Standard, Premium, and Isolated tiers offer you dedicated compute
resources. The Free and Shared tiers use shared compute resources with other
Azure tenants. Furthermore, with Free and Shared, you are throttled to not
exceed certain limits, such as CPU time, RAM consumption, and network
bandwidth. More information on limits, quotas and constraints is available at
https://docs.microsoft.com/en-us/azure/azure-subscription-service-limits.
Within the Basic, Standard, Premium, and Isolated tiers, you have three types
of plans to choose from that vary only in their capacity, such as the number of
cores and amount of RAM. As an example, the three types of plans in the
Premium tier are shown in Figure 1-1.
FIGURE 1-1 Premium tier options for web apps as shown in the Azure portal
You can create a new app service plan when you create a new web app using
the Azure portal. Or, you can create an app service plan first and then use it later
when creating one or more web apps.
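Both paths can be scripted with Azure PowerShell. Below is a minimal sketch that creates a plan first and then a web app on it; the names and location are placeholders, not values from this chapter’s scenario.
# Create a Standard-tier App Service plan, then a web app that runs on it
New-AzureRmAppServicePlan -ResourceGroupName "contoso" -Name "contoso-plan" `
    -Location "West US" -Tier Standard -NumberofWorkers 1 -WorkerSize Small
New-AzureRmWebApp -ResourceGroupName "contoso" -Name "contoso-hr-app" `
    -Location "West US" -AppServicePlan "contoso-plan"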
EXAM TIP
The Isolated pricing tier is only available for App Service Environments
(ASEs). An ASE is a feature of Azure App Service that provides a dedicated and
fully isolated environment for running app service apps (web apps, API apps,
and mobile apps). Because ASEs provide dedicated resources, they are ideal
for scenarios where you have very high scale and/or memory requirements.
A unique feature of ASEs is that they are deployed inside a subnet in a
virtual network. This is how isolation is achieved. Being contained within a
virtual network, ASEs can be protected using network security controls such
as Network Security Groups (NSGs) with inbound and outbound traffic rules.
Because ASEs are always deployed in a virtual network, they are ideal for
scenarios where you have increased security requirements or when you want
to use a virtual appliance for packet inspection, firewall protection, and other
network security controls.
ASEs can be internet-facing through one or more public IP addresses. These
are called External ASEs. An example where you may have multiple IP
addresses is when your app service is hosting multiple web apps, and you
need separate public IP addresses for each.
ASEs can be strictly internal (not internet-facing), where web apps are
deployed behind an internal load balancer (ILB). These are called ILB ASEs
and are generally intended for scenarios where your virtual network is
connected to an on-premises environment using either a site-to-site VPN or
an ExpressRoute connection.
Finally, because ASEs are bound to a virtual network, there are some things
you need to consider regarding your virtual network configuration, inbound
and outbound traffic rules, and portal dependencies. Information on these
additional considerations is available at: https://docs.microsoft.com/en-us/azure/appservice/appservice-environment/network-info.
FIGURE 1-2 New App Service Plan blade in the Azure portal
EXAM TIP
Docker containers are used to support Linux on web apps. There are built-in
Docker images to support various versions of Node.js, PHP, .NET Core, and
Ruby runtime stacks. You can also choose to use runtime stacks from Docker
Hub or a private registry of images, if your organization has one.
If you select Linux for your operating system type, you must select an app
service plan that was configured for Linux, or create a new app service plan
with Linux support.
FIGURE 1-4 Example of how deployment slots can be used for different
environments
EXAM TIP
Adding additional deployment slots to an Azure Web App requires that the
App Service Plan it is running on be configured for Standard or Premium
pricing tier. You can add up to 10 deployment slots in Standard. You can add
up to 20 deployment slots in Premium.
To create a new deployment slot that clones an existing deployment slot, use the
Get-AzureRmWebAppSlot cmdlet to get a reference to the slot you want to
clone. Then, pass it in using the SourceWebApp parameter of the New-
AzureRmWebAppSlot cmdlet. The code below creates a new deployment slot
that clones the production deployment slot.
$resourceGroupName = "contoso"
$appServicePlanName = "contoso"
$webAppName = "contoso-hr-app"
$stagingSlotName = "Staging"
$productionSlotName = "Production"
resourceGroupName="contoso"
webAppName="contoso-hr-app"
stagingSlotName="Staging"
FIGURE 1-7 Swapping deployment slots and reviewing warnings in the Azure
portal
When you are ready to proceed with the swap, click the OK button at the
bottom of the Swap blade. Since this is a multi-phase swap, the configuration
settings have been updated in the source (Staging) slot. Now, you should do
some final testing and validation of the application before proceeding to the
second phase.
After validating the application, you need to complete the swap. To do so,
open the Deployment slots blade and click the Complete swap button at the top
of the blade. In the Swap blade, set the Swap action to Complete swap if you
want to proceed with the swap. Or, set it to Cancel Swap if you want to cancel.
Finally, click OK to complete the second phase, as shown in Figure 1-8.
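With Azure PowerShell, a single-phase swap of the same two slots can be performed with the Switch-AzureRmWebAppSlot cmdlet; a minimal sketch, reusing the names from this section:
# Swap the Staging slot into Production in a single phase
Switch-AzureRmWebAppSlot -ResourceGroupName "contoso" -Name "contoso-hr-app" `
    -SourceSlotName "Staging" -DestinationSlotName "Production"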
resourceGroupName="contoso"
webAppName="contoso-hr-app"
stagingSlotName="Staging"
productionSlotName="Production"
swapAction="swap"
Deploy an application
Deploying an application to an Azure web app is the process by which the web
application (or code) is copied to one of the deployment slots, usually a test or
staging slot. A web app can be published using a variety of tools, such as the
following:
Source control systems are often used in a continuous delivery (or
deployment) model where the application is deployed as code changes are
checked into the source control system
FTP clients, such as FileZilla
Azure PowerShell
Web Deploy
Visual Studio
Local Git
GitHub
Visual Studio Team Services
Azure Resource Manager Template (using the MSDeploy web app
extension)
More…
EXAM TIP
If you have multiple deployment slots defined for your web app, then you
must migrate each slot separately to a different App Service Plan. Migrating
one deployment slot to a separate App Service Plan does not automatically
migrate all of the deployment slots.
EXAM TIP
A web app can be migrated to an App Service Plan in the same resource
group and region as the existing App Service Plan it is linked to. If you need
to move a web app to an App Service Plan in a different region and/or
resource group, then you can choose the Clone App option under the
Development tools section. The Clone App feature is only available on the
Premium pricing tier.
EXAM TIP
The value for a connection string defined as a site setting can be retrieved at
runtime by referencing the name of the environment variable for the setting. The
name of the environment variable is a combination of a constant string based on
the type of database connection string plus the name of the key. The constant
strings are as follows:
SQLAZURECONNSTR_
SQLCONNSTR_
MYSQLCONNSTR_
CUSTOMCONNSTR_
Using the example from earlier, the environment variable name for the
ContosoDBConnStr connection string is
SQLAZURECONNSTR_ContosoDBConnStr.
Similarly, the value for an application setting defined as a site setting can also
be retrieved at runtime by referencing the name of the environment variable for
the setting. The constant string for application settings is APPSETTING_. As an
example, if an application setting key is defined as ContosoHRWebServiceURL,
then the environment variable name for the setting is APPSETTING_
ContosoHRWebServiceURL.
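For example, in PowerShell these two site settings surface as environment variables that can be read directly; a minimal sketch using the names from this example:
# Read the connection string and app setting defined as site settings above
$dbConnStr  = $env:SQLAZURECONNSTR_ContosoDBConnStr
$serviceUrl = $env:APPSETTING_ContosoHRWebServiceURL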
EXAM TIP
By default, app settings and connection strings are swapped when performing
a deployment swap. However, there may be cases where you want a setting to
stick to the deployment slot it is defined in and not be swapped. This is achieved
by marking an app setting or connection string as a slot setting, as shown in
Figure 1-10. In this example, SettingX will not be swapped to another
deployment slot during a swap operation because it has been marked as a slot
setting.
FIGURE 1-10 Slot settings for a web app in the Azure portal
resourceGroupName="contoso"
webAppName="contoso-hr-app"
If you use an A record, then Azure requires that you first add a TXT record
mapped to the web app’s hostname to verify that you own the domain. Table 1-2
illustrates how the A record and TXT record are defined for the custom domain
contoso.com.
TABLE 1-2 Example DNS records when using A records to configure a custom
domain
RECORD TYPE NAME VALUE
A @ 13.85.15.194
TXT @ Contos0-web.azurewebsites.net
EXAM TIP
The TXT record is only used when using an A record to configure a custom
domain. This record can be deleted after you have finished configuring your
custom domain.
If you use CNAME records, then your DNS records only indicate the custom
domain and the Azure Web App URL (or hostname) it maps to. It is also
possible to map subdomains. Table 1-3 shows an example of how a CNAME
record is defined for a custom domain contoso.com.
TABLE 1-3 Example DNS record when using CNAME records to configure a
custom domain
EXAM TIP
SSL support for an Azure Web App with a custom domain is not provided in
the Free and Shared pricing tiers of App Service Plans.
A certificate authority must sign your SSL certificate, and the certificate must
adhere to the following requirements:
The certificate contains a private key.
The certificate must be created for key exchange that can be exported to a
Personal Information Exchange (.pfx) file.
The certificate’s subject name must match the custom domain. If you have
multiple custom domains for your website, the certificate will need to be
either a wildcard certificate or have a subject alternative name (SAN).
The certificate should use 2048-bit (or higher) encryption.
There are two methods for configuring an SSL certificate. One option is to
create an App Service Certificate. Another is to obtain an SSL certificate from a
third party.
EXAM TIP
After creating the app service certificate, you must perform the following
three steps:
1. Store the certificate in Azure Key Vault.
2. Verify domain ownership. This refers to ownership of the host name you
specified when creating the app service certificate.
3. Import the certificate into your web app and add SSL bindings, which can
be SNI SSL or IP-based SSL.
EXAM TIP
An app service certificate can only be used with other app services (web apps,
API apps, mobile apps) in the same subscription. It also must be stored in an
Azure Key Vault instance.
After the SSL certificate has been uploaded, the last step in the process is to
configure the SSL bindings. Azure Web Apps support Server Name Indication
(SNI) SSL and the traditional IP-based SSL. You can configure the SSL
bindings in the Azure portal in the SSL Certificates blade referenced earlier in
Figure 1-13. For each binding you must specify the following:
The custom domain name
The SSL certificate to use for the custom domain
Select either SNI SSL or IP-based SSL
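The binding can also be scripted with Azure PowerShell; a minimal sketch, assuming the certificate has already been uploaded (the domain and thumbprint are placeholders):
# Bind an uploaded certificate to a custom domain using SNI SSL
New-AzureRmWebAppSSLBinding -ResourceGroupName "contoso" -WebAppName "contos0-web" `
    -Name "www.contoso.com" -Thumbprint "<certificate-thumbprint>" -SslState SniEnabled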
EXAM TIP
If you choose IP-based SSL for your SSL binding and your custom domain is
configured using an A record, Azure will assign a new dedicated IP address to
your website. This is a different IP address than what you previously used to
configure the A record. Therefore, you must update the A record with your
DNS registrar using the new virtual IP address. The virtual IP address can be
found in the management portal by clicking the Properties part of the Website
blade.
resourceGroupName="contoso"
webAppName="contos0-web"
TABLE 1-4 Diagnostic log file locations on the file system for an Azure website
EXAM TIP
Every Azure web app gets the Site Control Manager site extension installed
by default. There is nothing you have to do to enable it.
The URL is the same as the URL for your web app, with scm added immediately
after the website name (for example, https://<your site name>.scm.azurewebsites.net).
Figure 1-16 is an example of what the Site
Control Manager (SCM) home page looks like for a web app running on a
Windows App Service Plan. For a web app running on a Linux app service plan,
the SCM features are not as rich.
FIGURE 1-16 The home page of the site control manager “Kudu” extension
Using Site Control Manager, select the Debug Console and then select the
CMD option. This opens a debug console (Bash or SSH for Linux App Service
Plans) that you can type directly into or use the navigation above. As you click
the navigation links, the console will update to your current directory. Figure 1-
17 shows the contents of the LogFiles folder.
FIGURE 1-17 The debug console in Site Control Manager and viewing the
LogFiles folder
Using Site Control Manager, you can download an entire folder or individual
files by clicking the download link to the left of the directory or file name.
resourceGroupName="contoso"
webAppName="contos0-web"
EXAM TIP
The streaming log service is available for application diagnostic logs and web
server logs only. Failed request logs and detailed error messages are not
available via the log-streaming service.
FIGURE 1-19 Average Response Time metric with CPU Time added to the
graph in the Azure portal
FIGURE 1-20 App Service Plan CPU and memory percentage metrics in the
Azure portal
MORE INFO AZURE WEB APP AND APP SERVICE PLAN
METRICS
For a complete list of metrics available for web apps and app service plans
see: https://docs.microsoft.com/en-us/azure/appservice-web/websites-monitor/.
To get started, create a new Application Insights resource in the Azure portal.
Search for Application Insights in the Azure Marketplace. Provide a friendly
name for the Application Insights resource and select an application type. If your
application developer included the Application Insights SDK in the application
code, then you can get deeper insights by selecting an application type matching
the framework/code that the developer used. If you don’t have this information,
you can select a General application type. Figure 1-21 illustrates creating a new
Application Insights resource in the Azure portal.
FIGURE 1-21 Creating a new Application Insights resource in the Azure
portal
FIGURE 1-22 Create a new Application Insights availability test in the Azure
portal
The other type of availability test is a Multi-step web test. This type of test
requires that you create a Web Performance and Load Test Project in Visual
Studio, configure the tests, and then upload the test file to Azure.
Availability tests run at configured intervals (the default is five minutes) across all of
the test locations you configured. As a result, it will take several minutes before
you start to see data from your availability test. Figure 1-23 shows the
availability blade for a simple URL Ping Test. Each green dot represents a
successful test from a test location. You can also click on the green dots to see
details about that specific test.
FIGURE 1-23 Availability test summary for a web app configured with a URL
Ping Test
When you create an availability test you can also create an alert. The default
alert for the URL Ping Test triggers an alert if three of the test locations fail
within a five minute window. If this happens, then subscription administrators
get an automated email about the alert.
Three types of alerts are available as follows:
Metric Alert Triggered when some metric crosses a threshold value for a
sustained period of time.
Web Tests Alert Triggered when a site becomes unavailable or has
response times exceeding a threshold value.
Proactive Diagnostic Alert Warns you of potential application failure,
application performance, and app service issues.
You can add new alerts from the Application Insights blade. Scroll down to
the Configure section and click Alerts to open the Alerts blade.
MORE INFO CONFIGURING ALERTS IN APPLICATION
INSIGHTS
Further guidance on the types of alerts, how to configure them, and when to
use them is available at: https://docs.microsoft.com/en-us/azure/application-insights/app-insights-alerts.
Configure backup
Having a solid backup plan is a best practice for any application running on-
premises or in the cloud. Azure Web Apps offer an easy and effective backup
service that can be used to back up your web app configuration, file content, and
even databases that your application has a dependency on.
To configure backup for a web app, you must have an Azure Storage account
and a container where you want the backups stored. Open the web app blade and
click on Backups under the Settings section.
Backups can be invoked on-demand or based on a schedule that you define
when configuring the backup. When configuring a backup schedule, you can also
specify a retention period for the backup.
EXAM TIP
The storage account used to back up a web app must belong to the same
Azure subscription that the web app belongs to.
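An on-demand backup can also be triggered with Azure PowerShell; a minimal sketch, assuming a SAS URL for the storage container has already been generated (the URL and backup name are placeholders):
# Back up the web app to the storage container identified by the SAS URL
New-AzureRmWebAppBackup -ResourceGroupName "contoso" -Name "contos0-web" `
    -StorageAccountUrl "<container-sas-url>" -BackupName "manual-backup-1"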
FIGURE 1-25 Restore web app from backup in the Azure portal
FIGURE 1-26 Scale up option under Settings in the App service plan blade
Selecting this option opens up the pricing tier blade, where you can scale up
or down your App Service Plan. It is also possible to choose an entirely different
pricing tier, such as changing from a Standard tier to a Premium tier.
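Scaling up can also be scripted; a minimal sketch using Set-AzureRmAppServicePlan, with the plan name assumed from earlier examples:
# Move the app service plan to the Premium tier with a medium worker size
Set-AzureRmAppServicePlan -ResourceGroupName "contoso" -Name "contoso" `
    -Tier Premium -WorkerSize Medium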
Next, click the +Add a rule link to define a new scale rule. The first property
in a scale rule you need to set is the Metric Source, which defaults to the App
Service Plan resource. This means the metrics you can base your scale rule on
are those provided by the App Service Plan, such as CPU Percentage. However,
you can choose a different metric source that will provide different metrics to
base your rule on. For example, you may want to define a rule based on a metric
in an Application Insights resource, such as Server response time.
Next, choose the metric to base your rule on, configure the criteria for the
metric, and the scale action to execute when the criteria are met. Figure 1-29
shows a scale rule based on CPU Percentage, whereby the instance count will
increase by one when the CPU percentage across all instances exceeds 70% for a
period of 10 minutes or more.
FIGURE 1-29 Scale rule based on CPU Percentage
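A comparable rule can be created with the AzureRM Insights cmdlets; a minimal sketch, assuming the subscription ID in the plan’s resource ID is filled in:
# Scale out by one instance when average CPU exceeds 70% over 10 minutes
$planId = "/subscriptions/<subscription-id>/resourceGroups/contoso/providers/" +
    "Microsoft.Web/serverFarms/contoso"
$rule = New-AzureRmAutoscaleRule -MetricName "CpuPercentage" `
    -MetricResourceId $planId -Operator GreaterThan -MetricStatistic Average `
    -Threshold 70 -TimeGrain 00:01:00 -TimeWindow 00:10:00 `
    -ScaleActionDirection Increase -ScaleActionScaleType ChangeCount `
    -ScaleActionValue 1 -ScaleActionCooldown 00:05:00
$asProfile = New-AzureRmAutoscaleProfile -Name "cpu-profile" -DefaultCapacity 1 `
    -MinimumCapacity 1 -MaximumCapacity 5 -Rule $rule
Add-AzureRmAutoscaleSetting -Name "contoso-autoscale" -ResourceGroupName "contoso" `
    -Location "West US" -TargetResourceId $planId -AutoscaleProfile $asProfile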
When the schedule criteria are not met, such as in the evenings and on
weekends, the default scale condition created in the previous section is applied.
This would have the effect of scaling back down to one instance if the CPU
percentage stays below 70%.
To leverage the features of Azure Traffic Manager, you should have two or
more deployments of your web app. The deployments can be in the same region
or spread across multiple regions around the world.
For Azure Traffic Manager to determine the health of your web app endpoints
(deployments) you need to provide configuration settings so Azure Traffic
Manager can query your endpoints to determine if an endpoint should be taken
out of the rotation. The configuration settings consist of the following:
Protocol This can be HTTP, HTTPS, or TCP.
Port Defaults to standard HTTP and HTTPS ports, such as 80 or 443. You
may choose to use a different port to separate normal web traffic from
endpoint monitoring traffic.
Path This is the path in the application that the monitoring service will
perform an HTTP GET request against (if using HTTP/S). This can be the
root of the application, such as “/”. Or, it could be a specific health check
page the application may make available, such as Healthcheck.aspx.
Probing interval Determines how frequently Azure probes your endpoint.
Tolerated number of failures Number of times a health probe can fail
before the endpoint is considered to be down/unavailable.
Probe timeout Timeout period for a probe request. Must be smaller than
the probing interval setting.
To create a Traffic Manager profile using Azure PowerShell, use the New-
AzureRmTrafficManagerProfile cmdlet. For example, the code below creates a
profile named contoso-public with a domain name of contoso-public-
tm.trafficmanager.net, Performance routing method, and TCP monitoring on
port 8082.
# Properties for the traffic manager profile
$tmName = "contoso-public"
$tmDnsName = "contoso-public-tm"
$ttl = 300
$monitorProtocol = "TCP"
$monitorPort = 8082
# Completing the truncated listing with the cmdlet named in the text above
# (assumes $resourceGroupName is already defined)
$tmProfile = New-AzureRmTrafficManagerProfile -Name $tmName `
    -ResourceGroupName $resourceGroupName `
    -RelativeDnsName $tmDnsName -Ttl $ttl `
    -TrafficRoutingMethod Performance `
    -MonitorProtocol $monitorProtocol -MonitorPort $monitorPort
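An endpoint must be added to the profile before it can be removed or disabled as shown below; a minimal sketch using New-AzureRmTrafficManagerEndpoint, assuming the web app name is a placeholder:
# Add an Azure web app as a Traffic Manager endpoint
$newTmEndpointName = "contoso-westus"
$webApp = Get-AzureRmWebApp -ResourceGroupName $resourceGroupName -Name "contoso-hr-app"
New-AzureRmTrafficManagerEndpoint -Name $newTmEndpointName `
    -ProfileName $tmProfile.Name -ResourceGroupName $resourceGroupName `
    -Type AzureEndpoints -TargetResourceId $webApp.Id -EndpointStatus Enabled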
Remove-AzureRmTrafficManagerEndpoint -ResourceGroupName $resourceGroupName `
    -ProfileName $tmProfile.Name `
    -Name $newTmEndpointName -Type AzureEndpoints -Force

Disable-AzureRmTrafficManagerEndpoint -ResourceGroupName $resourceGroupName `
    -ProfileName $tmProfile.Name `
    -Name $newTmEndpointName -Type AzureEndpoints -Force
Update DNS records for your custom domain
The last step to configuring Azure Traffic Manager is to update your custom
domain to point to the Azure Traffic Manager DNS name using a CNAME
record. As an example, assume your custom domain is contoso.com and your
Azure Traffic Manager DNS name is contoso-web-tm.trafficmanager.net. Table
1-5 shows how the CNAME record should be configured in this scenario.
TABLE 1-5 Example DNS record for a custom domain and an Azure Traffic
Manager DNS name
Thought experiment
In this thought experiment, apply what you have learned about this chapter. You
can find answers to these questions in the next section.
You are the IT Administrator for Contoso and responsible for managing the
Contoso website. The public-facing website today is running in a Shared mode
hosting plan. The development team is about to release a new version of the web
app that will require 3.5 GB of memory to perform as designed.
app that will require 3.5 GB of memory to perform as designed.
As part of this new release, the marketing team is planning to run some
television commercials and radio ads to notify new and existing customers of the
new features and services offered by Contoso. The expectation is that these ads
will generate large demand spikes as they are run during the campaign.
Due to the frequency of changes, the business has indicated that the web
application should be backed up 3 times per day (once every 8 hours).
You need to provide a solution to meet the resource requirements for the new
website version and to support the traffic spikes expected during the marketing
campaign.
1. How will you scale the app service plan to meet the new resource
requirements?
2. How will you configure the web app to support the traffic spikes during
the marketing campaign?
Chapter summary
Azure web apps are created under the *.azurewebsites.net shared domain.
Adding deployment slots to a web app requires that the app service plan it
runs on be in the Standard, Premium, or Isolated pricing tier.
App Service Environment is a feature of App Service that provides a virtual
network for your app service application (web app for example) to run in.
This allows you to use virtual network security controls such as network
security groups, virtual appliances, user defined routes, etc. to protect the
web app.
A web app has an implied production deployment slot by default. You can
add additional deployment slots if your app service plan is standard,
premium, or isolated.
When creating new deployment slots, you can clone the configuration
settings from another deployment slot or create a new deployment slot with
empty settings.
When swapping deployment slots, you can perform a single-phase swap or
a multi-phase swap. The latter is recommended for mission critical
workloads.
An app service plan defines the capacity (cores, memory, or storage) and
features available to the web apps running on it.
Migrating web apps to a different app service plan requires that the target
app service plan be in the same region and resource group as the source app
service plan.
Web App Application Settings is where you can configure language
versions for .NET Framework, PHP, Java, and Python. This is also where
you can set the site to run in 32-bit or 64-bit mode, enable web sockets and
the Always-On feature, and configure handler mappings.
Application settings and connection strings can be defined in the web app
infrastructure and retrieved at application runtime using environment
variables.
To configure a custom domain, you must first add an A record and/or
CNAME record using your DNS registrar. For A records, you must also
add a TXT record to verify domain ownership.
Azure Web Apps support Server Name Indication (SNI) SSL and IP-based
SSL for web apps with a custom domain.
App Service Certificate SKU can be either Standard or Wild Card.
You can obtain an SSL certificate using App Service Certificate or
separately through a 3rd party.
An App Service Certificate can only be used by other app services in the
same subscription.
When you configure an SSL certificate for a web app, users can still access
the web app using HTTP. To force HTTPS-only traffic, a rewrite rule must
be defined in the application’s configuration file (or code).
Azure Web Apps provides two categories of diagnostic logs: application
diagnostics and web server diagnostics. There are three web server
diagnostic log files available: Web Server, Detailed Errors, and Failed
Request.
When enabling application diagnostic logging, you must specify the
logging level, which can be Error, Warning, Information, or Verbose.
Monitor web app resources when you need to monitor just the web app.
Monitor app service plan resources when you need to monitor metrics such
as compute and memory that your web apps are running on.
Site Control Manager (Kudu) is a web app extension and is installed by
default for every web app. It is accessible at https://<your site
name>.scm.azurewebsites.net. To authenticate, sign-in using the same
credentials you use to sign-in to the Azure portal.
Use Application Insights to monitor performance metrics, availability, and
how users are using the application.
Use Application Insights and client-side JavaScript to capture client-side
telemetry data such as page load times, outbound AJAX call duration, and
browser exceptions.
Availability tests enable you to test the availability of your web app from
multiple regions and monitor the duration of the tests.
Use alerts to notify subscription administrators or others of potential
performance, availability, or Azure service issues.
Azure Web App backups can be used to back up web app configuration, file
content, and databases.
Azure Web App backups can be performed on-demand or on a schedule.
Azure Web App backups can be restored over the existing web app or to a
different web app environment.
Scaling up an app service plan within a pricing tier increases the number of
cores and RAM.
Scaling out an app service plan increases the number of instances.
Autoscale can be configured to scale on a schedule or based on a metric.
The metric source of a metric-based scale rule determines the metrics
available.
Traffic Manager provides support for the following routing methods:
Performance, Weighted, Priority, and Geographic.
Traffic Manager is not a load balancer. It is an advanced DNS query
resolver that resolves DNS queries based on the traffic manager profile
settings.
CHAPTER 2
Create and manage Compute Resources
Microsoft Azure offers many features and services that can be used to create
inventive solutions for almost any IT problem. Two of the most common
services for designing these solutions are Microsoft Azure Virtual Machines
(VM) and VM Scale Sets. Virtual machines are one of the key compute options
for deploying workloads in Microsoft Azure. Virtual machines can provide the
on-ramp for migrating workloads from on-premises (or other cloud providers) to
Azure, because they are usually the most compatible with existing solutions. The
flexibility of virtual machines makes them a key scenario for many workloads.
For example, you have a choice of server operating systems, with various
supported versions of Windows and Linux distributions. Azure virtual machines
also provide you full control over the operating system, along with advanced
configuration options for networking and storage. VM Scale Sets provide similar
capabilities to VMs, and provide the ability to scale out certain types of
workloads to handle large processing problems, or to just optimize cost by only
running instances when needed. The third option covered in this chapter is
Azure Container Service (ACS). Azure Container Service optimizes the
configuration of popular open source tools and technologies, specifically for
Azure. ACS provides a solution that offers portability for both container-based
workloads and application configuration. You select the size, number of hosts,
and choice of orchestrator tools and ACS handles everything else.
Distribution               Version
CentOS                     CentOS 6.3+, 7.0+
CoreOS                     494.4.0+
Debian                     Debian 7.9+, 8.2+
Oracle Linux               6.4+, 7.0+
Red Hat Enterprise Linux   RHEL 6.7+, 7.1+
SLES/SLES for SAP          SUSE Linux Enterprise 11 SP4, 12 SP1+
openSUSE                   openSUSE Leap 42.1+
Ubuntu                     Ubuntu 12.04, 14.04, 16.04, 16.10
This list is updated as new versions and distributions are on-boarded and can
be accessed online at https://docs.microsoft.com/en-us/azure/virtual-machines/linux/endorsed-distros.
You can also bring your own custom version of Linux if you deploy the
Microsoft Azure Linux agent to it. You should be aware that the Microsoft
Azure support team offers various levels of support for open source technologies
including custom distributions of Linux. For more details see
https://support.microsoft.com/en-us/help/2941892/support-for-linux-and-open-source-technology-in-azure.
Running Linux on Microsoft Azure Virtual Machines requires an additional
piece of software known as the Microsoft Azure Linux Agent (waagent). This
software agent provides much of the base functionality for provisioning and
communicating with the Azure Fabric Controller including the following:
Image provisioning
Creation of a user account
Configuring SSH authentication types
Deployment of SSH public keys and key pairs
Setting the host name
Publishing the host name to the platform DNS
Reporting SSH host key fingerprint to the platform
Resource disk management
Formatting and mounting the resource disk
Configuring swap space
Networking
Manages routes to improve compatibility with platform DHCP servers
Ensures the stability of the network interface name
Kernel
Configures virtual NUMA (disable for kernel <2.6.37)
Consumes Hyper-V entropy for /dev/random
Configures SCSI timeouts for the root device (which could be remote)
Diagnostics
Console redirection to the serial port
SCVMM deployments
Detects and bootstraps the VMM agent for Linux when running in a System
Center Virtual Machine Manager 2012 R2 environment
Manages virtual machine extensions to inject components authored by
Microsoft and partners into Linux VMs (IaaS) to enable software and
configuration automation
VM Extension reference implementation at https://github.com/Azure/azure-linux-extensions
The Azure Fabric Controller communicates to this agent in two ways:
A boot-time attached DVD for IaaS deployments. This DVD includes an
OVF-compliant configuration file that includes all provisioning information
other than the actual SSH keypairs.
A TCP endpoint exposing a REST API used to obtain deployment and
topology configuration.
You can specify an existing SSH public key or a password when creating a
Linux VM. If the SSH public key option is selected you must paste in the public
key for your SSH certificate. You can create the SSH certificate using the
following command:
ssh-keygen -t rsa -b 2048
To retrieve the public key for your new certificate, run the following
command:
cat ~/.ssh/id_rsa.pub
From there, copy all the data starting with ssh-rsa and ending with the last
character on the screen and paste it into the SSH public key box, as shown in
Figure 2-3. Ensure you don’t include any extra spaces.
FIGURE 2-3 The Basics blade of the portal creation process for a Linux-based
virtual machine
After setting the basic configuration for a virtual machine you then specify the
virtual machine size, as shown in Figure 2-4. The portal gives you the option of
filtering the available instance sizes by specifying the minimum number of
virtual CPUs (vCPUs) and the minimum amount of memory, as well as whether
the instance size supports solid state disks (SSD) or only traditional hard disk
drives (HDD).
FIGURE 2-4 Setting the size of the virtual machine
The Settings blade, shown in Figure 2-5, allows you to set the following
configuration options:
Whether the virtual machine is part of an availability set
Whether to use managed or unmanaged disks
What virtual network and subnet the network interface should use
What public IP (if any) should be used
What network security group (if any) should be used (you can specify new
rules here as well)
FIGURE 2-5 Specifying virtual machine configuration settings
The last step to create a virtual machine using the Azure portal is to read
through and agree to the terms of use and click the purchase button, as shown in
Figure 2-6. From there, the portal performs some initial validation of your
template, as well as checks many of the resources against policies in place on the
subscription and resource group you are targeting. If there are no validation
errors the template is deployed.
FIGURE 2-6 Accepting the terms of use and purchasing
EXAM TIP
The link next to the purchase button allows you to download an Azure
Resource Manager template and parameters file for the virtual machine you
just configured in the portal. You can customize this template and use it for
future automated deployments.
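The downloaded template can later be deployed with Azure PowerShell; a minimal sketch (the file names are placeholders):
# Deploy the exported template and its parameters file into a resource group
New-AzureRmResourceGroupDeployment -ResourceGroupName "Contoso" `
    -TemplateFile ".\template.json" -TemplateParameterFile ".\parameters.json"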
A virtual machine and all its related resources such as network interfaces,
disks, and so on must be created inside of an Azure Resource Group. Using
PowerShell, you can create a new resource group with the New-
AzureRmResourceGroup cmdlet.
$rgName = "Contoso"
$location = "West US"
New-AzureRmResourceGroup -Name $rgName -Location $location
This cmdlet requires you to specify the resource group name, and the name of
the Azure region. These values are defined in the variables $rgName and
$location. You can use the Get-AzureRmResourceGroup cmdlet to see if the
resource group already exists or not, and you can use the Get-AzureRmLocation
cmdlet to view the list of available regions.
Azure virtual machines must be created inside of a virtual network. Like the
portal, using PowerShell, you can specify an existing virtual network or you can
create a new one. In the code example below, the New-
AzureRmVirtualNetworkSubnetConfig cmdlet is used to create two local objects
that represent two subnets in the virtual network. The virtual network is actually
created within the call to New-AzureRmVirtualNetwork. It is passed in the
address space of 10.0.0.0/16, and you could also pass in multiple address spaces
similar to how the subnets were passed in using an array.
$subnets = @()
$subnet1Name = "Subnet1"
$subnet2Name = "Subnet2"
$subnet1AddressPrefix = "10.0.0.0/24"
$subnet2AddressPrefix = "10.0.1.0/24"
$vnetAddressSpace = "10.0.0.0/16"
$VNETName = "ExamRefVNET-PS"
$subnets += New-AzureRmVirtualNetworkSubnetConfig -Name $subnet1Name `
    -AddressPrefix $subnet1AddressPrefix
$subnets += New-AzureRmVirtualNetworkSubnetConfig -Name $subnet2Name `
    -AddressPrefix $subnet2AddressPrefix
$vnet = New-AzureRmVirtualNetwork -Name $VNETName `
    -ResourceGroupName $rgName `
    -Location $location `
    -AddressPrefix $vnetAddressSpace `
    -Subnet $subnets
Virtual Machines store their virtual hard disk (VHD) files in an Azure storage
account. If you are using managed disks (see more in Skill 2.3) Azure manages
the storage account for you. This example uses unmanaged disks so the code
creates a new storage account to contain the VHD files. You can use an existing
storage account for storage or create a new storage account. The PowerShell
cmdlet Get-AzureRmStorageAccount returns an existing storage account. To
create a new one, use the New-AzureRmStorageAccount cmdlet, as the
following example shows.
$saName = "examrefstoragew123123"
$storageAcc = New-AzureRmStorageAccount -ResourceGroupName $rgName
`
-Name $saName `
-Location $location `
-SkuName Standard_LRS
$blobEndpoint = $storageAcc.PrimaryEndpoints.Blob.ToString()
Now that the public IP and the network security group are created, use the
New-AzureRmNetworkInterface cmdlet to create the network interface for the
VM. This cmdlet accepts the unique ID for the subnet, public IP, and the
network security group for configuration.
$nicName = "ExamRefVM-NIC"
$nic = New-AzureRmNetworkInterface -Name $nicName `
-ResourceGroupName $rgName `
-Location $location `
-SubnetId $vnet.Subnets[0].Id `
-PublicIpAddressId $pip.Id `
-NetworkSecurityGroupId $nsg.ID
Now that all the resources are created that the virtual machine requires, use
the New-AzureRmVMConfig cmdlet to instantiate a local configuration object
that represents a virtual machine to associate them together. The virtual
machine’s size and the availability set are specified during this call.
$vmSize = "Standard_DS1_V2"
$vm = New-AzureRmVMConfig -VMName $vmName -VMSize $vmSize `
-AvailabilitySetId $avSet.Id
After the virtual machine configuration object is created there are several
configuration options that must be set. This example shows how to set the
operating system and the credentials using the Set-
AzureRmVMOperatingSystem cmdlet. The operating system is specified by
using either the Windows or the Linux parameter. The ProvisionVMAgent
parameter tells Azure to automatically install the VM agent on the virtual
machine when it is provisioned. The Credential parameter specifies the local
administrator username and password with the values passed to the $cred object.
$cred = Get-Credential
Set-AzureRmVMOperatingSystem -Windows `
-ComputerName $vmName `
-Credential $cred `
-ProvisionVMAgent `
-VM $vm
The operating system image (or existing VHD) must be specified for the VM
to boot. Setting the image is accomplished by calling the Set-
AzureRmVMSourceImage cmdlet and specifying the Image publisher, offer, and
SKU. These values can be retrieved by calling the cmdlets Get-
AzureRmVMImagePublisher, Get-AzureRmVMImageOffer, and Get-
AzureRmVMImageSku.
$pubName = "MicrosoftWindowsServer"
$offerName = "WindowsServer"
$skuName = "2016-Datacenter"
Set-AzureRmVMSourceImage -PublisherName $pubName `
-Offer $offerName `
-Skus $skuName `
-Version "latest" `
-VM $vm
$osDiskName = "ExamRefVM-osdisk"
$osDiskUri = $blobEndpoint + "vhds/" + $osDiskName + ".vhd"
Set-AzureRmVMOSDisk -Name $osDiskName `
-VhdUri $osDiskUri `
-CreateOption fromImage `
-VM $vm
The final step is to provision the virtual machine by calling the New-
AzureRmVM cmdlet. This cmdlet requires you to specify the resource
group name to create the virtual machine in and the virtual machine
configuration, which is in the $vm variable.
New-AzureRmVM -ResourceGroupName $rgName -Location $location -VM $vm
Create a new resource group by executing the az group create command and
specifying a unique name and the region.
#!/bin/bash
rgName="Contoso"
location="WestUS"
az group create --name $rgName --location $location
The following command can be used to identify available regions that you can
create resources and resource groups in.
az account list-locations
From here you have two options. You can create a virtual machine with a very
simple syntax that generates much of the underlying configuration for you such
as a virtual network, public IP address, storage account, and so on, or you can
create and configure each resource and link to the virtual machine at creation
time. Here is an example of the syntax to create a simple stand-alone virtual
machine:
# Creating a simple virtual machine
vmName="myUbuntuVM"
imageName="UbuntuLTS"
az vm create --resource-group $rgName --name $vmName \
    --image $imageName --generate-ssh-keys
To create all the resources from scratch, as shown in the section on creating a
virtual machine using the PowerShell cmdlets, you can start with the virtual
network. Use the az network vnet create command to create the virtual
network. This command requires the name of the virtual network, a list of
address prefixes, and the location to create the virtual network in.
vnetName="ExamRefVNET-CLI"
vnetAddressPrefix="10.0.0.0/16"
az network vnet create --resource-group $rgName -n ExamRefVNET-CLI
--address-prefixes $vnetAddressPrefix -l $location
After the network security group is created, use the az network nsg rule create
command to add rules. In this example, the rule allows inbound connections on
port 22 for SSH; another rule can be created the same way to allow in HTTP on port 80.
# Create a rule to allow in SSH
az network nsg rule create -n SSH --nsg-name $nsgName --priority 100 -g $rgName \
    --access Allow --description "SSH Access" --direction Inbound --protocol Tcp \
    --destination-address-prefix "*" --destination-port-range 22 \
    --source-address-prefix "*" --source-port-range "*"
The network interface for the virtual machine is created using the az network
nic create command.
nicname="WebVMNic1"
az network nic create -n $nicname -g $rgName --subnet $Subnet1Name \
    --network-security-group $nsgName --vnet-name $vnetName \
    --public-ip-address $ipName -l $location
To create a virtual machine, you must specify whether it will boot from a
custom image, a marketplace image, or an existing VHD. You can retrieve a list
of marketplace images by executing the following command:
az vm image list
The command az image list is used to retrieve any of your own custom
images you have captured.
Another important piece of metadata needed to create a virtual machine is the
VM size. You can retrieve the available form factors that can be created in each
region by executing the following command:
az vm list-sizes --location $location
The last step is to use the az vm create command to create the virtual
machine. This command allows you to pass the name of the availability set, the
virtual machine size, the image the virtual machine should boot from, and other
configuration data such as the username and password, and the storage
configuration.
imageName="Canonical:UbuntuServer:17.04:latest"
vmSize="Standard_DS1_V2"
containerName=vhds
user=demouser
vmName="WebVM"
osDiskName="WEBVM1-OSDISK.vhd"
az vm create -n $vmName -g $rgName -l $location --size $vmSize \
    --availability-set $avSetName --nics $nicname --image $imageName \
    --use-unmanaged-disk --os-disk-name $osDiskName \
    --storage-account $storageAccountName --storage-container-name $containerName \
    --generate-ssh-keys
The Azure Cloud Shell, shown in Figure 2-7, is a feature of the Azure portal
that provides access to an Azure command line (CLI or PowerShell) using the
credentials you are already logged in with, without the need to install
additional tools on your computer.
EXAM TIP
Stopping a virtual machine from the Azure portal, Windows PowerShell with
the Stop-AzureRmVM cmdlet, or the az vm deallocate command puts
the virtual machine in the Stopped (deallocated) state (az vm stop puts the
VM in the Stopped state). It is important to understand the difference between
Stopped (deallocated) and just Stopped. In the Stopped state a virtual machine
is still allocated in Azure, and the operating system is simply shut down. You
will still be billed for the compute time for a virtual machine in this state. A
virtual machine in the Stopped (deallocated) state is no longer occupying
physical hardware in the Azure region, and you will not be billed for the
compute time (you are still billed for the underlying storage).
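To make the distinction concrete, here is a minimal PowerShell sketch
(assuming $rgName and $vmName are already defined) showing how the
StayProvisioned switch of Stop-AzureRmVM controls which state the VM lands in:
# Stopped (deallocated): the VM leaves the physical hardware and compute billing stops
Stop-AzureRmVM -ResourceGroupName $rgName -Name $vmName -Force
# Stopped (still allocated): the operating system shuts down but compute billing continues
Stop-AzureRmVM -ResourceGroupName $rgName -Name $vmName -StayProvisioned -Force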
From there, you have the option to build your own template using the portal’s
editor (you can paste your own template in or upload from a file using this
option too), or choose from one of the most common templates. Last of all, you
can search the existing samples in the quickstart samples repository and choose
one of them as a starting point. Figure 2-9 shows the various options after
clicking the template deployment search result.
FIGURE 2-9 Options for configuring a template deployment
Choosing one of the common template links opens the next screen, which
gives you the options for deploying the template. A template deployment
requires you to specify a subscription and resource group, along with any
parameters that the template requires. In Figure 2-10, the Admin Username,
Admin Password, DNS Label Prefix, and Windows operating system version
values are all parameters defined in the template.
FIGURE 2-10 Deploying a template
Clicking the Edit Template button opens the editor shown in Figure 2-11,
where you can continue modifying the template. On the left navigation, you can
see the parameters section that defines the four parameters shown in the previous
screen, as well as the resource list, which defines the resources that the template
will create. In this example, the template defines a storage account, public IP
address, virtual network, network interface, and the virtual machine.
The editor also allows you to download the template as a JavaScript Object
Notation (.json) file for further modification or for deployment using an
alternative method.
The Edit Parameters button allows you to edit a JSON view of the parameters
for the template, as shown in Figure 2-12. This file can also be downloaded and
is used to provide different behaviors for the template at deployment time
without modifying the entire template.
FIGURE 2-12 Editing template parameters using the Azure portal
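As a rough illustration, a parameters file for a template like this one might
look like the following sketch; the parameter names here are illustrative and
must match the names defined in the template's parameters section.
{
    "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentParameters.json#",
    "contentVersion": "1.0.0.0",
    "parameters": {
        "adminUsername": { "value": "demouser" },
        "adminPassword": { "value": "[password]" },
        "dnsLabelPrefix": { "value": "examrefvm" },
        "windowsOSVersion": { "value": "2016-Datacenter" }
    }
}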
The last step to creating an ARM template using the portal is to click the
Purchase button after reviewing and agreeing to the terms and conditions on the
screen.
The Azure command line tools can also deploy ARM templates. The template
files can be located locally on your file system or accessed via HTTP/HTTPS.
Common deployment models have the templates deployed into a source code
repository or an Azure storage account to make it easy for others to deploy the
template.
This example uses the Azure PowerShell cmdlets to create a new resource
group, specify the location, and then deploy a template by specifying the URL
from the Azure QuickStart GitHub repository.
# Create a Resource Group
$rgName = "Contoso"
$location = "WestUs"
New-AzureRmResourceGroup -Name $rgName -Location $location
$deploymentName = "simpleVMDeployment"
$templateUri = "https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/101-vm-simple-windows/azuredeploy.json"
New-AzureRmResourceGroupDeployment -Name $deploymentName `
    -ResourceGroupName $rgName `
    -TemplateUri $templateUri
If the template requires parameters without default values, the cmdlet will
prompt you to input their values.
The following example uses the Azure CLI tools to accomplish the same task.
#!/bin/bash
# Create the resource group
rgName="Contoso"
location="WestUS"
az group create --name $rgName --location $location
# Deploy the specified template to the resource group
deploymentName="simpleVMDeployment"
templateUri="https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/101-vm-simple-linux/azuredeploy.json"
az group deployment create --name $deploymentName --resource-group $rgName --template-uri $templateUri
EXAM TIP
You can launch a remote desktop session from Windows PowerShell by using
the Get-AzureRmRemoteDesktopFile cmdlet. The Get-
AzureRmRemoteDesktopFile cmdlet performs the same validation as the Azure
portal. The API it calls validates that a public IP address is associated with the
virtual machine’s network interface. If a public IP exists, it generates an .rdp file
consumable with a Remote Desktop client. The .rdp file has the public IP address
(VIP) and public port (3389) of the virtual machine embedded in it.
There are two parameters that alter the behavior of what happens with the
generated file.
Use the Launch parameter to retrieve the .rdp file and immediately open it
with a Remote Desktop client. The following example launches the Mstsc.exe
(Remote Desktop client), and the client prompts you to initiate the connection.
Get-AzureRmRemoteDesktopFile -ResourceGroupName $rgName -Name $vmName -Launch
Use the following command to connect to a Linux VM using the SSH bash
client.
ssh username@ipaddress
If the virtual machine is configured for password access, SSH then prompts
you for the password of the user you specified. If you specified the public key
of an SSH key pair during the creation of the virtual machine, SSH attempts to
use the matching private key from the ~/.ssh folder.
There are many SSH options for users on a Windows machine. For
example, if you install the Windows Subsystem for Linux on Windows 10, you will
also install an SSH client that can be accessed from the bash command line. You can
also install one of many GUI-based SSH clients like PuTTY. The Azure Cloud
Shell shown in Figure 2-15 also provides an SSH client. So regardless of which
operating system you are on, if you have a modern browser and can access the
Azure portal you can connect to your Linux VMs.
FIGURE 2-15 Connecting to a Linux VM using SSH from within the Azure
Cloud Shell
            DestinationPath = "C:\inetpub\wwwroot"
            DependsOn = "[WindowsFeature]IIS"
        }
        Archive ArchiveExample
        {
            Ensure = "Present"
            Path = "C:\inetpub\wwwroot\website.zip"
            Destination = "C:\inetpub\wwwroot"
            DependsOn = "[xRemoteFile]WebContent"
        }
    }
}
Before the DSC script can be applied to a virtual machine, you must use the
Publish-AzureRmVMDscConfiguration cmdlet to package the script into a .zip
file. This cmdlet also imports any dependent DSC modules, such as
xPSDesiredStateConfiguration, into the .zip.
Publish-AzureRmVMDscConfiguration -ConfigurationPath .\ContosoWeb.ps1 `
    -OutputArchivePath .\ContosoWeb.zip
The DSC configuration can then be applied to a virtual machine in several
ways such as using the Azure portal, as shown in Figure 2-16.
The Configuration Modules Or Script field expects the .zip file created by the
call to Publish-AzureRmVMDscConfiguration. The Module-Qualified Name
Of Configuration field expects the name of the script file (with the .ps1
extension) concatenated with the name of the configuration in the script, which
in the example shown in Figure 2-17 is ContosoWeb.ps1\Main.
FIGURE 2-17 Configuring a VM extension
The previous examples apply the PowerShell DSC configuration only when
the extension is executed. If the configuration of the virtual machine changes
after the extension is applied, the configuration can drift from the state defined in
the DSC configuration. The Azure Automation DSC service allows you to
manage all your DSC configurations, resources, and target nodes from the Azure
portal or from PowerShell. It also provides a built-in pull server so your virtual
machines will automatically check on a scheduled basis for new configuration
changes, or to compare the current configuration against the desired state and
update accordingly.
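A virtual machine can be onboarded to Azure Automation DSC from
PowerShell. The following is a minimal sketch; it assumes an Automation
account named ExamRefAutomation already exists and that a node configuration
named Main.localhost has been compiled, and both names are placeholders.
# Register the VM as a DSC node that pulls and auto-corrects its configuration
Register-AzureRmAutomationDscNode -ResourceGroupName $rgName `
    -AutomationAccountName "ExamRefAutomation" `
    -AzureVMName $vmName `
    -NodeConfigurationName "Main.localhost" `
    -ConfigurationMode ApplyAndAutocorrect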
The following code example shows how this script can be applied to an Azure
Virtual Machine named LinuxWebServer in the ExamRefRGCLI resource
group.
rgName="ExamRefRGCLI"
vmName="LinuxWebServer"
scriptName="CustomScript"
az vm extension set --resource-group $rgName --vm-name $vmName \
    --name $scriptName --publisher Microsoft.Azure.Extensions \
    --settings ./cseconfig.json
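The contents of cseconfig.json are not shown in this example. A minimal
sketch, mirroring the settings object used by the custom script extension in the
ARM template later in this section (the storage account URL is a placeholder),
would look like the following.
{
    "fileUris": ["https://examplestorageaccount.blob.core.windows.net/scripts/apache.sh"],
    "commandToExecute": "sh apache.sh"
}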
EXAM TIP
There are many other ways of configuring and executing the custom script
extension using the Azure CLI tools. The following article has several
relevant examples that might be used in an exam, which you can find at
https://docs.microsoft.com/en-us/azure/virtual-machines/linux/extensions-
customscript.
Like the PowerShell DSC extension, the custom script extension can be added
to the resources section of an Azure Resource Manager template. The following
example shows how to execute the same script using an ARM template instead
of the CLI tools.
{
    "name": "apache",
    "type": "extensions",
    "location": "[resourceGroup().location]",
    "apiVersion": "2015-06-15",
    "dependsOn": [
        "[concat('Microsoft.Compute/virtualMachines/', parameters('scriptextensionName'))]"
    ],
    "tags": {
        "displayName": "installApache"
    },
    "properties": {
        "publisher": "Microsoft.Azure.Extensions",
        "type": "CustomScript",
        "typeHandlerVersion": "2.0",
        "autoUpgradeMinorVersion": true,
        "settings": {
            "fileUris": [
                "https://examplestorageaccount.blob.core.windows.net/scripts/apache.sh"
            ],
            "commandToExecute": "sh apache.sh"
        }
    }
}
This step opens the necessary port on the network security group for your
virtual machine, and then enables the virtual machine extension for remote
debugging. After both tasks have completed, right-click the virtual machine once
more and click the attach debugger option, as shown in Figure 2-20.
FIGURE 2-20 Attaching the debugger
Visual Studio will prompt you to attach the process on the virtual machine to
debug, as shown in Figure 2-21. Select the process and click the Attach button.
You are then able to set one or more breakpoints in the application and debug
the problem directly on the offending virtual machine.
FIGURE 2-21 Selecting the process to debug
Azure disks
Azure VMs use three types of disks:
Operating System Disk (OS Disk) The C drive in Windows or /dev/sda on
Linux. This disk is registered as a SATA drive and has a maximum
capacity of 2048 gigabytes (GB). This disk is persistent and is stored in
Azure storage.
Temporary Disk The D drive in Windows or /dev/sdb on Linux. This disk is
used for short-term storage for applications or the system. Data on this drive
can be lost during a maintenance event, or if the VM is moved to a
different host, because the data is stored on the local disk.
Data Disk Registered as a SCSI drive. These disks can be attached to a
virtual machine; the number allowed depends on the VM instance size.
Data disks have a maximum capacity of 4095 gigabytes (GB). These disks
are persistent and stored in Azure Storage.
There are two types of disks in Azure: Managed or Unmanaged.
Unmanaged disks With unmanaged disks, you are responsible for the
correct distribution of your VM disks across storage accounts, both for
capacity planning and for availability. An unmanaged disk is also not a
separate manageable entity, which means that you cannot take advantage of
features like role-based access control (RBAC) or resource locks at the disk
level.
Managed disks Managed disks handle storage for you by automatically
distributing your disks across storage accounts for capacity, and by integrating
with Azure availability sets to provide isolation for your storage just like
availability sets do for virtual machines. Managed disks also make it easy
to change between Standard and Premium storage (HDD to SSD) without
the need to write conversion scripts.
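As a quick illustration of the difference, a managed disk is created as a
first-class resource with its own cmdlets. The following is a minimal sketch;
on older AzureRM module versions the SkuName parameter is named AccountType,
and the disk name is a placeholder.
# Create an empty 128 GB managed disk on Premium (SSD) storage
$diskConfig = New-AzureRmDiskConfig -Location $location `
    -DiskSizeGB 128 -SkuName Premium_LRS -CreateOption Empty
$dataDisk = New-AzureRmDisk -ResourceGroupName $rgName `
    -DiskName "ExamRefDataDisk" -Disk $diskConfig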
After the VM is generalized, you then deallocate the VM, set its status to
generalized, and then use the Save-AzureRmVMImage cmdlet to capture the
VM (including operating system disks) into a container in the same storage
account. This cmdlet saves the disk configuration (including URIs to the VHDs)
in a .json file on your local file system.
$containerName = "vmimage"
$vhdPrefix = "img"
$localPath = "C:\Local\ImageConfig"
Save-AzureRmVMImage -ResourceGroupName $rgName -Name $vmName `
    -DestinationContainerName $containerName -VHDNamePrefix $vhdPrefix `
    -Path $localPath
Using the CLI tools, specify the URI using the image parameter of the az vm
create command.
az vm create --resource-group $rgName --name $vmName --image $osDiskUri --generate-ssh-keys
To create a VM from a managed image with PowerShell, you first retrieve the
image ID and pass it to Set-AzureRmVMSourceImage instead.
$image = Get-AzureRmImage -ImageName $imageName -ResourceGroupName $rgName
$vmConfig = Set-AzureRmVMSourceImage -VM $vmConfig -Id $image.Id
Using the CLI tools saves a step because the tooling retrieves the image ID for
you; you just need to specify the name of your managed image.
az vm create -g $rgName -n $vmName --image $imageName
EXAM TIP
Using the Azure CLI, there are two commands to use depending on whether
the virtual machine disk is unmanaged or managed. Also, the host cache
setting can only be specified when attaching a disk: use az vm
unmanaged-disk attach for unmanaged disks, or az vm disk attach for
managed disks, and specify the caching parameter. This means you would need to
detach and then reattach an existing VHD to modify the cache setting, or you can
specify it during the creation of a new disk, as the following example demonstrates.
rgName="StorageRG"
vmName="StandardVM"
diskName="ManagedDisk"
az vm disk attach --vm-name $vmName --resource-group $rgName \
    --size-gb 128 --disk $diskName --caching ReadWrite --new
To configure the disk cache setting using an ARM template, specify the
caching property of the osDisk, or of each disk in the dataDisks collection, in the
virtual machine’s storageProfile configuration. The following example shows how to
set the cache setting on a data disk.
"dataDisks": [
{
"name": "datadisk1",
"diskSizeGB": "1023",
"lun": 0,
"caching": "ReadOnly",
"vhd": { "uri": "[variables('DISKURI')]" },
"createOption": "Empty"
}
]
                                          Basic Tier VM   Standard Tier VM
Disk size                                 4095 GB         4095 GB
Max 8 KB IOPS per persistent disk         300             500
Max number of disks performing max
IOPS (per storage account)                66              40
Table 2-4 shows the disk sizes, IOPS, and throughput per disk for standard
managed disks.
Standard Disk Type    S4          S6          S10         S20
Disk size             32 GB       64 GB       128 GB      512 GB
IOPS per disk         500         500         500         500
Throughput per disk   60 MB/sec   60 MB/sec   60 MB/sec   60 MB/sec

Standard Disk Type    S30              S40              S50
Disk size             1024 GB (1 TB)   2048 GB (2 TB)   4095 GB (4 TB)
IOPS per disk         500              500              500
Throughput per disk   60 MB/sec        60 MB/sec        60 MB/sec
TABLE 2-5 Premium unmanaged virtual machine disks: per account limits

Resource                                       Default Limit
Total disk capacity per account                35 TB
Total snapshot capacity per account            10 TB
Max bandwidth per account (ingress + egress)   <= 50 Gbps
This means that just like when using Standard storage, you must carefully
plan how many disks you create in each storage account as well as consider the
maximum throughput per Premium disk type because each type has a different
max throughput, which affects the overall max throughput for the storage
account (see Table 2-6).
TABLE 2-6 Premium unmanaged virtual machine disks: per disk limits

Premium Storage Disk Type                 P10        P20        P30               P40               P50
Disk size                                 128 GiB    512 GiB    1024 GiB (1 TB)   2048 GiB (2 TB)   4095 GiB (4 TB)
Max IOPS per disk                         500        2300       5000              7500              7500
Max throughput per disk                   100 MB/s   150 MB/s   200 MB/s          250 MB/s          250 MB/s
Max number of disks per storage account   280        70         35                17                8
Table 2-7 shows the disk sizes, IOPS, and throughput per disk for premium
managed disks.
TABLE 2-7 Premium managed virtual machine disks: per disk limits

Premium Disk Type     P4          P6          P10          P20
Disk size             32 GB       64 GB       128 GB       512 GB
IOPS per disk         120         240         500          2300
Throughput per disk   25 MB/sec   50 MB/sec   100 MB/sec   150 MB/sec

Premium Disk Type     P30              P40              P50
Disk size             1024 GB (1 TB)   2048 GB (2 TB)   4095 GB (4 TB)
IOPS per disk         5000             7500             7500
Throughput per disk   200 MB/sec       250 MB/sec       250 MB/sec
Each Premium storage-supported virtual machine size has scale limits and
performance specifications for IOPS, bandwidth, and the number of disks that
can be attached per VM. When you use Premium storage disks with VMs, make
sure that there is sufficient IOPS and bandwidth on your VM to drive disk
traffic.
Disk encryption
Protecting data is critical whether your workloads are deployed on-premises or
in the cloud. Microsoft Azure provides several options for encrypting your
Azure Virtual Machine disks to ensure that they cannot be read by unauthorized
users.
EXAM TIP
One of the key differences between Azure Disk Encryption and Storage
Service Encryption is that with Storage Service Encryption, Microsoft owns and
manages the keys, whereas with Azure Disk Encryption, you do. Understanding
this difference could come up on an exam.
When the dialog opens, specify the following configuration options, as shown
in Figure 2-27:
Folder: \\[name of storage account].file.core.windows.net\[name of share]
When you click Finish, you see another dialog like the one shown in Figure 2-28
requesting the user name and password to access the file share. The user
name should be in the following format: Azure\[name of storage account], and
the password should be the access key for the Azure storage account.
FIGURE 2-28 Specifying credentials to the Azure File Share
Azure Monitor
This tool allows you to get base-level infrastructure metrics and logs across your
Azure subscription including alerts, metrics, subscription activity, and Service
Health information. The Azure Monitor landing page provides a jumping off
point to configure other more specific monitoring services such as Application
Insights, Network Watcher, Log Analytics, Management Solutions, and so on.
You can learn more about Azure Monitor at https://docs.microsoft.com/en-
us/azure/monitoring-and-diagnostics/monitoring-overview-azure-monitor.
Application Insights
Application Insights is used for development and as a production monitoring
solution. It works by installing a package into your app, which can provide a
more internal view of what’s going on with your code. Its data includes response
times of dependencies, exception traces, debugging snapshots, and execution
profiles. It provides powerful smart tools for analyzing all this telemetry both to
help you debug an app and to help you understand what users are doing with it.
You can tell whether a spike in response times is due to something in an app, or
some external resourcing issue. If you use Visual Studio and the app is at fault,
you can be taken right to the problem line(s) of code so you can fix it.
Application Insights provides significantly more value when your application is
instrumented to emit custom events and exception information. You can learn
more about Application Insights including samples for emitting custom
telemetry at https://docs.microsoft.com/en-us/azure/application-insights/.
Network Watcher
The Network Watcher service provides the ability to monitor and diagnose
networking issues without logging in to your virtual machines (VMs). You can
trigger packet capture by setting alerts, and gain access to real-time performance
information at the packet level. When you see an issue, you can investigate in
detail for better diagnoses. This service is ideal for troubleshooting network
connectivity or performance issues.
FIGURE 2-29 Enabling boot and guest operating system diagnostics during
VM creation
After the diagnostics extension is enabled, you can then capture performance
counter data. Using the portal, you can select basic sets of counters by category,
as Figure 2-31 shows.
The Azure portal allows you to configure the agent to transfer IIS logs and
failed request logs to Azure storage automatically, as Figure 2-33 demonstrates.
The agent can also be configured to transfer files from any directory on the VM.
However, the portal does not surface this functionality, and it must be configured
through a diagnostics configuration file.
FIGURE 2-33 Configuring the storage container location for IIS and failed
request logs
For .NET applications that emit trace data, the extension can also capture this
data and filter by the following log levels: All, Critical, Error, Warning,
Information, and Verbose, as Figure 2-35 shows.
Event Tracing for Windows (ETW) provides a mechanism to trace and log
events that are raised by user-mode applications and kernel-mode drivers. ETW
is implemented in the Windows operating system and provides developers a fast,
reliable, and versatile set of event tracing features. Figure 2-36 demonstrates
how to configure the diagnostics extension to capture ETW data from specific
sources.
Figure 2-37 demonstrates the portal UI that allows you to specify which
processes to monitor for unhandled exceptions, and the container in Azure
storage to move the crash dump (mini or full) to after it is captured.
FIGURE 2-37 Configuring processes to capture for crash dump
The final diagnostics data to mention is boot diagnostics. If enabled, the Azure
Diagnostics Agent captures a screenshot of what the console looked like on the
last boot to a specified storage account. This helps you understand the problem if
your VM does not start. Figure 2-39 shows a VM with boot diagnostics enabled.
Clicking the Boot Diagnostics link in the portal shows you the last captured
screen shot of your VM, as Figure 2-40 shows.
FIGURE 2-40 The screen shot from the last boot for a Windows VM with boot
diagnostics configured
The output for boot diagnostics on Linux is different from the Windows
output. In this case, you get the log data in text form, as Figure 2-42
demonstrates. This is useful for downloading and searching the output.
FIGURE 2-42 Boot diagnostics logs for a Linux VM
EXAM TIP
The Azure Diagnostics agent can also be configured through ARM templates
and the command line tools by specifying a configuration file. For the exam
you should be aware of the schema of this configuration and how to apply it
using automated tools. You can learn more about the diagnostics schema at
https://docs.microsoft.com/en-us/azure/monitoring-and-diagnostics/azure-
diagnostics-schema.
Configuring alerts
You can configure and receive two types of alerts.
Metric alerts This type of alert triggers when the value of a specified
metric crosses a threshold you assign in either direction. That is, it triggers
both when the condition is first met and then afterwards when that
condition is no longer being met.
Activity log alerts This type of alert triggers when a new activity log event
occurs that matches the conditions specified in the alert. These alerts are
Azure resources, so they can be created by using an Azure Resource
Manager template. They also can be created, updated, or deleted in the
Azure portal.
On the new dialog, you specify the name, description, and the criteria for the
alert. Figure 2-44 shows the name and description for a new rule.
The next step is to configure the alert criteria. This is the metric to use, the
condition, threshold, and the period. The alert shown in Figure 2-45 will trigger
an alert when the Percentage CPU metric exceeds 70 percent over a five-minute
period.
FIGURE 2-45 The configuration of the alert
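A rough PowerShell equivalent of the alert configured in Figure 2-45 is
sketched below using the AzureRM.Insights cmdlets; the rule name and email
address are placeholders, and $vm is assumed to hold the VM object returned
by Get-AzureRmVM.
# Email an address when average CPU exceeds 70 percent over a five-minute window
$actionEmail = New-AzureRmAlertRuleEmail -CustomEmail "[email protected]"
Add-AzureRmMetricAlertRule -Name "HighCPU" `
    -Location $location `
    -ResourceGroup $rgName `
    -TargetResourceId $vm.Id `
    -MetricName "Percentage CPU" `
    -Operator GreaterThan `
    -Threshold 70 `
    -WindowSize 00:05:00 `
    -TimeAggregationOperator Average `
    -Actions $actionEmail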
When an alert is triggered, there are several actions that can be taken to either
trigger further notifications or to remediate the alert. These range from simply
emailing users in the owner, contributor, and reader roles, to sending emails to
designated administrator email addresses. Alerts can also call a webhook, run an
Azure Automation runbook, or even execute a logic app for more advanced
actions.
Webhooks allow you to route an Azure alert notification to other systems for
post-processing or custom actions. For example, you can use a webhook on an
alert to route it to services that send text messages, log bugs, notify a team via
chat/messaging services, or do any number of other actions. You can learn more
about sending alert information to webhooks at https://docs.microsoft.com/en-
us/azure/monitoring-and-diagnostics/insights-webhooks-alerts.
A runbook is a set of PowerShell code that runs in the Azure Automation
Service. See the following to learn more about using Runbooks to remediate
alerts at https://azure.microsoft.com/en-us/blog/automatically-remediate-azure-
vm-alerts-with-automation-runbooks/.
Logic Apps provides a visual designer to model and automate your process as
a series of steps known as a workflow. There are many connectors across the
cloud and on-premises to quickly integrate across services and protocols. When
an alert is triggered the logic app can take the notification data and use it with
any of the connectors to remediate the alert or start other services. To learn more
about Azure Logic Apps visit https://docs.microsoft.com/en-us/azure/logic-
apps/logic-apps-what-are-logic-apps. Figure 2-46 shows the various actions that
can take place when an alert is triggered.
On the creation dialog, you must specify the resource group to create the new
alert in, and then configure the criteria, starting with the event category. The
event category list contains the following categories, each exposing different
types of event sources.
Administrative
Security
Security Health
Recommendation
Policy
Autoscale
After the event category is specified, you can filter to a specific resource
group or resource as well as the specific operation. In Figure 2-48, the alert will
trigger anytime the LinuxVM virtual machine is updated.
FIGURE 2-48 Configuring an activity log alert
After the criteria is established, you define the actions that take place. Like the
alerts, these are actual resources created in the resource group. You can add one
or more actions from the following available options: Email, SMS, Webhook, or
ITSM. Figure 2-49 demonstrates how to configure an Email action type.
FIGURE 2-49 Specifying the actions for an activity log alert
At the time of this writing the following services are supported with
availability zones:
Linux virtual machines
Windows virtual machines
Zonal virtual machine scale sets
Managed disks
Load balancer
Supported virtual machine size families:
Av2
Dv2
DSv2
Each virtual machine in your availability set is assigned a fault domain and an
update domain. Each availability set has up to 20 update domains available,
which indicates the groups of virtual machines and the underlying physical
hardware that can be rebooted at the same time for host updates. Each
availability set is also comprised of up to three fault domains. Fault domains
represent which virtual machines will be on separate physical racks in the
datacenter for redundancy. This limits the impact of physical hardware failures
such as server, network, or power interruptions. It is important to understand that
the availability set must be set at creation time of the virtual machine.
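For example, the fault domain and update domain counts are specified when
the availability set itself is created; a minimal sketch follows. The Sku value
Aligned creates an availability set for managed disks, and on older AzureRM
module versions this is the Managed switch instead; the set name is a placeholder.
# Create an availability set with 3 fault domains and 5 update domains
New-AzureRmAvailabilitySet -ResourceGroupName $rgName `
    -Name "WebAVSet" `
    -Location $location `
    -PlatformFaultDomainCount 3 `
    -PlatformUpdateDomainCount 5 `
    -Sku Aligned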
Change VM sizes
There are many situations where the amount of compute processing a workload
needs varies dramatically from day to day, or even hour to hour. For example, in
many organizations line-of-business (LOB) applications are heavily utilized
during the workweek, but on the weekends they see little to no actual usage.
Other examples are workloads that require more processing time due to
scheduled events such as backups or maintenance windows where having more
compute time may make it faster to complete these tasks. Azure Resource
Manager based VMs make it relatively easy to change the size of a virtual
machine even after it has been deployed. There are a few things to consider with
this approach.
The first consideration is to ensure that the region your VM is deployed to
supports the instance size that you want to change the VM to. In most cases this
is not an issue, but if you have a use case where the desired size isn’t in the
region the existing VM is deployed to, your only options are to either wait for
the size to be supported in the region, or to move the existing VM to a region
that already supports it.
The second consideration is whether the new size is supported in the current
hardware cluster your VM is deployed to. This can be determined by clicking the
Size link in the virtual machine configuration blade in the Azure portal of a
running virtual machine, as Figure 2-55 demonstrates. If the size is available you
can select it. Changing the size reboots the virtual machine.
FIGURE 2-55 Creating the backend pool of the load balancer using VMs from
an availability set
If the size is not available, it means the size is not available in either the
region or the current hardware cluster. You can view the available sizes by
region at https://azure.microsoft.com/en-us/regions/services/. In the event you
need to change to a different hardware cluster, you must first stop the virtual
machine, and if it is part of an availability set, you must stop all instances of the
availability set at the same time. When all of the VMs are stopped, you can
change the size, which moves all of the VMs to the new hardware cluster where
they are resized and started. The reason all VMs in the availability set must be
stopped before performing the resize operation to a size that requires different
hardware is that all running VMs in the availability set must use the same
physical hardware cluster. Therefore, if a change of physical hardware cluster is
required to change the VM size, all VMs must first be stopped and then restarted
one-by-one on a different physical hardware cluster.
A third consideration is the form factor of the new size compared to the old
size. Consider scaling from a DS3_V2 to a DS2_V2. A DS3_V2 supports up to
eight data disks and up to four network interfaces. A DS2_V2 supports up to
four data disks and up to two network interfaces. If the VM you are resizing from
(DS3_V2) is using more disks or network interfaces than the target size supports,
the resize operation will fail.
Resizing a VM (PowerShell)
Use the Get-AzureRmVMSize cmdlet and pass the name of the region to the
location parameter to view all the available sizes in your region to ensure the
new size is available. If you specify the resource group and the VM name, it
returns the available sizes in the current hardware cluster.
# View available sizes
$location = "WestUS"
Get-AzureRmVMSize -Location $location
After you have identified the available size, use the following code to change
the VM to the new size.
$rgName = "EXAMREGWEBRG"
$vmName = "Web1"
$size = "Standard_DS2_V2"
$vm = Get-AzureRmVM -ResourceGroupName $rgName -VMName $vmName
$vm.HardwareProfile.VmSize = $size
Update-AzureRmVM -VM $vm -ResourceGroupName $rgName
If the virtual machine(s) are part of an availability set, the following code can
be used to shut them all down at the same time and restart them using the new
size.
$rgName = "ExamRefRG"
$vmName = "Web1"
$size = "Standard_DS2_V2"
$avSet = "WebAVSet"
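The snippet above only defines the variables. A minimal sketch of the
remaining steps, under the assumption that every VM in the availability set
should move to the same new size, might look like the following.
# Find all VMs in the availability set, stop them, resize, and restart
$avSetObj = Get-AzureRmAvailabilitySet -ResourceGroupName $rgName -Name $avSet
$vms = Get-AzureRmVM -ResourceGroupName $rgName |
    Where-Object { $_.AvailabilitySetReference.Id -eq $avSetObj.Id }
$vms | ForEach-Object { Stop-AzureRmVM -ResourceGroupName $rgName -Name $_.Name -Force }
foreach ($vm in $vms) {
    $vm.HardwareProfile.VmSize = $size
    Update-AzureRmVM -VM $vm -ResourceGroupName $rgName
    Start-AzureRmVM -ResourceGroupName $rgName -Name $vm.Name
}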
Resizing a VM (CLI)
The az vm list-vm-resize-options command can be used to see
which VM sizes are available in the current hardware cluster.
rgName="ExamRefRG"
vmName="Web1"
az vm list-vm-resize-options --resource-group $rgName --name $vmName --output table
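Once the desired size appears in the list, the resize itself is a single
command, as the following sketch shows.
az vm resize --resource-group $rgName --name $vmName --size Standard_DS2_V2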
Further down the page is where you specify the initial instance count and
instance size, as shown in Figure 2-57. You can also choose to use managed or
unmanaged disks, assign a public IP address, and a DNS label. Creating a VMSS using
the Azure portal also creates an instance of the Azure Load Balancer. Choosing
Enable scaling beyond 100 instances creates the VMSS with the
singlePlacementGroup property set to false. With this option, the portal also does
not create and associate the Azure Load Balancer with the scale set.
FIGURE 2-57 Configuring the instances and the load balancer for a VM scale
set
FIGURE 2-58 Configuring auto scale rules for a virtual machine scale set
The Azure portal creation process does not directly support applying
configuration management options like VM extensions. However, they can be
applied to a VMSS later using the command line tools or an ARM template.
Run the dir or ls command to see the files that are located in the source code
directory, as seen in Figure 2-60. Notice that there is a docker-compose.yaml
file. This file has the required information that allows you to create the
container.
Once this command has completed, you can list the images that are now local to
your PC by running the following command. Figure 2-62 shows the images that are
located on the PC.
docker images
FIGURE 2-62 Local Docker images
To see the running containers, run the following command. In Figure 2-63,
notice that the application is now running on your local machine on port 8080.
docker ps
As seen in Figure 2-64, open your local web browser and see the Azure voting
application up and running using the container image that you just created. Make
sure to reference the port 8080 where the container is running on your local PC.
FIGURE 2-64 Azure Voting App running locally as a Docker container
Once you have verified that the application functions using this container image,
you can stop and remove the running containers. You don't want to
remove the image, because it will be used again.
docker-compose stop
docker-compose down
Open the Azure Cloud Shell and first create a Resource group.
az group create --name ContainersRG --location westus2
Next, create the Azure Container Registry located in the Resource group you
just created.
az acr create --resource-group ContainersRG --name ExamRefRegistry \
    --sku Basic --admin-enabled true
After the ACR has been created you can then login to the registry using the
az acr login command.
az acr login --name ExamRefRegistry --username ExamRefRegistry \
    --password <password found in azure portal>
List the login server name of your ACR using the following command, as seen in
Figure 2-65. Make note that the registry name must be globally unique.
az acr list --resource-group ContainersRG --query "[].{acrLoginServer:loginServer}" --output table
Run the following command to list the images that are local on your PC.
These should be there based on the images you already created, as seen in Figure
2-66.
docker images
These images need to be tagged with the loginServer name of the registry. The
tag is used for routing when pushing container images to the ACR. Run the
following command to tag the azure-vote-front image, referencing the ACR
server name. Notice that :redis-v1 is added to the end, which provides a version
number. This is important for production deployments because it allows multiple
versions of the same image to be stored and used.
docker tag azure-vote-front examrefregistry.azurecr.io/azure-vote-front:redis-v1
As seen in Figure 2-67, run the docker images command again and notice that
a new image has been added with the TAG added.
Now it’s time to push this image up to the ACR. Use the following command,
as seen in Figure 2-68, and make sure that the ACR server is correct.
docker push examrefregistry.azurecr.io/azure-vote-front:redis-v1
When you push your image to the ACR it appears in the Azure portal as a
Repository. Open the Azure Portal and locate your ACR, then move to the
Repository section. In Figure 2-69, notice that azure-vote-front is the image that
you created and is now in Azure.
FIGURE 2-69 Azure Container Services in the Azure Portal showing the
image
If you click through the azure-vote-front repository, you see the tag that you
added as redis-v1, as seen in Figure 2-70.
FIGURE 2-70 Azure-vote-front Image Tag in the Azure portal
To use the Azure CLI to create the Kubernetes (K8s) cluster, you execute just
one command. This command simplifies the entire creation of the cluster to only
one line. Notice the agent-count parameter; this is the number of nodes that will
be available to your applications. This of course could be changed if you wish to
have more than one node in your cluster. For this example, all the commands
will be run from the Azure Cloud Shell.
az acs create --orchestrator-type kubernetes --resource-group ContainersRG \
    --name ExamRefK8sCluster --generate-ssh-keys --agent-count 1
Once the cluster has been created, you need to gain access by issuing a
command to get the credentials. This command configures kubectl for the
K8s cluster that was just created in ACS. This is done using the following
command.
az acs kubernetes get-credentials --resource-group=ContainersRG --name=ExamRefK8sCluster
Verify your connection to the cluster by listing the nodes, as seen in Figure 2-
71.
kubectl get nodes
Once this file has been updated it should be pushed to the ACR again because
this change is only found on your local PC currently. Use the following
command to push the image to the registry.
docker push examrefregistry.azurecr.io/azure-vote-front:redis-v1
Next, you deploy the application to Kubernetes using the kubectl command
from your local PC. First you need to install the kubectl command line. If you
are using Windows this can be done by opening a cmd prompt as administrator
and running the following command.
az acs kubernetes install-cli
Now that the kubectl is installed locally, you need to configure it to use your
K8s Cluster in Azure. Run the following command to configure kubectl for this
purpose. You need to reference your RSA key for this cluster, which can be
found in the .ssh folder of your cloud shell.
az acs kubernetes get-credentials -n ExamRefK8sCluster -g ContainersRG \
    --ssh-key-file <name of private key file>
Once you have kubectl configured, a simple command starts the Azure voting
application running on the K8s cluster in Azure. Figure 2-73 shows the
feedback from the command line after successfully starting the application on
the K8s cluster.
kubectl create -f azure-vote-all-in-one-redis.yml
FIGURE 2-73 Azure voting application sample running on Kubernetes in
Azure
Once the application has been started, you can run the following command to
watch as Kubernetes and Azure configure it for use. In Figure 2-74, notice
how it moves from having a pending public IP address to an assigned address.
kubectl get service azure-vote-front --watch
Once the application has an external IP address, you can connect to it
with a web browser, as seen in Figure 2-75.
FIGURE 2-75 Connected to the sample application running on a Kubernetes
cluster in Azure
After this command completes you can run the kubectl get nodes command
again to see that the nodes have been added to the cluster, as seen in Figure 2-76.
Within the orchestrator itself you can make a change to resources that are
being leveraged for the service. In the case of Kubernetes, you might want to
scale your pods. Pods are like instances of your application. To do this manually,
you would use the kubectl command line. First, run a command to see the
number of pods, as seen in Figure 2-77.
kubectl get pods
Next, run the following command to scale the frontend of the application.
Figure 2-78 shows the pods after they have been scaled.
kubectl scale --replicas=5 deployment/azure-vote-front
FIGURE 2-78 Kubernetes pods scaled horizontally
You can view which containers are running, what container image they’re
running, and where those containers are running. You can also view detailed
audit information showing commands used with containers.
Troubleshooting containers can be complex because there are many of them.
The Container Management Solution simplifies these tasks by allowing you to
search centralized logs without having to remotely connect to hosts. You are also
able to search the data as one pool of information rather than having those logs
isolated on each machine.
Finding containers that may be using excess resources on a host is also easy.
You can view centralized CPU, memory, storage, and network usage and
performance information for containers. The solution supports the following
container orchestrators:
Docker Swarm
DC/OS
Kubernetes
Service Fabric
Red Hat OpenShift
Thought experiment
In this thought experiment, apply what you have learned in this chapter.
You can find answers to these questions in the next section.
You are the IT administrator for Contoso and you are tasked with migrating
an existing web farm and database to Microsoft Azure. The web application is
written in PHP and is deployed across 20 physical servers running RedHat for
the operating system and Apache for the web server. The backend consists of
two physical servers running MySQL in an active/passive configuration.
The solution must provide the ability to scale to at least as many web servers
as the existing solution and ideally the number of web server instances should
automatically adjust based on the demand. All the servers must be reachable on
the same network so the administrator can easily connect to them using SSH
from a jump box to administer the VMs.
Answer the following questions for your manager:
1. Which compute option would be ideal for the web servers?
2. Should all of the servers be deployed into the same availability set, or
should they be deployed in their own?
3. What would be the recommended storage configuration for the web
servers? What about the database servers?
4. What feature could be used to ensure that traffic to the VMs only goes to
the appropriate services (Apache, MySQL, and SSH)?
Chapter summary
This chapter covered a broad range of topics ranging from which workloads are
supported in Azure VMs, to creating and configuring virtual machines and
monitoring them. This chapter also discussed containers and using Azure
Container Services to manage and monitor container based workloads. Here are
some of the key takeaways from this chapter:
Most workloads can run exceedingly well in Azure VMs; however, it is
important to understand that there are some limitations such as not being
able to run 32-bit operating systems, or low level network services such as
hosting your own DHCP server.
Each compute family is optimized for either general or specific workloads.
You should optimize your VM by choosing the most appropriate size.
You can create VMs from the portal, PowerShell, the CLI tools, and Azure
Resource Manager templates. You should understand when to use which
tool and how to configure the virtual machine resource during provisioning
and after provisioning. For example, availability sets can only be set at
provisioning time, but data disks can be added at any time.
You can connect to Azure VMs using a public IP address or a private IP
address with RDP, SSH, or even PowerShell. To connect to a VM using a
private IP you must also enable connectivity such as site-to-site, point-to-
site, or ExpressRoute.
The Custom Script Extension is commonly used to execute scripts on
Windows or Linux-based VMs. The PowerShell DSC extension is used to
apply desired state configurations to Windows-based VMs.
To troubleshoot a problem that only occurs when an application is deployed
you can deploy a debug version of your app, and enable remote debugging
on Windows-based VMs.
VM storage comes in standard and Premium storage. For I/O intensive
workloads or workloads that require low latency on storage you should use
Premium storage.
There are unmanaged and managed disks and images. The key difference
between the two is with unmanaged disks or images it is up to you to
manage the storage account. With managed disks, Azure takes care of this
for you so it greatly simplifies managing images and disks.
On Windows-based VMs you can enable the Azure Diagnostics Agent to
capture performance data, files and folders, crash dumps, event logs,
application logs, and events from ETW and have that data automatically
transfer to an Azure Storage account. On Linux VMs you can only capture
and transfer performance data.
You can configure alerts based on metric alerts (captured from Azure
Diagnostics) to Activity Log alerts that can notify by email, web hook,
SMS, Logic Apps, or even an Azure Automation Runbook.
Azure Fault Domains provide high availability at the data center level.
Azure Availability Sets provide high availability within a data center, and a
properly designed multi-region solution that takes advantage of regional
pairing provides availability at the Azure region level.
Managed disks provide additional availability over unmanaged disks by
aligning with availability sets and providing storage in redundant storage
units.
Virtual Machine Scale Sets (VMSS) can scale up to 1,000 instances. You
need to ensure that you create the VMSS configured for large scale sets if
you intend to go above 100 instances. There are several other limits to
consider too. Using a custom image, you can only create up to 300
instances. To scale above 100 instances you must use the Standard SKU of
the Azure Load Balancer or the Azure App Gateway.
CHAPTER 3
Design and implement a storage strategy
Figure 3-1 shows some of the concepts of a storage account. Each blob
storage account can have one or more containers and all blobs must be uploaded
to a container. Containers are similar in concept to a folder on your computer, in
that they are used to group blobs within a storage account. There can be a
container at the base of the storage account, appropriately named root, and there
can be containers one level down from the root container.
FIGURE 3-1 Azure Storage account entities and hierarchy relationships
TABLE 3-1 Storage account types and their supported blob types

Storage account type       General-purpose Standard                    General-purpose Premium   Blob storage, hot and cool access tiers
Services supported         Blob, File, and Queue services              Blob service              Blob service
Types of blobs supported   Block blobs, page blobs, and append blobs   Page blobs                Block blobs and append blobs
EXAM TIP
The type of the blob is set at creation and cannot be changed after the fact. A
common problem that may show up on the exam is if a .vhd file was
accidentally uploaded as a block blob instead of a page blob. The blob would
have to be deleted first and re-uploaded as a page blob before it could be
mounted as an OS or data disk to an Azure VM.
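To avoid that problem, the blob type can be set explicitly at upload time. A
minimal CLI sketch follows; the file and container names are placeholders, and
with PowerShell the BlobType parameter of Set-AzureStorageBlobContent
achieves the same result.
# Upload a .vhd explicitly as a page blob so it can be mounted as a disk
az storage blob upload --container-name vhds --file ./examref.vhd \
    --name examref.vhd --type page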
After a container is created, you can also use the portal to upload blobs to the
container as demonstrated in Figure 3-4. Click the Upload button in the
container and then browse to the blob to upload. If you click the Advanced
button you can select the blob type (block, page, or append), the block size, and
optionally a folder to upload the blob to.
FIGURE 3-4 The Azure Management Portal uploads a blob to a storage
account container
You can use the PowerShell cmdlets to upload a file as well, using the
Set-AzureStorageBlobContent cmdlet as shown in the following example.
$containerName = "[storage account container]"
$blobName = "[blob name]"
$localFileDirectory = "C:\SourceFolder"
$localFile = Join-Path $localFileDirectory $blobName
Set-AzureStorageBlobContent -File $localFile `
    -Container $containerName `
    -Blob $blobName `
    -Context $context
You can use the Azure CLI to upload a file as well using the az storage blob
upload command as shown in the following example.
container_name="[storage account container]"
file_to_upload="C:\SourceFolder\[blob name]"
blob_name="[blob name]"
az storage blob upload --container-name $container_name --file $file_to_upload --name $blob_name
Azure Storage Explorer provides the ability to upload a single file or multiple
files at once. The Upload Folder feature provides the ability to upload all of the
files and folders, recreating the hierarchy in the Azure Storage Account. Figure
3-6 shows the two upload options.
FIGURE 3-6 Uploading files and folders using Azure Storage Explorer
This example shows how you can switch the /Dest and /Source
parameters to upload the file instead.
AzCopy /Source:C:\sourceFolder /Dest:https://[dest storage].blob.core.windows.net/[dest container] /DestKey:[key] /Pattern:"Workshop List - 2017.xlsx"
MORE INFO AZCOPY EXAMPLES
AzCopy provides many capabilities beyond simple uploading and
downloading of files. For more information see the following:
https://docs.microsoft.com/en-us/azure/storage/common/storage-use-
azcopy.
After the copy is started, you can monitor the status using the az storage blob
show command as shown in the following example.
az storage blob show \
    --account-name "$destStorageAccount" \
    --account-key "$destStorageKey" \
    --container-name "$destContainer" \
    --name "$blobName"
AzCopy offers a feature to mitigate the lack of SLA with the async copy
service. The /SyncCopy parameter ensures that the copy operation gets
consistent speed during a copy. AzCopy performs the synchronous copy by
downloading the blobs to copy from the specified source to local memory, and
then uploading them to the Blob storage destination.
AzCopy /Source:https://[source storage].blob.core.windows.net/[source container]/ /Dest:https://[destination storage].blob.core.windows.net/[destination container]/ /SourceKey:[source key] /DestKey:[destination key] /Pattern:*.vhd /SyncCopy
FIGURE 3-7 Using the async blob copy service with StorageExplorer
There are several common use cases for using Azure files. A few examples
include the following:
Migration of existing applications that require a file share for storage.
Shared storage of files such as web content, log files, application
configuration files, or even installation media.
To create a new file share using the Azure portal, open the blade for a storage
account, click the Files tile, and then click the + File Share button, as shown in
Figure 3-9.
FIGURE 3-9 Adding a new share with Azure files
To create a share using the Azure PowerShell cmdlets, use the following code:
$storageAccount = "[storage account]"
$rgName = "[resource group name]"
$shareName = "contosoweb"
$storageKey = Get-AzureRmStorageAccountKey -ResourceGroupName $rgName -Name $storageAccount
$ctx = New-AzureStorageContext -StorageAccountName $storageAccount `
    -StorageAccountKey $storageKey.Value[0]
New-AzureStorageShare -Name $shareName -Context $ctx
The Azure CLI tools can also be used to create a file share as the following
example demonstrates:
rgName="[resource group name]"
storageAccountName="[storage account]"
shareName="contosoweb"
constring=$(az storage account show-connection-string -n $storageAccountName -g $rgName --query 'connectionString' -o tsv)
az storage share create --name $shareName --quota 2048 --connection-string $constring
To access a share created in Azure files from a Windows machine, you should
store the storage account name and key using the Cmdkey.exe utility. This
allows you to associate the credentials with the URI to the Azure files share. The
syntax for using Cmdkey.exe is shown in the following example.
cmdkey.exe /add:[storage account name].file.core.windows.net /user:Azure\[storage account name] /pass:[storage account key]
After the credentials are stored, use the net use command to map a drive to the
file share, as shown in the following example.
net use z: \\examrefstorage.file.core.windows.net\contosoweb
To access an Azure File share from a Linux machine you need to install the
cifs-utils package from the Samba project.
On Ubuntu and Debian-based distributions, use the apt-get package manager
to install the package as the following example shows:
sudo apt-get update
sudo apt-get install cifs-utils
After the cifs-utils package is installed, create a mount point for the share:
mkdir mymountpoint
Next, you will mount the Azure File Share to the mount point.
sudo mount -t cifs //[storage account name].file.core.windows.net/[share name] ./mymountpoint \
    -o vers=2.1,username=[storage account name],password=[storage account key],dir_mode=0777,file_mode=0777,serverino
After the CDN profile is created, you next add an endpoint to the profile. Add
an endpoint by opening the CDN profile in the portal and clicking the + Endpoint
button. On the creation dialog, specify a unique name for the CDN endpoint and
the configuration for the origin, such as the type (Storage, Web App, Cloud
Service, or Custom), the host header, and the origin ports for HTTP and HTTPS,
and then click the Add button. Figure 3-11 shows an endpoint using an Azure
Storage account as the origin type.
FIGURE 3-11 Creating a CDN endpoint using the Azure portal
Blobs stored in public access enabled containers are replicated to the CDN
edge endpoints. To access the content within the CDN, instead of your storage
account, change the URL for the blob to reference the absolute path of the
created CDN endpoint combined with the relative path of the original file, as
shown in the following:
Original URL within storage:
http://storageaccount.blob.core.windows.net/imgs/logo.png
New URL accessed through CDN:
http://examrefcdn-blob.azureedge.net/imgs/logo.png
Figure 3-12 shows how this process works at a high level. For example, a file
named Logo.png that was originally in the imgs public container in Azure
Storage can be accessed through the created CDN endpoint. Figure 3-12 also
shows the benefits of a user accessing the file from the United Kingdom to the
storage account in the West US versus accessing the same file through a CDN
endpoint, which will resolve much closer to the user.
An additional benefit of using a CDN goes beyond deploying your content
closer to users. A typical public-facing web page contains several images and
may contain additional media such as .pdf files. Each request that is served from
the Azure CDN means it is not served from your website, which can remove a
significant amount of load.
Managing how long content stays in the CDN differs depending on whether
your origin is an Azure storage account, an Azure cloud service,
or an Azure web app.
For content served from a web site, set the Cache-Control HTTP header. This
header can be set programmatically when serving up the content, or by setting
the configuration of the web app.
Manage the content expiration through storage by setting the time-to-live
(TTL) period of the blob itself. Figure 3-13 demonstrates how using Storage
Explorer you can set the CacheControl property on the blob files directly. You
can also set the property using Windows PowerShell or the CLI tools when
uploading to storage.
EXAM TIP
You can control the expiration of blob data in the CDN by setting the
CacheControl metadata property of blobs. If you do not explicitly set this
property, the default value is seven days before the data is refreshed, or purged
if the original content is deleted.
EXAM TIP
The Content path of the CDN purge dialog supports specifying regular
expressions and wildcards to purge multiple items at once. Purge all and
Wildcard purge are not currently supported by Azure CDN from Akamai. You
can see examples of expressions here: https://docs.microsoft.com/en-us/azure/cdn/cdn-purge-endpoint.
EXAM TIP
By default, assets are first cached as they are requested. This means that the
first request from each region may take longer, since the edge servers will not
have the content cached and will need to forward the request to the origin
server. Pre-loading content avoids this first hit latency. If you are using Azure
CDN from Verizon you can pre-load assets to mitigate this initial lag.
Mapping a domain that is already in use within Azure may result in minor
downtime as the domain is updated. If you have an application with an SLA
that depends on the domain, you can avoid that downtime by using a second
option to validate the domain. Essentially, you use an intermediary domain to
prove to Azure that you own the domain by performing the same process as
before, but with an added step that uses the asverify subdomain. The
asverify subdomain is a special subdomain recognized by Azure. By prepending
asverify to your own subdomain, you permit Azure to recognize your custom
domain without modifying the DNS record for the domain. Once verified, you
modify the DNS record for the domain, and it is mapped to the blob endpoint
with no downtime.
After the asverify records are verified in the Azure portal, you then add the
correct DNS records. You can then delete the asverify records, because they are
no longer used. Table 3-4 shows the example DNS records created when using
the asverify method.
TABLE 3-4 Mapping a domain to an Azure Storage account in DNS with the
asverify intermediary domain
To enable a custom domain for an Azure CDN endpoint, the process is almost
identical. Create a CNAME record that points from cdn.contoso.com to the
Azure CDN endpoint [CDN endpoint].azureedge.net. Table 3-5 shows mapping
a custom CNAME DNS record to the CDN endpoint.
The cdnverify intermediate domain can be used just like asverify for storage.
Use this intermediate validation if you’re already using the domain with an
application because updating the DNS directly can result in downtime. Table 3-6
shows the CNAME DNS records needed for verifying your domain using the
cdnverify subdomain.
TABLE 3-6 Mapping a domain to an Azure CDN endpoint in DNS with the
cdnverify intermediary domain
After the DNS records are created and verified you then associate the custom
domain with your CDN endpoint or blob storage account.
EXAM TIP
Azure Storage does not yet natively support HTTPS with custom domains.
You can currently use the Azure CDN to access blobs with custom domains
over HTTPS.
There are several options available for authenticating users, which provide
various levels of access. The first type of authentication is by using the Azure
storage account name and authentication key. With the storage account name
and key, you have full access to everything within the storage account. You can
create, read, update, and delete containers, blobs, tables, queues, and file shares.
You have full administrative access to everything other than the storage account
itself (you cannot delete the storage account or change settings on the storage
account, such as its type).
To access the storage account name and key, open the storage account from
within the Azure portal and click the Keys tile. Figure 3-15 shows the primary
and secondary access keys for the Examrefstorage Storage Account. With this
information, you can use storage management tools like Storage Explorer, or
command-line tools like Windows PowerShell, CLI, and AzCopy.exe to manage
content in the storage account.
Each storage account has a primary and a secondary key. The reason there are
two keys is to allow you to modify applications to use the secondary key instead
of the first, and then regenerate the first key using the Azure portal or the
command line tools. In PowerShell, this is accomplished with the New-
AzureRmStorageAccountKey cmdlet and for the Azure CLI you will use the az
storage account keys renew command. This technique is known as key rolling,
and it allows you to reset the primary key with no downtime for applications that
access storage using the authentication key directly.
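For example, a quick sketch of regenerating the primary key (key1) with PowerShell once all applications have been switched over to the secondary key:
$storageAccount = "[storage account]"
$rgName = "[resource group name]"
# Regenerate key1 once all applications are using key2
New-AzureRmStorageAccountKey -ResourceGroupName $rgName `
                             -Name $storageAccount `
                             -KeyName key1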
Applications will often use the storage account name and key for access to
Azure storage. Sometimes this is to grant access by generating a Shared Access
Signature token and sometimes for direct access with the name and key. It is
important to protect these keys because they provide full access to the storage
account.
Azure Key Vault helps safeguard cryptographic keys and secrets used by
cloud applications and services. By using Key Vault, you can encrypt keys and
secrets (such as authentication keys, storage account keys, data encryption keys,
.PFX files, and passwords) by using keys that are protected by hardware security
modules (HSMs).
The following example shows how to create an Azure Key Vault and then
securely store the key in Azure Key Vault (software protected keys) using
PowerShell.
$vaultName = "[key vault name]"
$rgName = "[resource group name]"
$location = "[location]"
$keyName = "[key name]"
$secretName = "[secret name]"
$storageAccount = "[storage account]"
# create the key vault
New-AzureRmKeyVault -VaultName $vaultName -ResourceGroupName $rgName -Location $location
# create a software managed key
$key = Add-AzureKeyVaultKey -VaultName $vaultName -Name $keyName -Destination 'Software'
# retrieve the storage account key (the secret)
$storageKey = Get-AzureRmStorageAccountKey -ResourceGroupName $rgName -Name $storageAccount
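The example above retrieves the key but stops short of writing it into the vault. A minimal sketch of that final step, using the Set-AzureKeyVaultSecret cmdlet and the variables defined above, might look like the following:
# Store the retrieved storage account key as a secret in the vault
$secretValue = ConvertTo-SecureString -String $storageKey[0].Value -AsPlainText -Force
Set-AzureKeyVaultSecret -VaultName $vaultName -Name $secretName -SecretValue $secretValue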
The following example shows how to create a SAS URI using the Azure
PowerShell cmdlets. The example creates a storage context using the storage
account name and key that is used for authentication, and to specify the storage
account to use. The context is passed to the New-AzureStorageBlobSASToken
cmdlet, which is also passed the container, blob, and permissions (read, write,
and delete), along with the start and end time that the SAS URI is valid for.
$storageAccount = "[storage account]"
$rgName = "[resource group name]"
$container = "[storage container name]"
$storageKey = Get-AzureRmStorageAccountKey -ResourceGroupName $rgName -Name $storageAccount
$context = New-AzureStorageContext -StorageAccountName $storageAccount `
                                   -StorageAccountKey $storageKey[0].Value
$startTime = Get-Date
$endTime = $startTime.AddHours(4)
New-AzureStorageBlobSASToken -Container $container `
                             -Blob "Workshop List - 2017.xlsx" `
                             -Permission "rwd" `
                             -StartTime $startTime `
                             -ExpiryTime $endTime `
                             -Context $context
Figure 3-17 shows the output of the script. After the script executes, notice the
SAS token output to the screen.
This is a query string that can be appended to the full URI of the blob or
container the SAS URI was created with, and passed to a client
(programmatically or manually). Use the SAS URI by combining the full URI to
the secure blob or container and appending the generated SAS token. The
following example shows the combination in more detail.
The full URI to the blob in storage:
https://examrefstorage.blob.core.windows.net/examrefcontainer1/Workshop%20List%20-%202017.xlsx
The full URI combined with the generated SAS token:
https://examrefstorage.blob.core.windows.net/examrefcontainer1/Workshop%20List%20-%202017.xlsx?sv=2016-05-31&sr=b&sig=jFnSNYWvxt6Miy6Lc5xvT0Y1IOwerdWcFvwba065fws%3D&st=2017-10-26T14%3A55%3A44Z&se=2017-10-26T18%3A55%3A44Z&sp=rwd
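As an illustration of consuming the token, a client can build a storage context from the SAS token alone, with no account key. This is a minimal sketch; the $sasToken value is a placeholder for the token generated above, and the destination path is hypothetical:
$sasToken = "[generated SAS token]"
# A context built from a SAS token grants only the permissions encoded in the token
$sasContext = New-AzureStorageContext -StorageAccountName $storageAccount `
                                      -SasToken $sasToken
Get-AzureStorageBlobContent -Container $container `
                            -Blob "Workshop List - 2017.xlsx" `
                            -Destination "C:\Downloads\" `
                            -Context $sasContext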
The Azure CLI tools can also be used to create SAS tokens using the az
storage blob generate-sas command.
storageAccount="[storage account name]"
container="[storage container name]"
storageAccountKey="[storage account key]"
blobName="[blob name]"
az storage blob generate-sas \
    --account-name "$storageAccount" \
    --account-key "$storageAccountKey" \
    --container-name "$container" \
    --name "$blobName" \
    --permissions r \
    --expiry "2018-05-31"
FIGURE 3-18 Creating stored access policies using Azure Storage Explorer
To use the created policies, reference them by name during creation of a SAS
token using Storage Explorer, or when creating a SAS token using PowerShell or
the CLI tools.
You can also enable and configure storage diagnostics by using the Set-
AzureStorageServiceLoggingProperty and Set-
AzureStorageServiceMetricsProperty Azure PowerShell cmdlets.
In the following example, the Set-AzureStorageServiceMetricsProperty cmdlet
enables hourly storage metrics on the blob service with a retention period of 30
days and at the ServiceAndApi level. The next call is to the Set-
AzureStorageServiceLoggingProperty cmdlet, which is also configuring the blob
service and a 30-day retention period but is only logging delete operations.
$storageAccount = "[storage account name]"
$rgName = "[resource group name]"
$storageKey = Get-AzureRmStorageAccountKey -ResourceGroupName $rgName -Name $storageAccount
$context = New-AzureStorageContext -StorageAccountName $storageAccount `
                                   -StorageAccountKey $storageKey[0].Value
Set-AzureStorageServiceMetricsProperty -ServiceType Blob `
                                       -MetricsType Hour `
                                       -RetentionDays 30 `
                                       -MetricsLevel ServiceAndApi `
                                       -Context $context
Set-AzureStorageServiceLoggingProperty -ServiceType Blob `
                                       -RetentionDays 30 `
                                       -LoggingOperations Delete `
                                       -Context $context
Metrics data is recorded at the service level and at the service and API level.
At the service level, a basic set of metrics such as ingress and egress,
availability, latency, and success percentages, which are aggregated for the Blob,
Table, and Queue services, is collected. At the service and API level, a full set of
metrics that includes the same metrics for each storage API operation, in
addition to the service-level metrics, is collected. Statistics are written to a table
entity every minute or hourly depending on the value passed to the
MetricsType parameter (the Azure portal only supports using hour).
Logging data is persisted to Azure blob storage. As part of configuration, you
can specify which types of operations should be captured. The operations
supported are: All, None, Read, Write, and Delete.
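To verify what is currently configured, the corresponding Get- cmdlets can be run against the same context; a quick sketch:
# Inspect the current logging and metrics configuration on the blob service
Get-AzureStorageServiceLoggingProperty -ServiceType Blob -Context $context
Get-AzureStorageServiceMetricsProperty -ServiceType Blob -MetricsType Hour -Context $context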
To view the data, you can programmatically access table storage, or use a tool
such as Storage Explorer or Visual Studio as demonstrated in Figure 3-22.
Add an alert by clicking the Alert Rules link on the storage account, and then
click Add Alert. Figure 3-24 shows the Alert Rules page in the Azure portal,
where you can select the Resource (blob, queue, or table), and specify the alert
name and description, along with the actual metric to alert on. In this example,
the value in the Metric drop-down is set to capacity and (not shown) is the
threshold and condition. The Condition is set to Greater Than, and the Threshold
is set to 5497558138880 (5 TB in bytes). Each alert can be configured to email
members of the owners, contributors, and reader roles, or a specific email
address.
EXAM TIP
Storage service encryption only encrypts newly created data after encryption
is enabled. For example, if you create a new Resource Manager storage
account but don’t turn on encryption, and then you upload blobs or archived
VHDs to that storage account and then turn on SSE, those blobs will not be
encrypted unless they are rewritten or copied.
FIGURE 3-26 Creating an Azure Data Lake Store and configuring encryption
Access to the Azure Data Lake Store is over a public IP address. You can
enable the firewall to allow in only certain source IP addresses, such as a client
application running in Azure or on-premises. Figure 3-27 demonstrates how to
configure the firewall rules to allow in one or more IP addresses by specifying
the start IP and end IP of the range.
FIGURE 3-27 Configuring the firewall rules for an Azure Data Lake Store
Azure Data Lake Store implements an access control model that derives from
HDFS, which in turn derives from the POSIX access control model. This allows
you to specify file and folder access control lists (ACLs) on data in your data
lake using users or groups from your Azure AD tenant.
There are two kinds of access control lists (ACLs): Access ACLs and Default
ACLs.
Access ACLs These control access to an object. Files and folders both have
Access ACLs.
Default ACLs A “template” of ACLs associated with a folder that
determine the Access ACLs for any child items that are created under that
folder. Files do not have Default ACLs.
The permissions on a filesystem object are Read, Write, and Execute, and they
can be used on files and folders as shown in Table 3-10.
TABLE 3-10 File system permissions for Azure Data Lake Store
To configure permissions on a data item in your data lake, open the Data Lake
Store in the Azure portal and click Data Lake Explorer. From there, right click
the data file you wish to secure and click Access as shown in Figure 3-28.
The next screen will display the current users that have access to the file and
what their permissions are. You can add additional users by clicking the Add
button, then selecting the User or Group, and then specifying the permissions
(Read, Write or Execute) as shown in Figure 3-29.
FIGURE 3-29 Adding a new user to an Azure Data Lake Store file
There are several user and group concepts to understand at a basic level.
Super User A super-user has the most rights of all the users in the Data
Lake Store. A super-user has the following permissions:
Has read, write, and execute permissions to all files and folders.
Can change the permissions on any file or folder.
Can change the owning user or owning group of any file or folder.
Everyone in the Azure Owners role (role-based access control) for a
Data Lake Store account is automatically a super-user for that account.
Owning User The user who created the item is automatically the owning
user of the item. The owner of an item can do the following:
Change the permissions of a file that is owned.
Change the owning group of a file that is owned, if the owning user is
also a member of the target group.
Owning Group Every user is associated with a “primary group.” For
example, user “alice” might belong to the “finance” group. Alice might also
belong to multiple groups, but one group is always designated as her
primary group. In POSIX, when Alice creates a file, the owning group of
that file is set to her primary group, which in this case is “finance.”
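To put the ACL model into practice, permissions can be granted from PowerShell as well as the portal. The following is a minimal sketch using the Set-AzureRmDataLakeStoreItemAclEntry cmdlet; the account name, the /data folder path, and the user alice@contoso.com are hypothetical:
$adlsAccount = "[data lake store account]"
# Look up the Azure AD object ID for the (hypothetical) user alice@contoso.com
$userId = (Get-AzureRmADUser -UserPrincipalName "alice@contoso.com").Id
# Grant read and execute permissions on the /data folder to that user
Set-AzureRmDataLakeStoreItemAclEntry -Account $adlsAccount -Path "/data" `
                                     -AceType User -Id $userId -Permissions ReadExecute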
Thought experiment
In this thought experiment, apply what you have learned about this objective.
You can find answers to these questions in the “Answers” section at the end of
this chapter.
You are the web administrator for www.contoso.com which is hosted in
virtual machines in the West US Azure region. Several customers from England
and China complain that the PDF files for your product brochures take too long
to download. Currently, the PDF files are served from the /brochures folder
of your website.
1. What steps should you take to mitigate the download time for your PDFs?
2. What changes need to happen on the www.contoso.com web site?
Chapter summary
This chapter covered a broad range of topics focused on Azure Storage, CDN,
and security related to Azure Data Lake Store.
Below are some of the key takeaways from this chapter:
Azure storage can be managed through several tools directly from
Microsoft: the Azure portal, PowerShell, the CLI, Storage Explorer, and
AzCopy. It's important to know when to use each tool.
Access to blobs can be controlled using several techniques. Among them
are: storage account name and key, shared access signature (SAS), public
access level of the container they reside in, and using firewall/virtual
network service endpoints.
Use the async blob copy service to copy files between storage accounts or
from outside publicly accessible locations to your Azure storage account.
Storage accounts and CDN both support custom domains. Enabling SSL is
only supported on custom domains when the blob is accessed via CDN.
Enable diagnostics and alerts to monitor the status of your storage accounts.
Storage Explorer and Visual Studio have capabilities for browsing blob and
table storage to download and review diagnostic data.
Storage service encryption automatically encrypts and decrypts data added
or updated in your storage account. If the data already existed in the storage
account prior to enabling SSE it will not be encrypted.
Azure Data Lake Store supports encryption at rest by default, and data is
always encrypted in transit.
You can choose to have Azure Data Lake Store manage your encryption
keys, or reference keys in an existing Azure Key Vault.
The security for Azure Data Lake Store is based on POSIX permissions.
You can assign users or groups from Azure AD read, write, and
execute permissions per item.
Users in the role-based access control Owners role are automatically
added as super-users.
CHAPTER 4
Implement Virtual Networks
The Create Virtual Network blade opens. Here you can provide configuration
information about the Virtual Network. This blade requires the following inputs,
as shown in Figure 4-2:
Name of the Virtual Network
Address Space to be used for the VNet using CIDR notation
Subscription in which the VNet is created
The resource group where the VNet is created
Location for VNet
Subnet Name for the first subnet in the VNet
The Address Range of the first Subnet
FIGURE 4-2 Create Virtual Network Blade
The address space is the most critical configuration for a VNet in Azure. This
is the IP range for the entire network that will be divided into subnets. The
address space can be almost any IP range that you wish (public or private). You
can add multiple address spaces to a VNet. To ensure this VNet can be
connected to other networks, the address space should never overlap with any
other networks in your environment. If a VNet has an address space that
overlaps with another Azure VNet or on-premises network, the networks cannot
be connected, as the routing of traffic will not work properly.
Once the PowerShell session is authenticated to Azure, the first thing needed
will be a new resource group. Using the New-AzureRmResourceGroup cmdlet,
you can create a new resource group. This cmdlet requires you to specify the
resource group name as well as the name of the Azure region. These values are
defined in the variables $rgName and $location.
$rgName = "ExamRefRGPS"
$location = "Central US"
New-AzureRmResourceGroup -Name $rgName -Location $location
If you want to use an existing resource group, you can use the Get-
AzureRmResourceGroup cmdlet to see if the resource group exists. You can also
use the Get-AzureRmLocation cmdlet to view the list of available regions.
In the code example below, the New-AzureRmVirtualNetworkSubnetConfig
cmdlet is used to create two local objects that represent two subnets in the VNet.
The VNet is subsequently created with the call to New-
AzureRmVirtualNetwork. It is passed the address space of 10.0.0.0/16. You
could also pass in multiple address spaces like how the subnets were passed in
using an array. Notice how $subnets = @() creates an array and then the
array is loaded with two different commands using the New-
AzureRmVirtualNetworkSubnetConfig cmdlet. When the New-
AzureRmVirtualNetwork cmdlet is called in the last command of the script, the
two subnets are populated from the array values that have been loaded into
$subnets.
$subnets = @()
$subnet1Name = "Apps"
$subnet2Name = "Data"
$subnet1AddressPrefix = "10.0.0.0/24"
$subnet2AddressPrefix = "10.0.1.0/24"
$vnetAddressSpace = "10.0.0.0/16"
$VNETName = "ExamRefVNET-PS"
$rgName = "ExamRefRGPS"
$location = "Central US"
$subnets += New-AzureRmVirtualNetworkSubnetConfig -Name $subnet1Name `
                                                  -AddressPrefix $subnet1AddressPrefix
$subnets += New-AzureRmVirtualNetworkSubnetConfig -Name $subnet2Name `
                                                  -AddressPrefix $subnet2AddressPrefix
$vnet = New-AzureRmVirtualNetwork -Name $VNETName `
                                  -ResourceGroupName $rgName `
                                  -Location $location `
                                  -AddressPrefix $vnetAddressSpace `
                                  -Subnet $subnets
Now, let's walk through each command to create the VNet using
the Azure CLI Cloud Shell. To initiate the Azure CLI Cloud Shell, open the
Azure portal and then click the CLI symbol in the upper right-hand corner, as
seen in Figure 4-6.
After a few moments, the Cloud Shell will be ready, and you will see an
interactive bash prompt. In Figure 4-7, Azure Cloud Shell is ready to use with
your subscription.
The first step will be creating a new resource group for the VNet using the
Azure CLI. This will be accomplished using the az group create
command. You will need to specify a location for the resource group. To locate a
list of regions that are available for your subscription, you can use the command
az account list-locations.
az group create -n ExamRefRGCLI -l "centralus"
Next, you can create the new VNet using the az network vnet create
command.
az network vnet create --resource-group ExamRefRGCLI -n ExamRefVNET-CLI \
    --address-prefixes 10.0.0.0/16 -l "centralus"
Then, following the creation of the VNet, create the App and Data subnets.
This is accomplished using the az network vnet subnet create command. You
will run these commands one at a time for each subnet.
az network vnet subnet create --resource-group ExamRefRGCLI \
    --vnet-name ExamRefVNET-CLI -n Apps --address-prefix 10.0.1.0/24
az network vnet subnet create --resource-group ExamRefRGCLI \
    --vnet-name ExamRefVNET-CLI -n Data --address-prefix 10.0.2.0/24
After running these commands there should be a new resource group named
ExamRefRGCLI and the newly provisioned VNet named ExamRefVNET-CLI.
In Figure 4-8, you see ExamRefVNET-CLI, which was created in the
ExamRefRGCLI resource group. If you click the Subnets button you will see the
new App and Data subnets with the address ranges from the commands entered.
FIGURE 4-8 Virtual Network created using the Azure CLI Cloud Shell
Design subnets
A subnet is a child resource of a VNet that defines a segment of the address
space within the VNet. Subnets are created using CIDR blocks of the address
space that was defined for the VNet. NICs can be added to subnets and
connected to VMs, providing connectivity for various workloads.
The name of a subnet must be unique within that VNet, and you cannot change
a subnet's name following its creation. During the creation of a VNet
in the Azure portal, you are required to define one subnet (even
though a VNet isn't strictly required to have any subnets), and you can
define only that one subnet at creation time. You can add more subnets to
the VNet later, after it has been created, and you can create a VNet that has
multiple subnets from the start by using the Azure CLI or PowerShell.
When creating a subnet, the address range must be defined. The address range
of the new subnet must be within the address space you assigned for the VNet.
The range that is entered will determine the number of IP Addresses that are part
of the subnet.
EXAM TIP
Azure will hold back a total of 5 IP Addresses for every subnet that is created
in a VNet. Azure reserves the first and last IP addresses in each subnet, like
standard IP networks, with one for the network identification and the other for
broadcast. Azure also holds three additional addresses for internal use, starting
from the first address in the subnet. For example, if the CIDR range of a
subnet has its first IP as .0, then the first usable IP would be .4. So, if the
address range was 192.168.1.0/24, then 192.168.1.4 would be the first address
assigned to a NIC. Also, the smallest subnet allowed on an Azure VNet is a
CIDR /29. This provides 3 usable IP Addresses and 5 IP Addresses that
Azure uses.
Subnets provide the ability to isolate network traffic between various types of
workloads. These are often different types of servers or even tiers of
applications. Examples of this could include separating traffic bound for web
servers and database servers. These logical segmentations allow for clean
separations, so they can be secured and managed. This allows for very precise
application of rules securing data traffic as well as how traffic flows into and out
of a given set of VMs.
In Azure, the security rules are applied using network security groups, and the
traffic flows are controlled using route tables. Designing the subnets should be
completed upfront and should be considered while determining the address
space. Remember that for each subnet, Azure holds back 5 IP Addresses. If you
create a VNet with 10 subnets, you are losing 50 IP addresses to Azure. Careful
upfront planning is critical to not causing yourself a shortage of IPs later.
Changes to subnets and address ranges can only be made if there are no
devices connected to the subnet. If you wish to make a change to a subnet’s
address range, you would first have to delete all the objects in that subnet. If the
subnet is empty, you can change the range of addresses to any range that is
within the address space of the VNet not assigned to any other subnets.
Subnets can only be deleted from VNets if they are empty. Once a subnet is
deleted, the addresses that were part of that address range would be released and
available again for use within new subnets that you could create.
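For example, a sketch of changing the address range of an empty subnet with PowerShell, reusing the VNet and subnet names from earlier in this chapter and a hypothetical new range:
$vnet = Get-AzureRmVirtualNetwork -ResourceGroupName ExamRefRGPS -Name ExamRefVNET-PS
# The Data subnet must be empty before its range can be changed
Set-AzureRmVirtualNetworkSubnetConfig -VirtualNetwork $vnet `
                                      -Name "Data" `
                                      -AddressPrefix "10.0.3.0/24"
Set-AzureRmVirtualNetwork -VirtualNetwork $vnet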
Subnets have the following properties: Name, Location, Address range,
Network security group, Route table and Users. Table 4-1 discusses each of
these properties.
Gateway subnets
The basis for deploying hybrid clouds is the connection of an on-premises
network along with an Azure VNet. This configuration allows clients and servers
deployed in Azure to communicate with those in your datacenter and network.
To deploy this type of connection, a VPN Gateway needs to be created in Azure.
All VPN Gateways must be placed into a special gateway subnet.
The gateway subnet contains the IP addresses the VPN Gateway VMs and
services will use. When you create your VPN Gateway, special Azure managed
VMs are deployed to the gateway subnet, and they are configured with the
required VPN Gateway settings. Only the VPN Gateways should be deployed to
the gateway subnet and its name must be “GatewaySubnet” to work properly.
When you create the gateway subnet, you are required to specify the number
of IP addresses available using an address range. The IP addresses in the
gateway subnet will be allocated to the gateway VMs and services. It’s
important to plan ahead because some configurations require more IP addresses
than others. For example, if you plan on using ExpressRoute and a Site to Site
VPN as a failover, you will need more than just two IPs. You can create a
gateway subnet as small as /29, but it's Microsoft's recommendation to create a
gateway subnet of /28 or larger (i.e., /28, /27, /26). That way, if you add
functionality in the future, you won't have to tear down your gateway and then
delete and recreate the gateway subnet to allow for more IP addresses.
Once the Gateway subnet is added, the VPN Gateway can be created and
placed into this subnet. Many network administrators will create this address
range much further away from their subnets in terms of the IP Addressing.
Figure 4-10 shows the GatewaySubnet created using a CIDR block of
10.0.100.0/28. The other subnets, Apps and Data, use /24 CIDR blocks.
In this case the GatewaySubnet is 98 subnets away from the others. This is not
required, as the GatewaySubnet could be any CIDR address range belonging to
the address space of the VNet. This provides for a continuation of the
subnet scheme put in place if the admin wanted to build additional subnets; the
next logical subnet would be 10.0.2.0/24, and so forth as more are created.
FIGURE 4-10 GatewaySubnet after being created using the Azure Portal
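A gateway subnet like the one in Figure 4-10 can also be added with PowerShell. A minimal sketch, reusing the ExamRefVNET-PS network created earlier:
$vnet = Get-AzureRmVirtualNetwork -ResourceGroupName ExamRefRGPS -Name ExamRefVNET-PS
# The subnet name must be exactly "GatewaySubnet" for the VPN Gateway to work
Add-AzureRmVirtualNetworkSubnetConfig -Name "GatewaySubnet" `
                                      -VirtualNetwork $vnet `
                                      -AddressPrefix "10.0.100.0/28"
Set-AzureRmVirtualNetwork -VirtualNetwork $vnet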
Requirement                                               Recommended DNS infrastructure
Name resolution between role instances or VMs located    Azure provided DNS
in the same cloud service or VNet
Name resolution between role instances or VMs located    Customer managed DNS
in different VNets
Resolution of on-premises computer and service names     Customer managed DNS
from role instances or VMs in Azure
Resolution of Azure hostnames from on-premises           Customer managed DNS
computers
Reverse DNS for internal IPs                              Customer managed DNS
When using your own DNS servers, Azure provides the ability to specify
multiple DNS servers per VNet. Once in place, this configuration will cause the
Azure VMs in the VNet to use your DNS servers for name resolution services.
You must restart the VMs for this configuration to update.
You can alter the DNS Servers configuration for a VNet using the Azure
portal, PowerShell or Azure CLI.
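For example, a sketch of replacing a VNet's DNS servers with PowerShell; the DNS server IP addresses here are hypothetical:
$vnet = Get-AzureRmVirtualNetwork -ResourceGroupName ExamRefRGPS -Name ExamRefVNET-PS
# Point the VNet at two (hypothetical) custom DNS servers
$vnet.DhcpOptions.DnsServers = @("10.0.0.4", "10.0.0.5")
Set-AzureRmVirtualNetwork -VirtualNetwork $vnet
# Remember: existing VMs must be restarted to pick up the new servers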
Figure 4-13 shows an example of how these system routes make it easy to get
up and running. System routes provide for most typical scenarios by default, but
there are use cases where you will want to control the routing of packets.
One of these scenarios is when you want to send traffic through a virtual
appliance such as a third-party load balancer, firewall, or router deployed into
your VNet from the Azure Marketplace.
To make this possible, you must create User Defined Routes (UDRs). These
UDRs specify the next hop for packets flowing to a specific subnet through your
appliance instead of following the system routes. As seen in Figure 4-14, by
using the UDR, traffic will be routed through the device to the destination.
FIGURE 4-14 N-Tier Application Deployed with a Firewall using User
Defined Routes
EXAM TIP
You can have multiple route tables, and the same route table can be associated
to one or more subnets. Each subnet can only be associated to a single route
table. All VMs in a subnet use the route table associated to that subnet.
Figure 4-15 shows a UDR that has been created to allow for traffic to be
directed to a virtual appliance. In this case, it would be a Firewall running as a
VM in Azure in the DMZ subnet.
FIGURE 4-15 User Defined Route forcing network traffic through firewall
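A route like the one shown in Figure 4-15 can be built with PowerShell. The following is a minimal sketch; the firewall's IP address 10.0.2.4 and the route table name are hypothetical:
# Route all outbound traffic through a firewall appliance at 10.0.2.4
$route = New-AzureRmRouteConfig -Name "ToFirewall" `
                                -AddressPrefix "0.0.0.0/0" `
                                -NextHopType VirtualAppliance `
                                -NextHopIpAddress "10.0.2.4"
$routeTable = New-AzureRmRouteTable -Name "ExamRefRouteTable" `
                                    -ResourceGroupName ExamRefRGPS `
                                    -Location "Central US" `
                                    -Route $route
# Associate the route table with the Apps subnet
$vnet = Get-AzureRmVirtualNetwork -ResourceGroupName ExamRefRGPS -Name ExamRefVNET-PS
Set-AzureRmVirtualNetworkSubnetConfig -VirtualNetwork $vnet -Name "Apps" `
                                      -AddressPrefix "10.0.0.0/24" `
                                      -RouteTable $routeTable
Set-AzureRmVirtualNetwork -VirtualNetwork $vnet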
FIGURE 4-16 VNet peering between two networks in the North Central
Region
Creating a VNet peering using the Azure portal
The VNets you wish to peer must already be created to establish a VNet peering.
To create a new VNet peering from VNETA to VNETB as shown in Figure 4-
17, connect to the Azure portal and locate VNETA. Once located, under
Settings, click Peerings, and then select +Add. This will load the Add peering
blade. Use the following inputs to create a standard VNet peering:
Name VNETA-to-VNETB
Peer Details Resource Manager (leave the I Know My Resource ID
unchecked)
Subscription Select the Subscription for VNETB
Virtual Network Choose VNETB
Configuration Enabled (leave the remaining three boxes unchecked for
this simple VNet Peering)
FIGURE 4-17 Adding peering from VNETA to VNETB using the Portal
Once this process has been completed, the VNet peering will appear in the
portal with a peering status of Initiated, as seen in Figure 4-18. To
complete the VNet peering, follow the same steps on VNETB.
Once the portal has completed the provisioning of the VNet peering, it will
appear in the peerings of VNETB and show as Connected with a peer of VNETA,
as seen in Figure 4-20. Now the two VNets, VNETA and VNETB, are peers, and
VMs on these networks can see each other and are accessible, as if they were
one Virtual Network.
FIGURE 4-20 VNETB-to-VNETA Peering showing as Connected in the
Azure Portal
In Figure 4-21, the peering blade of VNETA shows that the peering
VNETA-to-VNETB is also Connected.
$vnetb = Get-AzureRmVirtualNetwork `
            -Name "VNETB" `
            -ResourceGroupName "VNETBRG"
# Peer VNETA to VNETB: the output from the command to find the Resource ID
# for VNETA and VNETB is used with the --remote-vnet-id argument
az network vnet peering create --name VNETA-to-VNETB --resource-group VNETARG \
    --vnet-name VNETA --allow-vnet-access --remote-vnet-id \
    /subscriptions/111111111111-11111111-111111111111/resourceGroups/VNETBRG/providers/Microsoft.Network/virtualNetworks/VNETB
# Peer VNETB to VNETA: the output from the command to find the Resource ID
# for VNETA is used with the --remote-vnet-id argument
az network vnet peering create --name VNETB-to-VNETA --resource-group VNETBRG \
    --vnet-name VNETB --allow-vnet-access --remote-vnet-id \
    /subscriptions/111111111111-11111111-111111111111/resourceGroups/VNETARG/providers/Microsoft.Network/virtualNetworks/VNETA
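The same peering can be created end to end in PowerShell as well. A minimal sketch using the Add-AzureRmVirtualNetworkPeering cmdlet with the resource group names above:
$vneta = Get-AzureRmVirtualNetwork -Name "VNETA" -ResourceGroupName "VNETARG"
$vnetb = Get-AzureRmVirtualNetwork -Name "VNETB" -ResourceGroupName "VNETBRG"
# Peer in both directions; each side references the other VNet's resource ID
Add-AzureRmVirtualNetworkPeering -Name "VNETA-to-VNETB" `
                                 -VirtualNetwork $vneta `
                                 -RemoteVirtualNetworkId $vnetb.Id
Add-AzureRmVirtualNetworkPeering -Name "VNETB-to-VNETA" `
                                 -VirtualNetwork $vnetb `
                                 -RemoteVirtualNetworkId $vneta.Id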
NSG Rules
NSG rules are the mechanism for defining the traffic that the administrator
wants to control. Table 4-3 captures the important information to understand
about NSG rules.
Default Rules
All NSGs have a set of default rules, as shown in Table 4-5 and Table 4-6. These
default rules cannot be deleted, but since they have the lowest possible priority,
they can be overridden by the rules that you create. The lower the number, the
sooner it will take precedence.
The default rules allow and disallow traffic as follows:
Virtual network Traffic originating and ending in a Virtual Network is
allowed both in inbound and outbound directions.
internet Outbound traffic is allowed, but inbound traffic is blocked.
Load balancer Allow Azure’s load balancer to probe the health of your
VMs and role instances. If you are not using a load balanced set, you can
override this rule.
NAME                           PRIORITY  SOURCE             DESTINATION  PROTOCOL
AllowAzureLoadBalancerInBound  65001     AzureLoadBalancer  *            *
DenyAllInBound                 65500     *                  *            *
DenyAllOutBound                65500     *                  *            *
EXAM TIP
NSG Rules are enforced based on their Priority. Priority values start from 100
and go to 4096. Rules will be read and enforced starting with 100 then 101,
102 etc., until all rules have been evaluated in this order. Rules with the
priority “closest” to 100 will be enforced first. For example, if you had an
inbound rule that allowed TCP traffic on Port 80 with a priority of 250 and
another that denied TCP traffic on Port 80 with a priority of 125, the NSG rule
of deny would be put in place. This is because the “deny rule”, with a priority
of 125 is closer to 100 than the “allow rule”, containing a priority of 250.
Default Tags
Default tags are system-provided identifiers to address a category of IP
addresses. You can use default tags in the source address prefix and
destination address prefix properties of any rule.
There are three default tags you can use:
VirtualNetwork (Resource Manager) (VIRTUAL_NETWORK for
classic) This tag includes the Virtual Network address space (CIDR ranges
defined in Azure), all connected on-premises address spaces, and connected
Azure VNets (local networks).
AzureLoadBalancer (Resource Manager)
(AZURE_LOADBALANCER for classic) This tag denotes Azure’s
infrastructure load balancer. The tag translates to an Azure datacenter IP
where Azure’s health probes originate.
internet (Resource Manager) (INTERNET for classic) This tag denotes
the IP address space that is outside the Virtual Network and reachable by
public internet. The range includes the Azure owned public IP space.
Associating NSGs
NSGs are used to define the rules of how traffic is filtered for your IaaS
deployments in Azure. NSGs by themselves are not implemented until they are
"associated" with a resource in Azure. NSGs can be associated to ARM network
interfaces (NICs), which are associated to VMs, or to subnets.
For NICs associated to VMs, the rules are applied to all traffic to/from that
Network Interface where it is associated. It is possible to have a multi-NIC VM,
and you can associate the same or different NSG to each Network Interface.
When NSGs are applied to subnets, rules are applied to traffic to/from all
resources connected to that subnet.
EXAM TIP
Understanding the effective rules of NSGs is critical for the exam. Security
rules are applied to the traffic by priority in each NSG in the following order:
Inbound traffic:
NSG applied to subnet If a subnet NSG has a matching rule to deny
traffic, the packet is dropped.
NSG applied to NIC If a VM\NIC NSG has a matching rule that denies
traffic, packets are dropped at the VM\NIC, even if a subnet NSG has a
matching rule that allows traffic.
Outbound traffic:
NSG applied to NIC If a VM\NIC NSG has a matching rule that denies
traffic, packets are dropped.
NSG applied to subnet If a subnet NSG has a matching rule that denies
traffic, packets are dropped, even if a VM\NIC NSG has a matching rule
that allows traffic.
After AppsNSG is created, the portal opens the Overview blade. Here, you see
that the NSG has been created, but there are no inbound or outbound security
rules beyond the default rules. In Figure 4-23, the Inbound Security Rules blade
of the AppsNSG is shown.
FIGURE 4-23 The Inbound Security Rules showing only the Default Rules
The next step is to create the inbound rule for HTTP. Under the Settings area,
click the Inbound Security Rules link, and then click +Add to
allow HTTP traffic on Port 80 into the Apps subnet. In the Add inbound security
rule blade, configure the following items, and click OK as seen in Figure 4-24.
Source Any
Source Port Ranges *
Destination IP Addresses
Destination IP Addresses/CIDR Ranges The Apps Subnet: 10.0.0.0/24
Destination Port Ranges 80
Protocol TCP
Action Allow
Priority 100
Name Port_80_HTTP
FIGURE 4-24 Adding an Inbound Rule to allow HTTP traffic
Once the portal has saved the inbound rule, it will appear in the portal. Review
your rule to ensure it has been created correctly. This NSG, with its default rules
and the newly created inbound rule named Port_80_HTTP, is not yet filtering any
traffic. It has yet to be associated with a subnet or a network interface, so the
rules are currently not in effect. The next task is to associate it with the
Apps subnet. In the Azure portal, under Settings, click Subnets, and then click
+Associate. The portal will ask for two configurations: the name of the Virtual
Network and the name of the subnet. In Figure 4-25, the VNet
ExamRefVNET and subnet Apps have been selected.
FIGURE 4-25 The AppsNSG has been associated with the Apps Subnet
After being saved, the rules of the NSG are now being enforced for all
network interfaces that are associated with this subnet. This means that TCP
traffic on Port 80 is allowed for all VMs that are connected to this subnet. Of
course, you need to have a webserver VM configured and listening on Port 80 to
respond, but with this NSG, you have opened the ability for Port 80 traffic to
flow to the VMs in this subnet from any other subnet in the world.
IMPORTANT NSGS
Remember that NSGs can be associated with network interfaces as well as
subnets. For example, if a webserver is connected to this Apps subnet and it
didn’t have an NSG associated with its network interface, the traffic would
be allowed. If the VM had an NSG associated to its network interface, an
inbound rule configured exactly like the PORT_80_HTTP rule created here
would be required to allow the traffic through. To learn how to work with
NSGs associated to network interfaces, see Skill 4.3, "Configure ARM VM
Networking."
After the NSG has been created along with the inbound rule, next you need to
associate this with the subnet to control the flow of network traffic using this
filter. To achieve this goal, you need to use Get-AzureRmVirtualNetwork and
the Set-AzureRmVirtualNetworkSubnetConfig. After the configuration on the
subnet has been set, use Set-AzureRmVirtualNetwork to save the configuration
in Azure.
#Associate the Rule with the Subnet Apps in the Virtual Network ExamRefVNET-PS
$vnet = Get-AzureRmVirtualNetwork -ResourceGroupName ExamRefRGPS -Name ExamRefVNET-PS
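The snippet above stops after loading the VNet; a minimal sketch of the remaining steps described in the text (fetching the NSG, attaching it to the Apps subnet, and saving the configuration) might look like this:
$nsg = Get-AzureRmNetworkSecurityGroup -Name "AppsNSG" -ResourceGroupName ExamRefRGPS
# Attach the NSG to the Apps subnet and save the VNet configuration
Set-AzureRmVirtualNetworkSubnetConfig -VirtualNetwork $vnet -Name "Apps" `
                                      -AddressPrefix "10.0.0.0/24" `
                                      -NetworkSecurityGroup $nsg
Set-AzureRmVirtualNetwork -VirtualNetwork $vnet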
FIGURE 4-26 The Virtual Network is selected from a list of those in the same
subscription and Azure Region.
EXAM TIP
Availability sets are used to inform the Azure fabric that two or more of your
VMs are providing the same workload and thus should not be susceptible to
the same fault or update domains. If you select an availability set during the
creation of a VM in the portal, you can only deploy your VM to the VNet
where the other VMs are deployed, and the option to create a new VNet is
removed.
After you select the VNet where the VM is connected, you are required to
specify the subnet. In Figure 4-27, the subnet choices are only those that are
within the VNet that was selected previously.
EXAM TIP
VMs cannot be moved from one VNet to another without deleting and
recreating them, but it is possible to move a VM to a different subnet
within the same VNet. This is done by changing the IP configuration of the
NIC. If the VM has a static IP address, it must be changed to dynamic prior
to the move, because the static IP address would fall outside the address
range of the new subnet.
EXAM TIP
App Gateway performs load balancing using a round-robin scheme, and its
load balancing is accomplished at Layer 7. This means it only handles
HTTP(S) and WebSocket traffic. This is different from the Azure load
balancer, which works at Layer 4 for many different types of TCP and UDP
traffic. App Gateway can offload SSL traffic, handle cookie-based session
affinity, and act as a Web Application Firewall (WAF).
Web Application Firewall (WAF)
A key capability of App Gateway is acting as a web application firewall (WAF).
When enabled, the WAF provides protection to web applications from common
web vulnerabilities and exploits. These include common web-based attacks such
as cross-site scripting, SQL injection attacks and session hijacking.
FIGURE 4-28 Creating a New Application Gateway using the Azure Portal
The Create Application Gateway blade opens. Next complete the basics, such
as the name, tier (Standard or WAF), size of the App Gateway, and number of
instances of the App Gateway, among others, as shown in Figure 4-29.
FIGURE 4-29 Completed Basics blade for App Gateway
The third step is the Settings blade, where critical information is collected
regarding how the App Gateway will be deployed. The first selection is the
Virtual Network where it will be deployed. A subnet will be selected next.
Remember, the subnet will already have to be created before creating the App
Gateway. In Figure 4-30, the App Gateway is being deployed to the
ExamRefVNET into a subnet called AppGateway using a /26. Next will be the
Frontend IP Configuration (this is important if this App Gateway will be made
available to the internet or only the Intranet). Here you can select Public and
then create a New Public IP Address. Additional selections will be the Protocol,
Port, and WAF configurations.
FIGURE 4-30 Completed Settings blade for an internet facing App Gateway
The next step is to create the Public IP Address that will be used by the App
Gateway. This code uses the New-AzureRMPublicIpAddress cmdlet. It is
important to note that you can’t use a Static IP Address with the App Gateway.
# Create a Public IP address that is used to connect to the application gateway
$publicip = New-AzureRmPublicIpAddress -ResourceGroupName ExamRefRGPS `
                                       -Name ExamRefAppGW-PubIP `
                                       -Location "Central US" `
                                       -AllocationMethod Dynamic
The following commands are then used to create the various configurations for
the App Gateway. Each of these commands uses different cmdlets to load these
configurations into variables that are ultimately passed to the New-
AzureRmApplicationGateway cmdlet. Upon completion of the App Gateway,
the last command will set the WAF configuration.
# Create a gateway IP configuration. The gateway picks up an IP address from the configured subnet
FIGURE 4-31 Remote Developers and Testers connecting to Azure VNet using
P2S
There is a variation of this S2S network where you create more than one VPN
connection from your VPN Gateway, typically connecting to multiple on-
premises sites. This is known as a Multi-Site S2S connection. When working
with multiple connections, you must use a route-based VPN type. Because each
VNet can only have one VPN Gateway, all connections through the gateway
share the available bandwidth. In Figure 4-33, you see an example of a network
with three sites and two VNets in different Azure regions.
FIGURE 4-33 Multi-Site S2S Network with three locations and two Azure
VNets
Azure ExpressRoute
ExpressRoute lets you connect your on-premises networks into the Microsoft
cloud over a private connection hosted by a Microsoft ExpressRoute provider.
With ExpressRoute, you can establish connections to Microsoft cloud services,
such as Microsoft Azure, Office 365, and Dynamics 365.
ExpressRoute is a secure and reliable private connection. Network traffic does
not egress to the internet. The latency for an ExpressRoute circuit is predictable
because traffic stays on your provider’s network and never touches the internet.
Connectivity can be from a Multiprotocol Label Switching (MPLS) any-to-
any IPVPN network, a point-to-point Ethernet network, or a virtual cross-
connection through a connectivity provider at a co-location facility. Figure 4-34
shows the options for connecting to ExpressRoute.
FIGURE 4-34 Examples of ExpressRoute Circuits
Each ExpressRoute circuit has two connections to two Microsoft edge routers
from your network edge. Microsoft requires dual BGP connections from your
edge to each Microsoft edge router. You can choose not to deploy redundant
devices or ethernet circuits at your end; however, connectivity providers use
redundant devices to ensure that your connections are handed off to Microsoft in
a redundant manner. Figure 4-35 shows a redundant connectivity configuration.
FIGURE 4-35 Multiple Cities Connected to ExpressRoute in Two Azure
Regions
EXAM TIP
ExpressRoute supports the following speeds: 50 Mbps, 100 Mbps, 200 Mbps,
500 Mbps, 1 Gbps, 2 Gbps, 5 Gbps, and 10 Gbps.
There is also a premium add-on that can be enabled if your network is global
enterprise in nature. The following features are added to your ExpressRoute
circuit when the premium add-on is enabled:
Increased routing table limit from 4000 routes to 10,000 routes for private
peering.
More than 10 VNets can be connected to the ExpressRoute circuit.
Connectivity to Office 365 and Dynamics 365.
Global connectivity over the Microsoft core network. You can now link a
VNet in one geopolitical region with an ExpressRoute circuit in another
region.
EXAM TIP
If the ExpressRoute Premium Add-on is enabled, you can link a VNet created
in a different part of the world to your ExpressRoute circuit. For example, you
can link a VNet in Europe West to an ExpressRoute circuit created in Silicon
Valley. This is also true for your public peering resources. An example of this
would be connecting to your SQL Azure database located in Europe West
from a circuit in New York.
Service Chaining
You can configure user-defined routes that point to VMs in peered VNets as the
"next hop" IP address to enable service chaining. Service chaining enables you
to direct traffic from one VNet to a virtual appliance in a peered Virtual Network
through user-defined routes. Figure 4-39 provides a view of a network where
service chaining is implemented.
FIGURE 4-39 Service chaining allows for the use of common services across
VNet Peerings
FIGURE 4-42 Adding a GatewaySubnet to VNETC
This process needs to be repeated for VNETC. Open VNETC in the Azure
portal, as shown in Figure 4-46. Under Settings, locate Connections and click it
to open. When the Connections blade opens, click +Add.
Complete the VNETC Add connection blade by using the following inputs:
Name VNETC-VNETB-Conn1
Connection type VNet-to-VNet
Second Virtual Network Gateway VNETBGW
Shared Key A1B2C3D4E5 (any unique value matching on both sides)
FIGURE 4-46 Creating the Connection between VNETC to VNETB
FIGURE 4-49 Status of the Connection between VNETB and VNETC shown
as Connected
EXAM TIP
A private IP address is allocated from the subnet's address range that the
network interface is attached to. The address range of the subnet itself is a part of
the Virtual Network’s address space.
You can set the allocation method to static to ensure the IP address remains
the same. When you specify static, you specify a valid IP address that is part of
the resource’s subnet.
Static private IP addresses are commonly used for:
Virtual machines that act as domain controllers or DNS servers
Resources that require firewall rules using IP addresses
Resources accessed by other apps/resources through an IP address
All VMs on a VNet are assigned a private IP address. If the VM has multiple
NICs, a private IP address is assigned to each one. You can specify the
allocation method as either dynamic or static for a NIC.
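For example, a minimal sketch of switching a NIC's first IP configuration to a static private address with PowerShell; the NIC name and IP address here are hypothetical:
$nic = Get-AzureRmNetworkInterface -ResourceGroupName ExamRefRGPS -Name "examrefwebvm1-nic"
# Pin the first IP configuration to a fixed address within the subnet's range
$nic.IpConfigurations[0].PrivateIpAllocationMethod = "Static"
$nic.IpConfigurations[0].PrivateIpAddress = "10.0.0.10"
Set-AzureRmNetworkInterface -NetworkInterface $nic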
All Azure VMs are assigned Azure Managed DNS servers by default, unless
custom DNS servers are assigned. These DNS servers provide internal name
resolution for VMs within the same VNet.
When you create a VM, a mapping for the hostname to its private IP address
is added to the Azure DNS servers. If a VM has multiple network interfaces, the
hostname is mapped to the private IP address of each NIC. VMs assigned Azure
DNS servers can resolve the hostnames of all VMs within the same VNet to their
private IP addresses.
Resource                Association        Supports Dynamic  Supports Static
Virtual Machine         Network Interface  Yes               Yes
Internal load balancer  FrontEnd Config    Yes               Yes
App Gateway             FrontEnd Config    Yes               Yes
Public IP Address
Public IP addresses allow Azure resources to communicate with the internet and
with public-facing Azure services. Public IP addresses can be created with an
IPv4 or IPv6 address. Only internet-facing load balancers can be assigned a
Public IPv6 address.
A Public IP address is an Azure Resource that has its own properties, in the
same way a VM or VNet is a Resource. Some of the resources you can associate
a Public IP address resource with include:
Virtual machine via network interfaces
internet-facing load balancers
VPN Gateways
Application gateways
Like private IP addresses, there are two methods by which an IP address is
allocated to an Azure Public IP address resource: dynamic or static. The default
allocation method is dynamic; in that case, an IP address is not allocated by the
Azure fabric at the time the Public IP address resource is created.
EXAM TIP
Dynamic Public IP addresses are released when you stop (or delete) the
resource. After being released from the resource, the IP address may be
assigned to a different resource by Azure. If the IP address is assigned to a
different resource while your resource is stopped, once you restart the
resource, a different IP address will be assigned. If you wish to retain the IP
address, change the Public IP address to static; a static address is assigned
immediately and never changes.
If you change the allocation method to static, you as the administrator cannot
specify the actual IP address assigned to the Public IP address resource. Azure
assigns the IP address from a pool of IP addresses in the Azure region where the
resource is located.
Static Public IP addresses are commonly used in the following scenarios:
When you must update firewall rules to communicate with your Azure
resources.
DNS name resolution, where a change in IP address would require updating
host or A records.
Your Azure resources communicate with other apps or services that use an
IP address-based security model.
You use SSL certificates linked to an IP address.
One unique property of the Public IP address is the DNS domain name label.
This allows you to create a mapping for a configured, fully qualified domain
name (FQDN) to your Public IP address. You must provide an Azure globally
unique host name that consists of 3-24 alphanumeric characters; Azure then
appends the domain, creating the FQDN.
EXAM TIP
When you add a DNS Name to your Public IP address, the name will follow
this pattern: hostname.region.cloudapp.azure.com. For example, if you create
a public IP resource with contosowebvm1 as a DNS Name, and the VM was
deployed to a VNet in the Central US region, the fully-qualified domain name
(FQDN) of the VM would be: contosowebvm1.centralus.cloudapp.azure.com.
This DNS name would resolve on the public internet as well as your VNet to
the Public IP address of the resource. You could then use the FQDN to create
a custom domain CNAME record pointing to the Public IP address in Azure.
If you owned the contoso.com domain, you could create a CNAME record of
www.contoso.com to resolve to
contosowebvm1.centralus.cloudapp.azure.com. That DNS name would
resolve to your Public IP Address assigned by Azure. The client traffic would
be directed to the public IP Address associated with that name.
Table 4-10 shows the specific property through which a Public IP address can
be associated to a top-level resource, and the possible allocation methods
(dynamic or static) that can be used.
Virtual machines
You can associate a Public IP address with any VM by associating it to its NIC.
Public IP addresses, by default, are set to dynamic allocation, but this can be
changed to static.
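As an illustration, the following is a sketch of creating a static Public IP address (with an optional DNS name label, as described earlier) and attaching it to an existing NIC with PowerShell; the resource names here are hypothetical:
$pip = New-AzureRmPublicIpAddress -Name "ExamRefVM-PubIP" `
                                  -ResourceGroupName ExamRefRGPS `
                                  -Location "Central US" `
                                  -AllocationMethod Static `
                                  -DomainNameLabel "contosowebvm1"
# Attach the public IP to the NIC's first IP configuration
$nic = Get-AzureRmNetworkInterface -ResourceGroupName ExamRefRGPS -Name "examrefwebvm1-nic"
$nic.IpConfigurations[0].PublicIpAddress = $pip
Set-AzureRmNetworkInterface -NetworkInterface $nic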
VPN Gateways
An Azure VPN Gateway connects an Azure VNet to other Azure VNets or to an
on-premises network. A Public IP address is required to be assigned to the VPN
Gateway to enable it to communicate with the remote network. You can only
assign a dynamic Public IP address to a VPN Gateway.
Application gateways
You can associate a Public IP address with an Azure App Gateway by assigning
it to the gateway’s frontend configuration. This Public IP address serves as a
load-balanced VIP. You can only assign a dynamic Public IP address to an
application gateway frontend configuration.
FIGURE 4-53 Custom DNS Servers for Network Interface configured using
the Portal
EXAM TIP
When you change your VNet settings, or the settings of individual network
interfaces, to point to your own custom DNS servers, you must restart the
VMs for the new settings to be assigned to each VM's operating system.
When a VM reboots, it re-acquires its IP address and the new DNS settings
take effect.
EXAM TIP
DNS servers specified for a Network Interface take precedence over those
specified for the VNet. This means that if you want specific machines on
your VNet to use a different DNS server, you can assign it at the NIC level,
and the VNet DNS server setting will be ignored by that VM.
Once the ExamRefWEBVM1-nsg has been created, the portal will open the
Overview blade. Here, you will see the NSG has been created, but there are no
inbound or outbound security rules beyond the default rules.
To create the inbound rule to allow port 80 select Inbound Security Rules
followed by +Add. For the Add inbound security rule, update using the
following details, as seen in Figure 4-55:
Source Any
Source port ranges *
Destination Any
Destination port ranges 80
Protocol Any
Action Allow
Priority 100
Name Port_80_HTTP
Description Allow HTTP
FIGURE 4-55 An Inbound Rule to Allow traffic on Port 80 is created
Once the portal has configured the inbound rule, it will appear in the portal.
Review your rule to ensure it has been created correctly. This NSG, with its
default rules and the newly created inbound rule named Port_80_HTTP, is
currently not filtering any traffic because it has yet to be associated with a
NIC. In Figure 4-56, the NIC of ExamRefWEBVM1 is selected by clicking
Network Interfaces under Settings, followed by +Associate. The portal asks
you to select the name of the NIC associated with ExamRefWEBVM1. The NSG
will immediately start filtering traffic.
Upon association of the NSG with the NIC, TCP traffic on port 80 is allowed
to this VM. Of course, you would need a webserver VM configured and
listening on port 80 to respond, but with this NSG in place, that traffic can
now flow to the VM.
#Define the inbound rule referenced below (values from the portal walkthrough above)
$rule1 = New-AzureRmNetworkSecurityRuleConfig -Name "Port_80_HTTP" `
    -Description "Allow HTTP" `
    -Access Allow -Protocol Tcp -Direction Inbound -Priority 100 `
    -SourceAddressPrefix * -SourcePortRange * `
    -DestinationAddressPrefix * -DestinationPortRange 80

#Create a new Network Security Group and add the HTTP rule
$nsg = New-AzureRmNetworkSecurityGroup -ResourceGroupName ExamRefRGPS `
    -Location centralus `
    -Name "ExamRefWEBVM1-nsg" `
    -SecurityRules $rule1
After the NSG is created, along with the inbound rule, next you need to
associate this with the NIC to control the flow of network traffic by using this
filter. To achieve this goal, use Get-AzureRmNetworkInterface. After the
configuration on the NIC has been set, use Set-AzureRmNetworkInterface to
save the configuration to the NIC.
#Associate the NSG with the NIC from ExamRefWEBVM1
$nic = Get-AzureRmNetworkInterface -ResourceGroupName ExamRefRGPS -Name examrefwebvm1892
$nic.NetworkSecurityGroup = $nsg
Set-AzureRmNetworkInterface -NetworkInterface $nic
Health Probes
At its core, the purpose of a load balancer is twofold: to spread traffic across a
farm of VMs that are providing a service so you don’t overload them and to
ensure that the VMs are healthy and ready to accept traffic.
The Azure load balancer can probe the health of your VMs deployed into a
VNet. When a probe against a VM fails, the VM is no longer considered able
to provide the service, so the load balancer marks it as an unhealthy instance
and stops sending new connections to it. Existing connections are not
terminated when the VM is removed from the pool of healthy instances, but
users with open connections to that VM could experience failures.
The Azure load balancer supports two types of probes for virtual machines:
TCP Probe This probe relies on a successful TCP session establishment to
a defined probe port. If the VM responds to the request to establish a simple
TCP connection on the port defined when creating the probe, the VM is
marked as healthy. For example, a TCP probe could be created connecting
to port 80. If the machine is active and allows connections on port 80, the
load balancer would be able to connect and the machine would pass the
probe. If for some reason the machine was stopped or the load balancer
could no longer connect to port 80, it would be marked as unhealthy.
HTTP Probe This probe is used to determine if a VM is serving webpages
without issues by using the HTTP protocol. When a webpage loads
successfully, the server returns the HTTP status code 200, which indicates
success. One of the configurations on the HTTP probe is the path to the file
used for the probe, which by default is "/". This tells the Azure load balancer
to load the default webpage from the VM. Often this would be default.aspx
or index.html, but you can configure this if you want to create your own page
to check the health of a site. Just returning the default page with a 200
doesn't provide deeper insight into the true functionality of your site, so
using some custom logic to determine the health of the site could make sense.
A developer would create a custom page and then you would configure the
load balancer to load that page. If it loads correctly, the 200 is returned and
the VM is put into the pool of available VMs to service client requests.
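For reference, the following PowerShell sketch shows how each probe type
might be defined; the probe names are assumptions for illustration:
#Define a TCP probe and an HTTP probe for the load balancer
$tcpProbe = New-AzureRmLoadBalancerProbeConfig -Name "TCPProbe" `
    -Protocol Tcp -Port 80 -IntervalInSeconds 5 -ProbeCount 2
$httpProbe = New-AzureRmLoadBalancerProbeConfig -Name "HTTPProbe" `
    -Protocol Http -Port 80 -RequestPath "/" `
    -IntervalInSeconds 5 -ProbeCount 2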
EXAM TIP
For both TCP and HTTP probes you can configure the interval and the
unhealthy threshold. The interval is the amount of time between the probe
attempts, or in other words how often Azure uses this probe. Unhealthy
threshold is the number of consecutive failures that must occur before the VM
is considered unhealthy.
To ensure your web servers are ready, you will need to add the HTTP probe.
To begin configuring the HTTP probe, select the Health probes link in Settings
and then +Add. As seen in Figure 4-60, provide a Name, select the HTTP
protocol, and accept the defaults of Port 80, Interval of 5, and Unhealthy
threshold of 2. Then click OK. Notice that there is an additional item named
Path, which is the location of a file or folder on the web server for the load
balancer to request.
FIGURE 4-60 Creating a HTTP Health Probe
Now that you have created the backend pool telling the load balancer which
machines are to be used, and you have configured the probes to help determine
which ones are healthy, you will now put it all together with the load balancing
rules. These rules help to bring these configurations together connecting the
Frontend to the Backend. To create the rule, click the load balancing rules link
under settings, and then select +Add. Complete the following configurations, as
seen in Figure 4-61:
Name ExamRefLBRule
IP Version IPv4
Frontend IP Address Select the Public IP Address
Protocol TCP
Port 80
Backend port 80
Backend pool Select the Pool you created
Health probe Select the probe you created earlier
Session Persistence None
Idle Timeout 4 minutes
Floating IP Disabled
FIGURE 4-61 Creating the Load Balancing Rule using the Backend Pool and
Health Probe
After this is put in place, if the VMs added to the backend pool are
configured with a web server and there are no network security groups or other
firewalls blocking port 80, you should be able to connect to the Public IP address
of the load balancer and see the webpage.
In this example, an internet-facing load balancer is created with a Public IP
address. It points to two web servers named ExamRefWEBVM1 and
ExamRefWEBVM2, which are part of an Availability Set called
ExamRefWebAVSet. Both VMs have one NIC connected to the Apps subnet of
the ExamRefVNET-PS VNet created in earlier steps.
# Set Variables
$publicIpName = "ExamRefLB-PublicIP-PS"
$rgName = "ExamRefRGPS"
$dnsPrefix = "examreflbps"
$location = "centralus"
$lbname = "ExamRefLBPS"
$vnetName = "ExamRefVNET-PS"
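The remainder of the script was presented as code images in the source; the
following is a minimal sketch of how these variables might be used to build the
load balancer, with the frontend, pool, probe, and rule names assumed for
illustration:
# Create the Public IP address and frontend configuration
$publicIp = New-AzureRmPublicIpAddress -Name $publicIpName -ResourceGroupName $rgName `
    -Location $location -AllocationMethod Dynamic -DomainNameLabel $dnsPrefix
$frontend = New-AzureRmLoadBalancerFrontendIpConfig -Name "LBFrontend" `
    -PublicIpAddress $publicIp

# Create the backend pool, health probe, and load balancing rule
$backendPool = New-AzureRmLoadBalancerBackendAddressPoolConfig -Name "LBBackendPool"
$probe = New-AzureRmLoadBalancerProbeConfig -Name "HTTPProbe" -Protocol Http -Port 80 `
    -RequestPath "/" -IntervalInSeconds 5 -ProbeCount 2
$rule = New-AzureRmLoadBalancerRuleConfig -Name "ExamRefLBRule" `
    -FrontendIpConfiguration $frontend -BackendAddressPool $backendPool `
    -Probe $probe -Protocol Tcp -FrontendPort 80 -BackendPort 80

# Create the load balancer itself
New-AzureRmLoadBalancer -Name $lbname -ResourceGroupName $rgName -Location $location `
    -FrontendIpConfiguration $frontend -BackendAddressPool $backendPool `
    -Probe $probe -LoadBalancingRule $rule
The NICs of ExamRefWEBVM1 and ExamRefWEBVM2 would then be added to
the backend pool to complete the configuration.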
FIGURE 4-63 App Gateway Backend Pool with two VMs added
After the portal reports that the App Gateway update is complete, you can
connect to the Public IP address of the App Gateway by using your web
browser. It does take a few minutes for the sites to come online, so be patient.
When these connections are created, they only allow the Web App to reach the
defined endpoint in the datacenter. They do not enable full communication
between the local network and the Web App running in Azure. Each hybrid
connection correlates to a single TCP host and port combination. This means
that the hybrid connection endpoint can be on any operating system and any
application, provided you are connecting to a TCP listening port. Hybrid
connections do not know or care what the application protocol is or what you
are accessing. They simply provide network access.
The hybrid connections feature consists of two outbound calls to Service Bus
Relay. There is a connection from a library on the host where your app is
running in App Service, and there is a connection from the Hybrid
Connection Manager (HCM) to Service Bus Relay. The HCM is a relay service
that you deploy within the network hosting the resource you want to reach.
Through the two joined connections your app has a TCP tunnel to a fixed
host:port combination on the other side of the HCM. The connection uses TLS
1.2 for security and SAS keys for authentication/authorization.
There are many benefits to the hybrid connections capability, including:
Apps can securely access on-premises systems and services
The feature does not require an internet-accessible endpoint
Each hybrid connection matches a single host:port combination, which
limits exposure and is good for security
It normally does not require firewall holes, because the connections are all
outbound over standard web ports
The feature runs at the network level, so it is agnostic to the language
used by your app and the technology used by the endpoint
Thought experiment
In this thought experiment, apply what you have learned about virtual networks
in this chapter. You can find answers to these questions in the Answers section
at the end of this chapter.
Your management team has named you as the lead architect to implement the
first cloud deployment in Contoso's history. There is a new web-based
application that runs on IIS using a SQL database that they want
implemented in Azure.
During a meeting with the application vendor and your manager, you have
gained a better understanding of the implementation needs and Contoso’s
requirements. The application must run on Azure VMs, and the SQL Server needs
to be implemented as an Always On Availability Group cluster. The vendor has
told you that the application supports multiple web front ends for high
availability. Your manager has mentioned multiple times how important security
is given this is the first cloud installation. During the conversation, she made it
clear that the Azure implementation should be secured using a multi-layered
approach using firewall rules, and that it must be deployed using a web
application firewall (WAF).
At the end of the meeting your manager also mentioned that as a part of this
project you should implement a permanent low latency connection between your
primary datacenter and Azure, as there are many follow-on projects after this
one. It is also important that all servers be able to communicate using their
host names rather than IP addresses, because the solution must support
authentication against your Active Directory domain controllers. The onsite
network is a class A 10.0.0.0/16 network, but you do have access to 8 class C
public addresses provided by your network service provider and registered in
your company's name with ARIN.
1. Given that the solution required VMs, the configuration will require a
VNet. What should you consider with respect to the address space of the
VNet? What address space will you use? Also, what subnets should you
create to support the requirements? What are the CIDR ranges for these
subnets?
2. Where would each tier of the application be deployed using the subnets
that you have defined? How will you secure these subnets and VMs?
3. What type of connection will be created between your on-premises
datacenter and Azure? How will DNS Services be implemented?
4. What is the basic architecture for the application?
Chapter summary
This chapter covered the many topics that make up Virtual Networks in Azure.
These topics range from designing and implementing Virtual Networks to
connecting Virtual Networks to other Virtual Networks. Configuring Azure VMs
for use with Virtual Networks was also covered, including how to secure them
using network security groups, which act as basic firewalls. You also reviewed
deploying web applications, both internet- and intranet-facing, by using the
Azure load balancer and the Azure Application Gateway. This chapter also
discussed the different options for connecting on-premises networks to Azure,
including Site-to-Site VPNs and ExpressRoute.
Below are some of the key takeaways from this chapter:
Azure Virtual Networks are isolated cloud networks using IP address
space that you define, and they are required for deploying virtual machines
in Azure.
Subnets allow you to isolate workloads and can be used with network
security groups to create firewall rules.
The GatewaySubnet is a special subnet that is only used for VPN
Gateways.
Azure provides DNS services, but a customer can implement their own
DNS servers. The DNS servers can be configured either at the VNet or the
network interface level.
The Azure Application Gateway is a Layer 7 load balancer that can offload
SSL traffic, provide web application firewall services, and URL based
routing.
Azure VNets can be connected to each other either by using peering or
VPN tunnels.
VNet peering allows VMs in the peered VNets to communicate as if they
were on one network, but the relationships are non-transitive. If VNETA
and VNETB are peered and VNETB and VNETC are peered, VNETA and
VNETC are not peered.
There are three types of hybrid connections with Azure: Point-to-Site,
Site-to-Site, and ExpressRoute.
VPN Gateways make hybrid connections possible and choosing the correct
one should be based on the throughput that is required and the type of
connection, but most connections are route-based.
BGP Routing is used for ExpressRoute and Multi-Site VPN connections.
ExpressRoute is only available in certain cities around the world and has a
premium add-on to support large global networks.
Public and private IP addresses have two allocation methods: dynamic or
static.
Public IPs can be assigned to VMs, VPN Gateways, internet-facing load
balancers or Application Gateways.
User Defined Routes change the default behavior of subnets allowing you
to direct the traffic to other locations. Typically, traffic is sent through a
virtual appliance such as a firewall. If traffic is sent to a virtual appliance,
IP forwarding must be enabled on the NIC of the VM.
The Azure load balancer can be used for internet or intranet workloads
providing web based applications in a highly available configuration.
Health probes are used to ensure the VMs are ready to accept traffic.
Direct Server Return is an Azure load balancer configuration that is used
with SQL Server Always On Availability group clusters deployed on VMs
in an Azure VNet.
Hybrid connections in Azure are a specific type of connection that allows
Azure App Service apps to connect to on-premises resources, such as
databases, without the need for a VPN. These are different from the hybrid
cloud connections that are created by using S2S VPNs.
CHAPTER 5
Design and deploy ARM templates
The Azure Resource Manager (ARM) provides a central control plane through
which Azure resources can be provisioned and managed. When you use the Azure
portal to create and configure resources, you are using ARM. Likewise, if you
use command-line tools such as Azure PowerShell or the CLI, you are using
ARM, because these tools simply invoke the APIs that ARM exposes.
The Azure Resource Manager enables IT professionals and developers to
describe an infrastructure in a declarative way using JSON documents, known as
ARM templates, and then send those documents to ARM to provision the
resources described in the template. This is a significant advancement from the
approach of invoking a sequence of imperative commands to provision an
infrastructure. The declarative model for provisioning resources is arguably what
ARM is best known for. However, ARM is much more than a means for
provisioning complex infrastructures. It is also the control plane through which
access to resources, permissions, and policies are configured.
This chapter covers the Azure Resource Manager through the lens of an IT
professional responsible for implementing infrastructures on Azure, configuring
access to resources, and implementing built-in and custom policies to govern the
environment.
The schema file referenced in the $schema property defines five additional
elements for an ARM template file as described here:
contentVersion A four-part version number for the document, such as
1.0.0.0. This is required.
variables An object containing variable definitions used in the document.
This is optional.
parameters An object containing parameter values that can be passed in
when deploying the ARM template. This is optional.
resources An array containing resource definitions. This is required.
outputs An object containing values resulting from the deployment of an
ARM template. For example, if a deployment provisions a public-facing
load balancer, you may choose to output the IP address of the load balancer.
This is optional.
The JSON code below is an example of a valid ARM template with all the
required and optional elements. You can name the ARM template file anything
you want; however, a common naming convention is to name it
azuredeploy.json, which is the filename used in this text.
{
"$schema": "http://schema.management.azure.com/schemas/2015-01
-01/deploymentTemplate.json#",
"contentVersion": "1.0.0.0",
"parameters": { },
"variables": { },
"resources": [ ],
"outputs": { }
}
The code in Listing 5-1 below shows how a virtual network resource could be
defined in the azuredeploy.json file. The brief narrative of the code added to
define the virtual network is as follows:
A parameter was added to allow for the name of the virtual network to be
parameterized for each deployment. The name of the parameter,
vnetName, is how the parameter value is referenced later in the resources
array. The parameter is of type string, with a default value of vnet. In the
resources array, the value of the parameter is accessed using the
parameters() function.
A few variables were added. The first is the vnetPrefix, which is
referenced in the resources section to define the address space for the
virtual network. The vnetAppsSubnetName and
vnetAppsSubnetPrefix variables provide a name and address space
for a subnet named Apps. Following are similar variables for a subnet
named Data. In the resources array, the values of these variables are
accessed using the variables() function.
The actual definition of the virtual network resource is added to the
resources array. The first four elements in the virtual network resource
definition are common to all resources. Their values are different, but every
resource must provide the following:
name The name of the resource, which can be any value. In this
scenario, the value is passed in as a parameter during template
deployment.
type The type of the resource is always in the format of <provider
namespace>/<resource type>. For the virtual network resource, the
provider namespace is Microsoft.Network and the resource type is
virtualNetworks.
apiVersion A resource type is defined in a resource provider. A
resource provider exposes the APIs and schema for the resource types
it contains. A resource provider takes on a new version number when
resource types are added, changed, deleted, and when there are schema
changes. As a result, resource providers usually have multiple API
versions. So, when defining a resource in an ARM template, you must
specify the version of the resource provider you wish to use.
location The location refers to the region to deploy the resource in,
such as East US, West US, West Europe, etc. It is common convention
for a resource to be located in the same region as the resource group it
is contained in. The resourceGroup() function exposes a location
property for this purpose.
Next is the properties element, where resource-specific properties can be set.
The properties element is present for most resources in Azure. However, its
shape (that is, the properties inside it) varies for each resource type. For the virtual
network resource, the properties element provides settings for the address space
of the virtual network and its subnets.
LISTING 5-1 The azuredeploy.json file after adding a virtual network resource
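The code image for Listing 5-1 is not reproduced in this text; the following is
a minimal sketch consistent with the narrative above, in which the apiVersion
and the address prefixes are assumptions:
{
  "$schema": "http://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "vnetName": { "type": "string", "defaultValue": "vnet" }
  },
  "variables": {
    "vnetPrefix": "10.0.0.0/16",
    "vnetAppsSubnetName": "Apps",
    "vnetAppsSubnetPrefix": "10.0.0.0/24",
    "vnetDataSubnetName": "Data",
    "vnetDataSubnetPrefix": "10.0.1.0/24"
  },
  "resources": [
    {
      "name": "[parameters('vnetName')]",
      "type": "Microsoft.Network/virtualNetworks",
      "apiVersion": "2017-06-01",
      "location": "[resourceGroup().location]",
      "properties": {
        "addressSpace": { "addressPrefixes": [ "[variables('vnetPrefix')]" ] },
        "subnets": [
          {
            "name": "[variables('vnetAppsSubnetName')]",
            "properties": { "addressPrefix": "[variables('vnetAppsSubnetPrefix')]" }
          },
          {
            "name": "[variables('vnetDataSubnetName')]",
            "properties": { "addressPrefix": "[variables('vnetDataSubnetPrefix')]" }
          }
        ]
      }
    }
  ],
  "outputs": { }
}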
The Azure platform provides hundreds of resource types that can be used to
define an infrastructure. As you learned in this section, a resource type is made
available through a resource provider. You can get a list of all the resource
providers using the Get-AzureRmResourceProvider Azure PowerShell cmdlet or
by using the az provider list CLI command. For each of the providers
returned, you can see the namespace, resource types, and API versions it
supports.
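For example, the following PowerShell command lists the resource types and
API versions exposed by the Microsoft.Network provider:
#List the resource types and API versions for the Microsoft.Network provider
Get-AzureRmResourceProvider -ProviderNamespace Microsoft.Network |
    Select-Object -ExpandProperty ResourceTypes |
    Select-Object ResourceTypeName, ApiVersions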
MORE INFORMATION
Documentation on Azure’s subscription and service limits, quotas, and
constraints is available at https://aka.ms/azurelimits.
The code in Listing 5-2 shows the azuredeploy.json file with a pair of NIC
resources added using ARM’s looping construct. A brief narrative of the code
added is as follows:
A parameter was added to allow for the name of the NIC to be
parameterized for each deployment.
A few variables were added. The vnetID variable uses the resourceId()
function to construct a string representing the unique resource ID of the
virtual network resource. The vnetID is then referenced in the definitions of
the variables nicAppsSubnetRef and nicDataSubnetRef, which use the concat()
function to construct references to the two subnets. These two variables are
then placed into an array variable called nicSubnetRefs that is referenced by
index in the NIC resource definition. The nicCount variable indicates the
number of NICs to be created.
The NIC resource definition is added after the virtual network resource
definition. In this resource definition, the dependsOn array is added with a
reference to the vnetID variable. The dependsOn element provides a way to
order the provisioning of resources defined in an ARM template. By
default, ARM attempts to provision resources in parallel. This results in
efficient deployment times. However, some resource types depend on the
presence of another resource type as part of their definition, which is the
case for the NIC resource. The NIC resource definition must specify the
subnet in the virtual network it will be bound to, which requires that the
virtual network resource already be provisioned. If you don’t indicate this
dependency in the dependsOn array, ARM will try to provision the NIC at
the same time it provisions the virtual network, resulting in a failed
deployment. ARM evaluates all the dependencies in an ARM template and
then provisions the resources according to the dependency graph.
The looping mechanism for creating multiple resources of this type is
provided by the copy element, where you need to provide only a name for
the copy operation and a count value, which is provided by the nicCount
variable. When ARM sees the copy element, it goes into a loop and
provisions a new resource in each of the iterations. In this scenario, there
are two iterations. A unique name must be provided for every resource,
which can be problematic when using this looping mechanism. To resolve
this conflict, the name makes use of the copyIndex() function to
append the iteration number of the resource being provisioned. In other
words, the copyIndex() returns the iteration number of the resource
type being provisioned in the loop. The copyIndex() function can only
be used in a resource definition if the copy object is also provided. The
complete name for each NIC is constructed using the concat() function
to concatenate the name provided as a parameter and then a number which
is the iteration number from copyIndex(). The ARM looping
mechanism is zero-based, which means copyIndex() will return 0 in the
first iteration of the loop. The ‘1’ passed to copyIndex() is used to
increment the zero-based iteration number by 1 so that the number added to
the end of the name begins with 1 instead of 0.
The copyIndex() function is also used to index back into the nicSubnetRefs
array variable. This results in the first NIC being bound to the Apps subnet
and the second NIC being bound to the Data subnet.
LISTING 5-2 The azuredeploy.json file after adding a pair of NIC resources
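The code image for this listing is likewise not reproduced; the fragment below
sketches the NIC resource with the copy construct described above (the nicName
parameter and the apiVersion are assumptions):
{
  "name": "[concat(parameters('nicName'), copyIndex(1))]",
  "type": "Microsoft.Network/networkInterfaces",
  "apiVersion": "2017-06-01",
  "location": "[resourceGroup().location]",
  "dependsOn": [ "[variables('vnetID')]" ],
  "copy": { "name": "nicCopy", "count": "[variables('nicCount')]" },
  "properties": {
    "ipConfigurations": [
      {
        "name": "ipconfig1",
        "properties": {
          "privateIPAllocationMethod": "Dynamic",
          "subnet": { "id": "[variables('nicSubnetRefs')[copyIndex()]]" }
        }
      }
    ]
  }
}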
LISTING 5-3 The azuredeploy.json file after adding the virtual machine
resource
Recall that some of the parameters defined in the template file included
default values. So, you only need to provide a value for a parameter if you
want a value different from the default, or if no default is provided and the
parameter is required. In this case, a different value for vmSize is provided,
and the adminUser value is provided because it is required. If a required
parameter is defined in a template and you don't provide a value for it in a
template parameters file, you are prompted to enter the value at the time you
deploy the template. This is another way to address the security concern
mentioned previously regarding the passing of credentials through a template
parameter: simply don't store the password in the template parameter file, and
the user is prompted to enter it at the time of deployment.
Then, in the template parameter file, define the settings for the virtual machine
as shown here.
"parameters": {
"adminUser": {
"value": "adminuser"
},
"vmSettings": {
"value": {
"vmName": "vm",
"vmSize": "Standard_A2"
}
}
}
In the ARM template file, you can then reference these parameter values using
the parameters() function and object notation as shown here.
"name": "[parameters('vmSettings').vmName]",
"type": "Microsoft.Compute/virtualMachines",
"apiVersion": "2017-03-30",
"location": "[resourceGroup().location]",
"dependsOn": [ "nicCopy" ],
"properties": {
"hardwareProfile": {
"vmSize": "[parameters('vmSettings').vmSize]"
},
For further details on this technique and examples on how to use objects as
parameters in ARM templates, see https://docs.microsoft.com/en-us/azure/architecture/building-blocks/extending-templates/objects-as-parameters.
resourceGroupName="contoso"
location="westus"
deploymentName="contoso-deployment-01"
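# The deployment commands that consume these variables were elided as a code
# image in the source; a minimal sketch, assuming the template files are named
# azuredeploy.json and azuredeploy.parameters.json:
az group create --name $resourceGroupName --location $location
az group deployment create --name $deploymentName \
    --resource-group $resourceGroupName \
    --template-file azuredeploy.json \
    --parameters @azuredeploy.parameters.json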
FIGURE 5-1 Custom deployment of ARM template file using the Azure portal
After the application registration is complete, you see the properties for the
application registration, as shown in Figure 5-2.
Using the Azure portal, you can create a new application registration from the
Azure Active Directory blade. In the Azure AD blade, click App Registrations.
In the App Registrations blade, click New Application Registration in the
toolbar. Provide a name, select the application type, and provide a sign-on URL,
which can be any unique URL in your Azure AD tenant, as shown in Figure 5-3.
When you create a new application registration using the Azure portal, an
associated service principal is created automatically for you. If you use the
command-line tools, you must explicitly create a service principal for your
application, which is covered in the next section.
After the service principal is created, you see the properties for the service
principal as shown in Figure 5-4.
FIGURE 5-5 The Keys blade for an application registered with Azure AD
Now you have all the pieces required for an application to authenticate itself
with Azure AD using the service principal. The permissions for the service
principal, such as being allowed to access the key vault, are configured with the
key vault instance. The application, such as the web application, authenticates to
Azure AD using the Application ID from the application registration and the
Key (client secret).
There is a significant number of developer-related tasks needed to complete
this scenario. Because this text is targeting the IT professional, those tasks are
not covered here. Instead, only the tasks that most IT professionals would
complete are covered. For a complete end-to-end description of all the tasks
required to complete this, including assigning permissions to the key vault for
the service principal, see https://docs.microsoft.com/en-us/azure/keyvault/keyvault-use-from-web-application.
MORE INFORMATION APPLICATIONS AND SERVICE
PRINCIPALS
It should be clear that there is a subtle, but very distinct, difference between
an application registration in Azure AD and a service principal. In some
documentation, these are sometimes referred to as one and the same. But
understanding the differences will aid the IT professional in creating robust
service principal and application registration configurations. For more details
on the differences between applications and service principals, see
https://docs.microsoft.com/en-us/azure/active-directory/develop/active-directory-application-objects.
FIGURE 5-7 Resource policies listed in the Subscription Policies blade in the
Azure portal
In the if section, there are two conditions defined. The first condition states
that the type field for a resource must match the namespace
Microsoft.Storage/storageAccounts. In other words, this policy rule only applies
to storage accounts. The second condition states that the storage account field,
enableBlobEncryption, is set to false. The allOf array states that all of the
conditions described in the array must match.
In the then section, the effect property tells Azure Resource Manager to
deny the request to provision the storage account.
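The policy rule itself was presented as a code image; a reconstruction
consistent with the description above would look like the following (the
enableBlobEncryption field alias is an assumption based on the provider
namespace):
{
  "if": {
    "allOf": [
      {
        "field": "type",
        "equals": "Microsoft.Storage/storageAccounts"
      },
      {
        "field": "Microsoft.Storage/storageAccounts/enableBlobEncryption",
        "equals": "false"
      }
    ]
  },
  "then": {
    "effect": "deny"
  }
}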
To create a custom resource policy using Azure PowerShell, use the New-
AzureRmPolicyDefinition cmdlet as shown in the following code (this assumes
the JSON above was saved to a JSON file):
$policyName = "denyStorageAccountWithoutEncryption"
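# The cmdlet call itself was elided as a code image in the source; a minimal
# sketch, assuming the JSON rule above was saved as policyrule.json:
New-AzureRmPolicyDefinition -Name $policyName `
    -Description "Deny storage accounts without blob encryption" `
    -Policy "./policyrule.json"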
To create a resource policy assignment at the subscription level for the custom
policy definition from the previous section using the CLI, use the az policy
assignment create command, as shown in the following code:
#!/bin/bash
policyName="denyStorageAccountWithoutEncryption"
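# The assignment command itself was elided as a code image; a minimal sketch,
# with the assignment name assumed and the subscription ID left as a placeholder:
az policy assignment create --name "denyUnencryptedStorageAssignment" \
    --policy $policyName \
    --scope "/subscriptions/<subscription-id>"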
Lock resources
For some infrastructure resources, such as a virtual network, ExpressRoute
circuit, or network virtual appliance, it is often desirable to lock down
resources to mitigate the risks of accidental deletion or modification. In
Azure, resource locks provide this capability and are available at two levels:
CanNotDelete This lock level indicates that a resource can be read and
modified, but it cannot be deleted.
ReadOnly This lock level indicates that a resource can be read, but it
cannot be modified or deleted.
Users assigned the Owner or User Access Administrator built-in role can
create and remove locks. Users in these roles are not excluded from the lock. For
example, if an owner creates a ReadOnly lock on a resource and then later tries
to modify the resource, the operation will not be allowed because of the lock that
is in place. The owner would have to remove the lock first, and then proceed to
modify the resource.
The scope of a resource lock can vary. It can be applied at a subscription
level, resource group, or as granular as a single resource. A resource lock applied
at the subscription level would apply to all resources in the subscription. For
example, if you have implemented a hub and spoke network topology where the
resources in the hub are in a separate subscription from the spokes, you may
want to apply a ReadOnly lock at the subscription level to protect against
accidental modification or deletion.
If a lock is scoped at the resource group level, it applies to all resources in the
resource group for which the lock has been applied. Resources in the resource
group inherit the resource group lock. If a resource is added to a resource group
where a resource group lock of CanNotDelete is already present, that new resource
inherits the CanNotDelete lock.
FIGURE 5-9 Creating a resource group level lock using the Azure portal
The Locks blade is where you can view and manage locks at a resource group
level or resource level. If you need to manage locks at a subscription level, open
the subscription blade and click Resource Locks, as shown in Figure 5-10.
FIGURE 5-10 Managing subscription level resource locks using the Azure
portal
Using the CLI, you can use the az lock create command to create a
new resource lock. The code below creates a resource lock on a virtual machine
resource.
#!/bin/bash
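# The command itself was elided as a code image in the source; a minimal
# sketch, reusing the VM names from earlier examples as assumptions:
az lock create --name "LockVMNoDelete" \
    --lock-type CanNotDelete \
    --resource-group "ExamRefRGPS" \
    --resource-name "ExamRefWEBVM1" \
    --resource-type "Microsoft.Compute/virtualMachines"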
Resource locks are proper resources just like any other resource you may
already be familiar with. A difference is that a lock is specific to a resource
unless it is scoped to the subscription.
You can also get a list of roles (custom and built-in) using the Azure portal.
Open the Subscription blade, or a Resource Group blade, or an individual
resource blade, and click Access Control (IAM) in the left navigation. Next,
click Roles in the toolbar at the top of the blade. This lists the roles that are
available, and the number of users and groups already assigned to each role. For
example, Figure 5-12 shows an abbreviated list of roles available for a resource
group named “contoso-app.”
FIGURE 5-12 Getting a list of built-in roles using the Azure portal
The actions allowed by the Owner role are defined as ‘*’, which means any
entity assigned to the Owner role for a resource has unrestricted permissions to
perform any action within the assigned scopes. The assigned scopes are defined
as ‘/’, which means this role is available to any subscription, resource group, and
resource.
The actions not allowed by the Owner role are defined as an empty array,
meaning there are no actions the Owner is not allowed to perform.
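You can inspect these definitions from PowerShell; for example, the following
sketch retrieves the Owner role definition:
#Retrieve the Owner role definition and show its key properties
Get-AzureRmRoleDefinition "Owner" |
    Format-List Name, Actions, NotActions, AssignableScopes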
The three actions not allowed by the Contributor role are defined as:
Microsoft.Authorization/*/Delete
Microsoft.Authorization/*/Write
Microsoft.Authorization/elevateAccess/Action
These actions are all part of the Microsoft.Authorization namespace. In the
first action, which is Microsoft.Authorization/*/Delete, the wildcard notation
indicates that this role cannot perform any delete operations on any resource type
in the Microsoft.Authorization namespace. To find all the delete operations in
the Microsoft.Authorization namespace, use the az provider operation
show command, as shown in Figure 5-15.
FIGURE 5-15 All Delete operations in the Microsoft.Authorization namespace
FIGURE 5-17 The role definition for the Reader built-in role
The same role assignment scoped to a resource group using the Azure CLI
could be implemented as follows:
#!/bin/bash
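# The command was elided as a code image; a minimal sketch, with the assignee
# UPN assumed for illustration:
az role assignment create --assignee "user@contoso.com" \
    --role "Reader" \
    --resource-group "contoso-app"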
The custom role definition above sets the assignable scope to a specific
subscription in the AssignableScopes array. This limits the role’s availability to
just this subscription. If you have a situation where you need to make the role
available to several subscriptions, you can simply add additional subscription
IDs to the AssignableScopes array.
The Actions array identifies namespaces and operations/actions that this role
can perform. This list can be whatever you need it to be, as long as it is
syntactically correct. The actions listed are an example of a SysOps role for
users needing to manage spokes in a hub and spoke configuration.
To create a custom role definition using Azure CLI, use the az role
definition create command as shown in the following code (this
assumes the JSON above was saved to a JSON file named sysops.json):
az role definition create --role-definition "./sysops.json"
To create the same role definition using Azure PowerShell, use the New-
AzureRmRoleDefinition cmdlet as shown in the following code:
New-AzureRmRoleDefinition -InputFile .\sysops.json
Thought experiment
In this thought experiment, demonstrate your skills and knowledge of the topics
covered in this chapter. You can find the answers to this thought experiment
in the next section.
You are the IT Administrator for Contoso and need to provision an
environment in Azure to run a new line-of-business application. The application
is a web application that internal users will access using their browsers. The
application will be protected by Azure AD, and Contoso's Azure AD tenant is
already synced with their on-premises Server Active Directory. The application
will use SQL Database (PaaS) for data. The logical SQL Server that will host the
SQL Database for this application will also need to host SQL Databases for
other applications. Contoso has a strict policy stating keys and secrets must be
stored in Azure Key Vault. Furthermore, all services and data storage must
reside in the U.S. You need to implement ARM templates and scripts to deploy
this environment. You also need to implement controls to ensure the SQL Server
is not accidentally deleted.
1. What kind of resources will you deploy to support the web application?
2. How many resource groups would you use, and which resources would
exist in each?
3. How will you handle the requirement to protect the SQL Server from
accidental deletion?
4. What will you need to do to support the requirement that all passwords
and secrets be stored in Azure Key Vault?
5. How will you ensure data and services exist only in the U.S.?
Chapter summary
Infrastructure described as code includes ARM template files, ARM
template parameter files, artifacts such as custom scripts and DSC, and
deployment scripts.
The elements of an ARM template file are $schema, contentVersion,
parameters, variables, resources, and outputs.
The elements of an ARM template parameter file are $schema,
contentVersion, and parameters.
Every resource defined in an ARM template file must include the name,
type, apiVersion, and location. The type is used to describe the resource
type you want to implement, and is of the form <resource provider
namespace>/<resource type>.
Every resource in Azure is made available through a resource provider. The
resource provider may define more than one resource type. For example,
the resource provider for the Microsoft.Network namespace defines many
resource types, such as virtualNetworks, loadBalancers, publicIPAddresses,
routeTables, and more.
The dependsOn element in an ARM template is used to inform Azure
Resource Manager of resource dependencies for the resource.
The copy element in an ARM template is used to create multiple instances
of a resource type. When the copy element is used, the copyIndex()
function can be used to return the current iteration ARM is in when
provisioning the resource. The iteration value returned from
copyIndex() is zero-based.
There are two steps to deploy an ARM template. First, you must create a
resource group if one doesn’t already exist. Second, invoke a resource
group deployment, where you pass your ARM template files to Azure
Resource Manager to provision the resources described in the ARM
template.
When implementing complex architectures using ARM templates, you
should decompose the architecture into reusable nested templates that are
invoked from the main template.
A service principal can be created to use a password or certificate to
authenticate with Azure AD. The latter is recommended when the service
principal is used to run unattended code or script.
An app registration must first be created in Azure AD before you can create
a service principal. The service principal requires the application ID from
the app registration.
To create a new app registration using PowerShell, use the New-
AzureRmADApplication cmdlet.
To create a new app registration using CLI, use the az ad app create
command.
To create a new service principal using PowerShell, use the New-
AzureRmADServicePrincipal cmdlet.
To create a new service principal using CLI, use the az ad sp create
command.
Resource policies are comprised of policy definitions and policy
assignments.
Resource policies are evaluated against resource properties at the time of
deployment.
A policy rule is what ARM evaluates during a deployment. The policy rule
is described in JSON and uses an if/then programming construct.
To create a policy definition using PowerShell, use the New-
AzureRmPolicyDefinition cmdlet.
To create a policy definition using CLI, use az policy definition
create command.
To create a policy assignment using PowerShell, use the New-
AzureRmPolicyAssignment cmdlet.
To create a policy assignment using CLI, use the az policy
assignment create command.
A policy assignment can be scoped to an Azure subscription or to a
resource group.
The two types of resource locks are ReadOnly and CanNotDelete. Both
types protect a resource from deletion and allow the resource to be read.
The CanNotDelete also allows the resource to be modified.
To create lock using PowerShell, use the New-AzureRmResourceLock
cmdlet.
To create lock using CLI, use the az lock create command.
The three most basic built-in roles are Owner, Contributor, and Reader.
Many of the other built-in roles are a derivation of these, but scoped to a
specific resource type.
Role-based Access Control (RBAC) is comprised of role definitions and
role assignments.
RBAC is evaluated against user actions on a resource at different scopes.
A role assignment can be scoped to an Azure subscription, resource group,
or resource. The entity for which a role can be assigned can be a user, user
group, or application.
The permissions property of a role definition includes an actions array and a
notactions array. The actions array defines the actions/operations a member
of the role can perform, while the notactions array defines the
actions/operations a member of the role is not allowed to perform. The
actions and notactions that are defined are scoped according to the scopes in
the assignableScopes property.
To create a role definition using PowerShell, use the New-
AzureRmRoleDefinition cmdlet.
To create a role definition using CLI, use az role definition
create command.
To create a role assignment using PowerShell, use the New-
AzureRmRoleAssignment cmdlet.
To create a role assignment using CLI, use the az role assignment
create command.
CHAPTER 6
Manage Azure Security and Recovery Services
Microsoft Azure provides many features that help customers secure their
deployments as well as protect and recover their data or services should the need
arise. The first section of this chapter focuses on security and reviews several
related capabilities, including the use of Key Vault to securely store
cryptographic keys and other secrets, Azure Security Center to help prevent,
detect, and respond to threats, and several others. Even with proper precautions
taken, the need eventually arises to recover data or a critical workload. The
second section covers recovery-related services, including the use of snapshots
and platform replication, and Azure Backup and Site Recovery to quickly restore
access to data and services.
EXAM TIP
The key difference between the A1 and P1 pricing tiers is the A1 tier only
allows for software-protected keys, whereas the P1 tier allows for keys to be
protected by Hardware Security Modules (HSMs). If the workload requires
keys be stored in HSMs, be sure to select the Premium tier.
FIGURE 6-1 Pricing tier options for Key Vault as shown in the Azure portal
To begin using Key Vault, create a vault by using the Azure portal,
PowerShell, or the Azure CLI.
1. Create a Key Vault (Azure portal) In the Azure portal, search the
Marketplace for Key Vault and open the Create key vault blade. Specify
the name, resource group, location, and pricing tier (shown in Figure 6-2).
Note that the name must be unique and follow these rules:
Must only contain alphanumeric characters and hyphens
Must start with a letter and end with a letter or digit
Must be between 3-24 characters in length
Cannot contain consecutive hyphens
This blade also allows the creator to specify an Azure Active Directory user
or group and the permissions they have. These are defined within an Access
policy, and the permissions apply to the data within the key vault, such as
keys, secrets, and certificates, as shown in Figure 6-3.
Finally, this creation blade allows you to set advanced access policies,
which govern the access of Azure resources (virtual machines, Resource
Manager template deployments, and disk encryption) to retrieve key vault
data (shown in Figure 6-4).
FIGURE 6-4 Set Advanced access policy
3. Create a Key Vault (CLI) To create an Azure key vault with the CLI,
begin by creating a resource group.
az group create --name "MyKeyVaultRG" --location "South Central US"
Once the key vault is created, it is ready to securely store keys, secrets and
certificates. This section shows how to create keys using the Azure portal,
PowerShell and the CLI.
4. Create a Key (Azure portal) After the key vault is created, you can
create keys used for encrypting and decrypting data within the vault. Also,
secrets such as passwords can be added to the key vault. Lastly, you can
create or import certificates (*.PFX or *.PEM file format) into the vault.
Once a key, secret or certificate exists in the vault, it can be referenced by
URI, and each URI request is authenticated by Azure AD.
To create a key in the Azure portal, open the Key Vault created in the
previous section and under Settings, click Keys (shown in Figure 6-5).
FIGURE 6-5 Create a new key in the Azure portal
Next, select Add, and enter a name. If this is a P1 Premium key vault, an
HSM-protected key can be selected; otherwise, it is software-protected. The
key can also be given activation and expiration dates in this interface. These
options are shown in Figure 6-6.
FIGURE 6-6 Specify the parameters for creating a new key
6. Create a Key (CLI) To create a key with the CLI, use this syntax.
az keyvault key create --vault-name 'MyKeyVault-001' --name 'MyThirdKey' --protection 'software'
This section demonstrates the process to create secrets using the Azure portal,
PowerShell and the CLI.
1. Add a Secret (Azure portal) To create a secret in the vault such as a
password, from within the Azure portal, click Secrets under Settings, and
then click Add (shown in Figure 6-7).
Set the Upload options to Manual, and enter the secret name and value. You
can add a Content type (optionally), which is a good place to store a
password reminder. You can also enter an activation and expiration date as
well. Finally, the secret can either be enabled (meaning it is useable) or not
enabled. These options are shown in Figure 6-8.
FIGURE 6-8 Complete the creation of the secret
3. Add a Secret (CLI) Use this syntax to add a secret to the key vault with
the CLI.
az keyvault secret set --vault-name 'MyKeyVault-001' --name 'MySecondSecret' --value 'P@ssword321'
If you look in the Azure portal after this operation, notice that in addition to
the certificate, a managed key and secret are added to the vault. The
certificate and its corresponding key and secret together represent the
certificate in key vault.
Creating a certificate in this way submits a job, and the status of this job can
be checked with the Get-AzureKeyVaultCertificateOperation cmdlet,
passing the Key Vault name and the certificate name.
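The creation cmdlets were shown as a code image; a minimal sketch, assuming a
self-signed certificate policy and the vault name used earlier in this section,
follows:
#Define a certificate policy and submit the certificate creation job
$certPolicy = New-AzureKeyVaultCertificatePolicy -SubjectName "CN=www.contoso.com" `
    -IssuerName Self -ValidityInMonths 12
Add-AzureKeyVaultCertificate -VaultName 'MyKeyVault-001' -Name 'MyCertificate' `
    -CertificatePolicy $certPolicy

#Check the status of the certificate creation job
Get-AzureKeyVaultCertificateOperation -VaultName 'MyKeyVault-001' -Name 'MyCertificate'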
Now, set the Key Vault certificate issuer with the Set-
AzureKeyVaultCertificateIssuer cmdlet, passing the variables you
previously populated.
Remember that this command submits a job. To review the status, use the
Get-AzureKeyVaultCertificateOperation cmdlet, passing the vault name
and the certificate name.
After creating the App Service Certificate, the first configuration step is to
select an existing Key Vault or to create one for use with this service. Next, you
must verify ownership for the domain that you entered during the service
creation. After you verify ownership, the certificate can be imported into an App
Service. These choices are shown in Figure 6-10.
With the App Service Certificate solution, you can easily rekey the certificate
with one click and sync this update with the services that use the certificate. This
feature eliminates human error and reduces the time normally required to
accomplish this task manually. The certificate can also be configured to
automatically renew, relieving administrators from another task that sometimes
is forgotten. These settings are shown in Figure 6-11.
FIGURE 6-11 App Service Certificate Auto Renew settings
This reveals the four sections of the security policy, namely Data collection,
Security policy, Email notifications, and Pricing tier.
1. Data Collection Azure Security Center can deduce many valuable
security findings from your Azure deployments without collecting data
from any virtual machines. However, the advanced features of ASC,
including daily security monitoring and event analysis with threat
detection, are not possible without this data collection. It is recommended
that data collection be enabled, which involves an automatic installation of
the Microsoft Monitoring Agent. This agent can be configured to collect
and store data in a default Azure Log Analytics workspace, automatically
created by ASC, or you can choose an existing workspace. Be sure to save
any changes made in this dialog box, as shown in Figure 6-14. Note that
data collection is enabled or disabled at the subscription level.
2. Security Policy The security policy allows users to choose the specific
recommendations that ASC surfaces, shown below in Figure 6-15. These
settings can be adjusted at the subscription level and at the resource group
level. This can be useful because an organization might not want to see
certain recommendations on resources that are designated as development
or test. Notice that the first three options in this dialog box only produce
recommendations if data collection is enabled.
FIGURE 6-15 Setting the Azure Security Center security policy
4. Pricing Tier The last section of the security policy is the pricing tier, as
shown in Figure 6-17. There are two pricing tiers available for ASC. The
free tier is enabled by default and offers continuous security assessment
and actionable recommendations for Azure deployments. The standard tier
brings many other capabilities to bear, such as advanced threat detection
and extended security monitoring to resources on-premises and in other
clouds. The pricing tier can be set at the subscription level, which is then
inherited by all resource groups in that subscription, or it can be set per
resource group. This enables an organization to lower costs by only
enabling the standard pricing tier on selected resources.
FIGURE 6-17 Choosing the Azure Security Center pricing tier
This opens the JIT VM access configuration blade, where any ports not
required can be deleted and any ports not pre-configured can be added.
Also the maximum request time is set here, which defaults to three hours.
These configurations are shown in Figure 6-22.
3. Storage and data The Storage and Data node expands outside of pure
IaaS, to surface recommendations surrounding platform as a service
(PaaS) offerings. Azure Storage is Azure’s robust cloud storage service,
providing blob, disk, file, queue, and table storage types. Azure SQL
database is Azure’s database as a service (DBaaS) offering providing SQL
databases without the server to manage. Example recommendations under
storage and data include enabling encryption for Azure Storage (using
Storage Service Encryption) and enabling transparent data encryption on
SQL databases, as shown in Figure 6-25.
FIGURE 6-25 Storage and data recommendations
Figure 6-26 shows a cloudshopip web application that does not have a
WAF deployed in front of it. ‘In front’ of the application implies all
inbound network traffic should be directed to the WAF so that it can be
inspected for known attack patterns.
Walk through the guided steps to deploy the WAF. Most of the partner-
provided options are automatically provisioned with no additional
configuration required. Also, the WAF is a new source of security
telemetry, which Azure Security Center evaluates and surfaces where
appropriate.
Select the VMs that application control should apply to. This reveals a list
of processes that can be added or removed from application control.
After you select the processes, click Create. Application control is enabled
in audit mode. After you validate that the whitelist does not adversely affect
the workload, Application control can be changed to enforce mode. At this
point, only approved processes are allowed to execute.
Another prevention area with ASC is the Identity and Access solution.
Within this solution, customers can see visualizations created from the
security logging that is collected from monitored machines. This includes
information about logons that are occurring (both successful and failed), a
list of the accounts that are being used to attempt logons, and accounts with
changed or reset passwords, as shown in Figure 6-30.
FIGURE 6-30 The Identity and Access tiles within Azure Security Center
EXAM TIP
Be certain to know the protocols that are supported for use with federated
single sign-on, namely SAML 2.0, WS-Federation, and OpenID Connect.
The first step in configuring federated single sign-on is to add the application
in Azure Active Directory. To do this within the Azure AD blade, click
Enterprise applications, and then click New application, as shown in Figure 6-
35.
Within the search dialog box enter the application name to add one from the
gallery. There are over 2,800 SaaS applications listed there. Click the application
you want and select Add. After the application is added, select it to open its
properties, and then select Single sign-on. Next, set the Single Sign-on Mode to
SAML-based Sign-On. In the following example (Figure 6-36), the SaaS
application Aha! has been added. This application supports SAML 2.0 and is
pre-integrated with Azure Active Directory.
With this information in place, click Save at the top of the page. This
completes the federated single sign-on configuration on the Azure side. Now the
SaaS application must be configured to use Azure AD as the SAML identity
provider. This can be as simple as uploading the certificate metadata file
previously discussed, or the certificate and other information might need to be
entered manually. The steps to enable each application vary, so Microsoft has
provided tutorials for hundreds of SaaS applications at this URL:
https://docs.microsoft.com/en-us/azure/active-directory/active-directory-saas-tutorial-list. In the case of the application Aha!, which is used as an example
here, the configuration involves uploading the metadata XML file. This
populates all the required fields to configure Azure AD as the SAML identity
provider, as shown in Figure 6-39.
FIGURE 6-39 Aha! SaaS application SAML identity provider configuration
Click the application you want to grant access to, and then click Users and
groups. Next, click Add user, which allows either users or groups to be assigned
to an application. Clearly, assigning individual users is not feasible at scale, so
group-based assignment is preferred in this case. As shown in Figure 6-45, a
group and a user have been granted access to the Aha! SaaS application.
On the marketplace page, click Create. Choose whether to create a new tenant
or link a tenant to an existing Azure subscription. As shown in Figure 6-49, a
new tenant is being created.
After the operation is complete, the resource that represents the B2C tenant
can be viewed in the resource group chosen during the linking step, as shown in
Figure 6-52.
FIGURE 6-52 Azure AD B2C Resource in the linked subscription
Clicking the tile opens a new browser tab focused on the Azure AD B2C
tenant. Within the B2C tenant, you can register the applications you want so that
they can use the tenant for authentication. As mentioned earlier, Azure AD B2C
can be used with several types of applications, including web, mobile, and API
apps. For example purposes, the next section focuses on web applications to
demonstrate how to register an application.
The application is displayed after the creation process completes. Click the
application to display its properties. Take special note of the Application ID
because this is a globally unique representation of the application within Azure
AD B2C. This is used in the web application code during authentication
operations.
The web application being referred to in this example might also need to make
calls to a web API secured by Azure AD B2C. If this is the case, the web
application needs a client secret. The secret functions as a security credential,
and therefore should be secured appropriately. To create the web app client
secret, while within the properties blade of the newly created web application,
click Keys. Next, click Generate key and then click Save to view the key. This
key value is used as the application secret in the web application’s code.
Next, click Set up this identity provider, and enter the Client ID and Client
Secret that were provided when the Facebook identity provider configuration
was accomplished. Next, click OK and then click Create on the Add social
identity provider blade, as shown in Figure 6-56.
FIGURE 6-56 Entering the social identity provider client ID and secret
NOTE ENTER THE CORRECT CLIENT ID AND SECRET
The client ID and Secret being asked for in this step are NOT the Application
ID and Secret provided during the initial registration of the web application
within Azure AD B2C. Supply instead the Client ID and secret that were
provided during the configuration of the identity provider side (in this case,
Facebook).
Assuming all is configured properly within the web application, on the social
identity provider, and within Azure AD B2C, the web application now allows
users to authenticate via their existing Facebook credentials.
Within the marketplace page for Backup and Site Recovery (OMS), click
Create. Enter the name of the vault and choose or create the resource group
where it resides. Next, choose the region where you want to create the resource,
and click Create (see Figure 6-58).
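The vault can also be created with PowerShell. A minimal sketch, assuming the resource group already exists and using example names:
New-AzureRmRecoveryServicesVault -Name 'MyRSVault' -ResourceGroupName 'MyRecoveryRG' -Location 'westus2'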
The storage redundancy type should be set at this point. The options are
Locally Redundant Storage or Geo Redundant Storage. It is a good idea to use
Geo Redundant Storage when protecting IaaS virtual machines. This is because
the vault must be in the same region as the VM being backed up. Having the
only backup copy in the same region as the item being protected is not wise, so
Geo Redundant storage gives you three additional copies of the backed-up data
in the sister (paired) region.
$vault1 = Get-AzureRmRecoveryServicesVault -Name 'MyRSVault'
Set-AzureRmRecoveryServicesBackupProperties -Vault $vault1 -BackupStorageRedundancy GeoRedundant
Notice there is only a Windows agent because the backup of files and folders
is only supported on Windows computers. Click the link to download the agent.
Before initiating the installation of the MARS agent, also download the vault
credentials file, which is right under the download links for the Recovery
Services agent. The vault credentials file is needed during the installation of the
MARS agent.
The agent needs to communicate to the Azure Backup service on the internet,
so on the next setup screen, configure any required proxy settings. On the last
installation screen, any required Windows features are added to the system
where the agent is being installed. After it is complete, the installation prompts
you to Proceed to Registration, as shown in Figure 6-61.
FIGURE 6-61 Final screen of the MARS agent installation
Click Proceed to Registration to open the agent registration dialog box. Within
this dialog box the vault credentials must be provided by browsing to the path of
the downloaded file. The next dialog box is one of the most important ones. On
the Encryption Settings screen, either specify a passphrase or allow the
installation program to generate one. Enter it twice, and then specify where
the passphrase file should be saved. The passphrase file is a text file that
contains the passphrase, so store this file securely.
Next, schedule how often backups should occur. The agent can be configured
to back up daily or weekly, with a maximum of three backups taken per day.
Specify the retention you want, and the initial backup type (Over the network or
Offline). Confirm the settings to complete the wizard. Backups are now
scheduled to occur, but they can also be initiated at any time by clicking Back up
now on the main screen of the agent. The dialog showing an active backup is
shown in Figure 6-63.
FIGURE 6-63 Backup Now Wizard
To recover data, click the Recover Data option on the main screen of the
MARS agent. This initiates the Recover Data Wizard. Choose which computer
to restore the data to. Generally, this is the same computer the data was backed
up from. Next, choose the data to recover, the date on which the backup took
place, and the time the backup occurred. These choices comprise the recovery
point to restore. Click Mount to mount the selected recovery point as a volume,
and then choose the location to recover the data. Confirm the options selected
and the recovery begins.
In addition to the MARS agent and protecting files and folders with Azure
Backup, it is also possible to back up IaaS virtual machines in Azure. This
solution provides a way to restore an entire virtual machine, or individual files
from the virtual machine, and it is quite easy to set up. To back up an IaaS VM
in Azure with Azure Backup, navigate to the Recovery Services vault and, under
Getting Started, click Backup. Select Azure as the location where the workload
is running and Virtual machine as the workload to back up, and then click Backup, as
shown in Figure 6-64.
The next item to configure is the Backup policy. This policy defines how
often backups occur and how long the backups are retained. The default policy
performs a daily backup at 6:00 AM and retains backups for 30 days. It is
also possible to configure custom Backup policies. In this example, a custom
Backup policy is configured that includes daily, weekly, monthly, and yearly
backups, each with their own retention values. Figure 6-65 shows the creation of
a custom backup policy.
FIGURE 6-65 Configuring a custom backup policy
Next, choose the VMs to back up. Only VMs within the same region as the
Recovery Services vault are available for backup.
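Protection can also be enabled with PowerShell. A minimal sketch that sets the vault context, retrieves the default policy, and protects a VM; the vault, policy, VM, and resource group names are examples:
$vault = Get-AzureRmRecoveryServicesVault -Name 'MyRSVault'
Set-AzureRmRecoveryServicesVaultContext -Vault $vault
$policy = Get-AzureRmRecoveryServicesBackupProtectionPolicy -Name 'DefaultPolicy'
Enable-AzureRmRecoveryServicesBackupProtection -Policy $policy -Name 'MyVM' -ResourceGroupName 'MyVMRG'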
When you click the Enable Backup button, behind the scenes the VMSnapshot
(for Windows) or VMSnapshotLinux (for Linux) extension is automatically
deployed by the Azure fabric controller to the VMs. This allows for snapshot-
based backups to occur, meaning that first a snapshot of the VM is taken, and
then this snapshot is streamed to the Azure storage associated with the Recovery
Services vault. The initial backup is not taken until the day/time configured in
the backup policy; however, an ad hoc backup can be initiated at any time. To
do so, navigate to the Protected Items section of the vault properties, and click
Backup items. Then, click Azure Virtual Machine under Backup Management
type. The VMs that are enabled for backup are listed here. To begin an ad hoc
backup, right-click on a VM and select Backup now, as shown in Figure 6-67.
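An ad hoc backup can also be started from PowerShell. A minimal sketch, assuming the vault context set in the earlier example and a protected VM named MyVM:
$container = Get-AzureRmRecoveryServicesBackupContainer -ContainerType AzureVM -Status Registered -FriendlyName 'MyVM'
$item = Get-AzureRmRecoveryServicesBackupItem -Container $container -WorkloadType AzureVM
Backup-AzureRmRecoveryServicesBackupItem -Item $item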
Use of snapshots
Many organizations choose to use Azure Backup to protect their IaaS virtual
machines. For those that elect not to use Azure Backup, another strategy is to use
blob snapshots to protect virtual machines. Unmanaged VM disks are actually
page blobs that are stored within the customer’s storage account. A snapshot of
these page blobs can be taken, which can then be copied to a storage account in
the same or a different Azure Region. If the need arises to recover the virtual
machine, it can be recreated from the blob snapshot. To walk through these
steps, begin by creating a destination storage account. In this example, the
virtual machine to be protected is in West US 2. To ensure the snapshot survives
a region-wide outage, it is copied to a destination storage account in a different
region. To begin, create a resource group and the destination storage account.
The storage account is created and a reference to it is stored in the variable
$destStorageAcct. This variable is used later.
New-AzureRmResourceGroup -Name MyRecoveryStorageRG -Location eastus2
$destStorageAcct = New-AzureRmStorageAccount -ResourceGroupName MyRecoveryStorageRG -Name recoverysa0434 -SkuName Standard_LRS -Location eastus2 -Kind Storage
Next, create a blob container for the snapshot to exist in. To do this, first set
the storage account context.
Set-AzureRmCurrentStorageAccount -ResourceGroupName MyRecoveryStorageRG -Name recoverysa0434
The snapshot is created in the same resource group as the source storage
account (the one containing the VHD from the virtual machine to be protected),
as shown in Figure 6-68.
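The snapshot can be created with PowerShell as well. The following sketch assumes the source VM uses a managed OS disk; the VM, snapshot, and resource group names are examples, and the variables match those used in the copy steps that follow:
$resourceGroupName = 'SourceVMRG'
$SnapshotName = 'MySnapshot'
$vm = Get-AzureRmVM -ResourceGroupName $resourceGroupName -Name 'CriticalServer'
# Build a snapshot configuration that copies the OS disk
$snapshotConfig = New-AzureRmSnapshotConfig -SourceUri $vm.StorageProfile.OsDisk.ManagedDisk.Id -Location 'westus2' -CreateOption Copy
New-AzureRmSnapshot -Snapshot $snapshotConfig -SnapshotName $SnapshotName -ResourceGroupName $resourceGroupName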
Before the snapshot can be copied, the destination storage account key is
needed. Obtain this within the Azure portal from the properties of the storage
account, as shown in Figure 6-69.
FIGURE 6-69 Copying the storage key from the destination storage account
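The key can also be retrieved with PowerShell; this sketch populates the $storageAccountKey variable that the destination context cmdlet uses later:
$storageAccountKey = (Get-AzureRmStorageAccountKey -ResourceGroupName MyRecoveryStorageRG -Name recoverysa0434)[0].Value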
A shared access signature is created to grant access to the snapshot. Set the
duration for the shared access signature (how long it remains valid, in seconds).
$sasExpiryDuration = "3600"
A variable was already populated with the destination storage account earlier
in these steps. This serves as the destination storage account name. Now, run the
cmdlet to create the shared access signature.
$sas = Grant-AzureRmSnapshotAccess -ResourceGroupName $resourceGroupName -SnapshotName $SnapshotName -DurationInSecond $sasExpiryDuration -Access Read
Now create the destination storage context to use in the snapshot copy.
$destinationContext = New-AzureStorageContext -StorageAccountName $destStorageAcct.StorageAccountName -StorageAccountKey $storageAccountKey
Finally, begin the copy operation. Notice that the snapshot is converted to a
VHD in the destination storage account.
Start-AzureStorageBlobCopy -AbsoluteUri $sas.AccessSAS -DestContainer "recovery" -DestContext $destinationContext -DestBlob "recoveredcriticalserveros.vhd"
When complete, the copied VHD is visible in the destination storage account
container, as shown in Figure 6-70.
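The remaining steps reference a recovery resource group, a virtual network ($vnet), and a managed disk ($osDisk) that are not shown in the text. A sketch of these intermediate steps, with example names and address ranges, follows:
New-AzureRmResourceGroup -Name RecoveredCriticalServerRG -Location eastus2
# A virtual network for the recovered VM
$subnet = New-AzureRmVirtualNetworkSubnetConfig -Name 'default' -AddressPrefix '10.0.0.0/24'
$vnet = New-AzureRmVirtualNetwork -Name 'RecoveryVnet' -ResourceGroupName RecoveredCriticalServerRG -Location eastus2 -AddressPrefix '10.0.0.0/16' -Subnet $subnet
# A managed disk imported from the copied VHD, to be attached as the OS disk
$diskConfig = New-AzureRmDiskConfig -SkuName Standard_LRS -Location eastus2 -CreateOption Import -SourceUri 'https://recoverysa0434.blob.core.windows.net/recovery/recoveredcriticalserveros.vhd'
$osDisk = New-AzureRmDisk -Disk $diskConfig -ResourceGroupName RecoveredCriticalServerRG -DiskName 'RecoveredOsDisk'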
Now, create the network security group that only allows the required network
traffic.
$nsgName = "myNsg"
$rdpRule = New-AzureRmNetworkSecurityRuleConfig -Name myRdpRule -Description "Allow RDP" -Access Allow -Protocol Tcp -Direction Inbound -Priority 110 -SourceAddressPrefix internet -SourcePortRange * -DestinationAddressPrefix * -DestinationPortRange 3389
$nsg = New-AzureRmNetworkSecurityGroup -ResourceGroupName RecoveredCriticalServerRG -Location eastus2 -Name $nsgName -SecurityRules $rdpRule
Next, create a public IP address and network interface card for the VM.
$ipName = "myIP"
$pip = New-AzureRmPublicIpAddress -Name $ipName -ResourceGroupName RecoveredCriticalServerRG -Location eastus2 -AllocationMethod Dynamic
$nicName = "myNicName"
$nic = New-AzureRmNetworkInterface -Name $nicName -ResourceGroupName RecoveredCriticalServerRG -Location eastus2 -SubnetId $vnet.Subnets[0].Id -PublicIpAddressId $pip.Id -NetworkSecurityGroupId $nsg.Id
Next, specify the VM name, series, and size, and assign the network interface
to the VM configuration.
$vmName = "RecoveredCriticalVM"
$vmConfig = New-AzureRmVMConfig -VMName $vmName -VMSize "Standard_D1_V2"
$vm = Add-AzureRmVMNetworkInterface -VM $vmConfig -Id $nic.Id
Next, add the OS disk that was created from the snapshot.
$vm = Set-AzureRmVMOSDisk -VM $vm -ManagedDiskId $osDisk.Id -StorageAccountType StandardLRS -DiskSizeInGB 128 -CreateOption Attach -Windows
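The VM is then created from this configuration with a single call, sketched here:
New-AzureRmVM -ResourceGroupName RecoveredCriticalServerRG -Location eastus2 -VM $vm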
This final step results in a virtual machine that is created from the copied
snapshot. As such, it is an exact replica (from a disk contents perspective) of the
source virtual machine that the original snapshot was taken from. If the source
VM was shut down prior to the collection of the VM snapshot, the recovered
VM would be in an application-consistent state, meaning a state similar to that of
a cleanly shut down machine. If the snapshot was collected from a running VM,
the recovered VM would be in a crash-consistent state. This means the VM is in
a state similar to if the machine was powered off without a clean shut down. In
this case the VM will show an unplanned shutdown on the first boot after
recovery, as shown in Figure 6-71.
After the data has been collected, visualizations are created that provide
information critical to the planning phase, as shown in Figure 6-74.
FIGURE 6-74 Output from the Site Recovery Deployment Planner
Another toolset that is valuable during Site Recovery planning is the ASR
Capacity Planner. This is a spreadsheet that allows customers to enter
information about their workloads and key planning information is calculated.
Figure 6-75 shows a screenshot of the capacity planner spreadsheet.
FIGURE 6-75 ASR Capacity Planner
Commonly, the use of these planning tools reveals a need for more network
bandwidth. Depending on the number of machines being protected, their amount
of storage, and the data change rate, ASR might be replicating a tremendous
amount of data into Azure. Organizations can either increase their outbound
internet connection bandwidth or consider implementing ExpressRoute. Refer to
Chapter 4 for more details on ExpressRoute.
There are on-premises components required for the VMware scenario of ASR.
These components support the replication of data from on-premises to Azure.
They include:
The Configuration Server Generally a VMware VM that coordinates
communications between on-premises and Azure, and manages data
replication.
The Process Server Serves as a replication gateway; it receives replicated
data from the Mobility service, then caches, compresses, encrypts, and
transfers this data to Azure. It also performs auto-discovery of new VMs and
push installation of the Mobility service. Can scale out as needed.
Master Target Server Handles replication data during failback from
Azure. Can scale out as needed.
Mobility Service Installed on all protected VMs. Intercepts and replicates
disk writes to the Process Server.
Figure 6-76 shows these on-premises components.
For example, the largest recommended Configuration server size is 16 vCPUs (2 sockets * 8 cores @ 2.5 GHz) with 32 GB of memory and a 1 TB cache disk, which supports a data change rate of 1 TB to 2 TB and can replicate between 150-250 machines.
The Configuration server also has network requirements. Remember that ASR
is a PaaS service and so it is accessible over the public internet. As such, the
configuration server must have direct or proxy-based access to the following
URLs and ports:
*.hypervrecoverymanager.windowsazure.com:443
*.accesscontrol.windows.net:443
*.backup.windowsazure.com:443
*.blob.core.windows.net:443
*.store.core.windows.net:443
https://dev.mysql.com/get/archives/mysql-5.5/mysql-5.5.37-win32.msi (for
MySQL download)
time.windows.com:123
time.nist.gov:123
ASR is ready to be implemented after the planning tools have produced
output, any bandwidth increases have been procured, the on-premises
infrastructure (Configuration server) is appropriately sized and ready, and the
required URLs and ports are allowed from the Configuration server.
To implement the VMware scenario of ASR, start by preparing the VMware
environment. This involves preparing for automatic discovery of new VMs for
protection and failover of those protected VMs. Both capabilities require a read-
only user defined on each vSphere host or on the vCenter server that manages
the hosts, as shown in Figure 6-77.
If both failover and failback are features you want, the vSphere or vCenter
user requires additional permissions, as shown in Table 6-3. Ideally, a role
should be created with the following permissions, and this role should be
assigned to a user or group.
TABLE 6-3 VMware permissions required for Azure Site Recovery integration
FIGURE 6-78 Download the ASR Unified Setup and the vault credentials
The next configuration step is to set up the target environment in Azure. This
includes provisioning or choosing an existing storage account and virtual
network. The storage account holds replicated data from the on-premises
environment to be used for building virtual machines during a failover. The
virtual network provides failed-over VMs with a network context so they can
continue to provide the services they did when on-premises.
Following the target environment set up, the next item to configure is the
replication policy. This policy defines how long recovery points and application-
consistent snapshots should be retained. The replication policy is created and
associated with the Configuration server in this step.
The next step is to enable replication. In this interface, the source and target
infrastructures are selected. The VMs to protect and which disks per VM to
protect are also chosen in this step. Also in this step, the replication policy that
was created in the previous step (or a pre-existing replication policy) is selected.
After these steps are complete, replication begins immediately. The initial
replication can take many hours, depending on the amount of data, the churn of
the data, available bandwidth, and other factors. A fairly accurate estimation of
this timeframe should have been an output of the planning phase. Failover is not
possible until the replicated VMs show as Protected. In Figure 6-81, the status of
the initial replication is seen for two protected VMs.
The final item to create when implementing ASR is the Recovery Plan. This is
an important construct in Site Recovery because it defines the orchestration of
how workloads fail over and are powered on. Some applications need changes to
be made via script, such as a connection string change, and some applications
require certain machines to power on before others. All of these
orchestrations are set up in the recovery plan. A screen shot of a recovery plan is
shown in Figure 6-82.
FIGURE 6-82 ASR Recovery Plan
After the test failover has completed, validate that the application is
functional. If it is not functional, this is a good opportunity to troubleshoot the
issue and learn what is required for a functional failover of the protected
application. You can add adjustments to the Recovery Plan to ensure future
failovers produce a working instance of the protected application. As a final step,
right-click the recovery plan and choose Cleanup test failover. This deletes the
resources created during the test failover.
After these installations, a dialog box opens, allowing the Hyper-V host to be
registered with the ASR Hyper-V Site. In the Hyper-V only scenario (no
SCVMM) all hosts must be registered.
If the Site Recovery Provider is being installed on SCVMM, only the provider is
installed, and only the SCVMM server is registered with the Hyper-V Site,
as shown in Figure 6-89.
FIGURE 6-89 Installation of the Site Recovery Provider on SCVMM
After the SCVMM server or the Hyper-V hosts are installed and registered,
the target environment must be configured. In this step, the Azure storage
account and virtual network are selected. The storage account holds the
replicated data from on-premises and the virtual network is used to host the
networking capabilities of VMs that are failed over.
With the target environment prepared, next configure the replication policy.
The replication policy defines how long recovery points and application-
consistent snapshots should be retained, and this policy is associated with the
Hyper-V site. The screen shot in Figure 6-91 shows the association of the Hyper-
V site with a custom replication policy.
FIGURE 6-91 Creating and associating a replication policy with the Hyper-V
site
When a failover is initiated, replication stops and VMs are created within the
target region and are configured to participate in the virtual network, subnet, and
availability zone.
Some caveats to be aware of with Azure-to-Azure Site Recovery (as of this
writing) include:
The management plane is Azure portal only
VM Scale Sets are not supported
Maximum OS disk size is 1023 GB
Maximum data change rate supported is 6 MBps
Managed disks are not supported
Azure disk encryption is not supported
Cannot replicate/failover between geographic clusters (basically must stay
within the same continent)
Though the planning is easier with Azure-to-Azure Site Recovery, there are
still things to consider. For example, the target region is a critical concern.
Choose this region considering the same criteria as you would in a normal
disaster recovery configuration. The replicated data should be a significant
distance away from the primary data, and in areas less prone to natural disasters.
Also keep in mind the outbound networking requirements mentioned earlier.
To implement Azure-to-Azure Site Recovery, begin by creating a Recovery
Services vault in a target region that meets requirements. As in other scenarios,
primarily ensure there is sufficient distance from the source region. Create the
vault in one of the ways shown earlier in this chapter. This can be done via the
Azure portal or PowerShell.
Next, configure the required outbound connectivity so that protected VMs can
access the Site Recovery service endpoints, either by URL (if using a URL
proxy) or IP address (if using firewall or NSG rules). The protected VMs must
also be able to access the IP ranges of the Office 365 authentication and identity
IP V4 endpoints and the blob service endpoint of the cache storage account. To
make this configuration easier, Microsoft has released a PowerShell script that
configures a Network Security Group with all of the required outbound rules.
Access this script here: https://gallery.technet.microsoft.com/Azure-Recovery-
script-to-0c950702.
Configure ASR in this scenario by navigating to the vault properties, choosing
to enable replication, and selecting the appropriate source environment. Also
choose the deployment model (which is generally Resource Manager), and select
the resource group containing the VMs to protect. These configuration options
are shown in Figure 6-94.
Thought experiment
In this thought experiment, demonstrate your skills and knowledge of the topics
covered in this chapter. You can find answers to this thought experiment in the
next section.
You are the administrator at Fabrikam, and your director has asked you to
help solve a problem the users in your organization are experiencing. It seems
most users access software as a service (SaaS) applications in their daily work
stream. In particular, your project managers are leveraging an application called
Aha!, the legal department utilizes Dropbox for the storage of their least-sensitive
tier of client documents, and HR is piloting the use of Workday to replace an
antiquated people management system. These teams all have a similar problem.
They must remember several usernames and passwords in their daily work
stream. They log into their workstations in the morning, and they also log into
several SaaS applications to accomplish their work. You need to provide a single
sign-on experience to your users, but do so in a cost-effective, secure way. In
particular, the legal team has a requirement to use multi-factor authentication
when they access client documents. However, the lawyers are notoriously
opposed to having to carry around dedicated authentication tokens.
On a potentially-related note, Fabrikam underwent a migration to Office 365
one year ago, and all is going well with the use of this cloud service.
How do you:
1. Provide a single sign-on experience for various SaaS applications, and
preferably do so by group, instead of by user?
2. Ensure the single sign-on solution is secure and supports multi-factor
authentication?
3. Deal with turnover in your organization as it pertains to SaaS
application access?
Chapter summary
This chapter covered a wide variety of security-related topics, including securing
and managing company secrets, keys, and certificates, provisioning access to
SaaS applications, and ensuring that access to data and services is maintained.
Some of the key things to remember include:
Azure Key Vault is a great way to protect secrets, keys, and certificates.
This service can be created and managed via the Azure portal, PowerShell,
or the Azure CLI, and includes support for protecting these items with FIPS
140-2 Level 2 validated hardware security modules (HSMs).
Azure Security Center is enabled by default within your Azure subscription
and helps to prevent and detect security issues. Security Center helps secure
more than just your Azure deployments. The Microsoft Monitoring Agent
can be installed on workloads in other clouds or on-premises to extend the
value of this service.
Azure Active Directory can be used to configure single sign-on access to
thousands of SaaS applications and can do so with either federated access
or through securely storing and presenting the SaaS application password
on behalf of the user.
Authentication via social providers, such as Facebook, Google, and
LinkedIn, can be enabled by using the Azure Active Directory Business to
Consumer (B2C) feature set. This is generally used by developers who want
to simplify the addition of robust authentication in their web and mobile
apps.
Azure Backup can be used to protect files and folders, applications, and
IaaS virtual machines. This cloud-based data protection service helps
organizations by providing offsite backups in the cloud and protection of
VM workloads they have already moved to the cloud.
Azure IaaS VMs can be protected through the use of disk snapshots. These
can be copied to storage accounts in other regions and, when required, VMs
can be provisioned from these disk snapshots. This can be thought of as a
way to provide for disaster recovery of IaaS VMs before Azure Backup and
Site Recovery were available. Today, Azure Backup and Site Recovery
represent a superior way to protect data and quickly restore service in the
case of a sustained outage.
Azure Site Recovery has several scenarios for enabling the replication and
failover of workloads. ASR can protect physical, VMware, and Hyper-V-
based workloads from on-premises into Azure. It can also replicate and
failover VMs from one Azure region to another.
CHAPTER 7
Manage Azure Operations
This command creates the Automation account, but does not create the Run
As accounts. The Run As accounts can be created via PowerShell, but to do so
requires the use of a lengthy script. This process is described in the following
documentation link: https://docs.microsoft.com/en-
us/azure/automation/automation-update-account-powershell.
It is also possible to create the Run As accounts within the portal after the
Automation
Account is created. To do so, navigate to the properties of the Automation
Account and click Run As Accounts. Then click Create for the Run As account
type that is desired, as shown in Figure 7-2.
FIGURE 7-2 Creating Run As accounts in the portal
There are several resources that are needed to make runbooks functional. For
example, runbooks may need to be scheduled to run at certain intervals, they
may need specific PowerShell modules to enable management of non-Microsoft
systems (as in the VMware example in Figure 7-6), and they may need variables
that can be called at runbook execution. Shared resources enable these
capabilities. The next few sections cover these resources.
SCHEDULES
Schedules are created within Azure Automation to allow runbooks to be
executed at a specific start time. Schedules can be configured to run once or on a
recurring basis. They can be quite flexible, allowing for execution start time to
be defined hourly, daily, monthly, or for specific days of the week or month. To
create a schedule in the Azure portal, within the properties of the Automation
Account and under Shared Resources, click Schedules, and then click Add A
Schedule. Figure 7-7 shows an example of a simple schedule that executes
hourly.
FIGURE 7-7 A simple schedule that executes hourly
MODULES
Modules represent PowerShell modules that can be added to the Azure
Automation Account to extend automation capabilities. Many third-party
solutions have PowerShell modules that are written by the solution provider. The
example of VMware was used earlier in this chapter. These modules can be
integrated into the Azure Automation Account to enable runbooks to integrate
with many types of solutions. To add a new module from within the Azure
Portal, click Modules under Shared Resources. From there, click Add A Module,
or to explore options, click Browse Gallery, shown in Figure 7-8.
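Modules can also be imported with PowerShell. A minimal sketch; the Posh-SSH module and gallery URL are examples:
New-AzureRmAutomationModule -AutomationAccountName 'FabrikamAutomation' -ResourceGroupName 'MyAutomationRG' -Name 'Posh-SSH' -ContentLink 'https://www.powershellgallery.com/api/v2/package/Posh-SSH'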
EXAM TIP
FIGURE 7-10 Updating Azure Modules
CREDENTIALS
Azure Automation credentials represent PSCredential objects that contain
authentication credentials such as a user name and password. These credentials
are stored securely within Azure Automation and they can be invoked during
runbook execution, or when using DSC configurations (discussed later in this
chapter). To create a credential within the portal, click Credentials under Shared
Resources, and then click Add A Credential, shown in Figure 7-11.
FIGURE 7-11 Adding an Azure Automation credential
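Credentials can also be created with PowerShell. A minimal sketch with example names:
$user = 'FABRIKAM\svc-automation'
$password = ConvertTo-SecureString -String 'Password!' -AsPlainText -Force
$credential = New-Object -TypeName System.Management.Automation.PSCredential -ArgumentList $user, $password
New-AzureRmAutomationCredential -AutomationAccountName 'FabrikamAutomation' -ResourceGroupName 'MyAutomationRG' -Name 'MyDomainCredential' -Value $credential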
CONNECTIONS
The next Azure Automation resource type to consider is connections. A
connection is an object that contains the information needed to connect to an
external service or application. Connections often include the URL and port
required to connect to a service, along with credentials needed to authenticate to
that service. Connections can be used with automation runbooks or with DSC
configurations. Earlier in this chapter, Run As accounts were discussed. Run As
accounts are referenced by automatically created connection objects in Azure
Automation Accounts, as shown in Figure 7-12.
To create a new connection within the Azure Portal, click Connections under
Shared Resources, and then click Add A Connection. Enter the name,
description (optional), and the type. Connection types are defined by Modules
(discussed earlier in this chapter). As an example, the SSH PowerShell Module
adds an SSH connection type, allowing parameters to be defined for SSH
connections to Linux systems. Setting up a connection of this type is shown in
Figure 7-13.
FIGURE 7-13 Creating a new Azure Automation Connection
CERTIFICATES
Another Azure Automation resource type is Certificates. Certificate resources
are X.509 certificates that are uploaded into Azure Automation and securely
stored for authentication by runbooks, or DSC configurations. Creating a new
certificate in the Azure Portal involves clicking Add A Certificate within the
Certificates section of Shared Resources. Give the certificate a name, optionally
a description, and the path to the exported certificate file (.cer or .pfx). This can
also be accomplished in PowerShell using the New-
AzureRmAutomationCertificate cmdlet.
$certName = 'MyAutomationCertificate'
$certPath = '.\MyCert.pfx'
$certPwd = ConvertTo-SecureString -String 'Password!' -AsPlainText -Force
$ResourceGroup = "MyAutomationRG"
New-AzureRmAutomationCertificate -AutomationAccountName "FabrikamAutomation" -Name $certName -Path $certPath -Password $certPwd -Exportable -ResourceGroupName $ResourceGroup
VARIABLES
Azure Automation Variables are resources that are useful for storing values that
can be used by runbooks and DSC configurations. At creation time the variable
must be assigned one of several types: String, Integer, DateTime, Boolean, or
Null. A common use for a variable is to store the Azure subscription ID so that a
runbook can use this during authentication. In Figure 7-14 a parameter is being
set in a runbook that utilizes the variable, AzureSubscriptionId.
FIGURE 7-14 A runbook parameter referencing an Azure Automation
Variable
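Variables can also be created from PowerShell. A minimal sketch; the subscription ID shown is a placeholder:
New-AzureRmAutomationVariable -AutomationAccountName 'FabrikamAutomation' -ResourceGroupName 'MyAutomationRG' -Name 'AzureSubscriptionId' -Value '00000000-0000-0000-0000-000000000000' -Encrypted $false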
New-AzureRMAutomationSchedule -AutomationAccountName $automationAccountName -Name $scaleUpScheduleName -StartTime "10/29/2017 00:00:00" -WeekInterval 1 -DaysOfWeek Friday -ResourceGroupName $resourceGroup -TimeZone $TimeZone.Id
New-AzureRMAutomationSchedule -AutomationAccountName $automationAccountName -Name $scaleDownScheduleName -StartTime "10/29/2017 00:00:00" -WeekInterval 1 -DaysOfWeek Saturday -ResourceGroupName $resourceGroup -TimeZone $TimeZone.Id
Verify that the schedules are created in the Azure Portal, shown in Figure 7-16.
Next, add a new PowerShell runbook to perform the scale up operation. The
PowerShell code to scale the App Service Plan is as follows and shown in Figure
7-17:
Param(
    [string]$myAppServiceName="FabrikamBasicASP",
    [string]$rgName="MyWebAppRG"
)
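The remainder of the runbook body appears in Figure 7-17. A sketch of what it contains follows; it authenticates by using the Run As connection and then scales the plan out to two instances (the exact code in the figure may differ):
$conn = Get-AutomationConnection -Name 'AzureRunAsConnection'
Add-AzureRmAccount -ServicePrincipal -TenantId $conn.TenantId -ApplicationId $conn.ApplicationId -CertificateThumbprint $conn.CertificateThumbprint
# Scale the App Service Plan out to two instances
Set-AzureRmAppServicePlan -Name $myAppServiceName -ResourceGroupName $rgName -NumberofWorkers 2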
It’s always a good idea to test the runbook before publishing it. To test the
runbook, click Test Pane. In the example code, the default parameter values are
provided, so click Start to begin the test. A Completed result indicates the
runbook ran successfully, as shown in Figure 7-18. It is also a good idea to
verify the desired action was accomplished. This is because a runbook can
execute successfully, but not accomplish the desired result.
In Figure 7-19 the App Service shows as successfully scaled to two instances.
Now that the runbook has been tested, publish it by clicking on the Publish
button. Finally, schedule the runbook to scale up the App Service Plan by
associating the runbook with the previously created schedule. To do this, click
on Schedule within the runbook properties then click Link A Schedule To Your
Runbook. Choose the ScaleAppServiceUpSchedule schedule, then click OK,
shown in Figure 7-20.
FIGURE 7-20 Associating a schedule with a runbook
There are built-in DSC resources and custom resources. Some of the built-in
resources include the Archive Resource used to unpack an archive (such as a
*.zip file), the File Resource allowing for management of files and folders on
target nodes, and the Registry Resource. Custom resources extend the
management capabilities of DSC and are authored by Microsoft, partners of
Microsoft, and the technical community. An example of a custom resource is
included in the xPSDesiredStateConfiguration module. This module contains
enhancements to the built-in DSC resources (such as xArchive), and also adds
new resources (such as xDSCWebService).
To import resources or modules from within a DSC configuration, use the
Import-DscResource dynamic keyword. To import a specific resource, use the
following syntax:
Import-DscResource -Name <NameOfResource>
For example, to import the built-in resource, Service, use this command just
before the Node statement:
Import-DscResource -Name Service
Modules can also be imported into Azure Automation DSC. This was
discussed earlier in this chapter in the Modules section under Azure Automation
Runbooks.
Generate DSC node configurations
As mentioned earlier, Azure Automation DSC is built from PowerShell DSC. In
a sense, it is simply PowerShell DSC with a “cloud-based” pull server
(configurations are pulled rather than pushed). It just so happens that with Azure
Automation DSC, the pull server is a platform (PaaS) service rather than a
virtual machine that an organization must manage. Earlier in the Creating
PowerShell DSC configurations section, the concept of creating DSC
configurations was discussed. Once a configuration is created and (ideally)
tested, it can be uploaded into Azure Automation DSC as a DSC node
configuration. When this is accomplished, the configuration is available to be
applied by Azure Automation DSC nodes (computers registered with the Azure
Automation DSC service). Creating DSC node configurations will be covered in
the next section, while registering machines as DSC nodes will be discussed
later in this chapter.
Creating a DSC node configuration begins with creating or downloading a
sample DSC configuration. In Figure 7-23 an example DSC configuration
checks to ensure the Windows feature WebServer is present. This is a
declarative method of configuration. If the WebServer feature is absent, then it is
installed.
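A configuration along the lines of the one shown in Figure 7-23 looks like this (a minimal sketch):
Configuration WebServerConfig
{
    Node 'localhost'
    {
        # Install IIS if it is not already present
        WindowsFeature WebServer
        {
            Ensure = 'Present'
            Name   = 'Web-Server'
        }
    }
}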
At the Import Configuration dialog, specify the path to the *.PS1 file
containing the DSC configuration code, optionally give a description, then click
OK.
Once a DSC configuration has been added it must be compiled. To compile a
configuration, click on its name, then click on the Compile button as shown in
Figure 7-25.
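Both the import and the compilation can also be performed with PowerShell. A minimal sketch with example names:
Import-AzureRmAutomationDscConfiguration -SourcePath 'C:\DSC\WebServerConfig.ps1' -ResourceGroupName 'MyAutomationRG' -AutomationAccountName 'FabrikamAutomation' -Published
Start-AzureRmAutomationDscCompilationJob -ConfigurationName 'WebServerConfig' -ResourceGroupName 'MyAutomationRG' -AutomationAccountName 'FabrikamAutomation'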
Once a node has been connected to Azure Automation DSC, its compliance
with the configuration can be discovered by clicking on DSC Nodes under
Configuration Management. A high-level view of the status is available and
additional details can be obtained by clicking on the node itself (Figure 7-27).
Machines outside of Azure are onboarded by generating DSC metaconfigurations with a script such as the following, which is based on the DscMetaConfigs sample in the Azure Automation documentation. It takes the registration URL and key from the Automation account:
[DscLocalConfigurationManager()]
Configuration DscMetaConfigs
{
    param
    (
        [Parameter(Mandatory=$True)]
        [String]$RegistrationUrl,
        [Parameter(Mandatory=$True)]
        [String]$RegistrationKey,
        [Parameter(Mandatory=$True)]
        [String[]]$ComputerName,
        [Int]$RefreshFrequencyMins = 30,
        [Int]$ConfigurationModeFrequencyMins = 15,
        [String]$ConfigurationMode = "ApplyAndMonitor",
        [String]$NodeConfigurationName,
        [Boolean]$RebootNodeIfNeeded = $False,
        [String]$ActionAfterReboot = "ContinueConfiguration",
        [Boolean]$AllowModuleOverwrite = $False,
        [Boolean]$ReportOnly
    )

    if(!$NodeConfigurationName -or $NodeConfigurationName -eq "")
    {
        $ConfigurationNames = $null
    }
    else
    {
        $ConfigurationNames = @($NodeConfigurationName)
    }

    if($ReportOnly)
    {
        $RefreshMode = "PUSH"
    }
    else
    {
        $RefreshMode = "PULL"
    }

    Node $ComputerName
    {
        Settings
        {
            RefreshFrequencyMins = $RefreshFrequencyMins
            RefreshMode = $RefreshMode
            ConfigurationMode = $ConfigurationMode
            AllowModuleOverwrite = $AllowModuleOverwrite
            RebootNodeIfNeeded = $RebootNodeIfNeeded
            ActionAfterReboot = $ActionAfterReboot
            ConfigurationModeFrequencyMins = $ConfigurationModeFrequencyMins
        }

        if(!$ReportOnly)
        {
            ConfigurationRepositoryWeb AzureAutomationDSC
            {
                ServerUrl = $RegistrationUrl
                RegistrationKey = $RegistrationKey
                ConfigurationNames = $ConfigurationNames
            }

            ResourceRepositoryWeb AzureAutomationDSC
            {
                ServerUrl = $RegistrationUrl
                RegistrationKey = $RegistrationKey
            }
        }

        ReportServerWeb AzureAutomationDSC
        {
            ServerUrl = $RegistrationUrl
            RegistrationKey = $RegistrationKey
        }
    }
}

# Example invocation; substitute the registration URL and key from your
# Automation account and the names of the computers to onboard
DscMetaConfigs -RegistrationUrl '<registration URL>' -RegistrationKey '<primary key>' -ComputerName @('Server01')
Next, run the script. This will create a folder called DscMetaConfigs in the
directory where the script was run. Copy this folder to the computer to be
onboarded, then run the following command:
Set-DscLocalConfigurationManager -Path ./DscMetaConfigs
Within a few minutes, the computer will show up under DSC nodes within the
Automation Account. From this point, a DSC node configuration can be
assigned to the new DSC node.
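Azure IaaS VMs do not require the metaconfiguration step; they can be registered directly. A minimal sketch, with example names (the node configuration name uses the ConfigurationName.NodeName format):
Register-AzureRmAutomationDscNode -AutomationAccountName 'FabrikamAutomation' -ResourceGroupName 'MyAutomationRG' -AzureVMName 'MyAzureVM' -NodeConfigurationName 'WebServerConfig.localhost'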
The process to collect machine data begins with choosing whether to use the
default Log Analytics workspace or to create a new one. In every Azure
subscription with IaaS VMs provisioned, a default Log Analytics workspace will
be created automatically (Figure 7-30).
FIGURE 7-30 Default Log Analytics workspace
Initially, this workspace is only used for storing security-related data, but it
can be extended to accomplish all of the features available with Log Analytics.
In the next few sections, the assumption is made that a new Log Analytics
workspace will be created. Once created, systems must be connected to it. The
process to connect systems varies based on the system type and where it is
hosted. Several potential
scenarios are considered in the next few sections. First, creating a Log Analytics
workspace will be covered, and then the process to connect data sources to the
workspace will be discussed.
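The workspace can be created in the portal or with PowerShell. A minimal sketch with example names; the SKU determines how the workspace is billed:
New-AzureRmOperationalInsightsWorkspace -ResourceGroupName 'MyLogAnalyticsRG' -Name 'FabrikamWorkspace' -Location 'eastus' -Sku 'PerNode'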
Notice that most of the VMs are connected to another workspace. Remember
that a default Log Analytics workspace is created when virtual machines are
added to the subscription. VMs will be connected to this workspace by default.
To connect a virtual machine to the newly created workspace, click on it and
click Disconnect. This will disconnect it from the default workspace, enabling it
to be connected to the new workspace.
FIGURE 7-35 MMA setup dialog, choosing the agent connection type
On the Azure Log Analytics dialog, enter the workspace ID and primary key.
Choose Azure Commercial for the Azure Cloud selection. The agent will need to
communicate to the Log Analytics service over TCP port 443. If a proxy is
required for this connection click Advanced to configure this. These options are
shown in Figure 7-36.
FIGURE 7-36 MMA installation connecting to the Log Analytics workspace
Click Next and choose whether to allow Windows Update to keep the agent
up to date (this is recommended). Finally click Install. Once the install
completes, machine data should be forwarded into the workspace within a few
minutes. Figure 7-37 shows data from the AWSWinVM computer.
FIGURE 7-37 A non-Azure connected VM with data visible in a Log Search
As shown in Figure 7-39, this query returns all collected records across all
data sources.
FIGURE 7-39 The Log Search dialog showing all records returned
To get more specific, imagine the need to see all computers that require
operating system updates. To search across all records and all data sources to
find this out, use this query:
search * | where Type == "Update" | where ( UpdateState == "Needed" )
This query searches all records (search *); within that dataset it looks only
for records of the type “Update” (where Type == “Update”), and within that set
only for records where UpdateState equals “Needed” (where ( UpdateState ==
“Needed” )). The result of this query is seen in Figure 7-40.
FIGURE 7-40 Log Search showing required operating system updates
To save such a query, from within the Log Search dialog, click Saved
Searches, click Add, and then enter a meaningful name, a category, the query
syntax, and optionally a Function Alias (a short name given to a saved search).
Then click OK. This dialog is shown in Figure 7-41.
Within the View Designer, an overview tile and a Dashboard tile are created.
The overview tile ideally is a view of the desired data at the highest level. In the
case of this example, a number view works best. The dashboard tile will be a
deeper dive into the data, showing a breakout of information by computer and
allowing the user to click on a result to get more detail.
Notice the highlighted portion of the query is different from the saved query.
This section summarizes the query output by count of computers, giving the
value to display in the Overview tile. The Overview tile setup is displayed in
Figure 7-42.
FIGURE 7-42 Adding the Overview tile in a custom visualization
Click Apply, and then click the View Dashboard tab to begin configuring the
Dashboard tile.
FIGURE 7-43 The General section of the Number and List dashboard
visualization
In the Title section enter a legend value that describes the data and a query
that summarizes the data. In the case of this example, the same query is used that
was specified in the Overview tile. In the List section enter a query that displays
the computers that have services in this state, as shown in Figure 7-44. The
query syntax is similar to the one used before.
search * | where Type == "ConfigurationData" | where ( SvcStartupType == "Auto" ) | where ( SvcState == "Stopped" ) | summarize AggregatedValue = count() by Computer
FIGURE 7-44 The Tile section of the Number and List dashboard
visualization
The only difference is that the final count statement is removed so that the list
of computers is shown. For the column titles, the defaults of Name = Computer
and Value = Count are appropriate as shown in Figure 7-45.
FIGURE 7-45 The List section of the Number and List dashboard visualization
Finally, the Navigation query is entered so that Log Analytics knows what
data to return when the visualization is clicked.
search {selected item} | where ( SvcStartupType == "Auto" ) | where ( SvcState == "Stopped" )
The highlighted section takes as its value whichever line item was clicked in
the visualization, and performs a log search against that specific computer. Finally,
click Apply, and then click Save at the top of the View Designer. Now, as shown
in Figure 7-46, the Overview section shows the custom visualization that was
just created. Clicking it reveals a greater depth of information.
FIGURE 7-46 The Overview section of Log Analytics showing a custom
visualization
To the left under the custom time range section of Log Search, click the Azure
Activity type. Notice that this dynamically changes the Log Analytics search,
which now shows:
search "AzureActivity" | where Type == "AzureActivity"
Now click the Succeeded type under ActivityStatus, and notice that this is also
added to the query automatically.
search "AzureActivity" | where Type == "AzureActivity" | where ActivityStatus == "Succeeded"
Next click a user under the Caller section, again automatically creating a more
precise query.
search "AzureActivity" | where Type == "AzureActivity" | where ActivityStatus == "Succeeded" | where Caller == "[email protected]"
The resulting query shows all Azure Activity Log entries where the status is
succeeded and the caller is a specific user. Figure 7-51 shows the results of this
query. Notice that the log search shows that this user successfully deleted a
virtual machine. This virtual machine was in a different subscription from the
Log Analytics workspace.
FIGURE 7-51 Searching the Azure Activity Log via Log Analytics
Clicking into a visualization provides more insights, and ultimately reveals the
log search behind the visualization data. For example, the following log search
produces a view of missing security updates for all systems being monitored by
Log Analytics.
Update | where OSType!="Linux" and Optional==false and Classification=~"Security Updates" | summarize hint.strategy=partitioned arg_max(TimeGenerated, *) by Computer,SourceComputerId,UpdateID | where UpdateState=~"Needed" and Approved!=false | render table
It is also possible to set up alerts based on the results of log search queries.
This feature is not currently available in the Azure portal so it must be
accomplished from the OMS portal. To access the OMS portal easily from the
Azure portal, navigate to the OMS Workspace option under the Log Analytics
properties, and then click OMS Portal. This opens the OMS portal and
automatically authenticates the user with the credentials used to log into Azure.
In this example, an alert is set up that sends an email if the number of missing
security updates in the environment is above two. Within the OMS portal home
screen, click the System Update Assessment tile. Under the Missing Updates
tiles, click the Security Updates classification. In the Log Search screen showing
the results, click Alerts in the upper left part of the dialog. Under General, give
the alert a name, description, and select the severity. The query is already
populated and the time window is set to 15 minutes. Under Schedule set the
frequency the alert is checked for and choose Number Of Results for the
Generate Alert Based On option. For Number Of Results, leave the option set to
Greater Than and enter 2. Select Suppress Alerts and under Suppress Alerts For,
set the number to 24 hours. This means we will get one alert for this condition
each day while it is active. Finally, under the Actions section set Email
Notification to Yes and ensure a valid email is configured. Note, there are other
actions that can be taken, such as executing a runbook or calling a webhook.
These options are displayed in Figure 7-54.
FIGURE 7-54 Alert creation dialog within the OMS Log Search portal
Log Analytics alerts can be set up in a similar way to what was described
earlier under Monitoring System Updates with Log Analytics.
In either case, the process to enable change tracking for these objects is to
click the configuration area and enter in the path to the files or registry keys that
must be monitored for changes. In addition to the path, additional properties are
available for Linux file change tracking, including Type (File or Directory),
Links (how Linux symlinks are handled), Recurse (tracking all files under the
specified path), and Sudo (enables tracking of files that require sudo privilege).
Clicking See All runs a log search query that returns all recorded changes. The
query syntax looks like this.
search in (ConfigurationChange) ConfigChangeType == "Daemons" and (SvcChangeType == "StartupType" or SvcChangeType == "Path" or SvcChangeType == "Runlevels") or ConfigChangeType == "Files" or ConfigChangeType == "Registry" or ConfigChangeType == "Software" and SoftwareName !contains_cs "KB2461484" and SoftwareName !contains_cs "KB2267602" or ConfigChangeType == "WindowsServices" and SvcChangeType != "State" and not (SvcName == "BITS" and SvcChangeType == "StartupType")
Thought experiment
In this thought experiment, demonstrate your skills and knowledge of the topics
covered in this chapter. You can find answers to this thought experiment in the
next section.
You are the administrator at Fabrikam and your HR contact has asked you to
help solve several problems they have been experiencing with their Time
Reporting application. The application runs on a single IaaS VM in Fabrikam’s
Azure subscription and because it is a legacy application it cannot be clustered
nor host multiple instances. The problem they have been experiencing is load-
related slowness during end of the month timeframes, when employees are most
likely to use the application. The HR department could choose a different IaaS
VM series/size with more CPU cores and RAM, but they want to avoid running
with too much capacity during slow times (most of the month). “Over-
provisioning the server during slow periods would be a waste of money,” says
Helen Wilson, the HR Director.
An additional problem is that the application has been brought down three
times over the past 12 months. In each case, a change was made to the server
that impacted the application, but this conclusion only was discovered after
hours of troubleshooting. The HR department needs your help.
How will you:
1. Help the HR department address their performance issue without over-
provisioning the virtual machine during the slower times over most of the
month?
2. Protect the configuration of the application server to maximize its
availability?
3. Help administrators quickly discover the root cause whenever the next
‘mystery’ outage occurs?
Chapter summary
This chapter covered a wide variety of topics related to automating processes
and configurations, and collecting and analyzing machine logs and data from
cloud-based and on-premises deployments in order to gain insights from this data.
Some of the key things to remember include:
Azure Automation runbooks can automate processes within on-premises or
cloud-based deployments. Any process that can be accomplished with
PowerShell, Python, or Bash can be automated with Azure Automation
runbooks.
Azure Automation DSC automates the configuration of computers in on-
premises or cloud environments. This solution can automatically correct or
alert on configuration drift and can work with Windows or Linux
computers.
Azure Log Analytics can consolidate machine data from on-premises and
cloud-based workloads and this data is indexed and categorized for quick
searching.
Azure Log Analytics has many management solutions that help
administrators gain value out of complex machine data. These solutions
contain pre-built visualizations and queries that help surface insights
quickly.
One of the key management solutions included with Log Analytics is the
Antimalware Assessment solution. This helps organizations find out about
systems missing malware protection, or discover malware infestations.
Another important Log Analytics management solution is the Change
Tracking solution. This solution enables organizations to discover “what
changed” quickly, helping in the troubleshooting of outages.
CHAPTER 8
Manage Azure Identities
Microsoft has long been a leader in the identity space. This leadership goes back
to the introduction of Active Directory (AD) with Windows 2000 before the
cloud even existed. Microsoft moved into cloud identity with the introduction of
Azure Active Directory (Azure AD), which is now used by over five million
companies around the world. The adoption of Office 365 drove this widespread
use of Azure AD. These two technologies, however, have very different purposes, with
AD primarily used on-premises and Azure AD primarily used for the cloud.
Microsoft has poured resources into making AD and Azure AD work together.
The concept is to extend the identity that lives on-premises to the cloud by
synchronizing the identities. This ability is provided by a technology named
Azure AD Connect. Microsoft has also invested in extending those identities to
enable scenarios such as single sign-on by using Active Directory Federation
Services (ADFS), which is deployed in many large enterprises.
Microsoft has continued pushing forward by developing options for
developers to leverage Azure AD for their applications. Microsoft provides the
ability for developers to extend a company’s Azure AD to users outside of the
organization. The first option is known as Azure AD B2C (Business to
Consumer). This allows consumers to sign into applications using their social
media accounts, such as a Facebook ID. A complementary technology, known as
Azure AD B2B (Business to Business), extends Azure AD to business partners.
As the cloud becomes more popular and Azure AD adoption continues to pick
up, there are some legacy applications that require you to use the traditional AD,
even in the cloud. For this, Microsoft has developed a service called Azure AD
Domain Services. This allows for traditional Kerberos and LDAP functionality
in the cloud without deploying Domain Controllers into a VNet.
This area of the 70-533 exam is focused on the management of identities by
using Azure, as well as monitoring their health and functionality by using Azure
AD Connect Health.
Skill 8.1: Monitor On-Premises Identity Infrastructure and Synchronization
Services with Azure AD Connect Health
Skill 8.2: Manage Domains with Azure Active Directory Domain Services
Skill 8.3: Integrate with Azure Active Directory (Azure AD)
Skill 8.4: Implement Azure AD B2C and Azure AD B2B
EXAM TIP
To determine the cause of the unhealthy state, you can review the alerts and
click through to Alerts Details. Figure 8-3 shows the details of the following
alert, stating the Health service data is not up to date.
The health agent installer should be downloaded to the DC you want to add to
the Azure AD Connect Health monitoring portal. Figure 8-5 shows the tool after
the initial install has been completed. Click Configure Now to start the process
to connect the DC to Azure AD Connect Health.
After clicking Configure Now, you are prompted to log in to your Azure AD
tenant by using a global admin account, as shown in Figure 8-6.
When you click through to the next blade in the portal, important information
about the Domain and its DCs appears, as shown in Figure 8-9. This blade
includes information about the Domain, DC errors, and monitoring of
authentications along with other performance monitors.
You can configure email alerts by clicking Notification Settings on the blade.
By default, the notifications are enabled and set up to send email to global
administrators. Figure 8-11, shows that another email address has been
configured to send email to a distribution group named
[email protected].
FIGURE 8-11 Email Notifications configured in Azure AD Connect Health
EXAM TIP
The following are some important capabilities that Azure AD Connect Health
for ADFS and App Proxy provide:
Monitoring with alerts to know when ADFS and ADFS proxy servers are
not healthy
Email notifications for critical alerts
Trends in performance data, which are useful for ADFS capacity
planning
Usage analytics for ADFS sign-ins with pivots (apps, users, and network
location), which are useful to understand how ADFS is used
Reports for ADFS, such as top 50 users who have bad
username/password attempts and their last IP address
The Azure AD Connect Health Alerts blade for ADFS shows a list of active
alerts. You can open an alert to view additional information, which can include
steps that you can take to resolve the alert and links to documentation. You can
also view historical data on alerts resolved in the past. Figure 8-12 shows alerts
for an ADFS installation in the Azure AD Connect Health portal.
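As with the agent for AD DS, the Connect Health agent for ADFS can be registered from the command line after it is installed on each federation server. A minimal sketch, run from an elevated PowerShell session:

# Register this ADFS server with Azure AD Connect Health
Register-AzureADConnectHealthAdfsAgent -Credential (Get-Credential)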
Security Reports
By using the security reports in Azure AD, you can protect your organization’s
identities.
Azure AD detects suspicious activities that are related to your user accounts. For
each detected action, a record called a risk event is created and shown in the
reports.
There are two types of security reports in Azure Active Directory:
Users flagged for risk Report showing an overview of user accounts that
might have been compromised.
Risky sign-ins Report showing indicators for sign-in attempts that might
have been performed by someone who is not the legitimate owner of a user
account.
Activity Reports
The audit logs report provides you with records of system activities that are
generally used for compliance purposes.
There are two types of activity reports in Azure Active Directory:
Audit logs The audit logs activity report provides you with access to the
history of every task performed in your tenant.
Sign-ins With the sign-ins activity report, you can determine who has
performed the tasks reported by the audit logs report.
On the Network blade, choose the VNet where domain services should be
deployed. Then, select the subnet, as shown in Figure 8-15.
On the Administrator group blade, add a user or group to the AAD DC
Administrators group that will be provisioned. You should select at least one
global admin for this group. Figure 8-16 shows that the user CloudAdmin, a
global admin for this Azure subscription, is selected.
FIGURE 8-16 Azure AD Domain Services Administrator group blade
The final step is to review the Summary blade and click OK. After
provisioning completes, you can review the Domain Services object. With the
Domain Services blade open, you then need to configure the DNS servers for
the VNet to point to the domain services addresses. As
shown in Figure 8-17, click Configure DNS Servers to update them with the IP
addresses that are shown. Notice these addresses are in the dedicated
DomainServices subnet.
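The VNet’s DNS servers can also be updated with PowerShell. The following is a minimal sketch, assuming a hypothetical VNet name and resource group, and using the two domain services IP addresses reported on the blade:

# Point the VNet's DNS settings at the Azure AD Domain Services addresses
$vnet = Get-AzureRmVirtualNetwork -Name "ExamRefVNET" -ResourceGroupName "ExamRefRG"
$vnet.DhcpOptions.DnsServers.Clear()
$vnet.DhcpOptions.DnsServers.Add("10.0.0.4")
$vnet.DhcpOptions.DnsServers.Add("10.0.0.5")
Set-AzureRmVirtualNetwork -VirtualNetwork $vnet

VMs already running in the VNet pick up the new DNS servers after a restart or a DHCP lease renewal.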
After the records are added, click Verify, and the domain is added to the list of
domains that can be used with this Azure AD.
EXAM TIP
If your network has more than one Active Directory Forest, you must use the
customized settings option in Azure AD Connect.
As shown in Figure 8-23, you are challenged for the global admin credentials
for the Azure AD you want to synchronize.
FIGURE 8-23 Enter the Global Admin Credentials for the Azure AD
As shown in Figure 8-24, you are now challenged to enter the on-premises
AD Domain Enterprise Admin credentials.
FIGURE 8-24 Enter the Enterprise Admin Credentials for the Active Directory
Domain
You can now complete the confirmations, and Azure AD Connect installs and
synchronizes the identities. After this is complete, the users and groups from the
on-premises AD appear in the Azure AD portal.
Using the Azure portal on the Azure AD that is being configured, click the
Licenses link and then select Products. When Azure AD Premium is listed,
select it and then click +Assign. Select the user; in this case, the CloudAdmin
user is selected, as shown in Figure 8-26.
FIGURE 8-26 The CloudAdmin user is selected for an Azure AD Premium
License
After you select the user(s), click Assignment options. The portal loads, as
shown in Figure 8-27. Click On for each of the three options, and then click
Assign. Assigning this license makes it possible to enable the MFA feature for
the CloudAdmin user.
FIGURE 8-27 The Azure AD Premium License options are selected including
MFA
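License assignment can also be scripted with the MSOnline PowerShell module. A minimal sketch, assuming a hypothetical examref tenant and the CloudAdmin user; AAD_PREMIUM is the SKU part number for Azure AD Premium:

Connect-MsolService
# A usage location must be set before the license can be assigned
Set-MsolUser -UserPrincipalName "[email protected]" -UsageLocation "US"
# Assign the Azure AD Premium license
Set-MsolUserLicense -UserPrincipalName "[email protected]" -AddLicenses "examref:AAD_PREMIUM"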
To force the user to use MFA, you need to create a Conditional Access Policy.
To create this policy by using the Azure portal, click Conditional Access under
the Security section of your Azure AD. Next, click +New Policy, provide a
name, and in the Assignments section select the users, the groups, or all users
that the policy applies to. Then click All Cloud Apps to have this policy impact
the applications in this Azure AD. Next, under Access Controls, click Grant
access and then select the Require multi-factor authentication option, as shown
in Figure 8-28.
FIGURE 8-28 Conditional Access Policy requiring all users to use MFA
After this is complete, upon their next login to any Azure or Office 365
service, the users are required to enroll with the MFA service. Figure 8-29
shows the MFA enrollment screen that the users see when they first enroll.
FIGURE 8-29 Azure requiring a user to enroll in MFA
After it is set up, every time users authenticate to one of the applications that
are part of your Azure AD, they need to complete a two-step login. The user
enters a username and password and is then challenged, as shown in Figure
8-30.
FIGURE 8-30 Azure MFA challenging a user for a code sent to a mobile
phone
To verify their identity, a code is sent to the user’s mobile phone via SMS
message. After the user receives the code on their mobile phone, as shown in
Figure 8-31, the code must be entered into the webpage to complete the sign-in
process and access the Azure AD application.
FIGURE 8-31 Azure MFA verification code sent to a user on a mobile phone
Click Access work and school and then click Connect, as shown in Figure 8-
34.
FIGURE 8-34 Click Connect to add the Windows 10 device to Azure AD
Next, sign in by using your account. If the user is set up for MFA, they are
challenged, which is to be expected for this type of sign-in, as shown in Figure
8-35.
The device is now added to Azure AD and appears in the Azure AD portal as
a managed device.
Implement Azure AD Integration in Web and
Desktop Applications
Independent Software Vendors (ISVs), enterprise developers, and software as a
service (SaaS) providers can develop cloud application services that integrate
with Azure Active Directory (Azure AD) to provide secure sign-in and
authorization for their services. To integrate an application or service with
Azure AD, a developer must first register the application with Azure AD.
Any application that wants to use the capabilities of Azure AD must first be
registered in an Azure AD tenant. This registration involves providing Azure AD
details about the application, such as the URL where it’s located, the URL to
send replies to after a user is authenticated, and the URI that identifies it.
To register a new application by using the Azure portal, open the Azure AD
blade, click App registrations, and then click +New application registration.
Next, enter your application’s registration information, including the Name and
the Application type: choose “Native” for client applications that are installed
locally, or “Web app / API” for web applications and resource/API applications
that are hosted on a secure server.
As shown in Figure 8-36, you need to complete the Sign-on URL field. If the
application is a “Web app / API,” you should provide the base URL. For
“Native” applications, provide the Redirect URI that Azure AD uses to return
token responses. Notice the app shown in Figure 8-36 is registered by using a
sign-on URL of http://localhost:30533.
FIGURE 8-36 Azure AD Application Registration
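Application registrations can also be created with PowerShell. A minimal sketch using the AzureRm module, with hypothetical names that match the example above:

# Register a web app with its sign-on and reply URLs
$app = New-AzureRmADApplication -DisplayName "ExamRefWebApp" `
    -HomePage "http://localhost:30533" `
    -IdentifierUris "http://examref.onmicrosoft.com/ExamRefWebApp" `
    -ReplyUrls "http://localhost:30533"
# Create a service principal so the application can be granted access in the tenant
New-AzureRmADServicePrincipal -ApplicationId $app.ApplicationId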
Next, enter the Organization name and the Initial domain name, and select
your country or region. In Figure 8-38, the name ExamRefB2C is used for both
the Organization name and the Initial domain name (which must be globally
unique in the Microsoft cloud). Next, click Create.
FIGURE 8-38 Creating a new B2C Tenant using the Azure portal
After the directory is created, a link appears that says, Click here to manage
your new directory. Click the link, as shown in Figure 8-39, to open the Azure
AD B2C tenant that was just created.
FIGURE 8-39 Link to the new B2C Tenant
The next step is to register this B2C tenant with your subscription. When the
window loads after clicking the link shown in Figure 8-39, you land on the
management page. Note that no subscription has been linked to the B2C tenant,
as shown in Figure 8-40.
To link the new Azure AD B2C tenant to an Azure subscription by using the
Azure portal, first click New in the Azure portal and then search the marketplace
for Azure Active Directory B2C and click Create. Next, click Link an existing
Azure AD B2C Tenant to my Azure subscription, as shown in Figure 8-41.
After this has completed, you are directed to the B2C page. As shown in
Figure 8-43, you can then click the Settings link.
FIGURE 8-43 The B2C Tenant after being linked to the Azure subscription
The settings for the Azure AD B2C tenant are then used for registering
applications, implementing social identity providers, enabling multi-factor
authentication, and other configurations. These settings can be selected from the
Azure portal, as shown in Figure 8-44.
FIGURE 8-44 B2C Tenant settings in the Azure portal
You can also switch to this Azure AD tenant by using the Azure portal and
selecting the B2C directory in the top-right corner of the portal. Figure 8-45
shows the ExamRefB2C directory selected as the directory being viewed in the
Azure portal.
Register an Application
To build an application that accepts consumer sign-up and sign-in, first you need
to register the application with an Azure Active Directory B2C tenant.
Applications created from the Azure AD B2C blade in the Azure portal must
be managed from the same location.
You can register the following types of applications:
Web Applications
API Applications
Mobile or Native Applications (Client Desktop)
From the B2C Settings in the Azure portal, click Applications and then click
+Add. To register a Web App, use the settings shown in Figure 8-46.
FIGURE 8-46 Registering an Application with the B2C Tenant
EXAM TIP
Reply URLs are endpoints where Azure AD B2C returns any tokens that your
application requests. Make sure to enter a properly formatted Reply URL. In
this example, the app is running locally and listening on port 40533 as
represented by the http://localhost:40533 URL.
After the application is created, you can view its properties by clicking its
name in the portal, where changes can be made to the application registration’s
configuration. The Application ID is also shown; your application’s code
requires this ID to call into your B2C tenant. The application created in this
example is shown in Figure 8-47.
FIGURE 8-47 The Application Registration after being added to the B2C
Tenant
After you complete this process, you are provided an App ID and an App
Secret, as shown in Figure 8-49. These are used to add the social provider to
your B2C tenant by using the Azure portal.
After the identity provider is created, you need to create a Sign-up or Sign-in
Policy. In the Azure portal, click Sign-up or Sign-in Policies and then click
+Add. From here, provide a name for the policy and select the providers, along
with the attributes and claims that you need for your application. After it is
configured, the portal page resembles Figure 8-51.
FIGURE 8-51 Creating a Sign-up or Sign-in Policy
After you save this policy, you can open it and click Run Policy. Running the
policy opens a new browser window that shows the user experience when users
connect their Facebook accounts to your application, as shown in Figure 8-52.
FIGURE 8-52 The Facebook Login Page as an Identity Provider to the B2C
Tenant Application
EXAM TIP
Azure AD B2B works with any type of partner directory. Partners use their
own credentials. There is no requirement for partners to use Azure AD and no
external directories or complex setup is required. The invitation to join is the
critical action that makes this scenario work.
Global admins and limited admins can use the Azure portal to invite B2B
collaboration users to the directory, to any group, or to any application.
To add a B2B user, open Azure AD in the Azure portal and click through to
All Users. From there, click +New Guest User, as shown in Figure 8-55.
The next step is to add the user’s email address and then include a welcome
message, as shown in Figure 8-56.
FIGURE 8-56 Adding a Guest User’s Email and Welcome Message
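The invitation can also be sent programmatically by using the AzureAD PowerShell module. A minimal sketch, assuming a hypothetical partner address and redirect URL:

Connect-AzureAD
# Invite an external user as a B2B guest and send the email invitation
New-AzureADMSInvitation -InvitedUserEmailAddress "[email protected]" `
    -InvitedUserDisplayName "Partner User" `
    -InviteRedirectUrl "https://myapps.microsoft.com" `
    -SendInvitationMessage $true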
The user receives an invitation to the Azure AD as a B2B user in their inbox,
as shown in Figure 8-57. They can click through and then complete a process to
add their user name to the directory.
FIGURE 8-57 Email Invitation received by a B2B user added to Azure AD
Click the application to open it and then click Users and Groups. Next, click
+Add User, as shown in Figure 8-59.
From the list of users, add the B2B user, as shown in Figure 8-60. Click
Assign and the user is added to the application.
FIGURE 8-60 Adding a B2B user to an Azure AD Application
Thought experiment
In this thought experiment, apply what you have learned in this chapter. You can
find answers to these questions in the “Answers” section at the end of this
chapter.
You are the new IT Administrator at TailSpin Toys, a global leader in selling
and distributing toys. Recently, TailSpin purchased one of the leading online
gaming websites. As a part of this transition, your CIO has determined that you
will use Azure and Office 365 moving forward.
As a result, you have been put in charge of setting up the identity with Azure
AD for the following systems:
Any TailSpin user needs to be able to authenticate to Azure solutions and
Office 365 by using the same user name and password as they use today.
Currently, they log in to the on-premises AD domain, which is running at the
Windows Server 2012 R2 domain functional level.
The online gaming system should use Azure AD and allow users to use
their social media accounts.
TailSpin’s main ordering system will be moved to Azure. The security team
has advised you that partners can no longer have accounts in the TailSpin
AD, so you need a way to provide them access to this application.
1. What Azure tools should you use to securely synchronize the AD accounts
from the AD Domain on-premises to Azure? How should you monitor this
replication as well as your domain controllers?
2. What type of directory should you deploy for the online gaming site? How
will you allow for the users to sign-up and sign-in to the application by
using their social media identities?
3. What type of directory should you use for the ordering system? What user
login IDs should the partners use since security is taking away the IDs that
had been provided to partners in the past?
Chapter summary
Below are some of the key takeaways from this chapter:
All versions of Active Directory can be monitored by using Azure AD
Connect Health
On-premises domain controllers can be monitored by using Azure AD
Connect Health
Email notifications can be set up to alert you to issues found while
monitoring your different AD directories
Azure AD Domain Services allows for joining Azure VMs to a directory
without the need to deploy DCs into Azure IaaS
Azure AD Domain Services supports the use of GPOs
Traditional AD-aware applications can be deployed to the cloud and use
LDAP and Kerberos authentication with the support of Azure AD Domain
Services
Custom domains can be added to Azure AD, such as contoso.com, but there
is always a default contoso.onmicrosoft.com domain
Multi-Factor Authentication (MFA) requires users to supply another form
of verification in addition to user name and password, in the form of a
phone call, text message, or verification app on a mobile phone.
MFA requires a Premium license for each user, and the user’s usage
location must be set prior to enabling the service
Windows 10 devices can be added to Azure AD as managed devices,
enabling BYOD or corporate cloud-only deployments
Azure AD B2C allows developers to leverage the social identities of users,
such as Facebook and Microsoft accounts, among others
Azure AD B2B allows administrators to invite partner companies to gain
access to their cloud resources
Index
A
access control 178–184, 310–321. See also security
access policies 338
ARM authentication 311–315
lock resources 319–321
management policies 315–318
role-based 192–195, 322–330
SaaS applications 370–371
Shared Access Signatures 180–182
stored access policy 182–183
Virtual Network Service Endpoints 183–184
access control lists (ACLs) 193
access panel extension 368
ACE. See access control entries
ACLs. See access control lists
ACR. See Azure Container Registry
ACS. See Azure Container Services
Active Directory (AD) 311, 469
registering application in 311–313
service principals in 313–314
Active Directory Federation Services (ADFS) 469
proxy monitoring 477–478
activity data 457–459
activity log alerts 119, 122–123
activity logs 456–459
activity reports 479
AD. See Active Directory
Adaptive application controls 357–358
Add-AzureRmAccount cmdlet 66
Add-AzureRmVirtualNetworkPeering cmdlet 221
Add-AzureRmVirtualNetworksubnetConfig cmdlet 237
Add-AzureRmVmssExtension cmdlet 134
ADFS. See Active Directory Federation Services
Alert Rules 189–190
alerts 39
activity log 119, 122–123
Azure Storage 189–190
based on log search queries 461
configuration 119–123
critical, email notifications for 476–477
metric 119–121
security 359–361
Allow Gateway Transit option 258–259
Antimalware Assessment management solution 462–463
append blobs 158
application delivery controller (ADC) 232
Application Gateway (App Gateway)
cookie-based session affinity 233
creating 234–239
deployment into virtual networks 234
design and implementation 285–286
end to end SSL 233
implementing 232–239
internal load balancers and 262
load balancing 233
secure sockets layer (SSL) offload 233
sizes 234
URL-based content routing 234
web application firewall 233
application gateways 266
Application Insights 6, 35–39, 111, 116–117
application logs 115
applications. See also Web Apps
Adaptive application controls 357–358
adding users and groups to 369–370
availability tests 37–39
deploying to web apps 14
desktop 495–496
diagnostic logs 28–29
directory-aware 485
Enterprise 369
health check pages 49
integration with Azure AD 495–496
integration with Azure AD B2B 510–511
LOB 130
migrating on-premise to Azure 485
registering, in Azure AD 311–313
registering with Azure AD B2C 502–504
SaaS 363–368, 485
revoking access to 370–371
scaling, in ACS 148–149
service principals and 315
virtual 27
web 477–478, 495–496
firewalls 355–357
registering 374–375
application settings
connection strings and 18–20
App Service Certificate 344–346
App Service Certificates 23–24
App Service Environments (ASE) 4
app service plans
creating 2–6
Azure portal 4–5
CLI 6
PowerShell 5
instances, scaling 43–45
migrating web app to 15–16
pricing tiers 2–4
resource monitoring 34–35
scaling up or down 42–43
apt-get package manager 170
A records 21, 26
ARM template files 294
ARM template parameter files 294
artifact files 295
ASC. See Azure Security Center
ASE. See App Service Environments
ASN. See Autonomous System Numbers
AS numbers 247
ASR. See Azure Site Recovery
ASR Capacity Planner 395, 403
asverify records 176–177
async blob copy service 164–170
authentication 311–315, 371
multi-factor 488–493, 506
social identity provider 503–505
storage account 178
author ARM templates 294–308
automation 446
accounts, creating 416–417
certificates 426–427
configuration 432–441
connections 425–426
credentials 424–425
for cloud management 415–441
integrating, with Web Apps 428–431
modules 422–424
process 416–428
runbooks 418–431
schedules 421–422
variables 427–428
Autonomous System Numbers (ASN) 243
Autoscale 134
Autoscale feature 42, 44–46
availability
managing, with Load Balancer 128–129
sets 125–129
virtual machines 124–131
zones 124–125
availability sets 231
availability tests 37–39
Average Response Time metric 34
az account list-locations command 207
az ad app create command 312
az ad sp create command 313
AzCopy
async blob copy 167
blob and container management 163
az group create command 70, 309
az group deployment create command 309
az image command 97
az image list command 72
az lock create command 321
az network application-gateway address-pool
command 286
az network application gateway command 239
az network command 239
az network lb command 282
az network nic create command 72
az network nic update command 274
az network nsg create command 72
az network public-ip create command 72
az network rule create command 72
az network vnet create command 71, 207
az network vnet peering list command 221
az network vnet subnet create command 71, 207
az policy assignment create command 319
az provider list CLI command 298
az role definition command 324
az role definition create command 329
az storage account create command 71
az storage blob generate-sas command 182
az storage container create command 161
Azure Active Directory 469
adding application to 364
adding custom domains 485–486
Azure AD Connect and 487–488
Editions 369
integrating on-premises AD with 485–496
integration in web and desktop applications 495–496
Microsoft Graph API and 496
Multi-Factor Authentication 488–493
registering application in 311–313
service principals in 313–314
single-sign on 363–368
Azure Activity Log 456–459
Azure AD. See Azure Active Directory
Azure AD B2C (business to consumer) 371–377
Azure AD Business to Business (B2B) 469, 497
application integration 510–511
collaboration implementation 508–510
partner users configuration 508–510
Azure AD Business to Consumer (B2C) 469, 497–508
directory creation 497–502
enabling multi-factor authentication 506
registering application 502–504
Self-Service Password Reset 507
social identity provider authentication 503–505
tenant creation 497–502
Azure AD Connect 469, 485, 487–488
Azure AD Connect Health 470–479
activity reports 479
ADFS and web application proxy server
monitoring 477–478
domain controller monitoring 473–476
email notifications for critical alerts 476–477
security reports 479
sync engine and replication monitoring 470–473
utilization reports 479
Azure AD Domain Join 493–495
Azure AD Domain Services 469, 479–485
implementation 480–482
joining Azure virtual machines to 482–483
on-premise app migration 485
VM management using Group Policy 484
Azure AD tenant 495
Azure AppService Web Apps. See Web Apps
Azure Automation. See automation
Azure Automation DSC service 88
Azure Backup
agents 380–382
backup and restore data 383–387
encryption passphrase 382
Azure Cloud Shell 61, 73, 82, 206
Azure Container Registry (ACR) 142–144, 149
Azure Container Services (ACS) 57, 138–150
Kubernetes cluster in 145
managing containers with 146–148
open-source tooling configuration 138–139
scaling applications in 148–149
Azure Data Lake 192–195
Azure Diagnostics agent 119
Azure Diagnostics Extension 112
Azure Disk Encryption 105
Azure Domain Services
adding and managing devices 493–495
Azure Fabric Controller 59
Azure files
adding new share with 169–170
hierarchy 168
use cases 169
Azure File Service 106–110
connecting to, outside of Azure 108–110
Azure Key Vault 19, 24, 179–180, 302, 336–342
certificate management 342–344
Azure Load Balancer 128–129
Azure Log Analytics. See Log Analytics
Azure Managed Disks 126
Azure Monitor 111
Azure portal
App Gateway creation in 234–236
Application Insights in 36–37
app service plan creation in 4–5
ARM template deployment in 309
automation account creation in 416–417
availability set creation in 127
Azure AD B2C tenant creation in 497–502
Azure File Share creation in 106
blob and container management in 159–160
configuration of VMs as backend pool in 285–286
deployment slot creation in 8–9
diagnostics configuration 114–117
DNS settings configuration in 212, 268–269
enabling diagnostic logs in 28
gateway subnet creation in 210–211
handler mappings configuration in 26
load balancer creation in 277–280
Log Analytics workspace creation in 443
migrating web app to separate app service plan in 15
NSG creation in 270–273
NSGs using 226–229
public IP address creation with 266–267
Recovery Services vault in 378–379
static private IP addresses with 263
swapping deployment swaps in 11–13
using custom script extension 88
viewing streaming logs in 32–33
virtual machine creation in 60–66
virtual machine scale set creation in 132–134
virtual network creation in 201–204
VNet peering in 218–221
VNet-to-VNet connections in 251–257
web app creation in 6–7
Azure private peering 244
Azure public peering 244
Azure resource connectivity 200
Azure Resource Manager (ARM)
authentication 311–315
private IP addresses 261
templates 293–334
author 294–308
creating VM from 74–79
deployment 308–309
file types 294–295
functions 299
implementing 294–309
NIC resources 299–302
parameter file 305–307
schemas 295–296
virtual machine resource 302–308
virtual network resources 296–299
virtual machines
alerts configuration 119–123
availability 124–129
diagnostics 112–119
monitoring 110–123
networking 260–286
resizing 130–132
scaling 129–137
VNet peering 248–250
VNets
connecting 217–222
Azure resource policy 316–317
Azure Security Center (ASC) 346–363
applications node 355–358
data collection 347
email notifications 349
enabling 346–350
enabling protection for non-Azure computers 350
Identity and Access solution 358–359
networking node 353–354
preventing security threats with 351–359
pricing tiers 349–350
recommendations 358
responding to security threats with 359–363
security policy 348
Storage and Data node 354–356
Threat intelligence detection capability 362–363
Azure Site Recovery (ASR) 393–412
failover testing 401–402
Hyper-V virtual machine protection 402–408
Recovery Plan 401
Unified Setup and vault credential 398
virtual machine protection 408–412
VMWare and physical machine protection 394–402
Azure Storage 155–177
access control 178–184
async blob copy service 164–170
blob 155–164
encryption 190–195
monitoring and alerts 189–190
replication options 163–164
SMB file storage 168–170
Azure Storage Diagnostics 185–188
Azure Storage Explorer 162, 167–168, 182–183
Azure Storage Service Encryption 104–105
Azure Storage Service Encryption (SSE) 191
Azure-to-Azure Site Recovery 408–412
Azure Traffic Manager
adding endpoints 51–52
configuration 47–52
profile creation 48–50
Azure Virtual Networks (VNets) 199–292
address ranges 202–203
App Gateway 232–239
ARM VM networking 260–286
communication strategy 287–289
configuration 199–239, 251–260
connecting, using VNet peering 217–222
connectivity 200–201
creating 200–207
design subnets 208–209
DNS setup 211–214
gateway subnets 210
introduction to 199
multi-site 251–260
network connectivity 239–259
network security groups 222–230
service chaining 250
system routes 214–215
user defined routes 214–217
virtual machine deployment into 230–232
az vm availability-set create command 72, 128
az vm create command 73, 98
az vm disk attach command 99
az vm extension set command 90
az vm generalize command 97
az vm list-vm-resize-options command 132
az vmss create command 137
az vmss update-instances command 137
az vm unmanaged-disk command 99
B
backend pools 129, 130, 278–279, 285–286
backup agents 380–382
Backup policy 385–386
backups 383–387
configuration 40–41
BGP routing protocol 243
BlobCache 98–100
blob files 93–94
blob snapshots 387–392
blob storage 155–164
account types 157
async blob copy service 164–170
CacheControl property of 174
CDN endpoints and 172
encryption 191
managing 159–164
metadata 158–159
time-to-live (TTL) period 173
types 158
Blob Storage account 94
block blobs 158
BMC. See baseboard management controller (BMC)
boot diagnostics 112, 117–118, 119
Bring Your Own Device (BYOD) policy 493
brute-force attacks 359
built-in roles 323–327
C
CacheControl HTTP header 173
Capacity Planner for Hyper-V Workloads 403–404
CDN. See Content Delivery Network
certificate authorities 23
certificate authority (CA) 343
certificate management 342–344
certificates 342
App Service Certificate 344–346
automation 426–427
creating 343–344
importing 342–343
SAML signing 365
Change Tracking management solution 463–466
CIFS. See Command Internet File System
cifs-utils package 170
Classless Inter-Domain Routing (CIDR) 200
CLI. See Command Line Interface
client-side telemetry data 39
cloud computing 1
cloud identity 469
cloud management
automation for 415–441
cloud services 351
Cloud Shell 73, 82, 206
CLR. See Common Language Runtime (CLR)
Cmdkey.exe 170
CNAME records 21, 22–23, 52, 176, 177
Command Line Interface (CLI) 1
App Gateway creation 239–240
application settings configuration 20
app service plan creation 6
ARM template deployment 309–310
async blob copy 166–167
availability set creation 128
Azure File Share creation 107
blob and container management 161–162
configuring VMs as backend pools 286
custom script extension 89–91
deployment slot creation 10
DNS settings configuration 213–214, 270
enabling diagnostic logs 29
load balancer creation 282–283
NSG creation 274
NSG creation and association 230
public IP addresses 268
resizing VM 132
retrieving diagnostic logs 32
static private IP addresses 264
swapping deployment slots 14
viewing streaming logs 33
virtual machine scale set creation 137
virtual network creation 206–208
VM creation 70–73
VM image creation 97
VNet peering 221–222
web app creation 8
Common Internet File System (CIFS) 106
communication strategy 287–289
community sourced runbooks 419
Conditional Access Policy 490
configuration
ACS 138–139
alerts 119–123
application settings 17–21
availability sets 125–129
availability zones 124–125
Azure diagnostics 112–119
Azure Storage Diagnostics 185–188
Azure Traffic Manager 47–52
backups 40–41
Content Delivery Network 170–176
custom domains 20–22, 176–177
DNS settings 212–214, 268–270
handler mappings 26
private static IP addresses 261–264
social identity providers 376–377
SSL certificates 22–26
virtual applications and directories 27
virtual machines 64–65, 82–92
virtual machine scale sets 132–137
virtual networks 199–239, 251–260
Virtual Network Service Endpoints 183–184
Web Apps 16–27
for scalability and resilience 42–51
configuration automation 432–441
Configuration Server 396, 397–398, 399–400
connections
automation 425–426
connection strings
application settings and 18–20
connectivity. See also network connectivity
VNets 200–201
containers 156
Azure Container Registry 142–144
Azure Container Services 138–150
images 139–142
managing 146–148, 159–164
metadata 158, 159
migrating workloads 149
root 156, 157
troubleshooting 150
Containers Monitoring Solution 149–150
Content Delivery Network (CDN) 2–16
configuration 170–176
custom domains for 176–177
endpoints 170–172
pricing tiers 171
profile creation 171
versioning assets with 174–176
content routing
URL-based 234
continuous integration/continuous delivery (CI/CD) workflow 433
Contributor built-in role 324–326
ConvertTo-SecureString cmdlet 110, 342
cookie-based session affinity 233
copyIndex() function 300
CPU Percentage 35
creation, renaming, updating, and deletion (CRUD) operations 457
credentials
authentication of 488–493
automation 424–425
on-premises 486
custom domains 485–486
associating with web apps 22
configuration, for web apps 20–22
customer-managed DNS settings 213
custom resource policy 317–318
Custom Script Extension 82, 88–91
custom security alerts 360–362
custom visualizations
in Log Analytics 452–455
D
Dashboard tile 453–455
data
activity 457–459
backup and restore 383–387
diagnostic, analyzing 187–188
logging 187, 188
machine 441–443, 448–449
metrics 187, 188
querying. See queries
resource 457–459
data analysis. See Log Analytics
database as a service (DBaaS) 354
data churn 394
data collection 347, 441–466
data disks 95
data encryption. See encryption
data protection 335–336. See also security
encryption 342–346
DCs. See domain controllers
debugging
remote, of VMs 91–92
default tags 225
deployment
applications 14
ARM template 308–309
ARM templates 74–77
backup agents 380–382
Web Apps 2–16
deployment script files 294
deployment slots
cloning existing 9
creating
in Azure portal 8–9
in CLI 10
in PowerShell 9–10
defining 8–10
multi-phase 11, 12–13
production 8
swapping 11–14
Azure portal 11–13
CLI 14
with PowerShell 13
design subnets 208–209
Desired State Configuration (DSC) 432–441
Azure Automation 434–436
monitor and update machine configurations
with 436–441
configurations
creating 432
managing 433
metaconfigurations 438–441
node configurations
generating 434–436
nodes
adding 436–441
registration options 436
resources
built-in 434
custom 434
importing 433–434
Desired State Configuration (DSC) extension 82, 83–88
desktop applications 495–496
DevOps principles 433
diagnostic logs
application 28–29
enabling 27–29
locations 30
retrieving 29–32
in PowerShell 32
using FTP 30
using Site Control Manager 30–31
streaming, viewing 32–33
web server 28–29
diagnostics
Azure Storage Diagnostics 185–188
boot 112, 117–118, 119
configuring Azure 112–119
data analysis 187–188
guest operating system 112
Linux, enabling and configuring 118–119
DigiCert 343
directories
virtual 27
Direct Server Return (DSR) 283–284
Disable-AzureRmTrafficManagerEndpoint cmdlet 52
disaster recovery. See recovery services
disk caching 98–100
disk encryption 104–106
disk redundancy 103–104
disks
managed 126
mounting 104
VM 95
DNS names 48
DNS records
adding 21–23
updating 48
docker-compose command 140
Docker containers 6
Domain Controller (DC) 213
domain controllers (DCs)
monitoring 473–476
Domain Name System (DNS) 211–214
at NIC level 268–270
domains
custom 485–486
for storage and CDN 176–177
custom, configuration of 20–22
DPM protection agent 380
DSR. See Direct Server Return
dynamic IP addresses 261
E
elastic scale 42
email notifications 349
encryption
Azure Data Lake 192–195
data 342–346
disk 104–106
keys, create and import 336–342
passphrase, Azure Backup 382
storage 190–195
Enterprise applications 369
enterprise Azure scaffold 316
error messages 28
event categories 122
event log data 115
Event Tracing for Windows (ETW) 115–116
ExpressRoute 242–245
External ASEs 4
F
Facebook 371–377, 503–505
failed requests
logs style sheet file 30
tracing 28
failover testing 401–402
federation
with public consumer identity providers 371–377
federation-based single sign-on 363–366
files
change tracking 464
hierarchy 168
purging 175
file shares
adding new 169–170
file share service 106–110
file system permissions 193–195
firewall rules 193–194
firewalls 201
network 353
web application 353, 355–357
web application firewall 233
FTP client
for retrieving log files 30
Fully Qualified Domain Name (FQDN) 211, 265
G
gateway subnets 210–211, 252–253
generate-ssh-keys parameter 71
geo-redundant storage (GRS) 94
Geo-Redundant Storage (GRS) 392–393
geo-replication 392–393
Get-AzureKeyVaultCertificateOperation cmdlet 343, 344
Get-AzureRmApplicationGatewayBackendAddressPool cmdlet 286
Get-AzureRmApplicationGateway cmdlet 286
Get-AzureRmLocation cmdlet 204
Get-AzureRmNetworkInterface cmdlet 263, 269, 273, 286
Get-AzureRmRemoteDesktopFile cmdlet 80
Get-AzureRmResourceGroup cmdlet 67, 204
Get-AzureRmResourceProvider cmdlet 298
Get-AzureRmStorageAccount cmdlet 67
Get-AzureRmStorageAccountKey cmdlet 110
Get-AzureRmStorageKey cmdlet 165
Get-AzureRmStorageAccountKey cmdlet 107
Get-AzureRmTrafficManagerProfile cmdlet 51
Get-AzureRmVirtualNetwork cmdlet 230, 237
Get-AzureRmVirtualNetworkPeering cmdlet 221
Get-AzureRmVM cmdlet 99
Get-AzureRmVMImageOffer cmdlet 69
Get-AzureRmVMImagePublisher cmdlet 69
Get-AzureRmVMImageSku cmdlet 69
Get-AzureRmVMSize cmdlet 131
Get-AzureRmWebAppSlot cmdlet 10
Get-AzureStorageBlobCopyState cmdlet 166
Get-AzureWebsiteLog cmdlet 33
Get-PhysicalDisk cmdlet 103
GlobalSign 343
Google 371–377
Group Policy Objects (GPOs) 484
groups
adding to applications 369–370
GRS. See geo-redundant storage
guest operating system diagnostics 112
H
handler mappings 26
hard disk drives (HDDs) 63
hardware security modules (HSMs) 180, 336
HCM. See Hybrid Connection Manager
health check pages 49
health probes 276–277, 279
host caching 99
hot access tier 157
HSMs. See hardware security modules
HTTP GET requests 37
HTTP probe 276
HTTP probes 129
HTTPS traffic 233
hub and spoke network topology 320
hybrid cloud 239
Hybrid Connection Manager (HCM) 288
hybrid connections 288–289
hybrid network connectivity 239–259
Hybrid Runbook Worker 419–421
Hyper-V 58
Hyper-V-based workloads 402–408
Hyper-V hosts 407
Hyper-V sites 405–407
I
identity infrastructure 470–479
identity management 469–514
Azure Active Directory 485–496
Azure AD B2C 497–508
Azure AD Connect Health 470–479
Azure AD Domain Services 479–485
social identity provider authentication 503–505
identity providers 376–377
ILB ASEs 4
Import-AzureKeyVaultCertificate cmdlet 343
Independent Software Vendors (ISV) 495
Infrastructure as a Service (IaaS) 351
infrastructure-as-code (IaC) assets 433
internal load balancer (ILB) 4
internal load balancers (ILBs) 262
Internet connectivity 200
Internet-facing load balancers 266
IP addresses
allocation of 262
default tags 225
dynamic 261
private 261
private static 261–264
public 247, 261, 264–268
static private 263–264
subnets 208, 210–211
VNets 202–203
IP address spaces 247
IP forwarding 274–275
IPSec VPN 240
J
JavaScript Object Notation (.json) files 77
Just in time (JIT) VM access 352–353
K
Key Vault 302, 336–342
certificate management 342–344
Kubernetes 147–149
clusters 145
monitoring 149–150
Kubernetes API endpoints 138
L
large scale sets 132
LCM. See Local Configuration Manager
line of business (LOB) applications 130
Linux agent 60
Linux-based virtual machines
connect and mount Azure File from 110
connecting to 81–82
custom script extension with 89–91
diagnostics, enabling and configuring 118–119
Linux distributions
for VMs 58–59
load balancers 128–129, 233, 262, 266, 275–283
health probes 276–277
Local Configuration Manager (LCM) 433, 436
locally redundant storage (LRS) 94
lock resources 319–321
Log Analytics 112, 441–466
connecting Activity Log to 457–458
custom visualizations 452–455
data sources
connecting 444–448
searching 441–452, 449–452
default workspace 442–443
malware status monitoring with 462–463
management solutions 448–449
monitoring system updates with 459–461
queries 458–459
server configuration change tracking in 463–466
visualizing Azure resources across multiple
subscriptions 456–457
workspace creation 443–444
workspace ID and keys 446–447
writing activity data to 457–459
Log Analytics query language 361
log files. See also diagnostic logs
retrieving
using FTP 30
using Site Control Manager 30–31
logging data 187, 188
Logic Apps 121
Login-AzureRmAccount cmdlet 66, 204
Log Search feature 449–452
LRS. See locally redundant storage
M
machine configurations 436–441
machine data 441–443, 448–449
makecert.exe 80
malware 462–463
managed disks 95, 126
managed images 96, 97
management policies 315–318
man-in-the-middle attacks 80
MARS. See Microsoft Azure Recovery Services
Master Target Server 396
Memory Percentage 35
metadata
setting, with storage 158–159
metric alerts 39, 119–121
metric-based scale conditions 44–45
metrics data 187, 188
Microsoft Azure AppService Web Apps. See Web Apps
Microsoft Azure Datacenter
IP ranges 225
Microsoft Azure Linux Agent (waagent) 59–60, 95
Microsoft Azure Recovery Services (MARS) agent 380–382
Microsoft Graph API 496
Microsoft Monitoring Agent (MMA) 446–448
Microsoft peering 244
migration
lift and shift 106
on-premises apps to Azure 485
workloads 149
MMA. See Microsoft Monitoring Agent
mobile devices 493–495
Mobility service extension 396, 409
modules 422–424
monitoring
ARM VMs 110–123
Azure Storage 189–190
clusters 150
Kubernetes 149–150
options 111–112
Most attached resources 359
multi-factor authentication (MFA) 488–493, 506
multi-phase deployment swaps 11, 12–13
Multiprotocol Label Switching (MPLS) 242
multi-site network connectivity 239–259
multi-step web tests 37
N
net use command 110
network connectivity 239–259
ExpressRoute 242–245
network prerequisites 247–248
on-premises 260
VNet peering 248–250, 257–259
VNet-to-VNet connections 251–256
VPN Gateway 246–247
network interface (NIC) 268–270
associating NSG with 270–274
resources 299–302
network security groups (NSGs) 208, 209, 222–230, 270–278, 353, 391
associating 225–229
default rules 224–225
default tags 225
properties 222–223
rules 222–223
Network Security Groups (NSGs) 4
network traffic 248, 249
Network Watcher 111
New-AzureKeyVaultCertificateOrganizationDetails cmdlet 343
New-AzureKeyVaultCertificatePolicy cmdlet 343
New-AzureRmADApplication cmdlet 312
New-AzureRmAppServicePlan cmdlet 5
New-AzureRmAutomationCertificate cmdlet 427
New-AzureRmAutomationVariable cmdlet 428
New-AzureRmAvailabilitySet cmdlet 68, 128
New-AzureRmImage cmdlet 97
New-AzureRmImageConfig cmdlet 97
New-AzureRmKeyVault cmdlet 339
New-AzureRmNetworkInterface cmdlet 69
New-AzureRmNetworkSecurityGroup cmdlet 68, 273
New-AzureRmNetworkSecurityGroup PowerShell
cmdlet 229
New-AzureRmNetworkSecurityRuleConfig
cmdlet 68, 229, 273
New-AzureRmOperationalInsightsWorkspace
cmdlet 444
New-AzureRmPolicyDefinition cmdlet 318
New-AzureRmPublicIpAddress cmdlet 237, 267
New-AzureRmResourceGroup cmdlet 67, 204, 308, 338
New-AzureRmResourceGroupDeployment
cmdlet 79, 308
New-AzureRmResourceLock cmdlet 321
New-AzureRmRoleDefinition cmdlet 329
New-AzureRmStorageAccount cmdlet 67
New-AzureRmTrafficManagerEndpoint cmdlet 51
New-AzureRmTrafficManagerProfile cmdlet 50
New-AzureRmVirtualNetwork cmdlet 205, 213
New-AzureRmVirtualNetworkSubnetConfig
cmdlet 67, 205
New-AzureRmVMConfig cmdlet 69, 70
New-AzureRmVmssConfig cmdlet 134
New-AzureRmWebApp cmdlet 7
New-AzureRmWebAppSlot cmdlet 9–10
New-AzureStorageAccount cmdlet 164
New-AzureStorageBlobSASToken cmdlet 181
New-AzureStorageContainer cmdlet 160–161, 165
New-AzureStorageContext cmdlet 107, 165
New-AzureStorageShare cmdlet 107
New-PSDrive cmdlet 110
New-SelfSignedCertificate cmdlet 80
New-VirtualDisk cmdlet 103
NIC. See network interface
NSGs. See network security groups
O
OAuth2 protocol 312
OMS. See Operations Management Suite
OMS Gateway 442
OMS Portal 461
on-premises connectivity 201, 260
on-premises credentials 486
on-premises environment
data collection in 442
on-premises infrastructures 287
OpenID Connect 363, 374
open-source tooling 138–139
operating system disks 95
operating system images 95–98
creating VMs from 97–98
managed 96, 97
unmanaged 96–97
Operations Management Suite (OMS) 149–150, 378
Organization Units (OUs) 484
Overview tile 452–453
Owner built-in role 324
owning groups 195
owning users 195–196
P
page blobs 158
password-based single sign-on 366–368
passwords
Self-Service Password Reset 507
synchronization 482
performance counters 113, 115
permissions 311
file system 193–195
VMWare 398–399
Personal Information Exchange (.pfx) files 23
ping tests 38
platform as a service (PaaS) 354
Platform-as-a-Service (PaaS) 1
point-to-site virtual private network (VPN) 240–241
port reuse 283
PowerShell
App Gateway creation in 236–238
application settings configuration in 20
app service plan creation in 5
ARM template deployment using 308
async blob copy with 164–166
automation account creation using 417
availability set creation with 128
Azure File Share creation with 107–108
blob and container management with 160–161
configuring VMs as backend pools with 286
connect and mount Azure File using 110
Custom Script Extension 82, 88–91
deployment slot creation with 9–10
Desired State Configuration 432–441
DNS settings configuration with 213–214, 269
DSC extension 82, 83–88
enabling diagnostic logs with 29
load balancer creation with 280–282
Log Analytics workspace creation with 444
modules 422–424
NSG creation with 273–274
NSGs using 229–230
public IP address creation using 267–268
Recovery Services vault with 379
remoting 80–81
resizing VM with 131
retrieving diagnostic logs with 32
runbooks 416–428
static private IP addresses with 263–264
swapping deployment slots with 13
Traffic Manager profile creation with 50
unmanaged VM image creation with 96–97
viewing streaming logs in 33
virtual machine scale set creation with 134–136
virtual network creation in 204–205
VM creation with 66–70
VNet peering with 221
web app creation in 7
PowerShell cmdlets. See also specific cmdlets
Azure 1
PowerShell Gallery 423
Premium storage 101–102
pricing tiers 336, 349–350
primary keys 179
private IP addresses 263–264
private static IP addresses
configuration 261–264
proactive diagnostic alerts 39
process automation 416–428
Process Server 396
production deployment slot 8
ProvisionVMAgent parameter 69, 83
public IP addresses 261, 264–268
Publish-AzureRmVMDscConfiguration cmdlet 85
Q
query strings 175–176
R
RAID 0 disk striping 103
RBAC. See role based access control
read-access geo-redundant storage (RA-GRS) 94
Read Access-Geo Redundant Storage (RA-GRS) 393–394
Reader built-in role 326–327
Recover Data Wizard 384
recovery services
Azure Site Recovery 393–412
backup agents 380–382
backup and restore data 383–387
geo-replication 392–393
planning 394–396
snapshots 387–392
vault 378–379, 393, 398, 405
Recovery Services Agent 403–404, 407
redundancy 125
Register-AzureRmAutomationDscNode cmdlet 437
registry change tracking 464
remote debugging
of VMs 91–92
remote desktop protocol (RDP) 79–80, 359
Remote Gateways 257–259
Remove-AzureRmPolicyDefinition cmdlet 318
Remove-AzureRmTrafficManagerEndpoint
cmdlet 51–52
replication 400–401, 409–412
monitoring 470–473
resizing
virtual machines 130–132
resource data 457–459
resource locks 319–321
resource policies
assignment 318–319
Azure 316–317
built-in 316
custom 317–318
resource schemas 296
role assignment 327–328
role-based access control 192–195
role based access control (RBAC) 95
role-based access control (RBAC) 322–330
custom roles 328–329
standard roles, implementing 322–328
root container 156, 157
route tables 208
Run As accounts 417
Runbook Gallery 418
runbooks 121, 416–431
S
SAML 2.0 363
SAML signing certificates 365
SAN. See subject alternative name
Save-AzureRmVMImage cmdlet 96
Save-AzureWebsiteLog cmdlet 32
scalability features 42–47
scaling
Azure VMs 129–137
basic tier Web App 428–431
in ACS 148–149
virtual machine scale sets 132–137
schedule-based scale conditions 46–47
schedules
automation 421–422
$schema property 305
SCOM. See System Center Operations Manager
secondary keys 179
secure shell (SSH) protocol 81–82
secure sockets layer (SSL)
end to end 233
secure sockets layer (SSL) offload 233
security 335–377. See also access control
authentication 371
Azure Security Center 346–363
encryption 336–342
multi-factor authentication 488–493, 506
network security groups 270–274, 391
single sign-on 363–368
social accounts 371–377
SSL/TLS certificates 342–346
threat prevention 351–359
threat response 359–363
Security alerts 359–363
security policy 348
security reports 479
security rules 208, 271–272
Self-Service Password Reset 507
self-signed SSL certificates 80
server configuration changes 463–466
Server Messaging Block (SMB) protocol 106
Server Name Indication (SNI) 25
Service Bus Relay 288
service chaining 250
Service Health 39
service level agreements (SLAs) 125
service principals 313–314
Set-AzureKeyVaultCertificateIssuer cmdlet 344
Set-AzureKeyVaultSecret cmdlet 342
Set-AzureRmApplicationGatewayBackendAddressPool cmdlet 286
Set-AzureRmNetworkInterface cmdlet 263, 269, 273
Set-AzureRmOsDisk cmdlet 98
Set-AzureRmVirtualNetwork cmdlet 237
Set-AzureRmVirtualNetworkSubnetConfig cmdlet 230
Set-AzureRmVMCustomScriptExtension cmdlet 89
Set-AzureRmVMDataDisk cmdlet 99
Set-AzureRmVMOperatingSystem cmdlet 69, 80, 81, 83
Set-AzureRmVMOSDisk cmdlet 98, 99
Set-AzureRmVMSourceImage cmdlet 69
Set-AzureRmWebApp cmdlet 20, 29
Set-AzureStorageAccount cmdlet 164
Set-AzureStorageBlobContent cmdlet 161
Set-AzureStorageServiceLoggingProperty cmdlet 186
Set-AzureStorageServiceMetricsProperty cmdlet 186
shadow IT system 315
Shared Access Signature (SAS) 180–182
shared access signature (SAS URL) 88
single sign-on 363–368
federated 363–366
password-based 366–368
Site Control Manager (Kudu) 30–31
Site Recovery Deployment Planner (SRDP) 394
Site Recovery Provider 403–404, 405–406
site-to-site (S2S) virtual private networks
240–242, 247, 287
SLAs. See service level agreements
SMB. See Server Messaging Block
SMB file storage 168–170
snapshots 387–392, 408
social identity provider authentication 503–505
social identity providers 376–377
social media accounts 469
software as a service (SaaS) applications 485
revoking access to 370–371
single sign-on with 363–368
solid state disks (SSDs) 63
SQL Server Always On Availability Groups 284
SSDs. See solid state disks
SSH certificates 62–63
SSL certificates
App Service Certificates and 23–24
configuration of 22–26
self-signed 80
third-party 23, 25
SSL/TLS certificates 342–346
Standard Azure Storage account 100–101
standard certificates 23
Standard storage 100–101
Start-AzureStorageBlobCopy cmdlet 164–165, 166
static IP addresses 261–264
enabling on VMs 263–264
storage 155–198
Azure Storage 155–177
Azure Storage Diagnostics 185–188
blob 155–164
async blob copy service 164–170
capacity planning 100–104
custom domains for 176–177
disk caching 98–100
encryption 190–195
Geo-Redundant Storage 392–393
monitoring and alerts 189–190
redundancy type 379
setting metadata with 158–159
SMB file storage 168–170
virtual machines 93–110
blob types 93–94
storage accounts 93–94
storage accounts
access control 178–184
accessing content from CDN instead of 172–173
blob 157
custom domains 176–177
diagnostics 185–188
entities and hierarchy relationships 156
key management 178–180
replication options 163–164
root container 157
types 157
zone replicated 164
Storage Explorer
async blob copy 167–168
blob and container management with 162
stored access policies 182–183
storage pools 103
Storage Spaces 103
stored access policy 182–183
streaming log files 32–33
subject alternative name (SAN) 23
subnets
associating NSG with 273–274
deleting 209
design 208–209
gateway 210–211, 252–253
NSGs with 222, 226–230
properties 209
subscription policies 311
subscriptions 248
visualizing resources across multiple 456–457
super users 194
support devices 247
Swap-AzureRmWebAppSlot cmdlet 13
Sync Error 476
synchronization services 470–479
sysprep.exe tool 96
System Center Configuration Manager (SCCM) 442
System Center Operations Manager (SCOM) 38
System Center Virtual Machine Manager (SCVMM) 404–405
system routes 214–215
System Update Assessment management solution 459–461
system updates 459–461
T
TCP probe 276
telemetry data 39
TemplateParameterFile parameter 79
TemplateParameterObject parameter 79
TemplateParameterUri parameter 79
temporary disks 95
Thales nShield family 180
Threat intelligence detection capability 362–363
time-to-live (TTL) period 173
Stop-AzureRmVM cmdlet 73
trace data 115
traffic encryption/decryption 233
traffic filtering 201
Transport Layer Security (TLS) 342
U
unmanaged disks 95
unmanaged images 96–97
Update-AzureRmVM cmdlet 99
Update-AzureRmVmssInstance cmdlet 137
URL-based content routing 234
URLs
Azure Automation 420
Usage location 489
user defined routes (UDRs) 214–217, 274–275
users
adding to applications 369–370
utilization reports 479
V
variables
automation 427–428
vault credentials 380
vCenter/vSphere server 400–401
View Designer 452–453
virtual appliances 274
virtual applications
configuration 27
virtual CPUs (vCPUs) 63
virtual directories
configuration 27
virtual hard disks (VHDs) 95
virtual hard disk (VHD) files 67
virtual machine scale sets (VMSS) 57, 95, 129, 132–137
upgrading 137
virtual machines (VMs) 57
adding DSC node 436–438
agents 83
ARM
alerts configuration 119–123
availability 124–129
diagnostics 112–119
monitoring 110–123
networking 260–286
resizing 130–132
scaling 129–137
ASR Azure to Azure protection 408–412
Azure Container Services 138–150
configuration 64–65
management 82–92
configuring as backend pools for App Gateway 285–286
connecting to 79–82
connecting to Log Analytics workspace 444–446
creating 60–79
from ARM template 74–79
from images 97–98
in Azure portal 60–66
in PowerShell 66–70
with CLI 70–73
deployment into virtual network 230–232
disk caching 98–100
disk encryption 104–106
disk redundancy 103–104
enabling static private IP addresses on 263–264
GPOs with 484
Hyper-V
protection of 402–408
IaaS, backing up 384–386
joining to a Domain 482–483
Just-in-time access 352–353
Linux distributions for 58–59
name resolution 268
operating system images 95–98
public IP addresses 266
redundancy for 125
remote debugging 91–92
replication 400–401, 409–412
resources 302–308
setting size of 64, 72
snapshots 391
stopping 73
storage 93–110
account replication 94
accounts 93–94
Azure File Service 106–110
blob types 93–94
capacity planning 100–104
disks 95
overview 93–95
workloads
deployment 58–82
identify and run 58–60
virtual network resources 296–299
virtual networks. See Azure Virtual Networks
Virtual Network Service Endpoints (VSPE) 178, 183–184
virtual private networks (VPNs)
devices 248
ExpressRoute 242–245
point-to-site 240–241
site-to-site 240–242, 287
support devices and software solutions 247
Visual Studio Cloud Explorer 91
Visual Studio Code 295
Visual Studio Community 2017 295
VMs. See virtual machines
VMSnapshot extension 380, 386
VMSnapshotLinux extension 380, 386
VMSS. See virtual machine scale sets
VNet peering 217–222, 248–250, 251, 257–259
VNets. See Azure Virtual Networks
VNet-to-VNet connections 251–257
VPN Gateways 246–247, 253–255, 260, 266
VPNs. See virtual private networks
W
WAF. See web application firewall
web application firewall (WAF) 233, 353, 355–357
web applications
integration with Azure AD 495–496
proxy monitoring 477–478
registering 374–375
Web Apps 1–56
application settings 17–21
app service plans 2–6
availability tests 37–39
Azure Traffic Manager for 47–51
basic tier 428–431
configuration 16–27
application settings 17–21
backups 40–41
custom domain 20–22
for scale and resilience 42–52
handler mappings 26
SSL certificates 22–26
virtual applications and directories 27
connection strings 18–20
creating 6–8
Azure portal 6–7
in CLI 8
in PowerShell 7
deploying application to 14
deployment 2–16
deployment slots
defining 8–10
swapping 11–14
diagnostic logs
enabling 27–29
retrieving 29–32
integrating Azure Automation with 428–431
introduction to 1
migration to separate App Service Plan 15–16
monitoring 27
app service plan resources 34–35
Azure services 39–40
resources 33–34
with Application Insights 35–39
multiple deployments of 47
restoring from backup 41–42
Web-Asp-Net45 feature 83
webhooks 121
Web-Server role 83
web servers
diagnostic logs 28–29
web tests alerts 39
wildcard certificates 23
Windows 10 493–495
Windows Explorer 108–109
Windows Management Framework version 5 (WMF 5) 438
Windows PowerShell. See PowerShell
Windows Remote Management (WinRM) 80–81
Windows Server 2003 58
Windows virtual machines
connecting to 79–81
diagnostics, enabling and configuring 112–118
WinRMHttp 80
WinRMHttps 80
workloads
Hyper-V-based 402–408
migrating 149
on virtual machines
identify and run 58–60
WS-Federation 363
X
xPSDesiredStateConfiguration module 84
Y
yum package manager 170
Z
zone redundant storage (ZRS) 94
zone-replicated storage accounts 164
About the Authors