Azure Linux
Microsoft Azure is a growing collection of integrated public cloud services including analytics, Virtual Machines,
databases, mobile, networking, storage, and web—ideal for hosting your solutions. Microsoft Azure provides a
scalable computing platform that allows you to only pay for what you use, when you want it - without having to
invest in on-premises hardware. Azure is ready when you are to scale your solutions up and out to whatever scale
you require to service the needs of your clients.
If you are familiar with the various features of Amazon's AWS, you can examine the Azure vs AWS definition
mapping document.
Regions
Microsoft Azure resources are distributed across multiple geographical regions around the world. A "region"
represents multiple data centers in a single geographical area. Azure currently (as of November 2017) has 36
regions generally available around the world with an additional 6 regions announced. An updated list of existing
and newly announced regions can be found in the following page:
Azure Regions
Availability
Azure announced an industry-leading single-instance virtual machine Service Level Agreement of 99.9%, provided
you deploy the VM with premium storage for all disks. In order for your deployment to qualify for the standard
99.95% VM Service Level Agreement, you still need to deploy two or more VMs running your workload inside of
an availability set. An availability set ensures that your VMs are distributed across multiple fault domains in the
Azure data centers as well as deployed onto hosts with different maintenance windows. The full Azure SLA
explains the guaranteed availability of Azure as a whole.
Managed Disks
Managed Disks handles Azure Storage account creation and management in the background for you, and ensures
that you do not have to worry about the scalability limits of the storage account. You specify the disk size and the
performance tier (Standard or Premium), and Azure creates and manages the disk. As you add disks or scale the
VM up and down, you don't have to worry about the storage being used. If you're creating new VMs, use the Azure
CLI 2.0 or the Azure portal to create VMs with Managed OS and data disks. If you have VMs with unmanaged
disks, you can convert your VMs to be backed with Managed Disks.
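A minimal sketch of such a conversion with the Azure CLI (the resource group and VM names here are placeholders):

# Stop-deallocate the VM, convert its unmanaged disks to Managed Disks, then restart it
az vm deallocate --resource-group myResourceGroup --name myVM
az vm convert --resource-group myResourceGroup --name myVM
az vm start --resource-group myResourceGroup --name myVM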
You can also manage your custom images in one storage account per Azure region, and use them to create
hundreds of VMs in the same subscription. For more information about Managed Disks, see the Managed Disks
Overview.
VM Sizes
The size of the VM that you use is determined by the workload that you want to run. The size that you choose then
determines factors such as processing power, memory, and storage capacity. Azure offers a wide variety of sizes to
support many types of uses.
Azure charges an hourly price based on the VM’s size and operating system. For partial hours, Azure charges only
for the minutes used. Storage is priced and charged separately.
Automation
To achieve a proper DevOps culture, all infrastructure must be code. When all the infrastructure lives in code, it can
easily be recreated (Phoenix Servers). Azure works with all the major automation tooling like Ansible, Chef,
SaltStack, and Puppet. Azure also has its own tooling for automation:
Azure Templates
Azure VMAccess
Azure is rolling out support for cloud-init across most Linux Distros that support it. Currently Canonical's Ubuntu
VMs are deployed with cloud-init enabled by default. Red Hat's RHEL, CentOS, and Fedora support cloud-init;
however, the Azure images maintained by Red Hat do not currently have cloud-init installed. To use cloud-init on a
Red Hat-family OS, you must create a custom image with cloud-init installed.
Using cloud-init on Azure Linux VMs
Quotas
Each Azure Subscription has default quota limits in place that could impact the deployment of a large number of
VMs for your project. The current limit on a per subscription basis is 20 VMs per region. Quota limits can be raised
quickly and easily by filing a support ticket requesting a limit increase. For more details on quota limits:
Azure Subscription Service Limits
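You can check your current usage against the quota from the Azure CLI; a minimal sketch (the region is a placeholder):

# Show current core and VM usage against the subscription limits for a region
az vm list-usage --location eastus --output table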
Partners
Microsoft works closely with partners to ensure the images available are updated and optimized for an Azure
runtime. For more information on Azure partners, see the following links:
Linux on Azure - Endorsed Distributions
SUSE - Azure Marketplace - SUSE Linux Enterprise Server
Redhat - Azure Marketplace - RedHat Enterprise Linux 7.2
Canonical - Azure Marketplace - Ubuntu Server 16.04 LTS
Debian - Azure Marketplace - Debian 8 "Jessie"
FreeBSD - Azure Marketplace - FreeBSD 10.3
CoreOS - Azure Marketplace - CoreOS (Stable)
RancherOS - Azure Marketplace - RancherOS
Bitnami - Bitnami Library for Azure
Mesosphere - Azure Marketplace - Mesosphere DC/OS on Azure
Docker - Azure Marketplace - Azure Container Service with Docker Swarm
Jenkins - Azure Marketplace - CloudBees Jenkins Platform
Networking
Virtual Network Overview
IP addresses in Azure
Opening ports to a Linux VM in Azure
Create a Fully Qualified Domain Name in the Azure portal
Containers
Virtual Machines and Containers in Azure
Azure Container Service introduction
Deploy an Azure Container Service cluster
Next steps
You now have an overview of Linux on Azure. The next step is to dive in and create a few VMs!
Explore the growing list of sample scripts for common tasks via the Azure CLI
Quickstart: Create a Linux virtual machine with the
Azure CLI 2.0
The Azure CLI 2.0 is used to create and manage Azure resources from the command line or in scripts. This
quickstart shows you how to use the Azure CLI 2.0 to deploy a Linux virtual machine (VM) in Azure that runs
Ubuntu. To see your VM in action, you then SSH to the VM and install the NGINX web server.
If you don't have an Azure subscription, create a free account before you begin.
If you choose to install and use the CLI locally, this quickstart requires that you are running the Azure CLI version
2.0.30 or later. Run az --version to find the version. If you need to install or upgrade, see Install Azure CLI 2.0.
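The resource group and VM creation commands are not reproduced here; a minimal sketch of those steps, assuming the myResourceGroup and myVM names shown in the output below:

# Create a resource group, then the VM (SSH keys are generated if none exist)
az group create --name myResourceGroup --location eastus
az vm create \
    --resource-group myResourceGroup \
    --name myVM \
    --image UbuntuLTS \
    --admin-username azureuser \
    --generate-ssh-keys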
It takes a few minutes to create the VM and supporting resources. The following example output shows the VM
create operation was successful.
{
"fqdns": "",
"id":
"/subscriptions/<guid>/resourceGroups/myResourceGroup/providers/Microsoft.Compute/virtualMachines/myVM",
"location": "eastus",
"macAddress": "00-0D-3A-23-9A-49",
"powerState": "VM running",
"privateIpAddress": "10.0.0.4",
"publicIpAddress": "40.68.254.142",
"resourceGroup": "myResourceGroup"
}
Note your own publicIpAddress in the output from your VM. This address is used to access the VM in the next
steps.
ssh azureuser@publicIpAddress
# update packages
sudo apt-get -y update
# install NGINX
sudo apt-get -y install nginx
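To reach the NGINX welcome page from a browser, port 80 must be open to the Internet. A sketch of that step, matching the network port opening mentioned in the summary below:

# Open port 80 for web traffic
az vm open-port --port 80 --resource-group myResourceGroup --name myVM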
Clean up resources
When no longer needed, you can use the az group delete command to remove the resource group, VM, and all
related resources. Make sure that you have exited the SSH session to your VM, then delete the resources as
follows:
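A minimal sketch of the delete step, using the resource group name assumed above:

az group delete --name myResourceGroup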
Next steps
In this quickstart, you deployed a simple virtual machine, opened a network port for web traffic, and installed a
basic web server. To learn more about Azure virtual machines, continue to the tutorial for Linux VMs.
Azure Linux virtual machine tutorials
Quickstart: Create a Linux virtual machine in the
Azure portal
Azure virtual machines (VMs) can be created through the Azure portal. This method provides a browser-based
user interface to create VMs and their associated resources. This quickstart shows you how to use the Azure portal
to deploy a Linux virtual machine (VM) in Azure that runs Ubuntu. To see your VM in action, you then SSH to the
VM and install the NGINX web server.
If you don't have an Azure subscription, create a free account before you begin.
For more detailed information on how to create SSH key pairs, including the use of PuTTY, see How to use SSH
keys with Windows.
Log in to Azure
Log in to the Azure portal at https://portal.azure.com.
2. In the Connect to virtual machine page, keep the default options to connect by DNS name over port 22.
In Login using VM local account, a connection command is shown. Click the button to copy the
command. An example of what the SSH connection command looks like is shown after these steps.
3. Paste the SSH connection command into a shell, such as the Azure Cloud Shell or Bash on Ubuntu on
Windows, to create the connection.
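A hypothetical example of the copied connection command (the DNS name shown is a placeholder):

ssh azureuser@myvm-123456.eastus.cloudapp.azure.com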
# update packages
sudo apt-get -y update
# install NGINX
sudo apt-get -y install nginx
When done, exit the SSH session and return to the VM properties in the Azure portal.
Clean up resources
When no longer needed, you can delete the resource group, virtual machine, and all related resources. To do so,
select the resource group for the virtual machine, select Delete, then confirm the name of the resource group to
delete.
Next steps
In this quickstart, you deployed a simple virtual machine, created a Network Security Group and rule, and installed
a basic web server. To learn more about Azure virtual machines, continue to the tutorial for Linux VMs.
Azure Linux virtual machine tutorials
Quickstart: Create a Linux virtual machine in Azure
with PowerShell
The Azure PowerShell module is used to create and manage Azure resources from the PowerShell command line
or in scripts. This quickstart shows you how to use the Azure PowerShell module to deploy a Linux virtual machine
(VM) in Azure that runs Ubuntu. To see your VM in action, you then SSH to the VM and install the NGINX web
server.
If you don't have an Azure subscription, create a free account before you begin.
Click the Cloud Shell button on the menu in the upper right
of the Azure portal.
If you choose to install and use the PowerShell locally, this tutorial requires the Azure PowerShell module version
5.7.0 or later. Run Get-Module -ListAvailable AzureRM to find the version. If you need to upgrade, see Install Azure
PowerShell module. If you are running PowerShell locally, you also need to run Connect-AzureRmAccount to create a
connection with Azure.
Finally, a public SSH key with the name id_rsa.pub needs to be stored in the .ssh directory of your Windows user
profile. For detailed information on how to create and use SSH keys, see Create SSH keys for Azure.
Create an Azure Network Security Group and traffic rule. The Network Security Group secures the VM with
inbound and outbound rules. In the following example, an inbound rule is created for TCP port 22 that allows SSH
connections. To allow incoming web traffic, an inbound rule for TCP port 80 is also created.
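A sketch of those rules with the AzureRM cmdlets (the rule names and priorities here are illustrative):

# Create inbound rules for SSH (port 22) and web traffic (port 80), then the NSG itself
$nsgRuleSSH = New-AzureRmNetworkSecurityRuleConfig -Name "myNSGRuleSSH" -Protocol "Tcp" `
    -Direction "Inbound" -Priority 1000 -SourceAddressPrefix * -SourcePortRange * `
    -DestinationAddressPrefix * -DestinationPortRange 22 -Access "Allow"
$nsgRuleWeb = New-AzureRmNetworkSecurityRuleConfig -Name "myNSGRuleWWW" -Protocol "Tcp" `
    -Direction "Inbound" -Priority 1001 -SourceAddressPrefix * -SourcePortRange * `
    -DestinationAddressPrefix * -DestinationPortRange 80 -Access "Allow"
$nsg = New-AzureRmNetworkSecurityGroup -ResourceGroupName "myResourceGroup" -Location "EastUS" `
    -Name "myNetworkSecurityGroup" -SecurityRules $nsgRuleSSH,$nsgRuleWeb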
Create a virtual network interface card (NIC) with New-AzureRmNetworkInterface. The virtual NIC connects the
VM to a subnet, Network Security Group, and public IP address.
# Create a virtual network card and associate with public IP address and NSG
$nic = New-AzureRmNetworkInterface -Name "myNic" -ResourceGroupName "myResourceGroup" -Location "EastUS" `
-SubnetId $vnet.Subnets[0].Id -PublicIpAddressId $pip.Id -NetworkSecurityGroupId $nsg.Id
Now, combine the previous configuration definitions to create a VM with New-AzureRmVM:
New-AzureRmVM -ResourceGroupName "myResourceGroup" -Location eastus -VM $vmConfig
Use an SSH client to connect to the VM. You can use the Azure Cloud Shell from a web browser, or if you use
Windows, you can use PuTTY or the Windows Subsystem for Linux. Provide the public IP address of your VM:
ssh azureuser@IpAddress
When prompted, the login user name is azureuser. If a passphrase is used with your SSH keys, you need to enter
that when prompted.
# update packages
sudo apt-get -y update
# install NGINX
sudo apt-get -y install nginx
Next steps
In this quickstart, you deployed a simple virtual machine, created a Network Security Group and rule, and installed
a basic web server. To learn more about Azure virtual machines, continue to the tutorial for Linux VMs.
Azure Linux virtual machine tutorials
Tutorial: Create and Manage Linux VMs with the
Azure CLI 2.0
Azure virtual machines provide a fully configurable and flexible computing environment. This tutorial covers basic
Azure virtual machine deployment items such as selecting a VM size, selecting a VM image, and deploying a VM.
You learn how to:
Create and connect to a VM
Select and use VM images
View and use specific VM sizes
Resize a VM
View and understand VM state
If you choose to install and use the CLI locally, this tutorial requires that you are running the Azure CLI version
2.0.30 or later. Run az --version to find the version. If you need to install or upgrade, see Install Azure CLI 2.0.
The resource group is specified when creating or modifying a VM, as can be seen throughout this tutorial.
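The group-creation step itself is not shown above; a minimal sketch, assuming the myResourceGroupVM name used by the commands that follow:

az group create --name myResourceGroupVM --location eastus

Create a VM with az vm create: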
az vm create \
--resource-group myResourceGroupVM \
--name myVM \
--image UbuntuLTS \
--admin-username azureuser \
--generate-ssh-keys
It may take a few minutes to create the VM. Once the VM has been created, the Azure CLI outputs information
about the VM. Take note of the publicIpAddress; this address can be used to access the virtual machine.
{
"fqdns": "",
"id": "/subscriptions/d5b9d4b7-6fc1-0000-0000-
000000000000/resourceGroups/myResourceGroupVM/providers/Microsoft.Compute/virtualMachines/myVM",
"location": "eastus",
"macAddress": "00-0D-3A-23-9A-49",
"powerState": "VM running",
"privateIpAddress": "10.0.0.4",
"publicIpAddress": "52.174.34.95",
"resourceGroup": "myResourceGroupVM"
}
Connect to VM
You can now connect to the VM with SSH in the Azure Cloud Shell or from your local computer. Replace the
example IP address with the publicIpAddress noted in the previous step.
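For example, using the publicIpAddress from the sample output above:

ssh azureuser@52.174.34.95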
Once logged in to the VM, you can install and configure applications. When you are finished, you close the SSH
session as normal:
exit
Understand VM images
The Azure marketplace includes many images that can be used to create VMs. In the previous steps, a virtual
machine was created using an Ubuntu image. In this step, the Azure CLI is used to search the marketplace for a
CentOS image, which is then used to deploy a second virtual machine.
To see a list of the most commonly used images, use the az vm image list command.
A full list can be seen by adding the --all argument. The image list can also be filtered by --publisher or
--offer. In this example, the list is filtered for all images with an offer that matches CentOS.
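A sketch of that filtered query:

az vm image list --offer CentOS --all --output table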
Partial output:
To deploy a VM using a specific image, take note of the value in the Urn column, which consists of the publisher,
offer, SKU, and optionally a version number to identify the image. When specifying the image, the image version
number can be replaced with “latest”, which selects the latest version of the distribution. In this example, the
--image argument is used to specify the latest version of a CentOS 6.5 image.
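A minimal sketch of such a deployment; the URN below assumes the OpenLogic publisher used for CentOS marketplace images:

az vm create \
    --resource-group myResourceGroupVM \
    --name myVM2 \
    --image OpenLogic:CentOS:6.5:latest \
    --generate-ssh-keys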
Understand VM sizes
A virtual machine size determines the amount of compute resources, such as CPU, GPU, and memory, that are
made available to the virtual machine. Virtual machines need to be sized appropriately for the expected workload.
If the workload increases, an existing virtual machine can be resized.
VM Sizes
The following table categorizes sizes into use cases.
TYPE                SIZES                                     DESCRIPTION
General purpose     Dsv3, Dv3, DSv2, Dv2, DS, D, Av2, A0-7    Balanced CPU-to-memory. Ideal for dev/test and small to medium applications and data solutions.
Memory optimized    Esv3, Ev3, M, GS, G, DSv2, DS, Dv2, D     High memory-to-core. Great for relational databases, medium to large caches, and in-memory analytics.
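The sizes available in a region can be listed with az vm list-sizes; a sketch, assuming the eastus location used elsewhere in this tutorial:

az vm list-sizes --location eastus --output table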
Partial output:
MaxDataDiskCount    MemoryInMb    Name             NumberOfCores    OsDiskSizeInMb    ResourceDiskSizeInMb
2                   3584          Standard_DS1     1                1047552           7168
4                   7168          Standard_DS2     2                1047552           14336
8                   14336         Standard_DS3     4                1047552           28672
16                  28672         Standard_DS4     8                1047552           57344
4                   14336         Standard_DS11    2                1047552           28672
8                   28672         Standard_DS12    4                1047552           57344
16                  57344         Standard_DS13    8                1047552           114688
32                  114688        Standard_DS14    16               1047552           229376
1                   768           Standard_A0      1                1047552           20480
2                   1792          Standard_A1      1                1047552           71680
4                   3584          Standard_A2      2                1047552           138240
8                   7168          Standard_A3      4                1047552           291840
4                   14336         Standard_A5      2                1047552           138240
16                  14336         Standard_A4      8                1047552           619520
8                   28672         Standard_A6      4                1047552           291840
16                  57344         Standard_A7      8                1047552           619520
az vm create \
--resource-group myResourceGroupVM \
--name myVM3 \
--image UbuntuLTS \
--size Standard_F4s \
--generate-ssh-keys
Resize a VM
After a VM has been deployed, it can be resized to increase or decrease resource allocation. You can view the
current size of a VM with az vm show:
Before resizing a VM, check if the desired size is available on the current Azure cluster. The az vm
list-vm-resize-options command returns the list of sizes.
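A sketch of both commands, using the names from this tutorial:

# Show the current size of the VM
az vm show --resource-group myResourceGroupVM --name myVM --query hardwareProfile.vmSize --output tsv
# List the sizes available on the current hardware cluster
az vm list-vm-resize-options --resource-group myResourceGroupVM --name myVM --output table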
If the desired size is not on the current cluster, the VM needs to be deallocated before the resize operation can
occur. Use the az vm deallocate command to stop and deallocate the VM. Note, when the VM is powered back on,
any data on the temp disk may be removed. The public IP address also changes unless a static IP address is being
used.
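A sketch of the deallocate-resize-start sequence (the target size below is a placeholder):

az vm deallocate --resource-group myResourceGroupVM --name myVM
az vm resize --resource-group myResourceGroupVM --name myVM --size Standard_DS4_v2
az vm start --resource-group myResourceGroupVM --name myVM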
VM power states
An Azure VM can have one of many power states. This state represents the current state of the VM from the
standpoint of the hypervisor.
Power states
POWER STATE     DESCRIPTION
Starting        The virtual machine is being started.
Running         The virtual machine is running.
Stopping        The virtual machine is being stopped.
Stopped         The virtual machine is stopped. A stopped VM still incurs compute charges.
Deallocating    The virtual machine is being deallocated.
Deallocated     The virtual machine has been removed from the hypervisor. A deallocated VM does not incur compute charges.
Output:
Management tasks
During the life-cycle of a virtual machine, you may want to run management tasks such as starting, stopping, or
deleting a virtual machine. Additionally, you may want to create scripts to automate repetitive or complex tasks.
Using the Azure CLI, many common management tasks can be run from the command line or in scripts.
Get IP address
This command returns the private and public IP addresses of a virtual machine.
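A sketch of the command, using the names from this tutorial:

az vm list-ip-addresses --resource-group myResourceGroupVM --name myVM --output table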
Next steps
In this tutorial, you learned about basic VM creation and management such as how to:
Create and connect to a VM
Select and use VM images
View and use specific VM sizes
Resize a VM
View and understand VM state
Advance to the next tutorial to learn about VM disks.
Create and Manage VM disks
Tutorial - Manage Azure disks with the Azure CLI 2.0
Azure virtual machines use disks to store the VM's operating system, applications, and data. When creating a VM,
it is important to choose a disk size and configuration appropriate to the expected workload. This tutorial covers
deploying and managing VM disks. You learn about:
OS disks and temporary disks
Data disks
Standard and Premium disks
Disk performance
Attaching and preparing data disks
Resizing disks
Disk snapshots
If you choose to install and use the CLI locally, this tutorial requires that you are running the Azure CLI version
2.0.30 or later. Run az --version to find the version. If you need to install or upgrade, see Install Azure CLI 2.0.
VM disk types
Azure provides two types of disk.
Standard disk
Standard Storage is backed by HDDs, and delivers cost-effective storage while still being performant. Standard
disks are ideal for cost-effective dev and test workloads.
Premium disk
Premium disks are backed by SSD-based high-performance, low-latency disks, which are perfect for VMs running
production workloads. Premium Storage supports DS-series, DSv2-series, GS-series, and FS-series VMs. Premium
disks come in three types (P10, P20, P30); the size of the disk determines the disk type. When selecting a disk size,
the value is rounded up to the next type. For example, if the disk size is less than 128 GB, the disk type is P10. If the
disk size is between 129 GB and 512 GB, the size is a P20. Anything over 512 GB, the size is a P30.
Premium disk performance
PREMIUM STORAGE DISK TYPE    P10         P20         P30
Disk size                    128 GB      512 GB      1,024 GB (1 TB)
Max IOPS per disk            500         2,300       5,000
Max throughput per disk      100 MB/s    150 MB/s    200 MB/s
While the above table identifies max IOPS per disk, a higher level of performance can be achieved by striping
multiple data disks. For instance, a Standard_GS5 VM can achieve a maximum of 80,000 IOPS. For detailed
information on max IOPS per VM, see Linux VM sizes.
Create a VM using the az vm create command. The following example creates a VM named myVM, adds a user
account named azureuser, and generates SSH keys if they do not exist. The --datadisk-sizes-gb argument is used
to specify that an additional disk should be created and attached to the virtual machine. To create and attach more
than one disk, use a space-delimited list of disk size values. In the following example, a VM is created with two
data disks, both 128 GB. Because the disk sizes are 128 GB, these disks are both configured as P10s, which
provide a maximum of 500 IOPS per disk.
az vm create \
--resource-group myResourceGroupDisk \
--name myVM \
--image UbuntuLTS \
--size Standard_DS2_v2 \
--admin-username azureuser \
--generate-ssh-keys \
--data-disk-sizes-gb 128 128
az vm disk attach \
    --vm-name myVM \
    --resource-group myResourceGroupDisk \
    --disk myDataDisk \
    --size-gb 128 \
    --sku Premium_LRS \
    --new
The disk can now be accessed through the datadrive mountpoint, which can be verified by running the df -h
command.
df -h
To ensure that the drive is remounted after a reboot, it must be added to the /etc/fstab file. To do so, get the UUID
of the disk with the blkid utility.
sudo -i blkid
The output displays the UUID of the drive, /dev/sdc1 in this case.
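An illustrative /etc/fstab entry built from that UUID (the UUID shown is a placeholder; the /datadrive mount point matches the one referenced above):

UUID=33333333-3b3b-3c3c-3d3d-3e3e3e3e3e3e   /datadrive   ext4   defaults,nofail   1   2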
Now that the disk has been configured, close the SSH session.
exit
Resize VM disk
Once a VM has been deployed, the operating system disk or any attached data disks can be increased in size.
Increasing the size of a disk is beneficial when needing more storage space or a higher level of performance (P10,
P20, P30). Note, disks cannot be decreased in size.
Before increasing disk size, the ID or name of the disk is needed. Use the az disk list command to return all disks in
a resource group. Take note of the name of the disk that you would like to resize.
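A sketch of the command:

az disk list --resource-group myResourceGroupDisk --output table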
The VM must also be deallocated. Use the az vm deallocate command to stop and deallocate the VM.
Use the az disk update command to resize the disk. This example resizes a disk named myDataDisk to 1 terabyte.
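A sketch of the deallocate and resize steps (1 terabyte expressed as 1024 GB):

az vm deallocate --resource-group myResourceGroupDisk --name myVM
az disk update --resource-group myResourceGroupDisk --name myDataDisk --size-gb 1024
az vm start --resource-group myResourceGroupDisk --name myVM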
If you’ve resized the operating system disk, the partition is automatically expanded. If you have resized a data disk,
any current partitions need to be expanded in the VMs operating system.
Now that you have the id of the virtual machine disk, the following command creates a snapshot of the disk.
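A sketch of retrieving the disk ID and creating the snapshot (the snapshot name below is a placeholder):

# Get the ID of the OS disk, then snapshot it
osDiskId=$(az vm show \
    --resource-group myResourceGroupDisk \
    --name myVM \
    --query "storageProfile.osDisk.managedDisk.id" \
    --output tsv)
az snapshot create \
    --resource-group myResourceGroupDisk \
    --name osDisk-backup \
    --source "$osDiskId"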
Next steps
In this tutorial, you learned about VM disks topics such as:
OS disks and temporary disks
Data disks
Standard and Premium disks
Disk performance
Attaching and preparing data disks
Resizing disks
Disk snapshots
Advance to the next tutorial to learn about automating VM configuration.
Automate VM configuration
Tutorial - How to use cloud-init to customize a Linux
virtual machine in Azure on first boot
In a previous tutorial, you learned how to SSH to a virtual machine (VM) and manually install NGINX. To create
VMs in a quick and consistent manner, some form of automation is typically desired. A common approach to
customize a VM on first boot is to use cloud-init. In this tutorial you learn how to:
Create a cloud-init config file
Create a VM that uses a cloud-init file
View a running Node.js app after the VM is created
Use Key Vault to securely store certificates
Automate secure deployments of NGINX with cloud-init
If you choose to install and use the CLI locally, this tutorial requires that you are running the Azure CLI version
2.0.30 or later. Run az --version to find the version. If you need to install or upgrade, see Install Azure CLI 2.0.
Cloud-init overview
Cloud-init is a widely used approach to customize a Linux VM as it boots for the first time. You can use cloud-init
to install packages and write files, or to configure users and security. As cloud-init runs during the initial boot
process, there are no additional steps or required agents to apply your configuration.
Cloud-init also works across distributions. For example, you don't use apt-get install or yum install to install a
package. Instead you can define a list of packages to install. Cloud-init automatically uses the native package
management tool for the distro you select.
We are working with our partners to get cloud-init included and working in the images that they provide to
Azure. The following table outlines the current cloud-init availability on Azure platform images:
ALIAS PUBLISHER OFFER SKU VERSION
For more information about cloud-init configuration options, see cloud-init config examples.
Now create a VM with az vm create. Use the --custom-data parameter to pass in your cloud-init config file.
Provide the full path to the cloud-init.txt config if you saved the file outside of your present working directory.
The following example creates a VM named myAutomatedVM:
az vm create \
--resource-group myResourceGroupAutomate \
--name myVM \
--image UbuntuLTS \
--admin-username azureuser \
--generate-ssh-keys \
--custom-data cloud-init.txt
It takes a few minutes for the VM to be created, the packages to install, and the app to start. There are
background tasks that continue to run after the Azure CLI returns you to the prompt. It may be another couple of
minutes before you can access the app. When the VM has been created, take note of the publicIpAddress
displayed by the Azure CLI. This address is used to access the Node.js app via a web browser.
To allow web traffic to reach your VM, open port 80 from the Internet with az vm open-port:
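A sketch of that step:

az vm open-port --port 80 --resource-group myResourceGroupAutomate --name myVM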
keyvault_name=mykeyvault
az keyvault create \
--resource-group myResourceGroupAutomate \
--name $keyvault_name \
--enabled-for-deployment
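The certificate generation and formatting steps are not reproduced here; a sketch of how the $vm_secret value used below could be produced (the certificate name is a placeholder, and flag spellings may vary across CLI versions):

# Create a self-signed certificate in Key Vault using the default policy
az keyvault certificate create \
    --vault-name $keyvault_name \
    --name mycert \
    --policy "$(az keyvault certificate get-default-policy)"
# Obtain the secret ID and format it for use with az vm create
secret=$(az keyvault secret list-versions \
    --vault-name $keyvault_name \
    --name mycert \
    --query "[?attributes.enabled].id" \
    --output tsv)
vm_secret=$(az vm secret format --secret "$secret")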
Create secure VM
Now create a VM with az vm create. The certificate data is injected from Key Vault with the --secrets
parameter. As in the previous example, you also pass in the cloud-init config with the --custom-data parameter:
az vm create \
--resource-group myResourceGroupAutomate \
--name myVMSecured \
--image UbuntuLTS \
--admin-username azureuser \
--generate-ssh-keys \
--custom-data cloud-init-secured.txt \
--secrets "$vm_secret"
It takes a few minutes for the VM to be created, the packages to install, and the app to start. There are
background tasks that continue to run after the Azure CLI returns you to the prompt. It may be another couple of
minutes before you can access the app. When the VM has been created, take note of the publicIpAddress
displayed by the Azure CLI. This address is used to access the Node.js app via a web browser.
To allow secure web traffic to reach your VM, open port 443 from the Internet with az vm open-port:
az vm open-port \
--resource-group myResourceGroupAutomate \
--name myVMSecured \
--port 443
Your secured NGINX site and Node.js app are then displayed as in the following example:
Next steps
In this tutorial, you configured VMs on first boot with cloud-init. You learned how to:
Create a cloud-init config file
Create a VM that uses a cloud-init file
View a running Node.js app after the VM is created
Use Key Vault to securely store certificates
Automate secure deployments of NGINX with cloud-init
Advance to the next tutorial to learn how to create custom VM images.
Create custom VM images
Tutorial: Create a custom image of an Azure VM with
the Azure CLI 2.0
Custom images are like marketplace images, but you create them yourself. Custom images can be used to
bootstrap configurations such as preloading applications, application configurations, and other OS configurations.
In this tutorial, you create your own custom image of an Azure virtual machine. You learn how to:
Deprovision and generalize VMs
Create a custom image
Create a VM from a custom image
List all the images in your subscription
Delete an image
If you choose to install and use the CLI locally, this tutorial requires that you are running the Azure CLI version
2.0.30 or later. Run az --version to find the version. If you need to install or upgrade, see Install Azure CLI 2.0.
exit
Finally, set the state of the VM as generalized with az vm generalize so the Azure platform knows the VM has
been generalized. You can only create an image from a generalized VM.
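A sketch of the deallocate and generalize steps (run sudo waagent -deprovision+user inside the VM before exiting):

az vm deallocate --resource-group myResourceGroup --name myVM
az vm generalize --resource-group myResourceGroup --name myVM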
az image create \
--resource-group myResourceGroup \
--name myImage \
--source myVM
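A VM can then be created from the image; a minimal sketch (the new VM name is a placeholder):

az vm create \
    --resource-group myResourceGroup \
    --name myVMfromImage \
    --image myImage \
    --admin-username azureuser \
    --generate-ssh-keys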
Image management
Here are some examples of common image management tasks and how to complete them using the Azure CLI.
List all images by name in a table format.
az image list \
    --resource-group myResourceGroup \
    --output table
Delete an image. This example deletes the image named myOldImage from the myResourceGroup.
az image delete \
--name myOldImage \
--resource-group myResourceGroup
Next steps
In this tutorial, you created a custom VM image. You learned how to:
Deprovision and generalize VMs
Create a custom image
Create a VM from a custom image
List all the images in your subscription
Delete an image
Advance to the next tutorial to learn about highly available virtual machines.
Create highly available VMs.
Tutorial: Create and deploy highly available virtual
machines with the Azure CLI 2.0
In this tutorial, you learn how to increase the availability and reliability of your Virtual Machine solutions on Azure
using a capability called Availability Sets. Availability sets ensure that the VMs you deploy on Azure are distributed
across multiple isolated hardware clusters. Doing this ensures that if a hardware or software failure within Azure
happens, only a subset of your VMs is impacted and that your overall solution remains available and operational.
In this tutorial, you learn how to:
Create an availability set
Create a VM in an availability set
Check available VM sizes
If you choose to install and use the CLI locally, this tutorial requires that you are running the Azure CLI version
2.0.30 or later. Run az --version to find the version. If you need to install or upgrade, see Install Azure CLI 2.0.
az vm availability-set create \
--resource-group myResourceGroupAvailability \
--name myAvailabilitySet \
--platform-fault-domain-count 2 \
--platform-update-domain-count 2
Availability Sets allow you to isolate resources across fault domains and update domains. A fault domain
represents an isolated collection of server + network + storage resources. In the preceding example, the
availability set is distributed across at least two fault domains when the VMs are deployed. The availability set is
also distributed across two update domains. Two update domains ensure that when Azure performs software
updates, the VM resources are isolated, preventing all the software that runs on the VM from being updated at the
same time.
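The VM creation step is not reproduced above; a sketch that creates two VMs inside the availability set, following the naming used in this tutorial:

for i in `seq 1 2`; do
az vm create \
    --resource-group myResourceGroupAvailability \
    --name myVM$i \
    --availability-set myAvailabilitySet \
    --image UbuntuLTS \
    --admin-username azureuser \
    --generate-ssh-keys \
    --no-wait
done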
There are now two virtual machines within the availability set. Because they are in the same availability set, Azure
ensures that the VMs and all their resources (including data disks) are distributed across isolated physical
hardware. This distribution helps ensure much higher availability of the overall VM solution.
The availability set distribution can be viewed in the portal by going to Resource Groups >
myResourceGroupAvailability > myAvailabilitySet. The VMs are distributed across the two fault and update
domains, as shown in the following example:
Check for available VM sizes
Additional VMs can be added to the availability set later, as long as the VM sizes are available on the hardware.
Use az vm availability-set list-sizes to list all the available sizes on the hardware cluster for the availability set:
az vm availability-set list-sizes \
--resource-group myResourceGroupAvailability \
--name myAvailabilitySet \
--output table
Next steps
In this tutorial, you learned how to:
Create an availability set
Create a VM in an availability set
Check available VM sizes
Advance to the next tutorial to learn about virtual machine scale sets.
Create a virtual machine scale set
Tutorial: Create a virtual machine scale set and
deploy a highly available app on Linux with the
Azure CLI 2.0
A virtual machine scale set allows you to deploy and manage a set of identical, auto-scaling virtual machines. You
can scale the number of VMs in the scale set manually, or define rules to autoscale based on resource usage such
as CPU, memory demand, or network traffic. In this tutorial, you deploy a virtual machine scale set in Azure. You
learn how to:
Use cloud-init to create an app to scale
Create a virtual machine scale set
Increase or decrease the number of instances in a scale set
Create autoscale rules
View connection info for scale set instances
Use data disks in a scale set
If you choose to install and use the CLI locally, this tutorial requires that you are running the Azure CLI version
2.0.30 or later. Run az --version to find the version. If you need to install or upgrade, see Install Azure CLI 2.0.
#cloud-config
package_upgrade: true
packages:
  - nginx
  - nodejs
  - npm
write_files:
  - owner: www-data:www-data
    path: /etc/nginx/sites-available/default
    content: |
      server {
        listen 80;
        location / {
          proxy_pass http://localhost:3000;
          proxy_http_version 1.1;
          proxy_set_header Upgrade $http_upgrade;
          proxy_set_header Connection keep-alive;
          proxy_set_header Host $host;
          proxy_cache_bypass $http_upgrade;
        }
      }
  - owner: azureuser:azureuser
    path: /home/azureuser/myapp/index.js
    content: |
      var express = require('express')
      var app = express()
      var os = require('os');
      app.get('/', function (req, res) {
        res.send('Hello World from host ' + os.hostname() + '!')
      })
      app.listen(3000, function () {
        console.log('Hello world app listening on port 3000!')
      })
runcmd:
  - service nginx restart
  - cd "/home/azureuser/myapp"
  - npm init -y
  - npm install express
  - nodejs index.js
az vmss create \
--resource-group myResourceGroupScaleSet \
--name myScaleSet \
--image UbuntuLTS \
--upgrade-policy-mode automatic \
--custom-data cloud-init.txt \
--admin-username azureuser \
--generate-ssh-keys
It takes a few minutes to create and configure all the scale set resources and VMs. There are background tasks that
continue to run after the Azure CLI returns you to the prompt. It may be another couple of minutes before you can
access the app.
Enter the public IP address into a web browser. The app is displayed, including the hostname of the VM that the
load balancer distributed traffic to:
To see the scale set in action, you can force-refresh your web browser to see the load balancer distribute traffic
across all the VMs running your app.
Management tasks
Throughout the lifecycle of the scale set, you may need to run one or more management tasks. Additionally, you
may want to create scripts that automate various lifecycle tasks. The Azure CLI 2.0 provides a quick way to do
those tasks. Here are a few common tasks.
View VMs in a scale set
To view a list of VMs running in your scale set, use az vmss list-instances as follows:
az vmss list-instances \
--resource-group myResourceGroupScaleSet \
--name myScaleSet \
--output table
To see the number of VM instances currently in the scale set, query the sku.capacity property with az vmss show:
az vmss show \
--resource-group myResourceGroupScaleSet \
--name myScaleSet \
--query [sku.capacity] \
--output table
You can then manually increase or decrease the number of virtual machines in the scale set with az vmss scale. The
following example sets the number of VMs in your scale set to 3:
az vmss scale \
--resource-group myResourceGroupScaleSet \
--name myScaleSet \
--new-capacity 3
To reuse the autoscale profile, you can create a JSON (JavaScript Object Notation) file and pass it to the
az monitor autoscale-settings create command with the --parameters @autoscale.json parameter. For more
information on designing autoscale rules, see autoscale best practices.
Get connection info
To obtain connection information about the VMs in your scale sets, use az vmss list-instance-connection-info. This
command outputs the public IP address and port for each VM that allows you to connect with SSH:
az vmss list-instance-connection-info \
--resource-group myResourceGroupScaleSet \
--name myScaleSet
az vmss create \
--resource-group myResourceGroupScaleSet \
--name myScaleSetDisks \
--image UbuntuLTS \
--upgrade-policy-mode automatic \
--custom-data cloud-init.txt \
--admin-username azureuser \
--generate-ssh-keys \
--data-disk-sizes-gb 50
When instances are removed from a scale set, any attached data disks are also removed.
Add data disks
To add a data disk to instances in your scale set, use az vmss disk attach. The following example adds a 50 GB disk
to each instance:
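A sketch of the command (older CLI versions addressed the scale set with --name; newer versions use --vmss-name, so check az vmss disk attach --help for your version):

az vmss disk attach \
    --resource-group myResourceGroupScaleSet \
    --name myScaleSet \
    --size-gb 50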
Next steps
In this tutorial, you created a virtual machine scale set. You learned how to:
Use cloud-init to create an app to scale
Create a virtual machine scale set
Increase or decrease the number of instances in a scale set
Create autoscale rules
View connection info for scale set instances
Use data disks in a scale set
Advance to the next tutorial to learn more about load balancing concepts for virtual machines.
Load balance virtual machines
Tutorial: Load balance Linux virtual machines in
Azure to create a highly available application with
the Azure CLI 2.0
Load balancing provides a higher level of availability by spreading incoming requests across multiple virtual
machines. In this tutorial, you learn about the different components of the Azure load balancer that distribute
traffic and provide high availability. You learn how to:
Create an Azure load balancer
Create a load balancer health probe
Create load balancer traffic rules
Use cloud-init to create a basic Node.js app
Create virtual machines and attach to a load balancer
View a load balancer in action
Add and remove VMs from a load balancer
If you choose to install and use the CLI locally, this tutorial requires that you are running the Azure CLI version
2.0.30 or later. Run az --version to find the version. If you need to install or upgrade, see Install Azure CLI 2.0.
az network lb create \
--resource-group myResourceGroupLoadBalancer \
--name myLoadBalancer \
--frontend-ip-name myFrontEndPool \
--backend-pool-name myBackEndPool \
--public-ip-address myPublicIP
To add a network security group, you use az network nsg create. The following example creates a network security
group named myNetworkSecurityGroup:
Create a network security group rule with az network nsg rule create. The following example creates a network
security group rule named myNetworkSecurityGroupRule:
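Sketches of both commands, using the names given above (the rule priority is illustrative):

az network nsg create \
    --resource-group myResourceGroupLoadBalancer \
    --name myNetworkSecurityGroup
az network nsg rule create \
    --resource-group myResourceGroupLoadBalancer \
    --nsg-name myNetworkSecurityGroup \
    --name myNetworkSecurityGroupRule \
    --priority 1001 \
    --protocol Tcp \
    --destination-port-range 80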
Virtual NICs are created with az network nic create. The following example creates three virtual NICs. (One virtual
NIC for each VM you create for your app in the following steps). You can create additional virtual NICs and VMs
at any time and add them to the load balancer:
for i in `seq 1 3`; do
az network nic create \
--resource-group myResourceGroupLoadBalancer \
--name myNic$i \
--vnet-name myVnet \
--subnet mySubnet \
--network-security-group myNetworkSecurityGroup \
--lb-name myLoadBalancer \
--lb-address-pools myBackEndPool
done
When all three virtual NICs are created, continue on to the next step.
az vm availability-set create \
--resource-group myResourceGroupLoadBalancer \
--name myAvailabilitySet
Now you can create the VMs with az vm create. The following example creates three VMs and generates SSH
keys if they do not already exist:
for i in `seq 1 3`; do
az vm create \
--resource-group myResourceGroupLoadBalancer \
--name myVM$i \
--availability-set myAvailabilitySet \
--nics myNic$i \
--image UbuntuLTS \
--admin-username azureuser \
--generate-ssh-keys \
--custom-data cloud-init.txt \
--no-wait
done
There are background tasks that continue to run after the Azure CLI returns you to the prompt. The --no-wait
parameter does not wait for all the tasks to complete. It may be another couple of minutes before you can access
the app. The load balancer health probe automatically detects when the app is running on each VM. Once the app
is running, the load balancer rule starts to distribute traffic.
You can then enter the public IP address into a web browser. Remember, it takes a few minutes for the VMs to be
ready before the load balancer starts to distribute traffic to them. The app is displayed, including the hostname of
the VM that the load balancer distributed traffic to, as in the following example:
To see the load balancer distribute traffic across all three VMs running your app, you can force-refresh your web
browser.
To see the load balancer distribute traffic across the remaining two VMs running your app, you can force-refresh
your web browser. You can now perform maintenance on the VM, such as installing OS updates or performing a
VM reboot.
To view a list of VMs with virtual NICs connected to the load balancer, use az network lb address-pool show.
Query and filter on the ID of the virtual NIC as follows:
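A sketch of the query:

az network lb address-pool show \
    --resource-group myResourceGroupLoadBalancer \
    --lb-name myLoadBalancer \
    --name myBackEndPool \
    --query "backendIpConfigurations[].id" \
    --output tsv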
The output is similar to the following example, which shows that the virtual NIC for VM 2 is no longer part of the
backend address pool:
/subscriptions/<guid>/resourceGroups/myResourceGroupLoadBalancer/providers/Microsoft.Network/networkInterfaces
/myNic1/ipConfigurations/ipconfig1
/subscriptions/<guid>/resourceGroups/myResourceGroupLoadBalancer/providers/Microsoft.Network/networkInterfaces
/myNic3/ipConfigurations/ipconfig1
To verify that the virtual NIC is connected to the backend address pool, use az network lb address-pool show
again from the preceding step.
Next steps
In this tutorial, you created a load balancer and attached VMs to it. You learned how to:
Create an Azure load balancer
Create a load balancer health probe
Create load balancer traffic rules
Use cloud-init to create a basic Node.js app
Create virtual machines and attach to a load balancer
View a load balancer in action
Add and remove VMs from a load balancer
Advance to the next tutorial to learn more about Azure virtual network components.
Manage VMs and virtual networks
Tutorial: Create and manage Azure virtual networks
for Linux virtual machines with the Azure CLI 2.0
Azure virtual machines use Azure networking for internal and external network communication. This tutorial
walks through deploying two virtual machines and configuring Azure networking for these VMs. The examples in
this tutorial assume that the VMs are hosting a web application with a database back-end, however an application
is not deployed in the tutorial. In this tutorial, you learn how to:
Create a virtual network and subnet
Create a public IP address
Create a front-end VM
Secure network traffic
Create a back-end VM
If you choose to install and use the CLI locally, this tutorial requires that you are running the Azure CLI version
2.0.30 or later. Run az --version to find the version. If you need to install or upgrade, see Install Azure CLI 2.0.
VM networking overview
Azure virtual networks enable secure network connections between virtual machines, the internet, and other
Azure services such as Azure SQL database. Virtual networks are broken down into logical segments called
subnets. Subnets are used to control network flow and as a security boundary. When a VM is deployed, it
generally includes a virtual network interface, which is attached to a subnet.
As you complete the tutorial, the following virtual network resources are created:
myVNet - The virtual network that the VMs use to communicate with each other and the internet.
myFrontendSubnet - The subnet in myVNet used by the front-end resources.
myPublicIPAddress - The public IP address used to access myFrontendVM from the internet.
myFrontendNic - The network interface used by myFrontendVM to communicate with myBackendVM.
myFrontendVM - The VM used to communicate between the internet and myBackendVM.
myBackendNSG - The network security group that controls communication between the myFrontendVM and
myBackendVM.
myBackendSubnet - The subnet associated with myBackendNSG and used by the back-end resources.
myBackendNic - The network interface used by myBackendVM to communicate with myFrontendVM.
myBackendVM - The VM that uses port 22 and 3306 to communicate with myFrontendVM.
Create subnet
A new subnet is added to the virtual network using the az network vnet subnet create command. In this example,
the subnet is named myBackendSubnet and is given an address prefix of 10.0.2.0/24. This subnet is used with all
back-end services.
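A sketch of the command described, with resource names following this tutorial:

az network vnet subnet create \
    --resource-group myRGNetwork \
    --vnet-name myVNet \
    --name myBackendSubnet \
    --address-prefix 10.0.2.0/24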
At this point, a network has been created and segmented into two subnets, one for front-end services, and another
for back-end services. In the next section, virtual machines are created and connected to these subnets.
When creating a VM with the az vm create command, the default public IP address allocation method is dynamic.
When creating a virtual machine using the az vm create command, include the
--public-ip-address-allocation static argument to assign a static public IP address. This operation is not
demonstrated in this tutorial, however in the next section a dynamically allocated IP address is changed to a
statically allocated address.
Change allocation method
The IP address allocation method can be changed using the az network public-ip update command. In this
example, the IP address allocation method of the front-end VM is changed to static.
First, deallocate the VM.
Use the az network public-ip update command to update the allocation method. In this case, the
--allocation-method is being set to static.
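A sketch of both steps:

az vm deallocate --resource-group myRGNetwork --name myFrontendVM
az network public-ip update \
    --resource-group myRGNetwork \
    --name myPublicIPAddress \
    --allocation-method static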
No public IP address
Often, a VM does not need to be accessible over the internet. To create a VM without a public IP address, use the
--public-ip-address "" argument with an empty set of double quotes. This configuration is demonstrated later in
this tutorial.
Create a front-end VM
Use the az vm create command to create the VM named myFrontendVM using myPublicIPAddress.
az vm create \
--resource-group myRGNetwork \
--name myFrontendVM \
--vnet-name myVNet \
--subnet myFrontendSubnet \
--nsg myFrontendNSG \
--public-ip-address myPublicIPAddress \
--image UbuntuLTS \
--generate-ssh-keys
Instead of associating the NSG to a network interface, it is associated with a subnet. In this configuration, any VM
that is attached to the subnet inherits the NSG rules.
Update the existing subnet named myBackendSubnet with the new NSG.
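A sketch of the update:

az network vnet subnet update \
    --resource-group myRGNetwork \
    --vnet-name myVNet \
    --name myBackendSubnet \
    --network-security-group myBackendNSG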
The front-end VM is only accessible on port 22 and port 80. All other incoming traffic is blocked at the network
security group. It may be helpful to visualize the NSG rule configurations. Return the NSG rule configuration with
the az network nsg rule list command.
az network nsg rule list --resource-group myRGNetwork --nsg-name myFrontendNSG --output table
Secure VM to VM traffic
Network security group rules can also apply between VMs. For this example, the front-end VM needs to
communicate with the back-end VM on port 22 and 3306. This configuration allows SSH connections from the
front-end VM, and also allows an application on the front-end VM to communicate with a back-end MySQL
database. All other traffic should be blocked between the front-end and back-end virtual machines.
Use the az network nsg rule create command to create a rule for port 22. Notice that the --source-address-prefix
argument specifies a value of 10.0.1.0/24. This configuration ensures that only traffic from the front-end subnet is
allowed through the NSG.
az network nsg rule create \
--resource-group myRGNetwork \
--nsg-name myBackendNSG \
--name SSH \
--access Allow \
--protocol Tcp \
--direction Inbound \
--priority 100 \
--source-address-prefix 10.0.1.0/24 \
--source-port-range "*" \
--destination-address-prefix "*" \
--destination-port-range "22"
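A matching rule is needed for MySQL on port 3306; a sketch following the same pattern (the priority value of 200 is assumed, sitting between the SSH rule and the deny rule described below):

az network nsg rule create \
    --resource-group myRGNetwork \
    --nsg-name myBackendNSG \
    --name MySQL \
    --access Allow \
    --protocol Tcp \
    --direction Inbound \
    --priority 200 \
    --source-address-prefix 10.0.1.0/24 \
    --source-port-range "*" \
    --destination-address-prefix "*" \
    --destination-port-range "3306"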
Finally, because NSGs have a default rule allowing all traffic between VMs in the same VNet, a rule can be created
for the back-end NSG to block all traffic. Notice here that the --priority is given a value of 300, which is a lower
priority than both the SSH and MySQL rules. This configuration ensures that SSH and MySQL traffic is still allowed
through the NSG.
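A sketch of that deny rule:

az network nsg rule create \
    --resource-group myRGNetwork \
    --nsg-name myBackendNSG \
    --name denyAll \
    --access Deny \
    --protocol Tcp \
    --direction Inbound \
    --priority 300 \
    --source-address-prefix "*" \
    --source-port-range "*" \
    --destination-address-prefix "*" \
    --destination-port-range "*"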
Create back-end VM
Now create a virtual machine, which is attached to the myBackendSubnet. Notice that the --nsg argument has a
value of empty double quotes. An NSG does not need to be created with the VM. The VM is attached to the
back-end subnet, which is protected with the pre-created back-end NSG. This NSG applies to the VM. Also, notice
here that the --public-ip-address argument has a value of empty double quotes. This configuration creates a VM
without a public IP address.
az vm create \
--resource-group myRGNetwork \
--name myBackendVM \
--vnet-name myVNet \
--subnet myBackendSubnet \
--public-ip-address "" \
--nsg "" \
--image UbuntuLTS \
--generate-ssh-keys
The back-end VM is only accessible on port 22 and port 3306 from the front-end subnet. All other incoming traffic
is blocked at the network security group. It may be helpful to visualize the NSG rule configurations. Return the
NSG rule configuration with the az network nsg rule list command.
az network nsg rule list --resource-group myRGNetwork --nsg-name myBackendNSG --output table
Next steps
In this tutorial, you created and secured Azure networks as related to virtual machines. You learned how to:
Create a virtual network and subnet
Create a public IP address
Create a front-end VM
Secure network traffic
Create back-end VM
Advance to the next tutorial to learn about securing data on virtual machines using Azure backup.
Back up Linux virtual machines in Azure
Tutorial: Back up and restore files for Linux virtual
machines in Azure
You can protect your data by taking backups at regular intervals. Azure Backup creates recovery points that are
stored in geo-redundant recovery vaults. When you restore from a recovery point, you can restore the whole VM
or specific files. This article explains how to restore a single file to a Linux VM running nginx. If you don't already
have a VM to use, you can create one using the Linux quickstart. In this tutorial you learn how to:
Create a backup of a VM
Schedule a daily backup
Restore a file from a backup
Backup overview
When the Azure Backup service initiates a backup, it triggers the backup extension to take a point-in-time
snapshot. The Azure Backup service uses the VMSnapshotLinux extension in Linux. The extension is installed
during the first VM backup if the VM is running. If the VM is not running, the Backup service takes a snapshot of
the underlying storage (since no application writes occur while the VM is stopped).
By default, Azure Backup takes a file-system-consistent backup for Linux VMs, but it can be configured to take an
application-consistent backup using the pre-script and post-script framework. Once the Azure Backup service takes
the snapshot, the data is transferred to the vault. To maximize efficiency, the service identifies and transfers only
the blocks of data that have changed since the previous backup.
When the data transfer is complete, the snapshot is removed and a recovery point is created.
Create a backup
Create a scheduled daily backup to a Recovery Services Vault:
1. Sign in to the Azure portal.
2. In the menu on the left, select Virtual machines.
3. From the list, select a VM to back up.
4. On the VM blade, in the Settings section, click Backup. The Enable backup blade opens.
5. In Recovery Services vault, click Create new and provide the name for the new vault. A new vault is created
in the same Resource Group and location as the virtual machine.
6. Click Backup policy. For this example, keep the defaults and click OK.
7. On the Enable backup blade, click Enable Backup. This creates a daily backup based on the default schedule.
8. To create an initial recovery point, on the Backup blade click Backup now.
9. On the Backup Now blade, click the calendar icon, use the calendar control to select the last day this recovery
point is retained, and click Backup.
10. In the Backup blade for your VM, you see the number of recovery points that are complete.
The first backup takes about 20 minutes. Proceed to the next part of this tutorial after your backup is finished.
Restore a file
If you accidentally delete or make changes to a file, you can use File Recovery to recover the file from your backup
vault. File Recovery uses a script that runs on the VM, to mount the recovery point as a local drive. These drives
remain mounted for 12 hours so that you can copy files from the recovery point and restore them to the VM.
In this example, we show how to recover the default nginx web page /var/www/html/index.nginx-debian.html. The
public IP address of our VM in this example is 13.69.75.209. You can find the IP address of your VM using:
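A sketch with the Azure CLI (the resource group and VM names are placeholders):

az vm list-ip-addresses --resource-group myResourceGroup --name myVM --output table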
1. On your local computer, open a browser and type in the public IP address of your VM to see the default
nginx web page.
2. Connect to your VM with SSH, replacing the IP address with the one for your VM:
ssh azureuser@13.69.75.209
3. Delete /var/www/html/index.nginx-debian.html.
sudo rm /var/www/html/index.nginx-debian.html
4. On your local computer, refresh the browser by hitting CTRL + F5 to see that default nginx page is gone.
5. On your local computer, sign in to the Azure portal.
6. In the menu on the left, select Virtual machines.
7. From the list, select the VM.
8. On the VM blade, in the Settings section, click Backup. The Backup blade opens.
9. In the menu at the top of the blade, select File Recovery. The File Recovery blade opens.
10. In Step 1: Select recovery point, select a recovery point from the drop-down.
11. In Step 2: Download script to browse and recover files, click the Download Executable button. Save the
downloaded file to your local computer.
12. Click Download script to download the script file locally.
13. Open a Bash prompt and type the following, replacing Linux_myVM_05-05-2017.sh with the correct path
and filename for the script that you downloaded, azureuser with the username for the VM, and 13.69.75.209
with the public IP address for your VM.
ssh azureuser@13.69.75.209
chmod +x Linux_myVM_05-05-2017.sh
16. On your VM, run the script to mount the recovery point as a filesystem.
./Linux_myVM_05-05-2017.sh
17. The output from the script gives you the path for the mount point. The output looks similar to this:
Microsoft Azure VM Backup - File Recovery
______________________________________________
Connection succeeded!
Please wait while we attach volumes of the recovery point to this machine...
************ Volumes of the recovery point and their mount paths on this machine ************
After recovery, to remove the disks and close the connection to the recovery point, please click
'Unmount Disks' in step 3 of the portal.
18. On your VM, copy the nginx default web page from the mount point back to where you deleted the file.
19. On your local computer, open the browser tab where you are connected to the IP address of the VM
showing the nginx default page. Press CTRL + F5 to refresh the browser page. You should now see that the
default page is working again.
20. On your local computer, go back to the browser tab for the Azure portal and in Step 3: Unmount the
disks after recovery click the Unmount Disks button. If you forget to do this step, the connection to the
mountpoint is automatically closed after 12 hours. After those 12 hours, you need to download a new script
to create a new mountpoint.
Next steps
In this tutorial, you learned how to:
Create a backup of a VM
Schedule a daily backup
Restore a file from a backup
Advance to the next tutorial to learn about monitoring virtual machines.
Govern virtual machines
Tutorial: Learn about Linux virtual machine
governance with Azure CLI 2.0
When deploying resources to Azure, you have tremendous flexibility when deciding what types of resources to
deploy, where they are located, and how to set them up. However, that flexibility may open more options than you
would like to allow in your organization. As you consider deploying resources to Azure, you might be wondering:
How do I meet legal requirements for data sovereignty in certain countries?
How do I control costs?
How do I ensure that someone does not inadvertently change a critical system?
How do I track resource costs and bill it accurately?
This article addresses those questions. Specifically, you:
Assign users to roles and assign the roles to a scope so users have permission to perform expected actions but
not more actions.
Apply policies that prescribe conventions for resources in your subscription.
Lock resources that are critical to your system.
Tag resources so you can track them by values that make sense to your organization.
This article focuses on the tasks you take to implement governance. For a broader discussion of the concepts, see
Governance in Azure.
If you choose to install and use the CLI locally, this tutorial requires that you are running the Azure CLI version
2.0.30 or later. Run az --version to find the version. If you need to install or upgrade, see Install Azure CLI 2.0.
Understand scope
Before creating any items, let's review the concept of scope. Azure provides four levels of management:
management groups, subscription, resource group, and resource. Management groups are in a preview release.
The following image shows an example of these layers.
You apply management settings at any of these levels of scope. The level you select determines how widely the
setting is applied. Lower levels inherit settings from higher levels. When you apply a setting to the subscription,
that setting is applied to all resource groups and resources in your subscription. When you apply a setting on the
resource group, that setting is applied to the resource group and all its resources. However, another resource group
does not have that setting.
Usually, it makes sense to apply critical settings at higher levels and project-specific requirements at lower levels.
For example, you might want to make sure all resources for your organization are deployed to certain regions. To
accomplish this requirement, apply a policy to the subscription that specifies the allowed locations. As other users
in your organization add new resource groups and resources, the allowed locations are automatically enforced.
In this tutorial, you apply all management settings to a resource group so you can easily remove those settings
when done.
Let's create that resource group.
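The commands for this step are elided here; what follows is a hedged sketch that creates the resource group and an Azure Active Directory group whose object ID feeds the role assignment below (the group names are assumptions):
az group create --name myResourceGroup --location eastus
# Create an Azure Active Directory group for the users and capture its object ID
adgroupId=$(az ad group create --display-name myVMAccessGroup --mail-nickname myVMAccessGroup --query objectId --output tsv)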
It takes a moment after the command prompt returns for the group to propagate throughout Azure Active
Directory. After waiting for 20 or 30 seconds, use the az role assignment create command to assign the new Azure
Active Directory group to the Virtual Machine Contributor role for the resource group. If you run the following
command before it has propagated, you receive an error stating Principal does not exist in the directory. Try
running the command again.
az role assignment create --assignee-object-id $adgroupId --role "Virtual Machine Contributor" --resource-group myResourceGroup
Typically, you repeat the process for Network Contributor and Storage Account Contributor to make sure users are
assigned to manage the deployed resources. In this article, you can skip those steps.
Azure policies
Azure policies help you make sure all resources in your subscription meet corporate standards. Use policies to reduce
your costs by restricting deployment options to only those resource types and SKUs that are approved. You define
rules and actions for your resources and those rules are automatically enforced during deployment. For example,
you can control the types of resources that are deployed. Or, you can restrict the approved locations for resources.
Some policies deny an action, and some policies set up auditing of an action.
Policy is complementary to role-based access control (RBAC). RBAC focuses on user access, and is a default deny
and explicit allow system. Policy focuses on resource properties during and after deployment. It's a default allow
and explicit deny system.
There are two concepts to understand with policies - policy definitions and policy assignments. A policy definition
describes the management conditions you want to enforce. A policy assignment puts a policy definition into action
for a particular scope.
Azure provides several built-in policy definitions you can use without any modification. You pass parameter values
to specify the values that are permitted in your scope. If built-in policy definitions don't fulfill your requirements, you
can create custom policy definitions.
Apply policies
Your subscription already has several policy definitions. To see the available policy definitions, use the az policy
definition list command:
az policy definition list --query "[].[displayName, policyType, name]" --output table
You see the existing policy definitions. The policy type is either BuiltIn or Custom. Look through the definitions for
ones that describe a condition you want to assign. In this article, you assign policies that:
Limit the locations for all resources.
Limit the SKUs for virtual machines.
Audit virtual machines that do not use managed disks.
In the following example, you retrieve three policy definitions based on the display name. You use the az policy
assignment create command to assign those definitions to the resource group. For some policies, you provide
parameter values to specify the allowed values.
# Get policy definitions for allowed locations, allowed SKUs, and auditing VMs that don't use managed disks
locationDefinition=$(az policy definition list --query "[?displayName=='Allowed locations'].name | [0]" --output tsv)
skuDefinition=$(az policy definition list --query "[?displayName=='Allowed virtual machine SKUs'].name | [0]" --output tsv)
auditDefinition=$(az policy definition list --query "[?displayName=='Audit VMs that do not use managed disks'].name | [0]" --output tsv)
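A hedged sketch of one of the assignments with az policy assignment create; the assignment name and the listOfAllowedLocations parameter name are assumptions about the built-in definition:
az policy assignment create --name "Set permitted locations" \
--resource-group myResourceGroup \
--policy $locationDefinition \
--params '{ "listOfAllowedLocations": { "value": [ "eastus" ] } }'
The SKU and managed-disk definitions are assigned the same way, passing parameters only where the definition requires them.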
The preceding example assumes you already know the parameters for a policy. If you need to view the parameters,
use:
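One plausible command, assuming the az policy definition show command available in this CLI version:
az policy definition show --name $locationDefinition --query parameters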
After your deployment finishes, you can apply more management settings to the solution.
Lock resources
Resource locks prevent users in your organization from accidentally deleting or modifying critical resources. Unlike
role-based access control, resource locks apply a restriction across all users and roles. You can set the lock level to
CanNotDelete or ReadOnly.
To create or delete management locks, you must have access to Microsoft.Authorization/locks/* actions. Of the
built-in roles, only Owner and User Access Administrator are granted those actions.
To lock the virtual machine and network security group, use the az lock create command:
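A minimal sketch locking both resources; the lock names are illustrative, and myVMNSG assumes the default name az vm create gives the network security group:
az lock create --name LockVM \
--lock-type CanNotDelete \
--resource-group myResourceGroup \
--resource-name myVM \
--resource-type Microsoft.Compute/virtualMachines
az lock create --name LockNSG \
--lock-type CanNotDelete \
--resource-group myResourceGroup \
--resource-name myVMNSG \
--resource-type Microsoft.Network/networkSecurityGroups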
You see an error stating that the delete operation cannot be performed because of a lock. The resource group can
only be deleted if you specifically remove the locks. That step is shown in Clean up resources.
Tag resources
You apply tags to your Azure resources to logically organize them by categories. Each tag consists of a name and a
value. For example, you can apply the name "Environment" and the value "Production" to all the resources in
production.
To add two tags to a resource group, use the az group update command:
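A sketch with illustrative tag names and values:
az group update --name myResourceGroup --set tags.Environment=Test tags.Dept=IT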
Let's suppose you want to add a third tag. Run the command again with the new tag. It is appended to the existing
tags.
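For example, appending a hypothetical Project tag:
az group update --name myResourceGroup --set tags.Project=Documentation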
# Get the resource IDs for all resources in the resource group
r=$(az resource list -g myResourceGroup --query [].id --output tsv)
Alternatively, you can apply tags from the resource group to the resources without keeping the existing tags:
# Get the resource IDs for all resources in the resource group
r=$(az resource list -g myResourceGroup --query [].id --output tsv)
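A hedged sketch of the loop that follows in either case. Because az resource tag replaces existing tags, this simplified variant overwrites each resource's tags with a fixed set rather than reading them from the resource group (tag values illustrative):
# Overwrite the tags on every resource in the group (existing tags are not kept)
for resid in $r
do
az resource tag --tags Dept=IT Environment=Test --ids $resid
done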
To apply tags to a virtual machine, use the az resource tag command. Any existing tags on the resource are not
retained.
az resource tag -n myVM \
-g myResourceGroup \
--tags Dept=IT Environment=Test Project=Documentation \
--resource-type "Microsoft.Compute/virtualMachines"
You can use the returned values for management tasks like stopping all virtual machines with a tag value.
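For example, a hedged query that stops every VM carrying a given tag value (assuming the Environment tag from the earlier examples):
az vm stop --ids $(az vm list --resource-group myResourceGroup \
--query "[?tags.Environment=='Test'].id" --output tsv)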
Clean up resources
The locked network security group can't be deleted until the lock is removed. To remove the lock, retrieve the IDs of
the locks and provide them to the az lock delete command:
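A hedged sketch that removes every lock in the resource group:
lockids=$(az lock list --resource-group myResourceGroup --query [].id --output tsv)
az lock delete --ids $lockids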
When no longer needed, you can use the az group delete command to remove the resource group, VM, and all
related resources. Exit the SSH session to your VM, then delete the resources as follows:
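For example:
az group delete --name myResourceGroup --yes --no-wait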
Next steps
In this tutorial, you applied governance settings to your Azure resources. You learned how to:
Assign users to a role
Apply policies that enforce standards
Protect critical resources with locks
Tag resources for billing and management
Advance to the next tutorial to learn how to monitor virtual machines.
Monitor virtual machines
Tutorial: Monitor and update a Linux virtual machine
in Azure
4/26/2018 • 13 min to read
To ensure your virtual machines (VMs) in Azure are running correctly, you can review boot diagnostics and
performance metrics, and manage package updates. In this tutorial, you learn how to:
Enable boot diagnostics on the VM
View boot diagnostics
View host metrics
Enable diagnostics extension on the VM
View VM metrics
Create alerts based on diagnostic metrics
Manage package updates
Monitor changes and inventory
Set up advanced monitoring
If you choose to install and use the CLI locally, this tutorial requires that you are running the Azure CLI version
2.0.30 or later. Run az --version to find the version. If you need to install or upgrade, see Install Azure CLI 2.0.
Create VM
To see diagnostics and metrics in action, you need a VM. First, create a resource group with az group create. The
following example creates a resource group named myResourceGroupMonitor in the eastus location.
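A sketch of that command:
az group create --name myResourceGroupMonitor --location eastus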
Now create a VM with az vm create. The following example creates a VM named myVM:
az vm create \
--resource-group myResourceGroupMonitor \
--name myVM \
--image UbuntuLTS \
--admin-username azureuser \
--generate-ssh-keys
storageacct=mydiagdata$RANDOM
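Boot diagnostics requires a storage account to hold the logs; a hedged sketch creating one with the name generated above (the Standard_LRS SKU is an assumption):
az storage account create \
--resource-group myResourceGroupMonitor \
--name $storageacct \
--sku Standard_LRS \
--location eastus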
When enabling boot diagnostics, the URI to the blob storage container is needed. The following command queries
the storage account to return this URI. The URI value is stored in a variable named bloburi, which is used in the
next step.
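A hedged sketch of that query:
bloburi=$(az storage account show \
--resource-group myResourceGroupMonitor \
--name $storageacct \
--query 'primaryEndpoints.blob' \
--output tsv)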
Now enable boot diagnostics with az vm boot-diagnostics enable. The --storage value is the blob URI collected in
the previous step.
az vm boot-diagnostics enable \
--resource-group myResourceGroupMonitor \
--name myVM \
--storage $bloburi
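To retrieve the collected log (the "View boot diagnostics" step in this tutorial's objectives), one hedged approach is az vm boot-diagnostics get-boot-log:
az vm boot-diagnostics get-boot-log \
--resource-group myResourceGroupMonitor \
--name myVM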
The basic host metrics are available, but to see more granular and VM-specific metrics, you need to install the
Azure diagnostics extension on the VM. The Azure diagnostics extension allows additional monitoring and
diagnostics data to be retrieved from the VM. You can view these performance metrics and create alerts based on
how the VM performs. The diagnostic extension is installed through the Azure portal as follows:
1. In the Azure portal, click Resource Groups, select myResourceGroup, and then select myVM in the resource
list.
2. Click Diagnosis settings. The list shows that Boot diagnostics are already enabled from the previous section.
Click the check box for Basic metrics.
3. In the Storage account section, browse to and select the mydiagdata[1234] account created in the previous
section.
4. Click the Save button.
View VM metrics
You can view the VM metrics in the same way that you viewed the host VM metrics:
1. In the Azure portal, click Resource Groups, select myResourceGroup, and then select myVM in the resource
list.
2. To see how the VM is performing, click Metrics on the VM blade, and then select any of the diagnostics
metrics under Available metrics.
Create alerts
You can create alerts based on specific performance metrics. Alerts can be used to notify you when average CPU
usage exceeds a certain threshold or available free disk space drops below a certain amount, for example. Alerts
are displayed in the Azure portal or can be sent via email. You can also trigger Azure Automation runbooks or
Azure Logic Apps in response to alerts being generated.
The following example creates an alert for average CPU usage.
1. In the Azure portal, click Resource Groups, select myResourceGroup, and then select myVM in the resource
list.
2. Click Alert rules on the VM blade, then click Add metric alert across the top of the alerts blade.
3. Provide a Name for your alert, such as myAlertRule
4. To trigger an alert when CPU percentage exceeds 1.0 for five minutes, leave all the other defaults selected.
5. Optionally, check the box for Email owners, contributors, and readers to send email notification. The default
action is to present a notification in the portal.
6. Click the OK button.
Schedule settings - You can either accept the default date and time, which is 30 minutes after current time,
or specify a different time. You can also specify whether the deployment occurs once or set up a recurring
schedule. Click the Recurring option under Recurrence to set up a recurring schedule.
Maintenance window (minutes) - Specify the period of time you want the update deployment to occur
within. This helps ensure changes are performed within your defined service windows.
After you have completed configuring the schedule, click the Create button and you return to the status dashboard.
Notice that the Scheduled table shows the deployment schedule you created.
WARNING
For updates that require a reboot, the VM is restarted automatically.
The Update results tile shows a summary of the total number of updates and deployment results on the VM. The
table to the right gives a detailed breakdown of each update and the installation results, which can be one of the
following values:
Not attempted - the update was not installed because there was insufficient time available based on the
maintenance window duration defined.
Succeeded - the update succeeded
Failed - the update failed
Click All logs to see all log entries that the deployment created.
Click the Output tile to see job stream of the runbook responsible for managing the update deployment on the
target VM.
Click Errors to see detailed information about any errors from the deployment.
After the solution has been enabled, it may take some time while inventory is being collected on the VM before
data appears.
Track changes
On your VM, select Change Tracking under OPERATIONS. Click Edit Settings to display the Change Tracking
page. Select the type of setting you want to track and then click + Add to configure the settings. The available
option for Linux is Linux Files.
For detailed information on Change Tracking, see Troubleshoot changes on a VM.
View inventory
On your VM, select Inventory under OPERATIONS. On the Software tab, there is a table listing the software that
has been found. The high-level details for each software record are viewable in the table. These details include the
software name, version, publisher, and last refreshed time.
Monitor Activity logs and changes
From the Change tracking page on your VM, select Manage Activity Log Connection. This task opens the
Azure Activity log page. Select Connect to connect Change tracking to the Azure activity log for your VM.
With this setting enabled, navigate to the Overview page for your VM and select Stop to stop your VM. When
prompted, select Yes to stop the VM. When it is deallocated, select Start to restart your VM.
Stopping and starting a VM logs an event in its activity log. Navigate back to the Change tracking page. Select
the Events tab at the bottom of the page. After a while, the events are shown in the chart and the table. Each event can
be selected to view detailed information on the event.
The chart shows changes that have occurred over time. After you have added an Activity Log connection, the line
graph at the top displays Azure Activity Log events. Each row of bar graphs represents a different trackable
Change type. These types are Linux daemons, files, and software. The change tab shows the details for the changes
shown in the visualization in descending order of time that the change occurred (most recent first).
Advanced monitoring
You can do more advanced monitoring of your VM by using solutions like Update Management and Change
Tracking and Inventory provided by Azure Automation.
When you have access to the Log Analytics workspace, you can find the workspace key and workspace identifier
by selecting Advanced settings under SETTINGS. Replace <workspace-key> and <workspace-id> with the
values from your Log Analytics workspace, and then use az vm extension set to add the extension to
the VM:
az vm extension set \
--resource-group myResourceGroupMonitor \
--vm-name myVM \
--name OmsAgentForLinux \
--publisher Microsoft.EnterpriseCloud.Monitoring \
--version 1.3 \
--protected-settings '{"workspaceKey": "<workspace-key>"}' \
--settings '{"workspaceId": "<workspace-id>"}'
After a few minutes, you should see the new VM in the Log Analytics workspace.
Next steps
In this tutorial, you configured, reviewed, and managed updates for a VM. You learned how to:
Enable boot diagnostics on the VM
View boot diagnostics
View host metrics
Enable diagnostics extension on the VM
View VM metrics
Create alerts based on diagnostic metrics
Manage package updates
Monitor changes and inventory
Set up advanced monitoring
Advance to the next tutorial to learn about Azure Security Center.
Manage VM security
Tutorial: Use Azure Security Center to monitor Linux
virtual machines
4/26/2018 • 5 min to read
Azure Security Center can help you gain visibility into your Azure resource security practices. Security Center
offers integrated security monitoring. It can detect threats that otherwise might go unnoticed. In this tutorial, you
learn about Azure Security Center, and how to:
Set up data collection
Set up security policies
View and fix configuration health issues
Review detected threats
Security Center goes beyond data discovery to provide recommendations for issues that it detects. For example, if
a VM was deployed without an attached network security group, Security Center displays a recommendation, with
remediation steps you can take. You get automated remediation without leaving the context of Security Center.
To automate the build and test phase of application development, you can use a continuous integration and
deployment (CI/CD) pipeline. In this tutorial, you create a CI/CD pipeline on an Azure VM including how to:
Create a Jenkins VM
Install and configure Jenkins
Create webhook integration between GitHub and Jenkins
Create and trigger Jenkins build jobs from GitHub commits
Create a Docker image for your app
Verify GitHub commits build new Docker image and updates running app
If you choose to install and use the CLI locally, this tutorial requires that you are running the Azure CLI version
2.0.30 or later. Run az --version to find the version. If you need to install or upgrade, see Install Azure CLI 2.0.
Before you can create a VM, create a resource group with az group create. The following example creates a
resource group named myResourceGroupJenkins in the eastus location:
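A sketch of that command:
az group create --name myResourceGroupJenkins --location eastus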
Now create a VM with az vm create. Use the --custom-data parameter to pass in your cloud-init config file.
Provide the full path to cloud-init-jenkins.txt if you saved the file outside of your present working directory.
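A hedged sketch of that command; the cloud-init file name comes from the text above, and the port-opening commands for 8080 and 1337, which this tutorial uses later, are assumptions:
az vm create \
--resource-group myResourceGroupJenkins \
--name myVM \
--image UbuntuLTS \
--admin-username azureuser \
--generate-ssh-keys \
--custom-data cloud-init-jenkins.txt
# Open the ports used later in this tutorial (Jenkins on 8080, the sample app on 1337)
az vm open-port --resource-group myResourceGroupJenkins --name myVM --port 8080 --priority 1001
az vm open-port --resource-group myResourceGroupJenkins --name myVM --port 1337 --priority 1002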
Configure Jenkins
To access your Jenkins instance, obtain the public IP address of your VM:
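One way to obtain it:
az vm show \
--resource-group myResourceGroupJenkins \
--name myVM \
--show-details \
--query publicIps \
--output tsv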
For security purposes, you need to enter the initial admin password that is stored in a text file on your VM to start
the Jenkins install. Use the public IP address obtained in the previous step to SSH to your VM:
ssh azureuser@<publicIps>
View the initialAdminPassword for your Jenkins install and copy it:
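A sketch, assuming the default Jenkins secrets path:
sudo cat /var/lib/jenkins/secrets/initialAdminPassword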
If the file isn't available yet, wait a couple more minutes for cloud-init to complete the Jenkins and Docker install.
Now open a web browser and go to http://<publicIps>:8080 . Complete the initial Jenkins setup as follows:
Choose Select plugins to install
Search for GitHub in the text box across the top. Check the box for GitHub, then select Install
Create the first admin user. Enter a username, such as admin, then provide your own secure password. Finally,
type a full name and e-mail address.
Select Save and Finish
Once Jenkins is ready, select Start using Jenkins
If your web browser displays a blank page when you start using Jenkins, restart the Jenkins service.
From your SSH session, type sudo service jenkins restart, then refresh your web browser.
Log in to Jenkins with the username and password you created.
response.end("Hello World!");
To commit your changes, select the Commit changes button at the bottom.
In Jenkins, a new build starts under the Build history section of the bottom left-hand corner of your job page.
Choose the build number link and select Console output on the left-hand side. You can view the steps Jenkins
takes as your code is pulled from GitHub and the build action outputs the message Testing to the console. Each
time a commit is made in GitHub, the webhook reaches out to Jenkins and triggers a new build in this way.
cd /var/lib/jenkins/workspace/HelloWorld
Create a file in this workspace directory with sudo sensible-editor Dockerfile and paste the following contents.
Make sure that the whole Dockerfile is copied correctly, especially the first line:
FROM node:alpine
EXPOSE 1337
WORKDIR /var/www
COPY package.json /var/www/
RUN npm install
COPY index.js /var/www/
This Dockerfile uses the base Node.js image on Alpine Linux, exposes port 1337 that the Hello World app runs
on, then copies the app files into the image and installs the app's dependencies.
The Docker build steps create an image and tag it with the Jenkins build number so you can maintain a history of
images. Any existing containers running the app are stopped and then removed. A new container is then started
using the image and runs your Node.js app based on the latest commits in GitHub.
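A hedged sketch of those build steps as a Jenkins "Execute shell" action; the image and container names are assumptions, and $BUILD_NUMBER is provided by Jenkins:
# Build a new image tagged with the Jenkins build number
docker build --tag helloworld:$BUILD_NUMBER .
# Stop and remove any container already running the app (ignore errors on the first run)
docker stop helloworld || true
docker rm helloworld || true
# Start a container from the new image and run the Node.js app
docker run --name helloworld -p 1337:1337 helloworld:$BUILD_NUMBER node /var/www/index.js &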
Open a web browser and enter http://<publicIps>:1337 . Your Node.js app is displayed and reflects the latest
commits in your GitHub fork as follows:
Now make another edit to the index.js file in GitHub and commit the change. Wait a few seconds for the job to
complete in Jenkins, then refresh your web browser to see the updated version of your app running in a new
container as follows:
Next steps
In this tutorial, you configured GitHub to run a Jenkins build job on each code commit and then deploy a Docker
container to test your app. You learned how to:
Create a Jenkins VM
Install and configure Jenkins
Create webhook integration between GitHub and Jenkins
Create and trigger Jenkins build jobs from GitHub commits
Create a Docker image for your app
Verify GitHub commits build new Docker image and updates running app
Advance to the next tutorial to learn more about how to integrate Jenkins with Visual Studio Team Services.
Deploy apps with Jenkins and Team Services
Tutorial: Deploy your app to Linux virtual machines in
Azure using Jenkins and Visual Studio Team
Services
4/26/2018 • 7 min to read
Continuous integration (CI) and continuous deployment (CD) form a pipeline by which you can build, release, and
deploy your code. Visual Studio Team Services provides a complete, fully featured set of CI/CD automation tools
for deployment to Azure. Jenkins is a popular third-party CI/CD server-based tool that also provides CI/CD
automation. You can use Team Services and Jenkins together to customize how you deliver your cloud app or
service.
In this tutorial, you use Jenkins to build a Node.js web app. You then use Team Services or Team Foundation
Server to deploy it to a deployment group that contains Linux virtual machines (VMs). You learn how to:
Get the sample app.
Configure Jenkins plug-ins.
Configure a Jenkins Freestyle project for Node.js.
Configure Jenkins for Team Services integration.
Create a Jenkins service endpoint.
Create a deployment group for the Azure virtual machines.
Create a Team Services release definition.
Execute manual and CI-triggered deployments.
NOTE
For more information, see Connect to Team Services.
You need a Linux virtual machine for a deployment target. For more information, see Create and manage
Linux VMs with the Azure CLI.
Open inbound port 80 for your virtual machine. For more information, see Create network security groups
using the Azure portal.
NOTE
The app was built through Yeoman. It uses Express, bower, and grunt. And it has some npm packages as dependencies. The
sample also contains a script that sets up Nginx and deploys the app. It is executed on the virtual machines. Specifically, the
script:
1. Installs Node, Nginx, and PM2.
2. Configures Nginx and PM2.
3. Starts the Node app.
4. Filter the list to find the VS Team Services Continuous Deployment plug-in and select the Install without
restart option.
5. Go back to the Jenkins dashboard and select Manage Jenkins.
6. Select Global Tool Configuration. Find NodeJS and select NodeJS installations.
7. Select the Install automatically option, and then enter a Name value.
8. Select Save.
1. Create a PAT in your Team Services account if you don't already have one. Jenkins requires this information
to access your Team Services account. Be sure to store the token information for upcoming steps in this
section.
To learn how to generate a token, read How do I create a personal access token for VSTS and TFS?.
2. In the Post-build Actions tab, select Add post-build action. Select Archive the artifacts.
3. For Files to archive, enter **/* to include all files.
4. To create another action, select Add post-build action.
5. Select Trigger release in TFS/Team Services. Enter the URI for your Team Services account, such as
https://{your-account-name}.visualstudio.com.
6. Enter the Team Project name.
7. Choose a name for the release definition. (You create this release definition later in Team Services.)
8. Choose credentials to connect to your Team Services or Team Foundation Server environment:
Leave Username blank if you are using Team Services.
Enter a username and password if you are using an on-premises version of Team Foundation Server.
NOTE
In the following procedure, be sure to install the prerequisites and don't run the script with sudo privileges.
1. Open the Releases tab of the Build & Release hub, open Deployment groups, and select + New.
2. Enter a name for the deployment group, and an optional description. Then select Create.
3. Choose the operating system for your deployment target virtual machine. For example, select Ubuntu 16.04+.
4. Select Use a personal access token in the script for authentication.
5. Select the System prerequisites link. Install the prerequisites for your operating system.
6. Select Copy script to clipboard to copy the script.
7. Log in to your deployment target virtual machine and run the script. Don't run the script with sudo privileges.
8. After the installation, you are prompted for deployment group tags. Accept the defaults.
9. In Team Services, check for your newly registered virtual machine in Targets under Deployment Groups.
Next steps
In this tutorial, you automated the deployment of an app to Azure by using Jenkins for build and Team Services for
release. You learned how to:
Build your app in Jenkins.
Configure Jenkins for Team Services integration.
Create a deployment group for the Azure virtual machines.
Create a release definition that configures the VMs and deploys the app.
To learn how to deploy a LAMP (Linux, Apache, MySQL, and PHP) stack, advance to the next tutorial.
Deploy LAMP stack
Tutorial: Install a LAMP web server on a Linux virtual
machine in Azure
4/26/2018 • 6 min to read
This article walks you through how to deploy an Apache web server, MySQL, and PHP (the LAMP stack) on an
Ubuntu VM in Azure. If you prefer the NGINX web server, see the LEMP stack tutorial. To see the LAMP server in
action, you can optionally install and configure a WordPress site. In this tutorial you learn how to:
Create an Ubuntu VM (the 'L' in the LAMP stack)
Open port 80 for web traffic
Install Apache, MySQL, and PHP
Verify installation and configuration
Install WordPress on the LAMP server
This setup is for quick tests or proof of concept. For more on the LAMP stack, including recommendations for a
production environment, see the Ubuntu documentation.
If you choose to install and use the CLI locally, this tutorial requires that you are running the Azure CLI version
2.0.30 or later. Run az --version to find the version. If you need to install or upgrade, see Install Azure CLI 2.0.
az vm create \
--resource-group myResourceGroup \
--name myVM \
--image UbuntuLTS \
--admin-username azureuser \
--generate-ssh-keys
When the VM has been created, the Azure CLI shows information similar to the following example. Take note of
the publicIpAddress . This address is used to access the VM in later steps.
{
"fqdns": "",
"id": "/subscriptions/<subscription
ID>/resourceGroups/myResourceGroup/providers/Microsoft.Compute/virtualMachines/myVM",
"location": "eastus",
"macAddress": "00-0D-3A-23-9A-49",
"powerState": "VM running",
"privateIpAddress": "10.0.0.4",
"publicIpAddress": "40.68.254.142",
"resourceGroup": "myResourceGroup"
}
Use the following command to create an SSH session with the virtual machine. Substitute the correct public IP
address of your virtual machine. In this example, the IP address is 40.68.254.142. azureuser is the administrator
user name set when you created the VM.
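A sketch of the SSH command, followed by one plausible way to install the stack; the lamp-server^ Ubuntu meta-package is an assumption, and the tutorial's elided install step may differ:
ssh azureuser@40.68.254.142
# On the VM, install Apache, MySQL, and PHP in a single step
sudo apt update && sudo apt install lamp-server^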
You are prompted to install the packages and other dependencies. When prompted, set a root password for
MySQL, and then press [Enter] to continue. Follow the remaining prompts. This process installs the minimum required
PHP extensions needed to use PHP with MySQL.
Check the version of Apache with the following command:
apache2 -v
With Apache installed, and port 80 open to your VM, the web server can now be accessed from the internet. To
view the Apache2 Ubuntu Default Page, open a web browser, and enter the public IP address of the VM. Use the
public IP address you used to SSH to the VM:
MySQL
Check the version of MySQL with the following command (note the capital V parameter):
mysql -V
To help secure the installation of MySQL, run the mysql_secure_installation script. If you are only setting up a
temporary server, you can skip this step.
mysql_secure_installation
Enter a root password for MySQL, and configure the security settings for your environment.
If you want to try MySQL features (create a MySQL database, add users, or change configuration settings), login
to MySQL. This step is not required to complete this tutorial.
mysql -u root -p
Check the version of PHP with the following command:
php -v
If you want to test further, create a quick PHP info page to view in a browser. The following command creates the
PHP info page:
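A hedged one-liner that writes the page into Apache's default document root:
sudo sh -c 'echo "<?php phpinfo(); ?>" > /var/www/html/info.php'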
Now you can check the PHP info page you created. Open a browser and go to
http://yourPublicIPAddress/info.php . Substitute the public IP address of your VM. It should look similar to this
image.
Install WordPress
If you want to try your stack, install a sample app. As an example, the following steps install the open source
WordPress platform to create websites and blogs. Other workloads to try include Drupal and Moodle.
This WordPress setup is only for proof of concept. To install the latest WordPress in production with
recommended security settings, see the WordPress documentation.
Install the WordPress package
Run the following command:
sudo apt install wordpress
Configure WordPress
Configure WordPress to use MySQL and PHP.
In a working directory, create a text file wordpress.sql to configure the MySQL database for WordPress:
Add the following commands, substituting a database password of your choice for yourPassword (leave other
values unchanged). If you previously set up a MySQL security policy to validate password strength, make sure the
password meets the strength requirements. Save the file.
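The SQL itself is shown in the LEMP tutorial later in this document and is reproduced here:
CREATE DATABASE wordpress;
GRANT SELECT,INSERT,UPDATE,DELETE,CREATE,DROP,ALTER
ON wordpress.*
TO wordpress@localhost
IDENTIFIED BY 'yourPassword';
FLUSH PRIVILEGES;
Then run the file against MySQL, entering the root password you set during installation (a hedged invocation; the elided step may differ):
sudo mysql -u root -p < wordpress.sql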
Because the file wordpress.sql contains database credentials, delete it after use:
sudo rm wordpress.sql
To configure PHP, run the following command to open a text editor of your choice and create the file
/etc/wordpress/config-localhost.php :
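A sketch, using the same sensible-editor approach this document uses elsewhere:
sudo sensible-editor /etc/wordpress/config-localhost.php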
Copy the following lines to the file, substituting your WordPress database password for yourPassword (leave other
values unchanged). Then save the file.
<?php
define('DB_NAME', 'wordpress');
define('DB_USER', 'wordpress');
define('DB_PASSWORD', 'yourPassword');
define('DB_HOST', 'localhost');
define('WP_CONTENT_DIR', '/usr/share/wordpress/wp-content');
?>
Now you can complete the WordPress setup and publish on the platform. Open a browser and go to
http://yourPublicIPAddress/wordpress . Substitute the public IP address of your VM. It should look similar to this
image.
Next steps
In this tutorial, you deployed a LAMP server in Azure. You learned how to:
Create an Ubuntu VM
Open port 80 for web traffic
Install Apache, MySQL, and PHP
Verify installation and configuration
Install WordPress on the LAMP server
Advance to the next tutorial to learn how to secure web servers with SSL certificates.
Secure web server with SSL
Tutorial: Install a LEMP web server on a Linux virtual
machine in Azure
4/26/2018 • 7 min to read
This article walks you through how to deploy an NGINX web server, MySQL, and PHP (the LEMP stack) on an
Ubuntu VM in Azure. The LEMP stack is an alternative to the popular LAMP stack, which you can also install in
Azure. To see the LEMP server in action, you can optionally install and configure a WordPress site. In this tutorial
you learn how to:
Create an Ubuntu VM (the 'L' in the LEMP stack)
Open port 80 for web traffic
Install NGINX, MySQL, and PHP
Verify installation and configuration
Install WordPress on the LEMP server
This setup is for quick tests or proof of concept.
If you choose to install and use the CLI locally, this tutorial requires that you are running the Azure CLI version
2.0.30 or later. Run az --version to find the version. If you need to install or upgrade, see Install Azure CLI 2.0.
az vm create \
--resource-group myResourceGroup \
--name myVM \
--image UbuntuLTS \
--admin-username azureuser \
--generate-ssh-keys
When the VM has been created, the Azure CLI shows information similar to the following example. Take note of
the publicIpAddress . This address is used to access the VM in later steps.
{
"fqdns": "",
"id": "/subscriptions/<subscription
ID>/resourceGroups/myResourceGroup/providers/Microsoft.Compute/virtualMachines/myVM",
"location": "eastus",
"macAddress": "00-0D-3A-23-9A-49",
"powerState": "VM running",
"privateIpAddress": "10.0.0.4",
"publicIpAddress": "40.68.254.142",
"resourceGroup": "myResourceGroup"
}
Use the following command to create an SSH session with the virtual machine. Substitute the correct public IP
address of your virtual machine. In this example, the IP address is 40.68.254.142. azureuser is the administrator
user name set when you created the VM.
You are prompted to install the packages and other dependencies. When prompted, set a root password for
MySQL, and then press [Enter] to continue. Follow the remaining prompts. This process installs the minimum required
PHP extensions needed to use PHP with MySQL.
Check the version of NGINX with the following command:
nginx -v
With NGINX installed, and port 80 open to your VM, the web server can now be accessed from the internet. To
view the NGINX welcome page, open a web browser, and enter the public IP address of the VM. Use the public IP
address you used to SSH to the VM:
MySQL
Check the version of MySQL with the following command (note the capital V parameter):
mysql -V
To help secure the installation of MySQL, run the mysql_secure_installation script. If you are only setting up a
temporary server, you can skip this step.
mysql_secure_installation
Enter a root password for MySQL, and configure the security settings for your environment.
If you want to try MySQL features (create a MySQL database, add users, or change configuration settings), login
to MySQL. This step is not required to complete this tutorial.
mysql -u root -p
Check the version of PHP with the following command:
php -v
Configure NGINX to use the PHP FastCGI Process Manager (PHP-FPM). Run the following commands to back up
the original NGINX server block config file and then edit the original file in an editor of your choice:
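A hedged sketch of those commands (the backup file name is an assumption):
sudo cp /etc/nginx/sites-available/default /etc/nginx/sites-available/default_backup
sudo sensible-editor /etc/nginx/sites-available/default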
In the editor, replace the contents of /etc/nginx/sites-available/default with the following. See the comments for
explanation of the settings. Substitute the public IP address of your VM for yourPublicIPAddress, and leave the
remaining settings. Then save the file.
server {
    listen 80 default_server;
    listen [::]:80 default_server;
    root /var/www/html;
    # Homepage of website is index.php
    index index.php;
    server_name yourPublicIPAddress;
    location / {
        try_files $uri $uri/ =404;
    }
    # Pass PHP scripts to PHP-FPM; the socket path below is an assumption and
    # depends on your installed PHP version (check /run/php/ on the VM)
    location ~ \.php$ {
        include snippets/fastcgi-php.conf;
        fastcgi_pass unix:/run/php/php7.0-fpm.sock;
    }
}
sudo nginx -t
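If the syntax test passes, restart NGINX so the new configuration takes effect:
sudo systemctl restart nginx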
If you want to test further, create a quick PHP info page to view in a browser. The following command creates the
PHP info page:
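As in the LAMP tutorial, a hedged one-liner that writes the page into the web root:
sudo sh -c 'echo "<?php phpinfo(); ?>" > /var/www/html/info.php'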
Now you can check the PHP info page you created. Open a browser and go to
http://yourPublicIPAddress/info.php . Substitute the public IP address of your VM. It should look similar to this
image.
Install WordPress
If you want to try your stack, install a sample app. As an example, the following steps install the open source
WordPress platform to create websites and blogs. Other workloads to try include Drupal and Moodle.
This WordPress setup is only for proof of concept. To install the latest WordPress in production with
recommended security settings, see the WordPress documentation.
Install the WordPress package
Run the following command:
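As in the LAMP tutorial above:
sudo apt install wordpress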
Configure WordPress
Configure WordPress to use MySQL and PHP.
In a working directory, create a text file wordpress.sql to configure the MySQL database for WordPress:
Add the following commands, substituting a database password of your choice for yourPassword (leave other
values unchanged). If you previously set up a MySQL security policy to validate password strength, make sure the
password meets the strength requirements. Save the file.
CREATE DATABASE wordpress;
GRANT SELECT,INSERT,UPDATE,DELETE,CREATE,DROP,ALTER
ON wordpress.*
TO wordpress@localhost
IDENTIFIED BY 'yourPassword';
FLUSH PRIVILEGES;
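A hedged way to run the file against MySQL, entering the root password you set during installation (the elided step may use a different invocation):
sudo mysql -u root -p < wordpress.sql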
Because the file wordpress.sql contains database credentials, delete it after use:
sudo rm wordpress.sql
To configure PHP, run the following command to open a text editor of your choice and create the file
/etc/wordpress/config-localhost.php :
Copy the following lines to the file, substituting your WordPress database password for yourPassword (leave other
values unchanged). Then save the file.
<?php
define('DB_NAME', 'wordpress');
define('DB_USER', 'wordpress');
define('DB_PASSWORD', 'yourPassword');
define('DB_HOST', 'localhost');
define('WP_CONTENT_DIR', '/usr/share/wordpress/wp-content');
?>
Now you can complete the WordPress setup and publish on the platform. Open a browser and go to
http://yourPublicIPAddress/wordpress . Substitute the public IP address of your VM. It should look similar to this
image.
Next steps
In this tutorial, you deployed a LEMP server in Azure. You learned how to:
Create an Ubuntu VM
Open port 80 for web traffic
Install NGINX, MySQL, and PHP
Verify installation and configuration
Install WordPress on the LEMP stack
Advance to the next tutorial to learn how to secure web servers with SSL certificates.
Secure web server with SSL
Tutorial: Create a MongoDB, Express, AngularJS, and
Node.js (MEAN) stack on a Linux virtual machine in
Azure
4/26/2018 • 6 min to read
This tutorial shows you how to implement a MongoDB, Express, AngularJS, and Node.js (MEAN) stack on a Linux
virtual machine (VM) in Azure. The MEAN stack that you create enables adding, deleting, and listing books in a
database. You learn how to:
Create a Linux VM
Install Node.js
Install MongoDB and set up the server
Install Express and set up routes to the server
Access the routes with AngularJS
Run the application
If you choose to install and use the CLI locally, this tutorial requires that you are running the Azure CLI version
2.0.30 or later. Run az --version to find the version. If you need to install or upgrade, see Install Azure CLI 2.0.
Create a Linux VM
Create a resource group with the az group create command and create a Linux VM with the az vm create
command. An Azure resource group is a logical container into which Azure resources are deployed and managed.
The following example uses the Azure CLI to create a resource group named myResourceGroupMEAN in the
eastus location. A VM named myVM is created, with SSH keys generated if they do not already exist in a default key location.
To use a specific set of keys, use the --ssh-key-value option.
az group create --name myResourceGroupMEAN --location eastus
az vm create \
--resource-group myResourceGroupMEAN \
--name myVM \
--image UbuntuLTS \
--admin-username azureuser \
--admin-password 'Azure12345678!' \
--generate-ssh-keys
az vm open-port --port 3300 --resource-group myResourceGroupMEAN --name myVM
When the VM has been created, the Azure CLI shows information similar to the following example:
{
"fqdns": "",
"id": "/subscriptions/{subscription-
id}/resourceGroups/myResourceGroupMEAN/providers/Microsoft.Compute/virtualMachines/myVM",
"location": "eastus",
"macAddress": "00-0D-3A-23-9A-49",
"powerState": "VM running",
"privateIpAddress": "10.0.0.4",
"publicIpAddress": "13.72.77.9",
"resourceGroup": "myResourceGroupMEAN"
}
Take note of the publicIpAddress . This address is used to access the VM.
Use the following command to create an SSH session with the VM. Make sure to use the correct public IP address.
In the example above, the IP address is 13.72.77.9.
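A sketch:
ssh azureuser@13.72.77.9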
Install Node.js
Node.js is a JavaScript runtime built on Chrome's V8 JavaScript engine. Node.js is used in this tutorial to set up the
Express routes and AngularJS controllers.
On the VM, using the bash shell that you opened with SSH, install Node.js.
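A hedged sketch using the NodeSource setup script; the 6.x channel matches the era of this document, so adjust the version as needed:
curl -sL https://deb.nodesource.com/setup_6.x | sudo -E bash -
sudo apt install nodejs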
3. Install MongoDB.
5. We also need to install the body-parser package to help us process the JSON passed in requests to the
server. Install the npm package manager, and then body-parser, as sketched below.
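A hedged sketch of that step:
sudo apt install npm
sudo npm install body-parser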
6. Create a folder named Books and add a file to it named server.js that contains the configuration for the web
server.
2. In the Books folder, create a folder named apps and add a file named routes.js with the express routes
defined.
var Book = require('./models/book');
module.exports = function(app) {
app.get('/book', function(req, res) {
Book.find({}, function(err, result) {
if ( err ) throw err;
res.json(result);
});
});
app.post('/book', function(req, res) {
var book = new Book( {
name:req.body.name,
isbn:req.body.isbn,
author:req.body.author,
pages:req.body.pages
});
book.save(function(err, result) {
if ( err ) throw err;
res.json( {
message:"Successfully added book",
book:result
});
});
});
app.delete("/book/:isbn", function(req, res) {
Book.findOneAndRemove(req.query, function(err, result) {
if ( err ) throw err;
res.json( {
message: "Successfully deleted the book",
book: result
});
});
});
var path = require('path');
app.get('*', function(req, res) {
res.sendfile(path.join(__dirname + '/public', 'index.html'));
});
};
3. In the apps folder, create a folder named models and add a file named book.js with the book model
configuration defined.
2. In the public folder, create a file named index.html with the web page defined.
<!doctype html>
<html ng-app="myApp" ng-controller="myCtrl">
<head>
<script src="https://ajax.googleapis.com/ajax/libs/angularjs/1.6.4/angular.min.js"></script>
<script src="script.js"></script>
</head>
<body>
<div>
<table>
<tr>
<td>Name:</td>
<td><input type="text" ng-model="Name"></td>
</tr>
<tr>
<td>Isbn:</td>
<td><input type="text" ng-model="Isbn"></td>
</tr>
<tr>
<td>Author:</td>
<td><input type="text" ng-model="Author"></td>
</tr>
<tr>
<td>Pages:</td>
<td><input type="number" ng-model="Pages"></td>
</tr>
</table>
<button ng-click="add_book()">Add</button>
</div>
<hr>
<div>
<table>
<tr>
<th>Name</th>
<th>Isbn</th>
<th>Author</th>
<th>Pages</th>
</tr>
<tr ng-repeat="book in books">
<td><input type="button" value="Delete" data-ng-click="del_book(book)"></td>
<td>{{book.name}}</td>
<td>{{book.isbn}}</td>
<td>{{book.author}}</td>
<td>{{book.pages}}</td>
</tr>
</table>
</div>
</body>
</html>
nodejs server.js
2. Open a web browser to the address that you recorded for the VM. For example, http://13.72.77.9:3300. You
should see something like the following page:
3. Enter data into the textboxes and click Add. For example:
4. After refreshing the page, you should see something like this page:
5. You can click Delete to remove a book record from the database.
Next steps
In this tutorial, you created a web application that keeps track of book records using a MEAN stack on a Linux VM.
You learned how to:
Create a Linux VM
Install Node.js
Install MongoDB and set up the server
Install Express and set up routes to the server
Access the routes with AngularJS
Run the application
Advance to the next tutorial to learn how to secure web servers with SSL certificates.
Secure web server with SSL
Tutorial: Secure a web server on a Linux virtual
machine in Azure with SSL certificates stored in Key
Vault
4/30/2018 • 5 min to read
To secure web servers, a Secure Sockets Layer (SSL) certificate can be used to encrypt web traffic. These SSL
certificates can be stored in Azure Key Vault, and allow secure deployments of certificates to Linux virtual
machines (VMs) in Azure. In this tutorial you learn how to:
Create an Azure Key Vault
Generate or upload a certificate to the Key Vault
Create a VM and install the NGINX web server
Inject the certificate into the VM and configure NGINX with an SSL binding
If you choose to install and use the CLI locally, this tutorial requires that you are running the Azure CLI version
2.0.30 or later. Run az --version to find the version. If you need to install or upgrade, see Install Azure CLI 2.0.
Overview
Azure Key Vault safeguards cryptographic keys and secrets, such as certificates or passwords. Key Vault helps
streamline the certificate management process and enables you to maintain control of keys that access those
certificates. You can create a self-signed certificate inside Key Vault, or upload an existing, trusted certificate that
you already own.
Rather than using a custom VM image that includes certificates baked-in, you inject certificates into a running
VM. This process ensures that the most up-to-date certificates are installed on a web server during deployment. If
you renew or replace a certificate, you don't also have to create a new custom VM image. The latest certificates are
automatically injected as you create additional VMs. During the whole process, the certificates never leave the
Azure platform or are exposed in a script, command-line history, or template.
Next, create a Key Vault with az keyvault create and enable it for use when you deploy a VM. Each Key Vault
requires a unique name, and should be all lower case. Replace <mykeyvault> in the following example with your
own unique Key Vault name:
keyvault_name=<mykeyvault>
az keyvault create \
--resource-group myResourceGroupSecureWeb \
--name $keyvault_name \
--enabled-for-deployment
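The certificate steps are elided here; the following hedged sketch generates a self-signed certificate in the vault and formats it for VM deployment. The certificate name mycert is an assumption, as is the availability of the az vm secret format command in your CLI version:
az keyvault certificate create \
--vault-name $keyvault_name \
--name mycert \
--policy "$(az keyvault certificate get-default-policy)"
secret=$(az keyvault secret list-versions \
--vault-name $keyvault_name \
--name mycert \
--query "[?attributes.enabled].id" \
--output tsv)
vm_secret=$(az vm secret format --secret "$secret")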
Create a secure VM
Now create a VM with az vm create. The certificate data is injected from Key Vault with the --secrets parameter.
You pass in the cloud-init config with the --custom-data parameter:
az vm create \
--resource-group myResourceGroupSecureWeb \
--name myVM \
--image UbuntuLTS \
--admin-username azureuser \
--generate-ssh-keys \
--custom-data cloud-init-web-server.txt \
--secrets "$vm_secret"
It takes a few minutes for the VM to be created, the packages to install, and the app to start. When the VM has
been created, take note of the publicIpAddress displayed by the Azure CLI. This address is used to access your
site in a web browser.
To allow secure web traffic to reach your VM, open port 443 from the Internet with az vm open-port:
az vm open-port \
--resource-group myResourceGroupSecureWeb \
--name myVM \
--port 443
Next steps
In this tutorial, you secured an NGINX web server with an SSL certificate stored in Azure Key Vault. You learned
how to:
Create an Azure Key Vault
Generate or upload a certificate to the Key Vault
Create a VM and install the NGINX web server
Inject the certificate into the VM and configure NGINX with an SSL binding
Follow this link to see pre-built virtual machine script samples.
Linux virtual machine script samples
Azure CLI Samples for Linux virtual machines
4/9/2018 • 1 min to read
The following table includes links to bash scripts built using the Azure CLI.
Create a virtual machine Creates a Linux virtual machine with minimal configuration.
Create a fully configured virtual machine Creates a resource group, virtual machine, and all related
resources.
Create highly available virtual machines Creates several virtual machines in a highly available and load
balanced configuration.
Create a VM with Docker enabled Creates a virtual machine, configures this VM as a Docker
host, and runs an NGINX container.
Create a VM and run configuration script Creates a virtual machine and uses the Azure Custom Script
extension to install NGINX.
Create a VM with WordPress installed Creates a virtual machine and uses the Azure Custom Script
extension to install WordPress.
Create a VM from a managed OS disk Creates a virtual machine by attaching an existing Managed
Disk as OS disk.
Create a VM from a snapshot Creates a virtual machine from a snapshot by first creating a
managed disk from snapshot and then attaching the new
managed disk as OS disk.
Manage storage
Create managed disk from a VHD Creates a managed disk from a specialized VHD as an OS disk
or from a data VHD as a data disk.
Create a managed disk from a snapshot Creates a managed disk from a snapshot.
Copy managed disk to same or different subscription Copies managed disk to same or different subscription but in
the same region as the parent managed disk.
Export a snapshot as VHD to a storage account Exports a managed snapshot as VHD to a storage account in a
different region.
Copy snapshot to same or different subscription Copies snapshot to same or different subscription but in the
same region as the parent snapshot.
Encrypt a VM and data disks Creates an Azure Key Vault, encryption key, and service
principal, then encrypts a VM.
Monitor a VM with Operations Management Suite Creates a virtual machine, installs the Operations
Management Suite agent, and enrolls the VM in an OMS
Workspace.
Troubleshoot a VM's operating system disk Mounts the operating system disk from one VM as a data
disk on a second VM.
Azure Virtual Machine PowerShell samples
4/9/2018 • 1 min to read
The following table includes links to PowerShell scripts samples that create and manage Linux virtual machines.
Create a fully configured virtual machine Creates a resource group, virtual machine, and all related
resources.
Create a VM with Docker enabled Creates a virtual machine, configures this VM as a Docker
host, and runs an NGINX container.
Create a VM and run configuration script Creates a virtual machine and uses the Azure Custom Script
extension to install NGINX.
Create a VM with WordPress installed Creates a virtual machine and uses the Azure Custom Script
extension to install WordPress.
Monitor a VM with Operations Management Suite Creates a virtual machine, installs the Operations
Management Suite agent, and enrolls the VM in an OMS
Workspace.
Azure Resource Manager overview
4/11/2018 • 15 min to read
The infrastructure for your application is typically made up of many components – maybe a virtual machine,
storage account, and virtual network, or a web app, database, database server, and 3rd party services. You do not
see these components as separate entities; instead, you see them as related and interdependent parts of a single
entity. You want to deploy, manage, and monitor them as a group. Azure Resource Manager enables you to work
with the resources in your solution as a group. You can deploy, update, or delete all the resources for your solution
in a single, coordinated operation. You use a template for deployment and that template can work for different
environments such as testing, staging, and production. Resource Manager provides security, auditing, and tagging
features to help you manage your resources after deployment.
Terminology
If you are new to Azure Resource Manager, there are some terms you might not be familiar with.
resource - A manageable item that is available through Azure. Some common resources are a virtual
machine, storage account, web app, database, and virtual network, but there are many more.
resource group - A container that holds related resources for an Azure solution. The resource group can
include all the resources for the solution, or only those resources that you want to manage as a group. You
decide how you want to allocate resources to resource groups based on what makes the most sense for your
organization. See Resource groups.
resource provider - A service that supplies the resources you can deploy and manage through Resource
Manager. Each resource provider offers operations for working with the resources that are deployed. Some
common resource providers are Microsoft.Compute, which supplies the virtual machine resource,
Microsoft.Storage, which supplies the storage account resource, and Microsoft.Web, which supplies resources
related to web apps. See Resource providers.
Resource Manager template - A JavaScript Object Notation (JSON) file that defines one or more resources
to deploy to a resource group. It also defines the dependencies between the deployed resources. The template
can be used to deploy the resources consistently and repeatedly. See Template deployment.
declarative syntax - Syntax that lets you state "Here is what I intend to create" without having to write the
sequence of programming commands to create it. The Resource Manager template is an example of
declarative syntax. In the file, you define the properties for the infrastructure to deploy to Azure.
Guidance
The following suggestions help you take full advantage of Resource Manager when working with your solutions.
1. Define and deploy your infrastructure through the declarative syntax in Resource Manager templates, rather
than through imperative commands.
2. Define all deployment and configuration steps in the template. You should have no manual steps for setting up
your solution.
3. Run imperative commands to manage your resources, such as to start or stop an app or machine.
4. Arrange resources with the same lifecycle in a resource group. Use tags for all other organizing of resources.
For guidance on how enterprises can use Resource Manager to effectively manage subscriptions, see Azure
enterprise scaffold - prescriptive subscription governance.
Resource groups
There are some important factors to consider when defining your resource group:
1. All the resources in your group should share the same lifecycle. You deploy, update, and delete them together.
If one resource, such as a database server, needs to exist on a different deployment cycle, it should be in
another resource group.
2. Each resource can only exist in one resource group.
3. You can add a resource to, or remove a resource from, a resource group at any time.
4. You can move a resource from one resource group to another group. For more information, see Move
resources to new resource group or subscription.
5. A resource group can contain resources that reside in different regions.
6. A resource group can be used to scope access control for administrative actions.
7. A resource can interact with resources in other resource groups. This interaction is common when the two
resources are related but do not share the same lifecycle (for example, web apps connecting to a database).
When creating a resource group, you need to provide a location for that resource group. You may be wondering,
"Why does a resource group need a location? And, if the resources can have different locations than the resource
group, why does the resource group location matter at all?" The resource group stores metadata about the
resources. Therefore, when you specify a location for the resource group, you are specifying where that metadata
is stored. For compliance reasons, you may need to ensure that your data is stored in a particular region.
Resource providers
Each resource provider offers a set of resources and operations for working with an Azure service. For example, if
you want to store keys and secrets, you work with the Microsoft.KeyVault resource provider. This resource
provider offers a resource type called vaults for creating the key vault.
The name of a resource type is in the format: {resource-provider}/{resource-type}. For example, the key vault
type is Microsoft.KeyVault/vaults.
Before getting started with deploying your resources, you should gain an understanding of the available resource
providers. Knowing the names of resource providers and resources helps you define resources you want to
deploy to Azure. Also, you need to know the valid locations and API versions for each resource type. For more
information, see Resource providers and types.
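For example, a hedged Azure CLI query that lists the API versions available for one resource type:
az provider show --namespace Microsoft.Storage \
--query "resourceTypes[?resourceType=='storageAccounts'].apiVersions | [0]"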
Template deployment
With Resource Manager, you can create a template (in JSON format) that defines the infrastructure and
configuration of your Azure solution. By using a template, you can repeatedly deploy your solution throughout its
lifecycle and have confidence your resources are deployed in a consistent state. When you create a solution from
the portal, the solution automatically includes a deployment template. You do not have to create your template
from scratch because you can start with the template for your solution and customize it to meet your specific
needs. You can retrieve a template for an existing resource group by either exporting the current state of the
resource group, or viewing the template used for a particular deployment. Viewing the exported template is a
helpful way to learn about the template syntax.
To learn about the format of the template and how you construct it, see Create your first Azure Resource Manager
template. To view the JSON syntax for resources types, see Define resources in Azure Resource Manager
templates.
Resource Manager processes the template like any other request (see the image for Consistent management
layer). It parses the template and converts its syntax into REST API operations for the appropriate resource
providers. For example, when Resource Manager receives a template with the following resource definition:
"resources": [
{
"apiVersion": "2016-01-01",
"type": "Microsoft.Storage/storageAccounts",
"name": "mystorageaccount",
"location": "westus",
"sku": {
"name": "Standard_LRS"
},
"kind": "Storage",
"properties": {
}
}
]
It converts the definition to the following REST API operation, which is sent to the Microsoft.Storage resource
provider:
PUT
https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Storage/storageAccounts/mystorageaccount?api-version=2016-01-01
REQUEST BODY
{
  "location": "westus",
  "properties": { },
  "sku": {
    "name": "Standard_LRS"
  },
  "kind": "Storage"
}
How you define templates and resource groups is entirely up to you and how you want to manage your solution.
For example, you can deploy your three-tier application through a single template to a single resource group.
But, you do not have to define your entire infrastructure in a single template. Often, it makes sense to divide your
deployment requirements into a set of targeted, purpose-specific templates. You can easily reuse these templates
for different solutions. To deploy a particular solution, you create a master template that links all the required
templates. The following image shows how to deploy a three-tier solution through a parent template that includes
three nested templates.
If you envision your tiers having separate lifecycles, you can deploy your three tiers to separate resource groups.
Notice the resources can still be linked to resources in other resource groups.
For information about nested templates, see Using linked templates with Azure Resource Manager.
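As a minimal sketch of the linking mechanism (the deployment name and URI below are placeholders, not from this document), a parent template declares each nested deployment as a Microsoft.Resources/deployments resource:
"resources": [
  {
    "apiVersion": "2017-05-10",
    "type": "Microsoft.Resources/deployments",
    "name": "webTierDeployment",
    "properties": {
      "mode": "Incremental",
      "templateLink": {
        "uri": "https://example.com/templates/webTier.json"
      }
    }
  }
]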
Azure Resource Manager analyzes dependencies to ensure resources are created in the correct order. If one
resource relies on a value from another resource (such as a virtual machine needing a storage account for disks),
you set a dependency. For more information, see Defining dependencies in Azure Resource Manager templates.
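For example (a minimal sketch; the VM and storage account names are placeholders), a virtual machine that needs a storage account to exist first declares the dependency with dependsOn:
{
  "apiVersion": "2017-03-30",
  "type": "Microsoft.Compute/virtualMachines",
  "name": "myVM",
  "dependsOn": [
    "[resourceId('Microsoft.Storage/storageAccounts', 'mystorageaccount')]"
  ],
  ...
}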
You can also use the template for updates to the infrastructure. For example, you can add a resource to your
solution and add configuration rules for the resources that are already deployed. If the template specifies creating
a resource but that resource already exists, Azure Resource Manager performs an update instead of creating a
new asset. Azure Resource Manager updates the existing asset to the same state it would have as a new deployment.
Resource Manager provides extensions for scenarios when you need additional operations such as installing
particular software that is not included in the setup. If you are already using a configuration management service,
like DSC, Chef or Puppet, you can continue working with that service by using extensions. For information about
virtual machine extensions, see About virtual machine extensions and features.
Finally, the template becomes part of the source code for your app. You can check it in to your source code
repository and update it as your app evolves. You can edit the template through Visual Studio.
After defining your template, you are ready to deploy the resources to Azure. For the commands to deploy the
resources, see:
Deploy resources with Resource Manager templates and Azure PowerShell
Deploy resources with Resource Manager templates and Azure CLI
Deploy resources with Resource Manager templates and Azure portal
Deploy resources with Resource Manager templates and Resource Manager REST API
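For example, a template deployment with the Azure CLI 2.0 looks like the following sketch (the resource group and file names are placeholders):
az group create --name myResourceGroup --location westus
az group deployment create --resource-group myResourceGroup --template-file azuredeploy.json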
Tags
Resource Manager provides a tagging feature that enables you to categorize resources according to your
requirements for managing or billing. Use tags when you have a complex collection of resource groups and
resources, and need to visualize those assets in the way that makes the most sense to you. For example, you could
tag resources that serve a similar role in your organization or belong to the same department. Without tags, users
in your organization can create multiple resources that may be difficult to later identify and manage. For example,
you may wish to delete all the resources for a particular project. If those resources are not tagged for the project,
you have to manually find them. Tagging can be an important way for you to reduce unnecessary costs in your
subscription.
Resources do not need to reside in the same resource group to share a tag. You can create your own tag
taxonomy to ensure that all users in your organization use common tags rather than users inadvertently applying
slightly different tags (such as "dept" instead of "department").
The following example shows a tag applied to a virtual machine.
"resources": [
{
"type": "Microsoft.Compute/virtualMachines",
"apiVersion": "2015-06-15",
"name": "SimpleWindowsVM",
"location": "[resourceGroup().location]",
"tags": {
"costCenter": "Finance"
},
...
}
]
To retrieve all the resources with a tag value, use the following PowerShell cmdlet:
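For example, with the AzureRM PowerShell module current when this was written (the tag name and value are taken from the tag example above):
(Find-AzureRmResource -TagName costCenter -TagValue Finance).Name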
You can also view tagged resources through the Azure portal.
The usage report for your subscription includes tag names and values, which enables you to break out costs by
tags. For more information about tags, see Using tags to organize your Azure resources.
Access control
Resource Manager enables you to control who has access to specific actions for your organization. It natively
integrates role-based access control (RBAC) into the management platform and applies that access control to all
services in your resource group.
There are two main concepts to understand when working with role-based access control:
Role definitions - describe a set of permissions and can be used in many assignments.
Role assignments - associate a definition with an identity (user or group) for a particular scope (subscription,
resource group, or resource). The assignment is inherited by lower scopes.
You can add users to pre-defined platform and resource-specific roles. For example, you can take advantage of the
pre-defined role called Reader that permits users to view resources but not change them. You add users in your
organization that need this type of access to the Reader role and apply the role to the subscription, resource
group, or resource.
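For example, granting the Reader role at resource group scope with the Azure CLI looks like the following sketch (the user and resource group names are placeholders):
az role assignment create --assignee patlong@contoso.com --role Reader --resource-group myResourceGroup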
Azure provides the following four platform roles:
1. Owner - can manage everything, including access
2. Contributor - can manage everything except access
3. Reader - can view everything, but can't make changes
4. User Access Administrator - can manage user access to Azure resources
Azure also provides several resource-specific roles. Some common ones are:
1. Virtual Machine Contributor - can manage virtual machines but not grant access to them, and cannot manage
the virtual network or storage account to which they are connected
2. Network Contributor - can manage all network resources, but not grant access to them
3. Storage Account Contributor - can manage storage accounts, but not grant access to them
4. SQL Server Contributor - can manage SQL servers and databases, but not their security-related policies
5. Website Contributor - can manage websites, but not the web plans to which they are connected
For the full list of roles and permitted actions, see RBAC: Built in Roles. For more information about role-based
access control, see Azure Role-based Access Control.
In some cases, you want to run code or script that accesses resources, but you do not want to run it under a user’s
credentials. Instead, you want to create an identity called a service principal for the application and assign the
appropriate role for the service principal. Resource Manager enables you to create credentials for the application
and programmatically authenticate the application. To learn about creating service principals, see one of the following
topics:
Use Azure PowerShell to create a service principal to access resources
Use Azure CLI to create a service principal to access resources
Use portal to create Azure Active Directory application and service principal that can access resources
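As a minimal Azure CLI sketch (the application name is a placeholder), you can create a service principal and grant it the Contributor role in one step; the command prints the credentials your code then uses to sign in without a user's identity:
az ad sp create-for-rbac --name myAutomationApp --role Contributor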
You can also explicitly lock critical resources to prevent users from deleting or modifying them. For more
information, see Lock resources with Azure Resource Manager.
Activity logs
Resource Manager logs all operations that create, modify, or delete a resource. You can use the activity logs to find
an error when troubleshooting or to monitor how a user in your organization modified a resource. To see the logs,
select Activity logs in the Settings blade for a resource group. You can filter the logs by many different values
including which user initiated the operation. For information about working with the activity logs, see View
activity logs to manage Azure resources.
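The logs can also be retrieved programmatically; for example, with the Azure CLI (the resource group name is a placeholder):
az monitor activity-log list --resource-group myResourceGroup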
Customized policies
Resource Manager enables you to create customized policies for managing your resources. The types of policies
you create can include diverse scenarios. You can enforce a naming convention on resources, limit which types
and instances of resources can be deployed, or limit which regions can host a type of resource. You can require a
tag value on resources to organize billing by departments. You create policies to help reduce costs and maintain
consistency in your subscription.
You define policies with JSON and then apply those policies either across your subscription or within a resource
group. Policies are different from role-based access control because they are applied to resource types.
The following example shows a policy that ensures tag consistency by specifying that all resources include a
costCenter tag.
{
  "if": {
    "not": {
      "field": "tags",
      "containsKey": "costCenter"
    }
  },
  "then": {
    "effect": "deny"
  }
}
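As a sketch of putting the policy to work with the Azure CLI (the definition name, rules file, and resource group are placeholders), you create the definition from the JSON and then assign it to a scope:
az policy definition create --name requireCostCenterTag --rules costcenter-policy.json
az policy assignment create --policy requireCostCenterTag --resource-group myResourceGroup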
There are many more types of policies you can create. For more information, see What is Azure Policy?.
SDKs
Azure SDKs are available for multiple languages and platforms. Each of these language implementations is
available through its ecosystem package manager and GitHub.
Here are our Open Source SDK repositories. We welcome feedback, issues, and pull requests.
Azure SDK for .NET
Azure Management Libraries for Java
Azure SDK for Node.js
Azure SDK for PHP
Azure SDK for Python
Azure SDK for Ruby
For information about using these languages with your resources, see:
Azure for .NET developers
Azure for Java developers
Azure for Node.js developers
Azure for Python developers
NOTE
If the SDK doesn't provide the required functionality, you can also call the Azure REST API directly.
Next steps
For a simple introduction to working with templates, see Export an Azure Resource Manager template from
existing resources.
For a more thorough walkthrough of creating a template, see Create your first Azure Resource Manager
template.
To understand the functions you can use in a template, see Template functions.
For information about using Visual Studio with Resource Manager, see Creating and deploying Azure resource
groups through Visual Studio.
Regions and availability for virtual machines in Azure
4/9/2018 • 6 min to read
Azure operates in multiple datacenters around the world. These datacenters are grouped into geographic regions,
giving you flexibility in choosing where to build your applications. It is important to understand how and where
your virtual machines (VMs) operate in Azure, along with your options to maximize performance, availability, and
redundancy. This article provides you with an overview of the availability and redundancy features of Azure.
Region pairs
Each Azure region is paired with another region within the same geography (such as US, Europe, or Asia). This
approach allows for the replication of resources, such as VM storage, across a geography, reducing the
likelihood of natural disasters, civil unrest, power outages, or physical network outages affecting both regions at
once. Additional advantages of region pairs include:
In the event of a wider Azure outage, one region is prioritized out of every pair to help reduce the time to
restore for applications.
Planned Azure updates are rolled out to paired regions one at a time to minimize downtime and risk of
application outage.
Data continues to reside within the same geography as its pair (except for Brazil South) for tax and law
enforcement jurisdiction purposes.
Examples of region pairs include:
PRIMARY SECONDARY
West US East US
Feature availability
Some services or VM features are only available in certain regions, such as specific VM sizes or storage types.
There are also some global Azure services that do not require you to select a particular region, such as Azure
Active Directory, Traffic Manager, or Azure DNS. To assist you in designing your application environment, you can
check the availability of Azure services across each region. You can also programmatically query the supported VM
sizes and restrictions in each region.
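For example, the Azure CLI can list the VM sizes supported in a given region (the region name is a placeholder):
az vm list-sizes --location westus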
Storage availability
Understanding Azure regions and geographies becomes important when you consider the available storage
replication options. Depending on the storage type, you have different replication options.
Azure Managed Disks
Locally redundant storage (LRS)
Replicates your data three times within the region in which you created your storage account.
Storage account-based disks
Locally redundant storage (LRS)
Replicates your data three times within the region in which you created your storage account.
Zone redundant storage (ZRS)
Replicates your data three times across two to three facilities, either within a single region or across two
regions.
Geo-redundant storage (GRS)
Replicates your data to a secondary region that is hundreds of miles away from the primary region.
Read-access geo-redundant storage (RA-GRS)
Replicates your data to a secondary region, as with GRS, but also then provides read-only access to the
data in the secondary location.
The following table provides a quick overview of the differences between the storage replication types:
REPLICATION STRATEGY | LRS | ZRS | GRS | RA-GRS
Number of copies of data maintained on separate nodes | 3 | 3 | 6 | 6
You can read more about Azure Storage replication options here. For more information about managed disks, see
Azure Managed Disks overview.
Storage costs
Prices vary depending on the storage type and availability that you select.
Azure Managed Disks
Premium Managed Disks are backed by Solid-State Drives (SSDs) and Standard Managed Disks are backed by
regular spinning disks. Both Premium and Standard Managed Disks are charged based on the provisioned
capacity for the disk.
Unmanaged disks
Premium storage is backed by Solid-State Drives (SSDs) and is charged based on the capacity of the disk.
Standard storage is backed by regular spinning disks and is charged based on the in-use capacity and desired
storage availability.
For RA-GRS, there is an additional Geo-Replication Data Transfer charge for the bandwidth of
replicating that data to another Azure region.
See Azure Storage Pricing for pricing information on the different storage types and availability options.
Availability sets
An availability set is a logical grouping of VMs within a datacenter that allows Azure to understand how your
application is built to provide for redundancy and availability. We recommend that you create two or more VMs
within an availability set to provide for a highly available application and to meet the 99.95% Azure SLA. There is
no cost for the availability set itself; you only pay for each VM instance that you create. When a single VM is using
Azure Premium Storage, the Azure SLA applies for unplanned maintenance events.
An availability set is composed of two additional groupings that protect against hardware failures and allow
updates to safely be applied - fault domains (FDs) and update domains (UDs). You can read more about how to
manage the availability of Linux VMs or Windows VMs.
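As a sketch with the Azure CLI (names and domain counts are placeholders), you create the availability set first and then reference it when creating each VM:
az vm availability-set create --resource-group myResourceGroup --name myAvailabilitySet \
    --platform-fault-domain-count 2 --platform-update-domain-count 5
az vm create --resource-group myResourceGroup --name myVM1 \
    --availability-set myAvailabilitySet --image UbuntuLTS --generate-ssh-keys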
Fault domains
A fault domain is a logical group of underlying hardware that shares a common power source and network switch,
similar to a rack within an on-premises datacenter. As you create VMs within an availability set, the Azure platform
automatically distributes your VMs across these fault domains. This approach limits the impact of potential
physical hardware failures, network outages, or power interruptions.
Update domains
An update domain is a logical group of underlying hardware that can undergo maintenance or be rebooted at the
same time. As you create VMs within an availability set, the Azure platform automatically distributes your VMs
across these update domains. This approach ensures that at least one instance of your application always remains
running as the Azure platform undergoes periodic maintenance. The order of update domains being rebooted may
not proceed sequentially during planned maintenance, but only one update domain is rebooted at a time.
Managed Disk fault domains
For VMs using Azure Managed Disks, VMs are aligned with managed disk fault domains when using a managed
availability set. This alignment ensures that all the managed disks attached to a VM are within the same managed
disk fault domain. Only VMs with managed disks can be created in a managed availability set. The number of
managed disk fault domains varies by region - either two or three managed disk fault domains per region. You can
read more about these managed disk fault domains for Linux VMs or Windows VMs.
Availability zones
Availability zones, an alternative to availability sets, expand the level of control you have to maintain the availability
of the applications and data on your VMs. An Availability Zone is a physically separate zone within an Azure
region. There are three Availability Zones per supported Azure region. Each Availability Zone has a distinct power
source, network, and cooling, and is logically separate from the other Availability Zones within the Azure region. By
architecting your solutions to use replicated VMs in zones, you can protect your apps and data from the loss of a
datacenter. If one zone is compromised, then replicated apps and data are instantly available in another zone.
Next steps
You can now start to use these availability and redundancy features to build your Azure environment. For best
practices information, see Azure availability best practices.
Sizes for Linux virtual machines in Azure
5/3/2018 • 1 min to read
This article describes the available sizes and options for the Azure virtual machines you can use to run your Linux
apps and workloads. It also provides deployment considerations to be aware of when you're planning to use
these resources. This article is also available for Windows virtual machines.
General purpose (B, Dsv3, Dv3, DSv2, Dv2, DS, D, Av2, A0-7): Balanced CPU-to-memory ratio. Ideal for testing and
development, small to medium databases, and low to medium traffic web servers.
Memory optimized (Esv3, Ev3, M, GS, G, DSv2, DS, Dv2, D): High memory-to-CPU ratio. Great for relational
database servers, medium to large caches, and in-memory analytics.
High performance compute (H, A8-11): Our fastest and most powerful CPU virtual machines with optional high-
throughput network interfaces (RDMA).
For information about pricing of the various sizes, see Virtual Machines Pricing.
For availability of VM sizes in Azure regions, see Products available by region.
To see general limits on Azure VMs, see Azure subscription and service limits, quotas, and constraints.
Learn more about how Azure compute units (ACU) can help you compare compute performance across Azure
SKUs.
REST API
For information on using the REST API to query for VM sizes, see the following:
List available virtual machine sizes for resizing
List available virtual machine sizes for a subscription
List available virtual machine sizes in an availability set
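For example, listing the sizes available for a subscription in a region is a GET request like the following (subscription ID and location are placeholders; the api-version shown is one current for this document's timeframe):
GET
https://management.azure.com/subscriptions/{subscriptionId}/providers/Microsoft.Compute/locations/{location}/vmSizes?api-version=2017-12-01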
ACU
Learn more about how Azure compute units (ACU) can help you compare compute performance across Azure
SKUs.
Benchmark scores
Learn more about compute performance for Linux VMs using the CoreMark benchmark scores.
Next steps
Learn more about the different VM sizes that are available:
General purpose
Compute optimized
Memory optimized
Storage optimized
GPU
High performance compute
General purpose virtual machine sizes
4/11/2018 • 10 min to read
General purpose VM sizes provide balanced CPU-to-memory ratio. Ideal for testing and development, small to
medium databases, and low to medium traffic web servers. This article provides information about the number of
vCPUs, data disks and NICs as well as storage throughput and network bandwidth for each size in this grouping.
The A-series and Av2-series VMs can be deployed on a variety of hardware types and processors. The size
is throttled, based upon the hardware, to offer consistent processor performance for the running instance,
regardless of the hardware it is deployed on. To determine the physical hardware on which this size is
deployed, query the virtual hardware from within the Virtual Machine.
D-series VMs are designed to run applications that demand higher compute power and temporary disk
performance. D-series VMs provide faster processors, a higher memory-to-vCPU ratio, and a solid-state
drive (SSD) for the temporary disk. For details, see the announcement on the Azure blog, New D-Series
Virtual Machine Sizes.
Dv2-series, a follow-on to the original D-series, features a more powerful CPU. The Dv2-series CPU is
about 35% faster than the D-series CPU. It is based on the latest generation Intel Xeon® E5-2673 v3 2.4
GHz (Haswell) or E5-2673 v4 2.3 GHz (Broadwell) processors, and with the Intel Turbo Boost Technology
2.0, can go up to 3.1 GHz. The Dv2-series has the same memory and disk configurations as the D-series.
The Dv3-series features the same processor(s) as the Dv2-series, but in a hyper-threaded configuration,
providing a better value proposition for most general purpose workloads, and bringing the Dv3 into
alignment with the general purpose VMs of most other clouds. Memory has been expanded (from ~3.5
GiB/vCPU to 4 GiB/vCPU) while disk and network limits have been adjusted on a per-core basis to align
with the move to hyperthreading. The Dv3 no longer has the high memory VM sizes of the D/Dv2 families;
those have been moved to the new Ev3 family.
The basic tier sizes are primarily for development workloads and other applications that don't require load
balancing, auto-scaling, or memory-intensive virtual machines.
B-series
The B-series burstable VMs are ideal for workloads that do not need the full performance of the CPU
continuously, like web servers, small databases, and development and test environments. These workloads
typically have burstable performance requirements. The B-series provides these customers the ability to purchase
a VM size with a price-conscious baseline performance that allows the VM instance to build up credits when the
VM is utilizing less than its base performance. When the VM has accumulated credit, the VM can burst above the
VM's baseline using up to 100% of the CPU when your application requires the higher CPU performance.
SIZE | VCPU | MEMORY: GIB | LOCAL SSD: GIB | BASE PERF OF A CORE | CREDITS BANKED / HOUR | MAX BANKED CREDITS | MAX DATA DISKS | MAX LOCAL DISK PERF: IOPS / MBPS | MAX UNCACHED DISK PERF: IOPS / MBPS | MAX NICS
Dsv3-series 1
ACU: 160-190
Dsv3-series sizes are based on the 2.4 GHz Intel Xeon® E5-2673 v3 (Haswell) processor or the latest 2.3 GHz
Intel Xeon® E5-2673 v4 (Broadwell) processor that can achieve 3.5 GHz with Intel Turbo Boost Technology 2.0
and use premium storage. The Dsv3-series sizes offer a combination of vCPU, memory, and temporary storage
for most production workloads.
SIZE | VCPU | MEMORY: GIB | TEMP STORAGE (SSD) GIB | MAX DATA DISKS | MAX CACHED AND TEMP STORAGE THROUGHPUT: IOPS / MBPS (CACHE SIZE IN GIB) | MAX UNCACHED DISK THROUGHPUT: IOPS / MBPS | MAX NICS / EXPECTED NETWORK BANDWIDTH (MBPS)
Dv3-series 1
ACU: 160-190
Dv3-series sizes are based on the 2.4 GHz Intel Xeon® E5-2673 v3 (Haswell) processor or 2.3 GHz Intel Xeon®
E5-2673 v4 (Broadwell) processor that can achieve 3.5 GHz with Intel Turbo Boost Technology 2.0. The Dv3-series
sizes offer a combination of vCPU, memory, and temporary storage for most production workloads.
Data disk storage is billed separately from virtual machines. To use premium storage disks, use the Dsv3 sizes. The
pricing and billing meters for Dsv3 sizes are the same as Dv3-series.
SIZE | VCPU | MEMORY: GIB | TEMP STORAGE (SSD) GIB | MAX DATA DISKS | MAX TEMP STORAGE THROUGHPUT: IOPS / READ MBPS / WRITE MBPS | MAX NICS / NETWORK BANDWIDTH
DSv2-series
ACU: 210-250
SIZE | VCPU | MEMORY: GIB | TEMP STORAGE (SSD) GIB | MAX DATA DISKS | MAX CACHED AND TEMP STORAGE THROUGHPUT: IOPS / MBPS (CACHE SIZE IN GIB) | MAX UNCACHED DISK THROUGHPUT: IOPS / MBPS | MAX NICS / EXPECTED NETWORK BANDWIDTH (MBPS)
Dv2-series
ACU: 210-250
SIZE | VCPU | MEMORY: GIB | TEMP STORAGE (SSD) GIB | MAX TEMP STORAGE THROUGHPUT: IOPS / READ MBPS / WRITE MBPS | MAX DATA DISKS | MAX DATA DISK THROUGHPUT: IOPS | MAX NICS / EXPECTED NETWORK BANDWIDTH (MBPS)
DS-series
ACU: 160
SIZE | VCPU | MEMORY: GIB | TEMP STORAGE (SSD) GIB | MAX DATA DISKS | MAX CACHED AND TEMP STORAGE THROUGHPUT: IOPS / MBPS (CACHE SIZE IN GIB) | MAX UNCACHED DISK THROUGHPUT: IOPS / MBPS | MAX NICS / EXPECTED NETWORK BANDWIDTH (MBPS)
D-series
ACU: 160
SIZE | VCPU | MEMORY: GIB | TEMP STORAGE (SSD) GIB | MAX TEMP STORAGE THROUGHPUT: IOPS / READ MBPS / WRITE MBPS | MAX DATA DISKS / THROUGHPUT: IOPS | MAX NICS / EXPECTED NETWORK BANDWIDTH (MBPS)
Av2-series
ACU: 100
SIZE | VCPU | MEMORY: GIB | TEMP STORAGE (SSD) GIB | MAX TEMP STORAGE THROUGHPUT: IOPS / READ MBPS / WRITE MBPS | MAX DATA DISKS / THROUGHPUT: IOPS | MAX NICS / EXPECTED NETWORK BANDWIDTH (MBPS)
A-series
ACU: 50-100
SIZE | VCPU | MEMORY: GIB | TEMP STORAGE (HDD): GIB | MAX DATA DISKS | MAX DATA DISK THROUGHPUT: IOPS | MAX NICS / EXPECTED NETWORK BANDWIDTH (MBPS)
1 The A0 size is over-subscribed on the physical hardware. For this specific size only, other customer deployments
may impact the performance of your running workload. The relative performance is outlined below as the
expected baseline, subject to an approximate variability of 15 percent.
Standard A0 - A4 using CLI and PowerShell
In the classic deployment model, some VM size names are slightly different in CLI and PowerShell:
Standard_A0 is ExtraSmall
Standard_A1 is Small
Standard_A2 is Medium
Standard_A3 is Large
Standard_A4 is ExtraLarge
Basic A
SIZE\NAME | VCPU | MEMORY | NICS (MAX) | MAX TEMPORARY DISK SIZE | MAX. DATA DISKS (1023 GB EACH) | MAX. IOPS (300 PER DISK)
Note that the number of Data Disks for Classic VMs might be lower than the number of Data Disks for Azure
Resource Manager VMs.
Other sizes
Compute optimized
Memory optimized
Storage optimized
GPU
High performance compute
Next steps
Learn more about how Azure compute units (ACU) can help you compare compute performance across Azure
SKUs.
B-series burstable virtual machine sizes
4/9/2018 • 4 min to read
The B-series VM family allows you to choose which VM size provides you the necessary base level performance for
your workload, with the ability to burst CPU performance up to 100% of an Intel® Broadwell E5-2673 v4 2.3 GHz,
or an Intel® Haswell 2.4 GHz E5-2673 v3 processor vCPU.
The B-series VMs are ideal for workloads that do not need the full performance of the CPU continuously, like web
servers, small databases, and development and test environments. These workloads typically have burstable
performance requirements. The B-series provides you with the ability to purchase a VM size with baseline
performance and the VM instance builds up credits when it is using less than its baseline. When the VM has
accumulated credit, the VM can burst above the baseline using up to 100% of the vCPU when your application
requires higher CPU performance.
The B-series comes in the following six VM sizes:
Q&A
Q: How do you get 135% baseline performance from a VM?
A: The 135% is shared amongst the 8 vCPUs that make up the VM size. For example, if your application uses 4 of
the 8 vCPUs for batch processing and each of those 4 vCPUs runs at 30% utilization, the total
amount of VM CPU performance would equal 120%. That means your VM would be building credit time based
on the 15% delta from your baseline performance. It also means that when you have credits available, that same
VM can use 100% of all 8 vCPUs, giving that VM a maximum CPU performance of 800%.
Q: How can I monitor my credit balance and consumption?
A: We will be introducing 2 new metrics in the coming weeks. The Credit metric will allow you to view how many
credits your VM has banked, and the ConsumedCredit metric will show how many CPU credits your VM has
consumed from the bank. You will be able to view these metrics from the metrics pane in the portal or
programmatically through the Azure Monitor APIs.
For more information on how to access the metrics data for Azure, see Overview of metrics in Microsoft Azure.
Q: How are credits accumulated?
A: The VM accumulation and consumption rates are set such that a VM running at exactly its base performance
level will have neither a net accumulation nor consumption of bursting credits. A VM will have a net increase in
credits whenever it is running below its base performance level, and will have a net decrease in credits whenever the
VM is utilizing the CPU more than its base performance level.
Example: I deploy a VM using the B1ms size for my small time and attendance database application. This size
allows my application to use up to 20% of a vCPU as my baseline, which is 0.2 credits per minute I can use or bank.
My application is busy at the beginning and end of my employees' work day, between 7:00-9:00 AM and 4:00-
6:00 PM. During the other 20 hours of the day, my application is typically at idle, only using 10% of the vCPU. For
the non-peak hours I earn 0.2 credits per minute but only consume 0.1 credits per minute, so my VM will bank 0.1 x
60 = 6 credits per hour. For the 20 hours that I am off-peak, I will bank 120 credits.
During peak hours my application averages 60% vCPU utilization. I still earn 0.2 credits per minute, but I consume
0.6 credits per minute, for a net cost of 0.4 credits a minute, or 0.4 x 60 = 24 credits per hour. I have 4 hours per day of
peak usage, so it costs 4 x 24 = 96 credits for my peak usage.
If I take the 120 credits I earned off-peak and subtract the 96 credits I used for my peak times, I bank an additional
24 credits per day that I can use for other bursts of activity.
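The same arithmetic as a quick shell sketch (the rates come from the example above; bc is assumed to be available):
echo "(0.2 - 0.1) * 60 * 20" | bc   # credits banked off-peak per day: 120
echo "(0.6 - 0.2) * 60 * 4" | bc    # credits spent at peak per day: 96
echo "120 - 96" | bc                # net credits banked per day: 24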
Q: Does the B-series support Premium Storage data disks?
A: Yes, all B-series sizes support Premium Storage data disks.
Q: My remaining credits are set to 0 after a redeploy or a stop/start.
A: When a VM is redeployed, i.e., the VM moves to another node, the accumulated credits are lost. If the VM
is stopped/started, but remains on the same node, the VM retains the accumulated credits. Whenever the VM starts
fresh on a node, it gets an initial credit; for Standard_B8ms it is 240 minutes.
Other sizes
General purpose
Compute optimized
Memory optimized
Storage optimized
GPU optimized
High performance compute
Next steps
Learn more about how Azure compute units (ACU ) can help you compare compute performance across Azure
SKUs.
Compute optimized virtual machine sizes
4/9/2018 • 4 min to read
Compute optimized VM sizes have a high CPU-to-memory ratio and are good for medium traffic web servers,
network appliances, batch processes, and application servers. This article provides information about the number
of vCPUs, data disks, and NICs as well as storage throughput and network bandwidth for each size in this
grouping.
Fsv2-series is based on the Intel® Xeon® Platinum 8168 processor, featuring a base core frequency of 2.7 GHz
and a maximum single-core turbo frequency of 3.7 GHz. Intel® AVX-512 instructions, which are new on Intel
Scalable Processors, will provide up to a 2X performance boost to vector processing workloads on both single and
double precision floating point operations. In other words, they are really fast for any computational workload.
At a lower per-hour list price, the Fsv2-series is the best value in price-performance in the Azure portfolio based
on the Azure Compute Unit (ACU) per vCPU.
F-series is based on the 2.4 GHz Intel Xeon® E5-2673 v3 (Haswell) processor, which can achieve clock speeds as
high as 3.1 GHz with the Intel Turbo Boost Technology 2.0. This is the same CPU performance as the Dv2-series of
VMs.
F-series VMs are an excellent choice for workloads that demand faster CPUs but do not need as much memory or
temporary storage per vCPU. Workloads such as analytics, gaming servers, web servers, and batch processing
will benefit from the value of the F-series.
The Fs-series provides all the advantages of the F-series, in addition to Premium storage.
Fsv2-series 1
ACU: 195 - 210
SIZE | VCPUS | MEMORY: GIB | TEMP STORAGE (SSD) GIB | MAX DATA DISKS | MAX CACHED AND TEMP STORAGE THROUGHPUT: IOPS / MBPS (CACHE SIZE IN GIB) | MAX NICS / EXPECTED NETWORK BANDWIDTH (MBPS)
Fs-series 1
ACU: 210 - 250
SIZE | VCPU | MEMORY: GIB | TEMP STORAGE (SSD) GIB | MAX DATA DISKS | MAX CACHED AND TEMP STORAGE THROUGHPUT: IOPS / MBPS (CACHE SIZE IN GIB) | MAX UNCACHED DISK THROUGHPUT: IOPS / MBPS | MAX NICS / EXPECTED NETWORK BANDWIDTH (MBPS)
F-series
ACU: 210 - 250
SIZE | VCPU | MEMORY: GIB | TEMP STORAGE (SSD) GIB | MAX TEMP STORAGE THROUGHPUT: IOPS / READ MBPS / WRITE MBPS | MAX DATA DISKS / THROUGHPUT: IOPS | MAX NICS / EXPECTED NETWORK BANDWIDTH (MBPS)
Other sizes
General purpose
Memory optimized
Storage optimized
GPU
High performance compute
Next steps
Learn more about how Azure compute units (ACU) can help you compare compute performance across Azure
SKUs.
Memory optimized virtual machine sizes
5/10/2018 • 10 min to read
Memory optimized VM sizes offer a high memory-to-CPU ratio that is great for relational database servers,
medium to large caches, and in-memory analytics. This article provides information about the number of vCPUs,
data disks and NICs as well as storage throughput and network bandwidth for each size in this grouping.
The M-series offers the highest vCPU count (up to 128 vCPUs) and largest memory (up to 3.8 TiB) of any
VM in the cloud. It's ideal for extremely large databases or other applications that benefit from high vCPU
counts and large amounts of memory.
Dv2-series, D-series, G-series, and the DS/GS counterparts are ideal for applications that demand faster
vCPUs, better temporary storage performance, or have higher memory demands. They offer a powerful
combination for many enterprise-grade applications.
D-series VMs are designed to run applications that demand higher compute power and temporary disk
performance. D-series VMs provide faster processors, a higher memory-to-vCPU ratio, and a solid-state
drive (SSD) for temporary storage. For details, see the announcement on the Azure blog, New D-Series
Virtual Machine Sizes.
Dv2-series, a follow-on to the original D-series, features a more powerful CPU. The Dv2-series CPU is
about 35% faster than the D-series CPU. It is based on the latest generation Intel Xeon® E5-2673
v3 2.4 GHz (Haswell) or E5-2673 v4 2.3 GHz (Broadwell) processors, and with the Intel Turbo Boost
Technology 2.0, can go up to 3.1 GHz. The Dv2-series has the same memory and disk configurations as the
D-series.
The Ev3-series features the E5-2673 v4 2.3 GHz (Broadwell) processor in a hyper-threaded configuration,
providing a better value proposition for most general purpose workloads, and bringing the Ev3 into
alignment with the general purpose VMs of most other clouds. Memory has been expanded (from 7
GiB/vCPU to 8 GiB/vCPU) while disk and network limits have been adjusted on a per-core basis to align
with the move to hyperthreading. The Ev3 is the follow-up to the high memory VM sizes of the D/Dv2
families.
Azure Compute offers virtual machine sizes that are Isolated to a specific hardware type and dedicated to a
single customer. These virtual machine sizes are best suited for workloads that require a high degree of
isolation from other customers, such as workloads involving compliance and regulatory
requirements. Customers can also choose to further subdivide the resources of these Isolated virtual
machines by using Azure support for nested virtual machines. Please see the tables of virtual machine
families below for your isolated VM options.
Esv3-series
ACU: 160-190 1
Esv3-series instances are based on the 2.3 GHz Intel Xeon® E5-2673 v4 (Broadwell) processor, can achieve
3.5 GHz with Intel Turbo Boost Technology 2.0, and use premium storage. Esv3-series instances are ideal for
memory-intensive enterprise applications.
SIZE | VCPU | MEMORY: GIB | TEMP STORAGE (SSD) GIB | MAX DATA DISKS | MAX CACHED AND TEMP STORAGE THROUGHPUT: IOPS / MBPS (CACHE SIZE IN GIB) | MAX UNCACHED DISK THROUGHPUT: IOPS / MBPS | MAX NICS / EXPECTED NETWORK BANDWIDTH (MBPS)
Ev3-series
ACU: 160 - 190 1
Ev3-series instances are based on the 2.3 GHz Intel Xeon® E5-2673 v4 (Broadwell) processor and can achieve
3.5 GHz with Intel Turbo Boost Technology 2.0. Ev3-series instances are ideal for memory-intensive enterprise
applications.
Data disk storage is billed separately from virtual machines. To use premium storage disks, use the ESv3 sizes.
The pricing and billing meters for ESv3 sizes are the same as Ev3-series.
SIZE | VCPU | MEMORY: GIB | TEMP STORAGE (SSD) GIB | MAX DATA DISKS | MAX TEMP STORAGE THROUGHPUT: IOPS / READ MBPS / WRITE MBPS | MAX NICS / NETWORK BANDWIDTH
M-series
ACU: 160-180 1
SIZE | VCPU | MEMORY: GIB | TEMP STORAGE (SSD) GIB | MAX DATA DISKS | MAX CACHED AND TEMP STORAGE THROUGHPUT: IOPS / MBPS (CACHE SIZE IN GIB) | MAX UNCACHED DISK THROUGHPUT: IOPS / MBPS | MAX NICS / EXPECTED NETWORK BANDWIDTH (MBPS)
4 Instance is isolated to hardware dedicated to a single customer.
GS-series
ACU: 180 - 240 1
SIZE | VCPU | MEMORY: GIB | TEMP STORAGE (SSD) GIB | MAX DATA DISKS | MAX CACHED AND TEMP STORAGE THROUGHPUT: IOPS / MBPS (CACHE SIZE IN GIB) | MAX UNCACHED DISK THROUGHPUT: IOPS / MBPS | MAX NICS / EXPECTED NETWORK BANDWIDTH (MBPS)
1 The maximum disk throughput (IOPS or MBps) possible with a GS series VM may be limited by the number,
size and striping of the attached disk(s). For details, see Premium Storage: High-performance storage for Azure
virtual machine workloads.
2 Instance is isolated to hardware dedicated to a single customer.
G-series
ACU: 180 - 240
SIZE | VCPU | MEMORY: GIB | TEMP STORAGE (SSD) GIB | MAX TEMP STORAGE THROUGHPUT: IOPS / READ MBPS / WRITE MBPS | MAX DATA DISKS / THROUGHPUT: IOPS | MAX NICS / EXPECTED NETWORK BANDWIDTH (MBPS)
DSv2-series
ACU: 210 - 250 1
SIZE | VCPU | MEMORY: GIB | TEMP STORAGE (SSD) GIB | MAX DATA DISKS | MAX CACHED AND TEMP STORAGE THROUGHPUT: IOPS / MBPS (CACHE SIZE IN GIB) | MAX UNCACHED DISK THROUGHPUT: IOPS / MBPS | MAX NICS / EXPECTED NETWORK BANDWIDTH (MBPS)
1 The maximum disk throughput (IOPS or MBps) possible with a DSv2 series VM may be limited by the number,
size and striping of the attached disk(s). For details, see Premium Storage: High-performance storage for Azure
virtual machine workloads.
2 Instance is isolated to hardware dedicated to a single customer.
Dv2-series
ACU: 210 - 250
SIZE | VCPU | MEMORY: GIB | TEMP STORAGE (SSD) GIB | MAX TEMP STORAGE THROUGHPUT: IOPS / READ MBPS / WRITE MBPS | MAX DATA DISKS / THROUGHPUT: IOPS | MAX NICS / EXPECTED NETWORK BANDWIDTH (MBPS)
DS-series
ACU: 160 1
SIZE | VCPU | MEMORY: GIB | TEMP STORAGE (SSD) GIB | MAX DATA DISKS | MAX CACHED AND TEMP STORAGE THROUGHPUT: IOPS / MBPS (CACHE SIZE IN GIB) | MAX UNCACHED DISK THROUGHPUT: IOPS / MBPS | MAX NICS / EXPECTED NETWORK BANDWIDTH (MBPS)
1 The maximum disk throughput (IOPS or MBps) possible with a DS series VM may be limited by the number,
size and striping of the attached disk(s). For details, see Premium Storage: High-performance storage for Azure
virtual machine workloads.
D-series
ACU: 160
SIZE | VCPU | MEMORY: GIB | TEMP STORAGE (SSD) GIB | MAX TEMP STORAGE THROUGHPUT: IOPS / READ MBPS / WRITE MBPS | MAX DATA DISKS / THROUGHPUT: IOPS | MAX NICS / EXPECTED NETWORK BANDWIDTH (MBPS)
Other sizes
General purpose
Compute optimized
Storage optimized
GPU
High performance compute
Next steps
Learn more about how Azure compute units (ACU) can help you compare compute performance across Azure
SKUs.
Constrained vCPU capable VM sizes
3/9/2018 • 2 min to read
Some database workloads like SQL Server or Oracle require high memory, storage, and I/O bandwidth, but not a
high core count. Many database workloads are not CPU-intensive. Azure offers certain VM sizes where you can
constrain the VM vCPU count to reduce the cost of software licensing, while maintaining the same memory,
storage, and I/O bandwidth.
The vCPU count can be constrained to one half or one quarter of the original VM size. These new VM sizes have a
suffix that specifies the number of active vCPUs to make them easier for you to identify.
For example, the current VM size Standard_GS5 comes with 32 vCPUs, 448 GB RAM, 64 disks (up to 256 TB), and
80,000 IOPS or 2 GB/s of I/O bandwidth. The new VM sizes Standard_GS5-16 and Standard_GS5-8 come with 16
and 8 active vCPUs respectively, while maintaining the rest of the specs of the Standard_GS5 for memory, storage,
and I/O bandwidth.
The licensing fees charged for SQL Server or Oracle are constrained to the new vCPU count, and other products
should be charged based on the new vCPU count. This results in a 50% to 75% increase in the ratio of the VM
specs to active (billable) vCPUs. These new VM sizes are only available in Azure, allowing workloads to push
higher CPU utilization at a fraction of the (per-core) licensing cost. At this time, the compute cost, which includes
OS licensing, remains the same as the original size. For more information, see Azure VM sizes for more cost-
effective database workloads.
Other sizes
Compute optimized
Memory optimized
Storage optimized
GPU
High performance compute
Next steps
Learn more about how Azure compute units (ACU) can help you compare compute performance across Azure
SKUs.
Storage optimized virtual machine sizes
1/12/2018 • 2 min to read
Storage optimized VM sizes offer high disk throughput and IO, and are ideal for Big Data, SQL, and NoSQL
databases. This article provides information about the number of vCPUs, data disks and NICs as well as storage
throughput and network bandwidth for each size in this grouping.
The Ls-series offers up to 32 vCPUs, using the Intel® Xeon® processor E5 v3 family. The Ls-series gets the same
CPU performance as the G/GS-series and comes with 8 GiB of memory per vCPU.
Ls-series
ACU: 180-240
SIZE | VCPU | MEMORY: GIB | TEMP STORAGE (SSD) GIB | MAX DATA DISKS | MAX TEMP STORAGE THROUGHPUT: IOPS / MBPS | MAX UNCACHED DISK THROUGHPUT: IOPS / MBPS | MAX NICS / EXPECTED NETWORK BANDWIDTH (MBPS)
The maximum disk throughput possible with Ls-series VMs may be limited by the number, size, and striping of
any attached disks. For details, see Premium Storage: High-performance storage for Azure virtual machine
workloads.
1 Instance is isolated to hardware dedicated to a single customer.
Other sizes
General purpose
Compute optimized
Memory optimized
GPU
High performance compute
Next steps
Learn more about how Azure compute units (ACU) can help you compare compute performance across Azure
SKUs.
GPU optimized virtual machine sizes
4/9/2018 • 7 min to read
GPU optimized VM sizes are specialized virtual machines available with single or multiple NVIDIA GPUs. These
sizes are designed for compute-intensive, graphics-intensive, and visualization workloads. This article provides
information about the number and type of GPUs, vCPUs, data disks, and NICs as well as storage throughput and
network bandwidth for each size in this grouping.
NC, NCv2, NCv3, and ND sizes are optimized for compute-intensive and network-intensive applications and
algorithms, including CUDA- and OpenCL-based applications and simulations, AI, and Deep Learning.
NV sizes are optimized and designed for remote visualization, streaming, gaming, encoding, and VDI
scenarios utilizing frameworks such as OpenGL and DirectX.
NC-series
NC-series VMs are powered by the NVIDIA Tesla K80 card. Users can crunch through data faster by leveraging
CUDA for energy exploration applications, crash simulations, ray traced rendering, deep learning and more. The
NC24r configuration provides a low latency, high-throughput network interface optimized for tightly coupled
parallel computing workloads.
SIZE | VCPU | MEMORY: GIB | TEMP STORAGE (SSD) GIB | GPU | MAX DATA DISKS | MAX NICS
Standard_NC6 | 6 | 56 | 340 | 1 | 24 | 1
NCv2-series
NCv2-series VMs are powered by NVIDIA Tesla P100 GPUs. These GPUs can provide more than 2x the
computational performance of the NC-series. Customers can take advantage of these updated GPUs for
traditional HPC workloads such as reservoir modeling, DNA sequencing, protein analysis, Monte Carlo
simulations, and others. The NC24rs v2 configuration provides a low latency, high-throughput network interface
optimized for tightly coupled parallel computing workloads.
IMPORTANT
For this size family, the vCPU (core) quota in your subscription is initially set to 0 in each region. Request a vCPU quota
increase for this family in an available region.
SIZE | VCPU | MEMORY: GIB | TEMP STORAGE (SSD) GIB | GPU | MAX DATA DISKS | MAX NICS
NCv3-series
NCv3-series VMs are powered by NVIDIA Tesla V100 GPUs. These GPUs can provide 1.5x the computational
performance of the NCv2-series. Customers can take advantage of these updated GPUs for traditional HPC
workloads such as reservoir modeling, DNA sequencing, protein analysis, Monte Carlo simulations, and others.
The NC24rs v3 configuration provides a low latency, high-throughput network interface optimized for tightly
coupled parallel computing workloads.
IMPORTANT
For this size family, the vCPU (core) quota in your subscription is initially set to 0 in each region. Request a vCPU quota
increase for this family in an available region.
ND-series
The ND-series virtual machines are a new addition to the GPU family designed for AI and Deep Learning
workloads. They offer excellent performance for training and inference. ND instances are powered by NVIDIA
Tesla P40 GPUs. These instances provide excellent performance for single-precision floating point operations, for
AI workloads utilizing Microsoft Cognitive Toolkit, TensorFlow, Caffe, and other frameworks. The ND-series also
offers a much larger GPU memory size (24 GB), enabling you to fit much larger neural net models. Like the NC-
series, the ND-series offers a configuration with a secondary low-latency, high-throughput network through
RDMA, and InfiniBand connectivity so you can run large-scale training jobs spanning many GPUs.
IMPORTANT
For this size family, the vCPU (core) quota per region in your subscription is initially set to 0. Request a vCPU quota increase
for this family in an available region.
NV-series
The NV-series virtual machines are powered by NVIDIA Tesla M60 GPUs and NVIDIA GRID technology for
desktop accelerated applications and virtual desktops where customers are able to visualize their data or
simulations. Users are able to visualize their graphics intensive workflows on the NV instances to get superior
graphics capability and additionally run single precision workloads such as encoding and rendering.
Each GPU in NV instances comes with a GRID license. This license gives you the flexibility to use an NV instance
as a virtual workstation for a single user, or 25 concurrent users can connect to the VM for a virtual application
scenario.
SIZE | VCPU | MEMORY: GIB | TEMP STORAGE (SSD) GIB | GPU | MAX DATA DISKS | MAX NICS | VIRTUAL WORKSTATIONS | VIRTUAL APPLICATIONS
Standard_NV6 | 6 | 56 | 340 | 1 | 24 | 1 | 1 | 25
TIP
As an alternative to manual CUDA driver installation on a Linux VM, you can deploy an Azure Data Science Virtual Machine
image. The DSVM editions for Ubuntu 16.04 LTS or CentOS 7.4 pre-install NVIDIA CUDA drivers, the CUDA Deep Neural
Network Library, and other tools.
DISTRIBUTION | DRIVER
For driver installation and verification steps, see N-series driver setup for Linux.
Deployment considerations
For availability of N-series VMs, see Products available by region.
N-series VMs can only be deployed in the Resource Manager deployment model.
N-series VMs differ in the type of Azure Storage they support for their disks. NC and NV VMs only
support VM disks that are backed by Standard Disk Storage (HDD). NCv2, ND, and NCv3 VMs only
support VM disks that are backed by Premium Disk Storage (SSD).
If you want to deploy more than a few N-series VMs, consider a pay-as-you-go subscription or other
purchase options. If you're using an Azure free account, you can use only a limited number of Azure
compute cores.
You might need to increase the cores quota (per region) in your Azure subscription, and increase the
separate quota for NC, NCv2, NCv3, ND, or NV cores. To request a quota increase, open an online
customer support request at no charge. Default limits may vary depending on your subscription category.
You shouldn't install X server or other systems that use the Nouveau driver on Ubuntu NC VMs. Before
installing NVIDIA GPU drivers, you need to disable the Nouveau driver.
Other sizes
General purpose
Compute optimized
Memory optimized
Storage optimized
High performance compute
Next steps
Learn more about how Azure compute units (ACU) can help you compare compute performance across Azure
SKUs.
Install NVIDIA GPU drivers on N-series VMs running
Linux
4/30/2018 • 7 min to read
To take advantage of the GPU capabilities of Azure N-series VMs running Linux, NVIDIA graphics drivers must be
installed. This article provides driver setup steps after you deploy an N-series VM. Driver setup information is also
available for Windows VMs.
For N-series VM specs, storage capacities, and disk details, see GPU Linux VM sizes.
TIP
As an alternative to manual CUDA driver installation on a Linux VM, you can deploy an Azure Data Science Virtual Machine
image. The DSVM editions for Ubuntu 16.04 LTS or CentOS 7.4 pre-install NVIDIA CUDA drivers, the CUDA Deep Neural
Network Library, and other tools.
DISTRIBUTION | DRIVER
WARNING
Installation of third-party software on Red Hat products can affect the Red Hat support terms. See the Red Hat
Knowledgebase article.
Install CUDA drivers for NC, NCv2, NCv3, and ND-series VMs
Here are steps to install CUDA drivers from the NVIDIA CUDA Toolkit on N-series VMs.
C and C++ developers can optionally install the full Toolkit to build GPU-accelerated applications. For more
information, see the CUDA Installation Guide.
To install CUDA drivers, make an SSH connection to each VM. To verify that the system has a CUDA-capable GPU,
run the following command:
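lspci | grep -i NVIDIA   # list PCI devices, filtering for NVIDIA hardware (restored command; this doc's original wording promises one here)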
You will see output similar to the following example (showing an NVIDIA Tesla K80 card):
CUDA_REPO_PKG=cuda-repo-ubuntu1604_9.1.85-1_amd64.deb
wget -O /tmp/${CUDA_REPO_PKG} http://developer.download.nvidia.com/compute/cuda/repos/ubuntu1604/x86_64/${CUDA_REPO_PKG}
sudo dpkg -i /tmp/${CUDA_REPO_PKG}   # register the CUDA repository package (assumed step; downloading alone installs nothing)
rm -f /tmp/${CUDA_REPO_PKG}
sudo apt-get update
sudo apt-get install cuda-drivers    # install the NVIDIA drivers from the CUDA repository (assumed step)
sudo reboot
2. Install the latest Linux Integration Services for Hyper-V and Azure.
wget https://aka.ms/lis
cd LISISO
sudo ./install.sh
sudo reboot
CUDA_REPO_PKG=cuda-repo-rhel7-9.1.85-1.x86_64.rpm
wget http://developer.download.nvidia.com/compute/cuda/repos/rhel7/x86_64/${CUDA_REPO_PKG} -O /tmp/${CUDA_REPO_PKG}
sudo rpm -ivh /tmp/${CUDA_REPO_PKG}   # register the CUDA repository package (assumed step; downloading alone installs nothing)
rm -f /tmp/${CUDA_REPO_PKG}
sudo yum install cuda-drivers         # install the NVIDIA drivers from the CUDA repository (assumed step)
2. In /etc/waagent.conf, enable RDMA by uncommenting the following configuration lines. You need
root access to edit this file.
OS.EnableRDMA=y
OS.UpdateRdmaDriver=y
3. Add or change the following memory settings in KB in the /etc/security/limits.conf file. You need root
access to edit this file. For testing purposes you can set memlock to unlimited. For example:
<User or group name> hard memlock unlimited
<User or group name> hard memlock <memory required for your application in KB>
<User or group name> soft memlock <memory required for your application in KB>
4. Install Intel MPI Library. Either purchase and download the library from Intel or download the free
evaluation version.
wget http://registrationcenter-download.intel.com/akdlm/irc_nas/tec/9278/l_mpi_p_5.1.3.223.tgz
CentOS-based 7.4 HPC - RDMA drivers and Intel MPI 5.1 are installed on the VM.
3. Disable the Nouveau kernel driver, which is incompatible with the NVIDIA driver. (Only use the NVIDIA
driver on NV VMs.) To do this, create a file in /etc/modprobe.d named nouveau.conf with the following
contents:
blacklist nouveau
blacklist lbm-nouveau
chmod +x NVIDIA-Linux-x86_64-grid.run
sudo ./NVIDIA-Linux-x86_64-grid.run
6. When you're asked whether you want to run the nvidia-xconfig utility to update your X configuration file,
select Yes.
7. After installation completes, copy /etc/nvidia/gridd.conf.template to a new file gridd.conf at location
/etc/nvidia/
2. Disable the Nouveau kernel driver, which is incompatible with the NVIDIA driver. (Only use the NVIDIA
driver on NV VMs.) To do this, create a file in /etc/modprobe.d named nouveau.conf with the following
contents:
blacklist nouveau
blacklist lbm-nouveau
3. Reboot the VM, reconnect, and install the latest Linux Integration Services for Hyper-V and Azure.
wget https://aka.ms/lis
cd LISISO
sudo ./install.sh
sudo reboot
4. Reconnect to the VM and run the lspci command. Verify that the NVIDIA M60 card or cards are visible as
PCI devices.
5. Download and install the GRID driver:
chmod +x NVIDIA-Linux-x86_64-grid.run
sudo ./NVIDIA-Linux-x86_64-grid.run
6. When you're asked whether you want to run the nvidia-xconfig utility to update your X configuration file,
select Yes.
7. After installation completes, copy /etc/nvidia/gridd.conf.template to a new file gridd.conf at location
/etc/nvidia/
X11 server
If you need an X11 server for remote connections to an NV VM, x11vnc is recommended because it allows
hardware acceleration of graphics. The BusID of the M60 device must be manually added to the X11 configuration
file (/etc/X11/xorg.conf on Ubuntu 16.04 LTS, /etc/X11/XF86config on CentOS 7.3 or Red Hat Enterprise Server 7.3).
Add a "Device" section similar to the following:
Section "Device"
Identifier "Device0"
Driver "nvidia"
VendorName "NVIDIA Corporation"
BoardName "Tesla M60"
BusID "your-BusID:0:0:0"
EndSection
The BusID can change when a VM gets reallocated or rebooted. Therefore, you may want to create a script to
update the BusID in the X11 configuration when a VM is rebooted. For example, create a script named
busidupdate.sh (or another name you choose) with the following contents:
#!/bin/bash
BUSID=$((16#`/usr/bin/nvidia-smi --query-gpu=pci.bus_id --format=csv | tail -1 | cut -d ':' -f 1`))
if grep -Fxq "${BUSID}" /etc/X11/XF86Config; then echo "BUSID is matching"; else echo "BUSID changed to ${BUSID}" && sed -i '/BusID/c\ BusID \"PCI:0@'${BUSID}':0:0:0\"' /etc/X11/XF86Config; fi
Then, create an entry for your update script in /etc/rc.d/rc3.d so the script is invoked as root on boot.
Troubleshooting
You can set persistence mode using nvidia-smi so the output of the command is faster when you need to query
cards. To set persistence mode, execute nvidia-smi -pm 1. Note that if the VM is restarted, the mode setting
goes away. You can always script the mode setting to execute upon startup.
Next steps
To capture a Linux VM image with your installed NVIDIA drivers, see How to generalize and capture a Linux
virtual machine.
High performance compute virtual machine sizes
4/9/2018 • 7 min to read
The A8-A11 and H-series sizes are also known as compute-intensive instances. The hardware that runs these
sizes is designed and optimized for compute-intensive and network-intensive applications, including high-
performance computing (HPC) cluster applications, modeling, and simulations. The A8-A11 series uses the Intel
Xeon E5-2670 @ 2.6 GHz and the H-series uses the Intel Xeon E5-2667 v3 @ 3.2 GHz. This article provides
information about the number of vCPUs, data disks, and NICs as well as storage throughput and network
bandwidth for each size in this grouping.
Azure H-series virtual machines are the latest in high performance computing VMs aimed at high-end
computational needs, like molecular modeling and computational fluid dynamics. These 8 and 16 vCPU VMs are
built on the Intel Haswell E5-2667 v3 processor technology featuring DDR4 memory and SSD-based
temporary storage.
In addition to the substantial CPU power, the H-series offers diverse options for low latency RDMA networking
using FDR InfiniBand and several memory configurations to support memory intensive computational
requirements.
H-series
ACU: 290-300
1 For MPI applications, the dedicated RDMA backend network is enabled by the FDR InfiniBand network, which
delivers ultra-low-latency and high bandwidth.
Deployment considerations
Azure subscription – To deploy more than a few compute-intensive instances, consider a pay-as-you-go
subscription or other purchase options. If you're using an Azure free account, you can use only a limited
number of Azure compute cores.
Pricing and availability - These VM sizes are offered only in the Standard pricing tier. Check Products
available by region for availability in Azure regions.
Cores quota – You might need to increase the cores quota in your Azure subscription from the default
value. Your subscription might also limit the number of cores you can deploy in certain VM size families,
including the H-series. To request a quota increase, open an online customer support request at no charge.
(Default limits may vary depending on your subscription category.)
NOTE
Contact Azure Support if you have large-scale capacity needs. Azure quotas are credit limits, not capacity
guarantees. Regardless of your quota, you are only charged for cores that you use.
Virtual network – An Azure virtual network is not required to use the compute-intensive instances. However,
for many deployments you need at least a cloud-based Azure virtual network, or a site-to-site connection if
you need to access on-premises resources. When needed, create a new virtual network to deploy the
instances. Adding compute-intensive VMs to a virtual network in an affinity group is not supported.
Resizing – Because of their specialized hardware, you can only resize compute-intensive instances within the
same size family (H-series or compute-intensive A-series). For example, you can only resize an H-series VM
from one H-series size to another. In addition, resizing from a non-compute-intensive size to a compute-
intensive size is not supported.
RDMA-capable instances
A subset of the compute-intensive instances (H16r, H16mr, A8, and A9) feature a network interface for remote
direct memory access (RDMA) connectivity. (Selected N-series sizes designated with 'r', such as NC24r, are also
RDMA-capable.) This interface is in addition to the standard Azure network interface available to other VM sizes.
This interface allows the RDMA-capable instances to communicate over an InfiniBand (IB) network, operating at
FDR rates for H16r, H16mr, and RDMA-capable N-series virtual machines, and QDR rates for A8 and A9 virtual
machines. These RDMA capabilities can boost the scalability and performance of certain Message Passing
Interface (MPI) applications.
NOTE
In Azure, IP over IB is not supported. Only RDMA over IB is supported.
Deploy the RDMA-capable HPC VMs in the same availability set or VM scale set (when you use the Azure
Resource Manager deployment model) or the same cloud service (when you use the classic deployment model).
Additional requirements for RDMA-capable HPC VMs to access the Azure RDMA network follow.
MPI
Only Intel MPI 5.x versions are supported. Later versions (2017, 2018) of the Intel MPI runtime library are not
compatible with the Azure Linux RDMA drivers.
Distributions
Deploy a compute-intensive VM from one of the images in the Azure Marketplace that supports RDMA
connectivity:
Ubuntu - Ubuntu Server 16.04 LTS. Configure RDMA drivers on the VM and register with Intel to
download Intel MPI:
1. Install dapl, rdmacm, ibverbs, and mlx4
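For example, on Ubuntu 16.04 LTS these are typically provided by packages along the following lines
(the package names are an assumption and may vary by release):
sudo apt-get update
sudo apt-get install libdapl2 libmlx4-1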
2. In /etc/waagent.conf, enable RDMA by uncommenting the following configuration lines. You need
root access to edit this file.
OS.EnableRDMA=y
OS.UpdateRdmaDriver=y
3. Add or change the following memory settings in KB in the /etc/security/limits.conf file. You need
root access to edit this file. For testing purposes, you can set memlock to unlimited. For example:
<User or group name> hard memlock unlimited.
<User or group name> hard memlock <memory required for your application in KB>
<User or group name> soft memlock <memory required for your application in KB>
4. Install Intel MPI Library. Either purchase and download the library from Intel or download the free
evaluation version.
wget http://registrationcenter-download.intel.com/akdlm/irc_nas/tec/9278/l_mpi_p_5.1.3.223.tgz
SUSE Linux Enterprise Server - SLES 12 SP3 for HPC, SLES 12 SP3 for HPC (Premium), SLES 12 SP1
for HPC, SLES 12 SP1 for HPC (Premium). RDMA drivers are installed and Intel MPI packages are
distributed on the VM. Install MPI by running the following command:
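On the SLES 12 HPC images, the Intel MPI RPMs are staged locally, so a command along the following lines
(the path is an assumption based on the image layout) installs them:
sudo rpm -v -i --nodeps /opt/intelMPI/intel_mpi_packages/*.rpm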
CentOS-based HPC - CentOS-based 6.5 HPC or a later version (for H-series, version 7.1 or later is
recommended). RDMA drivers and Intel MPI 5.1 are installed on the VM.
NOTE
On the CentOS-based HPC images, kernel updates are disabled in the yum configuration file. This is because the
Linux RDMA drivers are distributed as an RPM package, and driver updates might not work if the kernel is updated.
Cluster configuration
Additional system configuration is needed to run MPI jobs on clustered VMs. For example, on a cluster of VMs,
you need to establish trust among the compute nodes. For typical settings, see Set up a Linux RDMA cluster to
run MPI applications.
Network topology considerations
On RDMA-enabled Linux VMs in Azure, Eth1 is reserved for RDMA network traffic. Do not change any
Eth1 settings or any information in the configuration file referring to this network. Eth0 is reserved for
regular Azure network traffic.
The RDMA network in Azure reserves the address space 172.16.0.0/16.
Using HPC Pack
HPC Pack, Microsoft’s free HPC cluster and job management solution, is one option for you to use the compute-
intensive instances with Linux. The latest releases of HPC Pack support several Linux distributions to run on
compute nodes deployed in Azure VMs, managed by a Windows Server head node. With RDMA-capable Linux
compute nodes running Intel MPI, HPC Pack can schedule and run Linux MPI applications that access the RDMA
network. See Get started with Linux compute nodes in an HPC Pack cluster in Azure.
Other sizes
General purpose
Compute optimized
Memory optimized
Storage optimized
GPU
Next steps
To get started deploying and using compute-intensive sizes with RDMA on Linux, see Set up a Linux
RDMA cluster to run MPI applications.
Learn more about how Azure compute units (ACU) can help you compare compute performance across
Azure SKUs.
Azure compute unit (ACU)
4/9/2018 • 1 min to read
The concept of the Azure Compute Unit (ACU) provides a way of comparing compute (CPU) performance across
Azure SKUs. This can help you easily identify which SKU is most likely to satisfy your performance needs. ACU is
currently standardized on a Small (Standard_A1) VM being 100, and all other SKUs then represent
approximately how much faster that SKU can run a standard benchmark.
IMPORTANT
The ACU is only a guideline. The results for your workload may vary.
SKU    ACU/vCPU    vCPU:Core
A0     50          1:1
M      160-180     2:1**
ACUs marked with a * use Intel® Turbo technology to increase CPU frequency and provide a performance
boost. The amount of the boost can vary based on the VM size, workload, and other workloads running on the
same host.
**Hyper-threaded.
Here are links to more information about the different sizes:
General-purpose
Memory optimized
Compute optimized
GPU optimized
High performance compute
Storage optimized
Compute benchmark scores for Linux VMs
4/11/2018 • 20 min to read
The following CoreMark benchmark scores show compute performance for Azure's high-performance VM lineup
running Ubuntu. Compute benchmark scores are also available for Windows VMs.
D - General Compute
(3/23/2018 7:28:16 PM pbi 2050259)
DS - Storage Optimized
(3/23/2018 7:34:52 PM pbi 2050259)
F - Compute Optimized
(3/23/2018 7:28:54 PM pbi 2050259)
G - Compute Optimized
(3/23/2018 7:27:25 PM pbi 2050259)
GS - Storage Optimized
(3/23/2018 7:25:12 PM pbi 2050259)
HPC - A8-11
(3/23/2018 7:35:10 PM pbi 2050259)
Ls - Storage Optimized
(3/23/2018 7:58:51 PM pbi 2050259)
M - Memory Optimized
(3/23/2018 8:57:07 PM pbi 2050259)
About CoreMark
Linux numbers were computed by running CoreMark on Ubuntu. CoreMark was configured with the number of
threads set to the number of virtual CPUs, and concurrency set to PThreads. The target number of iterations was
adjusted based on expected performance to provide a runtime of at least 20 seconds (typically much longer). The
final score represents the number of iterations completed divided by the number of seconds it took to run the test.
Each test was run at least seven times on each VM. Test run dates are shown above. Tests were run on multiple
VMs, across every Azure public region in which the VM was supported on the date run. The Basic A and B
(Burstable) series are not shown because their performance is variable. The N-series is not shown because it is
GPU-centric and CoreMark doesn't measure GPU performance.
Next steps
For storage capacities, disk details, and additional considerations for choosing among VM sizes, see Sizes for
virtual machines.
To run the CoreMark scripts on Linux VMs, download the CoreMark script pack.
Linux on distributions endorsed by Azure
4/25/2018 • 4 min to read
Partners provide Linux images in the Azure Marketplace. We are working with various Linux communities to add
even more flavors to the Endorsed Distribution list. In the meantime, for distributions that are not available from
the Marketplace, you can always bring your own Linux by following the guidelines at Create and upload a virtual
hard disk that contains the Linux operating system.
DISTRIBUTION               VERSION             DRIVERS                    AGENT
CentOS                     CentOS 6.3+, 7.0+   CentOS 6.3: LIS download   Package: In repo under "WALinuxAgent"
                                               CentOS 6.4+: In kernel     Source code: GitHub
Red Hat Enterprise Linux   RHEL 6.7+, 7.1+     In kernel                  Package: In repo under "WALinuxAgent"
                                                                          Source code: GitHub
1 For Ubuntu 12.04 support on Azure please refer to the EOL notice.
Partners
CoreOS
https://coreos.com/docs/running-coreos/cloud-providers/azure/
From the CoreOS website:
CoreOS is designed for security, consistency, and reliability. Instead of installing packages via yum or apt, CoreOS
uses Linux containers to manage your services at a higher level of abstraction. A single service's code and all
dependencies are packaged within a container that can be run on one or many CoreOS machines.
Credativ
http://www.credativ.co.uk/credativ-blog/debian-images-microsoft-azure
Credativ is an independent consulting and services company that specializes in the development and
implementation of professional solutions by using free software. As leading open-source specialists, Credativ has
international recognition with many IT departments that use their support. In conjunction with Microsoft, Credativ
is currently preparing corresponding Debian images for Debian 8 (Jessie) and Debian 7 (Wheezy). Both
images are specially designed to run on Azure and can be easily managed via the platform. Credativ will also
support the long-term maintenance and updating of the Debian images for Azure through its Open Source
Support Centers.
Oracle
http://www.oracle.com/technetwork/topics/cloud/faq-1963009.html
Oracle’s strategy is to offer a broad portfolio of solutions for public and private clouds. The strategy gives
customers choice and flexibility in how they deploy Oracle software in Oracle clouds and other clouds. Oracle’s
partnership with Microsoft enables customers to deploy Oracle software in Microsoft public and private clouds
with the confidence of certification and support from Oracle. Oracle’s commitment and investment in Oracle
public and private cloud solutions is unchanged.
Red Hat
http://www.redhat.com/en/partners/strategic-alliance/microsoft
The world's leading provider of open source solutions, Red Hat helps more than 90% of Fortune 500 companies
solve business challenges, align their IT and business strategies, and prepare for the future of technology. Red Hat
does this by providing secure solutions through an open business model and an affordable, predictable
subscription model.
SUSE
http://www.suse.com/suse-linux-enterprise-server-on-azure
SUSE Linux Enterprise Server on Azure is a proven platform that provides superior reliability and security for
cloud computing. SUSE's versatile Linux platform seamlessly integrates with Azure cloud services to deliver an
easily manageable cloud environment. With more than 9,200 certified applications from more than 1,800
independent software vendors for SUSE Linux Enterprise Server, SUSE ensures that workloads supported in the
data center can be confidently deployed on Azure.
Canonical
http://www.ubuntu.com/cloud/azure
Canonical engineering and open community governance drive Ubuntu's success in client, server, and cloud
computing, which includes personal cloud services for consumers. Canonical's vision of a unified, free platform in
Ubuntu, from phone to cloud, provides a family of coherent interfaces for the phone, tablet, TV, and desktop. This
vision makes Ubuntu the first choice for diverse institutions from public cloud providers to the makers of
consumer electronics and a favorite among individual technologists.
With developers and engineering centers around the world, Canonical is uniquely positioned to partner with
hardware makers, content providers, and software developers to bring Ubuntu solutions to market for PCs,
servers, and handheld devices.
Planned maintenance for Linux virtual machines
3/22/2018 • 4 min to read
Azure periodically performs updates to improve the reliability, performance, and security of the host infrastructure
for virtual machines. These updates range from patching software components in the hosting environment (like the
operating system, hypervisor, and various agents deployed on the host) and upgrading networking components to
hardware decommissioning. The majority of these updates are performed without any impact to the hosted virtual
machines. However, there are cases where updates do have an impact:
If a reboot-less update is possible, Azure uses memory preserving maintenance to pause the VM while the
host is updated or the VM is moved to an already updated host altogether.
If maintenance requires a reboot, you get a notice of when the maintenance is planned. In these cases, you'll
also be given a time window where you can start the maintenance yourself, at a time that works for you.
This page describes how Microsoft Azure performs both types of maintenance. For more information about
unplanned events (outages), see Manage the availability of virtual machines for Windows or Linux.
Applications running in a virtual machine can gather information about upcoming updates by using the Azure
Metadata Service for Windows or Linux.
For "how -to" information on managing planned maintenance, see "Handling planned maintenance notifications"
for Linux or Windows.
Next steps
For information on managing maintenance requiring a reboot, see Handling planned maintenance notifications.
About disks storage for Azure Linux VMs
4/9/2018 • 9 min to read
Just like any other computer, virtual machines in Azure use disks as a place to store an operating system,
applications, and data. All Azure virtual machines have at least two disks – a Linux operating system disk and a
temporary disk. The operating system disk is created from an image, and both the operating system disk and the
image are actually virtual hard disks (VHDs) stored in an Azure storage account. Virtual machines also can have
one or more data disks, which are also stored as VHDs.
In this article, we will talk about the different uses for the disks, and then discuss the different types of disks you
can create and use. This article is also available for Windows virtual machines.
NOTE
Azure has two different deployment models for creating and working with resources: Resource Manager and classic. This
article covers using both models, but Microsoft recommends that most new deployments use the Resource Manager model.
Temporary disk
Each VM contains a temporary disk. The temporary disk provides short-term storage for applications and
processes and is intended to only store data such as page or swap files. Data on the temporary disk may be lost
during a maintenance event or when you redeploy a VM. During a standard reboot of the VM, the data on the
temporary drive should persist.
On Linux virtual machines, the disk is typically /dev/sdb and is formatted and mounted to /mnt by the Azure
Linux Agent. The size of the temporary disk varies, based on the size of the virtual machine. For more information,
see Sizes for Linux virtual machines.
For more information on how Azure uses the temporary disk, see Understanding the temporary drive on
Microsoft Azure Virtual Machines
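As a quick check from within a Linux VM (the device naming and mount point may vary by image), you can
confirm the temporary disk's mount and size:
df -h /mnt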
Data disk
A data disk is a VHD that's attached to a virtual machine to store application data, or other data you need to keep.
Data disks are registered as SCSI drives and are labeled with a letter that you choose. Each data disk has a
maximum capacity of 4095 GB. The size of the virtual machine determines how many data disks you can attach to
it and the type of storage you can use to host the disks.
NOTE
For more details about virtual machine capacities, see Sizes for Linux virtual machines.
Azure creates an operating system disk when you create a virtual machine from an image. If you use an image that
includes data disks, Azure also creates the data disks when it creates the virtual machine. Otherwise, you add data
disks after you create the virtual machine.
You can add data disks to a virtual machine at any time by attaching the disk to the virtual machine. You can use
a VHD that you've uploaded or copied to your storage account, or one that Azure creates for you. Attaching a data
disk associates the VHD file with the VM, by placing a 'lease' on the VHD so it can't be deleted from storage while
it's still attached.
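For example, with the Azure CLI 2.0 you can create and attach a new managed data disk in one step. The
resource group, VM, and disk names below are placeholders, and flag names may differ slightly between CLI
versions:
az vm disk attach --resource-group myResourceGroup --vm-name myVM --name myDataDisk --new --size-gb 128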
About VHDs
The VHDs used in Azure are .vhd files stored as page blobs in a standard or premium storage account in Azure.
For details about page blobs, see Understanding block blobs and page blobs. For details about premium storage,
see High-performance premium storage and Azure VMs.
Azure supports the fixed disk VHD format. The fixed format lays the logical disk out linearly within the file, so that
disk offset X is stored at blob offset X. A small footer at the end of the blob describes the properties of the VHD.
Often, the fixed format wastes space because most disks have large unused ranges in them. However, Azure stores
.vhd files in a sparse format, so you receive the benefits of both the fixed and dynamic disks at the same time. For
more details, see Getting started with virtual hard disks.
All .vhd files in Azure that you want to use as a source to create disks or images are read-only, except the .vhd files
uploaded or copied to Azure storage by the user (which can be either read-write or read-only). When you create a
disk or image, Azure makes copies of the source .vhd files. These copies can be read-only or read-and-write,
depending on how you use the VHD.
When you create a virtual machine from an image, Azure creates a disk for the virtual machine that is a copy of the
source .vhd file. To protect against accidental deletion, Azure places a lease on any source .vhd file that’s used to
create an image, an operating system disk, or a data disk.
Before you can delete a source .vhd file, you’ll need to remove the lease by deleting the disk or image. To delete a
.vhd file that is being used by a virtual machine as an operating system disk, you can delete the virtual machine,
the operating system disk, and the source .vhd file all at once by deleting the virtual machine and deleting all
associated disks. However, deleting a .vhd file that’s a source for a data disk requires several steps in a set order.
First you detach the disk from the virtual machine, then delete the disk, and then delete the .vhd file.
WARNING
If you delete a source .vhd file from storage, or delete your storage account, Microsoft can't recover that data for you.
Page blobs in Premium Storage are designed for use as VHDs only. Microsoft does not recommend storing other types of
data in page blobs in Premium Storage, as the cost may be significantly greater. Use block blobs for storing data that is not in
a VHD.
Types of disks
Azure Disks are designed for 99.999% availability. Azure Disks have consistently delivered enterprise-grade
durability, with an industry-leading 0% annualized failure rate.
There are two performance tiers for storage that you can choose from when creating your disks -- Standard
Storage and Premium Storage. Also, there are two types of disks -- unmanaged and managed -- and they can
reside in either performance tier.
Standard storage
Standard Storage is backed by HDDs, and delivers cost-effective storage while still being performant. Standard
storage can be replicated locally in one datacenter, or be geo-redundant with primary and secondary data centers.
For more information about storage replication, please see Azure Storage replication.
For more information about using Standard Storage with VM disks, please see Standard Storage and Disks.
Premium storage
Premium Storage is backed by SSDs, and delivers high-performance, low-latency disk support for VMs running
I/O-intensive workloads. Typically, you can use Premium Storage with VM sizes that include an "s" in the series
name. For example, alongside the Dv3-series there is the Dsv3-series, and it is the Dsv3-series that can be used
with Premium Storage. For more information, please see Premium Storage.
Unmanaged disks
Unmanaged disks are the traditional type of disks that have been used by VMs. With these, you create your own
storage account and specify that storage account when you create the disk. You have to make sure you don't put
too many disks in the same storage account, because you could exceed the scalability targets of the storage
account (20,000 IOPS, for example), resulting in the VMs being throttled. With unmanaged disks, you have to
figure out how to maximize the use of one or more storage accounts to get the best performance out of your VMs.
Managed disks
Managed Disks handles the storage account creation/management in the background for you, and ensures that
you do not have to worry about the scalability limits of the storage account. You simply specify the disk size and
the performance tier (Standard/Premium), and Azure creates and manages the disk for you. Even as you add disks
or scale the VM up and down, you don't have to worry about the storage being used.
You can also manage your custom images in one storage account per Azure region, and use them to create
hundreds of VMs in the same subscription. For more information about Managed Disks, please see the Managed
Disks Overview.
We recommend that you use Azure Managed Disks for new VMs, and that you convert your previous unmanaged
disks to managed disks, to take advantage of the many features available in Managed Disks.
Disk comparison
The following table provides a comparison of Premium vs Standard for both unmanaged and managed disks to
help you decide what to use.
            PREMIUM DISKS               STANDARD DISKS
Disk Type   Solid State Drives (SSD)    Hard Disk Drives (HDD)
Troubleshooting
When adding data disks to a Linux VM, you may encounter errors if a disk does not exist at LUN 0. If you are
adding a disk manually using the azure vm disk attach-new command and you specify a LUN ( --lun ) rather than
allowing the Azure platform to determine the appropriate LUN, take care that a disk already exists, or will exist, at
LUN 0.
Consider the following example showing a snippet of the output from lsscsi :
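A representative snippet might look like the following (the host number and device names are illustrative):
[5:0:0:0]    disk    Msft    Virtual Disk    1.0    /dev/sdc
[5:0:0:1]    disk    Msft    Virtual Disk    1.0    /dev/sdd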
The two data disks exist at LUN 0 and LUN 1 (the first column in the lsscsi output details
[host:channel:target:lun] ). Both disks should be accessible from within the VM. If you had manually specified the
first disk to be added at LUN 1 and the second disk at LUN 2, you may not see the disks correctly from within your
VM.
NOTE
The Azure host value is 5 in these examples, but this may vary depending on the type of storage you select.
This disk behavior is not an Azure problem, but the way in which the Linux kernel follows the SCSI specifications.
When the Linux kernel scans the SCSI bus for attached devices, a device must be found at LUN 0 in order for the
system to continue scanning for additional devices. As such:
Review the output of lsscsi after adding a data disk to verify that you have a disk at LUN 0.
If your disk does not show up correctly within your VM, verify a disk exists at LUN 0.
Next steps
Attach a disk to add additional storage for your VM.
Create a snapshot.
Convert to managed disks.
Azure Managed Disks Overview
4/9/2018 • 8 min to read
Azure Managed Disks simplifies disk management for Azure IaaS VMs by managing the storage accounts
associated with the VM disks. You only have to specify the type (Premium or Standard) and the size of disk you
need, and Azure creates and manages the disk for you.
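For example, a premium managed data disk can be created with the Azure CLI 2.0 like this (the resource group
and disk names are placeholders):
az disk create --resource-group myResourceGroup --name myDataDisk --size-gb 128 --sku Premium_LRS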
Here are the disk sizes available for a premium managed disk:

PREMIUM MANAGED
DISK TYPE   P4       P6       P10       P15       P20       P30                P40                P50
Disk Size   32 GiB   64 GiB   128 GiB   256 GiB   512 GiB   1024 GiB (1 TiB)   2048 GiB (2 TiB)   4095 GiB (4 TiB)

Here are the disk sizes available for a standard managed disk:

STANDARD MANAGED
DISK TYPE   S4       S6       S10       S20       S30                S40                S50
Disk Size   32 GiB   64 GiB   128 GiB   512 GiB   1024 GiB (1 TiB)   2048 GiB (2 TiB)   4095 GiB (4 TiB)
Number of transactions: You are billed for the number of transactions that you perform on a standard managed
disk. There is no cost for transactions for a premium managed disk.
Outbound data transfers: Outbound data transfers (data going out of Azure data centers) incur billing for
bandwidth usage.
For detailed information on pricing for Managed Disks, see Managed Disks Pricing.
Images
Managed Disks also support creating a managed custom image. You can create an image from your custom VHD
in a storage account or directly from a generalized (sys-prepped) VM. This captures in a single image all managed
disks associated with a VM, including both the OS and data disks. This enables creating hundreds of VMs using
your custom image without the need to copy or manage any storage accounts.
For information on creating images, please check out the following articles:
How to capture a managed image of a generalized VM in Azure
How to generalize and capture a Linux virtual machine using the Azure CLI 2.0
Next steps
For more information about Managed Disks, please refer to the following articles.
Get started with Managed Disks
Create a VM using Resource Manager and PowerShell
Create a Linux VM using the Azure CLI 2.0
Attach a managed data disk to a Windows VM using PowerShell
Add a managed disk to a Linux VM
Managed Disks PowerShell Sample Scripts
Use Managed Disks in Azure Resource Manager templates
Compare Managed Disks storage options
Premium storage and disks
Standard storage and disks
Operational guidance
Migrate from AWS and other platforms to Managed Disks in Azure
Convert Azure VMs to managed disks in Azure
High-performance Premium Storage and managed
disks for VMs
4/9/2018 • 21 min to read
Azure Premium Storage delivers high-performance, low-latency disk support for virtual machines (VMs) with
input/output (I/O)-intensive workloads. VM disks that use Premium Storage store data on solid-state drives
(SSDs). To take advantage of the speed and performance of premium storage disks, you can migrate existing VM
disks to Premium Storage.
In Azure, you can attach several premium storage disks to a VM. Using multiple disks gives your applications up to
256 TB of storage per VM. With Premium Storage, your applications can achieve 80,000 I/O operations per
second (IOPS) per VM, and a disk throughput of up to 2,000 megabytes per second (MB/s) per VM. Read
operations give you very low latencies.
With Premium Storage, Azure offers the ability to truly lift-and-shift demanding enterprise applications like
Dynamics AX, Dynamics CRM, Exchange Server, SAP Business Suite, and SharePoint farms to the cloud. You can
run performance-intensive database workloads in applications like SQL Server, Oracle, MongoDB, MySQL, and
Redis, which require consistent high performance and low latency.
NOTE
For the best performance for your application, we recommend that you migrate any VM disk that requires high IOPS to
Premium Storage. If your disk does not require high IOPS, you can help limit costs by keeping it in standard Azure Storage. In
standard storage, VM disk data is stored on hard disk drives (HDDs) instead of on SSDs.
Azure offers two ways to create premium storage disks for VMs:
Unmanaged disks
The original method is to use unmanaged disks. In an unmanaged disk, you manage the storage accounts
that you use to store the virtual hard disk (VHD ) files that correspond to your VM disks. VHD files are
stored as page blobs in Azure storage accounts.
Managed disks
When you choose Azure Managed Disks, Azure manages the storage accounts that you use for your VM
disks. You specify the disk type (Premium or Standard) and the size of the disk that you need. Azure creates
and manages the disk for you. You don't have to worry about placing the disks in multiple storage accounts
to ensure that you stay within scalability limits for your storage accounts. Azure handles that for you.
We recommend that you choose managed disks, to take advantage of their many features.
To get started with Premium Storage, create your free Azure account.
For information about migrating your existing VMs to Premium Storage, see Convert a Windows VM from
unmanaged disks to managed disks or Convert a Linux VM from unmanaged disks to managed disks.
NOTE
Premium Storage is available in most regions. For the list of available regions, see the row for Disk Storage in Azure
products available by region.
Features
Here are some of the features of Premium Storage:
Premium storage disks
Premium Storage supports VM disks that can be attached to specific size-series VMs. Premium Storage
supports DS-series, DSv2-series, GS-series, Ls-series, Fs-series, and Esv3-series VMs. You have a choice of
seven disk sizes: P4 (32 GB), P6 (64 GB), P10 (128 GB), P20 (512 GB), P30 (1024 GB), P40 (2048 GB), P50
(4095 GB). P4 and P6 disk sizes are currently supported only for Managed Disks. Each disk size has its own
performance specifications. Depending on your application requirements, you can attach one or more disks
to your VM. We describe the specifications in more detail in Premium Storage scalability and performance
targets.
Premium page blobs
Premium Storage supports page blobs. Use page blobs to store persistent, unmanaged disks for VMs in
Premium Storage. Unlike standard Azure Storage, Premium Storage does not support block blobs, append
blobs, files, tables, or queues. Premium page blobs support six sizes from P10 to P50, plus P60 (8191 GiB). The
P60 premium page blob cannot be attached as a VM disk.
Any object placed in a premium storage account is a page blob, and it snaps to one of the supported
provisioned sizes. This is why a premium storage account is not intended to be used to store tiny blobs.
Premium storage account
To start using Premium Storage, create a premium storage account for unmanaged disks. In the Azure
portal, to create a premium storage account, choose the Premium performance tier. Select the Locally-
redundant storage (LRS) replication option. You also can create a premium storage account by setting the
performance tier to Premium_LRS. To change the performance tier, use one of the following approaches:
PowerShell for Azure Storage
Azure CLI for Azure Storage
Azure Storage Resource Provider REST API (for Azure Resource Manager deployments) or one of
the Azure Storage resource provider client libraries
To learn about premium storage account limits, see Premium Storage scalability and performance
targets.
Premium locally redundant storage
A premium storage account supports only locally redundant storage as the replication option. Locally
redundant storage keeps three copies of the data within a single region. For regional disaster recovery, you
must back up your VM disks in a different region by using Azure Backup. You also must use a geo-
redundant storage (GRS) account as the backup vault.
Azure uses your storage account as a container for your unmanaged disks. When you create an Azure VM
that supports Premium Storage with unmanaged disks, and you select a premium storage account, your
operating system and data disks are stored in that storage account.
Supported VMs
Premium Storage supports B-series, DS-series, DSv2-series, DSv3-series, GS-series, Ls-series, M-series, and Fs-
series VMs. You can use standard and premium storage disks with these VM types. You cannot use premium
storage disks with VM series that are not Premium Storage-compatible.
For information about VM types and sizes in Azure for Windows, see Windows VM sizes. For information about
VM types and sizes in Azure for Linux, see Linux VM sizes.
These are some of the features of the DS-series, DSv2-series, GS-series, Ls-series, and Fs-series VMs:
Cloud service
You can add DS-series VMs to a cloud service that has only DS-series VMs. Do not add DS-series VMs to
an existing cloud service that has any type other than DS-series VMs. You can migrate your existing VHDs
to a new cloud service that runs only DS-series VMs. If you want to use the same virtual IP address for the
new cloud service that hosts your DS-series VMs, use reserved IP addresses. GS-series VMs can be added
to an existing cloud service that has only GS-series VMs.
Operating system disk
You can set up your Premium Storage VM to use either a premium or a standard operating system disk. For
the best experience, we recommend using a Premium Storage-based operating system disk.
Data disks
You can use premium and standard disks in the same Premium Storage VM. With Premium Storage, you
can provision a VM and attach several persistent data disks to the VM. If needed, to increase the capacity
and performance of the volume, you can stripe across your disks.
NOTE
If you stripe premium storage data disks by using Storage Spaces, set up Storage Spaces with 1 column for each disk
that you use. Otherwise, overall performance of the striped volume might be lower than expected because of uneven
distribution of traffic across the disks. By default, in Server Manager, you can set up columns for up to 8 disks. If you
have more than 8 disks, use PowerShell to create the volume. Specify the number of columns manually. Otherwise,
the Server Manager UI continues to use 8 columns, even if you have more disks. For example, if you have 32 disks in
a single stripe set, specify 32 columns. To specify the number of columns the virtual disk uses, in the New-VirtualDisk
PowerShell cmdlet, use the NumberOfColumns parameter. For more information, see Storage Spaces Overview and
Storage Spaces FAQs.
Cache
VMs in the size series that support Premium Storage have a unique caching capability for high levels of
throughput and latency. The caching capability exceeds underlying premium storage disk performance. You
can set the disk caching policy on premium storage disks to ReadOnly, ReadWrite, or None. The default
disk caching policy is ReadOnly for all premium data disks, and ReadWrite for operating system disks. For
optimal performance for your application, use the correct cache setting. For example, for read-heavy or
read-only data disks, such as SQL Server data files, set the disk caching policy to ReadOnly. For write-
heavy or write-only data disks, such as SQL Server log files, set the disk caching policy to None. To learn
more about optimizing your design with Premium Storage, see Design for performance with Premium
Storage.
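For example, the cache setting can be supplied when attaching a data disk with the Azure CLI 2.0 (the names
below are placeholders, and flag names may vary slightly between CLI versions):
az vm disk attach --resource-group myResourceGroup --vm-name myVM --name mySqlDataDisk --caching ReadOnly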
Analytics
To analyze VM performance by using disks in Premium Storage, turn on VM diagnostics in the Azure
portal. For more information, see Azure VM monitoring with Azure Diagnostics Extension.
To see disk performance, use operating system-based tools like Windows Performance Monitor for
Windows VMs and the iostat command for Linux VMs.
VM scale limits and performance
Each Premium Storage-supported VM size has scale limits and performance specifications for IOPS,
bandwidth, and the number of disks that can be attached per VM. When you use premium storage disks
with VMs, make sure that there is sufficient IOPS and bandwidth on your VM to drive disk traffic.
For example, a STANDARD_DS1 VM has a dedicated bandwidth of 32 MB/s for premium storage disk
traffic. A P10 premium storage disk can provide a bandwidth of 100 MB/s. If a P10 premium storage disk is
attached to this VM, it can only go up to 32 MB/s. It cannot use the maximum 100 MB/s that the P10 disk
can provide.
Currently, the largest VM in the DS-series is the Standard_DS15_v2. The Standard_DS15_v2 can provide up
to 960 MB/s across all disks. The largest VM in the GS-series is the Standard_GS5. The Standard_GS5 can
provide up to 2,000 MB/s across all disks.
These limits are for disk traffic only. These limits don't include cache hits and network traffic. A separate
bandwidth is available for VM network traffic. Bandwidth for network traffic is different from the dedicated
bandwidth used by premium storage disks.
For the most up-to-date information about maximum IOPS and throughput (bandwidth) for Premium
Storage-supported VMs, see Windows VM sizes or Linux VM sizes.
For more information about premium storage disks and their IOPS and throughput limits, see the table in
the next section.
PREMIUM DISKS
TYPE            P4    P6    P10   P15    P20    P30    P40    P50
IOPS per disk   120   240   500   1100   2300   5000   7500   7500
NOTE
Make sure sufficient bandwidth is available on your VM to drive disk traffic, as described in Premium Storage-supported
VMs. Otherwise, your disk throughput and IOPS is constrained to lower values. Maximum throughput and IOPS are based
on the VM limits, not on the disk limits described in the preceding table.
Here are some important things to know about Premium Storage scalability and performance targets:
Provisioned capacity and performance
When you provision a premium storage disk, unlike standard storage, you are guaranteed the capacity,
IOPS, and throughput of that disk. For example, if you create a P50 disk, Azure provisions 4,095-GB
storage capacity, 7,500 IOPS, and 250-MB/s throughput for that disk. Your application can use all or part of
the capacity and performance.
Disk size
Azure maps the disk size (rounded up) to the nearest premium storage disk option, as specified in the table
in the preceding section. For example, a disk size of 100 GB is classified as a P10 option. It can perform up
to 500 IOPS, with up to 100-MB/s throughput. Similarly, a disk of size 400 GB is classified as a P20. It can
perform up to 2,300 IOPS, with 150-MB/s throughput.
NOTE
You can easily increase the size of existing disks. For example, you might want to increase the size of a 30-GB disk to
128 GB, or even to 1 TB. Or, you might want to convert your P20 disk to a P30 disk because you need more capacity
or more IOPS and throughput.
I/O size
I/O is accounted for in units of 256 KB. If the data being transferred is less than 256 KB, it is considered 1 I/O
unit. Larger I/O sizes are counted as multiple I/Os of size 256 KB. For example, a 1,100-KB I/O is counted as 5
I/O units.
Throughput
The throughput limit includes writes to the disk, and it includes disk read operations that aren't served from
the cache. For example, a P10 disk has 100-MB/s throughput per disk. Some examples of valid throughput
for a P10 disk are shown in the following table:
MAX THROUGHPUT PER P10 DISK    NON-CACHE READS FROM DISK    NON-CACHE WRITES TO DISK
100 MB/s                       100 MB/s                     0 MB/s
100 MB/s                       0 MB/s                       100 MB/s
100 MB/s                       60 MB/s                      40 MB/s
Cache hits
Cache hits are not limited by the allocated IOPS or throughput of the disk. For example, when you use a
data disk with a ReadOnly cache setting on a VM that is supported by Premium Storage, reads that are
served from the cache are not subject to the IOPS and throughput caps of the disk. If the workload of a disk
is predominantly reads, you might get very high throughput. The cache is subject to separate IOPS and
throughput limits at the VM level, based on the VM size. DS-series VMs have roughly 4,000 IOPS and 33-
MB/s throughput per core for cache and local SSD I/Os. GS-series VMs have a limit of 5,000 IOPS and 50-
MB/s throughput per core for cache and local SSD I/Os.
Throttling
Throttling might occur if your application IOPS or throughput exceeds the allocated limits for a premium storage
disk. Throttling also might occur if your total disk traffic across all disks on the VM exceeds the disk bandwidth
limit available for the VM. To avoid throttling, we recommend that you limit the number of pending I/O requests
for the disk. Use a limit based on scalability and performance targets for the disk you have provisioned, and on the
disk bandwidth available to the VM.
Your application can achieve the lowest latency when it is designed to avoid throttling. However, if the number of
pending I/O requests for the disk is too small, your application cannot take advantage of the maximum IOPS and
throughput levels that are available to the disk.
The following examples demonstrate how to calculate throttling levels. All calculations are based on an I/O unit
size of 256 KB.
Example 1
Your application has processed 495 I/O units of 16-KB size in one second on a P10 disk. The I/O units are counted
as 495 IOPS. If you try a 2-MB I/O in the same second, the total of I/O units is equal to 495 + 8 IOPS. This is
because 2 MB I/O = 2,048 KB / 256 KB = 8 I/O units, when the I/O unit size is 256 KB. Because the sum of 495 +
8 exceeds the 500 IOPS limit for the disk, throttling occurs.
Example 2
Your application has processed 400 I/O units of 256-KB size on a P10 disk. The total bandwidth consumed is (400
× 256) / 1,024 KB = 100 MB/s. A P10 disk has a throughput limit of 100 MB/s. If your application tries to perform
more I/O operations in that second, it is throttled because it exceeds the allocated limit.
Example 3
You have a DS4 VM with two P30 disks attached. Each P30 disk is capable of 200-MB/s throughput. However, a
DS4 VM has a total disk bandwidth capacity of 256 MB/s. You cannot drive both attached disks to the maximum
throughput on this DS4 VM at the same time. To resolve this, you can sustain traffic of 200 MB/s on one disk and
56 MB/s on the other disk. If the sum of your disk traffic goes over 256 MB/s, disk traffic is throttled.
NOTE
If your disk traffic mostly consists of small I/O sizes, your application likely will hit the IOPS limit before the throughput limit.
However, if the disk traffic mostly consists of large I/O sizes, your application likely will hit the throughput limit first, instead
of the IOPS limit. You can maximize your application's IOPS and throughput capacity by using optimal I/O sizes. Also, you can
limit the number of pending I/O requests for a disk.
To learn more about designing for high performance by using Premium Storage, see Design for performance with
Premium Storage.
To maintain geo-redundant copies of your snapshots, you can copy snapshots from a premium storage account to
a geo-redundant standard storage account by using AzCopy or Copy Blob. For more information, see Transfer
data with the AzCopy command-line utility and Copy Blob.
For detailed information about performing REST operations against page blobs in a premium storage account, see
Blob service operations with Azure Premium Storage.
Managed disks
A snapshot for a managed disk is a read-only copy of the managed disk. The snapshot is stored as a standard
managed disk. Currently, incremental snapshots are not supported for managed disks. To learn how to take a
snapshot for a managed disk, see Create a copy of a VHD stored as an Azure managed disk by using managed
snapshots in Windows or Create a copy of a VHD stored as an Azure managed disk by using managed snapshots
in Linux.
If a managed disk is attached to a VM, some API operations on the disk are not permitted. For example, you
cannot generate a shared access signature (SAS ) to perform a copy operation while the disk is attached to a VM.
Instead, first create a snapshot of the disk, and then perform the copy of the snapshot. Alternately, you can detach
the disk and then generate an SAS to perform the copy operation.
Next steps
For more information about Premium Storage, see the following articles.
Design and implement with Premium Storage
Design for performance with Premium Storage
Blob storage operations with Premium Storage
Operational guidance
Migrate to Azure Premium Storage
Blog posts
Azure Premium Storage generally available
Announcing the GS-series: Adding Premium Storage support to the largest VMs in the public cloud
Azure Premium Storage: Design for High
Performance
11/1/2017 • 41 min to read
Overview
This article provides guidelines for building high performance applications using Azure Premium Storage. You can
use the instructions provided in this document combined with performance best practices applicable to
technologies used by your application. To illustrate the guidelines, we have used SQL Server running on Premium
Storage as an example throughout this document.
While we address performance scenarios for the Storage layer in this article, you will need to optimize the
application layer. For example, if you are hosting a SharePoint Farm on Azure Premium Storage, you can use the
SQL Server examples from this article to optimize the database server. Additionally, optimize the SharePoint
Farm's Web server and Application server to get the most performance.
This article will help answer the following common questions about optimizing application performance on Azure
Premium Storage:
How to measure your application performance?
Why are you not seeing expected high performance?
Which factors influence your application performance on Premium Storage?
How do these factors influence performance of your application on Premium Storage?
How can you optimize for IOPS, Bandwidth and Latency?
We have provided these guidelines specifically for Premium Storage because workloads running on Premium
Storage are highly performance sensitive. We have provided examples where appropriate. You can also apply
some of these guidelines to applications running on IaaS VMs with Standard Storage disks.
Before you begin, if you are new to Premium Storage, first read the Premium Storage: High-Performance Storage
for Azure Virtual Machine Workloads and Azure Storage Scalability and Performance Targets articles.
IOPS
IOPS is the number of requests that your application is sending to the storage disks in one second. An input/output
operation could be read or write, sequential or random. OLTP applications like an online retail website need to
process many concurrent user requests immediately. The user requests are insert and update intensive database
transactions, which the application must process quickly. Therefore, OLTP applications require very high IOPS.
Such applications handle millions of small and random IO requests. If you have such an application, you must
design the application infrastructure to optimize for IOPS. In the later section, Optimizing Application
Performance, we discuss in detail all the factors that you must consider to get high IOPS.
When you attach a premium storage disk to your high scale VM, Azure provisions for you a guaranteed number of
IOPS as per the disk specification. For example, a P50 disk provisions 7500 IOPS. Each high scale VM size also has
a specific IOPS limit that it can sustain. For example, a Standard GS5 VM has 80,000 IOPS limit.
Throughput
Throughput or Bandwidth is the amount of data that your application is sending to the storage disks in a specified
interval. If your application is performing input/output operations with large IO unit sizes, it requires high
Throughput. Data warehouse applications tend to issue scan intensive operations that access large portions of data
at a time and commonly perform bulk operations. In other words, such applications require higher Throughput. If
you have such an application, you must design its infrastructure to optimize for Throughput. In the next section, we
discuss in detail the factors you must tune to achieve this.
When you attach a premium storage disk to a high scale VM, Azure provisions Throughput as per that disk
specification. For example, a P50 disk provisions 250 MB per second disk Throughput. Each high scale VM size
also has a specific Throughput limit that it can sustain. For example, a Standard GS5 VM has a maximum
throughput of 2,000 MB per second.
There is a relation between Throughput and IOPS, as shown in the following formula:

Throughput = IOPS × IO size
Therefore, it is important to determine the optimal Throughput and IOPS values that your application requires. As
you try to optimize one, the other also gets affected. In a later section, Optimizing Application Performance, we
will discuss in more details about optimizing IOPS and Throughput.
Latency
Latency is the time it takes an application to receive a single request, send it to the storage disks and send the
response to the client. This is a critical measure of an application's performance in addition to IOPS and
Throughput. The Latency of a premium storage disk is the time it takes to retrieve the information for a request
and communicate it back to your application. Premium Storage provides consistent low latencies. If you enable
ReadOnly host caching on premium storage disks, you can get much lower read latency. We will discuss Disk
Caching in more detail in a later section on Optimizing Application Performance.
When you are optimizing your application to get higher IOPS and Throughput, it will affect the Latency of your
application. After tuning the application performance, always evaluate the Latency of the application to avoid
unexpected high latency behavior.
PERFORMANCE REQUIREMENTS    50TH PERCENTILE    90TH PERCENTILE    99TH PERCENTILE
% Read operations
% Write operations
% Random operations
% Sequential operations
IO request size
Average Throughput
Max. Throughput
Min. Latency
Average Latency
Max. CPU
Average CPU
Max. Memory
Average Memory
Queue Depth
NOTE
You should consider scaling these numbers based on expected future growth of your application. It is a good idea to plan for
growth ahead of time, because it could be harder to change the infrastructure for improving performance later.
If you have an existing application and want to move to Premium Storage, first build the checklist above for the
existing application. Then, build a prototype of your application on Premium Storage and design the application
based on guidelines described in Optimizing Application Performance in a later section of this document. The next
section describes the tools you can use to gather the performance measurements.
Create a checklist similar to your existing application for the prototype. Using Benchmarking tools you can
simulate the workloads and measure performance on the prototype application. See the section on Benchmarking
to learn more. By doing so you can determine whether Premium Storage can match or surpass your application
performance requirements. Then you can implement the same guidelines for your production application.
Counters to measure application performance requirements
The best way to measure the performance requirements of your application is to use performance-monitoring tools
provided by the operating system of the server. You can use PerfMon for Windows and iostat for Linux. These tools
capture counters corresponding to each measure explained in the above section. You must capture the values of
these counters when your application is running its normal, peak and off-hours workloads.
The PerfMon counters are available for the processor, memory, and each logical disk and physical disk of your server.
When you use premium storage disks with a VM, the physical disk counters are for each premium storage disk,
and logical disk counters are for each volume created on the premium storage disks. You must capture the values
for the disks that host your application workload. If there is a one to one mapping between logical and physical
disks, you can refer to physical disk counters; otherwise refer to the logical disk counters. On Linux, the iostat
command generates a CPU and disk utilization report. The disk utilization report provides statistics per physical
device or partition. If you have a database server with its data and log on separate disks, collect this data for both
disks. The following table describes counters for disks, processor, and memory:

COUNTER                   DESCRIPTION                              PERFMON                      IOSTAT
Disk Reads and Writes     % of read and write operations           % Disk Read Time             r/s
                          performed on the disk.                   % Disk Write Time            w/s
Queue Depth               Number of outstanding I/O requests       Current Disk Queue Length    avgqu-sz
                          waiting to be read from or written
                          to the storage disk.
Max. Memory               Amount of memory required to run         % Committed Bytes in Use     Use vmstat
                          the application smoothly.
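As an example of collecting the disk counters on Linux, the following iostat invocation (standard sysstat flags)
prints extended per-device statistics every 5 seconds, 12 times:
iostat -dx 5 12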
Example scenario
  IOPS: Enterprise OLTP application requiring a very high transactions-per-second rate.
  Throughput: Enterprise data warehousing application processing large amounts of data.
  Latency: Near-real-time applications requiring instant responses to user requests, like online gaming.

Performance factors

VM size
  IOPS: Use a VM size that offers IOPS greater than your application requirement. See VM sizes and their IOPS
  limits here.
  Throughput: Use a VM size with a Throughput limit greater than your application requirement. See VM sizes
  and their Throughput limits here.
  Latency: Use a VM size that offers scale limits greater than your application requirement. See VM sizes and
  their limits here.

Disk size
  IOPS: Use a disk size that offers IOPS greater than your application requirement. See disk sizes and their
  IOPS limits here.
  Throughput: Use a disk size with a Throughput limit greater than your application requirement. See disk sizes
  and their Throughput limits here.
  Latency: Use a disk size that offers scale limits greater than your application requirement. See disk sizes and
  their limits here.

VM and disk scale limits
  IOPS: The IOPS limit of the VM size chosen should be greater than the total IOPS driven by the premium
  storage disks attached to it.
  Throughput: The Throughput limit of the VM size chosen should be greater than the total Throughput driven
  by the premium storage disks attached to it.
  Latency: The scale limits of the VM size chosen must be greater than the total scale limits of the attached
  premium storage disks.

Stripe size
  IOPS: Smaller stripe size for the random small IO pattern seen in OLTP applications. For example, use a
  64-KB stripe size for a SQL Server OLTP application.
  Throughput: Larger stripe size for the sequential large IO pattern seen in data warehouse applications. For
  example, use a 256-KB stripe size for a SQL Server data warehouse application.

Queue depth
  IOPS: A larger queue depth yields higher IOPS.
  Throughput: A larger queue depth yields higher Throughput.
  Latency: A smaller queue depth yields lower latencies.
Nature of IO Requests
An IO request is a unit of input/output operation that your application will be performing. Identifying the nature of
IO requests, random or sequential, read or write, small or large, will help you determine the performance
requirements of your application. It is very important to understand the nature of IO requests, to make the right
decisions when designing your application infrastructure.
IO size is one of the more important factors. The IO size is the size of the input/output operation request
generated by your application. The IO size has a significant impact on performance, especially on the IOPS and
Bandwidth that the application is able to achieve. The following formula shows the relationship between IOPS, IO
size, and Bandwidth/Throughput:

IOPS × IO size = Bandwidth/Throughput
Some applications allow you to alter their IO size, while some applications do not. For example, SQL Server
determines the optimal IO size itself, and does not provide users with any knobs to change it. On the other hand,
Oracle provides a parameter called DB_BLOCK_SIZE using which you can configure the I/O request size of the
database.
If you are using an application, which does not allow you to change the IO size, use the guidelines in this article to
optimize the performance KPI that is most relevant to your application. For example,
An OLTP application generates millions of small and random IO requests. To handle these type of IO requests,
you must design your application infrastructure to get higher IOPS.
A data warehousing application generates large and sequential IO requests. To handle these type of IO
requests, you must design your application infrastructure to get higher Bandwidth or Throughput.
If you are using an application, which allows you to change the IO size, use this rule of thumb for the IO size in
addition to other performance guidelines,
Smaller IO size to get higher IOPS. For example, 8 KB for an OLTP application.
Larger IO size to get higher Bandwidth/Throughput. For example, 1024 KB for a data warehouse application.
Here is an example of how you can calculate the IOPS and Throughput/Bandwidth for your application. Consider an application using a P30 disk. The maximum IOPS and Throughput/Bandwidth a P30 disk can achieve are 5,000 IOPS and 200 MB per second, respectively. Now, if your application requires the maximum IOPS from the P30 disk and you use a smaller IO size like 8 KB, the resulting Bandwidth you will be able to get is 40 MB per second. However, if your application requires the maximum Throughput/Bandwidth from the P30 disk and you use a larger IO size like 1024 KB, the resulting IOPS will be less: 200 IOPS. Therefore, tune the IO size such that it meets both your application's IOPS and Throughput/Bandwidth requirements. The table below summarizes different IO sizes and their corresponding IOPS and Throughput for a P30 disk.
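| IO SIZE | IOPS | THROUGHPUT/BANDWIDTH |
| --- | --- | --- |
| 8 KB | 5,000 | 40 MB per second |
| 1,024 KB | 200 | 200 MB per second |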
To get IOPS and Bandwidth higher than the maximum value of a single premium storage disk, use multiple premium disks striped together. For example, stripe two P30 disks to get a combined 10,000 IOPS or a combined Throughput of 400 MB per second. As explained in the next section, you must use a VM size that supports the combined disk IOPS and Throughput.
NOTE
As you increase either IOPS or Throughput, the other also increases; make sure you do not hit the Throughput or IOPS limits of the disk or VM when increasing either one.
To see the effects of IO size on application performance, you can run benchmarking tools on your VM and disks. Create multiple test runs and use a different IO size for each run to see the impact. Refer to the Benchmarking section at the end of this article for more details.
[Table: Premium Storage supported VM sizes, listing CPU cores, memory, supported disk sizes, maximum data disks, cache size, IOPS, bandwidth, and cache IO limits for each VM size.]
To view a complete list of all available Azure VM sizes, refer to Windows VM sizes or Linux VM sizes. Choose a VM size that can meet and scale to your desired application performance requirements. In addition, take into account the following important considerations when choosing VM sizes.
Scale Limits
The maximum IOPS limits per VM and per disk are different and independent of each other. Make sure that the
application is driving IOPS within the limits of the VM as well as the premium disks attached to it. Otherwise,
application performance will experience throttling.
As an example, suppose an application requirement is a maximum of 4,000 IOPS. To achieve this, you provision a
P30 disk on a DS1 VM. The P30 disk can deliver up to 5,000 IOPS. However, the DS1 VM is limited to 3,200 IOPS.
Consequently, the application performance will be constrained by the VM limit at 3,200 IOPS and there will be
degraded performance. To prevent this situation, choose a VM and disk size that will both meet application
requirements.
Cost of Operation
In many cases, it is possible that your overall cost of operation using Premium Storage is lower than using
Standard Storage.
For example, consider an application requiring 16,000 IOPS. To achieve this performance, you will need a
Standard_D14 Azure IaaS VM, which can give a maximum IOPS of 16,000 using 32 standard storage 1TB disks.
Each 1TB standard storage disk can achieve a maximum of 500 IOPS. The estimated cost of this VM per month
will be $1,570. The monthly cost of 32 standard storage disks will be $1,638. The estimated total monthly cost will
be $3,208.
However, if you hosted the same application on Premium Storage, you will need a smaller VM size and fewer
premium storage disks, thus reducing the overall cost. A Standard_DS13 VM can meet the 16,000 IOPS
requirement using four P30 disks. The DS13 VM has a maximum IOPS of 25,600 and each P30 disk has a
maximum IOPS of 5,000. Overall, this configuration can achieve 5,000 x 4 = 20,000 IOPS. The estimated cost of
this VM per month will be $1,003. The monthly cost of four P30 premium storage disks will be $544.34. The
estimated total monthly cost will be $1,544.
The table below summarizes the cost breakdown of this scenario for Standard and Premium Storage.

| COST | STANDARD | PREMIUM |
| --- | --- | --- |
| Cost of VM per month | $1,570 (Standard_D14) | $1,003 (Standard_DS13) |
| Cost of Disks per month | $1,638.40 (32 x 1 TB disks) | $544.34 (4 x P30 disks) |
| Total cost per month | $3,208 | $1,544 |
Linux Distros
With Azure Premium Storage, you get the same level of performance for VMs running Windows and Linux. We support many flavors of Linux distros, and you can see the complete list here. It is important to note that different
distros are better suited for different types of workloads. You will see different levels of performance depending on
the distro your workload is running on. Test the Linux distros with your application and choose the one that works
best.
When running Linux with Premium Storage, check the latest updates about required drivers to ensure high
performance.
| PREMIUM DISK TYPE | P4 | P6 | P10 | P20 | P30 | P40 | P50 |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Throughput per disk | 25 MB per second | 50 MB per second | 100 MB per second | 150 MB per second | 200 MB per second | 250 MB per second | 250 MB per second |
The number of disks you choose depends on the disk size chosen. You could use a single P50 disk or multiple P10 disks to meet your application requirement. Take into account the considerations listed below when making the choice.
Scale Limits (IOPS and Throughput)
The IOPS and Throughput limits of each Premium disk size are different and independent from the VM scale limits. Make sure that the total IOPS and Throughput from the disks are within the scale limits of the chosen VM size.
For example, suppose an application requires a maximum of 250 MB/sec Throughput and you are using a DS4 VM with a single P30 disk. The DS4 VM can give up to 256 MB/sec Throughput. However, a single P30 disk has a Throughput limit of 200 MB/sec. Consequently, the application will be constrained at 200 MB/sec due to the disk limit. To overcome this limit, provision more than one data disk to the VM or resize your disks to P40 or P50.
NOTE
Reads served by the cache are not included in the disk IOPS and Throughput, and hence are not subject to disk limits. The cache has its own separate IOPS and Throughput limit per VM.
For example, initially your reads and writes are 60MB/sec and 40MB/sec respectively. Over time, the cache warms up and
serves more and more of the reads from the cache. Then, you can get higher write Throughput from the disk.
Number of Disks
Determine the number of disks you will need by assessing application requirements. Each VM size also has a limit
on the number of disks that you can attach to the VM. Typically, this is twice the number of cores. Ensure that the
VM size you choose can support the number of disks needed.
Remember, the Premium Storage disks have higher performance capabilities compared to Standard Storage disks.
Therefore, if you are migrating your application from Azure IaaS VM using Standard Storage to Premium Storage,
you will likely need fewer premium disks to achieve the same or higher performance for your application.
Disk Caching
High Scale VMs that leverage Azure Premium Storage have a multi-tier caching technology called BlobCache.
BlobCache uses a combination of the Virtual Machine RAM and local SSD for caching. This cache is available for
the Premium Storage persistent disks and the VM local disks. By default, this cache setting is set to Read/Write for
OS disks and ReadOnly for data disks hosted on Premium Storage. With disk caching enabled on the Premium
Storage disks, the high scale VMs can achieve extremely high levels of performance that exceed the underlying
disk performance.
WARNING
Changing the cache setting of an Azure disk detaches and re-attaches the target disk. If it is the operating system disk, the
VM is restarted. Stop all applications/services that might be affected by this disruption before changing the disk cache
setting.
To learn more about how BlobCache works, refer to the Inside Azure Premium Storage blog post.
It is important to enable cache on the right set of disks. Whether you should enable disk caching on a premium disk depends on the workload pattern that the disk will be handling. The table below shows the default cache settings for OS and data disks.

| DISK TYPE | DEFAULT CACHE SETTING |
| --- | --- |
| OS disk | ReadWrite |
| Data disk | ReadOnly |
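As a rough sketch of how a cache setting can be changed with the Azure CLI, the resource group, VM name, and data disk index below are placeholder values, and the generic --set syntax is an assumption about the CLI version in use:

# Set host caching to ReadOnly on the first data disk of myVM
# (remember the WARNING above: the disk is detached and re-attached).
az vm update --resource-group myResourceGroup --name myVM \
  --set storageProfile.dataDisks[0].caching=ReadOnly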
Following are the recommended disk cache settings for data disks,
ReadOnly
By configuring ReadOnly caching on Premium Storage data disks, you can achieve low Read latency and very high Read IOPS and Throughput for your application. This is due to two reasons:
1. Reads performed from the cache, which is on the VM memory and local SSD, are much faster than reads from the data disk, which is on Azure blob storage.
2. Premium Storage does not count the Reads served from the cache towards the disk IOPS and Throughput. Therefore, your application is able to achieve higher total IOPS and Throughput.
ReadWrite
By default, OS disks have ReadWrite caching enabled. We have recently added support for ReadWrite caching on data disks as well. If you are using ReadWrite caching, you must have a proper way to write the data from the cache to persistent disks. For example, SQL Server handles writing cached data to the persistent storage disks on its own. Using the ReadWrite cache with an application that does not handle persisting the required data can lead to data loss if the VM crashes.
As an example, you can apply these guidelines to SQL Server running on Premium Storage by doing the
following,
1. Configure "ReadOnly" cache on premium storage disks hosting data files.
a. The fast reads from cache lower the SQL Server query time since data pages are retrieved much faster from
the cache compared to directly from the data disks.
b. Serving reads from the cache means there is additional Throughput available from premium data disks. SQL Server can use this additional Throughput towards retrieving more data pages and other operations like backup/restore, batch loads, and index rebuilds.
2. Configure "None" cache on premium storage disks hosting the log files.
a. Log files have primarily write-heavy operations. Therefore, they do not benefit from the ReadOnly cache.
Disk Striping
When a high scale VM is attached with several premium storage persistent disks, the disks can be striped together to aggregate their IOPS, bandwidth, and storage capacity.
On Windows, you can use Storage Spaces to stripe disks together. You must configure one column for each disk in
a pool. Otherwise, the overall performance of striped volume can be lower than expected, due to uneven
distribution of traffic across the disks.
Important: Using the Server Manager UI, you can set the total number of columns up to 8 for a striped volume. When attaching more than 8 disks, use PowerShell to create the volume. Using PowerShell, you can set the number of columns equal to the number of disks. For example, if there are 16 disks in a single stripe set, specify 16 columns in the NumberOfColumns parameter of the New-VirtualDisk PowerShell cmdlet.
On Linux, use the MDADM utility to stripe disks together. For detailed steps on striping disks on Linux refer to
Configure Software RAID on Linux.
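As an illustrative sketch (the device names, chunk size, and mount point below are placeholders; verify your data disk names with lsblk before running):

# Stripe four data disks into a single RAID 0 array with a 256 KiB chunk,
# which corresponds to the stripe size discussed in the next section.
sudo mdadm --create /dev/md0 --level=0 --raid-devices=4 --chunk=256 \
  /dev/sdc /dev/sdd /dev/sde /dev/sdf
# Create a file system on the array and mount it.
sudo mkfs.ext4 /dev/md0
sudo mkdir -p /mnt/striped
sudo mount /dev/md0 /mnt/striped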
Stripe Size
An important configuration in disk striping is the stripe size. The stripe size or block size is the smallest chunk of
data that application can address on a striped volume. The stripe size you configure depends on the type of
application and its request pattern. If you choose the wrong stripe size, it could lead to IO misalignment, which
leads to degraded performance of your application.
For example, if an IO request generated by your application is bigger than the disk stripe size, the storage system writes it across stripe unit boundaries on more than one disk. When it is time to access that data, it will have to seek across more than one stripe unit to complete the request. The cumulative effect of such behavior can lead to substantial performance degradation. On the other hand, if the IO request size is smaller than the stripe size, and if it is random in nature, the IO requests may add up on the same disk, causing a bottleneck and ultimately degrading the IO performance.
Depending on the type of workload your application is running, choose an appropriate stripe size: a smaller stripe size for random small IO requests, and a larger stripe size for large sequential IO requests. Find out the stripe size recommendations for the application you will be running on Premium Storage. For SQL Server, configure a stripe size of 64KB for OLTP workloads and 256KB for data warehousing workloads. See Performance best practices for SQL Server on Azure VMs to learn more.
NOTE
You can stripe together a maximum of 32 premium storage disks on a DS series VM and 64 premium storage disks on a GS
series VM.
Multi-threading
Azure designed the Premium Storage platform to be massively parallel. Therefore, a multi-threaded application achieves much higher performance than a single-threaded application. A multi-threaded application splits up its tasks across multiple threads and increases the efficiency of its execution by utilizing the VM and disk resources to the maximum.
For example, if your application is running on a single core VM using two threads, the CPU can switch between the
two threads to achieve efficiency. While one thread is waiting on a disk IO to complete, the CPU can switch to the
other thread. In this way, two threads can accomplish more than a single thread would. If the VM has more than
one core, it further decreases running time since each core can execute tasks in parallel.
You may not be able to change the way an off-the-shelf application implements single-threading or multi-threading. For example, SQL Server is capable of using multiple CPUs and cores, but it decides under what conditions it will leverage one or more threads to process a query. It can run queries and build indexes using multi-threading. For a query that involves joining large tables and sorting data before returning to the user, SQL Server will likely use multiple threads. However, a user cannot control whether SQL Server executes a query using a single thread or multiple threads.
There are configuration settings that you can alter to influence this multi-threading or parallel processing of an application. For example, in the case of SQL Server, it is the maximum degree of parallelism (MAXDOP) configuration. MAXDOP allows you to configure the maximum number of processors SQL Server can use for parallel processing. You can configure MAXDOP for individual queries or index operations. This is beneficial when you want to balance the resources of your system for a performance-critical application.
For example, say your application using SQL Server is executing a large query and an index operation at the same
time. Let us assume that you wanted the index operation to be more performant compared to the large query. In
such a case, you can set MAXDOP value of the index operation to be higher than the MAXDOP value for the
query. This way, SQL Server has more processors that it can leverage for the index operation compared to the number of processors it can dedicate to the large query. Remember, you do not control the number of threads SQL Server will use for each operation; you can control the maximum number of processors being dedicated for multi-threading.
Learn more about Degrees of Parallelism in SQL Server. Find out such settings that influence multi-threading in
your application and their configurations to optimize performance.
Queue Depth
The Queue Depth or Queue Length or Queue Size is the number of pending IO requests in the system. The value
of Queue Depth determines how many IO operations your application can line up, which the storage disks will be
processing. It affects all three of the application performance indicators discussed in this article, namely IOPS, Throughput, and Latency.
Queue Depth and multi-threading are closely related. The Queue Depth value indicates how much multi-threading
can be achieved by the application. If the Queue Depth is large, the application can execute more operations concurrently; in other words, more multi-threading. If the Queue Depth is small, even if the application is multi-threaded, it will not have enough requests lined up for concurrent execution.
Typically, off-the-shelf applications do not allow you to change the queue depth, because an incorrect value can do more harm than good. Such applications set the queue depth to the right value to get optimal performance.
However, it is important to understand this concept so that you can troubleshoot performance issues with your
application. You can also observe the effects of queue depth by running benchmarking tools on your system.
Some applications provide settings to influence the Queue Depth. For example, the MAXDOP (maximum degree
of parallelism) setting in SQL Server explained in previous section. MAXDOP is a way to influence Queue Depth
and multi-threading, although it does not directly change the Queue Depth value of SQL Server.
High Queue Depth
A high queue depth lines up more operations on the disk. The disk knows the next request in its queue ahead of
time. Consequently, the disk can schedule operations ahead of time and process them in an optimal sequence.
Since the application is sending more requests to the disk, the disk can process more parallel IOs. Ultimately, the
application will be able to achieve higher IOPS. Since application is processing more requests, the total
Throughput of the application also increases.
Typically, an application can achieve maximum Throughput with 8-16+ outstanding IOs per attached disk. If the Queue Depth is one, the application is not pushing enough IOs to the system, and it will complete fewer operations in a given period; in other words, lower Throughput.
For example, in SQL Server, setting the MAXDOP value for a query to "4" informs SQL Server that it can use up to four cores to execute the query. SQL Server determines the best queue depth value and the number of cores for the query execution.
Optimal Queue Depth
A very high queue depth value also has its drawbacks. If the queue depth value is too high, the application will try to drive very high IOPS. Unless the application has persistent disks with sufficient provisioned IOPS, this can negatively affect application latencies. The following formula shows the relationship between IOPS, Latency, and Queue Depth:

Queue Depth = IOPS x Latency (in seconds)

You should not configure Queue Depth to an arbitrarily high value, but to an optimal value that can deliver enough IOPS for the application without affecting latencies. For example, if the application latency needs to be 1 millisecond, the Queue Depth required to achieve 5,000 IOPS is QD = 5000 x 0.001 = 5.
Queue Depth for Striped Volume
For a striped volume, maintain a queue depth high enough that every disk individually reaches its peak queue depth. For example, consider an application that pushes a queue depth of 2 while there are 4 disks in the stripe. The two IO requests will go to two disks, and the remaining two disks will be idle. Therefore, configure the queue depth such that all the disks can be busy. The formula below shows how to determine the queue depth of striped volumes:

Queue Depth of the striped volume = per-disk Queue Depth x number of disks in the stripe
Throttling
Azure Premium Storage provisions a specified number of IOPS and Throughput depending on the VM sizes and disk sizes you choose. Anytime your application tries to drive IOPS or Throughput above these limits, Premium Storage throttles it. This manifests as degraded performance in your application, which can mean higher latency, lower Throughput, or lower IOPS. If Premium Storage did not throttle, your application could completely fail by exceeding what its resources are capable of achieving. So, to avoid performance issues due to throttling, always provision sufficient resources for your application. Take into consideration what we discussed in the VM sizes and Disk sizes sections above. Benchmarking is the best way to figure out what resources you will need to host your application.
Benchmarking
Benchmarking is the process of simulating different workloads on your application and measuring the application
performance for each workload. Using the steps described in an earlier section, you have gathered the application
performance requirements. By running benchmarking tools on the VMs hosting the application, you can determine
the performance levels that your application can achieve with Premium Storage. In this section, we provide examples of benchmarking a Standard DS14 VM provisioned with Azure Premium Storage disks.
We used the common benchmarking tools Iometer and FIO, for Windows and Linux respectively. These tools spawn multiple threads simulating a production-like workload, and measure the system performance.
tools you can also configure parameters like block size and queue depth, which you normally cannot change for an
application. This gives you more flexibility to drive the maximum performance on a high scale VM provisioned
with premium disks for different types of application workloads. To learn more about each benchmarking tool visit
Iometer and FIO.
To follow the examples below, create a Standard DS14 VM and attach 11 Premium Storage disks to the VM. Of the
11 disks, configure 10 disks with host caching as "None" and stripe them into a volume called NoCacheWrites.
Configure host caching as "ReadOnly" on the remaining disk and create a volume called CacheReads with this
disk. Using this setup, you will be able to see the maximum Read and Write performance from a Standard DS14
VM. For detailed steps about creating a DS14 VM with premium disks, go to Create and use a Premium Storage
account for a virtual machine data disk.
Warming up the Cache
The disk with ReadOnly host caching will be able to give higher IOPS than the disk limit. To get this maximum read performance from the host cache, you must first warm up the cache of this disk. This ensures that the Read IOs that the benchmarking tool drives on the CacheReads volume actually hit the cache, and not the disk directly. The cache hits result in additional IOPS from the single cache-enabled disk.
Important:
You must warm up the cache before every benchmarking run, every time the VM is rebooted.
Iometer
Download the Iometer tool on the VM.
Test file
Iometer uses a test file that is stored on the volume on which you will run the benchmarking test. It drives Reads
and Writes on this test file to measure the disk IOPS and Throughput. Iometer creates this test file if you have not
provided one. Create a 200GB test file called iobw.tst on the CacheReads and NoCacheWrites volumes.
Access Specifications
The specifications (request IO size, % read/write, % random/sequential) are configured using the "Access Specifications" tab in Iometer. Create an access specification for each of the scenarios described below, and "Save" each with an appropriate name like RandomWrites_8K or RandomReads_8K. Select the corresponding specification when running the test scenario.
An example of the access specification for the maximum Write IOPS scenario is shown below.

| ACCESS SPECIFICATION NAME | REQUEST SIZE | RANDOM % | READ % |
| --- | --- | --- | --- |
| RandomWrites_8K | 8K | 100 | 0 |
2. Run the Iometer test for initializing the cache disk with the following parameters. Use three worker threads for the target volume and a queue depth of 128. Set the "Run time" duration of the test to 2 hours on the "Test Setup" tab.
3. Run the Iometer test for warming up the cache disk with the following parameters. Use three worker threads for the target volume and a queue depth of 128. Set the "Run time" duration of the test to 2 hours on the "Test Setup" tab.
After the cache disk is warmed up, proceed with the test scenarios listed below. To run the Iometer test, use at least three worker threads for each target volume. For each worker thread, select the target volume, set the queue depth, and select one of the saved test specifications, as shown in the table below, to run the corresponding test scenario. The table also shows the expected results for IOPS and Throughput when running these tests. For all scenarios, a small IO size of 8KB and a high queue depth of 128 are used.
[Table: test scenarios with their target volume, access specification, queue depth, and expected IOPS and Throughput; for example, the NoCacheWrites volume paired with the RandomReads_8K and RandomReads_64K specifications.]
Below are screenshots of the Iometer test results for combined IOPS and Throughput scenarios.
Combined Reads and Writes Maximum IOPS
Combined Reads and Writes Maximum Throughput
FIO
FIO is a popular tool to benchmark storage on the Linux VMs. It has the flexibility to select different IO sizes,
sequential or random reads and writes. It spawns worker threads or processes to perform the specified I/O
operations. You can specify the type of I/O operations each worker thread must perform using job files. We created
one job file per scenario illustrated in the examples below. You can change the specifications in these job files to
benchmark different workloads running on Premium Storage. In the examples, we are using a Standard DS14 VM running Ubuntu. Use the same setup described at the beginning of the Benchmarking section and warm up the cache before running the benchmarking tests.
Before you begin, download FIO and install it on your virtual machine.
Run the following command for Ubuntu:
sudo apt-get install fio
We will use four worker threads for driving Write operations and four worker threads for driving Read operations
on the disks. The Write workers will be driving traffic on the "nocache" volume, which has 10 disks with cache set
to "None". The Read workers will be driving traffic on the "readcache" volume, which has 1 disk with cache set to
"ReadOnly".
Maximum Write IOPS
Create the job file with the following specifications to get maximum Write IOPS. Name it "fiowrite.ini".
[global]
# 30 GB test file per worker
size=30g
# bypass the OS page cache so IOs hit the disks
direct=1
# high queue depth of 256 to drive maximum IOPS
iodepth=256
# Linux native asynchronous IO engine
ioengine=libaio
# small 8 KB block size for an IOPS-oriented test
bs=8k
[writer1]
rw=randwrite
directory=/mnt/nocache
[writer2]
rw=randwrite
directory=/mnt/nocache
[writer3]
rw=randwrite
directory=/mnt/nocache
[writer4]
rw=randwrite
directory=/mnt/nocache
Note the following key points, which are in line with the design guidelines discussed in the previous sections. These specifications are essential to drive maximum IOPS:
A high queue depth of 256.
A small block size of 8KB.
Multiple threads performing random writes.
Run the following command to kick off the FIO test for 30 seconds,
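sudo fio --runtime 30 fiowrite.ini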
While the test runs, you will be able to see the number of write IOPS the VM and Premium disks are delivering. As
shown in the sample below, the DS14 VM is delivering its maximum write IOPS limit of 50,000 IOPS.
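Maximum Read IOPS
Create a job file with the following specifications to get maximum Read IOPS; name it, for example, "fioread.ini". The [global] section below is a sketch that mirrors the write job above, using the same queue depth of 256 and 8KB block size called out in the key points that follow.
[global]
size=30g
direct=1
iodepth=256
ioengine=libaio
bs=8k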
[reader1]
rw=randread
directory=/mnt/readcache
[reader2]
rw=randread
directory=/mnt/readcache
[reader3]
rw=randread
directory=/mnt/readcache
[reader4]
rw=randread
directory=/mnt/readcache
Note the following key points, which are in line with the design guidelines discussed in the previous sections. These specifications are essential to drive maximum IOPS:
A high queue depth of 256.
A small block size of 8KB.
Multiple threads performing random reads.
Run the following command to kick off the FIO test for 30 seconds,
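sudo fio --runtime 30 fioread.ini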
While the test runs, you will be able to see the number of read IOPS the VM and Premium disks are delivering. As
shown in the sample below, the DS14 VM is delivering more than 64,000 Read IOPS. This is a combination of the
disk and the cache performance.
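Maximum Combined Read and Write IOPS
Create a job file with the following specifications to get maximum combined Read and Write IOPS; name it, for example, "fioreadwrite.ini". The [global] section below is a sketch using the queue depth of 128 and 4KB block size called out in the key points that follow.
[global]
size=30g
direct=1
iodepth=128
ioengine=libaio
bs=4k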
[reader1]
rw=randread
directory=/mnt/readcache
[reader2]
rw=randread
directory=/mnt/readcache
[reader3]
rw=randread
directory=/mnt/readcache
[reader4]
rw=randread
directory=/mnt/readcache
[writer1]
rw=randwrite
directory=/mnt/nocache
rate_iops=12500
[writer2]
rw=randwrite
directory=/mnt/nocache
rate_iops=12500
[writer3]
rw=randwrite
directory=/mnt/nocache
rate_iops=12500
[writer4]
rw=randwrite
directory=/mnt/nocache
rate_iops=12500
Note the following key points, which are in line with the design guidelines discussed in the previous sections. These specifications are essential to drive maximum combined IOPS:
A high queue depth of 128.
A small block size of 4KB.
Multiple threads performing random reads and writes.
Run the following command to kick off the FIO test for 30 seconds,
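sudo fio --runtime 30 fioreadwrite.ini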
While the test runs, you will be able to see the number of combined read and write IOPS the VM and Premium
disks are delivering. As shown in the sample below, the DS14 VM is delivering more than 100,000 combined Read
and Write IOPS. This is a combination of the disk and the cache performance.
Next Steps
Learn more about Azure Premium Storage:
Premium Storage: High-Performance Storage for Azure Virtual Machine Workloads
For SQL Server users, read articles on Performance Best Practices for SQL Server:
Performance Best Practices for SQL Server in Azure Virtual Machines
Azure Premium Storage provides highest performance for SQL Server in Azure VM
Cost-effective Standard Storage and unmanaged and
managed Azure VM disks
11/1/2017 • 7 min to read
Azure Standard Storage delivers reliable, low-cost disk support for VMs running latency-insensitive workloads. It also supports blobs, tables, queues, and files. With Standard Storage, the data is stored on hard disk drives (HDDs).
When working with VMs, you can use standard storage disks for Dev/Test scenarios and less critical workloads,
and premium storage disks for mission-critical production applications. Standard Storage is available in all Azure
regions.
This article focuses on the use of standard storage for VM Disks. For more information about the use of storage
with blobs, tables, queues, and files, please refer to the Introduction to Storage.
Disk types
There are two ways to create standard disks for Azure VMs:
Unmanaged disks: This is the original method where you manage the storage accounts used to store the VHD
files that correspond to the VM disks. VHD files are stored as page blobs in storage accounts. Unmanaged disks
can be attached to any Azure VM size, including the VMs that primarily use Premium Storage, such as the DSv2
and GS series. Azure VMs support attaching several standard disks, allowing up to 256 TB of storage per VM.
Azure Managed Disks: This feature manages the storage accounts used for the VM disks for you. You specify the
type (Premium or Standard) and size of disk you need, and Azure creates and manages the disk for you. You don't
have to worry about placing the disks across multiple storage accounts in order to ensure you stay within the
scalability limits for the storage accounts -- Azure handles that for you.
Even though both types of disks are available, we recommend using Managed Disks to take advantage of their
many features.
To get started with Azure Standard Storage, visit Get started for free.
For information on how to create a VM with Managed Disks, please see one of the following articles.
Create a VM using Resource Manager and PowerShell
Create a Linux VM using the Azure CLI 2.0
| RESOURCE | DEFAULT LIMIT |
| --- | --- |
| Max ingress1 per storage account (US Regions) | 10 Gbps if GRS/ZRS enabled, 20 Gbps for LRS |
| Max egress1 per storage account (US Regions) | 20 Gbps if RA-GRS/GRS/ZRS enabled, 30 Gbps for LRS |
| Max ingress1 per storage account (European and Asian Regions) | 5 Gbps if GRS/ZRS enabled, 10 Gbps for LRS |
| Max egress1 per storage account (European and Asian Regions) | 10 Gbps if RA-GRS/GRS/ZRS enabled, 15 Gbps for LRS |
| Total Request Rate (assuming 1 KB object size) per storage account | Up to 20,000 IOPS, entities per second, or messages per second |
1 Ingress refers to all data (requests) being sent to a storage account. Egress refers to all data (responses) being received from a storage account.
Next steps
Introduction to Azure Storage
Create a storage account
Managed Disks Overview
Create a VM using Resource Manager and PowerShell
Create a Linux VM using the Azure CLI 2.0
Scalability and performance targets for VM disks on
Linux
11/16/2017 • 3 min to read
An Azure virtual machine supports attaching a number of data disks. This article describes scalability and
performance targets for a VM's data disks. Use these targets to help decide the number and type of disk that you
need to meet your performance and capacity requirements.
IMPORTANT
For optimal performance, limit the number of highly utilized disks attached to the virtual machine to avoid possible throttling.
If all attached disks are not highly utilized at the same time, then the virtual machine can support a larger number of disks.
For Azure Managed Disks: The disk limit for managed disks is per region and per disk type. The maximum
limit, and also the default limit, is 10,000 managed disks per region and per disk type for a subscription. For
example, you can create up to 10,000 standard managed disks and also 10,000 premium managed disks in a
region, per subscription.
Managed snapshots and images count against the managed disks limit.
For standard storage accounts: A standard storage account has a maximum total request rate of 20,000
IOPS. The total IOPS across all of your virtual machine disks in a standard storage account should not
exceed this limit.
You can roughly calculate the number of highly utilized disks supported by a single standard storage
account based on the request rate limit. For example, for a Basic Tier VM, the maximum number of highly
utilized disks is about 66 (20,000/300 IOPS per disk), and for a Standard Tier VM, it is about 40 (20,000/500
IOPS per disk).
For premium storage accounts: A premium storage account has a maximum total throughput rate of 50
Gbps. The total throughput across all of your VM disks should not exceed this limit.
See Linux VM sizes for additional details.
| STANDARD DISK TYPE | S4 | S6 | S10 | S20 | S30 | S40 | S50 |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Throughput per disk | 25 MB/sec | 50 MB/sec | 100 MB/sec | 150 MB/sec | 200 MB/sec | 250 MB/sec | 250 MB/sec |
1 Ingress refers to all data (requests) being sent to a storage account. Egress refers to all data (responses) being received from a storage account.
| PREMIUM STORAGE DISK TYPE | P10 | P20 | P30 | P40 | P50 |
| --- | --- | --- | --- | --- | --- |
| Disk size | 128 GiB | 512 GiB | 1024 GiB (1 TB) | 2048 GiB (2 TB) | 4095 GiB (4 TB) |
| Max throughput per disk | 100 MB/s | 150 MB/s | 200 MB/s | 250 MB/s | 250 MB/s |
See also
Azure subscription and service limits, quotas, and constraints
Backup and disaster recovery for Azure IaaS disks
11/1/2017 • 21 min to read
This article explains how to plan for backup and disaster recovery (DR) of IaaS virtual machines (VMs) and disks in Azure. This document covers both managed and unmanaged disks.
First, we cover the built-in fault tolerance capabilities in the Azure platform that help guard against local failures. We then discuss the disaster scenarios that are not fully covered by the built-in capabilities, which is the main topic addressed by this document. We also show several examples of workload scenarios where different backup and DR considerations can apply. We then review possible solutions for the DR of IaaS disks.
Introduction
The Azure platform uses various methods for redundancy and fault tolerance to help protect customers from
localized hardware failures. Local failures can include problems with an Azure Storage server machine that stores
part of the data for a virtual disk or failures of an SSD or HDD on that server. Such isolated hardware component
failures can happen during normal operations.
The Azure platform is designed to be resilient to these failures. However, major disasters can result in failures or the inaccessibility of many storage servers or even a whole datacenter. Although your VMs and disks are normally protected from localized failures, additional steps are necessary to protect your workload from region-wide catastrophic failures, such as a major disaster, that can affect your VM and disks.
In addition to the possibility of platform failures, problems with a customer application or data can occur. For
example, a new version of your application might inadvertently make a change to the data that causes it to break.
In that case, you might want to revert the application and the data to a prior version that contains the last known
good state. This requires maintaining regular backups.
For regional disaster recovery, you must back up your IaaS VM disks to a different region.
Before we look at backup and DR options, let’s recap a few methods available for handling localized failures.
Azure IaaS resiliency
Resiliency refers to the tolerance for normal failures that occur in hardware components. Resiliency is the ability to
recover from failures and continue to function. It's not about avoiding failures, but responding to failures in a way
that avoids downtime or data loss. The goal of resiliency is to return the application to a fully functioning state
following a failure. Azure virtual machines and disks are designed to be resilient to common hardware faults. Let's
look at how the Azure IaaS platform provides this resiliency.
A virtual machine consists mainly of two parts: a compute server and the persistent disks. Both affect the fault
tolerance of a virtual machine.
If the Azure compute host server that houses your VM experiences a hardware failure, which is rare, Azure is designed to automatically restore the VM on another server. If this happens, your VM reboots and comes back up after some time. Azure automatically detects such hardware failures and executes recoveries to help ensure the customer VM is available as soon as possible.
Regarding IaaS disks, the durability of data is critical for a persistent storage platform. Azure customers have
important business applications running on IaaS, and they depend on the persistence of the data. Azure designs
protection for these IaaS disks, with three redundant copies of the data that is stored locally. These copies provide
for high durability against local failures. If one of the hardware components that holds your disk fails, your VM is
not affected, because there are two additional copies to support disk requests. It works fine, even if two different
hardware components that support a disk fail at the same time (which is very rare).
To ensure that you always maintain three replicas, Azure Storage automatically spawns a new copy of the data in
the background if one of the three copies becomes unavailable. Therefore, it should not be necessary to use RAID
with Azure disks for fault tolerance. A simple RAID 0 configuration should be sufficient for striping the disks, if
necessary, to create larger volumes.
Because of this architecture, Azure has consistently delivered enterprise-grade durability for IaaS disks, with an
industry-leading zero percent annualized failure rate.
Localized hardware faults on the compute host or in the Storage platform can sometimes result in the temporary unavailability of the VM, which is covered by the Azure SLA for VM availability. Azure also provides an industry-leading SLA for single VM instances that use Azure Premium Storage disks.
To safeguard application workloads from downtime due to the temporary unavailability of a disk or VM, customers
can use availability sets. Two or more virtual machines in an availability set provide redundancy for the application.
Azure then creates these VMs and disks in separate fault domains with different power, network, and server
components.
Because of these separate fault domains, localized hardware failures typically do not affect multiple VMs in the set
at the same time. Having separate fault domains provides high availability for your application. It's considered a
good practice to use availability sets when high availability is required. The next section covers the disaster
recovery aspect.
Backup and disaster recovery
Disaster recovery is the ability to recover from rare, but major, incidents. This includes non-transient, wide-scale
failures, such as service disruption that affects an entire region. Disaster recovery includes data backup and
archiving, and might include manual intervention, such as restoring a database from a backup.
The Azure platform's built-in protection against localized failures might not fully protect the VMs/disks if a major disaster causes large-scale outages. This includes catastrophic events, such as a datacenter being hit by a hurricane, earthquake, or fire, or a large-scale hardware unit failure. In addition, you might encounter failures due to application or data issues.
To help protect your IaaS workloads from outages, you should plan for redundancy and have backups to enable
recovery. For disaster recovery, you should back up in a different geographic location away from the primary site.
This helps ensure your backup is not affected by the same event that originally affected the VM or disks. For more
information, see Disaster recovery for Azure applications.
Your DR considerations might include the following aspects:
High availability: The ability of the application to continue running in a healthy state, without significant
downtime. By healthy state, we mean the application is responsive, and users can connect to the application
and interact with it. Certain mission-critical applications and databases might be required to always be
available, even when there are failures in the platform. For these workloads, you might need to plan
redundancy for the application, as well as the data.
Data durability: In some cases, the main consideration is ensuring that the data is preserved if a disaster
happens. Therefore, you might need a backup of your data in a different site. For such workloads, you might
not need full redundancy for the application, but only a regular backup of the disks.
| VM DISKS | REPLICATION | DR SOLUTION |
| --- | --- | --- |
| Unmanaged locally redundant storage disks | Local (locally redundant storage) | Azure Backup |
High availability is best met by using managed disks in an availability set along with Azure Backup. If you use
unmanaged disks, you can still use Azure Backup for DR. If you are unable to use Azure Backup, then taking
consistent snapshots, as described in a later section, is an alternative solution for backup and DR.
Your choices for high availability, backup, and DR at application or infrastructure levels can be represented as
follows:
NOTE
Only having the disks in a geo-redundant storage or read-access geo-redundant storage account does not protect the VM
from disasters. You must also create coordinated snapshots or use Azure Backup. This is required to recover a VM to a
consistent state.
If you use locally redundant storage, you must copy the snapshots to a different storage account immediately after
creating the snapshot. The copy target might be a locally redundant storage account in a different region, resulting
in the copy being in a remote region. You can also copy the snapshot to a read-access geo-redundant storage
account in the same region. In this case, the snapshot is lazily replicated to the remote secondary region. Your
backup is protected from disasters at the primary site after the copying and replication is complete.
To copy your incremental snapshots for DR efficiently, review the instructions in Back up Azure unmanaged VM
disks with incremental snapshots.
Other options
SQL Server
SQL Server running in a VM has its own built-in capabilities to back up your SQL Server database to Azure Blob
storage or a file share. If the storage account is geo-redundant storage or read-access geo-redundant storage, you
can access those backups in the storage account’s secondary datacenter in the event of a disaster, with the same
restrictions as previously discussed. For more information, see Back up and restore for SQL Server in Azure virtual
machines. In addition to back up and restore, SQL Server AlwaysOn availability groups can maintain secondary
replicas of databases. This greatly reduces the disaster recovery time.
Other considerations
This article has discussed how to back up or take snapshots of your VMs and their disks to support disaster
recovery and how to use those to recover your data. With the Azure Resource Manager model, many people use
templates to create their VMs and other infrastructures in Azure. You can use a template to create a VM that has
the same configuration every time. If you use custom images for creating your VMs, you must also make sure that
your images are protected by using a read-access geo-redundant storage account to store them.
Consequently, your backup process can be a combination of two things:
Back up the data (disks).
Back up the configuration (templates and custom images).
Depending on the backup option you choose, you might have to handle the backup of both the data and the
configuration, or the backup service might handle all of that for you.
NOTE
Microsoft controls whether a failover occurs. Failover is not controlled per storage account, so it's not decided by individual
customers. To implement disaster recovery for specific storage accounts or virtual machine disks, you must use the
techniques described previously in this article.
Write Accelerator
5/10/2018 • 8 min to read
Write Accelerator is a disk capability exclusively for M-series Virtual Machines (VMs) on Premium Storage with Azure Managed Disks. As the name states, the purpose of the functionality is to improve the I/O latency of writes against Azure Premium Storage. Write Accelerator is ideally suited for workloads, such as modern databases, where log file updates must persist to disk with very low latency.
Write Accelerator is generally available for M-series VMs in the public cloud.
IMPORTANT
If you want to enable or disable Write Accelerator for an existing volume that is built out of multiple Azure Premium Storage
disks and striped using Windows disk or volume managers, Windows Storage Spaces, Windows Scale-out file server (SOFS),
Linux LVM or MDADM, all disks building the volume must be enabled or disabled for Write Accelerator in separate steps.
Before enabling or disabling Write Accelerator in such a configuration, shut down the Azure VM.
IMPORTANT
To enable Write Accelerator on an existing Azure disk that is NOT part of a volume built out of multiple disks with Windows disk or volume managers, Windows Storage Spaces, Windows Scale-out File Server (SOFS), Linux LVM, or MDADM, the workload accessing the Azure disk needs to be shut down. Database applications using the Azure disk MUST be shut down.
IMPORTANT
Enabling Write Accelerator for the operating system disk of the VM will reboot the VM.
Enabling Write Accelerator for OS disks should not be necessary for SAP-related VM configurations.
Restrictions when using Write Accelerator
When using Write Accelerator for an Azure disk/VHD, these restrictions apply:
The Premium disk caching needs to be set to 'None' or 'Read Only'. All other caching modes are not supported.
Snapshots on a Write Accelerator enabled disk are not supported yet. This restriction blocks the Azure Backup service's ability to perform an application-consistent snapshot of all disks of the virtual machine.
Only smaller I/O sizes (<= 32 KiB) take the accelerated path. In workload situations where data is getting bulk loaded, or where the transaction log buffers of the different DBMSs are filled to a larger degree before getting persisted to storage, the I/O written to disk may not take the accelerated path.
There are limits of Azure Premium Storage VHDs per VM that can be supported by Write Accelerator. The current
limits are:
| VM SKU | NUMBER OF WRITE ACCELERATOR DISKS | WRITE ACCELERATOR DISK IOPS PER VM |
| --- | --- | --- |
| M128ms | 16 | 8000 |
| M128s | 16 | 8000 |
| M64ms | 8 | 4000 |
| M64s | 8 | 4000 |
You need to adapt the names of VM, disk, resource group, size of the disk and LunID of the disk for your specific
deployment.
Enabling Azure Write Accelerator on an existing Azure disk
If you need to enable Write Accelerator on an existing disk, you can use a script to perform the task. You need to adapt the names of the VM, disk, and resource group. The script adds Write Accelerator to an existing disk when the value for $newstatus is set to '$true'. Using the value '$false' disables Write Accelerator on a given disk.
NOTE
Executing the script above will detach the disk specified, enable Write Accelerator against the disk, and then attach the disk
again
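As a rough alternative sketch, Write Accelerator can also be toggled per LUN with the Azure CLI; the resource group, VM name, and LUN below are placeholder values:

# Enable Write Accelerator on the disk at LUN 1 of myVM.
az vm update --resource-group myResourceGroup --name myVM --write-accelerator 1=true
# Disable Write Accelerator again on the same LUN.
az vm update --resource-group myResourceGroup --name myVM --write-accelerator 1=false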
To attach a disk with Write Accelerator enabled, use a command like the one below with your values:
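For example, with the Azure CLI (the resource group, VM, and disk names are placeholders, and the --enable-write-accelerator flag assumes a CLI version that supports it):

# Attach an existing managed disk with Write Accelerator enabled.
az vm disk attach --resource-group myResourceGroup --vm-name myVM \
  --name myLogDisk --enable-write-accelerator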
Now you can install armclient in cmd.exe or PowerShell; one common way is via the Chocolatey package manager:
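choco install armclient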
Replace the terms within '<< >>' with your data, including the file name the JSON file should have.
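A typical invocation, assuming armclient's standard REST syntax, reads the VM definition into a JSON file:

armclient GET /subscriptions/<<subscription-id>>/resourceGroups/<<resource-group>>/providers/Microsoft.Compute/virtualMachines/<<vm-name>>?api-version=2017-12-01 > <<filename.json>>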
The output could look like:
{
"properties": {
"vmId": "2444c93e-f8bb-4a20-af2d-1658d9dbbbcb",
"hardwareProfile": {
"vmSize": "Standard_M64s"
},
"storageProfile": {
"imageReference": {
"publisher": "SUSE",
"offer": "SLES-SAP",
"sku": "12-SP3",
"version": "latest"
},
"osDisk": {
"osType": "Linux",
"name": "mylittlesap_OsDisk_1_754a1b8bb390468e9b4c429b81cc5f5a",
"createOption": "FromImage",
"caching": "ReadWrite",
"managedDisk": {
"storageAccountType": "Premium_LRS",
"id":
"/subscriptions/XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX/resourceGroups/mylittlesap/providers/Microsoft.Compute/dis
ks/mylittlesap_OsDisk_1_754a1b8bb390468e9b4c429b81cc5f5a"
},
"diskSizeGB": 30
},
"dataDisks": [
{
"lun": 0,
"name": "data1",
"createOption": "Attach",
"caching": "None",
"managedDisk": {
"storageAccountType": "Premium_LRS",
"id":
"/subscriptions/XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX/resourceGroups/mylittlesap/providers/Microsoft.Compute/dis
ks/data1"
},
"diskSizeGB": 1023
},
{
"lun": 1,
"name": "log1",
"createOption": "Attach",
"caching": "None",
"managedDisk": {
"storageAccountType": "Premium_LRS",
"id":
"/subscriptions/XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX/resourceGroups/mylittlesap/providers/Microsoft.Compute/dis
ks/data2"
},
"diskSizeGB": 1023
}
]
},
"osProfile": {
"computerName": "mylittlesapVM",
"adminUsername": "pl",
"linuxConfiguration": {
"disablePasswordAuthentication": false
},
"secrets": []
},
"networkProfile": {
"networkInterfaces": [
{
"id":
"/subscriptions/XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX/resourceGroups/mylittlesap/providers/Microsoft.Network/net
workInterfaces/mylittlesap518"
}
]
},
"diagnosticsProfile": {
"bootDiagnostics": {
"enabled": true,
"storageUri": "https://mylittlesapdiag895.blob.core.windows.net/"
}
},
"provisioningState": "Succeeded"
},
"type": "Microsoft.Compute/virtualMachines",
"location": "westeurope",
"id":
"/subscriptions/XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX/resourceGroups/mylittlesap/providers/Microsoft.Compute/vir
tualMachines/mylittlesapVM",
"name": "mylittlesapVM"
The next step is to update the JSON file and enable Write Accelerator on the disk called 'log1'. This step can be accomplished by adding this attribute into the JSON file after the cache entry of the disk.
{
"lun": 1,
"name": "log1",
"createOption": "Attach",
"caching": "None",
**"writeAcceleratorEnabled": true,**
"managedDisk": {
"storageAccountType": "Premium_LRS",
"id":
"/subscriptions/XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX/resourceGroups/mylittlesap/providers/Microsoft.Compute/dis
ks/data2"
},
"diskSizeGB": 1023
}
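After saving the change, write the updated definition back with a PUT, again assuming armclient's standard REST syntax:

armclient PUT /subscriptions/<<subscription-id>>/resourceGroups/<<resource-group>>/providers/Microsoft.Compute/virtualMachines/<<vm-name>>?api-version=2017-12-01 @<<filename.json>>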
The output should look like the one below. You can see that there is Write Accelerator enabled for one disk.
{
"properties": {
"vmId": "2444c93e-f8bb-4a20-af2d-1658d9dbbbcb",
"hardwareProfile": {
"vmSize": "Standard_M64s"
},
"storageProfile": {
"imageReference": {
"publisher": "SUSE",
"offer": "SLES-SAP",
"sku": "12-SP3",
"version": "latest"
},
"osDisk": {
"osType": "Linux",
"name": "mylittlesap_OsDisk_1_754a1b8bb390468e9b4c429b81cc5f5a",
"createOption": "FromImage",
"caching": "ReadWrite",
"managedDisk": {
"storageAccountType": "Premium_LRS",
"id":
"/subscriptions/XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX/resourceGroups/mylittlesap/providers/Microsoft.Compute/dis
ks/mylittlesap_OsDisk_1_754a1b8bb390468e9b4c429b81cc5f5a"
},
"diskSizeGB": 30
},
"dataDisks": [
{
"lun": 0,
"name": "data1",
"createOption": "Attach",
"caching": "None",
"managedDisk": {
"storageAccountType": "Premium_LRS",
"id":
"/subscriptions/XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX/resourceGroups/mylittlesap/providers/Microsoft.Compute/dis
ks/data1"
},
"diskSizeGB": 1023
},
{
"lun": 1,
"lun": 1,
"name": "log1",
"createOption": "Attach",
"caching": "None",
**"writeAcceleratorEnabled": true,**
"managedDisk": {
"storageAccountType": "Premium_LRS",
"id":
"/subscriptions/XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX/resourceGroups/mylittlesap/providers/Microsoft.Compute/dis
ks/data2"
},
"diskSizeGB": 1023
}
]
},
"osProfile": {
"computerName": "mylittlesapVM",
"adminUsername": "pl",
"linuxConfiguration": {
"disablePasswordAuthentication": false
},
"secrets": []
},
"networkProfile": {
"networkInterfaces": [
{
"id":
"/subscriptions/XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX/resourceGroups/mylittlesap/providers/Microsoft.Network/net
workInterfaces/mylittlesap518"
}
]
},
"diagnosticsProfile": {
"bootDiagnostics": {
"enabled": true,
"storageUri": "https://mylittlesapdiag895.blob.core.windows.net/"
}
},
"provisioningState": "Succeeded"
},
"type": "Microsoft.Compute/virtualMachines",
"location": "westeurope",
"id":
"/subscriptions/XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX/resourceGroups/mylittlesap/providers/Microsoft.Compute/vir
tualMachines/mylittlesapVM",
"name": "mylittlesapVM"
From this point on, the drive is supported by Write Accelerator.
Troubleshoot storage resource deletion errors
5/2/2018 • 4 min to read
In certain scenarios, you may encounter one of the following errors while you are trying to delete an Azure storage account, container, or blob in an Azure Resource Manager deployment:
Failed to delete storage account 'StorageAccountName'. Error: The storage account cannot be
deleted due to its artifacts being in use.
Failed to delete # out of # container(s):
vhds: There is currently a lease on the container and no lease ID was specified in the request.
Failed to delete # out of # blobs:
BlobName.vhd: There is currently a lease on the blob and no lease ID was specified in the request.
The VHDs used in Azure VMs are .vhd files stored as page blobs in a standard or premium storage account in
Azure. For more information about Azure disks, see About unmanaged and managed disk storage for Microsoft
Azure Linux VMs.
Azure prevents deletion of a disk that is attached to a VM to prevent corruption. It also prevents deletion of
containers and storage accounts that have a page blob that is attached to a VM.
The process to delete a storage account, container, or blob when receiving one of these errors is:
1. Identify blobs attached to a VM
2. Delete VMs with attached OS disk
3. Detach all data disk(s) from the remaining VM(s)
Retry deleting the storage account, container, or blob after these steps are completed.
6. If the blob disk type is OSDisk, follow Step 2: Delete VM to detach the OS disk. Otherwise, if the blob disk type is DataDisk, follow the steps in Step 3: Detach data disk from the VM.
IMPORTANT
If MicrosoftAzureCompute_VMName and MicrosoftAzureCompute_DiskType do not appear in the blob metadata, it
indicates that the blob is explicitly leased and is not attached to a VM. Leased blobs cannot be deleted without breaking the lease first. To break the lease, right-click the blob and select Break lease. Leased blobs that are not attached to a VM prevent deletion of the blob, but do not prevent deletion of the container or storage account.
Scenario 2: Deleting a container - identify all blob(s) within container that are attached to VMs
1. Sign in to the Azure portal.
2. On the Hub menu, select All resources. Go to the storage account, under Blob Service select Containers, and
find the container to be deleted.
3. Click to open the container and the list of blobs inside it will appear. Identify all the blobs with Blob Type =
Page blob and Lease State = Leased from this list. Follow Scenario 1 to identify the VM associated with
each of these blobs.
4. Follow Step 2 and Step 3 to delete VM(s) with OSDisk and detach DataDisk.
Scenario 3: Deleting storage account - identify all blob(s) within storage account that are attached to VMs
1. Sign in to the Azure portal.
2. On the Hub menu, select All resources. Go to the storage account, under Blob Service select Containers.
3. In Containers pane, identify all containers where Lease State is Leased and follow Scenario 2 for each
Leased container.
4. Follow Step 2 and Step 3 to delete VM(s) with OSDisk and detach DataDisk.
9. Select Save. The disk is now detached from the VM, and the VHD is no longer leased. It may take a few
minutes for the lease to be released. To verify that the lease has been released, browse to the blob location
and in the Blob properties pane, the Lease Status value should be Unlocked or Available.
Troubleshoot unexpected reboots of VMs with
attached VHDs
5/2/2018 • 1 min to read
If an Azure Virtual Machine (VM) has a large number of attached VHDs that are in the same storage account, you may exceed the scalability targets for an individual storage account, causing the VM to reboot unexpectedly. Check
the minute metrics for the storage account (TotalRequests/TotalIngress/TotalEgress) for spikes that exceed the
scalability targets for a storage account. See Metrics show an increase in PercentThrottlingError for assistance in
determining whether throttling has occurred on your storage account.
In general, each individual input or output operation on a VHD from a Virtual Machine translates to Get Page or
Put Page operations on the underlying page blob. Therefore, you can use the estimated IOPS for your
environment to tune how many VHDs you can have in a single storage account based on the specific behavior of
your application. Microsoft recommends having 40 or fewer disks in a single storage account. See Azure Storage
Scalability and Performance Targets for details about scalability targets for storage accounts, in particular the total
request rate and the total bandwidth for the type of storage account you are using.
If you are exceeding the scalability targets for your storage account, place your VHDs in multiple storage accounts
to reduce the activity in each individual account.
Virtual networks and virtual machines in Azure
When you create an Azure virtual machine (VM), you must create a virtual network (VNet) or use an existing VNet.
You also need to decide how your VMs are intended to be accessed on the VNet. It is important to plan before
creating resources and make sure that you understand the limits of networking resources.
In the following figure, VMs are represented as web servers and database servers. Each set of VMs is assigned to
a separate subnet in the VNet.
You can create a VNet before you create a VM, or you can create one as you create a VM. You create these resources to
support communication with a VM:
Network interfaces
IP addresses
Virtual network and subnets
In addition to those basic resources, you should also consider these optional resources:
Network security groups
Load balancers
Network interfaces
A network interface (NIC) is the interconnection between a VM and a virtual network (VNet). A VM must have at
least one NIC, but can have more than one, depending on the size of the VM you create. Learn about how many
NICs each VM size supports for Windows or Linux.
You can create a VM with multiple NICs, and add or remove NICs through the lifecycle of a VM. Multiple NICs
allow a VM to connect to different subnets and send or receive traffic over the most appropriate interface.
If the VM is added to an availability set, all VMs within the availability set must have one or multiple NICs. VMs
with more than one NIC aren’t required to have the same number of NICs, but they must all have at least two.
Each NIC attached to a VM must exist in the same location and subscription as the VM. Each NIC must be
connected to a VNet that exists in the same Azure location and subscription as the NIC. You can change the subnet
a VM is connected to after it's created, but you cannot change the VNet. Each NIC attached to a VM is assigned a
MAC address that doesn’t change until the VM is deleted.
This table lists the methods that you can use to create a network interface.
METHOD DESCRIPTION
Azure CLI To provide the identifier of the public IP address that you
previously created, use az network nic create with the
--public-ip-address parameter.
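For example, the following is a minimal sketch; the resource names are illustrative and assume an existing VNet,
subnet, and public IP address:
az network nic create \
    --resource-group myResourceGroup \
    --name myNic \
    --vnet-name myVnet \
    --subnet mySubnet \
    --public-ip-address myPublicIP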
IP addresses
You can assign these types of IP addresses to a NIC in Azure:
Public IP addresses - Used to communicate inbound and outbound (without network address translation
(NAT)) with the Internet and other Azure resources not connected to a VNet. Assigning a public IP address to a
NIC is optional. Public IP addresses have a nominal charge, and there's a maximum number that can be used
per subscription.
Private IP addresses - Used for communication within a VNet, your on-premises network, and the Internet
(with NAT). You must assign at least one private IP address to a VM. To learn more about NAT in Azure, read
Understanding outbound connections in Azure.
You can assign public IP addresses to VMs or internet-facing load balancers. You can assign private IP addresses to
VMs and internal load balancers. You assign IP addresses to a VM using a network interface.
There are two methods in which an IP address is allocated to a resource - dynamic or static. The default allocation
method is dynamic, where an IP address is not allocated when the IP address resource is created. Instead, the IP
address is allocated when you create a VM or start a stopped VM. The IP address is released when you stop or
delete the VM.
To ensure the IP address for the VM remains the same, you can set the allocation method explicitly to static. In this
case, an IP address is assigned immediately. It is released only when you delete the VM or change its allocation
method to dynamic.
This table lists the methods that you can use to create an IP address.
METHOD DESCRIPTION
Azure portal By default, public IP addresses are dynamic and the address
associated to them may change when the VM is stopped or
deleted. To guarantee that the VM always uses the same
public IP address, create a static public IP address. By default,
the portal assigns a dynamic private IP address to a NIC when
creating a VM. You can change this IP address to static after
the VM is created.
Azure CLI Use az network public-ip create with the
--allocation-method parameter set to Dynamic or Static.
After you create a public IP address, you can associate it with a VM by assigning it to a NIC.
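For example, the following is a minimal sketch of creating a static public IP address; the resource names are
illustrative:
az network public-ip create \
    --resource-group myResourceGroup \
    --name myPublicIP \
    --allocation-method Static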
Virtual network and subnets
A subnet is a range of IP addresses in the VNet. This table lists the methods that you can use to create a virtual
network and subnet.
METHOD DESCRIPTION
Azure portal If you let Azure create a VNet when you create a VM, the
name is a combination of the resource group name that
contains the VNet and -vnet. The address space is
10.0.0.0/24, the required subnet name is default, and the
subnet address range is 10.0.0.0/24.
Azure CLI The subnet and the VNet are created at the same time.
Provide a --subnet-name parameter to az network vnet
create with the subnet name.
Network security groups
A network security group (NSG) contains rules that allow or deny network traffic to subnets and NICs. This table
lists the methods that you can use to create a network security group.
METHOD DESCRIPTION
Azure CLI Use az network nsg create to initially create the NSG. Use az
network nsg rule create to add rules to the NSG. Use az
network vnet subnet update to add the NSG to the subnet.
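As a minimal sketch, the following sequence creates an NSG, adds an inbound SSH rule, and associates the NSG
with a subnet; the resource names, priority, and port are illustrative:
az network nsg create \
    --resource-group myResourceGroup \
    --name myNSG

az network nsg rule create \
    --resource-group myResourceGroup \
    --nsg-name myNSG \
    --name allow-ssh \
    --priority 1000 \
    --destination-port-range 22 \
    --access Allow \
    --protocol Tcp

az network vnet subnet update \
    --resource-group myResourceGroup \
    --vnet-name myVnet \
    --name mySubnet \
    --network-security-group myNSG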
Load balancers
Azure Load Balancer delivers high availability and network performance to your applications. A load balancer can
be configured to balance incoming Internet traffic to VMs or balance traffic between VMs in a VNet. A load
balancer can also balance traffic between on-premises computers and VMs in a cross-premises network, or
forward external traffic to a specific VM.
The load balancer maps incoming and outgoing traffic between the public IP address and port on the load balancer
and the private IP address and port of the VM.
When you create a load balancer, you must also consider these configuration elements:
Front-end IP configuration – A load balancer can include one or more front-end IP addresses, otherwise
known as virtual IPs (VIPs). These IP addresses serve as ingress for the traffic.
Back-end address pool – IP addresses that are associated with the NIC to which load is distributed.
NAT rules - Defines how inbound traffic flows through the front-end IP and is distributed to the back-end IP.
Load balancer rules - Maps a given front-end IP and port combination to a set of back-end IP addresses and
port combination. A single load balancer can have multiple load balancing rules. Each rule is a combination of a
front-end IP and port and back-end IP and port associated with VMs.
Probes - Monitors the health of VMs. When a probe fails to respond, the load balancer stops sending new
connections to the unhealthy VM. The existing connections are not affected, and new connections are sent to
healthy VMs.
This table lists the methods that you can use to create an internet-facing load balancer.
METHOD DESCRIPTION
Azure PowerShell To provide the identifier of the public IP address that you
previously created, use New-
AzureRmLoadBalancerFrontendIpConfig with the -
PublicIpAddress parameter. Use New-
AzureRmLoadBalancerBackendAddressPoolConfig to create
the configuration of the back-end address pool. Use New-
AzureRmLoadBalancerInboundNatRuleConfig to create
inbound NAT rules associated with the front-end IP
configuration that you created. Use New-
AzureRmLoadBalancerProbeConfig to create the probes that
you need. Use New-AzureRmLoadBalancerRuleConfig to
create the load balancer configuration. Use New-
AzureRmLoadBalancer to create the load balancer.
Azure CLI Use az network lb create to create the initial load balancer
configuration. Use az network lb frontend-ip create to add the
public IP address that you previously created. Use az network
lb address-pool create to add the configuration of the back-
end address pool. Use az network lb inbound-nat-rule create
to add NAT rules. Use az network lb rule create to add the
load balancer rules. Use az network lb probe create to add the
probes.
Template Use 2 VMs in a Load Balancer and configure NAT rules on the
LB as a guide for deploying a load balancer using a template.
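As a minimal sketch of the CLI sequence for an internet-facing load balancer, the following assumes an existing
public IP address named myPublicIP; the other names, ports, and protocol are illustrative:
az network lb create \
    --resource-group myResourceGroup \
    --name myLoadBalancer \
    --public-ip-address myPublicIP \
    --frontend-ip-name myFrontEnd \
    --backend-pool-name myBackEndPool

az network lb probe create \
    --resource-group myResourceGroup \
    --lb-name myLoadBalancer \
    --name myHealthProbe \
    --protocol tcp \
    --port 80

az network lb rule create \
    --resource-group myResourceGroup \
    --lb-name myLoadBalancer \
    --name myLoadBalancerRuleWeb \
    --protocol tcp \
    --frontend-port 80 \
    --backend-port 80 \
    --frontend-ip-name myFrontEnd \
    --backend-pool-name myBackEndPool \
    --probe-name myHealthProbe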
This table lists the methods that you can use to create an internal load balancer.
METHOD DESCRIPTION
Azure portal You can't currently create an internal load balancer using the
Azure portal.
Azure CLI Use the az network lb create command to create the initial
load balancer configuration. To define the private IP address,
use az network lb frontend-ip create with the --private-ip-
address parameter. Use az network lb address-pool create to
add the configuration of the back-end address pool. Use az
network lb inbound-nat-rule create to add NAT rules. Use az
network lb rule create to add the load balancer rules. Use az
network lb probe create to add the probes.
Template Use 2 VMs in a Load Balancer and configure NAT rules on the
LB as a guide for deploying a load balancer using a template.
VMs
VMs can be created in the same VNet and they can connect to each other using private IP addresses. They can
connect even if they are in different subnets without the need to configure a gateway or use public IP addresses. To
put VMs into a VNet, you create the VNet and then as you create each VM, you assign it to the VNet and subnet.
VMs acquire their network settings during deployment or startup.
VMs are assigned an IP address when they are deployed. If you deploy multiple VMs into a VNet or subnet, they
are assigned IP addresses as they boot up. A dynamic IP address (DIP) is the internal IP address associated with a
VM. You can allocate a static DIP to a VM. If you allocate a static DIP, you should consider using a specific subnet to
avoid accidentally reusing a static DIP for another VM.
If you create a VM and later want to migrate it into a VNet, it is not a simple configuration change. You must
redeploy the VM into the VNet. The easiest way to redeploy is to delete the VM, but not any disks attached to it,
and then re-create the VM using the original disks in the VNet.
This table lists the methods that you can use to create a VM in a VNet.
METHOD DESCRIPTION
Azure portal Uses the default network settings that were previously
mentioned to create a VM with a single NIC. To create a VM
with multiple NICs, you must use a different method.
Azure CLI Create and connect a VM to a VNet, subnet, and NIC that
are built as individual steps.
Next steps
For VM-specific steps on how to manage Azure virtual networks for VMs, see the Windows or Linux tutorials.
There are also tutorials on how to load balance VMs and create highly available applications for Windows or Linux.
Learn how to configure user-defined routes and IP forwarding.
Learn how to configure VNet to VNet connections.
Learn how to Troubleshoot routes.
Automatically scale virtual machines in Azure
You can easily automatically scale your virtual machines (VMs) when you use virtual machine scale sets and the
autoscaling feature of Azure Monitor. Your VMs need to be members of a scale set to be automatically scaled. This
article provides information that enables you to better understand how to scale your VMs both vertically and
horizontally using automatic and manual methods.
If your application needs to scale based on metrics that are not available through the host, then the VMs in the
scale set need to have either the Linux diagnostic extension or Windows diagnostics extension installed. If you
create a scale set using the Azure portal, you need to also use Azure PowerShell or the Azure CLI to install the
extension with the diagnostics configuration that you need.
Rules
Rules combine a metric with an action to be performed. When rule conditions are met, one or more autoscale
actions are triggered. For example, you might have a rule defined that increases the number of VMs by 1 if the
average CPU usage goes above 85 percent.
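As a sketch of such a rule with the az monitor autoscale commands available in newer Azure CLI versions (the
scale set name, autoscale setting name, and thresholds are illustrative):
az monitor autoscale create \
    --resource-group myResourceGroup \
    --resource myScaleSet \
    --resource-type Microsoft.Compute/virtualMachineScaleSets \
    --name myAutoscale \
    --min-count 2 \
    --max-count 10 \
    --count 2

az monitor autoscale rule create \
    --resource-group myResourceGroup \
    --autoscale-name myAutoscale \
    --condition "Percentage CPU > 85 avg 5m" \
    --scale out 1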
Notifications
You can set up triggers so that specific web URLs are called or emails are sent based on the autoscale rules that
you create. Webhooks allow you to route the Azure alert notifications to other systems for post-processing or
custom notifications.
Next steps
Learn more about scale sets in Design Considerations for Scale Sets.
Use infrastructure automation tools with virtual
machines in Azure
To create and manage Azure virtual machines (VMs) in a consistent manner at scale, some form of automation is
typically desired. There are many tools and solutions that allow you to automate the complete Azure infrastructure
deployment and management lifecycle. This article introduces some of the infrastructure automation tools that you
can use in Azure. These tools commonly fit into one of the following approaches:
Automate the configuration of VMs
Tools include Ansible, Chef, and Puppet.
Tools specific to VM customization include cloud-init for Linux VMs, PowerShell Desired State
Configuration (DSC), and the Azure Custom Script Extension for all Azure VMs.
Automate infrastructure management
Tools include Packer to automate custom VM image builds, and Terraform to automate the infrastructure
build process.
Azure Automation can perform actions across your Azure and on-premises infrastructure.
Automate application deployment and delivery
Examples include Visual Studio Team Services and Jenkins.
Ansible
Ansible is an automation engine for configuration management, VM creation, or application deployment. Ansible
uses an agent-less model, typically with SSH keys, to authenticate and manage target machines. Configuration
tasks are defined in playbooks, with a number of Ansible modules available to carry out specific tasks. For more
information, see How Ansible works.
Learn how to:
Install and configure Ansible on Linux for use with Azure.
Create a basic VM.
Create a complete VM environment including supporting resources.
Chef
Chef is an automation platform that helps define how your infrastructure is configured, deployed, and managed.
Additional components include Chef Habitat for application lifecycle automation rather than the infrastructure,
and Chef InSpec, which helps automate compliance with security and policy requirements. Chef Clients are installed
on target machines, with one or more central Chef Servers that store and manage the configurations. For more
information, see An Overview of Chef.
Learn how to:
Deploy Chef Automate from the Azure Marketplace.
Install Chef on Windows and create Azure VMs.
Puppet
Puppet is an enterprise-ready automation platform that handles the application delivery and deployment process.
Agents are installed on target machines to allow Puppet Master to run manifests that define the desired
configuration of the Azure infrastructure and VMs. Puppet can integrate with other solutions such as Jenkins and
GitHub for an improved devops workflow. For more information, see How Puppet works.
Learn how to:
Deploy Puppet from the Azure Marketplace.
Cloud-init
Cloud-init is a widely used approach to customize a Linux VM as it boots for the first time. You can use cloud-init to
install packages and write files, or to configure users and security. Because cloud-init is called during the initial boot
process, there are no additional steps or required agents to apply your configuration. For more information on how
to properly format your #cloud-config files, see the cloud-init documentation site. #cloud-config files are plain-text
files; when passed to a VM as custom data in a Resource Manager template, they are base64 encoded.
Cloud-init also works across distributions. For example, you don't use apt-get install or yum install to install a
package. Instead you can define a list of packages to install. Cloud-init automatically uses the native package
management tool for the distro you select.
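As a minimal sketch, the following creates a #cloud-config file that installs a package list and passes it to az vm
create with the --custom-data parameter; the VM name is illustrative, and the UbuntuLTS image is assumed to
have cloud-init enabled:
cat <<'EOF' > cloud-init.txt
#cloud-config
package_upgrade: true
packages:
  - nginx
EOF

az vm create \
    --resource-group myResourceGroup \
    --name myCloudInitVM \
    --image UbuntuLTS \
    --admin-username azureuser \
    --generate-ssh-keys \
    --custom-data cloud-init.txt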
We are actively working with our endorsed Linux distro partners in order to have cloud-init enabled images
available in the Azure marketplace. These images make your cloud-init deployments and configurations work
seamlessly with VMs and virtual machine scale sets.
PowerShell DSC
PowerShell Desired State Configuration (DSC) is a management platform to define the configuration of target
machines. DSC can also be used on Linux through the Open Management Infrastructure (OMI) server.
DSC configurations define what to install on a machine and how to configure the host. A Local Configuration
Manager (LCM) engine runs on each target node and processes requested actions based on pushed configurations.
A pull server is a web service that runs on a central host to store the DSC configurations and associated resources.
The pull server communicates with the LCM engine on each target host to provide the required configurations and
report on compliance.
Learn how to:
Create a basic DSC configuration.
Configure a DSC pull server.
Use DSC for Linux.
Packer
Packer automates the build process when you create a custom VM image in Azure. You use Packer to define the
OS and run post-configuration scripts that customize the VM for your specific needs. Once configured, the VM is
then captured as a Managed Disk image. Packer automates the process to create the source VM, network and
storage resources, run configuration scripts, and then create the VM image.
Learn how to:
Use Packer to create a Linux VM image in Azure.
Use Packer to create a Windows VM image in Azure.
Terraform
Terraform is an automation tool that allows you to define and create an entire Azure infrastructure with a single
template format language - the HashiCorp Configuration Language (HCL). With Terraform, you define templates
that automate the process to create network, storage, and VM resources for a given application solution. You can
use your existing Terraform templates for other platforms with Azure to ensure consistency and simplify the
infrastructure deployment without needing to convert to an Azure Resource Manager template.
Learn how to:
Install and configure Terraform with Azure.
Create an Azure infrastructure with Terraform.
Azure Automation
Azure Automation uses runbooks to process a set of tasks on the VMs you target. Azure Automation is used to
manage existing VMs rather than to create an infrastructure. Azure Automation can run across both Linux and
Windows VMs, as well as on-premises virtual or physical machines with a hybrid runbook worker. Runbooks can
be stored in a source control repository, such as GitHub. These runbooks can then run manually or on a defined
schedule.
Azure Automation also provides a Desired State Configuration (DSC) service that allows you to create definitions
for how a given set of VMs should be configured. DSC then ensures that the required configuration is applied and
the VM stays consistent. Azure Automation DSC runs on both Windows and Linux machines.
Learn how to:
Create a PowerShell runbook.
Use Hybrid Runbook Worker to manage on-premises resources.
Use Azure Automation DSC.
Jenkins
Jenkins is a continuous integration server that helps deploy and test applications, and create automated pipelines
for code delivery. There are hundreds of plugins to extend the core Jenkins platform, and you can also integrate
with many other products and solutions through webhooks. You can manually install Jenkins on an Azure VM, run
Jenkins from within a Docker container, or use a pre-built Azure Marketplace image.
Learn how to:
Create a development infrastructure on a Linux VM in Azure with Jenkins, GitHub, and Docker.
Next steps
There are many different options to use infrastructure automation tools in Azure. You have the freedom to use the
solution that best fits your needs and environment. To get started and try some of the tools built-in to Azure, see
how to automate the customization of a Linux or Windows VM.
Secure and use policies on virtual machines in Azure
It’s important to keep your virtual machine (VM) secure for the applications that you run. Securing your VMs can
include one or more Azure services and features that cover secure access to your VMs and secure storage of your
data. This article provides information that enables you to keep your VM and applications secure.
Antimalware
The modern threat landscape for cloud environments is dynamic, increasing the pressure to maintain effective
protection in order to meet compliance and security requirements. Microsoft Antimalware for Azure is a free real-
time protection capability that helps identify and remove viruses, spyware, and other malicious software. Alerts can
be configured to notify you when known malicious or unwanted software attempts to install itself or run on your
VM.
Encryption
For enhanced Windows VM and Linux VM security and compliance, virtual disks in Azure can be encrypted.
Virtual disks on Windows VMs are encrypted at rest using BitLocker. Virtual disks on Linux VMs are encrypted at
rest using dm-crypt.
There is no charge for encrypting virtual disks in Azure. Cryptographic keys are stored in Azure Key Vault using
software-protection, or you can import or generate your keys in Hardware Security Modules (HSMs) certified to
FIPS 140-2 level 2 standards. These cryptographic keys are used to encrypt and decrypt virtual disks attached to
your VM. You retain control of these cryptographic keys and can audit their use. An Azure Active Directory service
principal provides a secure mechanism for issuing these cryptographic keys as VMs are powered on and off.
Policies
Azure policies can be used to define the desired behavior for your organization's Windows VMs and Linux VMs. By
using policies, an organization can enforce various conventions and rules throughout the enterprise. Enforcement
of the desired behavior can help mitigate risk while contributing to the success of the organization.
Next steps
Walk through the steps to monitor virtual machine security by using Azure Security Center for Linux or
Windows.
How to monitor virtual machines in Azure
You can take advantage of many opportunities to monitor your VMs by collecting, viewing, and analyzing
diagnostic and log data. To do simple monitoring of your VM, you can use the Overview screen for the VM in the
Azure portal. You can use extensions to configure diagnostics on your VMs to collect additional metric data. You
can also use more advanced monitoring options, such as Application Insights and Log Analytics.
Alerts
You can create alerts based on specific performance metrics. Examples of the issues you can be alerted about
include when average CPU usage exceeds a certain threshold, or available free disk space drops below a certain
amount. Alerts can be configured in the Azure portal, using Azure PowerShell, or the Azure CLI.
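As a sketch using the az monitor metrics alert commands available in newer Azure CLI versions (the alert name
and threshold are illustrative, and <vm-resource-id> is a placeholder for the VM's resource ID):
az monitor metrics alert create \
    --resource-group myResourceGroup \
    --name cpu-alert \
    --scopes <vm-resource-id> \
    --condition "avg Percentage CPU > 80" \
    --description "Alert when average CPU exceeds 80 percent"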
Logs
The Azure Activity Log is a subscription log that provides insight into subscription-level events that have occurred
in Azure. The log includes a range of data, from Azure Resource Manager operational data to updates on Service
Health events. You can click Activity Log in the Azure portal to view the log for your VM.
Some of the things you can do with the activity log include:
Create an alert on an Activity Log event.
Stream it to an Event Hub for ingestion by a third-party service or custom analytics solution such as PowerBI.
Analyze it in PowerBI using the PowerBI content pack.
Save it to a storage account for archival or manual inspection. You can specify the retention time (in days) using
the Log Profile.
You can also access activity log data by using Azure PowerShell, the Azure CLI, or Monitor REST APIs.
Azure Diagnostic Logs are logs emitted by your VM that provide rich, frequent data about its operation. Diagnostic
logs differ from the activity log by providing insight about operations that were performed within the VM.
Some of the things you can do with diagnostics logs include:
Save them to a storage account for auditing or manual inspection. You can specify the retention time (in days)
using Resource Diagnostic Settings.
Stream them to Event Hubs for ingestion by a third-party service or custom analytics solution such as PowerBI.
Analyze them with OMS Log Analytics.
Advanced monitoring
Operations Management Suite (OMS) provides monitoring, alerting, and alert remediation capabilities
across cloud and on-premises assets. You can install an extension on a Linux VM or a Windows VM that
installs the OMS agent, and enrolls the VM into an existing OMS workspace.
Log Analytics is a service in OMS that monitors your cloud and on-premises environments to maintain their
availability and performance. It collects data generated by resources in your cloud and on-premises
environments and from other monitoring tools to provide analysis across multiple sources.
For Windows and Linux VMs, the recommended method for collecting logs and metrics is by installing the
Log Analytics agent. The easiest way to install the Log Analytics agent on a VM is through the Log Analytics
VM Extension. Using the extension simplifies the installation process and automatically configures the agent
to send data to the Log Analytics workspace that you specify. The agent is also upgraded automatically,
ensuring that you have the latest features and fixes.
Network Watcher enables you to monitor your VM and its associated resources as they relate to the
network that they are in. You can install the Network Watcher Agent extension on a Linux VM or a Windows
VM.
Next steps
Walk through the steps in Monitor a Windows Virtual Machine with Azure PowerShell or Monitor a Linux
Virtual Machine with the Azure CLI.
Learn more about the best practices around Monitoring and diagnostics.
Backup and restore options for Linux virtual machines
in Azure
You can protect your data by taking backups at regular intervals. There are several backup options available for
VMs, depending on your use-case.
Azure Backup
For backing up Azure VMs running production workloads, use Azure Backup. Azure Backup supports application-
consistent backups for both Windows and Linux VMs. Azure Backup creates recovery points that are stored in geo-
redundant recovery vaults. When you restore from a recovery point, you can restore the whole VM or just specific
files.
For a simple, hands-on introduction to Azure Backup for Azure VMs, see the "Back up Azure virtual machines"
tutorial for Linux or Windows.
For more information on how Azure Backup works, see Plan your VM backup infrastructure in Azure.
Managed snapshots
In development and test environments, snapshots provide a quick and simple option for backing up VMs that use
Managed Disks. A managed snapshot is a read-only full copy of a managed disk. Snapshots exist independent of
the source disk and can be used to create new managed disks for rebuilding a VM. They are billed based on the
used portion of the disk. For example, if you create a snapshot of a managed disk with provisioned capacity of 64
GB and an actual used data size of 10 GB, the snapshot is billed only for the used data size of 10 GB.
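As a minimal sketch of snapshotting a managed disk with the Azure CLI (the disk and snapshot names are
illustrative):
az snapshot create \
    --resource-group myResourceGroup \
    --name mySnapshot \
    --source myManagedDisk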
For more information on creating snapshots, see:
Create copy of VHD stored as a Managed Disk using Snapshots in Windows
Create copy of VHD stored as a Managed Disk using Snapshots in Linux
Next steps
You can try out Azure Backup by following the "Back up Windows virtual machines tutorial" for Linux or Windows.
HPC, Batch, and Big Compute solutions using Azure
VMs
Organizations have large-scale computing needs. These Big Compute workloads include engineering design and
analysis, financial risk calculations, image rendering, complex modeling, Monte Carlo simulations, and more.
Use the Azure cloud to efficiently run compute-intensive Linux and Windows workloads, from parallel batch jobs to
traditional HPC simulations. Run your HPC and batch workloads on Azure infrastructure, with your choice of
compute services, grid managers, Marketplace solutions, and vendor-hosted (SaaS) applications. Azure provides
flexible solutions to distribute work and scale to thousands of VMs or cores and then scale down when you need
fewer resources.
Solution options
Do-it-yourself solutions
Set up your own cluster environment in Azure virtual machines or virtual machine scale sets.
Lift and shift an on-premises cluster, or deploy a new cluster in Azure for additional capacity.
Use Azure Resource Manager templates to deploy leading workload managers, infrastructure, and
applications.
Choose HPC and GPU VM sizes that include specialized hardware and network connections for MPI or
GPU workloads.
Add high performance storage for I/O-intensive workloads.
Hybrid solutions
Extend your on-premises solution to offload ("burst") peak workloads to Azure infrastructure.
Use cloud compute on-demand with your existing workload manager.
Take advantage of HPC and GPU VM sizes for MPI or GPU workloads.
Big Compute solutions as a service
Develop custom Big Compute solutions and workflows using Azure Batch and related Azure services.
Run Azure-enabled engineering and simulation solutions from vendors including Altair, Rescale, and
Cycle Computing (now joined with Microsoft).
Use a Cray supercomputer as a service hosted in Azure.
Marketplace solutions
Use the scale of HPC applications and solutions offered in the Azure Marketplace.
The following sections provide more information about the supporting technologies and links to guidance.
Marketplace solutions
Visit the Azure Marketplace for Linux and Windows VM images and solutions designed for HPC. Examples include:
RogueWave CentOS-based HPC
SUSE Linux Enterprise Server for HPC
TIBCO Grid Server Engine
Azure Data Science VM for Windows and Linux
D3View
UberCloud
Intel Cloud Edition for Lustre
HPC applications
Run custom or commercial HPC applications in Azure. Several examples in this section are benchmarked to scale
efficiently with additional VMs or compute cores. Visit the Azure Marketplace for ready-to-deploy solutions.
NOTE
Check with the vendor of any commercial application for licensing or other restrictions for running in the cloud. Not all
vendors offer pay-as-you-go licensing. You might need a licensing server in the cloud for your solution, or connect to an on-
premises license server.
Engineering applications
Altair RADIOSS
ANSYS CFD
MATLAB Distributed Computing Server
StarCCM+
OpenFOAM
Graphics and rendering
Autodesk Maya, 3ds Max, and Arnold on Azure Batch
AI and deep learning
Batch AI training for deep learning models
Microsoft Cognitive Toolkit
Deep Learning VM
Batch Shipyard recipes for deep learning
Azure Batch
Batch is a platform service for running large-scale parallel and high-performance computing (HPC) applications
efficiently in the cloud. Azure Batch schedules compute-intensive work to run on a managed pool of virtual
machines, and can automatically scale compute resources to meet the needs of your jobs.
SaaS providers or developers can use the Batch SDKs and tools to integrate HPC applications or container
workloads with Azure, stage data to Azure, and build job execution pipelines.
Learn how to:
Get started developing with Batch
Use Azure Batch code samples
Use low-priority VMs with Batch
Run containerized HPC workloads with Batch Shipyard
Run parallel R workloads on Batch
Run on-demand Spark jobs on Batch
Workload managers
The following are examples of cluster and workload managers that can run in Azure infrastructure. Create stand-
alone clusters in Azure VMs or burst to Azure VMs from an on-premises cluster.
Alces Flight Compute
TIBCO DataSynapse GridServer
Bright Cluster Manager
IBM Spectrum Symphony and Symphony LSF
PBS Pro
Microsoft HPC Pack - see options to run in Windows and Linux VMs
HPC storage
Large-scale Batch and HPC workloads have demands for data storage and access that exceed the capabilities of
traditional cloud file systems. Implement parallel file system solutions in Azure such as Lustre and BeeGFS.
Learn more:
Parallel virtual file systems on Azure
High performance cloud storage solutions from Avere (now joined with Microsoft)
Customer stories
Examples of customers that have solved business problems with Azure HPC solutions:
ANEO
AXA Global P&C
Axioma
d3View
EFS
Hymans Robertson
MetLife
Microsoft Research
Milliman
Mitsubishi UFJ Securities International
Schlumberger
Towers Watson
Next steps
Learn more about Big Compute solutions for engineering simulation, rendering, banking and capital markets,
and genomics.
For the latest announcements, see the Microsoft HPC and Batch team blog and the Azure blog.
Use the managed and scalable Azure Batch service to run compute-intensive workloads without managing the
underlying infrastructure.
Example Azure infrastructure walkthrough for Linux
VMs
This article walks through building out an example application infrastructure. We detail designing an infrastructure
for a simple on-line store that brings together all the guidelines and decisions around naming conventions,
availability sets, virtual networks and load balancers, and actually deploying your virtual machines (VMs).
Example workload
Adventure Works Cycles wants to build an on-line store application in Azure that consists of:
Two nginx servers running the client front-end in a web tier
Two nginx servers processing data and orders in an application tier
Two MongoDB servers part of a sharded cluster for storing product data and orders in a database tier
Two Active Directory domain controllers for customer accounts and suppliers in an authentication tier
All the servers are located in two subnets:
a front-end subnet for the web servers
a back-end subnet for the application servers, MongoDB cluster, and domain controllers
Incoming secure web traffic must be load-balanced among the web servers as customers browse the on-line store.
Order processing traffic in the form of HTTP requests from the web servers must be load-balanced among the
application servers. Additionally, the infrastructure must be designed for high availability.
The resulting design must incorporate:
An Azure subscription and account
A single resource group
Azure Managed Disks
A virtual network with two subnets
Availability sets for the VMs with a similar role
Virtual machines
All the above follow these naming conventions:
Adventure Works Cycles uses [IT workload]-[location]-[Azure resource] as a prefix
For this example, "azos" (Azure On-line Store) is the IT workload name and "use" (East US 2) is the
location
Virtual networks use AZOS-USE-VN[number]
Availability sets use azos-use-as-[role]
Virtual machine names use azos-use-vm-[vmname]
Storage
Adventure Works Cycles determined that they should use Azure Managed Disks. When creating VMs, both
available storage tiers are used:
Standard storage for the web servers, application servers, and domain controllers and their data disks.
Premium storage for the MongoDB sharded cluster servers and their data disks.
Availability sets
To maintain high availability of all four tiers of their on-line store, Adventure Works Cycles decided on four
availability sets:
azos-use-as-web for the web servers
azos-use-as-app for the application servers
azos-use-as-db for the servers in the MongoDB sharded cluster
azos-use-as-dc for the domain controllers
Virtual machines
Adventure Works Cycles decided on the following names for their Azure VMs:
azos-use-vm-web01 for the first web server
azos-use-vm-web02 for the second web server
azos-use-vm-app01 for the first application server
azos-use-vm-app02 for the second application server
azos-use-vm-db01 for the first MongoDB server in the cluster
azos-use-vm-db02 for the second MongoDB server in the cluster
azos-use-vm-dc01 for the first domain controller
azos-use-vm-dc02 for the second domain controller
Here is the resulting configuration.
Virtual machine vCPU quotas
The vCPU quotas for virtual machines and virtual machine scale sets are arranged in two tiers for each
subscription, in each region. The first tier is the Total Regional vCPUs, and the second tier is the various VM size
family cores such as Standard D Family vCPUs. Any time a new VM is deployed the vCPUs for the newly deployed
VM must not exceed the vCPU quota for the specific VM size family or the total regional vCPU quota. If either of
those quotas is exceeded, the VM deployment is not allowed. There is also a quota for the overall
number of virtual machines in the region. The details on each of these quotas can be seen in the Usage + quotas
section of the Subscription page in the Azure portal, or you can query for the values using Azure CLI.
Check usage
You can check your quota usage using az vm list-usage.
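For example, to show vCPU usage against quotas in a region as a table (the location is illustrative):
az vm list-usage --location eastus --output table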
Reserved VM Instances
Reserved VM Instances, which are scoped to a single subscription, will add a new aspect to the vCPU quotas. These
values describe the number of instances of the stated size that must be deployable in the subscription. They work
as a placeholder in the quota system to ensure that quota is reserved so that reserved instances are always
deployable in the subscription. For example, if a specific subscription has 10 Standard_D1 reserved instances, the
usage limit
for Standard_D1 Reserved Instances will be 10. This will cause Azure to ensure that there are always at least 10
vCPUs available in the Total Regional vCPUs quota to be used for Standard_D1 instances and there are at least 10
vCPUs available in the Standard D Family vCPU quota to be used for Standard_D1 instances.
If a quota increase is required to purchase a Single Subscription RI, you can request a quota increase on
your subscription.
Next steps
For more information about billing and quotas, see Azure subscription and service limits, quotas, and constraints.
Create a complete Linux virtual machine with the
Azure CLI
To quickly create a virtual machine (VM) in Azure, you can use a single Azure CLI command that uses default
values to create any required supporting resources. Resources such as a virtual network, public IP address, and
network security group rules are automatically created. For more control of your environment in production use,
you may create these resources ahead of time and then add your VMs to them. This article guides you through
how to create a VM and each of the supporting resources one by one.
Make sure that you have installed the latest Azure CLI 2.0 and logged in to an Azure account with az login.
In the following examples, replace example parameter names with your own values. Example parameter names
include myResourceGroup, myVnet, and myVM.
By default, the output of Azure CLI commands is in JSON (JavaScript Object Notation). To change the default
output to a list or table, for example, use az configure --output. You can also add --output to any command for a
one-time change in output format. The following example shows the JSON output from the az group create
command:
{
"id": "/subscriptions/guid/resourceGroups/myResourceGroup",
"location": "eastus",
"name": "myResourceGroup",
"properties": {
"provisioningState": "Succeeded"
},
"tags": null
}
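Create the virtual network and subnet with az network vnet create. The following sketch matches the address
prefixes shown in the output below:
az network vnet create \
    --resource-group myResourceGroup \
    --name myVnet \
    --address-prefix 192.168.0.0/16 \
    --subnet-name mySubnet \
    --subnet-prefix 192.168.1.0/24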
The output shows the subnet is logically created inside the virtual network:
{
"addressSpace": {
"addressPrefixes": [
"192.168.0.0/16"
]
},
"dhcpOptions": {
"dnsServers": []
},
"etag": "W/\"e95496fc-f417-426e-a4d8-c9e4d27fc2ee\"",
"id":
"/subscriptions/guid/resourceGroups/myResourceGroup/providers/Microsoft.Network/virtualNetworks/myVnet",
"location": "eastus",
"name": "myVnet",
"provisioningState": "Succeeded",
"resourceGroup": "myResourceGroup",
"resourceGuid": "ed62fd03-e9de-430b-84df-8a3b87cacdbb",
"subnets": [
{
"addressPrefix": "192.168.1.0/24",
"etag": "W/\"e95496fc-f417-426e-a4d8-c9e4d27fc2ee\"",
"id":
"/subscriptions/guid/resourceGroups/myResourceGroup/providers/Microsoft.Network/virtualNetworks/myVnet/subnets
/mySubnet",
"ipConfigurations": null,
"name": "mySubnet",
"networkSecurityGroup": null,
"provisioningState": "Succeeded",
"resourceGroup": "myResourceGroup",
"resourceNavigationLinks": null,
"routeTable": null
}
],
"tags": {},
"type": "Microsoft.Network/virtualNetworks",
"virtualNetworkPeerings": null
}
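Next, create a public IP address with az network public-ip create; the DNS name below matches the output that
follows:
az network public-ip create \
    --resource-group myResourceGroup \
    --name myPublicIP \
    --dns-name mypublicdns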
Output:
{
"publicIp": {
"dnsSettings": {
"domainNameLabel": "mypublicdns",
"fqdn": "mypublicdns.eastus.cloudapp.azure.com",
"reverseFqdn": null
},
"etag": "W/\"2632aa72-3d2d-4529-b38e-b622b4202925\"",
"id":
"/subscriptions/guid/resourceGroups/myResourceGroup/providers/Microsoft.Network/publicIPAddresses/myPublicIP",
"idleTimeoutInMinutes": 4,
"ipAddress": null,
"ipConfiguration": null,
"location": "eastus",
"name": "myPublicIP",
"provisioningState": "Succeeded",
"publicIpAddressVersion": "IPv4",
"publicIpAllocationMethod": "Dynamic",
"resourceGroup": "myResourceGroup",
"resourceGuid": "4c65de38-71f5-4684-be10-75e605b3e41f",
"tags": null,
"type": "Microsoft.Network/publicIPAddresses"
}
}
You define rules that allow or deny specific traffic. To allow inbound connections on port 22 (to enable SSH
access), create an inbound rule with az network nsg rule create. The following example creates a rule named
myNetworkSecurityGroupRuleSSH:
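az network nsg rule create \
    --resource-group myResourceGroup \
    --nsg-name myNetworkSecurityGroup \
    --name myNetworkSecurityGroupRuleSSH \
    --protocol tcp \
    --priority 1000 \
    --destination-port-range 22 \
    --access allow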
To allow inbound connections on port 80 (for web traffic), add another network security group rule. The following
example creates a rule named myNetworkSecurityGroupRuleWeb:
az network nsg rule create \
--resource-group myResourceGroup \
--nsg-name myNetworkSecurityGroup \
--name myNetworkSecurityGroupRuleWeb \
--protocol tcp \
--priority 1001 \
--destination-port-range 80 \
--access allow
Examine the network security group and rules with az network nsg show:
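az network nsg show \
    --resource-group myResourceGroup \
    --name myNetworkSecurityGroup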
Output:
{
"defaultSecurityRules": [
{
"access": "Allow",
"description": "Allow inbound traffic from all VMs in VNET",
"destinationAddressPrefix": "VirtualNetwork",
"destinationPortRange": "*",
"direction": "Inbound",
"etag": "W/\"3371b313-ea9f-4687-a336-a8ebdfd80523\"",
"id":
"/subscriptions/guid/resourceGroups/myResourceGroup/providers/Microsoft.Network/networkSecurityGroups/myNetwor
kSecurityGroup/defaultSecurityRules/AllowVnetInBound",
"name": "AllowVnetInBound",
"priority": 65000,
"protocol": "*",
"provisioningState": "Succeeded",
"resourceGroup": "myResourceGroup",
"sourceAddressPrefix": "VirtualNetwork",
"sourcePortRange": "*"
},
{
"access": "Allow",
"description": "Allow inbound traffic from azure load balancer",
"destinationAddressPrefix": "*",
"destinationPortRange": "*",
"direction": "Inbound",
"etag": "W/\"3371b313-ea9f-4687-a336-a8ebdfd80523\"",
"id":
"/subscriptions/guid/resourceGroups/myResourceGroup/providers/Microsoft.Network/networkSecurityGroups/myNetwor
kSecurityGroup/defaultSecurityRules/AllowAzureLoadBalancerInBound",
"name": "AllowAzureLoadBalancerInBound",
"priority": 65001,
"protocol": "*",
"provisioningState": "Succeeded",
"resourceGroup": "myResourceGroup",
"sourceAddressPrefix": "AzureLoadBalancer",
"sourcePortRange": "*"
},
{
"access": "Deny",
"description": "Deny all inbound traffic",
"destinationAddressPrefix": "*",
"destinationPortRange": "*",
"direction": "Inbound",
"etag": "W/\"3371b313-ea9f-4687-a336-a8ebdfd80523\"",
"id":
"/subscriptions/guid/resourceGroups/myResourceGroup/providers/Microsoft.Network/networkSecurityGroups/myNetwor
kSecurityGroup/defaultSecurityRules/DenyAllInBound",
"name": "DenyAllInBound",
"priority": 65500,
"priority": 65500,
"protocol": "*",
"provisioningState": "Succeeded",
"resourceGroup": "myResourceGroup",
"sourceAddressPrefix": "*",
"sourcePortRange": "*"
},
{
"access": "Allow",
"description": "Allow outbound traffic from all VMs to all VMs in VNET",
"destinationAddressPrefix": "VirtualNetwork",
"destinationPortRange": "*",
"direction": "Outbound",
"etag": "W/\"3371b313-ea9f-4687-a336-a8ebdfd80523\"",
"id":
"/subscriptions/guid/resourceGroups/myResourceGroup/providers/Microsoft.Network/networkSecurityGroups/myNetwor
kSecurityGroup/defaultSecurityRules/AllowVnetOutBound",
"name": "AllowVnetOutBound",
"priority": 65000,
"protocol": "*",
"provisioningState": "Succeeded",
"resourceGroup": "myResourceGroup",
"sourceAddressPrefix": "VirtualNetwork",
"sourcePortRange": "*"
},
{
"access": "Allow",
"description": "Allow outbound traffic from all VMs to Internet",
"destinationAddressPrefix": "Internet",
"destinationPortRange": "*",
"direction": "Outbound",
"etag": "W/\"3371b313-ea9f-4687-a336-a8ebdfd80523\"",
"id":
"/subscriptions/guid/resourceGroups/myResourceGroup/providers/Microsoft.Network/networkSecurityGroups/myNetwor
kSecurityGroup/defaultSecurityRules/AllowInternetOutBound",
"name": "AllowInternetOutBound",
"priority": 65001,
"protocol": "*",
"provisioningState": "Succeeded",
"resourceGroup": "myResourceGroup",
"sourceAddressPrefix": "*",
"sourcePortRange": "*"
},
{
"access": "Deny",
"description": "Deny all outbound traffic",
"destinationAddressPrefix": "*",
"destinationPortRange": "*",
"direction": "Outbound",
"etag": "W/\"3371b313-ea9f-4687-a336-a8ebdfd80523\"",
"id":
"/subscriptions/guid/resourceGroups/myResourceGroup/providers/Microsoft.Network/networkSecurityGroups/myNetwor
kSecurityGroup/defaultSecurityRules/DenyAllOutBound",
"name": "DenyAllOutBound",
"priority": 65500,
"protocol": "*",
"provisioningState": "Succeeded",
"resourceGroup": "myResourceGroup",
"sourceAddressPrefix": "*",
"sourcePortRange": "*"
}
],
"etag": "W/\"3371b313-ea9f-4687-a336-a8ebdfd80523\"",
"id":
"/subscriptions/guid/resourceGroups/myResourceGroup/providers/Microsoft.Network/networkSecurityGroups/myNetwor
kSecurityGroup",
"location": "eastus",
"name": "myNetworkSecurityGroup",
"networkInterfaces": null,
"provisioningState": "Succeeded",
"resourceGroup": "myResourceGroup",
"resourceGuid": "47a9964e-23a3-438a-a726-8d60ebbb1c3c",
"securityRules": [
{
"access": "Allow",
"description": null,
"destinationAddressPrefix": "*",
"destinationPortRange": "22",
"direction": "Inbound",
"etag": "W/\"9e344b60-0daa-40a6-84f9-0ebbe4a4b640\"",
"id":
"/subscriptions/guid/resourceGroups/myResourceGroup/providers/Microsoft.Network/networkSecurityGroups/myNetwor
kSecurityGroup/securityRules/myNetworkSecurityGroupRuleSSH",
"name": "myNetworkSecurityGroupRuleSSH",
"priority": 1000,
"protocol": "Tcp",
"provisioningState": "Succeeded",
"resourceGroup": "myResourceGroup",
"sourceAddressPrefix": "*",
"sourcePortRange": "*"
},
{
"access": "Allow",
"description": null,
"destinationAddressPrefix": "*",
"destinationPortRange": "80",
"direction": "Inbound",
"etag": "W/\"9e344b60-0daa-40a6-84f9-0ebbe4a4b640\"",
"id":
"/subscriptions/guid/resourceGroups/myResourceGroup/providers/Microsoft.Network/networkSecurityGroups/myNetwor
kSecurityGroup/securityRules/myNetworkSecurityGroupRuleWeb",
"name": "myNetworkSecurityGroupRuleWeb",
"priority": 1001,
"protocol": "Tcp",
"provisioningState": "Succeeded",
"resourceGroup": "myResourceGroup",
"sourceAddressPrefix": "*",
"sourcePortRange": "*"
}
],
"subnets": null,
"tags": null,
"type": "Microsoft.Network/networkSecurityGroups"
}
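Create the NIC with az network nic create, attaching the public IP address and network security group; the
resource names match the output below:
az network nic create \
    --resource-group myResourceGroup \
    --name myNic \
    --vnet-name myVnet \
    --subnet mySubnet \
    --public-ip-address myPublicIP \
    --network-security-group myNetworkSecurityGroup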
Output:
{
"NewNIC": {
"dnsSettings": {
"appliedDnsServers": [],
"dnsServers": [],
"internalDnsNameLabel": null,
"internalDomainNameSuffix": "brqlt10lvoxedgkeuomc4pm5tb.bx.internal.cloudapp.net",
"internalFqdn": null
},
"enableAcceleratedNetworking": false,
"enableIpForwarding": false,
"etag": "W/\"04b5ab44-d8f4-422a-9541-e5ae7de8466d\"",
"id":
"/subscriptions/guid/resourceGroups/myResourceGroup/providers/Microsoft.Network/networkInterfaces/myNic",
"ipConfigurations": [
{
"applicationGatewayBackendAddressPools": null,
"etag": "W/\"04b5ab44-d8f4-422a-9541-e5ae7de8466d\"",
"id":
"/subscriptions/guid/resourceGroups/myResourceGroup/providers/Microsoft.Network/networkInterfaces/myNic/ipConf
igurations/ipconfig1",
"loadBalancerBackendAddressPools": null,
"loadBalancerInboundNatRules": null,
"name": "ipconfig1",
"primary": true,
"privateIpAddress": "192.168.1.4",
"privateIpAddressVersion": "IPv4",
"privateIpAllocationMethod": "Dynamic",
"provisioningState": "Succeeded",
"publicIpAddress": {
"dnsSettings": null,
"etag": null,
"id":
"/subscriptions/guid/resourceGroups/myResourceGroup/providers/Microsoft.Network/publicIPAddresses/myPublicIP",
"idleTimeoutInMinutes": null,
"ipAddress": null,
"ipConfiguration": null,
"location": null,
"name": null,
"provisioningState": null,
"publicIpAddressVersion": null,
"publicIpAllocationMethod": null,
"resourceGroup": "myResourceGroup",
"resourceGuid": null,
"tags": null,
"type": null
},
"resourceGroup": "myResourceGroup",
"subnet": {
"addressPrefix": null,
"etag": null,
"id":
"/subscriptions/guid/resourceGroups/myResourceGroup/providers/Microsoft.Network/virtualNetworks/myVnet/subnets
/mySubnet",
"ipConfigurations": null,
"name": null,
"networkSecurityGroup": null,
"provisioningState": null,
"resourceGroup": "myResourceGroup",
"resourceNavigationLinks": null,
"routeTable": null
}
}
],
"location": "eastus",
"macAddress": null,
"name": "myNic",
"networkSecurityGroup": {
"defaultSecurityRules": null,
"etag": null,
"etag": null,
"id":
"/subscriptions/guid/resourceGroups/myResourceGroup/providers/Microsoft.Network/networkSecurityGroups/myNetwor
kSecurityGroup",
"location": null,
"name": null,
"networkInterfaces": null,
"provisioningState": null,
"resourceGroup": "myResourceGroup",
"resourceGuid": null,
"securityRules": null,
"subnets": null,
"tags": null,
"type": null
},
"primary": null,
"provisioningState": "Succeeded",
"resourceGroup": "myResourceGroup",
"resourceGuid": "b3dbaa0e-2cf2-43be-a814-5cc49fea3304",
"tags": null,
"type": "Microsoft.Network/networkInterfaces",
"virtualMachine": null
}
}
Create an availability set with az vm availability-set create. The following example creates an availability set
named myAvailabilitySet:
az vm availability-set create \
--resource-group myResourceGroup \
--name myAvailabilitySet
Create a VM
You've created the network resources to support Internet-accessible VMs. Now create a VM and secure it with an
SSH key. In this example, let's create an Ubuntu VM based on the most recent LTS. You can find additional images
with az vm image list, as described in finding Azure VM images.
Specify an SSH key to use for authentication. If you do not have an SSH public key pair, you can create one, or use
the --generate-ssh-keys parameter to create it for you. If you already have a key pair, this parameter uses the
existing keys in ~/.ssh.
Create the VM by bringing all the resources and information together with the az vm create command. The
following example creates a VM named myVM:
az vm create \
--resource-group myResourceGroup \
--name myVM \
--location eastus \
--availability-set myAvailabilitySet \
--nics myNic \
--image UbuntuLTS \
--admin-username azureuser \
--generate-ssh-keys
SSH to your VM with the DNS entry you provided when you created the public IP address. This FQDN is shown in
the output as you create your VM:
{
"fqdns": "mypublicdns.eastus.cloudapp.azure.com",
"id": "/subscriptions/guid/resourceGroups/myResourceGroup/providers/Microsoft.Compute/virtualMachines/myVM",
"location": "eastus",
"macAddress": "00-0D-3A-13-71-C8",
"powerState": "VM running",
"privateIpAddress": "192.168.1.5",
"publicIpAddress": "13.90.94.252",
"resourceGroup": "myResourceGroup"
}
ssh azureuser@mypublicdns.eastus.cloudapp.azure.com
Output:
* Documentation: https://help.ubuntu.com
* Management: https://landscape.canonical.com
* Support: https://ubuntu.com/advantage
The programs included with the Ubuntu system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.
azureuser@myVM:~$
You can install NGINX and see the traffic flow to the VM. Install NGINX as follows:
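# On the Ubuntu VM created above, install NGINX with apt
sudo apt-get update
sudo apt-get install -y nginx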
To see the default NGINX site in action, open your web browser and enter your FQDN:
Export as a template
What if you now want to create an additional development environment with the same parameters, or a
production environment that matches it? Resource Manager uses JSON templates that define all the parameters
for your environment. You build out entire environments by referencing this JSON template. You can build JSON
templates manually or export an existing environment to create the JSON template for you. Use az group export
to export your resource group as follows:
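az group export --name myResourceGroup > myResourceGroup.json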
This command creates the myResourceGroup.json file in your current working directory. When you create an
environment from this template, you are prompted for all the resource names. You can populate these names in
your template file by adding the --include-parameter-default-value parameter to the az group export command.
Edit your JSON template to specify the resource names, or create a parameters.json file that specifies the resource
names.
To create an environment from your template, use az group deployment create as follows:
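# Deploys into a new resource group; add --parameters @parameters.json if you created a parameters file
az group deployment create \
    --resource-group myNewResourceGroup \
    --template-file myResourceGroup.json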
You might want to read more about how to deploy from templates. Learn about how to incrementally update
environments, use the parameters file, and access templates from a single storage location.
Next steps
Now you're ready to begin working with multiple networking components and VMs. You can use this sample
environment to build out your application by using the core components introduced here.
How to create a Linux virtual machine with Azure
Resource Manager templates
This article shows you how to quickly deploy a Linux virtual machine (VM) with Azure Resource Manager
templates and the Azure CLI 2.0. You can also perform these steps with the Azure CLI 1.0.
Templates overview
Azure Resource Manager templates are JSON files that define the infrastructure and configuration of your Azure
solution. By using a template, you can repeatedly deploy your solution throughout its lifecycle and have
confidence your resources are deployed in a consistent state. To learn more about the format of the template and
how you construct it, see Create your first Azure Resource Manager template. To view the JSON syntax for
resource types, see Define resources in Azure Resource Manager templates.
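As a hedged sketch of deploying a quickstart template stored in GitHub with az group deployment create, where
<template-uri> is a placeholder for the raw URL of the template's azuredeploy.json file:
az group deployment create \
    --resource-group myResourceGroup \
    --template-uri <template-uri>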
In the previous example, you specified a template stored in GitHub. You can also download or create a template
and specify the local path with the --template-file parameter.
az vm show \
--resource-group myResourceGroup \
--name sshvm \
--show-details \
--query publicIps \
--output tsv
You can then SSH to your VM as normal. Provide your own public IP address from the preceding command:
ssh azureuser@<ipAddress>
Next steps
In this example, you created a basic Linux VM. For more Resource Manager templates that include application
frameworks or create more complex environments, browse the Azure quickstart templates gallery.
Create a copy of a Linux VM by using Azure CLI 2.0
and Managed Disks
This article shows you how to create a copy of your Azure virtual machine (VM) running Linux using the Azure CLI
2.0 and the Azure Resource Manager deployment model. You can also perform these steps with the Azure CLI 1.0.
You can also upload and create a VM from a VHD.
Prerequisites
Install Azure CLI 2.0
Sign in to an Azure account with az login.
Have an Azure VM to use as the source for your copy.
First, deallocate the source VM with az vm deallocate:
az vm deallocate \
--resource-group myResourceGroup \
--name myVM
List the VMs and their OS disk names with az vm list:
az vm list -g myResourceGroup \
--query '[].{Name:name,DiskName:storageProfile.osDisk.name}' \
--output table
Name DiskName
------ --------
myVM myDisk
2. Copy the disk by creating a new managed disk using az disk create. The following example creates a disk
named myCopiedDisk from the managed disk named myDisk:
az disk create --resource-group myResourceGroup \
--name myCopiedDisk --source myDisk
3. Verify the managed disks now in your resource group by using az disk list. The following example lists the
managed disks in the resource group named myResourceGroup:
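az disk list \
    --resource-group myResourceGroup \
    --output table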
2. Create a public IP by using az network public-ip create. The following example creates a public IP named
myPublicIP with the DNS name of mypublicdns. (The DNS name must be unique, so provide a unique
name.)
3. Create the NIC using az network nic create. The following example creates a NIC named myNic that's
attached to the mySubnet subnet:
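# myVnet and myPublicIP are assumed from the virtual network and public IP created in the earlier steps
az network nic create \
    --resource-group myResourceGroup \
    --name myNic \
    --vnet-name myVnet \
    --subnet mySubnet \
    --public-ip-address myPublicIP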
Step 4: Create a VM
You can now create a VM by using az vm create.
Specify the copied managed disk to use as the OS disk (--attach-os-disk), as follows:
az vm create --resource-group myResourceGroup \
--name myCopiedVM --nics myNic \
--size Standard_DS1_v2 --os-type Linux \
--attach-os-disk myCopiedDisk
Next steps
To learn how to use Azure CLI to manage your new VM, see Azure CLI commands for the Azure Resource
Manager.
How to encrypt virtual disks on a Linux VM
3/8/2018 • 10 min to read
For enhanced virtual machine (VM) security and compliance, virtual disks and the VM itself can be encrypted.
VMs are encrypted using cryptographic keys that are secured in an Azure Key Vault. You control these
cryptographic keys and can audit their use. This article details how to encrypt virtual disks on a Linux VM using
the Azure CLI 2.0. You can also perform these steps with the Azure CLI 1.0.
Quick commands
If you need to quickly accomplish the task, the following section details the base commands to encrypt virtual
disks on your VM. More detailed information and context for each step can be found in the rest of the document.
You need the latest Azure CLI 2.0 installed and logged in to an Azure account using az login. In the following
examples, replace example parameter names with your own values. Example parameter names include
myResourceGroup, myKey, and myVM.
First, enable the Azure Key Vault provider within your Azure subscription with az provider register and create a
resource group with az group create. The following example creates a resource group named myResourceGroup in
the eastus location:
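az provider register --namespace Microsoft.KeyVault
az group create --name myResourceGroup --location eastus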
Create an Azure Key Vault with az keyvault create and enable the Key Vault for use with disk encryption. Specify a
unique Key Vault name for keyvault_name as follows:
keyvault_name=myuniquekeyvaultname
az keyvault create \
--name $keyvault_name \
--resource-group myResourceGroup \
--location eastus \
--enabled-for-disk-encryption True
Create a cryptographic key in your Key Vault with az keyvault key create. The following example creates a key
named myKey:
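az keyvault key create \
--vault-name $keyvault_name \
--name myKey \
--protection software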
Create a service principal using Azure Active Directory with az ad sp create-for-rbac. The service principal handles
the authentication and exchange of cryptographic keys from Key Vault. The following example reads in the values
for the service principal ID and password for use in later commands:
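One way to read both values into shell variables in a single step:
read sp_id sp_password <<< $(az ad sp create-for-rbac --query [appId,password] --output tsv)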
The password is only output when you create the service principal. If desired, view and record the password (
echo $sp_password ). You can list your service principals with az ad sp list and view additional information about a
specific service principal with az ad sp show.
Set permissions on your Key Vault with az keyvault set-policy. In the following example, the service principal ID is
supplied from the preceding command:
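A sketch; the exact key and secret permissions shown are an assumption:
az keyvault set-policy \
--name $keyvault_name \
--spn $sp_id \
--key-permissions wrapKey \
--secret-permissions set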
Create a VM with az vm create and attach a 5 GB data disk. Only certain marketplace images support disk
encryption. The following example creates a VM named myVM using a CentOS 7.2n image:
az vm create \
--resource-group myResourceGroup \
--name myVM \
--image OpenLogic:CentOS:7.2n:7.2.20160629 \
--admin-username azureuser \
--generate-ssh-keys \
--data-disk-sizes-gb 5
SSH to your VM using the publicIpAddress shown in the output of the preceding command. Create a partition and
filesystem, then mount the data disk. For more information, see Connect to a Linux VM to mount the new disk.
Close your SSH session.
Encrypt your VM with az vm encryption enable. The following example uses the $sp_id and $sp_password
variables from the preceding az ad sp create-for-rbac command:
az vm encryption enable \
--resource-group myResourceGroup \
--name myVM \
--aad-client-id $sp_id \
--aad-client-secret $sp_password \
--disk-encryption-keyvault $keyvault_name \
--key-encryption-key myKey \
--volume-type all
It takes some time for the disk encryption process to complete. Monitor the status of the process with az vm
encryption show:
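az vm encryption show --resource-group myResourceGroup --name myVM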
The status shows EncryptionInProgress. Wait until the status for the OS disk reports VMRestartPending, then
restart your VM with az vm restart:
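az vm restart --resource-group myResourceGroup --name myVM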
The disk encryption process is finalized during the boot process, so wait a few minutes before checking the status
of encryption again with az vm encryption show:
The status should now report both the OS disk and data disk as Encrypted.
Encryption process
Disk encryption relies on the following additional components:
Azure Key Vault - used to safeguard cryptographic keys and secrets used for the disk encryption/decryption
process.
If one exists, you can use an existing Azure Key Vault. You do not have to dedicate a Key Vault to
encrypting disks.
To separate administrative boundaries and key visibility, you can create a dedicated Key Vault.
Azure Active Directory - handles the secure exchanging of required cryptographic keys and authentication
for requested actions.
You can typically use an existing Azure Active Directory instance for housing your application.
The service principal provides a secure mechanism to request and be issued the appropriate
cryptographic keys. You are not developing an actual application that integrates with Azure Active
Directory.
The Azure Key Vault containing the cryptographic keys and associated compute resources such as storage and the
VM itself must reside in the same region. Create an Azure Key Vault with az keyvault create and enable the Key
Vault for use with disk encryption. Specify a unique Key Vault name for keyvault_name as follows:
keyvault_name=myuniquekeyvaultname
az keyvault create \
--name $keyvault_name \
--resource-group myResourceGroup \
--location eastus \
--enabled-for-disk-encryption True
You can store cryptographic keys using software or Hardware Security Module (HSM) protection. Using an HSM
requires a premium Key Vault. There is an additional cost to creating a premium Key Vault rather than a standard
Key Vault that stores software-protected keys. To create a premium Key Vault, add --sku Premium to the command
in the preceding step. The examples in this article use software-protected keys since you created a standard
Key Vault.
For both protection models, the Azure platform needs to be granted access to request the cryptographic keys
when the VM boots to decrypt the virtual disks. Create a cryptographic key in your Key Vault with az keyvault key
create. The following example creates a key named myKey:
Next, create the service principal with az ad sp create-for-rbac, as shown in the quick commands. The password is only displayed when you create the service principal. If desired, view and record the password (
echo $sp_password ). You can list your service principals with az ad sp list and view additional information about a
specific service principal with az ad sp show.
To successfully encrypt or decrypt virtual disks, permissions on the cryptographic key stored in Key Vault must be
set to permit the Azure Active Directory service principal to read the keys. Set permissions on your Key Vault with
az keyvault set-policy, as shown in the quick commands, with the service principal ID supplied from the preceding command.
Create the VM with az vm create, as shown in the quick commands:
az vm create \
--resource-group myResourceGroup \
--name myVM \
--image OpenLogic:CentOS:7.2n:7.2.20160629 \
--admin-username azureuser \
--generate-ssh-keys \
--data-disk-sizes-gb 5
SSH to your VM using the publicIpAddress shown in the output of the preceding command. Create a partition and
filesystem, then mount the data disk. For more information, see Connect to a Linux VM to mount the new disk.
Close your SSH session.
Encrypt your VM with az vm encryption enable:
az vm encryption enable \
--resource-group myResourceGroup \
--name myVM \
--aad-client-id $sp_id \
--aad-client-secret $sp_password \
--disk-encryption-keyvault $keyvault_name \
--key-encryption-key myKey \
--volume-type all
It takes some time for the disk encryption process to complete. Monitor the status of the process with az vm
encryption show:
Wait until the status for the OS disk reports VMRestartPending, then restart your VM with az vm restart:
The disk encryption process is finalized during the boot process, so wait a few minutes before checking the status
of encryption again with az vm encryption show:
The status should now report both the OS disk and data disk as Encrypted.
Next steps
For more information about managing Azure Key Vault, including deleting cryptographic keys and vaults, see
Manage Key Vault using CLI.
For more information about disk encryption, such as preparing an encrypted custom VM to upload to Azure,
see Azure Disk Encryption.
Get started with Role-Based Access Control in the
Azure portal
4/11/2018 • 3 min to read
Security-oriented companies should focus on giving employees the exact permissions they need. Too many
permissions can expose an account to attackers. Too few permissions means that employees can't get their work
done efficiently. Azure Role-Based Access Control (RBAC) helps address this problem by offering fine-grained
access management for Azure.
Using RBAC, you can segregate duties within your team and grant only the amount of access to users that they
need to perform their jobs. Instead of giving everybody unrestricted permissions in your Azure subscription or
resources, you can allow only certain actions. For example, use RBAC to let one employee manage virtual
machines in a subscription, while another can manage SQL databases within the same subscription.
The RBAC role that you assign dictates what resources the user, group, or application can manage within that
scope.
Built-in roles
Azure RBAC has three basic roles that apply to all resource types:
Owner has full access to all resources including the right to delegate access to others.
Contributor can create and manage all types of Azure resources but can’t grant access to others.
Reader can view existing Azure resources.
The rest of the RBAC roles in Azure allow management of specific Azure resources. For example, the Virtual
Machine Contributor role allows the user to create and manage virtual machines. It does not give them access to
the virtual network or the subnet that the virtual machine connects to.
RBAC built-in roles lists the roles available in Azure. It specifies the operations and scope that each built-in role
grants to users. If you're looking to define your own roles for even more control, see how to build Custom roles in
Azure RBAC.
Next steps
Get started with Role-Based Access Control in the Azure portal.
See the RBAC built-in roles
Define your own Custom roles in Azure RBAC
Apply policies to Linux VMs with Azure Resource
Manager
4/9/2018 • 2 min to read
By using policies, an organization can enforce various conventions and rules throughout the enterprise.
Enforcement of the desired behavior can help mitigate risk while contributing to the success of the organization. In
this article, we describe how you can use Azure Resource Manager policies to define the desired behavior for your
organization's Virtual Machines.
For an introduction to policies, see What is Azure Policy?.
Use a wildcard to modify the preceding policy to allow any Ubuntu LTS image:
{
"field": "Microsoft.Compute/virtualMachines/imageSku",
"like": "*LTS"
}
Managed disks
To require the use of managed disks, use the following policy:
{
  "if": {
    "anyOf": [
      {
        "allOf": [
          {
            "field": "type",
            "equals": "Microsoft.Compute/virtualMachines"
          },
          {
            "field": "Microsoft.Compute/virtualMachines/osDisk.uri",
            "exists": true
          }
        ]
      },
      {
        "allOf": [
          {
            "field": "type",
            "equals": "Microsoft.Compute/VirtualMachineScaleSets"
          },
          {
            "anyOf": [
              {
                "field": "Microsoft.Compute/VirtualMachineScaleSets/osDisk.vhdContainers",
                "exists": true
              },
              {
                "field": "Microsoft.Compute/VirtualMachineScaleSets/osdisk.imageUrl",
                "exists": true
              }
            ]
          }
        ]
      }
    ]
  },
  "then": {
    "effect": "deny"
  }
}
To restrict deployments to a set of approved images, use a policy rule that checks the image ID, as in the following condition:
{
  "field": "Microsoft.Compute/imageId",
  "in": ["{imageId1}","{imageId2}"]
}
To deny the use of a specific VM extension, use a rule like the following (replace {extension-type} with the extension type to block):
{
  "if": {
    "allOf": [
      {
        "field": "type",
        "equals": "Microsoft.Compute/virtualMachines/extensions"
      },
      {
        "field": "Microsoft.Compute/virtualMachines/extensions/publisher",
        "equals": "Microsoft.Compute"
      },
      {
        "field": "Microsoft.Compute/virtualMachines/extensions/type",
        "equals": "{extension-type}"
      }
    ]
  },
  "then": {
    "effect": "deny"
  }
}
Next steps
After defining a policy rule (as shown in the preceding examples), you need to create the policy definition and
assign it to a scope. The scope can be a subscription, resource group, or resource. To assign policies, see Use
Azure portal to assign and manage resource policies, Use PowerShell to assign policies, or Use Azure CLI to
assign policies.
For an introduction to resource policies, see What is Azure Policy?.
For guidance on how enterprises can use Resource Manager to effectively manage subscriptions, see Azure
enterprise scaffold - prescriptive subscription governance.
How to set up Key Vault for virtual machines with the
Azure CLI 2.0
4/9/2018 • 1 min to read
In the Azure Resource Manager stack, secrets/certificates are modeled as resources that are provided by Key Vault.
To learn more about Azure Key Vault, see What is Azure Key Vault? In order for Key Vault to be used with Azure
Resource Manager VMs, the EnabledForDeployment property on Key Vault must be set to true. This article shows
you how to set up Key Vault for use with Azure virtual machines (VMs) using the Azure CLI 2.0. You can also
perform these steps with the Azure CLI 1.0.
To perform these steps, you need the latest Azure CLI 2.0 installed and logged in to an Azure account using az
login.
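For example, to create a new Key Vault with the property already enabled (the resource group and location here are illustrative):
az keyvault create \
--name ContosoKeyVault \
--resource-group myResourceGroup \
--location westus \
--enabled-for-deployment true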
If you create the Key Vault with a Resource Manager template instead, set the enabledForDeployment property on the vault resource:
{
  "type": "Microsoft.KeyVault/vaults",
  "name": "ContosoKeyVault",
  "apiVersion": "2015-06-01",
  "location": "<location-of-key-vault>",
  "properties": {
    "enabledForDeployment": "true",
    ....
    ....
  }
}
Next steps
For other options that you can configure when you create a Key Vault by using templates, see Create a key vault.
Quick steps: Create and use an SSH public-private
key pair for Linux VMs in Azure
4/18/2018 • 3 min to read
With a secure shell (SSH) key pair, you can create virtual machines (VMs) in Azure that use SSH keys for
authentication, eliminating the need for passwords to log in. This article shows you how to quickly generate and
use an SSH public-private key file pair for Linux VMs. You can complete these steps with the Azure Cloud Shell,
a macOS or Linux host, the Windows Subsystem for Linux, and other tools that support OpenSSH.
For more background and examples, see detailed steps to create SSH key pairs.
For additional ways to generate and use SSH keys on a Windows computer, see How to use SSH keys with
Windows on Azure.
If you use the Azure CLI 2.0 to create your VM, you can optionally generate SSH public and private key files by
running the az vm create command with the --generate-ssh-keys option. The keys are stored in the ~/.ssh
directory. Note that this command option does not overwrite keys if they already exist in that location.
To view the contents of your public key file, use cat as follows:
cat ~/.ssh/id_rsa.pub
If you copy and paste the contents of the public key file to use in the Azure portal or a Resource Manager
template, make sure you don't copy any additional whitespace. For example, if you use macOS, you can pipe the
public key file (by default, ~/.ssh/id_rsa.pub ) to pbcopy to copy the contents (there are other Linux programs
that do the same thing, such as xclip).
The public key that you place on your Linux VM in Azure is by default stored in ~/.ssh/id_rsa.pub , unless you
changed the location when you created the keys. If you use the Azure CLI 2.0 to create your VM with an existing
public key, specify the value or location of this public key by running the az vm create command with the
--ssh-key-value option.
SSH to your VM
With the public key deployed on your Azure VM, and the private key on your local system, SSH to your VM
using the IP address or DNS name of your VM. Replace azureuser and myvm.westus.cloudapp.azure.com in the
following command with the administrator user name and the fully qualified domain name (or IP address):
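ssh azureuser@myvm.westus.cloudapp.azure.com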
If you provided a passphrase when you created your key pair, enter the passphrase when prompted during the
login process. (The server is added to your ~/.ssh/known_hosts folder, and you won't be asked to connect again
until the public key on your Azure VM changes or the server name is removed from ~/.ssh/known_hosts .)
VMs created using SSH keys are configured with password authentication disabled by default, which makes
brute-force password-guessing attacks far more expensive and therefore difficult.
Next steps
This article described creating a simple SSH key pair for quick usage.
If you need more assistance to work with your SSH key pair, see Detailed steps to create and manage SSH
key pairs.
If you have problems with SSH connections to an Azure VM, see Troubleshoot SSH connections to an
Azure Linux VM.
How to use SSH keys with Windows on Azure
4/18/2018 • 5 min to read
This article introduces ways to generate and use secure shell (SSH) keys on a Windows computer to create and
connect to a Linux virtual machine (VM) in Azure. To use SSH keys from a Linux or macOS client, see the quick or
detailed guidance.
For more background and information, see the quick or detailed steps to create the keys with ssh-keygen .
Create SSH keys with PuTTYgen
If you prefer to use a GUI-based tool to create SSH keys, you can use the PuTTYgen key generator, included with
the PuTTY download package.
To create an SSH RSA key pair with PuTTYgen:
1. Start PuTTYgen.
2. Click Generate. By default PuTTYgen generates a 2048-bit SSH-2 RSA key.
3. Mouse over the blank area to generate some randomness for the key.
4. After the public key is generated, optionally enter and confirm a passphrase. You will be prompted for the
passphrase when you authenticate to the VM with your SSH key. Without a passphrase, if someone obtains
your private key, they can log in to any VM or service that uses that key. We recommend you create a
passphrase. However, if you forget the passphrase, there is no way to recover it.
5. The public key is displayed at the top of the window. You copy and paste this one-line format public key into
the Azure portal or an Azure Resource Manager template when you create a Linux VM. You can also click
Save public key to save a copy to your computer:
6. Optionally, to save the private key in PuTTY private key format (.ppk file), click Save private key. You need
the .ppk file if you want to use PuTTY later to make an SSH connection to the VM.
If you want to save the private key in the OpenSSH format, the private key format used by many SSH
clients, click Conversions > Export OpenSSH key.
If you configured a passphrase when you created your key pair, enter the passphrase when prompted during the
login process.
Connect with PuTTY
If you installed the PuTTY download package and previously generated a PuTTY private key (.ppk file), you can
connect to the Linux VM with PuTTY.
1. Start PuTTY.
2. Fill in the host name or IP address of your VM from the Azure portal:
3. Before selecting Open, click Connection > SSH > Auth tab. Browse to and select your PuTTY private key
(.ppk file):
Next steps
For detailed steps, options, and advanced examples of working with SSH keys, see detailed steps to create
SSH key pairs.
You can also use PowerShell in Azure Cloud Shell to generate SSH keys and make SSH connections to
Linux VMs. See the PowerShell quickstart.
If you have trouble using SSH to connect to your Linux VMs, see Troubleshoot SSH connections to an
Azure Linux VM.
Detailed steps: Create and manage SSH keys for
authentication to a Linux VM in Azure
4/18/2018 • 10 min to read
With a secure shell (SSH) key pair, you can create a Linux virtual machine on Azure that defaults to using SSH
keys for authentication, eliminating the need for passwords to log in. VMs created with the Azure portal, Azure
CLI, Resource Manager templates, or other tools can include your SSH public key as part of the deployment,
which sets up SSH key authentication for SSH connections.
This article provides detailed background and steps to create and manage an SSH RSA public-private key file pair
for SSH client connections. If you want quick commands, see How to create an SSH public-private key pair for
Linux VMs in Azure.
For additional ways to generate and use SSH keys on a Windows computer, see How to use SSH keys with
Windows on Azure.
Detailed example
The following example shows additional command options to create an SSH RSA key pair. If an SSH key pair
exists in the current location, those files are overwritten.
ssh-keygen \
-t rsa \
-b 4096 \
-C "azureuser@myserver" \
-f ~/.ssh/mykeys/myprivatekey \
-N mypassphrase
Command explained
ssh-keygen = the program used to create the keys
-t rsa = type of key to create, in this case in the RSA format
-b 4096 = the number of bits in the key, in this case 4096
-C "azureuser@myserver" = a comment appended to the end of the public key file to easily identify it. Normally an
email address is used as the comment, but use whatever works best for your infrastructure.
-f ~/.ssh/mykeys/myprivatekey = the filename of the private key file, if you choose not to use the default name. A
corresponding public key file appended with .pub is generated in the same directory. The directory must exist.
-N mypassphrase = an additional passphrase used to access the private key file.
Example of ssh-keygen
This article uses the default key pair name. Having a key pair named id_rsa is the default; some tools might expect the
id_rsa private key file name, so having one is a good idea. The directory ~/.ssh/ is the default location for SSH
key pairs and the SSH config file. If not specified with a full path, ssh-keygen creates the keys in the current
working directory, not the default ~/.ssh .
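A minimal invocation that accepts the default file location and prompts for a passphrase:
ssh-keygen -t rsa -b 2048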
List of the ~/.ssh directory
ls -al ~/.ssh
-rw------- 1 azureuser staff 1675 Aug 25 18:04 id_rsa
-rw-r--r-- 1 azureuser staff 410 Aug 25 18:04 id_rsa.pub
Key passphrase
Enter passphrase (empty for no passphrase):
It is strongly recommended to add a passphrase to your private key. Without a passphrase to protect the key file,
anyone with the file can use it to log in to any server that has the corresponding public key. Adding a passphrase
offers more protection in case someone is able to gain access to your private key file, giving you time to change
the keys.
cat ~/.ssh/id_rsa.pub
ssh-rsa
XXXXXXXXXXc2EAAAADAXABAAABAXC5Am7+fGZ+5zXBGgXS6GUvmsXCLGc7tX7/rViXk3+eShZzaXnt75gUmT1I2f75zFn2hlAIDGKWf4g12KWc
Zxy81TniUOTjUsVlwPymXUXxESL/UfJKfbdstBhTOdy5EG9rYWA0K43SJmwPhH28BpoLfXXXXXG+/ilsXXXXXKgRLiJ2W19MzXHp8z3Lxw7r9w
x3HaVlP4XiFv9U4hGcp8RMI1MP1nNesFlOBpG4pV2bJRBTXNXeY4l6F8WZ3C4kuf8XxOo08mXaTpvZ3T1841altmNTZCcPkXuMrBjYSJbA8npo
XAXNwiivyoe3X2KMXXXXXdXXXXXXXXXXCXXXXX/ azureuser@myserver
If you copy and paste the contents of the public key file into the Azure portal or a Resource Manager template,
make sure you don't copy any additional whitespace or introduce additional linebreaks. For example, if you use
macOS, you can pipe the public key file (by default, ~/.ssh/id_rsa.pub ) to pbcopy to copy the contents (there are
other Linux programs that do the same thing, such as xclip).
If you prefer to use a public key that is in a multiline format, you can generate an RFC4716 formatted key in a
pem container from the public key you previously created.
To create a RFC4716 formatted key from an existing SSH public key:
ssh-keygen \
-f ~/.ssh/id_rsa.pub \
-e \
-m RFC4716 > ~/.ssh/id_ssh2.pem
If you provided a passphrase when you created your key pair, enter the passphrase when prompted during the
login process. (The server is added to your ~/.ssh/known_hosts folder, and you won't be asked to connect again
until the public key on your Azure VM changes or the server name is removed from ~/.ssh/known_hosts .)
To avoid typing your private key passphrase with every SSH connection, you can add the key to the SSH agent with ssh-add:
ssh-add ~/.ssh/id_rsa
To create and edit an SSH config file for your connections, create the file and open it in an editor:
touch ~/.ssh/config
vim ~/.ssh/config
Example configuration
Add configuration settings appropriate for your host VM.
# Azure Keys
Host myvm
Hostname 102.160.203.241
User azureuser
# ./Azure Keys
You can add configurations for additional hosts to enable each to use its own dedicated key pair. See SSH config
file for more advanced configuration options.
Now that you have an SSH key pair and a configured SSH config file, you are able to log in to your Linux VM
quickly and securely. When you run the following command, SSH locates and loads any settings from the
Host myvm block in the SSH config file.
ssh myvm
The first time you log in to a server using an SSH key, the command prompts you for the passphrase for that key
file.
Next steps
Next up is to create Azure Linux VMs using the new SSH public key. Azure VMs that are created with an SSH
public key as the login are better secured than VMs created with the default login method, passwords.
Create a Linux virtual machine with the Azure portal
Create a Linux virtual machine with the Azure CLI
Create a Linux VM using an Azure template
Overview of the features in Azure Backup
4/18/2018 • 20 min to read
Azure Backup is the Azure-based service you can use to back up (or protect) and restore your data in the Microsoft
cloud. Azure Backup replaces your existing on-premises or off-site backup solution with a cloud-based solution
that is reliable, secure, and cost-competitive. Azure Backup offers multiple components that you download and
deploy on the appropriate computer, server, or in the cloud. The component, or agent, that you deploy depends on
what you want to protect. All Azure Backup components (no matter whether you're protecting data on-premises or
in the cloud) can be used to back up data to a Recovery Services vault in Azure. See the Azure Backup components
table (later in this article) for information about which component to use to protect specific data, applications, or
workloads.
Watch a video overview of Azure Backup
Azure Backup (MARS) agent:
Benefits: back up files and folders on physical or virtual Windows OS (VMs can be on-premises or in Azure); no separate backup server required.
Limits: backup 3x per day; not application aware; file, folder, and volume-level restore only; no support for Linux.
What is protected: files, folders, System State.
Where backups are stored: Recovery Services vault.
Workload, where it runs, and the component(s) to use:
Hyper-V virtual machine (Windows), on Windows Server: System Center DPM (+ the Azure Backup agent), or Azure Backup Server (includes the Azure Backup agent)
Hyper-V virtual machine (Linux), on Windows Server: System Center DPM (+ the Azure Backup agent), or Azure Backup Server (includes the Azure Backup agent)
VMware virtual machine, on Windows Server: System Center DPM (+ the Azure Backup agent), or Azure Backup Server (includes the Azure Backup agent)
Microsoft SQL Server, on Windows Server: System Center DPM (+ the Azure Backup agent), or Azure Backup Server (includes the Azure Backup agent)
Azure IaaS VMs (Windows), running in Azure: Azure Backup (VM extension)
Azure IaaS VMs (Linux), running in Azure: Azure Backup (VM extension)
Linux support
The following list shows the Azure Backup components that have support for Linux:
System Center DPM: file-consistent backup of Linux guest VMs on Hyper-V and VMware; VM restore of Hyper-V and VMware Linux guest VMs.
Azure Backup Server: file-consistent backup of Linux guest VMs on Hyper-V and VMware; VM restore of Hyper-V and VMware Linux guest VMs; file-consistent backup is not available for Azure VMs.
NOTE
Do not modify or edit the staging location.
Storage targets and related features compared across the components include: the Recovery Services vault, disk storage, tape storage, compression (in the Recovery Services vault), incremental backup, and disk deduplication.
The Recovery Services vault is the preferred storage target across all components. System Center DPM and Azure
Backup Server also provide the option to have a local disk copy. However, only System Center DPM provides the
option to write data to a tape storage device.
Compression
Backups are compressed to reduce the required storage space. The only component that does not use compression
is the VM extension. The VM extension copies all backup data from your storage account to the Recovery Services
vault in the same region. No compression is used when transferring the data. Transferring the data without
compression slightly inflates the storage used. However, storing the data without compression allows for faster
restoration, should you need that recovery point.
Disk Deduplication
You can take advantage of deduplication when you deploy System Center DPM or Azure Backup Server on a
Hyper-V virtual machine. Windows Server performs data deduplication (at the host level) on virtual hard disks
(VHDs) that are attached to the virtual machine as backup storage.
NOTE
Deduplication is not available in Azure for any Backup component. When System Center DPM and Backup Server are
deployed in Azure, the storage disks attached to the VM cannot be deduplicated.
With Full Backup, each backup copy contains the entire data source. Full backup consumes a large amount of
network bandwidth and storage, each time a backup copy is transferred.
Differential backup stores only the blocks that changed since the initial full backup, which results in a smaller
amount of network and storage consumption. Differential backups don't retain redundant copies of unchanged
data. However, because the data blocks that remain unchanged between subsequent backups are transferred and
stored, differential backups are inefficient. In the second month, changed blocks A2, A3, A4, and A9 are backed up.
In the third month, these same blocks are backed up again, along with changed block A5. The changed blocks
continue to be backed up until the next full backup happens.
Incremental Backup achieves high storage and network efficiency by storing only the blocks of data that
changed since the previous backup. With incremental backup, there is no need to take regular full backups. In the
example, after taking the full backup in the first month, blocks A2, A3, A4, and A9 are marked as changed, and
transferred to the second month. In the third month, only changed block A5 is marked and transferred. Moving
less data saves storage and network resources, which decreases TCO.
Security
The security features that follow, network security (to Azure) and data security (in Azure), apply across the Azure Backup agent, System Center DPM, Azure Backup Server, and Azure IaaS VM backup.
Network security
All backup traffic from your servers to the Recovery Services vault is encrypted using Advanced Encryption
Standard 256. The backup data is sent over a secure HTTPS link. The backup data is also stored in the Recovery
Services vault in encrypted form. Only you, the Azure customer, have the passphrase to unlock this data. Microsoft
cannot decrypt the backup data at any point.
WARNING
Once you establish the Recovery Services vault, only you have access to the encryption key. Microsoft never maintains a
copy of your encryption key, and does not have access to the key. If the key is misplaced, Microsoft cannot recover the
backup data.
Data security
Backing up Azure VMs requires setting up encryption within the virtual machine. Use BitLocker on Windows
virtual machines and dm-crypt on Linux virtual machines. Azure Backup does not automatically encrypt backup
data that comes through this path.
Network
Network compression (to the backup server) and network compression (to the Recovery Services vault) are compared across the Azure Backup agent, System Center DPM, Azure Backup Server, and Azure IaaS VM backup.
The VM extension (on the IaaS VM) reads the data directly from the Azure storage account over the storage
network, so it is not necessary to compress this traffic.
If you use a System Center DPM server or Azure Backup Server as a secondary backup server, compress the data
going from the primary server to the backup server. Compressing the data before backing it up to DPM or Azure
Backup Server saves bandwidth.
Network Throttling
The Azure Backup agent offers network throttling, which allows you to control how network bandwidth is used
during data transfer. Throttling can be helpful if you need to back up data during work hours but do not want the
backup process to interfere with other internet traffic. Throttling for data transfer applies to backup and restore
activities.
The following compares backup frequency and retention across the Azure Backup agent, System Center DPM, Azure Backup Server, and Azure IaaS VM backup:
Backup frequency (to Recovery Services vault): Azure Backup agent: three backups per day. System Center DPM: two backups per day. Azure Backup Server: two backups per day. Azure IaaS VM backup: one backup per day.
Backup frequency (to disk): Azure Backup agent: not applicable. System Center DPM: every 15 minutes for SQL Server, every hour for other workloads. Azure Backup Server: every 15 minutes for SQL Server, every hour for other workloads. Azure IaaS VM backup: not applicable.
Retention options: daily, weekly, monthly, and yearly for all four components.
Maximum retention period: depends on backup frequency for all four components.
Recovery points on local disk: Azure Backup agent: not applicable. System Center DPM: 64 for File Servers, 448 for Application Servers. Azure Backup Server: 64 for File Servers, 448 for Application Servers. Azure IaaS VM backup: not applicable.
The following compares backup and disaster recovery (DR) along three concepts:
Recovery point objective (RPO): the amount of acceptable data loss if a recovery needs to be done.
Backup: backup solutions have wide variability in their acceptable RPO. Virtual machine backups usually have an RPO of one day, while database backups have RPOs as low as 15 minutes.
Disaster recovery: DR solutions have low RPOs. The DR copy can be behind by a few seconds or a few minutes.
Recovery time objective (RTO): the amount of time that it takes to complete a recovery or restore.
Backup: because of the larger RPO, the amount of data that a backup solution needs to process is typically much higher, which leads to longer RTOs. For example, it can take days to restore data from tapes, depending on the time it takes to transport the tape from an off-site location.
Disaster recovery: DR solutions have smaller RTOs because they are more in sync with the source. Fewer changes need to be processed.
Retention: how long data needs to be stored.
Backup: for scenarios that require operational recovery (data corruption, inadvertent file deletion, OS failure), backup data is typically retained for 30 days or less. From a compliance standpoint, data might need to be stored for months or even years. Backup data is ideally suited for archiving in such cases.
Disaster recovery: DR needs only operational recovery data, which typically takes a few hours or up to a day. Because of the fine-grained data capture used in DR solutions, using DR data for long-term retention is not recommended.
Next steps
Use one of the following tutorials for detailed, step-by-step instructions for protecting data on Windows Server, or
protecting a virtual machine (VM) in Azure:
Back up Files and Folders
Backup Azure Virtual Machines
For details about protecting other workloads, try one of these articles:
Back up your Windows Server
Back up application workloads
Backup Azure IaaS VMs
Back up a virtual machine in Azure with the CLI
2/14/2018 • 4 min to read
The Azure CLI is used to create and manage Azure resources from the command line or in scripts. You can protect
your data by taking backups at regular intervals. Azure Backup creates recovery points that can be stored in geo-
redundant recovery vaults. This article details how to back up a virtual machine (VM) in Azure with the Azure CLI.
You can also perform these steps with Azure PowerShell or in the Azure portal.
This quick start enables backup on an existing Azure VM. If you need to create a VM, you can create a VM with the
Azure CLI.
To install and use the CLI locally, you must run Azure CLI version 2.0.18 or later. To find the CLI version, run
az --version . If you need to install or upgrade, see Install Azure CLI 2.0.
By default, the Recovery Services vault is set for Geo-Redundant storage. Geo-Redundant storage ensures your
backup data is replicated to a secondary Azure region that is hundreds of miles away from the primary region.
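If you need to create a vault first, a minimal sketch (the vault name myRecoveryServicesVault and location eastus are illustrative):
az backup vault create \
--resource-group myResourceGroup \
--name myRecoveryServicesVault \
--location eastus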
Enable backup for an Azure VM
Create a protection policy to define when a backup job runs and how long the recovery points are stored. The
default protection policy runs a backup job each day and retains recovery points for 30 days. You can use these
default policy values to quickly protect your VM. To enable backup protection for a VM, use az backup protection
enable-for-vm. Specify the resource group and VM to protect, then the policy to use:
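A sketch, assuming the built-in DefaultPolicy and a VM named myVM (illustrative) in myResourceGroup:
az backup protection enable-for-vm \
--resource-group myResourceGroup \
--vault-name myRecoveryServicesVault \
--vm myVM \
--policy-name DefaultPolicy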
The output is similar to the following example, which shows the backup job is InProgress:
When the Status of the backup job reports Completed, your VM is protected with Recovery Services and has a full
recovery point stored.
Clean up deployment
When no longer needed, you can disable protection on the VM, remove the restore points and Recovery Services
vault, then delete the resource group and associated VM resources. If you used an existing VM, you can skip the
final az group delete command to leave the resource group and VM in place.
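A sketch of these cleanup steps, assuming a vault named myRecoveryServicesVault protecting a VM named myVM (both names illustrative):
az backup protection disable \
--resource-group myResourceGroup \
--vault-name myRecoveryServicesVault \
--container-name myVM \
--item-name myVM \
--delete-backup-data true
az group delete --name myResourceGroup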
If you want to try a Backup tutorial that explains how to restore data for your VM, go to Next steps.
Next steps
In this quick start, you created a Recovery Services vault, enabled protection on a VM, and created the initial
recovery point. To learn more about Azure Backup and Recovery Services, continue to the tutorials.
Back up multiple Azure VMs
Use Azure portal to back up multiple virtual machines
2/16/2018 • 6 min to read
When you back up data in Azure, you store that data in an Azure resource called a Recovery Services vault. The
Recovery Services vault resource is available from the Settings menu of most Azure services, which makes it easy
to back up data. However, individually working with each database or virtual machine in your business is tedious. What if
you want to back up the data for all virtual machines in one department, or in one location? It is easy to back up
multiple virtual machines by creating a backup policy and applying that policy to the desired virtual machines. This
tutorial explains how to:
Create a Recovery Services vault
Define a backup policy
Apply the backup policy to protect multiple virtual machines
Trigger an on-demand backup job for the protected virtual machines
2. In the Recovery Services vaults menu, click Add to open the Recovery Services vault menu.
3. In the Recovery Services vault menu,
Type myRecoveryServicesVault in Name,
The current subscription ID appears in Subscription. If you have additional subscriptions, you could
choose another subscription for the new vault.
For Resource group select Use existing and choose myResourceGroup. If myResourceGroup doesn't
exist, select Create new and type myResourceGroup.
From the Location drop-down menu, choose West Europe.
Click Create to create your Recovery Services vault.
A Recovery Services vault must be in the same location as the virtual machines being protected. If you have virtual
machines in multiple regions, create a Recovery Services vault in each region. This tutorial creates a Recovery
Services vault in West Europe because that is where myVM (the virtual machine created with the quickstart) was
created.
It can take several minutes for the Recovery Services vault to be created. Monitor the status notifications in the
upper right-hand area of the portal. Once your vault is created, it appears in the list of Recovery Services vaults.
When you create a Recovery Services vault, by default the vault has geo-redundant storage. To provide data
resiliency, geo-redundant storage replicates the data multiple times across two Azure regions.
4. To create a new policy, on the Backup policy menu, from the Choose backup policy drop-down menu,
select Create New.
5. In the Backup policy menu, for Policy Name type Finance. Enter the following changes for the Backup
policy:
For Backup frequency set the timezone for Central Time. Since the sports complex is in Texas, the
owner wants the timing to be local. Leave the backup frequency set to Daily at 3:30AM.
For Retention of daily backup point, set the period to 90 days.
For Retention of weekly backup point, use the Monday restore point and retain it for 52 weeks.
For Retention of monthly backup point, use the restore point from First Sunday of the month, and
retain it for 36 months.
Deselect the Retention of yearly backup point option. The leader of Finance doesn't want to keep data
longer than 36 months.
Click OK to create the backup policy.
After creating the backup policy, associate the policy with the virtual machines.
6. In the Select virtual machines dialog select myVM and click OK to deploy the backup policy to the virtual
machines.
All virtual machines that are in the same location, and are not already associated with a backup policy,
appear. myVMH1 and myVMR1 are selected to be associated with the Finance policy.
When the deployment completes, you receive a notification that deployment successfully completed.
Initial backup
You have enabled backup for the Recovery Services vaults, but an initial backup has not been created. It is a
disaster recovery best practice to trigger the first backup, so that your data is protected.
To run an on-demand backup job:
1. On the vault dashboard, click 3 under Backup Items, to open the Backup Items menu.
The Backup Items menu opens.
2. On the Backup Items menu, click Azure Virtual Machine to open the list of virtual machines associated
with the vault.
3. On the Backup Items list, click the ellipses ... to open the Context menu.
4. On the Context menu, select Backup now.
Deployment notifications let you know the backup job has been triggered, and that you can monitor the
progress of the job on the Backup jobs page. Depending on the size of your virtual machine, creating the
initial backup may take a while.
When the initial backup job completes, you can see its status in the Backup job menu. The on-demand
backup job created the initial restore point for myVM. If you want to back up other virtual machines, repeat
these steps for each virtual machine.
Clean up resources
If you plan to continue on to work with subsequent tutorials, do not clean up the resources created in this tutorial.
If you do not plan to continue, use the following steps to delete all resources created by this tutorial in the Azure
portal.
1. On the myRecoveryServicesVault dashboard, click 3 under Backup Items, to open the Backup Items
menu.
2. On the Backup Items menu, click Azure Virtual Machine to open the list of virtual machines associated
with the vault.
5. In the Stop Backup menu, select the upper drop-down menu and choose Delete Backup Data.
6. In the Type the name of the Backup item dialog, type myVM.
7. Once the backup item is verified (a checkmark appears), Stop backup button is enabled. Click Stop
Backup to stop the policy and delete the restore points.
8. In the myRecoveryServicesVault menu, click Delete.
Once the vault is deleted, you return to the list of Recovery Services vaults.
Next steps
In this tutorial you used the Azure portal to:
Create a Recovery Services vault
Set the vault to protect virtual machines
Create a custom backup and retention policy
Assign the policy to protect multiple virtual machines
Trigger an on-demand back up for virtual machines
Continue to the next tutorial to restore an Azure virtual machine from disk.
Restore VMs using CLI
Restore a disk and create a recovered VM in Azure
4/17/2018 • 5 min to read
Azure Backup creates recovery points that are stored in geo-redundant recovery vaults. When you restore from a
recovery point, you can restore the whole VM or individual files. This article explains how to restore a complete
VM using CLI. In this tutorial you learn how to:
List and select recovery points
Restore a disk from a recovery point
Create a VM from the restored disk
For information on using PowerShell to restore a disk and create a recovered VM, see Back up and restore Azure
VMs with PowerShell.
If you choose to install and use the CLI locally, this tutorial requires that you are running the Azure CLI version
2.0.18 or later. Run az --version to find the version. If you need to install or upgrade, see Install Azure CLI 2.0.
Prerequisites
This tutorial requires a Linux VM that has been protected with Azure Backup. To simulate an accidental VM
deletion and recovery process, you create a VM from a disk in a recovery point. If you need a Linux VM that has
been protected with Azure Backup, see Back up a virtual machine in Azure with the CLI.
Backup overview
When Azure initiates a backup, the backup extension on the VM takes a point-in-time snapshot. The backup
extension is installed on the VM when the first backup is requested. Azure Backup can also take a snapshot of the
underlying storage if the VM is not running when the backup takes place.
By default, Azure Backup takes a file system consistent backup. Once Azure Backup takes the snapshot, the data is
transferred to the Recovery Services vault. To maximize efficiency, Azure Backup identifies and transfers only the
blocks of data that have changed since the previous backup.
When the data transfer is complete, the snapshot is removed and a recovery point is created.
List available recovery points
To restore a disk, you select a recovery point as the source for the recovery data. As the default policy creates a
recovery point each day and retains them for 30 days, you can keep a set of recovery points that allows you to
select a particular point in time for recovery.
To see a list of available recovery points, use az backup recoverypoint list. The recovery point name is used to
recover disks. In this tutorial, we want the most recent recovery point available. The --query [0].name parameter
selects the most recent recovery point name as follows:
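A sketch, assuming the vault and VM names from the backup quick start (myRecoveryServicesVault and myVM are illustrative):
az backup recoverypoint list \
--resource-group myResourceGroup \
--vault-name myRecoveryServicesVault \
--container-name myVM \
--item-name myVM \
--query [0].name \
--output tsv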
Restore a VM disk
To restore your disk from the recovery point, you first create an Azure storage account. This storage account is
used to store the restored disk. In additional steps, the restored disk is used to create a VM.
1. To create a storage account, use az storage account create. The storage account name must be all lowercase,
and be globally unique. Replace mystorageaccount with your own unique name:
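For example (the Standard_LRS SKU is an assumption):
az storage account create \
--resource-group myResourceGroup \
--name mystorageaccount \
--sku Standard_LRS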
2. Restore the disk from your recovery point with az backup restore restore-disks. Replace mystorageaccount
with the name of the storage account you created in the preceding command. Replace
myRecoveryPointName with the recovery point name you obtained in the output from the previous az
backup recoverypoint list command:
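A sketch, again assuming the vault and VM names from the backup quick start:
az backup restore restore-disks \
--resource-group myResourceGroup \
--vault-name myRecoveryServicesVault \
--container-name myVM \
--item-name myVM \
--storage-account mystorageaccount \
--rp-name myRecoveryPointName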
The output is similar to the following example, which shows the restore job is InProgress:
Name Operation Status Item Name Start Time UTC Duration
-------- --------------- ---------- ----------- ------------------- --------------
7f2ad916 Restore InProgress myvm 2017-09-19T19:39:52 0:00:34.520850
a0a8e5e6 Backup Completed myvm 2017-09-19T03:09:21 0:15:26.155212
fe5d0414 ConfigureBackup Completed myvm 2017-09-19T03:03:57 0:00:31.191807
When the Status of the restore job reports Completed, the disk has been restored to the storage account.
2. Your unmanaged disk is secured in the storage account. The following commands get information about
your unmanaged disk and create a variable named uri that is used in the next step when you create the
Managed Disk.
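One possible sketch of building the uri variable; the queries assume the restored VHD is the first container and blob in the storage account, which may not hold for your account:
container=$(az storage container list --account-name mystorageaccount --query [0].name --output tsv)
blob=$(az storage blob list --container-name $container --account-name mystorageaccount --query [0].name --output tsv)
uri=$(az storage blob url --container-name $container --account-name mystorageaccount --name $blob --output tsv)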
3. Now you can create a Managed Disk from your recovered disk with az disk create. The uri variable from the
preceding step is used as the source for your Managed Disk.
az disk create \
--resource-group myResourceGroup \
--name myRestoredDisk \
--source $uri
4. As you now have a Managed Disk from your restored disk, clean up the unmanaged disk and storage
account with az storage account delete. Replace mystorageaccount with the name of your storage account
as follows:
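az storage account delete \
--resource-group myResourceGroup \
--name mystorageaccount
Create a VM from the restored disk
1. Create a VM that uses the restored Managed Disk as its OS disk with az vm create. The VM name myRestoredVM is illustrative:
az vm create \
--resource-group myResourceGroup \
--name myRestoredVM \
--attach-os-disk myRestoredDisk \
--os-type linux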
2. To confirm that your VM has been created from your recovered disk, list the VMs in your resource group
with az vm list as follows:
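az vm list --resource-group myResourceGroup --output table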
Next steps
In this tutorial, you restored a disk from a recovery point and then created a VM from the disk. You learned how to:
List and select recovery points
Restore a disk from a recovery point
Create a VM from the restored disk
Advance to the next tutorial to learn about restoring individual files from a recovery point.
Restore files to a virtual machine in Azure
Restore files to a virtual machine in Azure
2/14/2018 • 6 min to read
Azure Backup creates recovery points that are stored in geo-redundant recovery vaults. When you restore from a
recovery point, you can restore the whole VM or individual files. This article details how to restore individual files.
In this tutorial you learn how to:
List and select recovery points
Connect a recovery point to a VM
Restore files from a recovery point
If you choose to install and use the CLI locally, this tutorial requires that you are running the Azure CLI version
2.0.18 or later. Run az --version to find the version. If you need to install or upgrade, see Install Azure CLI 2.0.
Prerequisites
This tutorial requires a Linux VM that has been protected with Azure Backup. To simulate an accidental file deletion
and recovery process, you delete a page from a web server. If you need a Linux VM that runs a webserver and has
been protected with Azure Backup, see Back up a virtual machine in Azure with the CLI.
Backup overview
When Azure initiates a backup, the backup extension on the VM takes a point-in-time snapshot. The backup
extension is installed on the VM when the first backup is requested. Azure Backup can also take a snapshot of the
underlying storage if the VM is not running when the backup takes place.
By default, Azure Backup takes a file system consistent backup. Once Azure Backup takes the snapshot, the data is
transferred to the Recovery Services vault. To maximize efficiency, Azure Backup identifies and transfers only the
blocks of data that have changed since the previous backup.
When the data transfer is complete, the snapshot is removed and a recovery point is created.
2. To confirm that your web site currently works, open a web browser to the public IP address of your VM.
Leave the web browser window open.
3. Connect to your VM with SSH. Replace publicIpAddress with the public IP address that you obtained in a
previous command:
ssh publicIpAddress
4. Delete the default page from the web server at /var/www/html/index.nginx-debian.html as follows:
sudo rm /var/www/html/index.nginx-debian.html
5. In your web browser, refresh the web page. The web site no longer loads the page, as shown in the
following example:
2. To obtain the script that connects, or mounts, the recovery point to your VM, use az backup restore files
mount-rp. The following example obtains the script for the VM named myVM that is protected in
myRecoveryServicesVault.
Replace myRecoveryPointName with the name of the recovery point that you obtained in the preceding
command:
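A sketch, assuming the VM is protected as myVM in myRecoveryServicesVault:
az backup restore files mount-rp \
--resource-group myResourceGroup \
--vault-name myRecoveryServicesVault \
--container-name myVM \
--item-name myVM \
--rp-name myRecoveryPointName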
3. To transfer the script to your VM, use Secure Copy (SCP ). Provide the name of your downloaded script, and
replace publicIpAddress with the public IP address of your VM. Make sure you include the trailing : at the
end of the SCP command as follows:
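scp myVM_we_1571974050985163527.sh publicIpAddress: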
1. When the script has been copied to the VM, connect to your VM with SSH again:
ssh publicIpAddress
2. To allow your script to run correctly, add execute permissions with chmod. Enter the name of your own
script:
chmod +x myVM_we_1571974050985163527.sh
3. To mount the recovery point, run the script. Enter the name of your own script:
./myVM_we_1571974050985163527.sh
As the script runs, you are prompted to enter a password to access the recovery point. Enter the password
shown in the output from the previous az backup restore files mount-rp command that generated the
recovery script.
The output from the script gives you the path for the recovery point. The following example output shows
that the recovery point is mounted at /home/azureuser/myVM-20170919213536/Volume1:
Connection succeeded!
Please wait while we attach volumes of the recovery point to this machine...
************ Volumes of the recovery point and their mount paths on this machine ************
4. Use cp to copy the NGINX default web page from the mounted recovery point back to the original file
location. Replace the /home/azureuser/myVM-20170919213536/Volume1 mount point with your own
location:
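A sketch; the path under Volume1 mirrors the original file location, so adjust it to what your mounted recovery point actually contains:
sudo cp /home/azureuser/myVM-20170919213536/Volume1/var/www/html/index.nginx-debian.html /var/www/html/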
5. In your web browser, refresh the web page. The web site now loads correctly again, as shown in the
following example:
6. Close the SSH session to your VM as follows:
exit
7. Unmount the recovery point from your VM with az backup restore files unmount-rp. The following example
unmounts the recovery point from the VM named myVM in myRecoveryServicesVault.
Replace myRecoveryPointName with the name of your recovery point that you obtained in the previous
commands:
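A sketch, using the same names as the mount command:
az backup restore files unmount-rp \
--resource-group myResourceGroup \
--vault-name myRecoveryServicesVault \
--container-name myVM \
--item-name myVM \
--rp-name myRecoveryPointName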
Next steps
In this tutorial, you connected a recovery point to a VM and restored files for a web server. You learned how to:
List and select recovery points
Connect a recovery point to a VM
Restore files from a recovery point
Advance to the next tutorial to learn about how to back up Windows Server to Azure.
Back up Windows Server to Azure
Prepay for Virtual Machines with Reserved VM
Instances
5/2/2018 • 2 min to read
Prepay for virtual machines and save money with Reserved Virtual Machine Instances. For more information, see
Reserved Virtual Machine Instances offering.
You can buy Reserved Virtual Machine Instances in the Azure portal. To buy a Reserved Virtual Machine Instance:
You must be in an Owner role for at least one Enterprise or Pay-As-You-Go subscription.
For Enterprise subscriptions, reservation purchases must be enabled in the EA portal.
For the Cloud Solution Provider (CSP) program, only admin agents or sales agents can purchase
reservations.
5. You can view the cost of the reservation when you select Calculate cost.
6. Select Purchase.
7. Select View this Reservation to see the status of your purchase.
Next steps
The reservation discount is applied automatically to the number of running virtual machines that match the
reservation scope and attributes. You can update the scope of the reservation through the Azure portal, PowerShell,
the CLI, or the API.
To learn how to manage a reservation, see Manage Azure Reserved Virtual Machine Instances.
To learn more about Reserved Virtual Machine Instances, see the following articles.
Save money on virtual machines with Reserved Virtual Machine Instances
Understand how the Reserved Virtual Machine Instance discount is applied
Understand Reserved Instance usage for your Pay-As-You-Go subscription
Understand Reserved Instance usage for your Enterprise enrollment
Windows software costs not included with Reserved Instances
Reserved Instances in Partner Center Cloud Solution Provider (CSP) program
Understanding Azure virtual machine usage
12/6/2017 • 7 min to read
By analyzing your Azure usage data, you can gain powerful consumption insights that enable better
cost management and allocation throughout your organization. This document provides a deep dive into your
Azure Compute consumption details. For more details on general Azure usage, navigate to Understanding your
bill.
Usage Date: the date when the resource was used. Example: “11/23/2017”
Meter Name: specific for each service in Azure. For compute, it is always “Compute Hours”. Example: “Compute Hours”
Service Type
The service type field in the Additional Info field corresponds to the exact VM size you deployed. Premium storage
VMs (SSD-based) and non-premium storage VMs (HDD-based) are priced the same. If you deploy an SSD-based
size, like Standard_DS2_v2, you see the non-SSD size (‘Standard_D2_v2 VM’) in the Meter Sub-Category column
and the SSD size (‘Standard_DS2_v2’) in the Additional Info field.
Region Names
The region name populated in the Resource Location field in the usage details varies from the region name used in
the Azure Resource Manager. Here is a mapping between the region values:
RESOURCE MANAGER REGION NAME: RESOURCE LOCATION IN USAGE DETAILS
australiaeast: AU East
australiasoutheast: AU Southeast
brazilsouth: BR South
CanadaCentral: CA Central
CanadaEast: CA East
CentralIndia: IN Central
centralus: Central US
eastus: East US
eastus2: East US 2
GermanyCentral: DE Central
GermanyNortheast: DE Northeast
japaneast: JA East
japanwest: JA West
KoreaCentral: KR Central
KoreaSouth: KR South
SouthIndia: IN South
UKNorth: US North
uksouth: UK South
UKSouth2: UK South 2
ukwest: UK West
WestIndia: IN West
westus: West US
westus2: US West 2
Next steps
To learn more about your usage details, see Understand your bill for Microsoft Azure.
Common Azure CLI 2.0 commands for managing
Azure resources
4/9/2018 • 1 min to read
The Azure CLI 2.0 allows you to create and manage your Azure resources on macOS, Linux, and Windows. This
article details some of the most common commands to create and manage virtual machines (VMs).
This article requires the Azure CLI version 2.0.4 or later. Run az --version to find the version. If you need to
upgrade, see Install Azure CLI 2.0. You can also use Cloud Shell from your browser.
Manage VM state
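For example, a few common state commands (assuming a VM named myVM in the resource group myResourceGroup):
az vm start --resource-group myResourceGroup --name myVM
az vm stop --resource-group myResourceGroup --name myVM
az vm deallocate --resource-group myResourceGroup --name myVM
az vm restart --resource-group myResourceGroup --name myVM
az vm delete --resource-group myResourceGroup --name myVM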
Get VM info
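For example:
az vm list --output table
az vm show --resource-group myResourceGroup --name myVM
az vm list-ip-addresses --resource-group myResourceGroup --name myVM
az vm get-instance-view --resource-group myResourceGroup --name myVM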
Next steps
For additional examples of the CLI commands, see the Create and Manage Linux VMs with the Azure CLI tutorial.
Move a Linux VM to another subscription or resource
group
4/9/2018 • 3 min to read
This article walks you through how to move a Linux VM between resource groups or subscriptions. Moving a VM
between subscriptions can be handy if you created a VM in a personal subscription and now want to move it to
your company's subscription.
IMPORTANT
You cannot move Managed Disks at this time.
New resource IDs are created as part of the move. Once the VM has been moved, you need to update your tools and scripts
to use the new resource IDs.
If the tenant IDs for the source and destination subscriptions are not the same, you must contact support to move
the resources to a new tenant.
To successfully move a VM, you need to move the VM and all its supporting resources. Use the az resource list
command to list all the resources in a resource group and their IDs. It helps to pipe the output of this command to
a file so you can copy and paste the IDs into later commands.
To move a VM and its resources to another resource group, use az resource move. The following example shows
how to move a VM and the most common resources it requires. Use the --ids parameter and pass in a comma-
separated list (without spaces) of IDs for the resources to move.
vm=/subscriptions/mySourceSubscriptionID/resourceGroups/mySourceResourceGroup/providers/Microsoft.Compute/virtu
alMachines/myVM
nic=/subscriptions/mySourceSubscriptionID/resourceGroups/mySourceResourceGroup/providers/Microsoft.Network/netw
orkInterfaces/myNIC
nsg=/subscriptions/mySourceSubscriptionID/resourceGroups/mySourceResourceGroup/providers/Microsoft.Network/netw
orkSecurityGroups/myNSG
pip=/subscriptions/mySourceSubscriptionID/resourceGroups/mySourceResourceGroup/providers/Microsoft.Network/publ
icIPAddresses/myPublicIPAddress
vnet=/subscriptions/mySourceSubscriptionID/resourceGroups/mySourceResourceGroup/providers/Microsoft.Network/vir
tualNetworks/myVNet
diag=/subscriptions/mySourceSubscriptionID/resourceGroups/mySourceResourceGroup/providers/Microsoft.Storage/sto
rageAccounts/mydiagnosticstorageaccount
storage=/subscriptions/mySourceSubscriptionID/resourceGroups/mySourceResourceGroup/providers/Microsoft.Storage/
storageAccounts/mystorageacountname
az resource move \
--ids $vm,$nic,$nsg,$pip,$vnet,$storage,$diag \
--destination-group "myDestinationResourceGroup"
If you want to move the VM and its resources to a different subscription, add the --destination-subscriptionId
parameter to specify the destination subscription.
You are asked to confirm that you want to move the specified resources. Type Y to confirm that you want to move
the resources.
Next steps
You can move many different types of resources between resource groups and subscriptions. For more
information, see Move resources to new resource group or subscription.
Resize a Linux virtual machine using CLI 2.0
4/25/2018 • 1 min to read
After you provision a virtual machine (VM), you can scale the VM up or down by changing the VM size. In some
cases, you must deallocate the VM first. You need to deallocate the VM if the desired size is not available on the
hardware cluster that is hosting the VM. This article details how to resize a Linux VM with the Azure CLI 2.0. You
can also perform these steps with the Azure CLI 1.0.
Resize a VM
To resize a VM, you need the latest Azure CLI 2.0 installed and logged in to an Azure account using az login.
1. View the list of available VM sizes on the hardware cluster where the VM is hosted with az vm list-vm-
resize-options. The following example lists VM sizes for the VM named myVM in the resource group
myResourceGroup:
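```bash
az vm list-vm-resize-options --resource-group myResourceGroup --name myVM --output table
```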
2. If the desired VM size is listed, resize the VM with az vm resize. The following example resizes the VM
named myVM to the Standard_DS3_v2 size:
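```bash
az vm resize --resource-group myResourceGroup --name myVM --size Standard_DS3_v2
```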
The VM restarts during this process. After the restart, your existing OS and data disks are remapped.
Anything on the temporary disk is lost.
3. If the desired VM size is not listed, you need to first deallocate the VM with az vm deallocate. This process
allows the VM to then be resized to any size available that the region supports and then started. The
following steps deallocate, resize, and then start the VM named myVM in the resource group named
myResourceGroup:
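A sketch of the sequence (the target size here is illustrative):
```bash
az vm deallocate --resource-group myResourceGroup --name myVM
az vm resize --resource-group myResourceGroup --name myVM --size Standard_GS3
az vm start --resource-group myResourceGroup --name myVM
```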
WARNING
Deallocating the VM also releases any dynamic IP addresses assigned to the VM. The OS and data disks are not
affected.
Next steps
For additional scalability, run multiple VM instances and scale out. For more information, see Automatically scale
Linux machines in a Virtual Machine Scale Set.
Change the OS disk used by an Azure VM using the
CLI
4/26/2018 • 1 min to read
If you have an existing VM, but you want to swap the disk for a backup disk or another OS disk, you can use the
Azure CLI to swap the OS disks. You don't have to delete and recreate the VM. You can even use a managed disk in
another resource group, as long as it isn't already in use.
The VM does need to be stopped/deallocated, then the resource ID of the managed disk can be replaced with the
resource ID of a different managed disk.
Make sure that the VM size and storage type are compatible with the disk you want to attach. For example, if the
disk you want to use is in Premium Storage, then the VM needs to be capable of Premium Storage (like a DS-series
size).
This article requires Azure CLI version 2.0.25 or greater. Run az --version to find the version. If you need to install
or upgrade, see Install Azure CLI 2.0.
Use az disk list to get a list of the disks in your resource group.
az disk list \
-g myResourceGroupDisk \
--query '[*].{diskId:id}' \
--output table
az vm stop \
-n myVM \
-g myResourceGroup
Use az vm update with the full resource ID of the new disk for the --os-disk parameter:
az vm update \
-g myResourceGroup \
-n myVM \
--os-disk /subscriptions/<subscription ID>/resourceGroups/swap/providers/Microsoft.Compute/disks/myDisk
az vm start \
-n myVM \
-g myResourceGroup
Next steps
To create a copy of a disk, see Snapshot a disk.
How to tag a Linux virtual machine in Azure
4/11/2018 • 2 min to read
This article describes different ways to tag a Linux virtual machine in Azure through the Resource Manager
deployment model. Tags are user-defined key/value pairs which can be placed directly on a resource or a resource
group. Azure currently supports up to 15 tags per resource and resource group. Tags may be placed on a resource
at the time of creation or added to an existing resource. Please note, tags are supported for resources created via
the Resource Manager deployment model only.
This template includes the following tags: Department, Application, and Created By. You can add/edit these tags
directly in the template if you would like different tag names.
As you can see, the tags are defined as key/value pairs, separated by a colon (:). The tags must be defined in this
format:
"tags": {
    "Key1": "Value1",
    "Key2": "Value2"
}
Save the template file after you finish editing it with the tags of your choice.
Next, in the Edit Parameters section, you can fill out the values for your tags.
Click Create to deploy this template with your tag values.
Add a new tag through the portal by defining your own Key/Value pair, and save it.
Your new tag should now appear in the list of tags for your resource.
Tagging with Azure CLI
To begin, you need the latest Azure CLI 2.0 installed and logged in to an Azure account using az login.
You can also perform these steps with the Azure CLI 1.0.
You can view all properties for a given Virtual Machine, including the tags, using this command:
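For example (using the resource names from this article):
```bash
az vm show --resource-group MyResourceGroup --name MyTestVM
```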
To add a new VM tag through the Azure CLI, you can use the az vm update command along with the --set
parameter:
az vm update \
--resource-group MyResourceGroup \
--name MyTestVM \
--set tags.myNewTagName1=myNewTagValue1 tags.myNewTagName2=myNewTagValue2
To remove tags, you can use the --remove parameter in the az vm update command.
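For example:
```bash
az vm update --resource-group MyResourceGroup --name MyTestVM --remove tags.myNewTagName1
```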
Now that we have applied tags to our resources using the Azure CLI and the portal, let's take a look at the usage
details to see the tags in the billing portal.
From the usage details, you can see all of the tags in the Tags column:
By analyzing these tags along with usage, organizations will be able to gain new insights into their consumption
data.
Next steps
To learn more about tagging your Azure resources, see Azure Resource Manager Overview and Using Tags to
organize your Azure Resources.
To see how tags can help you manage your use of Azure resources, see Understanding your Azure Bill and Gain
insights into your Microsoft Azure resource consumption.
Install and configure Remote Desktop to connect to a
Linux VM in Azure
2/27/2018 • 4 min to read
Linux virtual machines (VMs) in Azure are usually managed from the command line using a secure shell (SSH)
connection. If you are new to Linux, or for quick troubleshooting scenarios, using a remote desktop may be easier.
This article details how to install and configure a desktop environment (xfce) and remote desktop (xrdp) for your
Linux VM using the Resource Manager deployment model.
Prerequisites
This article requires an existing Ubuntu 16.04 LTS VM in Azure. If you need to create a VM, use one of the
following methods:
The Azure CLI 2.0
The Azure portal
If you are using Windows and need more information on using SSH, see How to use SSH keys with Windows.
Next, install xfce using apt as follows:
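A minimal sketch of the installation, assuming an Ubuntu 16.04 VM with apt:
```bash
sudo apt-get update
sudo apt-get -y install xfce4
# Install the xrdp remote desktop server and enable it to start on boot
sudo apt-get -y install xrdp
sudo systemctl enable xrdp
# Tell xrdp to use xfce for the session
echo xfce4-session >~/.xsession
```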
Restart the xrdp service for the changes to take effect as follows:
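```bash
sudo service xrdp restart
```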
NOTE
Specifying a password does not update your SSHD configuration to permit password logins if it currently does not. From a
security perspective, you may wish to connect to your VM with an SSH tunnel using key-based authentication and then
connect to xrdp. If so, skip the following step on creating a network security group rule to allow remote desktop traffic.
Troubleshoot
If you cannot connect to your Linux VM using a Remote Desktop client, use netstat on your Linux VM to verify
that your VM is listening for RDP connections as follows:
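```bash
sudo netstat -plnt | grep rdp
```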
The following example shows the VM listening on TCP port 3389 as expected:
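Output similar to the following (the PID and addresses are illustrative):
```
tcp     0     0 0.0.0.0:3389     0.0.0.0:*     LISTEN     53192/xrdp
```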
If the xrdp-sesman service is not listening, restart the service on an Ubuntu VM as follows:
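```bash
sudo service xrdp restart
```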
Review logs in /var/log on your Ubuntu VM for indications as to why the service may not be responding. You can
also monitor the syslog during a remote desktop connection attempt to view any errors:
tail -f /var/log/syslog
Other Linux distributions such as Red Hat Enterprise Linux and SUSE may have different ways to restart services
and alternate log file locations to review.
If you do not receive any response in your remote desktop client and do not see any events in the system log, this
behavior indicates that remote desktop traffic cannot reach the VM. Review your network security group rules to
ensure that you have a rule to permit TCP on port 3389. For more information, see Troubleshoot application
connectivity issues.
Next steps
For more information about creating and using SSH keys with Linux VMs, see Create SSH keys for Linux VMs in
Azure.
For information on using SSH from Windows, see How to use SSH keys with Windows.
Join a RedHat Linux VM to an Azure Active Directory
Domain Service
4/9/2018 • 1 min to read
This article shows you how to join a Red Hat Enterprise Linux (RHEL) 7 virtual machine to an Azure Active
Directory Domain Services (AADDS) managed domain. The requirements are:
an Azure account
SSH public and private key files
an Azure Active Directory Domain Services DC
Quick Commands
Replace any examples with your own settings.
Switch the Azure CLI to classic deployment mode
SSH to the VM
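A hedged sketch of these two steps, assuming the Azure CLI 1.0 (azure) and placeholder connection details:
```bash
# Switch the CLI to classic (Service Management) mode
azure config mode asm
# Connect to the VM over SSH (key path and address are placeholders)
ssh -i ~/.ssh/id_rsa azureuser@myvm.cloudapp.net
```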
Now that the required packages are installed on the Linux virtual machine, the next task is to join the virtual
machine to the managed domain.
Discover the AAD Domain Services managed domain
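Assuming realmd is installed, discovery looks like the following (the domain name is a placeholder):
```bash
sudo realm discover AADDSCONTOSO.COM
```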
Initialize Kerberos
Ensure that you specify a user who belongs to the 'AAD DC Administrators' group. Only these users can join
computers to the managed domain.
kinit [email protected]
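After obtaining a Kerberos ticket, a join along these lines completes the task (realmd syntax assumed; domain and user are placeholders):
```bash
sudo realm join --verbose AADDSCONTOSO.COM -U '[email protected]'
```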
Next Steps
Red Hat Update Infrastructure (RHUI) for on-demand Red Hat Enterprise Linux VMs in Azure
Set up Key Vault for virtual machines in Azure Resource Manager
Deploy and manage virtual machines by using Azure Resource Manager templates and the Azure CLI
Log in to a Linux virtual machine in Azure using
Azure Active Directory authentication (Preview)
5/8/2018 • 7 min to read
To improve the security of Linux virtual machines (VMs) in Azure, you can integrate with Azure Active Directory
(AD) authentication. When you use Azure AD authentication for Linux VMs, you centrally control and enforce
policies that allow or deny access to the VMs. This article shows you how to create and configure a Linux VM to
use Azure AD authentication.
NOTE
This feature is in preview and is not recommended for use with production virtual machines or workloads. Use this feature on
a test virtual machine that you expect to discard after testing.
There are many benefits of using Azure AD authentication to log in to Linux VMs in Azure, including:
Improved security:
You can use your corporate AD credentials to log in to Azure Linux VMs. There is no need to create local
administrator accounts and manage credential lifetime.
By reducing your reliance on local administrator accounts, you do not need to worry about credential
loss or theft, users configuring weak credentials, and so on.
The password complexity and password lifetime policies configured for your Azure AD directory help
secure Linux VMs as well.
To further secure login to Azure virtual machines, you can configure multi-factor authentication.
Seamless collaboration: With Role-Based Access Control (RBAC), you can specify who can sign in to a
given VM as a regular user or with administrator privileges. When users join or leave your team, you can
update the RBAC policy for the VM to grant access as appropriate. This experience is much simpler than
having to scrub VMs to remove unnecessary SSH public keys. When employees leave your organization
and their user account is disabled or removed from Azure AD, they no longer have access to your resources.
Supported Azure regions and Linux distributions
The following Linux distributions are currently supported during the preview of this feature:
DISTRIBUTION     VERSION
Ubuntu Server    Ubuntu 14.04 LTS, Ubuntu Server 16.04, and Ubuntu Server 17.10
The following Azure regions are currently supported during the preview of this feature:
All public Azure regions
IMPORTANT
To use this preview feature, deploy only a supported Linux distribution in a supported Azure region. The feature is not
supported in Azure Government or sovereign clouds.
If you choose to install and use the CLI locally, this tutorial requires that you are running the Azure CLI version
2.0.31 or later. Run az --version to find the version. If you need to install or upgrade, see Install Azure CLI 2.0.
az vm create \
--resource-group myResourceGroup \
--name myVM \
--image UbuntuLTS \
--admin-username azureuser \
--generate-ssh-keys
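Next, install the Azure AD login extension on the VM with az vm extension set; the publisher and extension names below follow the preview documentation for this feature:
```bash
az vm extension set \
    --publisher Microsoft.Azure.ActiveDirectory.LinuxSSH \
    --name AADLoginForLinux \
    --resource-group myResourceGroup \
    --vm-name myVM
```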
The provisioningState of Succeeded is shown once the extension is installed on the VM.
NOTE
To allow a user to log in to the VM over SSH, you must assign either the Virtual Machine Administrator Login or Virtual
Machine User Login role. An Azure user with the Owner or Contributor role assigned for a VM does not automatically have
privileges to log in to the VM over SSH.
The following example uses az role assignment create to assign the Virtual Machine Administrator Login role to
the VM for your current Azure user. The username of your active Azure account is obtained with az account show,
and the scope is set to the VM created in a previous step with az vm show. The scope could also be assigned at a
resource group or subscription level, and normal RBAC inheritance permissions apply. For more information, see
Role-Based Access Control.
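For example:
```bash
username=$(az account show --query user.name --output tsv)
vm=$(az vm show --resource-group myResourceGroup --name myVM --query id --output tsv)

az role assignment create \
    --role "Virtual Machine Administrator Login" \
    --assignee $username \
    --scope $vm
```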
For more information on how to use RBAC to manage access to your Azure subscription resources, see using the
Azure CLI 2.0, Azure portal, or Azure PowerShell.
You can also configure Azure AD to require multi-factor authentication for a specific user to sign in to the Linux
virtual machine. For more information, see Get started with Azure Multi-Factor Authentication in the cloud.
Log in to the Azure Linux virtual machine using your Azure AD credentials. The -l parameter lets you specify
your own Azure AD account address. Specify the public IP address of your VM as output in the previous
command:
ssh -l [email protected] publicIps
You are prompted to sign in to Azure AD with a one-time use code at https://microsoft.com/devicelogin. Copy and
paste the one-time use code into the device login page, as shown in the following example:
When prompted, enter your Azure AD login credentials at the login page. The following message is shown in the
web browser when you have successfully authenticated:
You have signed in to the Microsoft Azure Linux Virtual Machine Sign-In application on your device.
Close the browser window, return to the SSH prompt, and press the Enter key. You are now signed in to the Azure
Linux virtual machine with the role permissions as assigned, such as VM User or VM Administrator. If your user
account is assigned the Virtual Machine Administrator Login role, you can use sudo to run commands that
require root privileges.
Red Hat Update Infrastructure for on-demand Red Hat Enterprise Linux VMs in Azure
Red Hat Update Infrastructure (RHUI) allows cloud providers, such as Azure, to mirror Red Hat-hosted repository
content, create custom repositories with Azure-specific content, and make it available to end-user VMs.
Red Hat Enterprise Linux (RHEL) Pay-As-You-Go (PAYG) images come preconfigured to access Azure RHUI. No
additional configuration is needed. To get the latest updates, run sudo yum update after your RHEL instance is
ready. This service is included as part of the RHEL PAYG software fees.
# Azure US Government
13.72.186.193
# Azure Germany
51.5.243.77
51.4.228.145
3. Check the output, and then verify the keyid and the user ID packet.
Version: GnuPG v1.4.7 (GNU/Linux)
:public key packet:
version 4, algo 1, created 1446074508, expires 0
pkey[0]: [2048 bits]
pkey[1]: [17 bits]
keyid: EB3E94ADBE1229CF
:user ID packet: "Microsoft (Release signing) <[email protected]>"
:signature packet: algo 1, keyid EB3E94ADBE1229CF
version 4, created 1446074508, md5len 0, sigclass 0x13
digest algo 2, begin of digest 1a 9b
hashed subpkt 2 len 4 (sig created 2015-10-28)
hashed subpkt 27 len 1 (key flags: 03)
hashed subpkt 11 len 5 (pref-sym-algos: 9 8 7 3 2)
hashed subpkt 21 len 3 (pref-hash-algos: 2 8 3)
hashed subpkt 22 len 2 (pref-zip-algos: 2 1)
hashed subpkt 30 len 1 (features: 01)
hashed subpkt 23 len 1 (key server preferences: 80)
subpkt 16 len 8 (issuer key ID EB3E94ADBE1229CF)
data: [2047 bits]
NOTE
Package versions change. If you manually connect to Azure RHUI, you can find the latest version of the client package
for each RHEL family by provisioning the latest image from the gallery.
a. Download.
For RHEL 6:
For RHEL 7:
b. Verify.
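Verification can be done with rpm (the package file name is a placeholder):
```bash
rpm -Kv azureclient.rpm
```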
c. Check the output to ensure that the signature of the package is OK.
azureclient.rpm:
Header V3 RSA/SHA256 Signature, key ID be1229cf: OK
Header SHA1 digest: OK (927a3b548146c95a3f6c1a5d5ae52258a8859ab3)
V3 RSA/SHA256 Signature, key ID be1229cf: OK
MD5 digest: OK (c04ff605f82f4be8c96020bf5c23b86c)
d. Install the RPM.
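```bash
sudo rpm -U azureclient.rpm
```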
6. After you finish, verify that you can access Azure RHUI from the VM.
Next steps
To create a Red Hat Enterprise Linux VM from an Azure Marketplace PAYG image and to use Azure-hosted RHUI,
go to the Azure Marketplace.
Understanding and using the Azure Linux Agent
5/10/2018 • 7 min to read
The Microsoft Azure Linux Agent (waagent) manages Linux & FreeBSD provisioning, and VM interaction with the
Azure Fabric Controller. In addition to the Linux Agent providing provisioning functionality, Azure also provides
the option of using cloud-init for some Linux OSes. The Linux Agent provides the following functionality for Linux
and FreeBSD IaaS deployments:
NOTE
For more information, see the README.
Image Provisioning
Creation of a user account
Configuring SSH authentication types
Deployment of SSH public keys and key pairs
Setting the host name
Publishing the host name to the platform DNS
Reporting SSH host key fingerprint to the platform
Resource Disk Management
Formatting and mounting the resource disk
Configuring swap space
Networking
Manages routes to improve compatibility with platform DHCP servers
Ensures the stability of the network interface name
Kernel
Configures virtual NUMA (disabled for kernel < 2.6.37)
Consumes Hyper-V entropy for /dev/random
Configures SCSI timeouts for the root device (which could be remote)
Diagnostics
Console redirection to the serial port
SCVMM Deployments
Detects and bootstraps the VMM agent for Linux when running in a System Center Virtual Machine
Manager 2012 R2 environment
VM Extension
Injects components authored by Microsoft and partners into Linux VMs (IaaS) to enable software and
configuration automation
VM Extension reference implementation on https://github.com/Azure/azure-linux-extensions
Communication
The information flow from the platform to the agent occurs via two channels:
A boot-time attached DVD for IaaS deployments. This DVD includes an OVF-compliant configuration file that
includes all provisioning information other than the actual SSH keypairs.
A TCP endpoint exposing a REST API used to obtain deployment and topology configuration.
Requirements
The following systems have been tested and are known to work with the Azure Linux Agent:
NOTE
This list may differ from the official list of supported systems on the Microsoft Azure Platform, as described here:
http://support.microsoft.com/kb/2805216
CoreOS
CentOS 6.3+
Red Hat Enterprise Linux 6.7+
Debian 7.0+
Ubuntu 12.04+
openSUSE 12.3+
SLES 11 SP3+
Oracle Linux 6.4+
Other Supported Systems:
FreeBSD 10+ (Azure Linux Agent v2.0.10+)
The Linux agent depends on some system packages in order to function properly:
Python 2.6+
OpenSSL 1.0+
OpenSSH 5.3+
Filesystem utilities: sfdisk, fdisk, mkfs, parted
Password tools: chpasswd, sudo
Text processing tools: sed, grep
Network tools: ip-route
Kernel support for mounting UDF filesystems.
Installation
Installation using an RPM or a DEB package from your distribution's package repository is the preferred method
of installing and upgrading the Azure Linux Agent. All the endorsed distribution providers integrate the Azure
Linux agent package into their images and repositories.
Refer to the documentation in the Azure Linux Agent repo on GitHub for advanced installation options, such as
installing from source or to custom locations or prefixes.
Command-Line Options
Flags
verbose: Increase verbosity of specified command
force: Skip interactive confirmation for some commands
Commands
help: Lists the supported commands and flags.
deprovision: Attempt to clean the system and make it suitable for reprovisioning. This operation deletes the
following:
All SSH host keys (if Provisioning.RegenerateSshHostKeyPair is 'y' in the configuration file)
Nameserver configuration in /etc/resolv.conf
Root password from /etc/shadow (if Provisioning.DeleteRootPassword is 'y' in the configuration file)
Cached DHCP client leases
Resets host name to localhost.localdomain
WARNING
Deprovisioning does not guarantee that the image is cleared of all sensitive information and suitable for redistribution.
deprovision+user: Performs everything in -deprovision (above) and also deletes the last provisioned user
account (obtained from /var/lib/waagent) and associated data. Use this parameter when deprovisioning an
image that was previously provisioned on Azure so that it can be captured and reused.
version: Displays the version of waagent
serialconsole: Configures GRUB to mark ttyS0 (the first serial port) as the boot console. This ensures that
kernel bootup logs are sent to the serial port and made available for debugging.
daemon: Run waagent as a daemon to manage interaction with the platform. This argument is specified to
waagent in the waagent init script.
start: Run waagent as a background process
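For example, a typical invocation before capturing a VM image (run with care; this deletes the provisioned user account):
```bash
sudo waagent -deprovision+user -force
```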
Configuration
A configuration file (/etc/waagent.conf ) controls the actions of waagent. The following shows a sample
configuration file:
```
Provisioning.Enabled=y
Provisioning.DeleteRootPassword=n
Provisioning.RegenerateSshHostKeyPair=y
Provisioning.SshHostKeyPairType=rsa
Provisioning.MonitorHostName=y
Provisioning.DecodeCustomData=n
Provisioning.ExecuteCustomData=n
Provisioning.AllowResetSysUser=n
Provisioning.PasswordCryptId=6
Provisioning.PasswordCryptSaltLength=10
ResourceDisk.Format=y
ResourceDisk.Filesystem=ext4
ResourceDisk.MountPoint=/mnt/resource
ResourceDisk.MountOptions=None
ResourceDisk.EnableSwap=n
ResourceDisk.SwapSizeMB=0
LBProbeResponder=y
Logs.Verbose=n
OS.RootDeviceScsiTimeout=300
OS.OpensslPath=None
HttpProxy.Host=None
HttpProxy.Port=None
AutoUpdate.Enabled=y
```
The configuration options are described in the following sections. Configuration options are of three types: Boolean,
String, or Integer. Boolean options can be specified as "y" or "n". The special keyword "None" may be used for
some String options, as detailed below:
Provisioning.Enabled:
Type: Boolean
Default: y
This allows the user to enable or disable the provisioning functionality in the agent. Valid values are "y" or "n". If
provisioning is disabled, SSH host and user keys in the image are preserved and any configuration specified in the
Azure provisioning API is ignored.
NOTE
The Provisioning.Enabled parameter defaults to "n" on Ubuntu Cloud Images that use cloud-init for provisioning.
Provisioning.DeleteRootPassword:
Type: Boolean
Default: n
If set, the root password in the /etc/shadow file is erased during the provisioning process.
Provisioning.RegenerateSshHostKeyPair:
Type: Boolean
Default: y
If set, all SSH host key pairs (ecdsa, dsa, and rsa) are deleted from /etc/ssh/ during the provisioning process, and a
single fresh key pair is generated.
The encryption type for the fresh key pair is configurable by the Provisioning.SshHostKeyPairType entry. Some
distributions re-create SSH key pairs for any missing encryption types when the SSH daemon is restarted (for
example, upon a reboot).
Provisioning.SshHostKeyPairType:
Type: String
Default: rsa
This can be set to an encryption algorithm type that is supported by the SSH daemon on the virtual machine. The
typically supported values are "rsa", "dsa" and "ecdsa". "putty.exe" on Windows does not support "ecdsa". So, if you
intend to use putty.exe on Windows to connect to a Linux deployment, use "rsa" or "dsa".
Provisioning.MonitorHostName:
Type: Boolean
Default: y
If set, waagent monitors the Linux virtual machine for hostname changes (as returned by the "hostname"
command) and automatically updates the networking configuration in the image to reflect the change. In order to
push the name change to the DNS servers, networking is restarted in the virtual machine. This results in brief loss
of Internet connectivity.
Provisioning.DecodeCustomData
Type: Boolean
Default: n
If set, waagent decodes CustomData from Base64.
Provisioning.ExecuteCustomData
Type: Boolean
Default: n
If set, waagent executes CustomData after provisioning.
Provisioning.AllowResetSysUser
Type: Boolean
Default: n
This option allows the password for the sys user to be reset; default is disabled.
Provisioning.PasswordCryptId
Type: String
Default: 6
This selects the algorithm used by crypt when generating the password hash; for example, "6" selects SHA-512.
Provisioning.PasswordCryptSaltLength
Type: String
Default: 10
This sets the length of the random salt used when generating the password hash.
ResourceDisk.Format:
Type: Boolean
Default: y
If set, the resource disk provided by the platform is formatted and mounted by waagent if the filesystem type
requested by the user in "ResourceDisk.Filesystem" is anything other than "ntfs". A single partition of type Linux
(83) is made available on the disk. This partition is not formatted if it can be successfully mounted.
ResourceDisk.Filesystem:
Type: String
Default: ext4
This specifies the filesystem type for the resource disk. Supported values vary by Linux distribution. If the string is
X, then mkfs.X should be present on the Linux image. SLES 11 images should typically use 'ext3'. FreeBSD images
should use 'ufs2' here.
ResourceDisk.MountPoint:
Type: String
Default: /mnt/resource
This specifies the path at which the resource disk is mounted. The resource disk is a temporary disk, and might be
emptied when the VM is deprovisioned.
ResourceDisk.MountOptions
Type: String
Default: None
Specifies disk mount options to be passed to the mount -o command. This is a comma-separated list of values, for
example 'nodev,nosuid'. See mount(8) for details.
ResourceDisk.EnableSwap:
Type: Boolean
Default: n
If set, a swap file (/swapfile) is created on the resource disk and added to the system swap space.
ResourceDisk.SwapSizeMB:
Type: Integer
Default: 0
The size of the swap file in megabytes.
Logs.Verbose:
Type: Boolean
Default: n
If set, log verbosity is boosted. Waagent logs to /var/log/waagent.log and utilizes the system logrotate functionality
to rotate logs.
OS.EnableRDMA
Type: Boolean
Default: n
If set, the agent attempts to install and then load an RDMA kernel driver that matches the version of the firmware
on the underlying hardware.
OS.RootDeviceScsiTimeout:
Type: Integer
Default: 300
This setting configures the SCSI timeout in seconds on the OS disk and data drives. If not set, the system defaults
are used.
OS.OpensslPath:
Type: String
Default: None
This setting can be used to specify an alternate path for the openssl binary to use for cryptographic operations.
HttpProxy.Host, HttpProxy.Port
Type: String
Default: None
If set, the agent uses this proxy server to access the internet.
AutoUpdate.Enabled
Type: Boolean
Default: y
Enables or disables auto-update of the agent; the default is "y".
Planned maintenance for Linux virtual machines in Azure
Azure periodically performs updates to improve the reliability, performance, and security of the host infrastructure
for virtual machines. Updates are changes like patching the hosting environment or upgrading and
decommissioning hardware. A majority of these updates are performed without any impact to the hosted virtual
machines. However, there are cases where updates do have an impact:
If the maintenance does not require a reboot, Azure uses in-place migration to pause the VM while the host
is updated.
If maintenance requires a reboot, you get a notice of when the maintenance is planned. In these cases, you
are given a time window where you can start the maintenance yourself, when it works for you.
Planned maintenance that requires a reboot is scheduled in waves. Each wave has different scope (regions).
A wave starts with a notification to customers. By default, notification is sent to subscription owner and co-
owners. You can add more recipients and messaging options like email, SMS, and webhooks, to the
notifications using Azure Activity Log Alerts.
At the time of the notification, a self-service window is made available. During this window, you can find which
of your virtual machines are included in this wave and proactively start maintenance according to your own
scheduling needs.
After the self-service window, a scheduled maintenance window begins. At some point during this window,
Azure schedules and applies the required maintenance to your virtual machine.
The goal in having two windows is to give you enough time to start maintenance and reboot your virtual machine
while knowing when Azure will automatically start maintenance.
You can use the Azure portal, PowerShell, REST API, and CLI to query for the maintenance windows for your
VMs and start self-service maintenance.
NOTE
If you try to start maintenance and the request fails, Azure marks your VM as skipped. You will no longer be able to use the
Customer Initiated Maintenance option. Your VM will have to be rebooted by Azure during the scheduled maintenance
phase.
NOTE
Self-service maintenance might not be available for all of your VMs. To determine if proactive redeploy is available for your
VM, look for Start now in the maintenance status. Self-service maintenance is currently not available for Cloud Services
(Web/Worker Role), Service Fabric, and Virtual Machine Scale Sets.
Self-service maintenance is not recommended for deployments using availability sets since these are highly
available setups, where only one update domain is impacted at any given time.
Let Azure trigger the maintenance, but be aware that the order of update domains being impacted does not
necessarily happen sequentially and that there is a 30-minute pause between update domains.
If a temporary loss of some of your capacity (1/update domain count) is a concern, it can easily be
compensated for by allocating additional instances during the maintenance period.
Don't use self-service maintenance in the following scenarios:
If you shut down your VMs frequently, either manually, using DevTest labs, using auto-shutdown, or following
a schedule, it could revert the maintenance status and therefore cause additional downtime.
On short-lived VMs which you know will be deleted before the end of the maintenance wave.
For workloads with a large state stored in the local (ephemeral) disk that is desired to be maintained upon
update.
For cases where you resize your VM often, as it could revert the maintenance status.
If you have adopted scheduled events, which enable proactive failover or graceful shutdown of your workload
15 minutes before the start of the maintenance shutdown.
Use self-service maintenance if you are planning to run your VM uninterrupted during the scheduled
maintenance phase and none of the counter-indications mentioned above are applicable.
It is best to use self-service maintenance in the following cases:
You need to communicate an exact maintenance window to your management or end-customer.
You need to complete the maintenance by a given date.
You need to control the sequence of maintenance, for example, in a multi-tier application, to guarantee safe recovery.
You need more than 30 minutes of VM recovery time between two update domains (UDs). To control the time
between update domains, you must trigger maintenance on your VMs one update domain (UD) at a time.
VALUE                      DESCRIPTION
Maintenance - Pro-Active   Shows the time window when you can self-start maintenance on your VMs.
Maintenance - Scheduled    Shows the time window when Azure will reboot your VM in order to complete maintenance.
Classic deployments
If you still have legacy VMs that were deployed using the classic deployment model, you can use CLI 1.0 to query
for VMs and initiate maintenance.
Make sure you are in the correct mode to work with classic VMs by typing:
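```bash
azure config mode asm
```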
To start maintenance on your classic VM named myVM in the myService service and myDeployment deployment,
type:
FAQ
Q: Why do you need to reboot my virtual machines now?
A: While the majority of updates and upgrades to the Azure platform do not impact virtual machine availability,
there are cases where we can't avoid rebooting virtual machines hosted in Azure. We have accumulated several
changes that require us to restart our servers, which will result in a virtual machine reboot.
Q: If I follow your recommendations for High Availability by using an Availability Set, am I safe?
A: Virtual machines deployed in an availability set or virtual machine scale set have the notion of Update
Domains (UD). When performing maintenance, Azure honors the UD constraint and will not reboot virtual
machines from different UDs (within the same availability set). Azure also waits for at least 30 minutes before
moving to the next group of virtual machines.
For more information about high availability, see Regions and availability for virtual machines in Azure.
Q: How do I get notified about planned maintenance?
A: A planned maintenance wave starts by setting a schedule to one or more Azure regions. Soon after, an email
notification is sent to the subscription owners (one email per subscription). Additional channels and recipients for
this notification can be configured using Activity Log Alerts. If you deploy a virtual machine to a region where
planned maintenance is already scheduled, you will not receive the notification; rather, you need to check the
maintenance state of the VM.
Q: I don't see any indication of planned maintenance in the portal, PowerShell, or CLI. What is wrong?
A: Information related to planned maintenance is available during a planned maintenance wave only for the VMs
which are going to be impacted by it. In other words, if you see no data, it could be that the maintenance wave
has already completed (or not started) or that your virtual machine is already hosted in an updated server.
Q: Is there a way to know exactly when my virtual machine will be impacted?
A: When setting the schedule, we define a time window of several days. However, the exact sequencing of servers
(and VMs) within this window is unknown. Customers who would like to know the exact time for their VMs can
use scheduled events, query from within the virtual machine, and receive a 15-minute notification before a VM
reboot.
Q: How long will it take you to reboot my virtual machine?
A: Depending on the size of your VM, reboot may take up to several minutes during the self-service maintenance
window. During the Azure initiated reboots in the scheduled maintenance window, the reboot will typically take
about 25 minutes. Note that in case you use Cloud Services (Web/Worker Role), Virtual Machine Scale Sets, or
availability sets, you will be given 30 minutes between each group of VMs (UD) during the scheduled
maintenance window.
Q: What is the experience in the case of Cloud Services (Web/Worker Role), Service Fabric, and Virtual
Machine Scale Sets?
A: While these platforms are impacted by planned maintenance, customers using these platforms are considered
safe given that only VMs in a single Upgrade Domain (UD) will be impacted at any given time. Self-service
maintenance is currently not available for Cloud Services (Web/Worker Role), Service Fabric, and Virtual Machine
Scale Sets.
Q: I have received an email about hardware decommissioning, is this the same as planned
maintenance?
A: While hardware decommissioning is a planned maintenance event, we have not yet onboarded this use case to
the new experience.
Q: I don’t see any maintenance information on my VMs. What went wrong?
A: There are several reasons why you’re not seeing any maintenance information on your VMs:
1. You are using a subscription marked as Microsoft internal.
2. Your VMs are not scheduled for maintenance. It could be that the maintenance wave has ended, been canceled,
or been modified so that your VMs are no longer impacted by it.
3. You don't have the Maintenance column added to your VM list view. While we have added this column to the
default view, customers who have configured their view to show non-default columns must manually add the
Maintenance column to their VM list view.
Q: My VM is scheduled for maintenance for the second time. Why?
A: There are several use cases where you will see your VM scheduled for maintenance after you have already
completed your maintenance-redeploy:
1. We have canceled the maintenance wave and restarted it with a different payload. It could be that we've
detected a faulted payload and simply need to deploy an additional payload.
2. Your VM was service healed to another node due to a hardware fault
3. You have selected to stop (deallocate) and restart the VM
4. You have auto shutdown turned on for the VM
Q: Maintenance of my availability set takes a long time, and I now see “skipped” status on some of my
availability set instances. Why?
A: If you have clicked to update multiple instances in an availability set in short succession, Azure queues these
requests and starts to update only the VMs in one update domain (UD) at a time. However, since there might be a
pause between update domains, the update might appear to take longer. If the update queue takes longer than 60
minutes, some instances will show the skipped state even if they have been updated successfully. To avoid this
incorrect status, update your availability sets by clicking only one instance within one availability set and wait for
the update on that VM to complete before clicking on the next VM in a different update domain.
Next Steps
Learn how you can register for maintenance events from within the VM using Scheduled Events.
Guidance for mitigating speculative execution side-
channel vulnerabilities in Azure
4/9/2018 • 3 min to read
NOTE
In late February 2018, Intel Corporation published updated Microcode Revision Guidance on the status of their microcode
releases, which improve stability and mitigate against the recent vulnerabilities disclosed by Google Project Zero. The
mitigations put in place by Azure on January 3, 2018 are not affected by Intel’s microcode update. Microsoft already put
strong mitigations in place to protect Azure customers from other Azure virtual machines.
Intel’s microcode addresses variant 2 Spectre (CVE-2017-5715 or branch target injection) to protect against attacks which
would only be applicable where you run shared or untrusted workloads inside your VMs on Azure. Our engineers are testing
the stability to minimize performance impacts of the microcode, prior to making it available to Azure customers. As very few
customers run untrusted workloads within their VMs, most customers will not need to enable this capability once released.
This page will be updated as more information is available.
AZURE SERVICE                     RECOMMENDED ACTION
Azure Cloud Services              Enable auto update or ensure you are running the newest Guest OS.
Azure Linux Virtual Machines      Install updates from your operating system provider when available.
Azure Windows Virtual Machines    Verify that you are running a supported antivirus application before you install
                                  OS updates. Contact your antivirus software vendor for compatibility information.
                                  Install the January security rollup.
Other Azure PaaS Services         There is no action needed for customers using these services. Azure automatically
                                  keeps your OS versions up-to-date.
Next steps
To learn more, see Securing Azure customers from CPU vulnerability.
Azure Metadata Service: Scheduled Events for Linux
VMs
4/9/2018 • 5 min to read
Scheduled Events is an Azure Metadata Service that gives your application time to prepare for virtual machine
(VM) maintenance. It provides information about upcoming maintenance events (for example, reboot) so that
your application can prepare for them and limit disruption. It's available for all Azure virtual machine types,
including PaaS and IaaS, on both Windows and Linux.
For information about Scheduled Events on Windows, see Scheduled Events for Windows VMs.
NOTE
Scheduled Events is generally available in all Azure Regions. See Version and Region Availability for latest release information.
The Basics
Metadata Service exposes information about running VMs by using a REST endpoint that's accessible from within
the VM. The information is available via a nonroutable IP so that it's not exposed outside the VM.
Scope
Scheduled events are delivered to:
All the VMs in a cloud service.
All the VMs in an availability set.
All the VMs in a scale set placement group.
As a result, check the Resources field in the event to identify which VMs are affected.
Endpoint Discovery
For VNET enabled VMs, Metadata Service is available from a static nonroutable IP, 169.254.169.254. The full
endpoint for the latest version of Scheduled Events is:
http://169.254.169.254/metadata/scheduledevents?api-version=2017-08-01
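For example, from within the VM (the Metadata: true header is required):
```bash
curl -H Metadata:true "http://169.254.169.254/metadata/scheduledevents?api-version=2017-08-01"
```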
If the VM is not created within a Virtual Network, the default cases for cloud services and classic VMs, additional
logic is required to discover the IP address to use. To learn how to discover the host endpoint, see this sample.
Version and Region Availability
The Scheduled Events service is versioned. Versions are mandatory; the current version is 2017-08-01.
NOTE
Previous preview releases of Scheduled Events supported {latest} as the api-version. This format is no longer supported and
will be deprecated in the future.
{
"DocumentIncarnation": {IncarnationID},
"Events": [
{
"EventId": {eventID},
"EventType": "Reboot" | "Redeploy" | "Freeze",
"ResourceType": "VirtualMachine",
"Resources": [{resourceName}],
"EventStatus": "Scheduled" | "Started",
"NotBefore": {timeInUTC},
}
]
}
Event Properties
PROPERTY       DESCRIPTION
EventId        Globally unique identifier for this event.
               Example: 602d9444-d2cd-49c7-8624-8643e7171297
EventType      The impact this event causes.
               Values:
               Freeze: The VM is scheduled to pause for a few seconds. The CPU is suspended, but there is
               no effect on memory, open files, or network connections.
               Reboot: The VM is scheduled for reboot. (Nonpersistent memory is lost.)
               Redeploy: The VM is scheduled to move to another node. (Ephemeral disks are lost.)
ResourceType   The type of resource this event affects.
               Values: VirtualMachine
Resources      The list of resources this event affects.
               Example: ["FrontEnd_IN_0", "BackEnd_IN_0"]
EventStatus    The status of this event.
               Values:
               Scheduled: This event is scheduled to start after the time specified in the NotBefore property.
               Started: This event has started.
NotBefore      The time after which this event can start.
               Example: Mon, 19 Sep 2016 18:29:47 GMT
Event Scheduling
Each event is scheduled a minimum amount of time in the future based on the event type. This time is reflected in
an event's NotBefore property.
EVENT TYPE    MINIMUM NOTICE
Freeze        15 minutes
Reboot        15 minutes
Redeploy      10 minutes
Start an event
After you learn of an upcoming event and finish your logic for graceful shutdown, you can approve the
outstanding event by making a POST call to Metadata Service with EventId . This call indicates to Azure that it can
shorten the minimum notification time (when possible).
The following JSON sample is expected in the POST request body. The request should contain a list of
StartRequests . Each StartRequest contains EventId for the event you want to expedite:
{
"StartRequests" : [
{
"EventId": {EventId}
}
]
}
Bash sample
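A minimal sketch of a Bash client, assuming curl and jq are available on the VM:
```bash
#!/bin/bash
endpoint="http://169.254.169.254/metadata/scheduledevents?api-version=2017-08-01"

# Query the outstanding scheduled events
events=$(curl -s -H Metadata:true "$endpoint")
echo "$events"

# Approve (expedite) the first outstanding event, if any, by its EventId
eventid=$(echo "$events" | jq -r '.Events[0].EventId')
if [ "$eventid" != "null" ] && [ -n "$eventid" ]; then
    curl -s -H Metadata:true -X POST \
        -d "{\"StartRequests\": [{\"EventId\": \"$eventid\"}]}" "$endpoint"
fi
```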
Python sample
The following sample queries Metadata Service for scheduled events and handles each outstanding event that affects this host:
```
#!/usr/bin/python
import json
import urllib2
import socket
import sys

metadata_url = "http://169.254.169.254/metadata/scheduledevents?api-version=2017-08-01"
this_host = socket.gethostname()

def get_scheduled_events():
    # Query the Metadata Service; the Metadata: true header is mandatory
    req = urllib2.Request(metadata_url)
    req.add_header('Metadata', 'true')
    resp = urllib2.urlopen(req)
    data = json.loads(resp.read())
    return data

def handle_scheduled_events(data):
    for evt in data['Events']:
        eventid = evt['EventId']
        status = evt['EventStatus']
        resources = evt['Resources']
        eventtype = evt['EventType']
        resourcetype = evt['ResourceType']
        notbefore = evt['NotBefore'].replace(" ", "_")
        if this_host in resources:
            print "+ Scheduled Event. This host " + this_host + \
                  " is scheduled for " + eventtype + " not before " + notbefore
            # Add logic for handling events here

def main():
    data = get_scheduled_events()
    handle_scheduled_events(data)

if __name__ == '__main__':
    main()
    sys.exit(0)
```
Next steps
Watch Scheduled Events on Azure Friday to see a demo.
Review the Scheduled Events code samples in the Azure Instance Metadata Scheduled Events GitHub
repository.
Read more about the APIs that are available in the Instance Metadata Service.
Learn about planned maintenance for Linux virtual machines in Azure.
Azure Instance Metadata service
5/10/2018 • 8 min to read
The Azure Instance Metadata Service provides information about running virtual machine instances that can be
used to manage and configure your virtual machines. This includes information such as SKU, network
configuration, and upcoming maintenance events. For more information on what type of information is available,
see metadata categories.
Azure's Instance Metadata Service is a REST Endpoint accessible to IaaS VMs created via the Azure Resource
Manager. The endpoint is available at a well-known non-routable IP address ( 169.254.169.254 ) that can be
accessed only from within the VM.
IMPORTANT
This service is generally available in Azure Regions. It regularly receives updates to expose new information about virtual
machine instances. This page reflects the up-to-date data categories available.
Service availability
The service is available in generally available Azure regions. Not all API versions may be available in all Azure
regions.
REGIONS                                        AVAILABILITY           SUPPORTED VERSIONS
All Generally Available Global Azure Regions   Generally Available    2017-04-02, 2017-08-01, 2017-12-01, 2018-02-01
This table is updated when there are service updates and/or new supported versions are available.
To try out the Instance Metadata Service, create a VM from Azure Resource Manager or the Azure portal in the
above regions and follow the examples below.
Usage
Versioning
The Instance Metadata Service is versioned. Versions are mandatory and the current version on Global Azure is
2017-12-01. Currently supported versions are 2017-04-02, 2017-08-01, and 2017-12-01.
NOTE
Previous preview releases of scheduled events supported {latest} as the api-version. This format is no longer supported and
will be deprecated in the future.
As newer versions are added, older versions can still be accessed for compatibility if your scripts have
dependencies on specific data formats. However, the previous preview version (2017-03-01) may not be available
once the service is generally available.
Using headers
When you query the Instance Metadata Service, you must provide the header Metadata: true to ensure the
request was not unintentionally redirected.
Retrieving metadata
Instance metadata is available for running VMs created/managed using Azure Resource Manager. Access all data
categories for a virtual machine instance using the following request:
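```bash
curl -H Metadata:true "http://169.254.169.254/metadata/instance?api-version=2017-12-01"
```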
NOTE
All instance metadata queries are case-sensitive.
Data output
By default, the Instance Metadata Service returns data in JSON format ( Content-Type: application/json ).
However, different APIs return data in different formats if requested. The following table is a reference of other
data formats APIs may support.
To access a non-default response format, specify the requested format as a querystring parameter in the request.
For example:
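For instance, to fetch a single leaf value as plain text (the leaf node shown here is illustrative):
```bash
curl -H Metadata:true "http://169.254.169.254/metadata/instance/compute/location?api-version=2017-12-01&format=text"
```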
Security
The Instance Metadata Service endpoint is accessible only from within the running virtual machine instance on a
non-routable IP address. In addition, any request with a X-Forwarded-For header is rejected by the service.
Requests must also contain a Metadata: true header to ensure that the actual request was directly intended and
not a part of unintentional redirection.
Error
If a data element is not found or a request is malformed, the Instance Metadata Service returns standard HTTP
errors. For example:
HTTP STATUS CODE          REASON
200 OK
405 Method Not Allowed    Only GET and POST requests are supported.
429 Too Many Requests     The API currently supports a maximum of 5 queries per second.
Examples
NOTE
All API responses are JSON strings. All following example responses are pretty-printed for readability.
Response
NOTE
The response is a JSON string. The following example response is pretty-printed for readability.
{
"interface": [
{
"ipv4": {
"ipAddress": [
{
"privateIpAddress": "10.1.0.4",
"publicIpAddress": "X.X.X.X"
}
],
"subnet": [
{
"address": "10.1.0.0",
"prefix": "24"
}
]
},
"ipv6": {
"ipAddress": []
},
"macAddress": "000D3AF806EC"
}
]
}
curl -H Metadata:true "http://169.254.169.254/metadata/instance/network/interface/0/ipv4/ipAddress/0/publicIpAddress?api-version=2017-08-01&format=text"
Retrieving all metadata for an instance
Request
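The request would look like the following (the api-version shown is an assumption; any supported version works):
```bash
curl -H Metadata:true "http://169.254.169.254/metadata/instance?api-version=2017-12-01"
```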
Response
NOTE
The response is a JSON string. The following example response is pretty-printed for readability.
{
"compute": {
"location": "westus",
"name": "avset2",
"offer": "UbuntuServer",
"osType": "Linux",
"placementGroupId": "",
"platformFaultDomain": "1",
"platformUpdateDomain": "1",
"publisher": "Canonical",
"resourceGroupName": "myrg",
"sku": "16.04-LTS",
"subscriptionId": "xxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxx",
"tags": "",
"version": "16.04.201708030",
"vmId": "13f56399-bd52-4150-9748-7190aae1ff21",
"vmScaleSetName": "",
"vmSize": "Standard_D1",
"zone": "1"
},
"network": {
"interface": [
{
"ipv4": {
"ipAddress": [
{
"privateIpAddress": "10.1.2.5",
"publicIpAddress": "X.X.X.X"
}
],
"subnet": [
{
"address": "10.1.2.0",
"prefix": "24"
}
]
},
"ipv6": {
"ipAddress": []
},
"macAddress": "000D3A36DDED"
}
]
}
}
Response
NOTE
The response is a JSON string. The following example response is pretty-printed for readability.
{
"compute": {
"location": "westus",
"name": "SQLTest",
"offer": "SQL2016SP1-WS2016",
"osType": "Windows",
"platformFaultDomain": "0",
"platformUpdateDomain": "0",
"publisher": "MicrosoftSQLServer",
"sku": "Enterprise",
"version": "13.0.400110",
"vmId": "453945c8-3923-4366-b2d3-ea4c80e9b70e",
"vmSize": "Standard_DS2"
},
"network": {
"interface": [
{
"ipv4": {
"ipAddress": [
{
"privateIpAddress": "10.0.1.4",
"publicIpAddress": "X.X.X.X"
}
],
"subnet": [
{
"address": "10.0.1.0",
"prefix": "24"
}
]
},
"ipv6": {
"ipAddress": [
]
},
"macAddress": "002248020E1E"
}
]
}
}
Response
5c08b38e-4d57-4c23-ac45-aca61037f084
Response
Response
NOTE
The response is a JSON string. The following example response is pretty-printed for readability.
{
"compute": {
"location": "CentralUS",
"name": "IMDSCanary",
"offer": "RHEL",
"osType": "Linux",
"platformFaultDomain": "0",
"platformUpdateDomain": "0",
"publisher": "RedHat",
"sku": "7.2",
"version": "7.2.20161026",
"vmId": "5c08b38e-4d57-4c23-ac45-aca61037f084",
"vmSize": "Standard_DS2"
}
}
LANGUAGE     SAMPLE
Ruby         https://github.com/Microsoft/azureimds/blob/master/IMDSSample.rb
Go           https://github.com/Microsoft/azureimds/blob/master/imdssample.go
Python       https://github.com/Microsoft/azureimds/blob/master/IMDSSample.py
C++          https://github.com/Microsoft/azureimds/blob/master/IMDSSample-windows.cpp
C#           https://github.com/Microsoft/azureimds/blob/master/IMDSSample.cs
JavaScript   https://github.com/Microsoft/azureimds/blob/master/IMDSSample.js
PowerShell   https://github.com/Microsoft/azureimds/blob/master/IMDSSample.ps1
Bash         https://github.com/Microsoft/azureimds/blob/master/IMDSSample.sh
Perl         https://github.com/Microsoft/azureimds/blob/master/IMDSSample.pl
Java         https://github.com/Microsoft/azureimds/blob/master/imdssample.java
Puppet       https://github.com/keirans/azuremetadata
FAQ
1. I am getting the error 400 Bad Request, Required metadata header not specified . What does this mean?
The Instance Metadata Service requires the header Metadata: true to be passed in the request. Passing
this header in the REST call allows access to the Instance Metadata Service.
2. Why am I not getting compute information for my VM?
Currently the Instance Metadata Service only supports instances created with Azure Resource Manager.
In the future, support for Cloud Service VMs might be added.
3. I created my Virtual Machine through Azure Resource Manager a while back. Why am I not seeing compute
metadata information?
For any VMs created after Sep 2016, add a Tag to start seeing compute metadata. For older VMs
(created before Sep 2016), add/remove extensions or data disks to the VM to refresh metadata.
4. I am not seeing all data populated for the new version 2017-08-01.
For any VMs created after Sep 2016, add a Tag to start seeing compute metadata. For older VMs
(created before Sep 2016), add/remove extensions or data disks to the VM to refresh metadata.
5. Why am I getting the error 500 Internal Server Error?
Retry your request using an exponential back-off strategy. If the issue persists, contact Azure support.
6. Where do I share additional questions/comments?
Send your comments on http://feedback.azure.com.
7. Would this work for Virtual Machine Scale Set instances?
Yes, the Metadata Service is available for scale set instances.
8. How do I get support for the service?
To get support for the service, create a support issue in the Azure portal for the VM where you are unable
to get a metadata response after long retries.
Next steps
Learn more about Scheduled Events
How to find Linux VM images in the Azure
Marketplace with the Azure CLI
3/1/2018 • 8 min to read
This topic describes how to use the Azure CLI 2.0 to find VM images in the Azure Marketplace. Use this
information to specify a Marketplace image when you create a VM programmatically with the CLI, Resource
Manager templates, or other tools.
Make sure that you installed the latest Azure CLI 2.0 and are logged in to an Azure account ( az login ).
Terminology
A Marketplace image in Azure has the following attributes:
Publisher - The organization that created the image. Examples: Canonical, MicrosoftWindowsServer
Offer - Name of a group of related images created by a publisher. Examples: Ubuntu Server, WindowsServer
SKU - An instance of an offer, such as a major release of a distribution. Examples: 16.04-LTS, 2016-Datacenter
Version - The version number of an image SKU.
To identify a Marketplace image when you deploy a VM programmatically, supply these values individually as
parameters, or some tools accept the image URN. The URN combines these values, separated by the colon (:)
character: Publisher:Offer:Sku:Version. In a URN, you can replace the version number with "latest", which selects
the latest version of the image.
If the image publisher provides additional license and purchase terms, you must accept those terms and enable
programmatic deployment. You also need to supply purchase plan parameters when deploying a VM
programmatically. See Deploy an image with Marketplace terms.
The output includes the image URN (the value in the Urn column). When creating a VM with one of these popular
Marketplace images, you can alternatively specify the UrnAlias, a shortened form such as UbuntuLTS.
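This list is produced by running az vm image list with table output:

az vm image list --output table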
You are viewing an offline list of images, use --all to retrieve an up-to-date list
Offer          Publisher  Sku        Urn                                       UrnAlias       Version
-------------  ---------  ---------  ----------------------------------------  -------------  -------
CentOS         OpenLogic  7.3        OpenLogic:CentOS:7.3:latest               CentOS         latest
CoreOS         CoreOS     Stable     CoreOS:CoreOS:Stable:latest               CoreOS         latest
Debian         credativ   8          credativ:Debian:8:latest                  Debian         latest
openSUSE-Leap  SUSE       42.2       SUSE:openSUSE-Leap:42.2:latest            openSUSE-Leap  latest
RHEL           RedHat     7.3        RedHat:RHEL:7.3:latest                    RHEL           latest
SLES           SUSE       12-SP2     SUSE:SLES:12-SP2:latest                   SLES           latest
UbuntuServer   Canonical  16.04-LTS  Canonical:UbuntuServer:16.04-LTS:latest   UbuntuLTS      latest
...
Apply similar filters with the --location , --publisher , and --sku options. You can even perform partial matches
on a filter, such as searching for --offer Deb to find all Debian images.
If you don't specify a particular location with the --location option, the values for the default location are
returned. (Set a different default location by running az configure --defaults location=<location> .)
For example, the following command lists all Debian 8 SKUs in the West Europe location:
az vm image list --location westeurope --offer Deb --publisher credativ --sku 8 --all --output table
Partial output:
...
Next, list the publishers of images in a location by running the az vm image list-publishers command:

az vm image list-publishers --location westus --output table

Partial output:
Location Name
---------- ----------------------------------------------------
westus 1e
westus 4psa
westus 7isolutions
westus a10networks
westus abiquo
westus accellion
westus Acronis
westus Acronis.Backup
westus actian_matrix
westus actifio
westus activeeon
westus adatao
...
Use this information to find offers from a specific publisher. For example, if Canonical is an image publisher in the
West US location, find their offers by running az vm image list-offers. Pass the location and the publisher as
in the following example:

az vm image list-offers --location westus --publisher Canonical --output table

Output:
Location Name
---------- -------------------------
westus Ubuntu15.04Snappy
westus Ubuntu15.04SnappyDocker
westus UbunturollingSnappy
westus UbuntuServer
westus Ubuntu_Core
westus Ubuntu_Snappy_Core
westus Ubuntu_Snappy_Core_Docker
You see that in the West US region, Canonical publishes the UbuntuServer offer on Azure. But what SKUs? To get
those values, run az vm image list-skus and set the location, publisher, and offer that you discovered:
az vm image list-skus --location westus --publisher Canonical --offer UbuntuServer --output table
Output:
Location Name
---------- -----------------
westus 12.04.3-LTS
westus 12.04.4-LTS
westus 12.04.5-DAILY-LTS
westus 12.04.5-LTS
westus 12.10
westus 14.04.0-LTS
westus 14.04.1-LTS
westus 14.04.2-LTS
westus 14.04.3-LTS
westus 14.04.4-LTS
westus 14.04.5-DAILY-LTS
westus 14.04.5-LTS
westus 16.04-beta
westus 16.04-DAILY-LTS
westus 16.04-LTS
westus 16.04.0-LTS
westus 16.10
westus 16.10-DAILY
westus 17.04
westus 17.04-DAILY
westus 17.10-DAILY
Finally, use the az vm image list command to find a specific version of the SKU you want, for example, 16.04-LTS:
az vm image list --location westus --publisher Canonical --offer UbuntuServer --sku 16.04-LTS --all --output table
Output:
...
Now you can choose precisely the image you want to use by taking note of the URN value. Pass this value with the
--image parameter when you create a VM with the az vm create command. Remember that you can optionally
replace the version number in the URN with "latest". This version is always the latest version of the image.
If you deploy a VM with a Resource Manager template, you set the image parameters individually in the
imageReference properties. See the template reference.
To view all the information about a specific version of an image, run az vm image show:

az vm image show --location westus --publisher Canonical --offer UbuntuServer --sku 16.04-LTS --version 16.04.201801260
Output:
{
"dataDiskImages": [],
"id": "/Subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-
xxxxxxxxxxxx/Providers/Microsoft.Compute/Locations/westus/Publishers/Canonical/ArtifactTypes/VMImage/Offers/Ub
untuServer/Skus/16.04-LTS/Versions/16.04.201801260",
"location": "westus",
"name": "16.04.201801260",
"osDiskImage": {
"operatingSystem": "Linux"
},
"plan": null,
"tags": null
}
Running a similar command for the RabbitMQ Certified by Bitnami image shows the following plan properties:
name, product, and publisher. (Some images also have a promotion code property.) To deploy this image, see
the following sections to accept the terms and enable programmatic deployment.
az vm image show --location westus --publisher bitnami --offer rabbitmq --sku rabbitmq --version 3.7.1801130730
Output:
{
"dataDiskImages": [],
"id": "/Subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-
xxxxxxxxxxxx/Providers/Microsoft.Compute/Locations/westus/Publishers/bitnami/ArtifactTypes/VMImage/Offers/rabb
itmq/Skus/rabbitmq/Versions/3.7.1801130730",
"location": "westus",
"name": "3.7.1801130730",
"osDiskImage": {
"operatingSystem": "Linux"
},
"plan": {
"name": "rabbitmq",
"product": "rabbitmq",
"publisher": "bitnami"
},
"tags": null
}
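To accept the terms programmatically, you can use the az vm image accept-terms command; the following is a sketch that assumes the URN from the previous output:

az vm image accept-terms --urn bitnami:rabbitmq:rabbitmq:3.7.1801130730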
The output includes a licenseTextLink to the license terms, and indicates that the value of accepted is true :
{
"accepted": true,
"additionalProperties": {},
"id": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-
xxxxxxxxxxxx/providers/Microsoft.MarketplaceOrdering/offertypes/bitnami/offers/rabbitmq/plans/rabbitmq",
"licenseTextLink":
"https://storelegalterms.blob.core.windows.net/legalterms/3E5ED_legalterms_BITNAMI%253a24RABBITMQ%253a24RABBIT
MQ%253a24IGRT7HHPIFOBV3IQYJHEN2O2FGUVXXZ3WUYIMEIVF3KCUNJ7GTVXNNM23I567GBMNDWRFOY4WXJPN5PUYXNKB2QLAKCHP4IE5GO3B
2I.txt",
"name": "rabbitmq",
"plan": "rabbitmq",
"privacyPolicyLink": "https://bitnami.com/privacy",
"product": "rabbitmq",
"publisher": "bitnami",
"retrieveDatetime": "2018-02-22T04:06:28.7641907Z",
"signature":
"WVIEA3LAZIK7ZL2YRV5JYQXONPV76NQJW3FKMKDZYCRGXZYVDGX6BVY45JO3BXVMNA2COBOEYG2NO76ONORU7ITTRHGZDYNJNKLNLWI",
"type": "Microsoft.MarketplaceOrdering/offertypes"
}
NOTE
Azure has two different deployment models for creating and working with resources: Resource Manager and classic. This
article covers using both models, but Microsoft recommends that most new deployments use the Resource Manager model.
The Azure platform SLA applies to virtual machines running the Linux OS only when one of the endorsed
distributions is used. For these endorsed distributions, Linux images are provided in the Azure Marketplace with
the required configuration.
Linux on Azure - Endorsed Distributions
Support for Linux images in Microsoft Azure
All distributions running on Azure must meet a number of prerequisites to run properly on the platform. This
article is by no means comprehensive, as every distribution is different; it is possible that even if you meet all
the criteria below, you will still need to significantly tweak your Linux system to ensure that it properly runs on
the platform.
It is for this reason that we recommend that you start with one of the Linux on Azure endorsed distributions when possible.
The following article guides you through how to prepare the various endorsed Linux distributions that are
supported on Azure:
CentOS-based Distributions
Debian Linux
Oracle Linux
Red Hat Enterprise Linux
SLES & openSUSE
Ubuntu
The rest of this article focuses on general guidance for running your Linux distribution on Azure.
# cd /boot
# sudo cp initrd-`uname -r`.img initrd-`uname -r`.img.bak
Next, rebuild the initrd with the hv_vmbus and hv_storvsc kernel modules:
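On RHEL 6-style systems that use mkinitrd, the rebuild typically looks like the following (a sketch; dracut-based systems instead list the modules in /etc/dracut.conf and run dracut -f -v, as shown later in this article):

# sudo mkinitrd --preload=hv_storvsc --preload=hv_vmbus -v -f initrd-`uname -r`.img `uname -r`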
Resizing VHDs
VHD images on Azure must have a virtual size aligned to 1 MB. Typically, VHDs created using Hyper-V should
already be aligned correctly. If the VHD is not aligned correctly, you may receive an error message similar to the
following when you attempt to create an image from your VHD:
To remedy this behavior, resize the VHD using either the Hyper-V Manager console or the Resize-VHD PowerShell
cmdlet. If you are not running in a Windows environment, it is recommended to use qemu-img to convert (if
needed) and resize the VHD.
NOTE
There is a known bug in qemu-img versions >=2.2.1 that results in an improperly formatted VHD. The issue has been fixed in
QEMU 2.6. It is recommended to use either qemu-img 2.2.0 or lower, or update to 2.6 or higher. Reference:
https://bugs.launchpad.net/qemu/+bug/1490611.
1. Resizing the VHD directly using tools such as qemu-img or vbox-manage may result in an unbootable VHD.
So it is recommended to first convert the VHD to a RAW disk image. If the VM image was already created
as a RAW disk image (the default for some hypervisors such as KVM) then you may skip this step:
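# qemu-img convert -f vpc -O raw MyLinuxVM.vhd MyLinuxVM.raw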
2. Calculate the required size of the disk image to ensure that the virtual size is aligned to 1 MB. The following
bash shell script can assist with this. The script uses " qemu-img info " to determine the virtual size of the disk
image and then calculates the size to the next 1 MB:
rawdisk="MyLinuxVM.raw"
vhddisk="MyLinuxVM.vhd"
MB=$((1024*1024))
size=$(qemu-img info -f raw --output json "$rawdisk" | \
gawk 'match($0, /"virtual-size": ([0-9]+),/, val) {print val[1]}')
rounded_size=$((($size/$MB + 1)*$MB))
echo "Rounded Size = $rounded_size"
3. Resize the raw disk using $rounded_size as set in the above script:
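# qemu-img resize MyLinuxVM.raw $rounded_size

4. Convert the RAW disk back to a fixed-size VHD (a sketch; the subformat=fixed option produces the fixed VHD that Azure requires):

# qemu-img convert -f raw -o subformat=fixed -O vpc MyLinuxVM.raw MyLinuxVM.vhd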
At a minimum, the absence of the following patches has been known to cause problems on Azure, so they must
be included in the kernel. This list is by no means exhaustive or complete for all distributions:
ata_piix: defer disks to the Hyper-V drivers by default
storvsc: Account for in-transit packets in the RESET path
storvsc: avoid usage of WRITE_SAME
storvsc: Disable WRITE SAME for RAID and virtual host adapter drivers
storvsc: NULL pointer dereference fix
storvsc: ring buffer failures may result in I/O freeze
scsi_sysfs: protect against double execution of __scsi_remove_device
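The kernel boot line in your GRUB configuration should also include console parameters such as the following:

console=ttyS0,115200n8 earlyprintk=ttyS0,115200 rootdelay=300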
This also ensures that all console messages are sent to the first serial port, which can assist Azure support
with debugging issues.
In addition to the above, it is recommended to remove the following parameters if they exist:
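rhgb quiet crashkernel=auto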
Graphical and quiet boot are not useful in a cloud environment where we want all the logs to be sent to the
serial port. The crashkernel option may be left configured if desired, but note that this parameter reduces
the amount of available memory in the VM by 128MB or more, which may be problematic on the smaller
VM sizes.
Installing the Azure Linux Agent
The Azure Linux Agent is required for provisioning a Linux image on Azure. Many distributions provide the
agent as an RPM or Deb package (the package is typically called 'WALinuxAgent' or 'walinuxagent'). The
agent can also be installed manually by following the steps in the Linux Agent Guide.
Ensure that the SSH server is installed and configured to start at boot time. This is usually the default.
Do not create swap space on the OS disk
The Azure Linux Agent can automatically configure swap space using the local resource disk that is attached
to the VM after provisioning on Azure. The local resource disk is a temporary disk, and might be emptied
when the VM is deprovisioned. After installing the Azure Linux Agent (see previous step), modify the
following parameters in /etc/waagent.conf appropriately:
ResourceDisk.Format=y
ResourceDisk.Filesystem=ext4
ResourceDisk.MountPoint=/mnt/resource
ResourceDisk.EnableSwap=y
ResourceDisk.SwapSizeMB=2048 ## NOTE: set this to whatever you need it to be.
As a final step, run the following commands to deprovision the virtual machine:
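# sudo waagent -force -deprovision
# export HISTSIZE=0
# logout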
NOTE
On Virtualbox you may see the following error after running 'waagent -force -deprovision':
[Errno 5] Input/output error . This error message is not critical and can be ignored.
Shut down the virtual machine and upload the VHD to Azure.
Prepare an Ubuntu virtual machine for Azure
4/9/2018 • 3 min to read
NOTE
Azure has two different deployment models for creating and working with resources: Resource Manager and classic. This
article covers using both models, but Microsoft recommends that most new deployments use the Resource Manager model.
Prerequisites
This article assumes that you have already installed an Ubuntu Linux operating system to a virtual hard disk.
Multiple tools exist to create .vhd files, for example a virtualization solution such as Hyper-V. For instructions, see
Install the Hyper-V Role and Configure a Virtual Machine.
Ubuntu installation notes
Please see also General Linux Installation Notes for more tips on preparing Linux for Azure.
The VHDX format is not supported in Azure, only fixed VHD. You can convert the disk to VHD format using
Hyper-V Manager or the convert-vhd cmdlet.
When installing the Linux system it is recommended that you use standard partitions rather than LVM (often
the default for many installations). This will avoid LVM name conflicts with cloned VMs, particularly if an OS
disk ever needs to be attached to another VM for troubleshooting. LVM or RAID may be used on data disks if
preferred.
Do not configure a swap partition on the OS disk. The Linux agent can be configured to create a swap file on
the temporary resource disk. More information about this can be found in the steps below.
All VHDs on Azure must have a virtual size aligned to 1MB. When converting from a raw disk to VHD you
must ensure that the raw disk size is a multiple of 1MB before conversion. See Linux Installation Notes for
more information.
Manual steps
NOTE
Before attempting to create your own custom Ubuntu image for Azure, please consider using the pre-built and tested images
from http://cloud-images.ubuntu.com/ instead.
Ubuntu 12.04:
Ubuntu 14.04:
Ubuntu 16.04:
4. The Ubuntu Azure images now follow the hardware enablement (HWE) kernel. Update the operating
system to the latest kernel by running the following commands:
Ubuntu 12.04:
# sudo reboot
Ubuntu 14.04:
# sudo reboot
Ubuntu 16.04:
# sudo reboot
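For example, on Ubuntu 16.04 the HWE kernel can typically be installed as follows (a sketch; check the HWE documentation linked below for the package that matches your release):

# sudo apt-get update
# sudo apt-get install --install-recommends linux-generic-hwe-16.04
# sudo reboot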
See also:
https://wiki.ubuntu.com/Kernel/LTSEnablementStack
https://wiki.ubuntu.com/Kernel/RollingLTSEnablementStack
5. Modify the kernel boot line for Grub to include additional kernel parameters for Azure. To do this open
/etc/default/grub in a text editor, find the variable called GRUB_CMDLINE_LINUX_DEFAULT (or add it if needed)
and edit it to include the following parameters:
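GRUB_CMDLINE_LINUX_DEFAULT="console=tty1 console=ttyS0 earlyprintk=ttyS0 rootdelay=300"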
Save and close this file, and then run sudo update-grub . This will ensure all console messages are sent to the
first serial port, which can assist Azure technical support with debugging issues.
6. Ensure that the SSH server is installed and configured to start at boot time. This is usually the default.
7. Install the Azure Linux Agent:
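# sudo apt-get update
# sudo apt-get install walinuxagent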
NOTE
The walinuxagent package may remove the NetworkManager and NetworkManager-gnome packages, if they are
installed.
8. Run the following commands to deprovision the virtual machine and prepare it for provisioning on Azure:
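# sudo waagent -force -deprovision
# export HISTSIZE=0
# logout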
9. Click Action -> Shut Down in Hyper-V Manager. Your Linux VHD is now ready to be uploaded to Azure.
Next steps
You're now ready to use your Ubuntu Linux virtual hard disk to create new virtual machines in Azure. If this is the
first time that you're uploading the .vhd file to Azure, see Create a Linux VM from a custom disk.
References
Ubuntu hardware enablement (HWE) kernel:
http://blog.utlemming.org/2015/01/ubuntu-1404-azure-images-now-tracking.html
http://blog.utlemming.org/2015/02/1204-azure-cloud-images-now-using-hwe.html
Prepare a CentOS-based virtual machine for Azure
5/7/2018 • 9 min to read
NOTE
Azure has two different deployment models for creating and working with resources: Resource Manager and classic. This
article covers using both models, but Microsoft recommends that most new deployments use the Resource Manager model.
Prerequisites
This article assumes that you have already installed a CentOS (or similar derivative) Linux operating system to a
virtual hard disk. Multiple tools exist to create .vhd files, for example a virtualization solution such as Hyper-V. For
instructions, see Install the Hyper-V Role and Configure a Virtual Machine.
CentOS installation notes
Please see also General Linux Installation Notes for more tips on preparing Linux for Azure.
The VHDX format is not supported in Azure, only fixed VHD. You can convert the disk to VHD format using
Hyper-V Manager or the convert-vhd cmdlet. If you are using VirtualBox this means selecting Fixed size as
opposed to the default dynamically allocated when creating the disk.
When installing the Linux system it is recommended that you use standard partitions rather than LVM (often
the default for many installations). This will avoid LVM name conflicts with cloned VMs, particularly if an OS
disk ever needs to be attached to another identical VM for troubleshooting. LVM or RAID may be used on data
disks.
Kernel support for mounting UDF file systems is required. At first boot on Azure the provisioning configuration
is passed to the Linux VM via UDF -formatted media that is attached to the guest. The Azure Linux agent must
be able to mount the UDF file system to read its configuration and provision the VM.
Linux kernel versions below 2.6.37 do not support NUMA on Hyper-V with larger VM sizes. This issue
primarily impacts older distributions using the upstream Red Hat 2.6.32 kernel, and was fixed in RHEL 6.6
(kernel-2.6.32-504). Systems running custom kernels older than 2.6.37, or RHEL-based kernels older than
2.6.32-504 must set the boot parameter numa=off on the kernel command-line in grub.conf. For more
information see Red Hat KB 436883.
Do not configure a swap partition on the OS disk. The Linux agent can be configured to create a swap file on
the temporary resource disk. More information about this can be found in the steps below.
All VHDs on Azure must have a virtual size aligned to 1MB. When converting from a raw disk to VHD you
must ensure that the raw disk size is a multiple of 1MB before conversion. See Linux Installation Notes for
more information.
CentOS 6.x
1. In Hyper-V Manager, select the virtual machine.
2. Click Connect to open a console window for the virtual machine.
3. In CentOS 6, NetworkManager can interfere with the Azure Linux agent. Uninstall this package by running
the following command:
# sudo rpm -e --nodeps NetworkManager
4. Create or edit the file /etc/sysconfig/network and add the following text:
NETWORKING=yes
HOSTNAME=localhost.localdomain
5. Create or edit the file /etc/sysconfig/network-scripts/ifcfg-eth0 and add the following text:
DEVICE=eth0
ONBOOT=yes
BOOTPROTO=dhcp
TYPE=Ethernet
USERCTL=no
PEERDNS=yes
IPV6INIT=no
6. Modify udev rules to avoid generating static rules for the Ethernet interface(s). These rules can cause
problems when cloning a virtual machine in Microsoft Azure or Hyper-V:
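# sudo ln -s /dev/null /etc/udev/rules.d/75-persistent-net-generator.rules
# sudo rm -f /etc/udev/rules.d/70-persistent-net.rules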
7. Ensure the network service will start at boot time by running the following command:
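# sudo chkconfig network on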
8. If you would like to use the OpenLogic mirrors that are hosted within the Azure datacenters, then replace
the /etc/yum.repos.d/CentOS-Base.repo file with the following repositories. This will also add the
[openlogic] repository that includes additional packages such as the Azure Linux agent:
[openlogic]
name=CentOS-$releasever - openlogic packages for $basearch
baseurl=http://olcentgbl.trafficmanager.net/openlogic/$releasever/openlogic/$basearch/
enabled=1
gpgcheck=0
[base]
name=CentOS-$releasever - Base
#mirrorlist=http://mirrorlist.centos.org/?release=$releasever&arch=$basearch&repo=os&infra=$infra
baseurl=http://olcentgbl.trafficmanager.net/centos/$releasever/os/$basearch/
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-6
#released updates
[updates]
name=CentOS-$releasever - Updates
#mirrorlist=http://mirrorlist.centos.org/?release=$releasever&arch=$basearch&repo=updates&infra=$infra
baseurl=http://olcentgbl.trafficmanager.net/centos/$releasever/updates/$basearch/
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-6
NOTE
The rest of this guide will assume you are using at least the [openlogic] repo, which will be used to install the
Azure Linux agent below.
9. To speed up package downloads, edit /etc/yum.conf and add the following line to the [main] section:
http_caching=packages
10. Run the following command to clear the current yum metadata and update the system with the latest
packages:
# yum clean all
Unless you are creating an image for an older version of CentOS, it is recommended to update all the
packages to the latest:
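# sudo yum -y update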
IMPORTANT
This step is required for CentOS 6.3 and earlier, and optional for later releases.
# sudo rpm -e hypervkvpd ## (may return error if not installed, that's OK)
# sudo yum install microsoft-hyper-v
Alternatively, you can follow the manual installation instructions on the LIS download page to install the
RPM onto your VM.
12. Install the Azure Linux Agent and dependencies:
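# sudo yum install python-pyasn1 WALinuxAgent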
The WALinuxAgent package will remove the NetworkManager and NetworkManager-gnome packages if
they were not already removed as described in step 3.
13. Modify the kernel boot line in your grub configuration to include additional kernel parameters for Azure. To
do this, open /boot/grub/menu.lst in a text editor and ensure that the default kernel includes the following
parameters:
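console=ttyS0 earlyprintk=ttyS0 rootdelay=300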
This will also ensure all console messages are sent to the first serial port, which can assist Azure support
with debugging issues.
In addition to the above, it is recommended to remove the following parameters:
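rhgb quiet crashkernel=auto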
Graphical and quiet boot are not useful in a cloud environment where we want all the logs to be sent to the
serial port. The crashkernel option may be left configured if desired, but note that this parameter will
reduce the amount of available memory in the VM by 128MB or more, which may be problematic on the
smaller VM sizes.
IMPORTANT
CentOS 6.5 and earlier must also set the kernel parameter numa=off . See Red Hat KB 436883.
14. Ensure that the SSH server is installed and configured to start at boot time. This is usually the default.
15. Do not create swap space on the OS disk.
The Azure Linux Agent can automatically configure swap space using the local resource disk that is attached
to the VM after provisioning on Azure. Note that the local resource disk is a temporary disk, and might be
emptied when the VM is deprovisioned. After installing the Azure Linux Agent (see previous step), modify
the following parameters in /etc/waagent.conf appropriately:
ResourceDisk.Format=y
ResourceDisk.Filesystem=ext4
ResourceDisk.MountPoint=/mnt/resource
ResourceDisk.EnableSwap=y
ResourceDisk.SwapSizeMB=2048 ## NOTE: set this to whatever you need it to be.
16. Run the following commands to deprovision the virtual machine and prepare it for provisioning on Azure:
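# sudo waagent -force -deprovision
# export HISTSIZE=0
# logout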
17. Click Action -> Shut Down in Hyper-V Manager. Your Linux VHD is now ready to be uploaded to Azure.
CentOS 7.0+
Changes in CentOS 7 (and similar derivatives)
Preparing a CentOS 7 virtual machine for Azure is very similar to preparing a CentOS 6 virtual machine; however,
there are several important differences worth noting:
The NetworkManager package no longer conflicts with the Azure Linux agent. This package is installed by
default and we recommend that it is not removed.
GRUB2 is now used as the default bootloader, so the procedure for editing kernel parameters has changed (see
below).
XFS is now the default file system. The ext4 file system can still be used if desired.
Configuration Steps
1. In Hyper-V Manager, select the virtual machine.
2. Click Connect to open a console window for the virtual machine.
3. Create or edit the file /etc/sysconfig/network and add the following text:
NETWORKING=yes
HOSTNAME=localhost.localdomain
4. Create or edit the file /etc/sysconfig/network-scripts/ifcfg-eth0 and add the following text:
DEVICE=eth0
ONBOOT=yes
BOOTPROTO=dhcp
TYPE=Ethernet
USERCTL=no
PEERDNS=yes
IPV6INIT=no
NM_CONTROLLED=no
5. Modify udev rules to avoid generating static rules for the Ethernet interface(s). These rules can cause
problems when cloning a virtual machine in Microsoft Azure or Hyper-V:
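# sudo ln -s /dev/null /etc/udev/rules.d/75-persistent-net-generator.rules
# sudo rm -f /etc/udev/rules.d/70-persistent-net.rules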
6. If you would like to use the OpenLogic mirrors that are hosted within the Azure datacenters, then replace
the /etc/yum.repos.d/CentOS-Base.repo file with the following repositories. This will also add the
[openlogic] repository that includes packages for the Azure Linux agent:
[openlogic]
name=CentOS-$releasever - openlogic packages for $basearch
baseurl=http://olcentgbl.trafficmanager.net/openlogic/$releasever/openlogic/$basearch/
enabled=1
gpgcheck=0
[base]
name=CentOS-$releasever - Base
#mirrorlist=http://mirrorlist.centos.org/?release=$releasever&arch=$basearch&repo=os&infra=$infra
baseurl=http://olcentgbl.trafficmanager.net/centos/$releasever/os/$basearch/
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-7
#released updates
[updates]
name=CentOS-$releasever - Updates
#mirrorlist=http://mirrorlist.centos.org/?release=$releasever&arch=$basearch&repo=updates&infra=$infra
baseurl=http://olcentgbl.trafficmanager.net/centos/$releasever/updates/$basearch/
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-7
7. Run the following command to clear the current yum metadata and install any updates:
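# sudo yum clean all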
Unless you are creating an image for an older version of CentOS, it is recommended to update all the
packages to the latest:
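# sudo yum -y update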
8. Modify the kernel boot line in your grub configuration to include additional kernel parameters for Azure:
open /etc/default/grub in a text editor and edit the GRUB_CMDLINE_LINUX parameter (see the Red Hat
Enterprise Linux section later in this article for an example). This will also ensure all console messages are
sent to the first serial port, which can assist Azure support with debugging issues. It also turns off the new
CentOS 7 naming conventions for NICs. In addition to the above, it is recommended to remove the following
parameters:
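rhgb quiet crashkernel=auto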
Graphical and quiet boot are not useful in a cloud environment where we want all the logs to be sent to the
serial port. The crashkernel option may be left configured if desired, but note that this parameter will
reduce the amount of available memory in the VM by 128MB or more, which may be problematic on the
smaller VM sizes.
9. Once you are done editing /etc/default/grub per above, run the following command to rebuild the grub
configuration:
10. If building the image from VMware, VirtualBox, or KVM, ensure the Hyper-V drivers are included in the
initramfs. Edit /etc/dracut.conf and add the following content:
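add_drivers+=" hv_vmbus hv_storvsc "
Then rebuild the initramfs: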
# sudo dracut -f -v
ResourceDisk.Format=y
ResourceDisk.Filesystem=ext4
ResourceDisk.MountPoint=/mnt/resource
ResourceDisk.EnableSwap=y
ResourceDisk.SwapSizeMB=2048 ## NOTE: set this to whatever you need it to be.
13. Run the following commands to deprovision the virtual machine and prepare it for provisioning on Azure:
14. Click Action -> Shut Down in Hyper-V Manager. Your Linux VHD is now ready to be uploaded to Azure.
Next steps
You're now ready to use your CentOS Linux virtual hard disk to create new virtual machines in Azure. If this is the
first time that you're uploading the .vhd file to Azure, see Create a Linux VM from a custom disk.
Prepare a Red Hat-based virtual machine for Azure
5/7/2018 • 26 min to read
In this article, you will learn how to prepare a Red Hat Enterprise Linux (RHEL ) virtual machine for use in Azure.
The versions of RHEL that are covered in this article are 6.7+ and 7.1+. The hypervisors for preparation that are
covered in this article are Hyper-V, kernel-based virtual machine (KVM ), and VMware. For more information
about eligibility requirements for participating in Red Hat's Cloud Access program, see Red Hat's Cloud Access
website and Running RHEL on Azure.
4. Create or edit the /etc/sysconfig/network file, and add the following text:
NETWORKING=yes
HOSTNAME=localhost.localdomain
5. Create or edit the /etc/sysconfig/network-scripts/ifcfg-eth0 file, and add the following text:
DEVICE=eth0
ONBOOT=yes
BOOTPROTO=dhcp
TYPE=Ethernet
USERCTL=no
PEERDNS=yes
IPV6INIT=no
6. Move (or remove) the udev rules to avoid generating static rules for the Ethernet interface. These rules
cause problems when you clone a virtual machine in Microsoft Azure or Hyper-V:
# sudo rm -f /etc/udev/rules.d/70-persistent-net.rules
7. Ensure that the network service will start at boot time by running the following command:
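# sudo chkconfig network on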
8. Register your Red Hat subscription to enable the installation of packages from the RHEL repository by
running the following command:
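# sudo subscription-manager register --auto-attach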
9. The WALinuxAgent package, WALinuxAgent-<version> , has been pushed to the Red Hat extras repository.
Enable the extras repository by running the following command:
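# subscription-manager repos --enable=rhel-6-server-extras-rpms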
10. Modify the kernel boot line in your grub configuration to include additional kernel parameters for Azure. To
do this modification, open /boot/grub/menu.lst in a text editor, and ensure that the default kernel includes
the following parameters:
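console=ttyS0 earlyprintk=ttyS0 rootdelay=300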
This will also ensure that all console messages are sent to the first serial port, which can assist Azure
support with debugging issues.
In addition, we recommend that you remove the following parameters:
rhgb quiet crashkernel=auto
Graphical and quiet boot are not useful in a cloud environment where we want all the logs to be sent to the
serial port. You can leave the crashkernel option configured if desired. Note that this parameter reduces
the amount of available memory in the virtual machine by 128 MB or more. This configuration might be
problematic on smaller virtual machine sizes.
IMPORTANT
RHEL 6.5 and earlier must also set the numa=off kernel parameter. See Red Hat KB 436883.
11. Ensure that the secure shell (SSH) server is installed and configured to start at boot time, which is usually
the default. Modify /etc/ssh/sshd_config to include the following line:
ClientAliveInterval 180
12. Install the Azure Linux Agent by running the following command:
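# sudo yum install WALinuxAgent
# sudo chkconfig waagent on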
ResourceDisk.Format=y
ResourceDisk.Filesystem=ext4
ResourceDisk.MountPoint=/mnt/resource
ResourceDisk.EnableSwap=y
ResourceDisk.SwapSizeMB=2048 ## NOTE: set this to whatever you need it to be.
14. Unregister the subscription (if necessary) by running the following command:
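# subscription-manager unregister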
15. Run the following commands to deprovision the virtual machine and prepare it for provisioning on Azure:
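# sudo waagent -force -deprovision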
# export HISTSIZE=0
# logout
16. Click Action > Shut Down in Hyper-V Manager. Your Linux VHD is now ready to be uploaded to Azure.
Prepare a RHEL 7 virtual machine from Hyper-V Manager
1. In Hyper-V Manager, select the virtual machine.
2. Click Connect to open a console window for the virtual machine.
3. Create or edit the /etc/sysconfig/network file, and add the following text:
NETWORKING=yes
HOSTNAME=localhost.localdomain
4. Create or edit the /etc/sysconfig/network-scripts/ifcfg-eth0 file, and add the following text:
DEVICE=eth0
ONBOOT=yes
BOOTPROTO=dhcp
TYPE=Ethernet
USERCTL=no
PEERDNS=yes
IPV6INIT=no
NM_CONTROLLED=no
5. Ensure that the network service will start at boot time by running the following command:
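# sudo chkconfig network on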
6. Register your Red Hat subscription to enable the installation of packages from the RHEL repository by
running the following command:
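# sudo subscription-manager register --auto-attach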
7. Modify the kernel boot line in your grub configuration to include additional kernel parameters for Azure. To
do this modification, open /etc/default/grub in a text editor, and edit the GRUB_CMDLINE_LINUX parameter.
For example:
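GRUB_CMDLINE_LINUX="rootdelay=300 console=ttyS0 earlyprintk=ttyS0 net.ifnames=0"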
This will also ensure that all console messages are sent to the first serial port, which can assist Azure
support with debugging issues. This configuration also turns off the new RHEL 7 naming conventions for
NICs. In addition, we recommend that you remove the following parameters:
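rhgb quiet crashkernel=auto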
Graphical and quiet boot are not useful in a cloud environment where we want all the logs to be sent to the
serial port. You can leave the crashkernel option configured if desired. Note that this parameter reduces
the amount of available memory in the virtual machine by 128 MB or more, which might be problematic
on smaller virtual machine sizes.
8. After you are done editing /etc/default/grub , run the following command to rebuild the grub
configuration:
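# sudo grub2-mkconfig -o /boot/grub2/grub.cfg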
9. Ensure that the secure shell (SSH) server is installed and configured to start at boot time, which is usually
the default. Modify /etc/ssh/sshd_config to include the following line:
ClientAliveInterval 180
10. The WALinuxAgent package, WALinuxAgent-<version> , has been pushed to the Red Hat extras repository.
Enable the extras repository by running the following command:
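# subscription-manager repos --enable=rhel-7-server-extras-rpms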
11. Install the Azure Linux Agent by running the following command:
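# sudo yum install WALinuxAgent
# sudo systemctl enable waagent.service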
ResourceDisk.Format=y
ResourceDisk.Filesystem=ext4
ResourceDisk.MountPoint=/mnt/resource
ResourceDisk.EnableSwap=y
ResourceDisk.SwapSizeMB=2048 ## NOTE: set this to whatever you need it to be.
13. If you want to unregister the subscription, run the following command:
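# subscription-manager unregister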
14. Run the following commands to deprovision the virtual machine and prepare it for provisioning on Azure:
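# sudo waagent -force -deprovision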
# export HISTSIZE=0
# logout
15. Click Action > Shut Down in Hyper-V Manager. Your Linux VHD is now ready to be uploaded to Azure.
Change the second field of the root user from "!!" to the encrypted password.
3. Create a virtual machine in KVM from the qcow2 image. Set the disk type to qcow2, and set the virtual
network interface device model to virtio. Then, start the virtual machine, and sign in as root.
4. Create or edit the /etc/sysconfig/network file, and add the following text:
NETWORKING=yes
HOSTNAME=localhost.localdomain
5. Create or edit the /etc/sysconfig/network-scripts/ifcfg-eth0 file, and add the following text:
DEVICE=eth0
ONBOOT=yes
BOOTPROTO=dhcp
TYPE=Ethernet
USERCTL=no
PEERDNS=yes
IPV6INIT=no
6. Move (or remove) the udev rules to avoid generating static rules for the Ethernet interface. These rules
cause problems when you clone a virtual machine in Azure or Hyper-V:
# sudo rm -f /etc/udev/rules.d/70-persistent-net.rules
7. Ensure that the network service will start at boot time by running the following command:
# chkconfig network on
8. Register your Red Hat subscription to enable the installation of packages from the RHEL repository by
running the following command:
9. Modify the kernel boot line in your grub configuration to include additional kernel parameters for Azure. To
do this configuration, open /boot/grub/menu.lst in a text editor, and ensure that the default kernel includes
the following parameters:
Graphical and quiet boot are not useful in a cloud environment where we want all the logs to be sent to the
serial port. You can leave the crashkernel option configured if desired. Note that this parameter reduces
the amount of available memory in the virtual machine by 128 MB or more, which might be problematic
on smaller virtual machine sizes.
IMPORTANT
RHEL 6.5 and earlier must also set the numa=off kernel parameter. See Red Hat KB 436883.
Rebuild initramfs:
# dracut -f -v
12. Ensure that the SSH server is installed and configured to start at boot time:
# chkconfig sshd on
PasswordAuthentication yes
ClientAliveInterval 180
13. The WALinuxAgent package, WALinuxAgent-<version> , has been pushed to the Red Hat extras repository.
Enable the extras repository by running the following command:
14. Install the Azure Linux Agent by running the following command:
# chkconfig waagent on
15. The Azure Linux Agent can automatically configure swap space by using the local resource disk that is
attached to the virtual machine after the virtual machine is provisioned on Azure. Note that the local
resource disk is a temporary disk, and it might be emptied when the virtual machine is deprovisioned. After
you install the Azure Linux Agent in the previous step, modify the following parameters in
/etc/waagent.conf appropriately:
ResourceDisk.Format=y
ResourceDisk.Filesystem=ext4
ResourceDisk.MountPoint=/mnt/resource
ResourceDisk.EnableSwap=y
ResourceDisk.SwapSizeMB=2048 ## NOTE: set this to whatever you need it to be.
16. Unregister the subscription (if necessary) by running the following command:
# subscription-manager unregister
17. Run the following commands to deprovision the virtual machine and prepare it for provisioning on Azure:
# export HISTSIZE=0
# logout
NOTE
There is a known bug in qemu-img versions >=2.2.1 that results in an improperly formatted VHD. The issue has been fixed
in QEMU 2.6. It is recommended to use either qemu-img 2.2.0 or lower, or update to 2.6 or higher. Reference:
https://bugs.launchpad.net/qemu/+bug/1490611.
Make sure that the size of the raw image is aligned with 1 MB. Otherwise, round up the size to align with 1
MB:
# MB=$((1024*1024))
# size=$(qemu-img info -f raw --output json "rhel-6.9.raw" | \
gawk 'match($0, /"virtual-size": ([0-9]+),/, val) {print val[1]}')
# rounded_size=$((($size/$MB + 1)*$MB))
# qemu-img resize rhel-6.9.raw $rounded_size
Change the second field of root user from "!!" to the encrypted password.
3. Create a virtual machine in KVM from the qcow2 image. Set the disk type to qcow2, and set the virtual
network interface device model to virtio. Then, start the virtual machine, and sign in as root.
4. Create or edit the /etc/sysconfig/network file, and add the following text:
NETWORKING=yes
HOSTNAME=localhost.localdomain
5. Create or edit the /etc/sysconfig/network-scripts/ifcfg-eth0 file, and add the following text:
DEVICE=eth0
ONBOOT=yes
BOOTPROTO=dhcp
TYPE=Ethernet
USERCTL=no
PEERDNS=yes
IPV6INIT=no
NM_CONTROLLED=no
6. Ensure that the network service will start at boot time by running the following command:
7. Register your Red Hat subscription to enable installation of packages from the RHEL repository by running
the following command:
8. Modify the kernel boot line in your grub configuration to include additional kernel parameters for Azure. To
do this configuration, open /etc/default/grub in a text editor, and edit the GRUB_CMDLINE_LINUX parameter.
For example:
Graphical and quiet boot are not useful in a cloud environment where we want all the logs to be sent to the
serial port. You can leave the crashkernel option configured if desired. Note that this parameter reduces
the amount of available memory in the virtual machine by 128 MB or more, which might be problematic
on smaller virtual machine sizes.
9. After you are done editing /etc/default/grub , run the following command to rebuild the grub
configuration:
# grub2-mkconfig -o /boot/grub2/grub.cfg
Rebuild initramfs:
# dracut -f -v
12. Ensure that the SSH server is installed and configured to start at boot time:
PasswordAuthentication yes
ClientAliveInterval 180
13. The WALinuxAgent package, WALinuxAgent-<version> , has been pushed to the Red Hat extras repository.
Enable the extras repository by running the following command:
14. Install the Azure Linux Agent by running the following command:
ResourceDisk.Format=y
ResourceDisk.Filesystem=ext4
ResourceDisk.MountPoint=/mnt/resource
ResourceDisk.EnableSwap=y
ResourceDisk.SwapSizeMB=2048 ## NOTE: set this to whatever you need it to be.
16. Unregister the subscription (if necessary) by running the following command:
# subscription-manager unregister
17. Run the following commands to deprovision the virtual machine and prepare it for provisioning on Azure:
# export HISTSIZE=0
# logout
NOTE
There is a known bug in qemu-img versions >=2.2.1 that results in an improperly formatted VHD. The issue has been fixed
in QEMU 2.6. It is recommended to use either qemu-img 2.2.0 or lower, or update to 2.6 or higher. Reference:
https://bugs.launchpad.net/qemu/+bug/1490611.
First convert the image to raw format:
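# qemu-img convert -f qcow2 -O raw rhel-7.4.qcow2 rhel-7.4.raw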
Make sure that the size of the raw image is aligned with 1 MB. Otherwise, round up the size to align with 1
MB:
# MB=$((1024*1024))
# size=$(qemu-img info -f raw --output json "rhel-7.4.raw" | \
gawk 'match($0, /"virtual-size": ([0-9]+),/, val) {print val[1]}')
# rounded_size=$((($size/$MB + 1)*$MB))
# qemu-img resize rhel-7.4.raw $rounded_size
2. Create a file named network in the /etc/sysconfig/ directory that contains the following text:
NETWORKING=yes
HOSTNAME=localhost.localdomain
3. Create or edit the /etc/sysconfig/network-scripts/ifcfg-eth0 file, and add the following text:
DEVICE=eth0
ONBOOT=yes
BOOTPROTO=dhcp
TYPE=Ethernet
USERCTL=no
PEERDNS=yes
IPV6INIT=no
4. Move (or remove) the udev rules to avoid generating static rules for the Ethernet interface. These rules
cause problems when you clone a virtual machine in Azure or Hyper-V:
# sudo rm -f /etc/udev/rules.d/70-persistent-net.rules
5. Ensure that the network service will start at boot time by running the following command:
6. Register your Red Hat subscription to enable the installation of packages from the RHEL repository by
running the following command:
7. The WALinuxAgent package, WALinuxAgent-<version> , has been pushed to the Red Hat extras repository.
Enable the extras repository by running the following command:
8. Modify the kernel boot line in your grub configuration to include additional kernel parameters for Azure. To
do this, open /etc/default/grub in a text editor, and edit the GRUB_CMDLINE_LINUX parameter. For example:
This will also ensure that all console messages are sent to the first serial port, which can assist Azure
support with debugging issues. In addition, we recommend that you remove the following parameters:
Graphical and quiet boot are not useful in a cloud environment where we want all the logs to be sent to the
serial port. You can leave the crashkernel option configured if desired. Note that this parameter reduces
the amount of available memory in the virtual machine by 128 MB or more, which might be problematic
on smaller virtual machine sizes.
9. Add Hyper-V modules to initramfs:
Edit /etc/dracut.conf , and add the following content:
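add_drivers+=" hv_vmbus hv_storvsc "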
Rebuild initramfs:
# dracut -f -v
10. Ensure that the SSH server is installed and configured to start at boot time, which is usually the default.
Modify /etc/ssh/sshd_config to include the following line:
ClientAliveInterval 180
11. Install the Azure Linux Agent by running the following command:
ResourceDisk.Format=y
ResourceDisk.Filesystem=ext4
ResourceDisk.MountPoint=/mnt/resource
ResourceDisk.EnableSwap=y
ResourceDisk.SwapSizeMB=2048 ## NOTE: set this to whatever you need it to be.
13. Unregister the subscription (if necessary) by running the following command:
14. Run the following commands to deprovision the virtual machine and prepare it for provisioning on Azure:
# export HISTSIZE=0
# logout
15. Shut down the virtual machine, and convert the VMDK file to a .vhd file.
NOTE
There is a known bug in qemu-img versions >=2.2.1 that results in an improperly formatted VHD. The issue has been fixed
in QEMU 2.6. It is recommended to use either qemu-img 2.2.0 or lower, or update to 2.6 or higher. Reference:
https://bugs.launchpad.net/qemu/+bug/1490611.
First convert the image to raw format:
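# qemu-img convert -f vmdk -O raw rhel-6.9.vmdk rhel-6.9.raw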
Make sure that the size of the raw image is aligned with 1 MB. Otherwise, round up the size to align with 1
MB:
# MB=$((1024*1024))
# size=$(qemu-img info -f raw --output json "rhel-6.9.raw" | \
gawk 'match($0, /"virtual-size": ([0-9]+),/, val) {print val[1]}')
# rounded_size=$((($size/$MB + 1)*$MB))
# qemu-img resize rhel-6.9.raw $rounded_size
NETWORKING=yes
HOSTNAME=localhost.localdomain
2. Create or edit the /etc/sysconfig/network-scripts/ifcfg-eth0 file, and add the following text:
DEVICE=eth0
ONBOOT=yes
BOOTPROTO=dhcp
TYPE=Ethernet
USERCTL=no
PEERDNS=yes
IPV6INIT=no
NM_CONTROLLED=no
3. Ensure that the network service will start at boot time by running the following command:
4. Register your Red Hat subscription to enable the installation of packages from the RHEL repository by
running the following command:
5. Modify the kernel boot line in your grub configuration to include additional kernel parameters for Azure. To
do this modification, open /etc/default/grub in a text editor, and edit the GRUB_CMDLINE_LINUX parameter.
For example:
This configuration also ensures that all console messages are sent to the first serial port, which can assist
Azure support with debugging issues. It also turns off the new RHEL 7 naming conventions for NICs. In
addition, we recommend that you remove the following parameters:
Graphical and quiet boot are not useful in a cloud environment where we want all the logs to be sent to the
serial port. You can leave the crashkernel option configured if desired. Note that this parameter reduces
the amount of available memory in the virtual machine by 128 MB or more, which might be problematic
on smaller virtual machine sizes.
6. After you are done editing /etc/default/grub , run the following command to rebuild the grub
configuration:
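# sudo grub2-mkconfig -o /boot/grub2/grub.cfg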
Rebuild initramfs:
# dracut -f -v
8. Ensure that the SSH server is installed and configured to start at boot time. This setting is usually the
default. Modify /etc/ssh/sshd_config to include the following line:
ClientAliveInterval 180
9. The WALinuxAgent package, WALinuxAgent-<version> , has been pushed to the Red Hat extras repository.
Enable the extras repository by running the following command:
10. Install the Azure Linux Agent by running the following command:
12. If you want to unregister the subscription, run the following command:
13. Run the following commands to deprovision the virtual machine and prepare it for provisioning on Azure:
# export HISTSIZE=0
# logout
14. Shut down the virtual machine, and convert the VMDK file to the VHD format.
NOTE
There is a known bug in qemu-img versions >=2.2.1 that results in an improperly formatted VHD. The issue has been fixed
in QEMU 2.6. It is recommended to use either qemu-img 2.2.0 or lower, or update to 2.6 or higher. Reference:
https://bugs.launchpad.net/qemu/+bug/1490611.
Make sure that the size of the raw image is aligned with 1 MB. Otherwise, round up the size to align with 1
MB:
# MB=$((1024*1024))
# size=$(qemu-img info -f raw --output json "rhel-7.4.raw" | \
gawk 'match($0, /"virtual-size": ([0-9]+),/, val) {print val[1]}')
# rounded_size=$((($size/$MB + 1)*$MB))
# qemu-img resize rhel-7.4.raw $rounded_size
# Keyboard layouts
keyboard --vckeymap=us --xlayouts='us'
# System language
lang en_US.UTF-8
# Network information
network --bootproto=dhcp
# Root password
rootpw --plaintext "to_be_disabled"
# System services
services --enabled="sshd,waagent,NetworkManager"
# System timezone
timezone Etc/UTC --isUtc --ntpservers
0.rhel.pool.ntp.org,1.rhel.pool.ntp.org,2.rhel.pool.ntp.org,3.rhel.pool.ntp.org
# Firewall configuration
firewall --disabled
# Enable SELinux
selinux --enforcing
# Don't configure X
skipx
%packages
@base
@console-internet
chrony
sudo
parted
-dracut-config-rescue
%end
%post --log=/var/log/anaconda/post-install.log
#!/bin/bash
# Register Red Hat Subscription
subscription-manager register --username=XXX --password=XXX --auto-attach --force
# Install WALinuxAgent
yum install -y WALinuxAgent
# Configure network
cat << EOF > /etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE=eth0
ONBOOT=yes
BOOTPROTO=dhcp
TYPE=Ethernet
USERCTL=no
PEERDNS=yes
IPV6INIT=no
NM_CONTROLLED=no
EOF
%end
2. Place the kickstart file where the installation system can access it.
3. In Hyper-V Manager, create a new virtual machine. On the Connect Virtual Hard Disk page, select
Attach a virtual hard disk later, and complete the New Virtual Machine Wizard.
4. Open the virtual machine settings:
a. Attach a new virtual hard disk to the virtual machine. Make sure to select VHD Format and Fixed Size.
b. Attach the installation ISO to the DVD drive.
c. Set the BIOS to boot from CD.
5. Start the virtual machine. When the installation guide appears, press Tab to configure the boot options.
6. Enter inst.ks=<the location of the kickstart file> at the end of the boot options, and press Enter.
7. Wait for the installation to finish. When it's finished, the virtual machine will be shut down automatically.
Your Linux VHD is now ready to be uploaded to Azure.
Known issues
The Hyper-V driver could not be included in the initial RAM disk when using a non-Hyper-V hypervisor
In some cases, Linux installers might not include the drivers for Hyper-V in the initial RAM disk (initrd or
initramfs) unless Linux detects that it is running in a Hyper-V environment.
When you're using a different virtualization system (for example, VirtualBox or Xen) to prepare your Linux image, you
might need to rebuild initrd to ensure that at least the hv_vmbus and hv_storvsc kernel modules are available on
the initial RAM disk. This is a known issue at least on systems that are based on the upstream Red Hat
distribution.
To resolve this issue, add Hyper-V modules to initramfs and rebuild it:
Edit /etc/dracut.conf , and add the following content:
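add_drivers+=" hv_vmbus hv_storvsc "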
Rebuild initramfs:
# dracut -f -v
Next steps
You're now ready to use your Red Hat Enterprise Linux virtual hard disk to create new virtual machines in Azure. If
this is the first time that you're uploading the .vhd file to Azure, see Create a Linux VM from a custom disk.
For more details about the hypervisors that are certified to run Red Hat Enterprise Linux, see the Red Hat website.
Prepare a Debian VHD for Azure
4/9/2018 • 3 min to read
Prerequisites
This section assumes that you have already installed a Debian Linux operating system from an .iso file downloaded
from the Debian website to a virtual hard disk. Multiple tools exist to create .vhd files; Hyper-V is only one example.
For instructions using Hyper-V, see Install the Hyper-V Role and Configure a Virtual Machine.
Installation notes
Please see also General Linux Installation Notes for more tips on preparing Linux for Azure.
The newer VHDX format is not supported in Azure. You can convert the disk to VHD format using Hyper-V
Manager or the convert-vhd cmdlet.
When installing the Linux system it is recommended that you use standard partitions rather than LVM (often
the default for many installations). This will avoid LVM name conflicts with cloned VMs, particularly if an OS
disk ever needs to be attached to another VM for troubleshooting. LVM or RAID may be used on data disks if
preferred.
Do not configure a swap partition on the OS disk. The Azure Linux agent can be configured to create a swap file
on the temporary resource disk. More information about this can be found in the steps below.
All VHDs on Azure must have a virtual size aligned to 1MB. When converting from a raw disk to VHD you
must ensure that the raw disk size is a multiple of 1MB before conversion. See Linux Installation Notes for
more information.
# sudo update-grub
8. For Debian 7, it is required to run the 3.16-based kernel from the wheezy-backports repository. First create
a file called /etc/apt/preferences.d/linux.pref with the following contents:
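Package: linux-image-amd64
Pin: release n=wheezy-backports
Pin-Priority: 900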
Then run "sudo apt-get install linux-image-amd64" to install the new kernel.
9. Run the following commands to deprovision the virtual machine and prepare it for provisioning on Azure:
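# sudo waagent -force -deprovision
# export HISTSIZE=0
# logout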
10. Click Action -> Shut Down in Hyper-V Manager. Your Linux VHD is now ready to be uploaded to Azure.
Next steps
You're now ready to use your Debian virtual hard disk to create new virtual machines in Azure. If this is the first
time that you're uploading the .vhd file to Azure, see Create a Linux VM from a custom disk.
Prepare a SLES or openSUSE virtual machine for
Azure
4/9/2018 • 6 min to read
NOTE
Azure has two different deployment models for creating and working with resources: Resource Manager and classic. This
article covers using both models, but Microsoft recommends that most new deployments use the Resource Manager model.
Prerequisites
This article assumes that you have already installed a SUSE or openSUSE Linux operating system to a virtual hard
disk. Multiple tools exist to create .vhd files, for example a virtualization solution such as Hyper-V. For instructions,
see Install the Hyper-V Role and Configure a Virtual Machine.
SLES / openSUSE installation notes
Please see also General Linux Installation Notes for more tips on preparing Linux for Azure.
The VHDX format is not supported in Azure, only fixed VHD. You can convert the disk to VHD format using
Hyper-V Manager or the convert-vhd cmdlet.
When installing the Linux system it is recommended that you use standard partitions rather than LVM (often
the default for many installations). This will avoid LVM name conflicts with cloned VMs, particularly if an OS
disk ever needs to be attached to another VM for troubleshooting. LVM or RAID may be used on data disks if
preferred.
Do not configure a swap partition on the OS disk. The Linux agent can be configured to create a swap file on
the temporary resource disk. More information about this can be found in the steps below.
All VHDs on Azure must have a virtual size aligned to 1MB. When converting from a raw disk to VHD you
must ensure that the raw disk size is a multiple of 1MB before conversion. See Linux Installation Notes for
more information.
6. Check if waagent is set to "on" in chkconfig, and if not, enable it for autostart:
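# sudo chkconfig waagent on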
8. Modify the kernel boot line in your grub configuration to include additional kernel parameters for Azure. To
do this open "/boot/grub/menu.lst" in a text editor and ensure that the default kernel includes the following
parameters:
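console=ttyS0 earlyprintk=ttyS0 rootdelay=300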
This will ensure all console messages are sent to the first serial port, which can assist Azure support with
debugging issues.
9. Confirm that /boot/grub/menu.lst and /etc/fstab both reference the disk using its UUID (by-uuid) instead of
the disk ID (by-id).
Get disk UUID
# ls /dev/disk/by-uuid/
If /dev/disk/by-id/ is used, update both /boot/grub/menu.lst and /etc/fstab with the proper by-uuid value
Before change
root=/dev/disk/by-id/SCSI-xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxxx-part1
After change
root=/dev/disk/by-uuid/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
10. Modify udev rules to avoid generating static rules for the Ethernet interface(s). These rules can cause
problems when cloning a virtual machine in Microsoft Azure or Hyper-V:
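# sudo ln -s /dev/null /etc/udev/rules.d/75-persistent-net-generator.rules
# sudo rm -f /etc/udev/rules.d/70-persistent-net.rules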
11. It is recommended to edit the file "/etc/sysconfig/network/dhcp" and change the DHCLIENT_SET_HOSTNAME
parameter to the following:
DHCLIENT_SET_HOSTNAME="no"
12. In "/etc/sudoers", comment out or remove the following lines if they exist:
Defaults targetpw    # ask for the password of the target user i.e. root
ALL ALL=(ALL) ALL    # WARNING! Only use this together with 'Defaults targetpw'!
13. Ensure that the SSH server is installed and configured to start at boot time. This is usually the default.
14. Do not create swap space on the OS disk.
The Azure Linux Agent can automatically configure swap space using the local resource disk that is attached
to the VM after provisioning on Azure. Note that the local resource disk is a temporary disk, and might be
emptied when the VM is deprovisioned. After installing the Azure Linux Agent (see previous step), modify
the following parameters in /etc/waagent.conf appropriately:
ResourceDisk.Format=y
ResourceDisk.Filesystem=ext4
ResourceDisk.MountPoint=/mnt/resource
ResourceDisk.EnableSwap=y
ResourceDisk.SwapSizeMB=2048 ## NOTE: set this to whatever you need it to be.
15. Run the following commands to deprovision the virtual machine and prepare it for provisioning on Azure:
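# sudo waagent -force -deprovision
# export HISTSIZE=0
# logout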
If the zypper lr command returns "No repositories defined...", use the following commands to add these repos:
You can then verify that the repositories have been added by running zypper lr again. If one of the relevant
update repositories is not enabled, enable it with the following command:
13. Click Action -> Shut Down in Hyper-V Manager. Your Linux VHD is now ready to be uploaded to Azure.
Next steps
You're now ready to use your SUSE Linux virtual hard disk to create new virtual machines in Azure. If this is the
first time that you're uploading the .vhd file to Azure, see Create a Linux VM from a custom disk.
Prepare an Oracle Linux virtual machine for Azure
4/9/2018 • 7 min to read
NOTE
Azure has two different deployment models for creating and working with resources: Resource Manager and classic. This
article covers using both models, but Microsoft recommends that most new deployments use the Resource Manager model.
Prerequisites
This article assumes that you have already installed an Oracle Linux operating system to a virtual hard disk.
Multiple tools exist to create .vhd files, for example a virtualization solution such as Hyper-V. For instructions, see
Install the Hyper-V Role and Configure a Virtual Machine.
Oracle Linux installation notes
Please see also General Linux Installation Notes for more tips on preparing Linux for Azure.
Oracle's Red Hat compatible kernel and their UEK3 (Unbreakable Enterprise Kernel) are both supported on
Hyper-V and Azure. For best results, please be sure to update to the latest kernel while preparing your Oracle
Linux VHD.
Oracle's UEK2 is not supported on Hyper-V and Azure as it does not include the required drivers.
The VHDX format is not supported in Azure, only fixed VHD. You can convert the disk to VHD format using
Hyper-V Manager or the convert-vhd cmdlet.
When installing the Linux system it is recommended that you use standard partitions rather than LVM (often
the default for many installations). This will avoid LVM name conflicts with cloned VMs, particularly if an OS
disk ever needs to be attached to another VM for troubleshooting. LVM or RAID may be used on data disks if
preferred.
NUMA is not supported for larger VM sizes due to a bug in Linux kernel versions below 2.6.37. This issue
primarily impacts distributions using the upstream Red Hat 2.6.32 kernel. Manual installation of the Azure Linux
agent (waagent) will automatically disable NUMA in the GRUB configuration for the Linux kernel. More
information about this can be found in the steps below.
Do not configure a swap partition on the OS disk. The Linux agent can be configured to create a swap file on
the temporary resource disk. More information about this can be found in the steps below.
All VHDs on Azure must have a virtual size aligned to 1 MB. When converting from a raw disk to VHD, you
must ensure that the raw disk size is a multiple of 1 MB before conversion (see the sketch after this list). See
Linux Installation Notes for more information.
Make sure that the Addons repository is enabled. Edit the file /etc/yum.repos.d/public-yum-ol6.repo (Oracle
Linux 6) or /etc/yum.repos.d/public-yum-ol7.repo (Oracle Linux 7), and change the line enabled=0 to enabled=1
under [ol6_addons] or [ol7_addons] in this file.
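The original commands for the 1 MB alignment step mentioned above are not reproduced in this document; a minimal sketch using qemu-img (the file name MyLinuxVM.raw and the use of python for JSON parsing are assumptions):
rawdisk="MyLinuxVM.raw"
MB=$((1024*1024))
# Read the current virtual size in bytes
size=$(qemu-img info -f raw --output json "$rawdisk" | python -c "import json,sys; print(json.load(sys.stdin)['virtual-size'])")
# Round the size up to the next 1 MB boundary, then resize and convert to fixed VHD
rounded_size=$(((($size+$MB-1)/$MB)*$MB))
qemu-img resize -f raw "$rawdisk" $rounded_size
qemu-img convert -f raw -o subformat=fixed -O vpc "$rawdisk" MyLinuxVM.vhd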
Note: If the package is not already installed, this command will fail with an error message. This is expected.
4. Create a file named network in the /etc/sysconfig/ directory that contains the following text:
NETWORKING=yes
HOSTNAME=localhost.localdomain
5. Create a file named ifcfg-eth0 in the /etc/sysconfig/network-scripts/ directory that contains the following
text:
DEVICE=eth0
ONBOOT=yes
BOOTPROTO=dhcp
TYPE=Ethernet
USERCTL=no
PEERDNS=yes
IPV6INIT=no
6. Modify udev rules to avoid generating static rules for the Ethernet interface(s). These rules can cause
problems when cloning a virtual machine in Microsoft Azure or Hyper-V:
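The commands for this step are not preserved in this extract; a minimal sketch, assuming the standard rule locations:
sudo ln -s /dev/null /etc/udev/rules.d/75-persistent-net-generator.rules
sudo rm -f /etc/udev/rules.d/70-persistent-net.rules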
7. Ensure the network service will start at boot time by running the following command:
# chkconfig network on
9. Modify the kernel boot line in your grub configuration to include additional kernel parameters for Azure. To
do this open "/boot/grub/menu.lst" in a text editor and ensure that the default kernel includes the following
parameters:
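The parameter list is not preserved in this extract; based on the description that follows (serial console redirection and NUMA disabled), a typical kernel line includes:
console=ttyS0 earlyprintk=ttyS0 rootdelay=300 numa=off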
This will also ensure all console messages are sent to the first serial port, which can assist Azure support
with debugging issues. This will disable NUMA due to a bug in Oracle's Red Hat compatible kernel.
In addition to the above, it is recommended to remove the following parameters:
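The parameters themselves are not preserved here; based on the explanation that follows, they are typically:
rhgb quiet crashkernel=auto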
Graphical and quiet boot are not useful in a cloud environment where we want all the logs to be sent to the
serial port.
The crashkernel option may be left configured if desired, but note that this parameter will reduce the
amount of available memory in the VM by 128MB or more, which may be problematic on the smaller VM
sizes.
10. Ensure that the SSH server is installed and configured to start at boot time. This is usually the default.
11. Install the Azure Linux Agent by running the following command. The latest version is 2.0.15.
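The command is not preserved in this extract; it is typically:
sudo yum install WALinuxAgent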
Note that installing the WALinuxAgent package will remove the NetworkManager and NetworkManager-
gnome packages if they were not already removed as described in step 2.
12. Do not create swap space on the OS disk.
The Azure Linux Agent can automatically configure swap space using the local resource disk that is attached
to the VM after provisioning on Azure. Note that the local resource disk is a temporary disk, and might be
emptied when the VM is deprovisioned. After installing the Azure Linux Agent (see previous step), modify
the following parameters in /etc/waagent.conf appropriately:
ResourceDisk.Format=y
ResourceDisk.Filesystem=ext4
ResourceDisk.MountPoint=/mnt/resource
ResourceDisk.EnableSwap=y
ResourceDisk.SwapSizeMB=2048 ## NOTE: set this to whatever you need it to be.
13. Run the following commands to deprovision the virtual machine and prepare it for provisioning on Azure:
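The commands are not preserved in this extract; a typical sequence is:
sudo waagent -force -deprovision
export HISTSIZE=0
logout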
14. Click Action -> Shut Down in Hyper-V Manager. Your Linux VHD is now ready to be uploaded to Azure.
4. Create a file named ifcfg-eth0 in the /etc/sysconfig/network-scripts/ directory that contains the following
text:
DEVICE=eth0
ONBOOT=yes
BOOTPROTO=dhcp
TYPE=Ethernet
USERCTL=no
PEERDNS=yes
IPV6INIT=no
5. Modify udev rules to avoid generating static rules for the Ethernet interface(s). These rules can cause
problems when cloning a virtual machine in Microsoft Azure or Hyper-V:
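As in the previous procedure, the commands are not preserved in this extract; a minimal sketch, assuming the standard rule locations:
sudo ln -s /dev/null /etc/udev/rules.d/75-persistent-net-generator.rules
sudo rm -f /etc/udev/rules.d/70-persistent-net.rules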
6. Ensure the network service will start at boot time by running the following command:
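As in step 7 of the previous procedure, this is typically:
sudo chkconfig network on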
8. Run the following command to clear the current yum metadata and install any updates:
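The commands are not preserved in this extract; typically:
sudo yum clean all
sudo yum -y update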
9. Modify the kernel boot line in your grub configuration to include additional kernel parameters for Azure. To
do this open "/etc/default/grub" in a text editor and edit the GRUB_CMDLINE_LINUX parameter, for example:
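The example value is not preserved in this extract; based on the description that follows (serial console redirection and disabling the new NIC naming conventions), a typical value is:
GRUB_CMDLINE_LINUX="rootdelay=300 console=ttyS0 earlyprintk=ttyS0 net.ifnames=0"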
This will also ensure all console messages are sent to the first serial port, which can assist Azure support
with debugging issues. It also turns off the new OEL 7 naming conventions for NICs. In addition to the
above, it is recommended to remove the following parameters:
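As in the previous procedure, these are typically:
rhgb quiet crashkernel=auto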
Graphical and quiet boot are not useful in a cloud environment where we want all the logs to be sent to the
serial port.
The crashkernel option may be left configured if desired, but note that this parameter will reduce the
amount of available memory in the VM by 128MB or more, which may be problematic on the smaller VM
sizes.
10. Once you are done editing "/etc/default/grub" per above, run the following command to rebuild the grub
configuration:
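The command is not preserved here; on this distribution it is typically:
sudo grub2-mkconfig -o /boot/grub2/grub.cfg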
11. Ensure that the SSH server is installed and configured to start at boot time. This is usually the default.
12. Install the Azure Linux Agent by running the following command:
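The commands are not preserved in this extract; a typical sequence (the waagent.service name is an assumption) is:
sudo yum install WALinuxAgent
sudo systemctl enable waagent.service
13. Do not create swap space on the OS disk. As in the previous procedure, the Azure Linux Agent can configure
swap space using the temporary local resource disk. Modify the following parameters in /etc/waagent.conf
appropriately: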
ResourceDisk.Format=y
ResourceDisk.Filesystem=ext4
ResourceDisk.MountPoint=/mnt/resource
ResourceDisk.EnableSwap=y
ResourceDisk.SwapSizeMB=2048 ## NOTE: set this to whatever you need it to be.
14. Run the following commands to deprovision the virtual machine and prepare it for provisioning on Azure:
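The commands are not preserved in this extract; a typical sequence is:
sudo waagent -force -deprovision
export HISTSIZE=0
logout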
15. Click Action -> Shut Down in Hyper-V Manager. Your Linux VHD is now ready to be uploaded to Azure.
Next steps
You're now ready to use your Oracle Linux .vhd to create new virtual machines in Azure. If this is the first time that
you're uploading the .vhd file to Azure, see Create a Linux VM from a custom disk.
Create and Upload an OpenBSD disk image to Azure
4/11/2018 • 3 min to read • Edit Online
This article shows you how to create and upload a virtual hard disk (VHD) that contains the OpenBSD operating
system. After you upload it, you can use it as your own image to create a virtual machine (VM) in Azure through
the Azure CLI.
Prerequisites
This article assumes that you have the following items:
An Azure subscription - If you don't have an account, you can create one in just a couple of minutes. If you
have an MSDN subscription, see Monthly Azure credit for Visual Studio subscribers. Otherwise, learn how to
create a free trial account.
Azure CLI 2.0 - Make sure you have the latest Azure CLI 2.0 installed and logged in to your Azure account with
az login.
OpenBSD operating system installed in a .vhd file - A supported OpenBSD operating system (6.1 version
AMD64) must be installed to a virtual hard disk. Multiple tools exist to create .vhd files. For example, you can
use a virtualization solution such as Hyper-V to create the .vhd file and install the operating system. For
instructions about how to install and use Hyper-V, see Install Hyper-V and create a virtual machine.
4. By default, the root user is disabled on virtual machines in Azure. Users can run commands with elevated
privileges by using the doas command on an OpenBSD VM. Doas is enabled by default. For more information,
see doas.conf.
5. Install and configure prerequisites for the Azure Agent as follows:
pkg_add py-setuptools openssl git
ln -sf /usr/local/bin/python2.7 /usr/local/bin/python
ln -sf /usr/local/bin/python2.7-2to3 /usr/local/bin/2to3
ln -sf /usr/local/bin/python2.7-config /usr/local/bin/python-config
ln -sf /usr/local/bin/pydoc2.7 /usr/local/bin/pydoc
6. The latest release of the Azure agent can always be found on Github. Install the agent as follows:
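The commands are not preserved in this extract; a typical installation from source is:
git clone https://github.com/Azure/WALinuxAgent
cd WALinuxAgent
python setup.py install
waagent -register-service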
IMPORTANT
After you install Azure Agent, it's a good idea to verify that it's running as follows:
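A typical check (a sketch):
ps auxw | grep waagent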
7. Deprovision the system to clean it and make it suitable for reprovisioning. The following command also
deletes the last provisioned user account and the associated data:
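The deprovision command is typically:
waagent -deprovision+user -force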
To upload your VHD, create a storage account with az storage account create. Storage account names must be
unique, so provide your own name. The following example creates a storage account named mystorageaccount:
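The command is not preserved here; a sketch (the location and SKU are assumptions):
az storage account create \
    --resource-group myResourceGroup \
    --name mystorageaccount \
    --location eastus \
    --sku Premium_LRS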
To logically separate the VHDs you upload, create a container within the storage account with az storage container
create:
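A sketch of the container creation and the upload step that precedes the VM creation below (the vhds container name matches the image URL used next):
az storage container create \
    --account-name mystorageaccount \
    --name vhds
az storage blob upload \
    --account-name mystorageaccount \
    --container-name vhds \
    --type page \
    --file ./OpenBSD61.vhd \
    --name OpenBSD61.vhd
Then create a VM from the uploaded VHD with az vm create: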
az vm create \
--resource-group myResourceGroup \
--name myOpenBSD61 \
--image "https://mystorageaccount.blob.core.windows.net/vhds/OpenBSD61.vhd" \
--os-type linux \
--admin-username azureuser \
--ssh-key-value ~/.ssh/id_rsa.pub
Next steps
If you want to know more about Hyper-V support on OpenBSD6.1, read OpenBSD 6.1 and hyperv.4.
If you want to create a VM from managed disk, read az disk.
Introduction to FreeBSD on Azure
4/9/2018 • 3 min to read • Edit Online
Overview
FreeBSD for Microsoft Azure is an advanced computer operating system used to power modern servers, desktops,
and embedded platforms.
Microsoft Corporation is making images of FreeBSD available on Azure with the Azure VM Guest Agent pre-
configured. Currently, the following FreeBSD versions are offered as images by Microsoft:
FreeBSD 10.3-RELEASE
FreeBSD 11.0-RELEASE
FreeBSD 11.1-RELEASE
The agent is responsible for communication between the FreeBSD VM and the Azure fabric for operations such as
provisioning the VM on first use (user name, password or SSH key, host name, etc.) and enabling functionality for
selective VM extensions.
As for future versions of FreeBSD, the strategy is to stay current and make the latest releases available shortly after
they are published by the FreeBSD release engineering team.
If bash is not installed on your FreeBSD machine, run the following command before the installation.
If python is not installed on your FreeBSD machine, run the following commands before the installation.
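A sketch of the typical commands (the python27 package name and symlink path are assumptions):
sudo pkg install bash
sudo pkg install python27
sudo ln -s /usr/local/bin/python2.7 /usr/local/bin/python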
az login
az group create --name myResourceGroup --location eastus
az vm create --name myFreeBSD11 \
--resource-group myResourceGroup \
--image MicrosoftOSTC:FreeBSD:11.0:latest \
--admin-username azureuser \
--generate-ssh-keys
Then you can log in to your FreeBSD VM through the IP address printed in the output of the above deployment.
NOTE
Currently, FreeBSD VMs support only CustomScript version 1.x.
Authentication: user names, passwords, and SSH keys
When you're creating a FreeBSD virtual machine by using the Azure portal, you must provide a user name,
password, or SSH public key. User names for deploying a FreeBSD virtual machine on Azure must not match
names of system accounts (UID <100) already present in the virtual machine ("root", for example). Currently, only
the RSA SSH key is supported. A multiline SSH key must begin with ---- BEGIN SSH2 PUBLIC KEY ---- and end
with ---- END SSH2 PUBLIC KEY ---- .
$ sudo <COMMAND>
Known issues
The Azure VM Guest Agent version 2.2.2 has a known issue that causes provisioning failures for FreeBSD VMs on
Azure. The fix was included in Azure VM Guest Agent version 2.2.3 and later releases.
Next steps
Go to Azure Marketplace to create a FreeBSD VM.
How to create an image of a virtual machine or VHD
5/10/2018 • 4 min to read • Edit Online
To create multiple copies of a virtual machine (VM) to use in Azure, capture an image of the VM or the OS VHD.
To create an image, you need to remove personal account information, which makes it safer to deploy multiple times.
In the following steps, you deprovision an existing VM, deallocate it, and create an image. You can use this image to
create VMs across any resource group within your subscription.
create VMs across any resource group within your subscription.
If you want to create a copy of your existing Linux VM for backup or debugging, or upload a specialized Linux
VHD from an on-premises VM, see Upload and create a Linux VM from custom disk image.
You can also use Packer to create your custom configuration. For more information on using Packer, see How to
use Packer to create Linux virtual machine images in Azure.
Quick commands
For a simplified version of this topic, for testing, evaluating or learning about VMs in Azure, see Create a custom
image of an Azure VM using the CLI.
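The deprovision commands that the note and steps 3 and 4 below refer to are not reproduced in this extract; they are typically run over SSH as follows (a sketch, assuming the azureuser account used elsewhere in this document):
ssh azureuser@<publicIpAddress>
sudo waagent -deprovision+user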
NOTE
Only run this command on a VM that you intend to capture as an image. It does not guarantee that the image is
cleared of all sensitive information or is suitable for redistribution. The +user parameter also removes the last
provisioned user account. If you want to keep account credentials in the VM, just use -deprovision to leave the user
account in place.
3. Type y to continue. You can add the -force parameter to avoid this confirmation step.
4. After the command completes, type exit. This step closes the SSH client.
az vm deallocate \
--resource-group myResourceGroup \
--name myVM
2. Mark the VM as generalized with az vm generalize. The following example marks the VM named
myVM in the resource group named myResourceGroup as generalized:
az vm generalize \
--resource-group myResourceGroup \
--name myVM
3. Now create an image of the VM resource with az image create. The following example creates an image
named myImage in the resource group named myResourceGroup using the VM resource named myVM:
az image create \
--resource-group myResourceGroup \
--name myImage --source myVM
NOTE
The image is created in the same resource group as your source VM. You can create VMs in any resource group
within your subscription from this image. From a management perspective, you may wish to create a specific
resource group for your VM resources and images.
If you would like to store your image in zone-resilient storage, you need to create it in a region that supports
availability zones and include the --zone-resilient true parameter.
az vm create \
--resource-group myResourceGroup \
--name myVMDeployed \
--image myImage \
--admin-username azureuser \
--ssh-key-value ~/.ssh/id_rsa.pub
"id": "/subscriptions/guid/resourceGroups/MYRESOURCEGROUP/providers/Microsoft.Compute/images/myImage",
"location": "westus",
"name": "myImage",
The following example uses az vm create to create a VM in a different resource group than the source image by
specifying the image resource ID:
az vm create \
--resource-group myOtherResourceGroup \
--name myOtherVMDeployed \
--image "/subscriptions/guid/resourceGroups/MYRESOURCEGROUP/providers/Microsoft.Compute/images/myImage" \
--admin-username azureuser \
--ssh-key-value ~/.ssh/id_rsa.pub
az vm show \
--resource-group myResourceGroup \
--name myVMDeployed \
--show-details
Next steps
You can create multiple VMs from your source VM image. If you need to make changes to your image:
Create a VM from your image.
Make any updates or configuration changes.
Follow the steps again to deprovision, deallocate, generalize, and create an image.
Use this new image for future deployments. If desired, delete the original image.
For more information on managing your VMs with the CLI, see Azure CLI 2.0.
How to use Packer to create Linux virtual machine
images in Azure
5/7/2018 • 5 min to read • Edit Online
Each virtual machine (VM ) in Azure is created from an image that defines the Linux distribution and OS version.
Images can include pre-installed applications and configurations. The Azure Marketplace provides many first and
third-party images for most common distributions and application environments, or you can create your own
custom images tailored to your needs. This article details how to use the open source tool Packer to define and
build custom images in Azure.
az ad sp create-for-rbac --query "{ client_id: appId, client_secret: password, tenant_id: tenant }"
{
"client_id": "f5b6a5cf-fbdf-4a9f-b3b8-3c2cd00225a4",
"client_secret": "0e760437-bf34-4aad-9f8d-870be799c55d",
"tenant_id": "72f988bf-86f1-41af-91ab-2d7cd011db47"
}
To authenticate to Azure, you also need to obtain your Azure subscription ID with az account show:
You use the output from these two commands in the next step.
{
"builders": [{
"type": "azure-arm",
"client_id": "f5b6a5cf-fbdf-4a9f-b3b8-3c2cd00225a4",
"client_secret": "0e760437-bf34-4aad-9f8d-870be799c55d",
"tenant_id": "72f988bf-86f1-41af-91ab-2d7cd011db47",
"subscription_id": "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxx",
"managed_image_resource_group_name": "myResourceGroup",
"managed_image_name": "myPackerImage",
"os_type": "Linux",
"image_publisher": "Canonical",
"image_offer": "UbuntuServer",
"image_sku": "16.04-LTS",
"azure_tags": {
"dept": "Engineering",
"task": "Image deployment"
},
This template builds an Ubuntu 16.04 LTS image, installs NGINX, then deprovisions the VM.
NOTE
If you expand on this template to provision user credentials, adjust the provisioner command that deprovisions the Azure
agent to read -deprovision rather than -deprovision+user . The +user flag removes the last provisioned user account
from the source VM.
ManagedImageResourceGroupName: myResourceGroup
ManagedImageName: myPackerImage
ManagedImageLocation: eastus
It takes a few minutes for Packer to build the VM, run the provisioners, and clean up the deployment.
az vm create \
--resource-group myResourceGroup \
--name myVM \
--image myPackerImage \
--admin-username azureuser \
--generate-ssh-keys
If you wish to create VMs in a different resource group or region than your Packer image, specify the image ID
rather than image name. You can obtain the image ID with az image show.
It takes a few minutes to create the VM. Once the VM has been created, take note of the publicIpAddress
displayed by the Azure CLI. This address is used to access the NGINX site via a web browser.
To allow web traffic to reach your VM, open port 80 from the Internet with az vm open-port:
az vm open-port \
--resource-group myResourceGroup \
--name myVM \
--port 80
Next steps
In this example, you used Packer to create a VM image with NGINX already installed. You can use this VM image
alongside existing deployment workflows, such as to deploy your app to VMs created from the Image with
Ansible, Chef, or Puppet.
For additional example Packer templates for other Linux distros, see this GitHub repo.
Download a Linux VHD from Azure
4/25/2018 • 2 min to read • Edit Online
In this article, you learn how to download a Linux virtual hard disk (VHD) file from Azure using the Azure CLI and
the Azure portal.
Virtual machines (VMs) in Azure use disks as a place to store an operating system, applications, and data. All Azure
VMs have at least two disks – a Linux operating system disk and a temporary disk. The operating system disk
is initially created from an image, and both the operating system disk and the image are VHDs stored in an Azure
storage account. Virtual machines also can have one or more data disks, which are also stored as VHDs.
If you haven't already done so, install Azure CLI 2.0.
Stop the VM
A VHD can’t be downloaded from Azure if it's attached to a running VM. You need to stop the VM to download a
VHD. If you want to use a VHD as an image to create other VMs with new disks, you need to deprovision and
generalize the operating system contained in the file and stop the VM. To use the VHD as a disk for a new instance
of an existing VM or data disk, you only need to stop and deallocate the VM.
To use the VHD as an image to create other VMs, complete these steps:
1. Use SSH, the account name, and the public IP address of the VM to connect to it and deprovision it. You can
find the public IP address with az network public-ip show. The +user parameter also removes the last
provisioned user account. If you are baking account credentials into the VM, leave out this +user parameter.
The following example removes the last provisioned user account:
ssh azureuser@<publicIpAddress>
sudo waagent -deprovision+user -force
exit
To use the VHD as a disk for a new instance of an existing VM or data disk, complete these steps:
1. Sign in to the Azure portal.
2. On the Hub menu, click Virtual Machines.
3. Select the VM from the list.
4. On the blade for the VM, click Stop.
Generate SAS URL
To download the VHD file, you need to generate a shared access signature (SAS) URL. When the URL is generated,
an expiration time is assigned to the URL.
1. On the menu of the blade for the VM, click Disks.
2. Select the operating system disk for the VM, and then click Export.
3. Click Generate URL.
Download VHD
1. Under the URL that was generated, click Download the VHD file.
2. You may need to click Save in the browser to start the download. The default name for the VHD file is abcd.
Next steps
Learn how to upload and create a Linux VM from custom disk with the Azure CLI 2.0.
Manage Azure disks with the Azure CLI
1 min to read • Edit Online
Manage the availability of Linux virtual machines
4/9/2018 • 8 min to read • Edit Online
Learn ways to set up and manage multiple virtual machines to ensure high availability for your Linux application
in Azure. You can also manage the availability of Windows virtual machines.
For instructions on creating an availability set using CLI in the Resource Manager deployment model, see azure
availset: commands to manage your availability sets.
IMPORTANT
Avoid leaving a single instance virtual machine in an availability set by itself. VMs in this configuration do not qualify for an
SLA guarantee and face downtime during Azure planned maintenance events, except when a single VM is using Azure
Premium Storage. For single VMs using premium storage, the Azure SLA applies.
Each virtual machine in your availability set is assigned an update domain and a fault domain by the
underlying Azure platform. For a given availability set, five non-user-configurable update domains are assigned
by default (for Resource Manager deployments, this number can be increased to provide up to 20 update domains) to
indicate groups of virtual machines and underlying physical hardware that can be rebooted at the same time.
When more than five virtual machines are configured within a single availability set, the sixth virtual machine is
placed into the same update domain as the first virtual machine, the seventh in the same update domain as the
second virtual machine, and so on. The order of update domains being rebooted may not proceed sequentially
during planned maintenance, but only one update domain is rebooted at a time. A rebooted update domain is
given 30 minutes to recover before maintenance is initiated on a different update domain.
Fault domains define the group of virtual machines that share a common power source and network switch. By
default, the virtual machines configured within your availability set are separated across up to three fault
domains for Resource Manager deployments (two fault domains for Classic). While placing your virtual
machines into an availability set does not protect your application from operating system or application-specific
failures, it does limit the impact of potential physical hardware failures, network outages, or power interruptions.
Use managed disks for VMs in an availability set
If you are currently using VMs with unmanaged disks, we highly recommend that you convert the VMs in your
availability set to use Managed Disks.
Managed disks provide better reliability for Availability Sets by ensuring that the disks of VMs in an Availability
Set are sufficiently isolated from each other to avoid single points of failure. It does this by automatically placing
the disks in different storage fault domains (storage clusters) and aligning them with the VM fault domain. If a
storage fault domain fails due to hardware or software failure, only the VM instance with disks on the storage
fault domain fails.
IMPORTANT
The number of fault domains for managed availability sets varies by region - either two or three per region. The following
table shows the number per region:
REGION MAX # OF FAULT DOMAINS
East US 3
East US 2 3
West US 3
West US 2 2
Central US 3
North Central US 3
South Central US 3
West Central US 2
Canada Central 2
Canada East 2
North Europe 3
West Europe 3
UK South 2
UK West 2
East Asia 2
Japan East 2
Japan West 2
South India 2
Central India 2
West India 2
Korea Central 2
Korea South 2
Australia East 2
Australia Southeast 2
Brazil South 2
US Gov Virginia 2
US Gov Texas 2
US Gov Arizona 2
US DoD Central 2
US DoD East 2
If you plan to use VMs with unmanaged disks, follow the best practices below for storage accounts where virtual
hard disks (VHDs) of VMs are stored as page blobs.
1. Keep all disks (OS and data) associated with a VM in the same storage account
2. Review the limits on the number of unmanaged disks in a storage account before adding more VHDs
to a storage account
3. Use a separate storage account for each VM in an availability set. Do not share storage accounts with
multiple VMs in the same availability set. It is acceptable for VMs across different availability sets to share
storage accounts if the above best practices are followed
Next steps
To learn more about load balancing your virtual machines, see Load Balancing virtual machines.
Vertically scale Azure Linux virtual machine with
Azure Automation
4/9/2018 • 2 min to read • Edit Online
Vertical scaling is the process of increasing or decreasing the resources of a machine in response to the workload.
In Azure, this can be accomplished by changing the size of the virtual machine. Resizing can help in the following
scenarios:
If the Virtual Machine is not being used frequently, you can resize it down to a smaller size to reduce your
monthly costs
If the Virtual Machine is seeing a peak load, it can be resized to a larger size to increase its capacity
The steps to accomplish this are outlined below:
1. Set up Azure Automation to access your Virtual Machines
2. Import the Azure Automation Vertical Scale runbooks into your subscription
3. Add a webhook to your runbook
4. Add an alert to your Virtual Machine
NOTE
Because of the size of the first virtual machine, the sizes it can be scaled to may be limited by the availability of the other
sizes in the cluster the current virtual machine is deployed in. The published automation runbooks used in this article take
care of this case and only scale within the VM size pairs below. This means that a Standard_D1v2 virtual machine will not
suddenly be scaled up to Standard_G5 or scaled down to Basic_A0.
Basic_A0 Basic_A4
Standard_A0 Standard_A4
Standard_A5 Standard_A7
Standard_A8 Standard_A9
Standard_A10 Standard_A11
Standard_D1 Standard_D4
Standard_D11 Standard_D14
Standard_DS1 Standard_DS4
Standard_DS11 Standard_DS14
Standard_D1v2 Standard_D5v2
Standard_D11v2 Standard_D14v2
Standard_G1 Standard_G5
Standard_GS1 Standard_GS5
Create a Linux virtual machine in an availability zone with the Azure CLI
This article steps through using the Azure CLI to create a Linux VM in an Azure availability zone. An availability
zone is a physically separate zone in an Azure region. Use availability zones to protect your apps and data from an
unlikely failure or loss of an entire datacenter.
To use an availability zone, create your virtual machine in a supported Azure region.
Make sure that you have installed the latest Azure CLI 2.0 and logged in to an Azure account with az login.
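The command that produces the output described below is not preserved in this extract; it is typically:
az vm list-skus --location eastus2 --output table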
The output is similar to the following condensed example, which shows the Availability Zones in which each VM
size is available:
az vm create --resource-group myResourceGroupVM --name myVM --location eastus2 --image UbuntuLTS --generate-ssh-keys --zone 1
It may take a few minutes to create the VM. Once the VM has been created, the Azure CLI outputs information
about the VM. Take note of the zones value, which indicates the availability zone in which the VM is running.
{
"fqdns": "",
"id": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-
xxxxxxxxxxxx/resourceGroups/myResourceGroupVM/providers/Microsoft.Compute/virtualMachines/myVM",
"location": "eastus2",
"macAddress": "00-0D-3A-23-9A-49",
"powerState": "VM running",
"privateIpAddress": "10.0.0.4",
"publicIpAddress": "52.174.34.95",
"resourceGroup": "myResourceGroupVM",
"zones": "1"
}
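The command that produces the disk output below is not preserved; a sketch that retrieves the OS disk with az disk show (the osdisk_id variable name is illustrative):
osdisk_id=$(az vm show --resource-group myResourceGroupVM --name myVM \
    --query "storageProfile.osDisk.managedDisk.id" --output tsv)
az disk show --ids $osdisk_id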
The output shows that the managed disk is in the same availability zone as the VM:
{
"creationData": {
"createOption": "FromImage",
"imageReference": {
"id": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-
xxxxxxxxxxxx/Providers/Microsoft.Compute/Locations/westeurope/Publishers/Canonical/ArtifactTypes/VMImage/Offer
s/UbuntuServer/Skus/16.04-LTS/Versions/latest",
"lun": null
},
"sourceResourceId": null,
"sourceUri": null,
"storageAccountId": null
},
"diskSizeGb": 30,
"encryptionSettings": null,
"id": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-
xxxxxxxxxxxx/resourceGroups/myResourceGroupVM/providers/Microsoft.Compute/disks/osdisk_761c570dab",
"location": "eastus2",
"managedBy": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-
xxxxxxxxxxxx/resourceGroups/myResourceGroupVM/providers/Microsoft.Compute/virtualMachines/myVM",
"name": "myVM_osdisk_761c570dab",
"osType": "Linux",
"provisioningState": "Succeeded",
"resourceGroup": "myResourceGroupVM",
"sku": {
"name": "Premium_LRS",
"tier": "Premium"
},
"tags": {},
"timeCreated": "2018-03-05T22:16:06.892752+00:00",
"type": "Microsoft.Compute/disks",
"zones": [
"1"
]
}
Use the az vm list-ip-addresses command to return the name of public IP address resource in myVM. In this
example, the name is stored in a variable that is used in a later step.
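The commands themselves are not preserved in this extract; a sketch (the ipname variable is illustrative):
ipname=$(az vm list-ip-addresses --resource-group myResourceGroupVM --name myVM \
    --query "[0].virtualMachine.network.publicIpAddresses[0].name" --output tsv)
az network public-ip show --resource-group myResourceGroupVM --name $ipname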
The output shows that the IP address is in the same availability zone as the VM:
{
"dnsSettings": null,
"etag": "W/\"b7ad25eb-3191-4c8f-9cec-c5e4a3a37d35\"",
"id": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-
xxxxxxxxxxxx/resourceGroups/myResourceGroupVM/providers/Microsoft.Network/publicIPAddresses/myVMPublicIP",
"idleTimeoutInMinutes": 4,
"ipAddress": "52.174.34.95",
"ipConfiguration": {
"etag": null,
"id": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-
xxxxxxxxxxxx/resourceGroups/myResourceGroupVM/providers/Microsoft.Network/networkInterfaces/myVMVMNic/ipConfig
urations/ipconfigmyVM",
"name": null,
"privateIpAddress": null,
"privateIpAllocationMethod": null,
"provisioningState": null,
"publicIpAddress": null,
"resourceGroup": "myResourceGroupVM",
"subnet": null
},
"location": "eastUS2",
"name": "myVMPublicIP",
"provisioningState": "Succeeded",
"publicIpAddressVersion": "IPv4",
"publicIpAllocationMethod": "Dynamic",
"resourceGroup": "myResourceGroupVM",
"resourceGuid": "8c70a073-09be-4504-0000-000000000000",
"tags": {},
"type": "Microsoft.Network/publicIPAddresses",
"zones": [
"1"
]
}
Next steps
In this article, you learned how to create a VM in an availability zone. Learn more about regions and availability for
Azure VMs.
Install and configure Ansible to manage virtual
machines in Azure
5/7/2018 • 5 min to read • Edit Online
Ansible allows you to automate the deployment and configuration of resources in your environment. You can use
Ansible to manage your virtual machines (VMs) in Azure, the same as you would any other resource. This article
details how to install Ansible and the required Azure Python SDK modules for some of the most common Linux
distros. You can install Ansible on other distros by adjusting the installed packages to fit your particular platform.
To create Azure resources in a secure manner, you also learn how to create and define credentials for Ansible to
use.
For more installation options and steps for additional platforms, see the Ansible install guide.
If you choose to install and use the CLI locally, this article requires that you are running the Azure CLI version
2.0.30 or later. Run az --version to find the version. If you need to install or upgrade, see Install Azure CLI 2.0.
Install Ansible
One of the easiest ways to use Ansible with Azure is with the Azure Cloud Shell, a browser-based shell experience
to manage and develop Azure resources. Ansible is pre-installed in the Cloud Shell, so you can skip instructions on
how to install Ansible and go to Create Azure credentials. For a list of additional tools also available in the Cloud
Shell, see Features and tools for Bash in the Azure Cloud Shell.
The following instructions show you how to create a Linux VM for various distros and then install Ansible. If you
don't need to create a Linux VM, skip this first step to create an Azure resource group. If you do need to create a
VM, first create a resource group with az group create. The following example creates a resource group named
myResourceGroup in the eastus location:
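The command itself is not preserved in this extract; it is typically:
az group create --name myResourceGroup --location eastus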
Now, select one of the following distros for steps on how to create a VM, if needed, and then install Ansible:
CentOS 7.4
Ubuntu 16.04 LTS
SLES 12 SP2
CentOS 7.4
If needed, create a VM with az vm create. The following example creates a VM named myVMAnsible:
az vm create \
--name myVMAnsible \
--resource-group myResourceGroup \
--image OpenLogic:CentOS:7.4:latest \
--admin-username azureuser \
--generate-ssh-keys
SSH to your VM using the publicIpAddress noted in the output from the VM create operation:
ssh azureuser@<publicIpAddress>
On your VM, install the required packages for the Azure Python SDK modules and Ansible as follows:
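The package list below is a sketch; exact package names may differ on your CentOS image:
sudo yum install -y epel-release
sudo yum install -y python-pip python-wheel python-devel openssl-devel libffi-devel gcc
sudo pip install ansible[azure]
Ubuntu 16.04 LTS
If needed, create a VM with az vm create. The following example creates a VM named myVMAnsible: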
az vm create \
--name myVMAnsible \
--resource-group myResourceGroup \
--image Canonical:UbuntuServer:16.04-LTS:latest \
--admin-username azureuser \
--generate-ssh-keys
SSH to your VM using the publicIpAddress noted in the output from the VM create operation:
ssh azureuser@<publicIpAddress>
On your VM, install the required packages for the Azure Python SDK modules and Ansible as follows:
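A sketch of the typical Ubuntu package installation (exact package names may differ):
sudo apt-get update && sudo apt-get install -y libssl-dev libffi-dev python-dev python-pip
sudo pip install ansible[azure]
SLES 12 SP2
If needed, create a VM with az vm create as in the previous examples, using a SLES 12 SP2 image.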
SSH to your VM using the publicIpAddress noted in the output from the VM create operation:
ssh azureuser@<publicIpAddress>
On your VM, install the required packages for the Azure Python SDK modules and Ansible as follows:
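A sketch of the typical SLES package installation (exact package names may differ):
sudo zypper install -y python-pip python-devel libopenssl-devel libffi-devel gcc make
sudo pip install ansible[azure]
Create Azure credentials
To authenticate to Azure, first create a service principal with az ad sp create-for-rbac. The query shown here is an assumption based on the output that follows:
az ad sp create-for-rbac --query '{"client_id": appId, "secret": password, "tenant": tenant}'
The output is similar to the following example: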
{
"client_id": "eec5624a-90f8-4386-8a87-02730b5410d5",
"secret": "531dcffa-3aff-4488-99bb-4816c395ea3f",
"tenant": "72f988bf-86f1-41af-91ab-2d7cd011db47"
}
To authenticate to Azure, you also need to obtain your Azure subscription ID using az account show:
You use the output from these two commands in the next step.
mkdir ~/.azure
vi ~/.azure/credentials
The credentials file itself combines the subscription ID with the output of creating a service principal. Output from
the previous az ad sp create-for-rbac command is the same as needed for client_id, secret, and tenant. The
following example credentials file shows these values matching the previous output. Enter your own values as
follows:
[default]
subscription_id=xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
client_id=eec5624a-90f8-4386-8a87-02730b5410d5
secret=531dcffa-3aff-4488-99bb-4816c395ea3f
tenant=72f988bf-86f1-41af-91ab-2d7cd011db47
export AZURE_SUBSCRIPTION_ID=xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
export AZURE_CLIENT_ID=eec5624a-90f8-4386-8a87-02730b5410d5
export AZURE_SECRET=531dcffa-3aff-4488-99bb-4816c395ea3f
export AZURE_TENANT=72f988bf-86f1-41af-91ab-2d7cd011db47
Next steps
You now have Ansible and the required Azure Python SDK modules installed, and credentials defined for Ansible
to use. Learn how to create a VM with Ansible. You can also learn how to create a complete Azure VM and
supporting resources with Ansible.
Create a basic virtual machine in Azure with Ansible
5/7/2018 • 2 min to read • Edit Online
Ansible allows you to automate the deployment and configuration of resources in your environment. You can use
Ansible to manage your virtual machines (VMs) in Azure, the same as you would any other resource. This article
shows you how to create a basic VM with Ansible. You can also learn how to Create a complete VM environment
with Ansible.
Prerequisites
To manage Azure resources with Ansible, you need the following:
Ansible and the Azure Python SDK modules installed on your host system.
Install Ansible on CentOS 7.4, Ubuntu 16.04 LTS, and SLES 12 SP2
Azure credentials, and Ansible configured to use them.
Create Azure credentials and configure Ansible
Azure CLI version 2.0.4 or later. Run az --version to find the version.
If you need to upgrade, see Install Azure CLI 2.0. You can also use Cloud Shell from your browser.
Create a virtual network for your VM with az network vnet create. The following example creates a virtual network
named myVnet and a subnet named mySubnet:
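The command is not preserved in this extract; a sketch (the address prefixes are assumptions):
az network vnet create \
    --resource-group myResourceGroup \
    --name myVnet \
    --address-prefix 10.0.0.0/16 \
    --subnet-name mySubnet \
    --subnet-prefix 10.0.1.0/24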
ansible-playbook azure_create_vm.yml
The output looks similar to the following example that shows the VM has been successfully created:
Next steps
This example creates a VM in an existing resource group and with a virtual network already deployed. For a more
detailed example on how to use Ansible to create supporting resources such as a virtual network and Network
Security Group rules, see Create a complete VM environment with Ansible.
Create a complete Linux virtual machine environment
in Azure with Ansible
5/7/2018 • 4 min to read • Edit Online
Ansible allows you to automate the deployment and configuration of resources in your environment. You can use
Ansible to manage your virtual machines (VMs) in Azure, the same as you would any other resource. This article
shows you how to create a complete Linux environment and supporting resources with Ansible. You can also learn
how to Create a basic VM with Ansible.
Prerequisites
To manage Azure resources with Ansible, you need the following:
Ansible and the Azure Python SDK modules installed on your host system.
Install Ansible on CentOS 7.4, Ubuntu 16.04 LTS, and SLES 12 SP2
Azure credentials, and Ansible configured to use them.
Create Azure credentials and configure Ansible
Azure CLI version 2.0.4 or later. Run az --version to find the version.
If you need to upgrade, see Install Azure CLI 2.0. You can also use Cloud Shell from your browser.
To add a subnet, the following section creates a subnet named mySubnet in the myVnet virtual network:
Ansible needs a resource group to deploy all your resources into. Create a resource group with az group create.
The following example creates a resource group named myResourceGroup in the eastus location:
To create the complete VM environment with Ansible, run the playbook as follows:
ansible-playbook azure_create_complete_vm.yml
The output looks similar to the following example that shows the VM has been successfully created:
Next steps
This example creates a complete VM environment including the required virtual networking resources. For a more
direct example to create a VM into existing network resources with default options, see Create a VM.
Install and configure Terraform to provision VMs and
other infrastructure into Azure
2/16/2018 • 3 min to read • Edit Online
Terraform provides an easy way to define, preview, and deploy cloud infrastructure by using a simple templating
language. This article describes the necessary steps to use Terraform to provision resources in Azure.
TIP
To learn more about how to use Terraform with Azure, visit the Terraform Hub. Terraform is installed by default in the Cloud
Shell. By using Cloud Shell, you can skip the install/setup portions of this document.
Install Terraform
To install Terraform, download the package appropriate for your operating system into a separate install directory.
The download contains a single executable file, for which you should also define a global path. For instructions on
how to set the path on Linux and Mac, go to this webpage. For instructions on how to set the path on Windows, go
to this webpage.
Verify your path configuration with the terraform command. You should see a list of available Terraform options
as output:
azureuser@Azure:~$ terraform
Usage: terraform [--version] [--help] <command> [args]
If you have multiple Azure subscriptions, their details are returned by the az login command. Set the
SUBSCRIPTION_ID environment variable to hold the value of the returned id field from the subscription you want
to use.
Set the subscription that you want to use for this session.
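The commands themselves are not preserved in this extract; the typical sequence to set the subscription and create the service principal is:
az account set --subscription="${SUBSCRIPTION_ID}"
az ad sp create-for-rbac --role="Contributor" --scopes="/subscriptions/${SUBSCRIPTION_ID}"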
Your appId, password, sp_name, and tenant are returned. Make a note of the appId and password.
To test your credentials, open a new shell and run the following command, using the returned values for sp_name,
password, and tenant:
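A sketch of the test command, with SP_NAME, PASSWORD, and TENANT as placeholders for the returned values:
az login --service-principal -u SP_NAME -p PASSWORD --tenant TENANT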
#!/bin/sh
echo "Setting environment variables for Terraform"
export ARM_SUBSCRIPTION_ID=your_subscription_id
export ARM_CLIENT_ID=your_appId
export ARM_CLIENT_SECRET=your_password
export ARM_TENANT_ID=your_tenant_id
provider "azurerm" {
}
resource "azurerm_resource_group" "rg" {
name = "testResourceGroup"
location = "westus"
}
Save the file and then run terraform init . This command downloads the Azure modules required to create an
Azure resource group. You see the following output:
+ azurerm_resource_group.rg
id: <computed>
location: "westus"
name: "testResourceGroup"
tags.%: <computed>
azurerm_resource_group.rg: Creating...
location: "" => "westus"
name: "" => "testResourceGroup"
tags.%: "" => "<computed>"
azurerm_resource_group.rg: Creation complete after 1s
Next steps
You have installed Terraform and configured Azure credentials so that you can start deploying infrastructure into
your Azure subscription. You then tested your installation by creating an empty Azure resource group.
Create an Azure VM with Terraform
Create a complete Linux virtual machine
infrastructure in Azure with Terraform
5/1/2018 • 7 min to read • Edit Online
Terraform allows you to define and create complete infrastructure deployments in Azure. You build Terraform
templates in a human-readable format that create and configure Azure resources in a consistent, reproducible
manner. This article shows you how to create a complete Linux environment and supporting resources with
Terraform. You can also learn how to Install and configure Terraform.
TIP
If you create environment variables for the values or are using the Azure Cloud Shell Bash experience , you don't need to
include the variable declarations in this section.
provider "azurerm" {
subscription_id = "xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
client_id = "xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
client_secret = "xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
tenant_id = "xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
}
The following section creates a resource group named myResourceGroup in the eastus location:
tags {
environment = "Terraform Demo"
}
}
tags {
environment = "Terraform Demo"
}
}
The following section creates a subnet named mySubnet in the myVnet virtual network:
tags {
environment = "Terraform Demo"
}
}
security_rule {
name = "SSH"
priority = 1001
direction = "Inbound"
access = "Allow"
protocol = "Tcp"
source_port_range = "*"
destination_port_range = "22"
source_address_prefix = "*"
destination_address_prefix = "*"
}
tags {
environment = "Terraform Demo"
}
}
ip_configuration {
name = "myNicConfiguration"
subnet_id = "${azurerm_subnet.myterraformsubnet.id}"
private_ip_address_allocation = "dynamic"
public_ip_address_id = "${azurerm_public_ip.myterraformpublicip.id}"
}
tags {
environment = "Terraform Demo"
}
}
byte_length = 8
}
Now you can create a storage account. The following section creates a storage account, with the name based on
the random text generated in the preceding step:
tags {
environment = "Terraform Demo"
}
}
storage_os_disk {
name = "myOsDisk"
caching = "ReadWrite"
create_option = "FromImage"
managed_disk_type = "Premium_LRS"
}
storage_image_reference {
publisher = "Canonical"
offer = "UbuntuServer"
sku = "16.04.0-LTS"
version = "latest"
}
os_profile {
computer_name = "myvm"
admin_username = "azureuser"
}
os_profile_linux_config {
disable_password_authentication = true
ssh_keys {
path = "/home/azureuser/.ssh/authorized_keys"
key_data = "ssh-rsa AAAAB3Nz{snip}hwhqT9h"
}
}
boot_diagnostics {
enabled = "true"
storage_uri = "${azurerm_storage_account.mystorageaccount.primary_blob_endpoint}"
}
tags {
environment = "Terraform Demo"
}
}
variable "resourcename" {
default = "myResourceGroup"
}
tags {
environment = "Terraform Demo"
}
}
tags {
environment = "Terraform Demo"
}
}
# Create subnet
resource "azurerm_subnet" "myterraformsubnet" {
name = "mySubnet"
resource_group_name = "${azurerm_resource_group.myterraformgroup.name}"
virtual_network_name = "${azurerm_virtual_network.myterraformnetwork.name}"
address_prefix = "10.0.1.0/24"
}
tags {
environment = "Terraform Demo"
}
}
security_rule {
name = "SSH"
priority = 1001
direction = "Inbound"
access = "Allow"
protocol = "Tcp"
source_port_range = "*"
destination_port_range = "22"
source_address_prefix = "*"
destination_address_prefix = "*"
}
tags {
environment = "Terraform Demo"
}
}
ip_configuration {
name = "myNicConfiguration"
subnet_id = "${azurerm_subnet.myterraformsubnet.id}"
private_ip_address_allocation = "dynamic"
public_ip_address_id = "${azurerm_public_ip.myterraformpublicip.id}"
}
tags {
environment = "Terraform Demo"
}
}
byte_length = 8
}
tags {
environment = "Terraform Demo"
}
}
storage_os_disk {
name = "myOsDisk"
caching = "ReadWrite"
create_option = "FromImage"
managed_disk_type = "Premium_LRS"
}
storage_image_reference {
publisher = "Canonical"
offer = "UbuntuServer"
sku = "16.04.0-LTS"
version = "latest"
}
os_profile {
computer_name = "myvm"
admin_username = "azureuser"
}
os_profile_linux_config {
disable_password_authentication = true
ssh_keys {
path = "/home/azureuser/.ssh/authorized_keys"
key_data = "ssh-rsa AAAAB3Nz{snip}hwhqT9h"
}
}
boot_diagnostics {
enabled = "true"
storage_uri = "${azurerm_storage_account.mystorageaccount.primary_blob_endpoint}"
}
tags {
environment = "Terraform Demo"
}
}
terraform init
The next step is to have Terraform review and validate the template. This step compares the requested resources to
the state information saved by Terraform and then outputs the planned execution. Resources are not created in
Azure.
terraform plan
After you execute the previous command, you should see something like the following screen:
...
Note: You didn’t specify an “-out” parameter to save this plan, so when
“apply” is called, Terraform can’t guarantee this is what will execute.
+ azurerm_resource_group.myterraform
<snip>
+ azurerm_virtual_network.myterraformnetwork
<snip>
+ azurerm_network_interface.myterraformnic
<snip>
+ azurerm_network_security_group.myterraformnsg
<snip>
+ azurerm_public_ip.myterraformpublicip
<snip>
+ azurerm_subnet.myterraformsubnet
<snip>
+ azurerm_virtual_machine.myterraformvm
<snip>
Plan: 7 to add, 0 to change, 0 to destroy.
If everything looks correct and you are ready to build the infrastructure in Azure, apply the template in Terraform:
terraform apply
Once Terraform completes, your VM infrastructure is ready. Obtain the public IP address of your VM with az vm
show:
az vm show --resource-group myResourceGroup --name myVM -d --query [publicIps] -o tsv
ssh azureuser@<publicIps>
Next steps
You have created basic infrastructure in Azure by using Terraform. For more complex scenarios, including
examples that use load balancers and virtual machine scale sets, see numerous Terraform examples for Azure. For
an up-to-date list of supported Azure providers, see the Terraform documentation.
Cloud-init support for virtual machines in Azure
3/2/2018 • 4 min to read • Edit Online
This article explains the support that exists for cloud-init to configure a virtual machine (VM ) or virtual machine
scale sets (VMSS ) at provisioning time in Azure. These cloud-init scripts run on first boot once the resources have
been provisioned by Azure.
Cloud-init overview
Cloud-init is a widely used approach to customize a Linux VM as it boots for the first time. You can use cloud-init
to install packages and write files, or to configure users and security. Because cloud-init is called during the initial
boot process, there are no additional steps or required agents to apply your configuration. For more information
on how to properly format your #cloud-config files, see the cloud-init documentation site. #cloud-config files are
text files encoded in base64.
Cloud-init also works across distributions. For example, you don't use apt-get install or yum install to install a
package. Instead you can define a list of packages to install. Cloud-init automatically uses the native package
management tool for the distro you select.
We are actively working with our endorsed Linux distro partners in order to have cloud-init enabled images
available in the Azure marketplace. These images will make your cloud-init deployments and configurations work
seamlessly with VMs and VM Scale Sets (VMSS ). The following table outlines the current cloud-init enabled
images availability on the Azure platform:
Currently Azure Stack does not support the provisioning of RHEL 7.4 and CentOS 7.4 using cloud-init.
The next step is to create a file in your current shell, named cloud-init.txt, and paste the following configuration. For
this example, create the file in the Cloud Shell not on your local machine. You can use any editor you wish. Enter
sensible-editor cloud-init.txt to create the file and see a list of available editors. Choose #1 to use the nano
editor. Make sure that the whole cloud-init file is copied correctly, especially the first line:
#cloud-config
package_upgrade: true
packages:
- httpd
Press ctrl-X to exit the file, type y to save the file and press enter to confirm the file name on exit.
The final step is to create a VM with the az vm create command.
The following example creates a VM named centos74 and creates SSH keys if they do not already exist in a default
key location. To use a specific set of keys, use the --ssh-key-value option. Use the --custom-data parameter to
pass in your cloud-init config file. Provide the full path to the cloud-init.txt config if you saved the file outside of
your present working directory. The following example creates a VM named centos74:
az vm create \
--resource-group myResourceGroup \
--name centos74 \
--image OpenLogic:CentOS:7-CI:latest \
--custom-data cloud-init.txt \
--generate-ssh-keys
When the VM has been created, the Azure CLI shows information specific to your deployment. Take note of the
publicIpAddress . This address is used to access the VM. It takes some time for the VM to be created, the packages
to install, and the app to start. There are background tasks that continue to run after the Azure CLI returns you to
the prompt. You can SSH into the VM and use the steps outlined in the Troubleshooting section to view the cloud-
init logs.
Troubleshooting cloud-init
Once the VM has been provisioned, cloud-init runs through all the modules and scripts defined in
--custom-data to configure the VM. If you need to troubleshoot any errors or omissions from the
configuration, you need to search for the module name (disk_setup or runcmd, for example) in the cloud-init log,
located in /var/log/cloud-init.log.
NOTE
Not every module failure results in a fatal cloud-init overall configuration failure. For example, using the runcmd module, if
the script fails, cloud-init will still report provisioning succeeded because the runcmd module executed.
Next steps
For cloud-init examples of configuration changes, see the following documents:
Add an additional Linux user to a VM
Run a package manager to update existing packages on first boot
Change VM local hostname
Install an application package, update configuration files and inject keys
Use cloud-init to set hostname for a Linux VM in
Azure
2/6/2018 • 1 min to read • Edit Online
This article shows you how to use cloud-init to configure a specific hostname on a virtual machine (VM ) or virtual
machine scale sets (VMSS ) at provisioning time in Azure. These cloud-init scripts run on first boot once the
resources have been provisioned by Azure. For more information about how cloud-init works natively in Azure
and the supported Linux distros, see cloud-init overview
#cloud-config
hostname: myhostname
Before deploying this image, you need to create a resource group with the az group create command. An Azure
resource group is a logical container into which Azure resources are deployed and managed. The following
example creates a resource group named myResourceGroup in the eastus location.
Now, create a VM with az vm create and specify the cloud-init file with --custom-data cloud_init_hostname.txt as
follows:
az vm create \
--resource-group myResourceGroup \
--name centos74 \
--image OpenLogic:CentOS:7-CI:latest \
--custom-data cloud_init_hostname.txt \
--generate-ssh-keys
Once created, the Azure CLI shows information about the VM. Use the publicIpAddress to SSH to your VM.
Enter your own address as follows:
ssh <publicIpAddress>
The VM should report the hostname as that value set in the cloud-init file, as shown in the following example
output:
myhostname
Next steps
For additional cloud-init examples of configuration changes, see the following:
Add an additional Linux user to a VM
Run a package manager to update existing packages on first boot
Change VM local hostname
Install an application package, update configuration files and inject keys
Use cloud-init to update and install packages in a
Linux VM in Azure
4/23/2018 • 2 min to read • Edit Online
This article shows you how to use cloud-init to update packages on a Linux virtual machine (VM ) or virtual
machine scale sets (VMSS ) at provisioning time in Azure. These cloud-init scripts run on first boot once the
resources have been provisioned by Azure. For more information about how cloud-init works natively in Azure
and the supported Linux distros, see cloud-init overview
#cloud-config
package_upgrade: true
packages:
- httpd
Before deploying this image, you need to create a resource group with the az group create command. An Azure
resource group is a logical container into which Azure resources are deployed and managed. The following
example creates a resource group named myResourceGroup in the eastus location.
Now, create a VM with az vm create and specify the cloud-init file with --custom-data cloud_init_upgrade.txt as
follows:
az vm create \
--resource-group myResourceGroup \
--name centos74 \
--image OpenLogic:CentOS:7-CI:latest \
--custom-data cloud_init_upgrade.txt \
--generate-ssh-keys
SSH to the public IP address of your VM shown in the output from the preceding command. Enter your own
publicIpAddress as follows:
ssh <publicIpAddress>
Run the package management tool and check for updates.
As cloud-init checked for and installed updates on boot, there should be no additional updates to apply. You can see
the update process, the number of altered packages, and the installation of httpd by running yum history and
reviewing output similar to the one below.
Next steps
For additional cloud-init examples of configuration changes, see the following:
Add an additional Linux user to a VM
Run a package manager to update existing packages on first boot
Change VM local hostname
Install an application package, update configuration files and inject keys
Use cloud-init to add a user to a Linux VM in Azure
2/6/2018 • 2 min to read • Edit Online
This article shows you how to use cloud-init to add a user on a virtual machine (VM ) or virtual machine scale sets
(VMSS ) at provisioning time in Azure. This cloud-init script runs on first boot once the resources have been
provisioned by Azure. For more information about how cloud-init works natively in Azure and the supported
Linux distros, see cloud-init overview.
#cloud-config
users:
- default
- name: myadminuser
groups: sudo
shell: /bin/bash
sudo: ['ALL=(ALL) NOPASSWD:ALL']
ssh-authorized-keys:
- ssh-rsa AAAAB3<snip>
NOTE
The #cloud-config file includes the - default parameter. This appends the new user to the existing admin user
created during provisioning. If you create a user without the - default parameter, the auto-generated admin user
created by the Azure platform would be overwritten.
Before deploying this image, you need to create a resource group with the az group create command. An Azure
resource group is a logical container into which Azure resources are deployed and managed. The following
example creates a resource group named myResourceGroup in the eastus location.
Now, create a VM with az vm create and specify the cloud-init file with --custom-data cloud_init_add_user.txt as
follows:
az vm create \
--resource-group myResourceGroup \
--name centos74 \
--image OpenLogic:CentOS:7-CI:latest \
--custom-data cloud_init_add_user.txt \
--generate-ssh-keys
SSH to the public IP address of your VM shown in the output from the preceding command. Enter your own
publicIpAddress as follows:
ssh <publicIpAddress>
To confirm your user was added to the VM and the specified groups, view the contents of the /etc/group file as
follows:
cat /etc/group
The following example output shows the user from the cloud_init_add_user.txt file has been added to the VM and
the appropriate group:
root:x:0:
<snip />
sudo:x:27:myadminuser
<snip />
myadminuser:x:1000:
Next steps
For additional cloud-init examples of configuration changes, see the following:
Add an additional Linux user to a VM
Run a package manager to update existing packages on first boot
Change VM local hostname
Install an application package, update configuration files and inject keys
Use cloud-init to configure a swapfile on a Linux VM
3/13/2018 • 2 min to read
This article shows you how to use cloud-init to configure the swapfile on various Linux distributions. The swapfile
was traditionally configured by the Linux Agent (WALA) based on which distributions required one. This document
outlines the process for building the swapfile on demand at provisioning time using cloud-init. For more
information about how cloud-init works natively in Azure and the supported Linux distros, see cloud-init overview.
#cloud-config
disk_setup:
  ephemeral0:
    table_type: gpt
    layout: [66, [33,82]]
    overwrite: true
fs_setup:
  - device: ephemeral0.1
    filesystem: ext4
  - device: ephemeral0.2
    filesystem: swap
mounts:
  - ["ephemeral0.1", "/mnt"]
  - ["ephemeral0.2", "none", "swap", "sw", "0", "0"]
Before deploying this image, you need to create a resource group with the az group create command. An Azure
resource group is a logical container into which Azure resources are deployed and managed. The following
example creates a resource group named myResourceGroup in the eastus location.
Now, create a VM with az vm create and specify the cloud-init file with --custom-data cloud_init_swapfile.txt as
follows:
az vm create \
--resource-group myResourceGroup \
--name centos74 \
--image OpenLogic:CentOS:7-CI:latest \
--custom-data cloud_init_swapfile.txt \
--generate-ssh-keys
Verify swapfile was created
SSH to the public IP address of your VM shown in the output from the preceding command. Enter your own
publicIpAddress as follows:
ssh <publicIpAddress>
Once you have connected to the VM over SSH, check whether the swapfile was created:
swapon -s
NOTE
If you have an existing Azure image that has a swap file configured and you want to change the swap file configuration for
new images, you should remove the existing swap file. See the 'Customize Images to provision by cloud-init' document for
more details.
Next steps
For additional cloud-init examples of configuration changes, see the following:
Add an additional Linux user to a VM
Run a package manager to update existing packages on first boot
Change VM local hostname
Install an application package, update configuration files and inject keys
Use cloud-init to run a bash script in a Linux VM in
Azure
2/6/2018 • 2 min to read
This article shows you how to use cloud-init to run an existing bash script on a Linux virtual machine (VM) or
virtual machine scale sets (VMSS) at provisioning time in Azure. These cloud-init scripts run on first boot once the
resources have been provisioned by Azure. For more information about how cloud-init works natively in Azure and
the supported Linux distros, see cloud-init overview.
#!/bin/sh
echo "this has been written via cloud-init $(date)" >> /tmp/myScript.txt
Before deploying this image, you need to create a resource group with the az group create command. An Azure
resource group is a logical container into which Azure resources are deployed and managed. The following
example creates a resource group named myResourceGroup in the eastus location.
Now, create a VM with az vm create and specify the bash script file with --custom-data simple_bash.sh as follows:
az vm create \
--resource-group myResourceGroup \
--name centos74 \
--image OpenLogic:CentOS:7-CI:latest \
--custom-data simple_bash.sh \
--generate-ssh-keys
Change to the /tmp directory and verify that the myScript.txt file exists and has the appropriate text inside of it. If it
does not, you can check /var/log/cloud-init.log for more details and search for entries related to your script.
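For example, a quick check after connecting to the VM over SSH:
cd /tmp
cat myScript.txt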
Next steps
For additional cloud-init examples of configuration changes, see the following:
Add an additional Linux user to a VM
Run a package manager to update existing packages on first boot
Change VM local hostname
Install an application package, update configuration files and inject keys
Prepare an existing Linux Azure VM image for use
with cloud-init
5/10/2018 • 3 min to read
This article shows you how to take an existing Azure virtual machine and prepare it to be redeployed and ready to
use cloud-init. The resulting image can be used to deploy a new virtual machine or virtual machine scale sets -
either of which could then be further customized by cloud-init at deployment time. These cloud-init scripts run on
first boot once the resources have been provisioned by Azure. For more information about how cloud-init works
natively in Azure and the supported Linux distros, see cloud-init overview.
Prerequisites
This document assumes you already have an Azure virtual machine running a supported version of the Linux
operating system. You have already configured the machine to suit your needs, installed all the required modules,
processed all the required updates, and tested it to ensure it meets your requirements.
Ensure that the cloud_init_modules section in /etc/cloud/cloud.cfg includes the disk_setup and mounts modules:
cloud_init_modules:
- migrator
- bootcmd
- write-files
- growpart
- resizefs
- disk_setup
- mounts
- set_hostname
- update_hostname
- update_etc_hosts
- rsyslog
- users-groups
- ssh
A number of tasks relating to provisioning and handling ephemeral disks need to be updated in /etc/waagent.conf.
Run the following commands to update the appropriate settings.
sed -i 's/Provisioning.Enabled=y/Provisioning.Enabled=n/g' /etc/waagent.conf
sed -i 's/Provisioning.UseCloudInit=n/Provisioning.UseCloudInit=y/g' /etc/waagent.conf
sed -i 's/ResourceDisk.Format=y/ResourceDisk.Format=n/g' /etc/waagent.conf
sed -i 's/ResourceDisk.EnableSwap=y/ResourceDisk.EnableSwap=n/g' /etc/waagent.conf
Allow only Azure as a datasource for the Azure Linux Agent by creating a new file
/etc/cloud/cloud.cfg.d/91-azure_datasource.cfg using an editor of your choice with the following lines:
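A typical datasource restriction looks like the following (the exact contents are an assumption and may vary by cloud-init version):
# Instruct cloud-init to use only the Azure datasource
datasource_list: [ Azure ]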
If your existing Azure image has a swap file configured and you want to change the swap file configuration for new
images using cloud-init, you need to remove the existing swap file.
For RedHat based images - follow the instructions in the following RedHat document explaining how to remove the
swap file.
For CentOS images with swapfile enabled, you can run the following command to turn off the swapfile:
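Assuming the default resource-disk swapfile location used later in this article, that looks like:
sudo swapoff /mnt/resource/swapfile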
Ensure the swapfile reference is removed from /etc/fstab - it should look something like the following output:
# /etc/fstab
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
UUID=99cf66df-2fef-4aad-b226-382883643a1c / xfs defaults 0 0
UUID=7c473048-a4e7-4908-bad3-a9be22e9d37d /boot xfs defaults 0 0
To save space and remove the swap file you can run the following command:
rm /mnt/resource/swapfile
Extra step for cloud-init prepared image
NOTE
If your image was previously prepared and configured with cloud-init, you need to do the following steps.
The following three commands are used only if the VM you are customizing to be a new specialized source image
was previously provisioned by cloud-init. You do NOT need to run them if your image was configured using the
Azure Linux Agent.
For more information about the Azure Linux Agent deprovision commands, see the Azure Linux Agent
documentation.
Exit the SSH session, then from your bash shell, run the following Azure CLI commands to deallocate, generalize,
and create a new Azure VM image. Replace myResourceGroup and sourceVmName with the appropriate information
for your source VM.
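A minimal sketch of those commands; the image name myCloudInitImage is illustrative:
az vm deallocate --resource-group myResourceGroup --name sourceVmName
az vm generalize --resource-group myResourceGroup --name sourceVmName
az image create --resource-group myResourceGroup --name myCloudInitImage --source sourceVmName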
Next steps
For additional cloud-init examples of configuration changes, see the following:
Add an additional Linux user to a VM
Run a package manager to update existing packages on first boot
Change VM local hostname
Install an application package, update configuration files and inject keys
Azure and Jenkins
4/9/2018 • 1 min to read
Jenkins is a popular open-source automation server used to set up continuous integration and delivery (CI/CD) for
your software projects. You can host your Jenkins deployment in Azure or extend your existing Jenkins
configuration using Azure resources. Jenkins plugins are also available to simplify CI/CD of your applications to
Azure.
This article is an introduction to using Azure with Jenkins, detailing the core Azure features available to Jenkins
users. To get started with your own Jenkins server in Azure, see our quickstart.
This quickstart shows how to install Jenkins on an Ubuntu Linux VM with the tools and plug-ins configured to
work with Azure. When you're finished, you have a Jenkins server running in Azure building a sample Java app
from GitHub.
Prerequisites
An Azure subscription
Access to SSH on your computer's command line (such as the Bash shell or PuTTY)
If you don't have an Azure subscription, create a free account before you begin.
3. After reviewing the pricing details and terms information, select Continue.
4. Select Create to configure the Jenkins server in the Azure portal.
5. In the Basics tab, specify the following values:
Name - Enter Jenkins.
User name - Enter the user name to use when signing into the virtual machine on which Jenkins is
running. The user name must meet specific requirements.
Authentication type - Select SSH public key.
SSH public key - Copy and paste an RSA public key in single-line format (starting with ssh-rsa) or
multi-line PEM format. You can generate SSH keys using ssh-keygen on Linux and macOS, or
PuTTYGen on Windows. For more information about SSH keys and Azure, see the article, How to Use
SSH keys with Windows on Azure.
Subscription - Select the Azure subscription into which you want to install Jenkins.
Resource group - Select Create new, and enter a name for the resource group that serves as a logical
container for the collection of resources that make up your Jenkins installation.
Location - Select East US.
Connect to Jenkins
Navigate to your virtual machine (for example, http://jenkins2517454.eastus.cloudapp.azure.com/) in your web
browser. The Jenkins console is inaccessible through unsecured HTTP, so instructions are provided on the page to
access the Jenkins console securely from your computer using an SSH tunnel.
Set up the tunnel using the ssh command on the page from the command line, replacing username with the name
of the virtual machine admin user chosen earlier when setting up the virtual machine from the solution template.
ssh -L 127.0.0.1:8080:localhost:8080 [email protected]
After you have started the tunnel, navigate to http://localhost:8080/ on your local machine.
Get the initial password by running the following command in the command line while connected through SSH to
the Jenkins VM.
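sudo cat /var/lib/jenkins/secrets/initialAdminPassword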
Unlock the Jenkins dashboard for the first time using this initial password.
Select Install suggested plugins on the next page and then create a Jenkins admin user used to access the
Jenkins dashboard.
The Jenkins server is now ready to build code.
Select the Source Code Management tab, enable Git, and enter the following URL in Repository URL field:
https://github.com/spring-guides/gs-spring-boot.git
Select the Build tab, then select Add build step, Invoke Gradle script. Select Use Gradle Wrapper, then enter
complete in Wrapper location and build for Tasks.
Select Advanced... and then enter complete in the Root Build script field. Select Save.
Navigate to complete/build/libs and ensure the gs-spring-boot-0.1.0.jar is there to verify that your build was
successful. Your Jenkins server is now ready to build your own projects in Azure.
Next Steps
Add Azure VMs as Jenkins agents
Scale your Jenkins deployments to meet demand with
Azure VM agents
2/15/2018 • 4 min to read
This tutorial shows how to use the Jenkins Azure VM Agents plugin to add on-demand capacity with Linux virtual
machines running in Azure.
In this tutorial, you will:
Install the Azure VM Agents plugin
Configure the plugin to create resources in your Azure subscription
Set the compute resources available to each agent
Set the operating system and tools installed on each agent
Create a new Jenkins freestyle job
Run the job on an Azure VM agent
Prerequisites
An Azure subscription
A Jenkins master server. If you don't have one, view the quickstart to set up one in Azure.
If you don't have an Azure subscription, create a free account before you begin.
1. From the Jenkins dashboard, select Manage Jenkins, then select Manage Plugins.
2. Select the Available tab, then search for Azure VM Agents. Select the checkbox next to the entry for the
plugin and select Install without restart from the bottom of the dashboard.
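The service principal shown in the following output is typically created with az ad sp create-for-rbac; a sketch, where the name and password are placeholders:
az ad sp create-for-rbac --name jenkins_sp --password secure_password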
{
"appId": "BBBBBBBB-BBBB-BBBB-BBBB-BBBBBBBBBBB",
"displayName": "jenkins_sp",
"name": "http://jenkins_sp",
"password": "secure_password",
"tenant": "CCCCCCCC-CCCC-CCCC-CCCCCCCCCCC"
}
d. Enter the credentials from the service principal into the Add credentials dialog. If you don't know your
Azure subscription ID, you can query it from the CLI:
az account list
{
"cloudName": "AzureCloud",
"id": "AAAAAAAA-AAAA-AAAA-AAAA-AAAAAAAAAAAA",
"isDefault": true,
"name": "Visual Studio Enterprise",
"state": "Enabled",
"tenantId": "CCCCCCCC-CCCC-CCCC-CCCC-CCCCCCCCCCC",
"user": {
"name": "[email protected]",
"type": "user"
}
}
The completed service principal should use the id field for Subscription ID, the appId value for Client
ID, password for Client Secret, and tenant for Tenant ID. Select Add to add the service principal and
then configure the plugin to use the newly created credential.
4. In the Resource Group Name section, leave Create new selected and enter myJenkinsAgentGroup.
5. Select Verify configuration to connect to Azure to test the profile settings.
6. Select Apply to update the plugin configuration.
Configure agent resources
Configure a template for use to define an Azure VM agent. This template defines the compute resources each
agent has when created.
1. Select Add next to Add Azure Virtual Machine Template.
2. Enter defaulttemplate for the Name.
3. Enter ubuntu for the Label.
4. Select the desired Azure region from the combo box.
5. Select a VM size from the drop-down under Virtual Machine Size. A general-purpose Standard_DS1_v2 size is
fine for this tutorial.
6. Leave the Retention time at 60. This setting defines the number of minutes Jenkins waits before it
deallocates idle agents. Specify 0 if you do not want idle agents to be removed automatically.
Select Add next to Admin Credentials, then select Jenkins. Enter a username and password used to log in to the
agents, making sure they satisfy the username and password policy for administrative accounts on Azure VMs.
Select Verify Template to verify the configuration and then select Save to save your changes and return to the
Jenkins dashboard.
Create a job in Jenkins
1. Within the Jenkins dashboard, click New Item.
2. Enter demoproject1 for the name and select Freestyle project, then select OK.
3. In the General tab, choose Restrict where project can be run and type ubuntu in Label Expression. You see
a message confirming that the label is served by the cloud configuration created in the previous step.
4. In the Source Code Management tab, select Git and add the following URL into the Repository URL field:
https://github.com/spring-projects/spring-petclinic.git
5. In the Build tab, select Add build step, then Invoke top-level Maven targets. Enter package in the Goals
field.
6. Select Save to save the job definition.
Overview
The following information shows how to use Blob storage as a repository of build artifacts created by a Jenkins
Continuous Integration (CI) solution, or as a source of downloadable files to be used in a build process. One of the
scenarios where you would find this useful is when you're coding in an agile development environment (using Java
or other languages), builds are running based on continuous integration, and you need a repository for your build
artifacts so that you can, for example, share them with other organization members or customers, or maintain
an archive. Another scenario is when your build job itself requires other files, for example, dependencies to
download as part of the build input.
In this tutorial you will be using the Azure Storage Plugin for Jenkins CI made available by Microsoft.
Overview of Jenkins
Jenkins enables continuous integration of a software project by allowing developers to easily integrate their code
changes and have builds produced automatically and frequently, thereby increasing the productivity of the
developers. Builds are versioned, and build artifacts can be uploaded to various repositories. This topic will show
how to use Azure blob storage as the repository of the build artifacts. It will also show how to download
dependencies from Azure blob storage.
More information about Jenkins can be found at Meet Jenkins.
Prerequisites
You will need the following to use the Blob service with your Jenkins CI solution:
A Jenkins Continuous Integration solution.
If you currently don't have a Jenkins CI solution, you can run a Jenkins CI solution using the following
technique:
1. On a Java-enabled machine, download jenkins.war from http://jenkins-ci.org.
2. At a command prompt that is opened to the folder that contains jenkins.war, run:
java -jar jenkins.war
3. In your browser, open http://localhost:8080/. This will open the Jenkins dashboard, which you will
use to install and configure the Azure Storage plugin.
While a typical Jenkins CI solution would be set up to run as a service, running the Jenkins war at the
command line will be sufficient for this tutorial.
An Azure account. You can sign up for an Azure account at http://www.azure.com.
An Azure storage account. If you don't already have a storage account, you can create one using the steps at
Create a Storage Account.
Familiarity with the Jenkins CI solution is recommended but not required, as the following content will use a
basic example to show you the steps needed when using the Blob service as a repository for Jenkins CI build
artifacts.
How to configure the Azure Storage plugin to use your storage account
1. Within the Jenkins dashboard, click Manage Jenkins.
2. In the Manage Jenkins page, click Configure System.
3. In the Microsoft Azure Storage Account Configuration section:
a. Enter your storage account name, which you can obtain from the Azure Portal.
b. Enter your storage account key, also obtainable from the Azure Portal.
c. Use the default value for Blob Service Endpoint URL if you are using the public Azure cloud. If you are
using a different Azure cloud, use the endpoint as specified in the Azure Portal for your storage account.
d. Click Validate storage credentials to validate your storage account.
e. [Optional] If you have additional storage accounts that you want made available to your Jenkins CI, click
Add more Storage Accounts.
f. Click Save to save your settings.
5. In the Post-build Actions section of the job configuration, click Add post-build action and choose
Upload artifacts to Azure Blob storage.
6. For Storage account name, select the storage account to use.
7. For Container name, specify the container name. (The container will be created if it does not already exist
when the build artifacts are uploaded.) You can use environment variables, so for this example enter
${JOB_NAME} as the container name.
Tip
Below the Command section where you entered a script for Execute Windows batch command is a link
to the environment variables recognized by Jenkins. Click that link to learn the environment variable names
and descriptions. Note that environment variables that contain special characters, such as the BUILD_URL
environment variable, are not allowed as a container name or common virtual path.
8. Click Make new container public by default for this example. (If you want to use a private container, you'll
need to create a shared access signature to allow access. That is beyond the scope of this topic. You can learn
more about shared access signatures at Using Shared Access Signatures (SAS).)
9. [Optional] Click Clean container before uploading if you want the container to be cleared of contents before
build artifacts are uploaded (leave it unchecked if you do not want to clean the contents of the container).
10. For List of Artifacts to upload, enter text/*.txt.
11. For Common virtual path for uploaded artifacts, for purposes of this tutorial, enter
${BUILD_ID}/${BUILD_NUMBER}.
12. Click Save to save your settings.
13. In the Jenkins dashboard, click Build Now to run MyJob. Examine the console output for status. Status
messages for Azure storage will be included in the console output when the post-build action starts to upload
build artifacts.
14. Upon successful completion of the job, you can examine the build artifacts by opening the public blob.
a. Login to the Azure Portal.
b. Click Storage.
c. Click the storage account name that you used for Jenkins.
d. Click Containers.
e. Click the container named myjob, which is the lowercase version of the job name that you assigned
when you created the Jenkins job. Container names and blob names are lowercase (and case-sensitive) in
Azure storage. Within the list of blobs for the container named myjob you should see hello.txt and
date.txt. Copy the URL for either of these items and open it in your browser. You will see the text file that
was uploaded as a build artifact.
Only one post-build action that uploads artifacts to Azure blob storage can be created per job. Note that the single
post-build action to upload artifacts to Azure blob storage can specify different files (including wildcards) and paths
to files within List of Artifacts to upload using a semi-colon as a separator. For example, if your Jenkins build
produces JAR files and TXT files in your workspace's build folder, and you want to upload both to Azure blob
storage, use the following for the List of Artifacts to upload value: build/*.jar;build/*.txt. You can also use
double-colon syntax to specify a path to use within the blob name. For example, if you want the JARs to get
uploaded using binaries in the blob path and the TXT files to get uploaded using notices in the blob path, use the
following for the List of Artifacts to upload value: build/*.jar::binaries;build/*.txt::notices.
How to create a build step that downloads from Azure blob storage
The following steps show how to configure a build step to download items from Azure blob storage. This would be
useful if you want to include items in your build, for example, JARs that you keep in Azure blob storage.
1. In the Build section of the job configuration, click Add build step and choose Download from Azure Blob
storage.
2. For Storage account name, select the storage account to use.
3. For Container name, specify the name of the container that has the blobs you want to download. You can use
environment variables.
4. For Blob name, specify the blob name. You can use environment variables. Also, you can use an asterisk as a
wildcard after you specify the initial letter(s) of the blob name. For example, project* would specify all blobs
whose names start with project.
5. [Optional] For Download path, specify the path on the Jenkins machine where you want to download files
from Azure blob storage. Environment variables can also be used. (If you do not provide a value for Download
path, the files from Azure blob storage will be downloaded to the job's workspace.)
If you have additional items you want to download from Azure blob storage, you can create additional build steps.
After you run a build, you can check the build history console output, or look at your download location, to see
whether the blobs you expected were successfully downloaded.
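Uploaded artifacts are addressable using the standard Azure blob URL format:
http://storageaccount.blob.core.windows.net/container_name/blob_name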
(The format above applies to the public Azure cloud. If you are using a different Azure cloud, use the
endpoint within the Azure Portal to determine your URL endpoint.)
In the format above, storageaccount represents the name of your storage account, container_name
represents the name of your container, and blob_name represents the name of your blob. Within
the container name, you can have multiple paths, separated by a forward slash (/). The example container
name in this tutorial was MyJob, and ${BUILD_ID}/${BUILD_NUMBER} was used for the common
virtual path, resulting in the blob having a URL of the following form:
http://example.blob.core.windows.net/myjob/2014-04-14_23-57-00/1/hello.txt
Next steps
Meet Jenkins
Azure Storage SDK for Java
Azure Storage Client SDK Reference
Azure Storage Services REST API
Azure Storage Team Blog
For more information, visit Azure for Java developers.
Create a Docker environment in Azure using the
Docker VM extension
4/9/2018 • 3 min to read
Docker is a popular container management and imaging platform that allows you to quickly work with containers
on Linux. In Azure, there are various ways you can deploy Docker according to your needs. This article focuses on
using the Docker VM extension and Azure Resource Manager templates with the Azure CLI 2.0. You can also
perform these steps with the Azure CLI 1.0.
WARNING
The Azure Docker VM extension for Linux is deprecated and will be retired November 2018. The extension merely installs
Docker, so alternatives such as cloud-init or the Custom Script Extension are a better way to install the Docker version of
choice. For more information on how to use cloud-init, see Customize a Linux VM with cloud-init.
Next, deploy a VM with az group deployment create that includes the Azure Docker VM extension from this Azure
Resource Manager template on GitHub. When prompted, provide your own unique values for
newStorageAccountName, adminUsername, adminPassword, and dnsNameForPublicIP:
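A sketch of that deployment command; the template URI is an assumption based on the Azure quickstart templates repository:
az group deployment create --resource-group myResourceGroup \
  --template-uri https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/docker-simple-on-ubuntu/azuredeploy.json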
az vm show \
--resource-group myResourceGroup \
--name myDockerVM \
--show-details \
--query [fqdns] \
--output tsv
SSH to your new Docker host. Provide your own username and DNS name from the preceding steps:
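For example, where the user name and FQDN are placeholders and the NGINX run command is an assumption matching the output described next:
ssh [email protected]
sudo docker run -d -p 80:80 -p 443:443 nginx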
The output is similar to the following example as the NGINX image is downloaded and a container started:
Check the status of the containers running on your Docker host as follows:
sudo docker ps
The output is similar to the following example, showing that the NGINX container is running and that TCP ports 80
and 443 are being forwarded:
To see your container in action, open up a web browser and enter the DNS name of your Docker host:
Azure Docker VM extension template reference
The previous example uses an existing quickstart template. You can also deploy the Azure Docker VM extension
with your own Resource Manager templates. To do so, add the following to your Resource Manager templates,
defining the vmName of your VM appropriately:
{
"type": "Microsoft.Compute/virtualMachines/extensions",
"name": "[concat(variables('vmName'), '/DockerExtension'))]",
"apiVersion": "2015-05-01-preview",
"location": "[parameters('location')]",
"dependsOn": [
"[concat('Microsoft.Compute/virtualMachines/', variables('vmName'))]"
],
"properties": {
"publisher": "Microsoft.Azure.Extensions",
"type": "DockerExtension",
"typeHandlerVersion": "1.*",
"autoUpgradeMinorVersion": true,
"settings": {},
"protectedSettings": {}
}
}
You can find a more detailed walkthrough on using Resource Manager templates by reading the Azure Resource
Manager overview.
Next steps
You may wish to configure the Docker daemon TCP port, understand Docker security, or deploy containers using
Docker Compose. For more information on the Azure Docker VM Extension itself, see the GitHub project.
Read more information about the additional Docker deployment options in Azure:
Use Docker Machine with the Azure driver
Get Started with Docker and Compose to define and run a multi-container application on an Azure virtual
machine.
Deploy an Azure Container Service cluster
How to use Docker Machine to create hosts in Azure
2/6/2018 • 3 min to read
This article details how to use Docker Machine to create hosts in Azure. The docker-machine command creates a
Linux virtual machine (VM) in Azure, then installs Docker. You can then manage your Docker hosts in Azure using
the same local tools and workflows. To use docker-machine on Windows 10, use the Linux Bash shell.
You create Docker host VMs in Azure with docker-machine create by specifying azure as the driver. For more
information, see the Docker Azure Driver documentation.
The following example creates a VM named myvm based on the Standard D2 v2 size, creates a user account
named azureuser, and opens port 80 on the host VM. Follow any prompts to log in to your Azure account and
grant Docker Machine permissions to create and manage resources.
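A sketch of that command; replace the subscription ID with your own:
docker-machine create -d azure \
  --azure-subscription-id <subscription_id> \
  --azure-ssh-user azureuser \
  --azure-open-port 80 \
  --azure-size Standard_D2_v2 \
  myvm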
export DOCKER_TLS_VERIFY="1"
export DOCKER_HOST="tcp://40.68.254.142:2376"
export DOCKER_CERT_PATH="/Users/user/.docker/machine/machines/machine"
export DOCKER_MACHINE_NAME="machine"
# Run this command to configure your shell:
# eval $(docker-machine env myvm)
To define the connection settings, you can either run the suggested configuration command
(eval $(docker-machine env myvm)), or you can set the environment variables manually.
Run a container
To see a container in action, let's run a basic NGINX web server. Create a container with docker run and expose
port 80 for web traffic as follows:
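docker run -d -p 80:80 nginx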
View running containers with docker ps . The following example output shows the NGINX container running with
port 80 exposed:
docker-machine ip myvm
To see the container in action, open a web browser and enter the public IP address noted in the output of the
preceding command:
Next steps
You can also create hosts with the Docker VM Extension. For examples on using Docker Compose, see Get started
with Docker and Compose in Azure.
Get started with Docker and Compose to define and
run a multi-container application in Azure
3/8/2018 • 4 min to read
With Compose, you use a simple text file to define an application consisting of multiple Docker containers. You
then spin up your application in a single command that does everything to deploy your defined environment. As
an example, this article shows you how to quickly set up a WordPress blog with a backend MariaDB SQL database
on an Ubuntu VM. You can also use Compose to set up more complex applications.
Next, deploy a VM with az group deployment create that includes the Azure Docker VM extension from this Azure
Resource Manager template on GitHub. When prompted, provide your own unique values for
newStorageAccountName, adminUsername, adminPassword, and dnsNameForPublicIP:
az vm show \
--resource-group myResourceGroup \
--name myDockerVM \
--show-details \
--query [fqdns] \
--output tsv
SSH to your new Docker host. Provide your own username and DNS name from the preceding steps:
ssh [email protected]
To check that Compose is installed on the VM, run the following command:
docker-compose --version
TIP
If you used another method to create a Docker host and need to install Compose yourself, see the Compose documentation.
Create a Docker Compose file that defines the application, using sensible-editor or another editor of your choice:
sensible-editor docker-compose.yml
Paste the following example into your Docker Compose file. This configuration uses images from the DockerHub
Registry to install WordPress (the open source blogging and content management system) and a linked backend
MariaDB SQL database. Enter your own MYSQL_ROOT_PASSWORD as follows:
wordpress:
  image: wordpress
  links:
    - db:mysql
  ports:
    - 80:80
db:
  image: mariadb
  environment:
    MYSQL_ROOT_PASSWORD: <your password>
Save and close the file, then start the application with Compose:
docker-compose up -d
This command starts the Docker containers specified in docker-compose.yml. It takes a minute or two for this step
to complete. You see output similar to the following example:
Creating wordpress_db_1...
Creating wordpress_wordpress_1...
...
NOTE
Be sure to use the -d option on start-up so that the containers run in the background continuously.
To verify that the containers are up, type docker-compose ps. You should see something like:
You can now connect to WordPress directly on the VM on port 80. Open a web browser and enter the DNS name
of your VM (such as http://mypublicdns.eastus.cloudapp.azure.com). You should now see the WordPress start
screen, where you can complete the installation and get started with the application.
Next steps
Go to the Docker VM extension user guide for more options to configure Docker and Compose in your Docker
VM. For example, one option is to put the Compose yml file (converted to JSON) directly in the configuration
of the Docker VM extension.
Check out the Compose command-line reference and user guide for more examples of building and deploying
multi-container apps.
Use an Azure Resource Manager template, either your own or one contributed from the community, to deploy
an Azure VM with Docker and an application set up with Compose. For example, the Deploy a WordPress blog
with Docker template uses Docker and Compose to quickly deploy WordPress with a MySQL backend on an
Ubuntu VM.
Try integrating Docker Compose with a Docker Swarm cluster. See Using Compose with Swarm for scenarios.
Cloud Foundry on Azure
4/25/2018 • 2 min to read
Cloud Foundry is an open-source platform-as-a-service (PaaS) for building, deploying, and operating 12-factor
applications developed in various languages and frameworks. This document describes the options you have for
running Cloud Foundry on Azure and how you can get started.
NOTE
The level of support for your Azure resources, such as the virtual machines where you run Cloud Foundry, is based on your
Azure support agreement. Best-effort community support only applies to the Cloud Foundry-specific components.
Next steps
Deploy Pivotal Cloud Foundry from the Azure Marketplace
Deploy an app to Cloud Foundry in Azure
Deploy your first app to Cloud Foundry on Microsoft
Azure
4/9/2018 • 4 min to read
Cloud Foundry is a popular open-source application platform available on Microsoft Azure. In this article, we show
how to deploy and manage an application on Cloud Foundry in an Azure environment.
IMPORTANT
If you are deploying PCF from the Azure Marketplace, make a note of the SYSTEMDOMAINURL and the admin credentials
required to access the Pivotal Apps Manager, both of which are described in the marketplace deployment guide. They are
needed to complete this tutorial. For marketplace deployments, the SYSTEMDOMAINURL is in the form https://system.ip-
address.cf.pcfazure.com.
You are prompted to log in to the Cloud Controller. Use the admin account credentials that you acquired from the
marketplace deployment steps.
Cloud Foundry provides orgs and spaces as namespaces to isolate the teams and environments within a shared
deployment. The PCF marketplace deployment includes the default system org and a set of spaces created to
contain the base components, like the autoscaling service and the Azure service broker. For now, choose the system
space.
cf create-org myorg
cf create-space dev -o myorg
Use the target command to switch to the new org and space:
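cf target -o myorg -s dev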
Now, when you deploy an application, it is automatically created in the new org and space. To confirm that there
are currently no apps in the new org/space, type cf apps again.
NOTE
For more information about orgs and spaces and how they can be used for role-based access control (RBAC), see the Cloud
Foundry documentation.
Deploy an application
Let's use a sample Cloud Foundry application called Hello Spring Cloud, which is written in Java and based on the
Spring Framework and Spring Boot.
Clone the Hello Spring Cloud repository
The Hello Spring Cloud sample application is available on GitHub. Clone it to your environment and change into
the new directory:
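For example (the repository location is an assumption):
git clone https://github.com/cloudfoundry-samples/hello-spring-cloud
cd hello-spring-cloud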
cf push
When you push an application, Cloud Foundry detects the type of application (in this case, a Java app) and
identifies its dependencies (in this case, the Spring framework). It then packages everything required to run your
code into a standalone container image, known as a droplet. Finally, Cloud Foundry schedules the application on
one of the available machines in your environment and creates a URL where you can reach it, which is available in
the output of the command.
To see the hello-spring-cloud application, open the provided URL in your browser:
NOTE
To learn more about what happens during cf push , see How Applications Are Staged in the Cloud Foundry documentation.
cf logs hello-spring-cloud
By default, the logs command uses tail, which shows new logs as they are written. To see new logs appear, refresh
the hello-spring-cloud app in the browser.
To view logs that have already been written, add the recent switch:
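cf logs hello-spring-cloud --recent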
cf scale -i 2 hello-spring-cloud
Running the cf app command on the application shows that Cloud Foundry is creating another instance of the
application. Once the application has started, Cloud Foundry automatically starts load balancing traffic to it.
Next steps
Read the Cloud Foundry documentation
Set up the Visual Studio Team Services plugin for Cloud Foundry
Configure the Microsoft Log Analytics Nozzle for Cloud Foundry
OpenShift in Azure
5/8/2018 • 1 min to read
OpenShift is an open and extensible container application platform that brings Docker and Kubernetes to the
enterprise.
OpenShift includes Kubernetes for container orchestration and management. It adds developer- and operations-
centric tools that enable:
Rapid application development.
Easy deployment and scaling.
Long-term lifecycle maintenance for teams and applications.
There are multiple versions of OpenShift available:
OpenShift Origin
OpenShift Container Platform
OpenShift Online
OpenShift Dedicated
Of the four versions covered in this article, only two are available for customers to deploy in Azure: OpenShift
Origin and OpenShift Container Platform.
OpenShift Origin
Origin is an open-source upstream project of OpenShift that's community supported. Origin can be installed on
CentOS or Red Hat Enterprise Linux (RHEL).
OpenShift Online
Online is a Red Hat-managed multi-tenant OpenShift that uses Container Platform. Red Hat manages all of the
underlying infrastructure (such as VMs, OpenShift cluster, networking, and storage).
With this version, the customer deploys containers but has no control over which hosts the containers run on. Because
Online is multi-tenant, containers may be located on the same VM hosts as containers from other customers. Cost
is per container.
OpenShift Dedicated
Dedicated is a Red Hat-managed single-tenant OpenShift that uses Container Platform. Red Hat manages all of the
underlying infrastructure (VMs, OpenShift cluster, networking, storage, etc.). The cluster is specific to one customer
and runs in a public cloud (such as AWS or Google, with Azure coming in early 2018). A starting cluster includes
four application nodes for $48,000 per year (paid up front).
Next steps
Configure common prerequisites for OpenShift in Azure
Deploy OpenShift Origin in Azure
Deploy OpenShift Container Platform in Azure
Post-deployment tasks
Troubleshoot OpenShift deployment
Common prerequisites for deploying OpenShift in
Azure
3/8/2018 • 4 min to read
This article describes common prerequisites for deploying OpenShift Origin or OpenShift Container Platform in
Azure.
The installation of OpenShift uses Ansible playbooks. Ansible uses Secure Shell (SSH) to connect to all cluster
hosts to complete installation steps.
When you initiate the SSH connection to the remote hosts, you cannot enter a password. For this reason, the
private key cannot have a password associated with it, or the deployment fails.
Because the virtual machines (VMs) deploy via Azure Resource Manager templates, the same public key is used for
access to all VMs. You need to inject the corresponding private key into the VM that executes all the playbooks as
well. To do this securely, you use an Azure key vault to pass the private key into the VM.
If there's a need for persistent storage for containers, then persistent volumes are required. OpenShift supports
Azure virtual hard disks (VHDs) for this capability, but Azure must first be configured as the cloud provider.
In this model, OpenShift:
Creates a VHD object in an Azure Storage account.
Mounts the VHD to a VM and formats the volume.
Mounts the volume to the pod.
For this configuration to work, OpenShift needs permissions to perform the previous tasks in Azure. You achieve
this with a service principal. The service principal is a security account in Azure Active Directory that is granted
permissions to resources.
The service principal needs to have access to the storage accounts and VMs that make up the cluster. If all
OpenShift cluster resources deploy to a single resource group, the service principal can be granted permissions to
that resource group.
This guide describes how to create the artifacts associated with the prerequisites.
Create a key vault to manage SSH keys for the OpenShift cluster.
Create a service principal for use by the Azure Cloud Solution Provider.
If you don't have an Azure subscription, create a free account before you begin.
Sign in to Azure
Sign in to your Azure subscription with the az login command and follow the on-screen directions, or click Try it to
use Cloud Shell.
az login
NOTE
Your SSH key pair cannot have a password.
For more information on SSH keys on Windows, see How to create SSH keys on Windows.
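For example, a password-less key pair can be generated as follows (the file name is illustrative):
ssh-keygen -t rsa -N '' -f ~/.ssh/openshift_rsa
The service principal output shown next typically comes from a command similar to the following, where the name and scope are placeholders:
az ad sp create-for-rbac --name openshiftsp --role Contributor --scopes /subscriptions/<subscription-id>/resourceGroups/<resource-group>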
{
"appId": "11111111-abcd-1234-efgh-111111111111",
"displayName": "openshiftsp",
"name": "http://openshiftsp",
"password": {Strong Password},
"tenant": "XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX"
}
WARNING
Be sure to create a secure password. Follow the Azure AD password rules and restrictions guidance.
For more information on service principals, see Create an Azure service principal with Azure CLI 2.0.
Next steps
This article covered the following topics:
Create a key vault to manage SSH keys for the OpenShift cluster.
Create a service principal for use by the Azure Cloud Solution Provider.
Next, deploy an OpenShift cluster:
Deploy OpenShift Origin
Deploy OpenShift Container Platform
Deploy OpenShift Origin in Azure
2/6/2018 • 2 min to read
You can use one of two ways to deploy OpenShift Origin in Azure:
You can manually deploy all the necessary Azure infrastructure components, and then follow the OpenShift
Origin documentation.
You can also use an existing Resource Manager template that simplifies the deployment of the OpenShift
Origin cluster.
NOTE
The following command requires Azure CLI 2.0.8 or later. You can verify the CLI version with the az --version command.
To update the CLI version, see Install Azure CLI 2.0.
The following example deploys the OpenShift cluster and all related resources into a resource group named
myResourceGroup, with a deployment name of myOpenShiftCluster. The template is referenced directly from the
GitHub repo by using a local parameters file named azuredeploy.parameters.json.
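A sketch of that deployment; the template URI is an assumption based on the Microsoft/openshift-origin GitHub repository:
az group deployment create -g myResourceGroup -n myOpenShiftCluster \
  --template-uri https://raw.githubusercontent.com/Microsoft/openshift-origin/master/azuredeploy.json \
  --parameters @./azuredeploy.parameters.json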
The deployment takes at least 25 minutes to finish, depending on the total number of nodes deployed. The URL of
the OpenShift console and the DNS name of the OpenShift master print to the terminal when the deployment
finishes.
{
"OpenShift Console Uri": "http://openshiftlb.cloudapp.azure.com:8443/console",
"OpenShift Master SSH": "ssh [email protected] -p 2200"
}
Clean up resources
Use the az group delete command to remove the resource group, OpenShift cluster, and all related resources when
they're no longer needed.
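az group delete --name myResourceGroup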
Next steps
Post-deployment tasks
Troubleshoot OpenShift deployment
Getting started with OpenShift Origin
Deploy OpenShift Container Platform in Azure
2/6/2018 • 4 min to read
You can use one of several methods to deploy OpenShift Container Platform in Azure:
You can manually deploy the necessary Azure infrastructure components and then follow the OpenShift
Container Platform documentation.
You can also use an existing Resource Manager template that simplifies the deployment of the OpenShift
Container Platform cluster.
Another option is to use the Azure Marketplace offer.
For all options, a Red Hat subscription is required. During the deployment, the Red Hat Enterprise Linux instance is
registered to the Red Hat subscription and attached to the Pool ID that contains the entitlements for OpenShift
Container Platform. Ensure that you have a valid Red Hat Subscription Manager (RHSM ) username, password,
and Pool ID. You can verify this information by signing in to https://access.redhat.com.
{
"$schema": "http://schema.management.azure.com/schemas/2015-01-01/deploymentParameters.json#",
"contentVersion": "1.0.0.0",
"parameters": {
"masterVmSize": {
"value": "Standard_E2s_v3"
},
"infraVmSize": {
"value": "Standard_E2s_v3"
},
"nodeVmSize": {
"value": "Standard_E2s_v3"
},
"openshiftClusterPrefix": {
"value": "mycluster"
},
"masterInstanceCount": {
"value": 3
},
"infraInstanceCount": {
"infraInstanceCount": {
"value": 2
},
"nodeInstanceCount": {
"value": 2
},
"dataDiskSize": {
"value": 128
},
"adminUsername": {
"value": "clusteradmin"
},
"openshiftPassword": {
"value": "{Strong Password}"
},
"enableMetrics": {
"value": "true"
},
"enableLogging": {
"value": "true"
},
"enableCockpit": {
"value": "false"
},
"rhsmUsernamePasswordOrActivationKey": {
"value": "usernamepassword"
},
"rhsmUsernameOrOrgId": {
"value": "{RHSM Username}"
},
"rhsmPasswordOrActivationKey": {
"value": "{RHSM Password}"
},
"rhsmPoolId": {
"value": "{Pool ID}"
},
"sshPublicKey": {
"value": "{SSH Public Key}"
},
"keyVaultResourceGroup": {
"value": "keyvaultrg"
},
"keyVaultName": {
"value": "keyvault"
},
"keyVaultSecret": {
"value": "keysecret"
},
"enableAzure": {
"value": "true"
},
"aadClientId": {
"value": "11111111-abcd-1234-efgh-111111111111"
},
"aadClientSecret": {
"value": "{Strong Password}"
},
"defaultSubDomainType": {
"value": "nipio"
}
}
}
The following example deploys the OpenShift cluster and all related resources into a resource group named
myResourceGroup, with a deployment name of myOpenShiftCluster. The template is referenced directly from the
GitHub repo, and a local parameters file named azuredeploy.parameters.json is used.
The deployment takes at least 30 minutes to complete, depending on the total number of nodes deployed. The
URL of the OpenShift console and the DNS name of the OpenShift master print to the terminal when the
deployment finishes.
{
"OpenShift Console Uri": "http://openshiftlb.cloudapp.azure.com:8443/console",
"OpenShift Master SSH": "ssh [email protected] -p 2200"
}
Next steps
Post-deployment tasks
Troubleshoot OpenShift deployment in Azure
Getting started with OpenShift Container Platform
Post-deployment tasks
4/20/2018 • 5 min to read
After you deploy an OpenShift cluster, you can configure additional items. This article covers the following:
How to configure single sign-on by using Azure Active Directory (Azure AD )
How to configure Log Analytics to monitor OpenShift
How to configure metrics and logging
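The app registration output shown below is typically produced by az ad app create; a sketch, where the display name, URLs, and password are placeholders matching the output:
az ad app create --display-name OCPAzureAD \
  --homepage https://masterdns343khhde.westus.cloudapp.azure.com:8443/console \
  --identifier-uris https://masterdns343khhde.westus.cloudapp.azure.com:8443/console \
  --reply-urls https://masterdns343khhde.westus.cloudapp.azure.com:8443/oauth2callback/OCPAzureAD \
  --password '{Strong Password}'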
{
"appId": "12345678-ca3c-427b-9a04-ab12345cd678",
"appPermissions": null,
"availableToOtherTenants": false,
"displayName": "OCPAzureAD",
"homepage": "https://masterdns343khhde.westus.cloudapp.azure.com:8443/console",
"identifierUris": [
"https://masterdns343khhde.westus.cloudapp.azure.com:8443/console"
],
"objectId": "62cd74c9-42bb-4b9f-b2b5-b6ee88991c80",
"objectType": "Application",
"replyUrls": [
"https://masterdns343khhde.westus.cloudapp.azure.com:8443/oauth2callback/OCPAzureAD"
]
}
Take note of the appId property returned from the command for a later step.
In the Azure portal:
1. Select Azure Active Directory > App Registration.
2. Search for your app registration (for example, OCPAzureAD).
3. In the results, click the app registration.
4. Under Settings, select Required permissions.
5. Under Required Permissions, select Add.
6. Click Step 1: Select API, and then click Windows Azure Active Directory
(Microsoft.Azure.ActiveDirectory). Click Select at the bottom.
7. On Step 2: Select Permissions, select Sign in and read user profile under Delegated Permissions, and
then click Select.
8. Select Done.
Configure OpenShift for Azure AD authentication
To configure OpenShift to use Azure AD as an authentication provider, the /etc/origin/master/master-config.yaml
file must be edited on all master nodes.
Find the tenant ID by using the following CLI command:
az account show
In the OpenShift console, you now see two options for authentication: htpasswd_auth and [App Registration].
Monitor OpenShift with Log Analytics
To monitor OpenShift with Log Analytics, you can use one of two options: OMS Agent installation on VM host, or
OMS Container. This article provides instructions for deploying the OMS Container.
Create an OpenShift project for Log Analytics and set user access
oadm new-project omslogging --node-selector='zone=default'
oc project omslogging
oc create serviceaccount omsagent
oadm policy add-cluster-role-to-user cluster-reader system:serviceaccount:omslogging:omsagent
oadm policy add-scc-to-user privileged system:serviceaccount:omslogging:omsagent
Replace wsid_data with the Base64-encoded Log Analytics Workspace ID. Then replace key_data with the Base64-
encoded Log Analytics Workspace Shared Key.
wsid_data='11111111-abcd-1111-abcd-111111111111'
key_data='My Strong Password'
echo $wsid_data | base64 | tr -d '\n'
echo $key_data | base64 | tr -d '\n'
oc create -f ocp-secret.yml
oc create -f ocp-omsagent.yml
2. Edit the /etc/ansible/hosts file and add the following lines after the Identity Provider Section (# Enable
HTPasswdPasswordIdentityProvider):
# Setup metrics
openshift_hosted_metrics_deploy=false
openshift_metrics_cassandra_storage_type=dynamic
openshift_metrics_start_cluster=true
openshift_metrics_hawkular_nodeselector={"type":"infra"}
openshift_metrics_cassandra_nodeselector={"type":"infra"}
openshift_metrics_heapster_nodeselector={"type":"infra"}
openshift_hosted_metrics_public_url=https://metrics.$ROUTING/hawkular/metrics
# Setup logging
openshift_hosted_logging_deploy=false
openshift_hosted_logging_storage_kind=dynamic
openshift_logging_fluentd_nodeselector={"logging":"true"}
openshift_logging_es_nodeselector={"type":"infra"}
openshift_logging_kibana_nodeselector={"type":"infra"}
openshift_logging_curator_nodeselector={"type":"infra"}
openshift_master_logging_public_url=https://kibana.$ROUTING
3. Replace $ROUTING with the string used for the openshift_master_default_subdomain option in the same
/etc/ansible/hosts file.
Azure Cloud Provider in use
On the first master node (Origin) or bastion node (OCP), SSH by using the credentials provided during
deployment. Issue the following command:
ansible-playbook $HOME/openshift-ansible/playbooks/byo/openshift-cluster/openshift-metrics.yml \
-e openshift_metrics_install_metrics=True \
-e openshift_metrics_cassandra_storage_type=dynamic
ansible-playbook $HOME/openshift-ansible/playbooks/byo/openshift-cluster/openshift-logging.yml \
-e openshift_logging_install_logging=True \
-e openshift_hosted_logging_storage_kind=dynamic
ansible-playbook $HOME/openshift-ansible/playbooks/byo/openshift-cluster/openshift-metrics.yml \
-e openshift_metrics_install_metrics=True
ansible-playbook $HOME/openshift-ansible/playbooks/byo/openshift-cluster/openshift-logging.yml \
-e openshift_logging_install_logging=True
Next steps
Getting started with OpenShift Container Platform
Getting started with OpenShift Origin
Troubleshoot OpenShift deployment in Azure
11/10/2017 • 1 min to read
If your OpenShift cluster does not deploy successfully, try these troubleshooting tasks to narrow down the issue.
View the deployment status and compare against the following list of exit codes:
Exit code 3: Your Red Hat Subscription User Name / Password or Organization ID / Activation Key is incorrect
Exit code 4: Your Red Hat Pool ID is incorrect or there are no entitlements available
Exit code 5: Unable to provision Docker Thin Pool Volume
Exit code 6: OpenShift Cluster installation failed
Exit code 7: OpenShift Cluster installation succeeded but Azure Cloud Solution Provider configuration failed -
master config on Master Node issue
Exit code 8: OpenShift Cluster installation succeeded but Azure Cloud Solution Provider configuration failed -
node config on Master Node issue
Exit code 9: OpenShift Cluster installation succeeded but Azure Cloud Solution Provider configuration failed -
node config on Infra or App Node issue
Exit code 10: OpenShift Cluster installation succeeded but Azure Cloud Solution Provider configuration failed -
correcting Master Nodes or not able to set Master as unschedulable
Exit code 11: Metrics failed to deploy
Exit code 12: Logging failed to deploy
For exit codes 7-10, the OpenShift cluster was installed, but the Azure Cloud Solution Provider configuration
failed. You can SSH to the master node (OpenShift Origin) or the bastion node (OpenShift Container Platform),
and from there SSH to each cluster node to fix the issues.
A common cause for the failures with exit codes 7-9 is that the service principal did not have proper permissions
to the subscription or the resource group. If this is the issue, assign the correct permissions and manually rerun the
script that failed and all subsequent scripts.
Be sure to restart the service that failed (for example, systemctl restart atomic-openshift-node.service) before
executing the scripts again.
For further troubleshooting, SSH into your master node on port 2200 (Origin) or the bastion node on port 22
(Container Platform). You need to be root (sudo su -) and then browse to the following directory:
/var/lib/waagent/custom-script/download.
Here you see folders named "0" and "1." In each of these folders, you see two files, "stderr" and "stdout." Look
through these files to determine where the failure occurred.
Using Azure for hosting and running SAP workload
scenarios
4/24/2018 • 10 min to read
By choosing Microsoft Azure as your SAP-ready cloud partner, you can reliably run your mission-critical
SAP workloads and scenarios on a scalable, compliant, and enterprise-proven platform. Get the scalability,
flexibility, and cost savings of Azure. With the expanded partnership between Microsoft and SAP, you can run SAP
applications across dev/test and production scenarios in Azure - and be fully supported. From SAP NetWeaver to
SAP S/4HANA, SAP BI, Linux to Windows, and SAP HANA to SQL, we have you covered.
Besides hosting SAP NetWeaver scenarios with different DBMSs on Azure, you can host other SAP
workload scenarios, like SAP BI on Azure. Documentation regarding SAP NetWeaver deployments on Azure
native Virtual Machines can be found in the section "SAP NetWeaver on Azure Virtual Machines."
Azure has native Virtual Machine offers that are ever growing in CPU and memory size to
cover SAP workloads that leverage SAP HANA. For more information on this area, look up the documents under
the section "SAP HANA on Azure Virtual Machines."
SAP HANA on Azure (Large Instances) is a unique offer that sets Azure apart from the competition. To
enable hosting SAP scenarios that demand more memory and CPU resources for SAP HANA, Azure offers
the use of customer-dedicated bare-metal hardware for running SAP HANA deployments that
require up to 20 TB (60 TB scale-out) of memory for S/4HANA or other SAP HANA workloads. This unique Azure
solution allows you to run SAP HANA on dedicated bare-metal hardware with the SAP
application layer or workload middleware layer hosted in native Azure Virtual Machines. This solution is
documented in several documents in the section "SAP HANA on Azure (Large Instances)."
Hosting SAP workload scenarios in Azure can also create requirements for identity integration and single sign-on
using Azure Active Directory for different SAP components and SAP SaaS or PaaS offers. A list of such
integration and single sign-on scenarios with Azure Active Directory (AAD) and SAP entities is described and
documented in the section "AAD SAP Identity Integration and Single-Sign-On."
This guide details using the Azure CLI to deploy an Azure virtual machine from the Oracle marketplace gallery
image in order to create an Oracle 12c database. Once the server is deployed, you will connect via SSH in order to
configure the Oracle database.
If you don't have an Azure subscription, create a free account before you begin.
If you choose to install and use the CLI locally, this quickstart requires that you are running the Azure CLI version
2.0.4 or later. Run az --version to find the version. If you need to install or upgrade, see Install Azure CLI 2.0.
After you create the VM, Azure CLI displays information similar to the following example. Note the value for
publicIpAddress . You use this address to access the VM.
{
"fqdns": "",
"id":
"/subscriptions/{snip}/resourceGroups/myResourceGroup/providers/Microsoft.Compute/virtualMachines/myVM",
"location": "westus",
"macAddress": "00-0D-3A-36-2F-56",
"powerState": "VM running",
"privateIpAddress": "10.0.0.4",
"publicIpAddress": "13.64.104.241",
"resourceGroup": "myResourceGroup"
}
Connect to the VM
To create an SSH session with the VM, use the following command. Replace the IP address with the
publicIpAddress value for your VM.
ssh <publicIpAddress>
$ sudo su - oracle
$ lsnrctl start
Connecting to (ADDRESS=(PROTOCOL=tcp)(HOST=)(PORT=1521))
STATUS of the LISTENER
------------------------
Alias LISTENER
Version TNSLSNR for Linux: Version 12.1.0.2.0 - Production
Start Date 23-MAR-2017 15:32:08
Uptime 0 days 0 hr. 0 min. 0 sec
Trace Level off
Security ON: Local OS Authentication
SNMP OFF
Listener Log File /u01/app/oracle/diag/tnslsnr/myVM/listener/alert/log.xml
Listening Endpoints Summary...
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=myVM.twltkue3xvsujaz1bvlrhfuiwf.dx.internal.cloudapp.net)
(PORT=1521)))
The listener supports no services
The command completed successfully
Create the database with the Database Configuration Assistant in silent mode (adjust the passwords and names
for your environment):
dbca -silent \
-createDatabase \
-templateName General_Purpose.dbc \
-gdbname cdb1 \
-sid cdb1 \
-responseFile NO_VALUE \
-characterSet AL32UTF8 \
-sysPassword OraPasswd1 \
-systemPassword OraPasswd1 \
-createAsContainerDatabase true \
-numberOfPDBs 1 \
-pdbName pdb1 \
-pdbAdminPassword OraPasswd1 \
-databaseType MULTIPURPOSE \
-automaticMemoryManagement false \
-storageType FS \
-ignorePreReqs
You can also add the ORACLE_HOME and ORACLE_SID variables to the .bashrc file. This saves the environment
variables for future sign-ins. Confirm that the following statements have been added to the ~/.bashrc file using an
editor of your choice:
# Add ORACLE_HOME.
export ORACLE_HOME=/u01/app/oracle/product/12.1.0/dbhome_1
# Add ORACLE_SID.
export ORACLE_SID=cdb1
sqlplus / as sysdba
exec DBMS_XDB_CONFIG.SETHTTPSPORT(5502);
3. Open the container PDB1 if it is not already open, but first check its status:
4. If the OPEN_MODE for PDB1 is not READ WRITE, run the following commands to open PDB1:
Type quit to end the sqlplus session and exit to log out of the oracle user.
sudo su -
2. Using your favorite editor, edit the file /etc/oratab and change the default N to Y :
cdb1:/u01/app/oracle/product/12.1.0/dbhome_1:Y
case "$1" in
'start')
# Start the Oracle databases:
# The following command assumes that the Oracle sign-in
# will not prompt the user for any values.
# Remove "&" if you don't want startup as a background process.
su - $ORA_OWNER -c "$ORA_HOME/bin/dbstart $ORA_HOME" &
touch /var/lock/subsys/dbora
;;
'stop')
# Stop the Oracle databases:
# The following command assumes that the Oracle sign-in
# will not prompt the user for any values.
su - $ORA_OWNER -c "$ORA_HOME/bin/dbshut $ORA_HOME" &
rm -f /var/lock/subsys/dbora
;;
esac
ln -s /etc/init.d/dbora /etc/rc.d/rc0.d/K01dbora
ln -s /etc/init.d/dbora /etc/rc.d/rc3.d/S99dbora
ln -s /etc/init.d/dbora /etc/rc.d/rc5.d/S99dbora
reboot
2. To open the endpoint that you use to access Oracle EM Express remotely, create a Network Security Group
rule with az network nsg rule create as follows:
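A sketch of that rule (the NSG name myVMNSG is an assumption based on the default network security group
created alongside the VM):
az network nsg rule create \
    --resource-group myResourceGroup \
    --nsg-name myVMNSG \
    --name allow-oracle-EM \
    --protocol tcp \
    --priority 1010 \
    --destination-port-range 5502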
3. If needed, obtain the public IP address of your VM again with az network public-ip show as follows:
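For example (the public IP resource name myVMPublicIP is an assumption based on the default naming used
by az vm create):
az network public-ip show \
    --resource-group myResourceGroup \
    --name myVMPublicIP \
    --query "ipAddress" \
    --output tsv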
4. Connect to EM Express from your browser. Make sure your browser is compatible with EM Express (a Flash
install is required):
You can log in by using the SYS account, and check the as sysdba checkbox. Use the password OraPasswd1 that
you set during installation.
Clean up resources
Once you have finished exploring your first Oracle database on Azure and the VM is no longer needed, you can
use the az group delete command to remove the resource group, VM, and all related resources.
az group delete --name myResourceGroup
Next steps
Learn about other Oracle solutions on Azure.
Try the Installing and Configuring Oracle Automated Storage Management tutorial.
Install the Elastic Stack on an Azure VM
4/9/2018 • 5 min to read • Edit Online
This article walks you through how to deploy Elasticsearch, Logstash, and Kibana, on an Ubuntu VM in Azure. To
see the Elastic Stack in action, you can optionally connect to Kibana and work with some sample logging data.
In this tutorial you learn how to:
Create an Ubuntu VM in an Azure resource group
Install Elasticsearch, Logstash, and Kibana on the VM
Send sample data to Elasticsearch with Logstash
Open ports and work with data in the Kibana console
This deployment is suitable for basic development with the Elastic Stack. For more on the Elastic Stack, including
recommendations for a production environment, see the Elastic documentation and the Azure Architecture Center.
If you choose to install and use the CLI locally, this tutorial requires that you are running the Azure CLI version
2.0.4 or later. Run az --version to find the version. If you need to install or upgrade, see Install Azure CLI 2.0.
When the VM has been created, the Azure CLI shows information similar to the following example. Take note of
the publicIpAddress . This address is used to access the VM.
{
"fqdns": "",
"id": "/subscriptions/<subscription
ID>/resourceGroups/myResourceGroup/providers/Microsoft.Compute/virtualMachines/myVM",
"location": "eastus",
"macAddress": "00-0D-3A-23-9A-49",
"powerState": "VM running",
"privateIpAddress": "10.0.0.4",
"publicIpAddress": "40.68.254.142",
"resourceGroup": "myResourceGroup"
}
Use the following command to create an SSH session with the virtual machine. Substitute the correct public IP
address of your virtual machine. In this example, the IP address is 40.68.254.142.
Install the Java Virtual Machine on the VM and configure the JAVA_HOME variable; this is necessary for the
Elastic Stack components to run.
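A sketch of these prerequisite steps on Ubuntu (the OpenJDK 8 package name, the JAVA_HOME path, and the
Elastic 5.x apt repository are assumptions based on a default install):
sudo apt install -y openjdk-8-jre-headless
export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
wget -qO - https://artifacts.elastic.co/GPG-KEY-elasticsearch | sudo apt-key add -
echo "deb https://artifacts.elastic.co/packages/5.x/apt stable main" | sudo tee -a /etc/apt/sources.list.d/elastic-5.x.list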
Run the following commands to update Ubuntu package sources and install Elasticsearch, Kibana, and Logstash.
sudo apt update && sudo apt install elasticsearch kibana logstash
NOTE
Detailed installation instructions, including directory layouts and initial configuration, are maintained in Elastic's
documentation.
Start Elasticsearch
Start Elasticsearch on your VM with the following command:
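sudo systemctl start elasticsearch.service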
This command produces no output, so verify that Elasticsearch is running on the VM with this curl command:
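sudo curl -XGET 'http://localhost:9200/'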
{
"name" : "w6Z4NwR",
"cluster_name" : "elasticsearch",
"cluster_uuid" : "SDzCajBoSK2EkXmHvJVaDQ",
"version" : {
"number" : "5.6.3",
"build_hash" : "1a2f265",
"build_date" : "2017-10-06T20:33:39.012Z",
"build_snapshot" : false,
"lucene_version" : "6.6.1"
},
"tagline" : "You Know, for Search"
}
The following is a basic Logstash pipeline that echoes standard input to standard output (a sketch, assuming the
default package install path):
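sudo /usr/share/logstash/bin/logstash -e 'input { stdin { } } output { stdout { } }'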
Set up Logstash to forward the kernel messages from this VM to Elasticsearch. Create a new file in an empty
directory called vm-syslog-logstash.conf and paste in the following Logstash configuration:
input {
  stdin {
    type => "stdin-type"
  }
  file {
    type => "syslog"
    path => [ "/var/log/*.log", "/var/log/*/*.log", "/var/log/messages", "/var/log/syslog" ]
    start_position => "beginning"
  }
}

output {
  stdout {
    codec => rubydebug
  }
  elasticsearch {
    hosts => "localhost:9200"
  }
}
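Then start Logstash with this configuration (a sketch; the binary path assumes the default apt package layout):
sudo /usr/share/logstash/bin/logstash -f vm-syslog-logstash.conf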
You see the syslog entries in your terminal echoed as they are sent to Elasticsearch. Use CTRL+C to exit out of
Logstash once you've sent some data.
To allow remote connections to Kibana, edit the /etc/kibana/kibana.yml configuration file and set the server host:
server.host: "0.0.0.0"
Open port 5601 from the Azure CLI to allow remote access to the Kibana console:
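az vm open-port --port 5601 --resource-group myResourceGroup --name myVM
Then start Kibana (an assumed step, matching the systemd services used above):
sudo systemctl start kibana.service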
Open up the Kibana console and select Create to generate a default index based on the syslog data you sent to
Elasticsearch earlier.
Select Discover on the Kibana console to search, browse, and filter through the syslog events.
Next steps
In this tutorial, you deployed the Elastic Stack into a development VM in Azure. You learned how to:
Create an Ubuntu VM in an Azure resource group
Install Elasticsearch, Logstash, and Kibana on the VM
Send sample data to Elasticsearch from Logstash
Open ports and work with data in the Kibana console
How to use FreeBSD's Packet Filter to create a secure
firewall in Azure
4/9/2018 • 2 min to read • Edit Online
This article introduces how to deploy a NAT firewall using FreeBSD's Packet Filter through an Azure Resource
Manager template for a common web server scenario.
What is PF?
PF (Packet Filter, also written pf ) is a BSD-licensed stateful packet filter and a central piece of software for
firewalling. PF has evolved quickly and now has several advantages over other available firewalls. Network
Address Translation (NAT) has been part of PF since day one; a packet scheduler and active queue management
were later integrated by incorporating ALTQ and making it configurable through PF's configuration. Features such
as pfsync and CARP for failover and redundancy, authpf for session authentication, and ftp-proxy to ease
firewalling the difficult FTP protocol have also extended PF. In short, PF is a powerful and feature-rich firewall.
Get started
If you are interested in setting up a secure firewall in the cloud for your web servers, let's get started. You can
also apply the scripts used in this Azure Resource Manager template to set up your own networking topology. The
Azure Resource Manager template sets up a FreeBSD virtual machine that performs NAT/redirection using PF, and
two FreeBSD virtual machines with the Nginx web server installed and configured. In addition to performing NAT
for the two web servers' egress traffic, the NAT/redirection virtual machine intercepts HTTP requests and redirects
them to the two web servers in round-robin fashion. The VNet uses the private non-routable IP address space
10.0.0.2/24, and you can modify the parameters of the template. The Azure Resource Manager template also
defines a route table for the whole VNet, which is a collection of individual routes used to override Azure default
routes based on the destination IP address.
Next, deploy the template pf-freebsd-setup with az group deployment create. Download
azuredeploy.parameters.json under the same path and define your own resource values, such as adminPassword ,
networkPrefix , and domainNamePrefix .
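A sketch of the deployment command (the template URI is a placeholder; point it at your copy of the
pf-freebsd-setup azuredeploy.json):
az group deployment create \
    --resource-group myResourceGroup \
    --template-uri <URI of the pf-freebsd-setup azuredeploy.json> \
    --parameters @azuredeploy.parameters.json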
After about five minutes, you will see "provisioningState": "Succeeded" . You can then SSH to the frontend VM
(NAT) or access the Nginx web server in a browser using the public IP address or FQDN of the frontend VM
(NAT). The following example lists the FQDN and public IP address assigned to the frontend VM (NAT) in the
myResourceGroup resource group.
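az network public-ip list \
    --resource-group myResourceGroup \
    --query "[].{FQDN:dnsSettings.fqdn, PublicIP:ipAddress}" \
    --output table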
Next steps
Do you want to set up your own NAT in Azure? Open source, free, but powerful? Then PF is a good choice. By
using the template pf-freebsd-setup, you only need five minutes to set up a NAT firewall with round-robin load
balancing using FreeBSD's PF in Azure for a common web server scenario.
If you want to learn the offering of FreeBSD in Azure, refer to introduction to FreeBSD on Azure.
If you want to know more about PF, refer to the FreeBSD handbook or the PF User's Guide.
How to install MySQL on Azure
4/9/2018 • 3 min to read • Edit Online
In this article, you will learn how to install and configure MySQL on an Azure virtual machine running Linux.
NOTE
Azure has two different deployment models for creating and working with resources: Resource Manager and classic. This
article covers using both models, but Microsoft recommends that most new deployments use the Resource Manager model.
#[azureuser@mysqlnode:~]sudo su -
During installation, you will see a dialog window pop up asking you to set the MySQL root password; set the
password here.
Input the password again to confirm.
How to install MySQL on the Red Hat OS family, such as CentOS and Oracle Linux
We use a Linux VM with CentOS or Oracle Linux here.
Step 1: Add the MySQL Yum repository. Switch to the root user:
#[azureuser@mysqlnode:~]sudo su -
Step 2: Edit the file below to enable the MySQL repository for downloading the MySQL 5.6 package.
[mysql56-community]
name=MySQL 5.6 Community Server
baseurl=http://repo.mysql.com/yum/mysql-5.6-community/el/6/$basearch/
enabled=1
gpgcheck=1
gpgkey=file:/etc/pki/rpm-gpg/RPM-GPG-KEY-mysql
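Step 3: Install MySQL from the Yum repository (the package name is an assumption matching the repo enabled
above):
# yum install mysql-community-server -y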
#sudo su -
Next Step
Find more usage and information regarding MySQL here.
Install MySQL on a virtual machine running
OpenSUSE Linux in Azure
1/22/2018 • 3 min to read • Edit Online
MySQL is a popular, open-source SQL database. This tutorial shows you how to create a virtual machine running
OpenSUSE Linux, then install MySQL.
If you choose to install and use the CLI locally, you need Azure CLI version 2.0 or later. To find the version, run
az --version . If you need to install or upgrade, see Install Azure CLI 2.0.
Create the VM. In this example, we are naming the VM myVM. We are also going to use a VM size
Standard_D2s_v3, but you should choose the VM size you think is most appropriate for your workload.
You also need to add a rule to the network security group to allow traffic over port 3306 for MySQL.
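A sketch of those two steps (the image alias openSUSE-Leap and the resource names are assumptions):
az vm create \
    --resource-group myResourceGroup \
    --name myVM \
    --image openSUSE-Leap \
    --size Standard_D2s_v3 \
    --admin-username azureuser \
    --generate-ssh-keys

az vm open-port --port 3306 --resource-group myResourceGroup --name myVM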
Connect to the VM
You'll use SSH to connect to the VM. In this example, the public IP address of the VM is 10.111.112.113. You can
see the IP address in the output when you created the VM.
ssh 10.111.112.113
Update the VM
After you're connected to the VM, install system updates and patches.
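sudo zypper update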
Install MySQL
Install MySQL on the VM over SSH. Reply to the prompts as appropriate:
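A sketch of the install command (the zypper package names are assumptions for openSUSE):
sudo zypper install mysql mysql-client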
MySQL password
After installation, the MySQL root password is empty by default. Run the mysql_secure_installation script to
secure MySQL. The script prompts you to change the MySQL root password, remove anonymous user accounts,
disable remote root logins, remove test databases, and reload the privileges table.
mysql_secure_installation
Log in to MySQL
You can now log in and enter the MySQL prompt.
mysql -u root -p
This switches you to the MySQL prompt where you can issue SQL statements to interact with the database.
Now, create a new MySQL user.
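For example (a sketch; choose your own user name and password):
CREATE USER 'mysqluser'@'localhost' IDENTIFIED BY 'mypassword';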
Create a database
Create a database and grant the mysqluser user permissions.
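For example, assuming the mysqluser account created above (the database name is an example):
CREATE DATABASE testdatabase;
GRANT ALL PRIVILEGES ON testdatabase.* TO 'mysqluser'@'localhost';
FLUSH PRIVILEGES;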
Database user names and passwords are only used by scripts connecting to the database. Database user account
names do not necessarily represent actual user accounts on the system.
Enable log in from another computer. In this example, the IP address of the computer that we want to log in from is
10.112.113.114.
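A sketch of such a grant, assuming the database and user above:
GRANT ALL PRIVILEGES ON testdatabase.* TO 'mysqluser'@'10.112.113.114' IDENTIFIED BY 'mypassword';
FLUSH PRIVILEGES;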
quit
Next steps
For details about MySQL, see the MySQL Documentation.
How to install and configure MongoDB on a Linux
VM
3/8/2018 • 5 min to read • Edit Online
MongoDB is a popular open-source, high-performance NoSQL database. This article shows you how to install and
configure MongoDB on a Linux VM with the Azure CLI 2.0. You can also perform these steps with the Azure CLI
1.0. Examples are shown that detail how to:
Manually install and configure a basic MongoDB instance
Create a basic MongoDB instance using a Resource Manager template
Create a complex MongoDB sharded cluster with replica sets using a Resource Manager template
Create a VM with az vm create. The following example creates a VM named myVM with a user named azureuser
using SSH public key authentication
az vm create \
--resource-group myResourceGroup \
--name myVM \
--image CentOS \
--admin-username azureuser \
--generate-ssh-keys
SSH to the VM using your own username and the publicIpAddress listed in the output from the previous step:
ssh azureuser@<publicIpAddress>
To add the installation sources for MongoDB, create a yum repository file as follows:
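sudo touch /etc/yum.repos.d/mongodb-org-3.6.repo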
Open the MongoDB repo file for editing, such as with vi or nano . Add the following lines:
[mongodb-org-3.6]
name=MongoDB Repository
baseurl=https://repo.mongodb.org/yum/redhat/$releasever/mongodb-org/3.6/x86_64/
gpgcheck=1
enabled=1
gpgkey=https://www.mongodb.org/static/pgp/server-3.6.asc
By default, SELinux is enforced on CentOS images, which prevents you from accessing MongoDB. Install the policy
management tools and configure SELinux to allow MongoDB to operate on its default TCP port 27017 as follows:
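sudo yum install -y policycoreutils-python
sudo semanage port -a -t mongod_port_t -p tcp 27017
Then install and start MongoDB (assumed steps, using the repository added above):
sudo yum install -y mongodb-org
sudo systemctl start mongod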
Verify the MongoDB installation by connecting using the local mongo client:
mongo
Now test the MongoDB instance by adding some data and then searching:
> db
test
> db.foo.insert( { a : 1 } )
> db.foo.find()
{ "_id" : ObjectId("57ec477cd639891710b90727"), "a" : 1 }
> exit
Next, deploy the MongoDB template with az group deployment create. When prompted, enter your own unique
values for newStorageAccountName, dnsNameForPublicIP, and admin username and password:
Log on to the VM using the public DNS address of your VM. You can view the public DNS address with az vm
show:
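For example (assuming the VM deployed by the template is named myVM):
az vm show \
    --resource-group myResourceGroup \
    --name myVM \
    --show-details \
    --query "fqdns" \
    --output tsv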
SSH to your VM using your own username and public DNS address:
Verify the MongoDB installation by connecting using the local mongo client as follows:
mongo
Now test the instance by adding some data and searching as follows:
> db
test
> db.foo.insert( { a : 1 } )
> db.foo.find()
{ "_id" : ObjectId("57ec477cd639891710b90727"), "a" : 1 }
> exit
WARNING
Deploying this complex MongoDB sharded cluster requires more than 20 cores, which is typically the default core count per
region for a subscription. Open an Azure support request to increase your core count.
To create this environment, you need the latest Azure CLI 2.0 installed and logged in to an Azure account using az
login. First, create a resource group with az group create. The following example creates a resource group named
myResourceGroup in the eastus location:
az group create --name myResourceGroup --location eastus
Next, deploy the MongoDB template with az group deployment create. Define your own resource names and sizes
where needed such as for mongoAdminUsername, sizeOfDataDiskInGB, and configNodeVmSize:
This deployment can take over an hour to deploy and configure all the VM instances. The --no-wait flag is used at
the end of the preceding command to return control to the command prompt once the template deployment has
been accepted by the Azure platform. You can then view the deployment status with az group deployment show.
The following example views the status for the myMongoDBCluster deployment in the myResourceGroup resource
group:
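az group deployment show \
    --resource-group myResourceGroup \
    --name myMongoDBCluster \
    --query "properties.provisioningState"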
Next steps
In these examples, you connect to the MongoDB instance locally from the VM. If you want to connect to the
MongoDB instance from another VM or network, ensure the appropriate Network Security Group rules are
created.
These examples deploy the core MongoDB environment for development purposes. Apply the required security
configuration options for your environment. For more information, see the MongoDB security docs.
For more information about creating and using templates, see the Azure Resource Manager overview.
The Azure Resource Manager templates use the Custom Script Extension to download and execute scripts on your
VMs. For more information, see Using the Azure Custom Script Extension with Linux Virtual Machines.
Install and configure PostgreSQL on Azure
4/9/2018 • 5 min to read • Edit Online
PostgreSQL is an advanced open-source database similar to Oracle and DB2. It includes enterprise-ready features
such as full ACID compliance, reliable transactional processing, and multi-version concurrency control. It also
supports standards such as ANSI SQL and SQL/MED (including foreign data wrappers for Oracle, MySQL,
MongoDB, and many others). It is highly extensible with support for over 12 procedural languages, GIN and GiST
indexes, spatial data support, and multiple NoSQL-like features for JSON or key-value-based applications.
In this article, you will learn how to install and configure PostgreSQL on an Azure virtual machine running Linux.
NOTE
Azure has two different deployment models for creating and working with resources: Resource Manager and classic. This
article covers using both models, but Microsoft recommends that most new deployments use the Resource Manager model.
Install PostgreSQL
NOTE
You must already have an Azure virtual machine running Linux in order to complete this tutorial. To create and set up a Linux
VM before proceeding, see the Azure Linux VM tutorial.
# sudo su -
2. Some distributions have dependencies that you must install before installing PostgreSQL. Check for your
distro in this list and run the appropriate command:
Red Hat based Linux:
# yum install readline-devel gcc make zlib-devel openssl openssl-devel libxml2-devel pam-devel pam libxslt-devel tcl-devel python-devel -y
Debian based Linux (the Debian package names differ from the RPM names above):
# apt-get install libreadline-dev gcc make zlib1g-dev libssl-dev libxml2-dev libpam0g-dev libxslt1-dev tcl-dev python-dev -y
SUSE Linux:
# zypper install readline-devel gcc make zlib-devel openssl openssl-devel libxml2-devel pam-
devel pam libxslt-devel tcl-devel python-devel -y
3. Download PostgreSQL into the root directory, and then unzip the package:
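For example (the 9.3.5 source tarball from the PostgreSQL download mirror):
# wget https://ftp.postgresql.org/pub/source/v9.3.5/postgresql-9.3.5.tar.bz2
# tar jxvf postgresql-9.3.5.tar.bz2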
The above is an example. You can find a more detailed download address in the Index of /pub/source/.
4. To start the build, run these commands:
# cd postgresql-9.3.5
# ./configure --prefix=/opt/postgresql-9.3.5
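Then compile and install:
# gmake
# gmake install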
5. If you want to build everything that can be built, including the documentation (HTML and man pages) and
additional modules (contrib), run the following command instead:
# gmake install-world
Configure PostgreSQL
1. (Optional) Create a symbolic link to shorten the PostgreSQL reference so it does not include the version number:
# ln -s /opt/postgresql-9.3.5 /opt/pgsql
# mkdir -p /opt/pgsql_data
3. Create a non-root user and modify that user’s profile. Then, switch to this new user (called postgres in our
example):
# useradd postgres
# su - postgres
NOTE
For security reasons, PostgreSQL uses a non-root user to initialize, start, or shut down the database.
4. Edit the bash_profile file by entering the commands below. These lines will be added to the end of the
bash_profile file:
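A sketch of those lines (the port and locale values are examples; the paths match the ones used in this article):
$ cat >> ~/.bash_profile <<EOF
export PGPORT=1999
export PGDATA=/opt/pgsql_data
export LANG=en_US.utf8
export PGHOME=/opt/pgsql
export PATH=\$PATH:\$PGHOME/bin
export MANPATH=\$MANPATH:\$PGHOME/share/man
export DATA_DIR=\$PGHOME/data
EOF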
$ source .bash_profile
$ which psql
/opt/pgsql/bin/psql
$ psql -V
Set up PostgreSQL
Run the following commands:
# cd /root/postgresql-9.3.5/contrib/start-scripts
# cp linux /etc/init.d/postgresql
Modify two variables in the /etc/init.d/postgresql file. The prefix is set to the installation path of PostgreSQL:
/opt/pgsql. PGDATA is set to the data storage path of PostgreSQL: /opt/pgsql_data.
# chmod +x /etc/init.d/postgresql
Start PostgreSQL:
# /etc/init.d/postgresql start
# su - postgres
$ createdb events
CREATE TABLE potluck (name VARCHAR(20), food VARCHAR(30), confirmed CHAR(1), signup_date DATE);
You have now set up a four-column table with the following column names and restrictions:
1. The "name" column is limited by VARCHAR to at most 20 characters.
2. The "food" column indicates the food item that each person will bring. VARCHAR limits this text to at most 30
characters.
3. The "confirmed" column records whether the person has RSVP'd to the potluck. The acceptable values are "Y"
and "N".
4. The "date" column shows when they signed up for the event. Postgres requires that dates be written as yyyy-
mm-dd.
You should see the following if your table has been successfully created:
You can also check the table structure by using the following command:
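\d potluck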
INSERT INTO potluck (name, food, confirmed, signup_date) VALUES('John', 'Casserole', 'Y', '2012-04-11');
You can add a couple more people to the table as well. Here are some options, or you can create your own:
INSERT INTO potluck (name, food, confirmed, signup_date) VALUES('Sandy', 'Key Lime Tarts', 'N', '2012-04-14');
INSERT INTO potluck (name, food, confirmed, signup_date) VALUES ('Tom', 'BBQ','Y', '2012-04-18');
INSERT INTO potluck (name, food, confirmed, signup_date) VALUES('Tina', 'Salad', 'Y', '2012-04-18');
Show tables
Use the following command to show a table:
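SELECT * FROM potluck;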
A DELETE statement removes rows from a table. For example, the following deletes all the information in the
"John" row:
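DELETE FROM potluck WHERE name = 'John';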
Create and manage a cloud-based high-performance computing (HPC) cluster by taking advantage of Microsoft
HPC Pack and Azure compute and infrastructure services. HPC Pack, available for free download, is built on
Microsoft Azure and Windows Server technologies and supports a wide range of HPC workloads.
For more HPC options in Azure, see Technical resources for batch and high-performance computing.
This article focuses on options to use HPC Pack to run Linux workloads. There are also options for running
Windows HPC workloads with HPC Pack.
This article shows you one way to run a Linux high-performance computing (HPC) workload on Azure virtual
machines. Here, you set up a Microsoft HPC Pack cluster on Azure with Linux compute nodes and run a NAMD
simulation to calculate and visualize the structure of a large biomolecular system.
NOTE
Azure has two different deployment models for creating and working with resources: Resource Manager and classic. This
article covers using both models, but Microsoft recommends that most new deployments use the Resource Manager model.
NAMD (for Nanoscale Molecular Dynamics program) is a parallel molecular dynamics package designed for
high-performance simulation of large biomolecular systems containing up to millions of atoms. Examples of
these systems include viruses, cell structures, and large proteins. NAMD scales to hundreds of cores for typical
simulations and to more than 500,000 cores for the largest simulations.
Microsoft HPC Pack provides features to run large-scale HPC and parallel applications in clusters of on-
premises computers or Azure virtual machines. Originally developed as a solution for Windows HPC
workloads, HPC Pack now supports running Linux HPC applications on Linux compute node VMs deployed in
an HPC Pack cluster. See Get started with Linux compute nodes in an HPC Pack cluster in Azure for an
introduction.
Prerequisites
HPC Pack cluster with Linux compute nodes - Deploy an HPC Pack cluster with Linux compute nodes on
Azure using either an Azure Resource Manager template or an Azure PowerShell script. See Get started with
Linux compute nodes in an HPC Pack cluster in Azure for the prerequisites and steps for either option. If you
choose the PowerShell script deployment option, see the sample configuration file in the sample files at the end
of this article. This file configures an Azure-based HPC Pack cluster consisting of a Windows Server 2012 R2
head node and four size Large CentOS 6.6 compute nodes. Customize this file as needed for your environment.
NAMD software and tutorial files - Download NAMD software for Linux from the NAMD site (registration
required). This article is based on NAMD version 2.10, and uses the Linux-x86_64 (64-bit Intel/AMD with
Ethernet) archive. Also download the NAMD tutorial files. The downloads are .tar files, and you need a
Windows tool to extract the files on the cluster head node. To extract the files, follow the instructions later in this
article.
VMD (optional) - To see the results of your NAMD job, download and install the molecular visualization
program VMD on a computer of your choice. The current version is 1.9.2. See the VMD download site to get
started.
ssh-keygen -t rsa
NOTE
Press Enter to use the default settings until the command is completed. Do not enter a passphrase here; when
prompted for a password, just press Enter.
3. Change directory to the ~/.ssh directory. The private key is stored in id_rsa and the public key in id_rsa.pub.
<ExtendedData>
<PrivateKey>Copy the contents of private key here</PrivateKey>
<PublicKey>Copy the contents of public key here</PublicKey>
</ExtendedData>
5. Open a Command Prompt and enter the following command to set the credentials data for the
hpclab\hpcuser account. You use the extendeddata parameter to pass the name of the C:\cred.xml file you
created for the key data.
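A sketch of that command (substitute your own password):
hpccred setcreds /extendeddata:C:\cred.xml /user:hpclab\hpcuser /password:<password>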
This command completes successfully without output. After setting the credentials for the user accounts you
need to run jobs, store the cred.xml file in a secure location, or delete it.
6. If you generated the RSA key pair on one of your Linux nodes, remember to delete the keys after you finish
using them. HPC Pack does not set up mutual trust if it finds an existing id_rsa file or id_rsa.pub file.
IMPORTANT
We don’t recommend running a Linux job as a cluster administrator on a shared cluster, because a job submitted by an
administrator runs under the root account on the Linux nodes. A job submitted by a non-administrator user runs under a
local Linux user account with the same name as the job user. In this case, HPC Pack sets up mutual trust for this Linux user
across all the nodes allocated to the job. You can set up the Linux user manually on the Linux nodes before running the job,
or HPC Pack creates the user automatically when the job is submitted. If HPC Pack creates the user, HPC Pack deletes it after
the job completes. To reduce security threats, the keys are removed after the job completes on the nodes.
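A sketch of the two commands described below (the share path and credentials are placeholders):
clusrun /nodegroup:LinuxNodes mkdir -p /namd2
clusrun /nodegroup:LinuxNodes mount -t cifs //CentOS66HN/Namd/namd2 /namd2 -o vers=2.1`,username=<username>`,password='<password>'`,dir_mode=0777`,file_mode=0777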
The first command creates a folder named /namd2 on all nodes in the LinuxNodes group. The second command
mounts the shared folder //CentOS66HN/Namd/namd2 onto the folder with dir_mode and file_mode bits set to
777. The username and password in the command should be the credentials of a user on the head node.
NOTE
The “`” symbol in the second command is an escape symbol for PowerShell. “`,” means the “,” (comma character) is a part of
the command.
Create a Bash script to run a NAMD job
Your NAMD job needs a nodelist file for charmrun to determine the number of nodes to use when starting
NAMD processes. You use a Bash script that generates the nodelist file and runs charmrun with this nodelist file.
You can then submit a NAMD job in HPC Cluster Manager that calls this script.
Using a text editor of your choice, create a Bash script in the /namd2 folder containing the NAMD program files
and name it hpccharmrun.sh. For a quick proof of concept, copy the example hpccharmrun.sh script provided at the
end of this article and go to Submit a NAMD job.
TIP
Save your script as a text file with Linux line endings (LF only, not CR LF). This ensures that it runs properly on the Linux
nodes.
#!/bin/bash
2. Get node information from the environment variables. $NODESCORES stores a list of split words from
$CCP_NODES_CORES. $COUNT is the size of $NODESCORES.
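# Assumed from the description above: split $CCP_NODES_CORES into an array.
NODESCORES=(${CCP_NODES_CORES})
COUNT=${#NODESCORES[@]}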
The $CCP_NODES_CORES variable lists the total number of nodes, the node names, and the number of cores on
each node that are allocated to the job, in the following format:
<Number of nodes> <Name of node1> <Cores of node1> <Name of node2> <Cores of node2>…
For example, if the job needs 10 cores to run, the value of $CCP_NODES_CORES is
similar to:
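3 CENTOS66LN-00 4 CENTOS66LN-01 4 CENTOS66LN-03 2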
3. If the $CCP_NODES_CORES variable is not set, start charmrun directly. (This should only occur when you
run this script directly on your Linux nodes.)
if [ ${COUNT} -eq 0 ]
then
# CCP_NODES_CORES is not found or is empty, so just run charmrun without the nodelist arg.
#echo ${CHARMRUN} $*
${CHARMRUN} $*
else
# Create the nodelist file
NODELIST_PATH=${SCRIPT_PATH}/nodelist_$$
# Get every node name and number of cores and write into the nodelist file
I=1
while [ ${I} -lt ${COUNT} ]
do
echo "host ${NODESCORES[${I}]} ++cpus ${NODESCORES[$(($I+1))]}" >> ${NODELIST_PATH}
let "I=${I}+2"
done
5. Run charmrun with the nodelist file, get its return status, and remove the nodelist file at the end.
${CCP_NUMCPUS} is another environment variable set by the HPC Pack head node. It stores the number
of total cores allocated to this job. We use it to specify the number of processes for charmrun.
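# Assumed charmrun invocation based on the description above.
${CHARMRUN} +p${CCP_NUMCPUS} ++nodelist ${NODELIST_PATH} $*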
RTNSTS=$?
rm -f ${NODELIST_PATH}
fi
exit ${RTNSTS}
Following is the information in the nodelist file, which the script generates:
group main
host <Name of node1> ++cpus <Cores of node1>
host <Name of node2> ++cpus <Cores of node2>
…
For example:
group main
host CENTOS66LN-00 ++cpus 4
host CENTOS66LN-01 ++cpus 4
host CENTOS66LN-03 ++cpus 2
5. On the Job Details page, under Job Resources, select the type of resource as Node and set the Minimum
to 3. In this example, we run the job on three Linux nodes and each node has four cores.
6. Click Edit Tasks in the left navigation, and then click Add to add a task to the job.
7. On the Task Details and I/O Redirection page, set the following values:
Command line -
/namd2/hpccharmrun.sh ++remote-shell ssh /namd2/namd2 /namd2/namdsample/1-2-sphere/ubq_ws_eq.conf
> /namd2/namd2_hpccharmrun.log
TIP
The preceding command line is a single command without line breaks. It wraps to appear on several lines
under Command line.
hpccred delcreds
Sample files
Sample XML configuration file for cluster deployment by PowerShell script
<?xml version="1.0" encoding="utf-8" ?>
<IaaSClusterConfig>
<Subscription>
<SubscriptionName>Subscription-1</SubscriptionName>
<StorageAccount>mystorageaccount</StorageAccount>
</Subscription>
<Location>West US</Location>
<VNet>
<VNetName>MyVNet</VNetName>
<SubnetName>Subnet-1</SubnetName>
</VNet>
<Domain>
<DCOption>HeadNodeAsDC</DCOption>
<DomainFQDN>hpclab.local</DomainFQDN>
</Domain>
<Database>
<DBOption>LocalDB</DBOption>
</Database>
<HeadNode>
<VMName>CentOS66HN</VMName>
<ServiceName>MyHPCService</ServiceName>
<VMSize>Large</VMSize>
<EnableRESTAPI />
<EnableWebPortal />
</HeadNode>
<LinuxComputeNodes>
<VMNamePattern>CentOS66LN-%00%</VMNamePattern>
<ServiceName>MyLnxCNService</ServiceName>
<VMSize>Large</VMSize>
<NodeCount>4</NodeCount>
<ImageName>5112500ae3b842c8b9c604889f8753c3__OpenLogic-CentOS-66-20150325</ImageName>
</LinuxComputeNodes>
</IaaSClusterConfig>
if [ ${COUNT} -eq 0 ]
then
# If CCP_NODES_CORES is not found or is empty, just run the charmrun without nodelist arg.
#echo ${CHARMRUN} $*
${CHARMRUN} $*
else
# Create the nodelist file
NODELIST_PATH=${SCRIPT_PATH}/nodelist_$$
# Get every node name & cores and write into the nodelist file
I=1
while [ ${I} -lt ${COUNT} ]
do
echo "host ${NODESCORES[${I}]} ++cpus ${NODESCORES[$(($I+1))]}" >> ${NODELIST_PATH}
let "I=${I}+2"
done
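# Run charmrun with the nodelist file (assumed invocation; see step 5 above).
${CHARMRUN} +p${CCP_NUMCPUS} ++nodelist ${NODELIST_PATH} $*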
RTNSTS=$?
rm -f ${NODELIST_PATH}
fi
exit ${RTNSTS}
Install NVIDIA GPU drivers on N-series VMs running
Linux
4/30/2018 • 7 min to read • Edit Online
To take advantage of the GPU capabilities of Azure N-series VMs running Linux, NVIDIA graphics drivers must be
installed. This article provides driver setup steps after you deploy an N-series VM. Driver setup information is also
available for Windows VMs.
For N-series VM specs, storage capacities, and disk details, see GPU Linux VM sizes.
TIP
As an alternative to manual CUDA driver installation on a Linux VM, you can deploy an Azure Data Science Virtual Machine
image. The DSVM editions for Ubuntu 16.04 LTS or CentOS 7.4 pre-install NVIDIA CUDA drivers, the CUDA Deep Neural
Network Library, and other tools.
WARNING
Installation of third-party software on Red Hat products can affect the Red Hat support terms. See the Red Hat
Knowledgebase article.
Install CUDA drivers for NC, NCv2, NCv3, and ND-series VMs
Here are steps to install CUDA drivers from the NVIDIA CUDA Toolkit on N-series VMs.
C and C++ developers can optionally install the full Toolkit to build GPU -accelerated applications. For more
information, see the CUDA Installation Guide.
To install CUDA drivers, make an SSH connection to each VM. To verify that the system has a CUDA-capable GPU,
run the following command:
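lspci | grep -i NVIDIA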
You will see output similar to the following example (showing an NVIDIA Tesla K80 card):
CUDA_REPO_PKG=cuda-repo-ubuntu1604_9.1.85-1_amd64.deb
wget -O /tmp/${CUDA_REPO_PKG}
http://developer.download.nvidia.com/compute/cuda/repos/ubuntu1604/x86_64/${CUDA_REPO_PKG}
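# Assumed step: install the CUDA repository package downloaded above.
sudo dpkg -i /tmp/${CUDA_REPO_PKG}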
rm -f /tmp/${CUDA_REPO_PKG}
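# Assumed steps: trust NVIDIA's repository signing key, then install the drivers.
sudo apt-key adv --fetch-keys http://developer.download.nvidia.com/compute/cuda/repos/ubuntu1604/x86_64/7fa2af80.pub
sudo apt-get update
sudo apt-get install -y cuda-drivers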
sudo reboot
2. Install the latest Linux Integration Services for Hyper-V and Azure.
wget https://aka.ms/lis
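# Assumed step: extract the LIS archive downloaded above.
tar xvzf lis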
cd LISISO
sudo ./install.sh
sudo reboot
CUDA_REPO_PKG=cuda-repo-rhel7-9.1.85-1.x86_64.rpm
wget http://developer.download.nvidia.com/compute/cuda/repos/rhel7/x86_64/${CUDA_REPO_PKG} -O
/tmp/${CUDA_REPO_PKG}
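# Assumed step: install the CUDA repository package downloaded above.
sudo rpm -ivh /tmp/${CUDA_REPO_PKG}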
rm -f /tmp/${CUDA_REPO_PKG}
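# Assumed step: install the CUDA drivers from the repository.
sudo yum install -y cuda-drivers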
2. In /etc/waagent.conf, enable RDMA by uncommenting the following configuration lines. You need
root access to edit this file.
OS.EnableRDMA=y
OS.UpdateRdmaDriver=y
3. Add or change the following memory settings in KB in the /etc/security/limits.conf file. You need root
access to edit this file. For testing purposes you can set memlock to unlimited. For example:
<User or group name> hard memlock unlimited .
<User or group name> hard memlock <memory required for your application in KB>
<User or group name> soft memlock <memory required for your application in KB>
4. Install Intel MPI Library. Either purchase and download the library from Intel or download the free
evaluation version.
wget http://registrationcenter-download.intel.com/akdlm/irc_nas/tec/9278/l_mpi_p_5.1.3.223.tgz
CentOS-based 7.4 HPC - RDMA drivers and Intel MPI 5.1 are installed on the VM.
3. Disable the Nouveau kernel driver, which is incompatible with the NVIDIA driver. (Only use the NVIDIA
driver on NV VMs.) To do this, create a file in /etc/modprobe.d named nouveau.conf with the following
contents:
blacklist nouveau
blacklist lbm-nouveau
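4. Download the GRID driver (the URL below is a placeholder; use the current GRID driver link for Azure NV
VMs):
wget -O NVIDIA-Linux-x86_64-grid.run <GRID-driver-download-URL>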
chmod +x NVIDIA-Linux-x86_64-grid.run
sudo ./NVIDIA-Linux-x86_64-grid.run
6. When you're asked whether you want to run the nvidia-xconfig utility to update your X configuration file,
select Yes.
7. After installation completes, copy /etc/nvidia/gridd.conf.template to a new file gridd.conf at location
/etc/nvidia/
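A sketch of that step (IgnoreSP=TRUE is an assumed GRID setting for Azure NV VMs):
sudo cp /etc/nvidia/gridd.conf.template /etc/nvidia/gridd.conf
Then add the following to /etc/nvidia/gridd.conf:
IgnoreSP=TRUE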
2. Disable the Nouveau kernel driver, which is incompatible with the NVIDIA driver. (Only use the NVIDIA
driver on NV VMs.) To do this, create a file in /etc/modprobe.d named nouveau.conf with the following
contents:
blacklist nouveau
blacklist lbm-nouveau
3. Reboot the VM, reconnect, and install the latest Linux Integration Services for Hyper-V and Azure.
wget https://aka.ms/lis
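# Assumed step: extract the LIS archive downloaded above.
tar xvzf lis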
cd LISISO
sudo ./install.sh
sudo reboot
4. Reconnect to the VM and run the lspci command. Verify that the NVIDIA M60 card or cards are visible as
PCI devices.
5. Download and install the GRID driver:
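The download URL below is a placeholder; use the current GRID driver link for Azure NV VMs:
wget -O NVIDIA-Linux-x86_64-grid.run <GRID-driver-download-URL>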
chmod +x NVIDIA-Linux-x86_64-grid.run
sudo ./NVIDIA-Linux-x86_64-grid.run
6. When you're asked whether you want to run the nvidia-xconfig utility to update your X configuration file,
select Yes.
7. After installation completes, copy /etc/nvidia/gridd.conf.template to a new file gridd.conf at location
/etc/nvidia/
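A sketch of that step:
sudo cp /etc/nvidia/gridd.conf.template /etc/nvidia/gridd.conf
Then add IgnoreSP=TRUE to /etc/nvidia/gridd.conf (an assumed GRID setting for Azure NV VMs).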
X11 server
If you need an X11 server for remote connections to an NV VM, x11vnc is recommended because it allows
hardware acceleration of graphics. The BusID of the M60 device must be manually added to the X configuration
file (/etc/X11/xorg.conf on Ubuntu 16.04 LTS, /etc/X11/XF86config on CentOS 7.3 or Red Hat Enterprise Server 7.3).
Add a "Device" section similar to the following:
Section "Device"
Identifier "Device0"
Driver "nvidia"
VendorName "NVIDIA Corporation"
BoardName "Tesla M60"
BusID "your-BusID:0:0:0"
EndSection
The BusID can change when a VM gets reallocated or rebooted. Therefore, you may want to create a script to
update the BusID in the X11 configuration when a VM is rebooted. For example, create a script named
busidupdate.sh (or another name you choose) with the following contents:
#!/bin/bash
BUSID=$((16#`/usr/bin/nvidia-smi --query-gpu=pci.bus_id --format=csv | tail -1 | cut -d ':' -f 1`))
if grep -Fxq "${BUSID}" /etc/X11/XF86Config; then echo "BUSID is matching"; else echo "BUSID changed to
${BUSID}" && sed -i '/BusID/c\ BusID \"PCI:0@'${BUSID}':0:0:0\"' /etc/X11/XF86Config; fi
Then, create an entry for your update script in /etc/rc.d/rc3.d so the script is invoked as root on boot.
Troubleshooting
You can set persistence mode using nvidia-smi so the output of the command is faster when you need to
query cards. To set persistence mode, execute nvidia-smi -pm 1 . Note that if the VM is restarted, the mode
setting goes away. You can always script the mode setting to execute upon startup.
Next steps
To capture a Linux VM image with your installed NVIDIA drivers, see How to generalize and capture a Linux
virtual machine.
Frequently asked questions about Azure IaaS VM
disks and managed and unmanaged premium disks
4/9/2018 • 10 min to read • Edit Online
This article answers some frequently asked questions about Azure Managed Disks and Azure Premium Storage.
Managed Disks
What is Azure Managed Disks?
Managed Disks is a feature that simplifies disk management for Azure IaaS VMs by handling storage account
management for you. For more information, see the Managed Disks overview.
If I create a standard managed disk from an existing VHD that's 80 GB, how much will that cost me?
A standard managed disk created from an 80-GB VHD is treated as the next available standard disk size, which is
an S10 disk. You're charged according to the S10 disk pricing. For more information, see the pricing page.
Are there any transaction costs for standard managed disks?
Yes. You're charged for each transaction. For more information, see the pricing page.
For a standard managed disk, will I be charged for the actual size of the data on the disk or for the
provisioned capacity of the disk?
You're charged based on the provisioned capacity of the disk. For more information, see the pricing page.
How is pricing of premium managed disks different from unmanaged disks?
The pricing of premium managed disks is the same as unmanaged premium disks.
Can I change the storage account type (Standard or Premium ) of my managed disks?
Yes. You can change the storage account type of your managed disks by using the Azure portal, PowerShell, or the
Azure CLI.
Is there a way that I can copy or export a managed disk to a private storage account?
Yes. You can export your managed disks by using the Azure portal, PowerShell, or the Azure CLI.
Can I use a VHD file in an Azure storage account to create a managed disk with a different subscription?
No.
Can I use a VHD file in an Azure storage account to create a managed disk in a different region?
No.
Are there any scale limitations for customers that use managed disks?
Managed Disks eliminates the limits associated with storage accounts. However, the maximum limit, and also the
default limit, is 10,000 managed disks per region and per disk type for a subscription.
Can I take an incremental snapshot of a managed disk?
No. The current snapshot capability makes a full copy of a managed disk. However, we are planning to support
incremental snapshots in the future.
Can VMs in an availability set consist of a combination of managed and unmanaged disks?
No. The VMs in an availability set must use either all managed disks or all unmanaged disks. When you create an
availability set, you can choose which type of disks you want to use.
Is Managed Disks the default option in the Azure portal?
Yes.
Can I create an empty managed disk?
Yes. You can create an empty disk. A managed disk can be created independently of a VM, for example, without
attaching it to a VM.
What is the supported fault domain count for an availability set that uses Managed Disks?
Depending on the region where the availability set that uses Managed Disks is located, the supported fault domain
count is 2 or 3.
How is the standard storage account for diagnostics set up?
You set up a private storage account for VM diagnostics. In the future, we plan to switch diagnostics to Managed
Disks as well.
What kind of Role-Based Access Control support is available for Managed Disks?
Managed Disks supports three key default roles:
Owner: Can manage everything, including access
Contributor: Can manage everything except access
Reader: Can view everything, but can't make changes
Is there a way that I can copy or export a managed disk to a private storage account?
You can get a read-only shared access signature URI for the managed disk and use it to copy the contents to a
private storage account or on-premises storage.
Can I create a copy of my managed disk?
Customers can take a snapshot of their managed disks and then use the snapshot to create another managed disk.
Are unmanaged disks still supported?
Yes. We support unmanaged and managed disks. We recommend that you use managed disks for new workloads
and migrate your current workloads to managed disks.
If I create a 128-GB disk and then increase the size to 130 GB, will I be charged for the next disk size (512
GB)?
Yes.
Can I create locally redundant storage, geo-redundant storage, and zone-redundant storage managed
disks?
Azure Managed Disks currently supports only locally redundant storage managed disks.
Can I shrink or downsize my managed disks?
No. This feature is not supported currently.
Can I break a lease on my disk?
No. This is not supported currently as a lease is present to prevent accidental deletion when the disk is being used.
Can I change the computer name property when a specialized (not created by using the System
Preparation tool or generalized) operating system disk is used to provision a VM?
No. You can't update the computer name property. The new VM inherits it from the parent VM, which was used to
create the operating system disk.
Where can I find sample Azure Resource Manager templates to create VMs with managed disks?
List of templates using Managed Disks
https://github.com/chagarw/MDPP
The support for Azure CLI v2 and Azure Storage Explorer is coming soon.
Are P4 and P6 disk sizes supported for unmanaged disks or page blobs?
No. P4 (32 GB) and P6 (64 GB) disk sizes are supported only for managed disks. Support for unmanaged disks
and page blobs is coming soon.
If my existing premium managed disk smaller than 64 GB was created before small disks were enabled
(around June 15, 2017), how is it billed?
Existing small premium disks less than 64 GB continue to be billed according to the P10 pricing tier.
How can I switch the disk tier of small premium disks less than 64 GB from P10 to P4 or P6?
You can take a snapshot of your small disks and then create a disk to automatically switch the pricing tier to P4 or
P6 based on the provisioned size.
What if my question isn't answered here?
If your question isn't listed here, let us know and we'll help you find an answer. You can post a question at the end
of this article in the comments. To engage with the Azure Storage team and other community members about this
article, use the MSDN Azure Storage forum.
To request features, submit your requests and ideas to the Azure Storage feedback forum.
Add a disk to a Linux VM
4/9/2018 • 8 min to read • Edit Online
This article shows you how to attach a persistent disk to your VM so that you can preserve your data - even if
your VM is reprovisioned due to maintenance or resizing.
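One way to create and attach such a disk (a sketch; az vm disk attach with the --new flag creates an empty
managed disk and attaches it in one step):
az vm disk attach \
    --resource-group myResourceGroup \
    --vm-name myVM \
    --disk myDataDisk \
    --new \
    --size-gb 50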
The output looks something like the following example (you can add the -o table option to any command to
format the output as a table):
{
"accountType": "Standard_LRS",
"creationData": {
"createOption": "Empty",
"imageReference": null,
"sourceResourceId": null,
"sourceUri": null,
"storageAccountId": null
},
"diskSizeGb": 50,
"encryptionSettings": null,
"id": "/subscriptions/<guid>/resourceGroups/rasquill-script/providers/Microsoft.Compute/disks/myDataDisk",
"location": "westus",
"name": "myDataDisk",
"osType": null,
"ownerId": null,
"provisioningState": "Succeeded",
"resourceGroup": "myResourceGroup",
"tags": null,
"timeCreated": "2017-02-02T23:35:47.708082+00:00",
"type": "Microsoft.Compute/disks"
}
Once connected to your VM, you're ready to attach a disk. First, find the disk using dmesg (the method you use
to discover your new disk may vary). The following example uses dmesg to filter on SCSI disks:
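dmesg | grep SCSI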
Here, sdc is the disk that we want. Partition the disk with fdisk , make it a primary disk on partition 1, and accept
the other defaults. The following example starts the fdisk process on /dev/sdc:
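sudo fdisk /dev/sdc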
Device contains neither a valid DOS partition table, nor Sun, SGI or OSF disklabel
Building a new DOS disklabel with disk identifier 0x2a59b123.
Changes will remain in memory only, until you decide to write them.
After that, of course, the previous content won't be recoverable.
Now, write a file system to the partition with the mkfs command. Specify your filesystem type and the device
name. The following example creates an ext4 filesystem on the /dev/sdc1 partition that was created in the
preceding steps:
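sudo mkfs -t ext4 /dev/sdc1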
Now, create a directory to mount the file system using mkdir . The following example creates a directory at
/datadrive:
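sudo mkdir /datadrive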
Use mount to then mount the filesystem. The following example mounts the /dev/sdc1 partition to the
/datadrive mount point:
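sudo mount /dev/sdc1 /datadrive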
To ensure that the drive is remounted automatically after a reboot, it must be added to the /etc/fstab file. It is also
highly recommended that the UUID (Universally Unique IDentifier) is used in /etc/fstab to refer to the drive
rather than just the device name (such as /dev/sdc1). If the OS detects a disk error during boot, a disk may not be
presented, and the remaining data disks would then be assigned different device names; using the UUID avoids
mounting the incorrect disk to a given location. To find the UUID of the new drive, use the blkid utility:
sudo -i blkid
NOTE
Improperly editing the /etc/fstab file could result in an unbootable system. If unsure, refer to the distribution's
documentation for information on how to properly edit this file. It is also recommended that a backup of the /etc/fstab file
is created before editing.
In this example, we use the UUID value for the /dev/sdc1 device that was created in the previous steps, and the
mountpoint of /datadrive. Add the following line to the end of the /etc/fstab file:
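A sketch of that line (substitute the UUID reported by blkid; the nofail option is discussed in the note below):
UUID=<UUID of /dev/sdc1>   /datadrive   ext4   defaults,nofail   1   2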
NOTE
Later removing a data disk without editing fstab could cause the VM to fail to boot. Most distributions provide either the
nofail and/or nobootwait fstab options. These options allow a system to boot even if the disk fails to mount at boot time.
Consult your distribution's documentation for more information on these parameters.
The nofail option ensures that the VM starts even if the filesystem is corrupt or the disk does not exist at boot time.
Without this option, you may encounter behavior as described in Cannot SSH to Linux VM due to FSTAB errors.
In some cases, the discard option may have performance implications. Alternatively, you can run the
fstrim command manually from the command line, or add it to your crontab to run regularly:
Ubuntu
RHEL/CentOS
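A sketch of the manual fstrim invocation for each distribution (fstrim ships in the util-linux package on both):
# Ubuntu
sudo apt-get install util-linux
sudo fstrim /datadrive
# RHEL/CentOS
sudo yum install util-linux
sudo fstrim /datadrive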
Troubleshooting
When adding data disks to a Linux VM, you may encounter errors if a disk does not exist at LUN 0. If you are
adding a disk manually using the azure vm disk attach-new command and you specify a LUN ( --lun ) rather
than allowing the Azure platform to determine the appropriate LUN, take care that a disk already exists or will
exist at LUN 0.
Consider the following example showing a snippet of the output from lsscsi :
[5:0:0:0] disk Msft Virtual Disk 1.0 /dev/sdc
[5:0:0:1] disk Msft Virtual Disk 1.0 /dev/sdd
The two data disks exist at LUN 0 and LUN 1 (the first column in the lsscsi output details
[host:channel:target:lun] ). Both disks should be accessible from within the VM. If you had manually specified
the first disk to be added at LUN 1 and the second disk at LUN 2, you may not see the disks correctly from within
your VM.
NOTE
The Azure host value is 5 in these examples, but this may vary depending on the type of storage you select.
This disk behavior is not an Azure problem, but the way in which the Linux kernel follows the SCSI specifications.
When the Linux kernel scans the SCSI bus for attached devices, a device must be found at LUN 0 in order for the
system to continue scanning for additional devices. As such:
Review the output of lsscsi after adding a data disk to verify that you have a disk at LUN 0.
If your disk does not show up correctly within your VM, verify a disk exists at LUN 0.
Next steps
Remember that your new disk is not available to the VM after a reboot unless you write that information to your
fstab file.
To ensure your Linux VM is configured correctly, review the Optimize your Linux machine performance
recommendations.
Expand your storage capacity by adding additional disks and configure RAID for additional performance.
Use the portal to attach a data disk to a Linux VM
4/9/2018 • 1 min to read • Edit Online
This article shows you how to attach both new and existing disks to a Linux virtual machine through the Azure
portal. You can also attach a data disk to a Windows VM in the Azure portal.
Before you attach disks to your VM, review these tips:
The size of the virtual machine controls how many data disks you can attach. For details, see Sizes for virtual
machines.
To use Premium storage, you need a DS-series or GS-series virtual machine. You can use both Premium and
Standard disks with these virtual machines. Premium storage is available in certain regions. For details, see
Premium Storage: High-Performance Storage for Azure Virtual Machine Workloads.
Disks attached to virtual machines are actually .vhd files stored in Azure. For details, see About disks and VHDs
for virtual machines.
3. Enter a name for your managed disk. Review the default settings, update as necessary, and then click Create.
4. Click Save to create the managed disk and update the VM configuration:
5. After Azure creates the disk and attaches it to the virtual machine, the new disk is listed in the virtual
machine's disk settings under Data Disks. As managed disks are a top-level resource, the disk appears at the
root of the resource group:
3. Click Save to attach the existing managed disk and update the VM configuration:
4. After Azure attaches the disk to the virtual machine, it's listed in the virtual machine's disk settings under
Data Disks.
Next steps
You can also attach a data disk using the Azure CLI.
How to detach a data disk from a Linux virtual
machine
4/9/2018 • 1 min to read • Edit Online
When you no longer need a data disk that's attached to a virtual machine, you can easily detach it. This removes
the disk from the virtual machine, but doesn't remove it from storage.
WARNING
If you detach a disk it is not automatically deleted. If you have subscribed to Premium storage, you will continue to incur
storage charges for the disk. For more information refer to Pricing and Billing when using Premium Storage.
If you want to use the existing data on the disk again, you can reattach it to the same virtual machine, or another
one.
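For example, with the Azure CLI (a sketch; substitute your own VM and disk names):
az vm disk detach \
    --resource-group myResourceGroup \
    --vm-name myVM \
    --name myDataDisk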
Next steps
If you want to reuse the data disk, you can just attach it to another VM.
How to expand virtual hard disks on a Linux VM with
the Azure CLI
4/9/2018 • 3 min to read • Edit Online
The default virtual hard disk size for the operating system (OS ) is typically 30 GB on a Linux virtual machine (VM )
in Azure. You can add data disks to provide for additional storage space, but you may also wish to expand an
existing data disk. This article details how to expand managed disks for a Linux VM with the Azure CLI 2.0. You can
also expand the unmanaged OS disk with the Azure CLI 1.0.
WARNING
Always make sure that you back up your data before you perform disk resize operations. For more information, see Back up
Linux VMs in Azure.
NOTE
The VM must be deallocated to expand the virtual hard disk. az vm stop does not release the compute resources.
To release compute resources, use az vm deallocate .
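1. Deallocate the VM with az vm deallocate. For example:
az vm deallocate --resource-group myResourceGroup --name myVM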
2. View a list of managed disks in a resource group with az disk list. The following example displays a list of
managed disks in the resource group named myResourceGroup:
az disk list \
--resource-group myResourceGroup \
--query '[*].{Name:name,Gb:diskSizeGb,Tier:accountType}' \
--output table
Expand the required disk with az disk update. The following example expands the managed disk named
myDataDisk to be 200Gb in size:
az disk update \
--resource-group myResourceGroup \
--name myDataDisk \
--size-gb 200
NOTE
When you expand a managed disk, the updated size is mapped to the nearest managed disk size. For a table of the
available managed disk sizes and tiers, see Azure Managed Disks Overview - Pricing and Billing.
3. Start your VM with az vm start. The following example starts the VM named myVM in the resource group
named myResourceGroup:
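az vm start --resource-group myResourceGroup --name myVM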
2. To use the expanded disk, you need to expand the underlying partition and filesystem.
a. If already mounted, unmount the disk:
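sudo umount /dev/sdc1
b. Use parted to examine the disk (the device name follows the examples in this article):
sudo parted /dev/sdc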
View information about the existing partition layout with print . The output is similar to the following
example, which shows the underlying disk is 215Gb in size:
c. Expand the partition with resizepart . Enter the partition number, 1, and a size for the new partition:
(parted) resizepart
Partition number? 1
End? [107GB]? 215GB
3. With the partition resized, verify the partition consistency with e2fsck :
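sudo e2fsck -f /dev/sdc1
Then resize the filesystem and remount (a sketch, assuming the ext4 filesystem and mount point used in this
article):
sudo resize2fs /dev/sdc1
sudo mount /dev/sdc1 /datadrive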
6. To verify the disk has been resized, use df -h . The following example output shows the data drive,
/dev/sdc1, is now 200 GB:
Next steps
If you need additional storage, you also add data disks to a Linux VM. For more information about disk encryption,
see Encrypt disks on a Linux VM using the Azure CLI.
Create a snapshot
4/9/2018 • 1 min to read • Edit Online
Take a snapshot of an OS or data disk for backup or to troubleshoot VM issues. A snapshot is a full, read-only
copy of a VHD.
The following steps show how to take a snapshot using the az snapshot create command with the --source
parameter. The following example assumes that there is a VM called myVM in the myResourceGroup resource group.
Get the disk ID.
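The following sketch stores the OS disk ID in the variable used by az snapshot create below:
osDiskId=$(az vm show \
-g myResourceGroup \
-n myVM \
--query "storageProfile.osDisk.managedDisk.id" \
-o tsv)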
az snapshot create \
-g myResourceGroup \
--source "$osDiskId" \
--name osDisk-backup
NOTE
If you would like to store your snapshot in zone-resilient storage, you need to create it in a region that supports availability
zones and include the --sku Standard_ZRS parameter.
Next steps
Create a virtual machine from a snapshot by creating a managed disk from the snapshot and then attaching the
new managed disk as the OS disk. For more information, see the Create a VM from a snapshot script.
Back up Azure unmanaged VM disks with incremental
snapshots
8/21/2017 • 7 min to read • Edit Online
Overview
Azure Storage provides the capability to take snapshots of blobs. Snapshots capture the blob state at that point in
time. In this article, we describe a scenario in which you can maintain backups of virtual machine disks using
snapshots. You can use this methodology when you choose not to use Azure Backup and Recovery Service, and
wish to create a custom backup strategy for your virtual machine disks.
Azure virtual machine disks are stored as page blobs in Azure Storage. Since we are describing a backup strategy
for virtual machine disks in this article, we refer to snapshots in the context of page blobs. To learn more about
snapshots, refer to Creating a Snapshot of a Blob.
What is a snapshot?
A blob snapshot is a read-only version of a blob that is captured at a point in time. Once a snapshot has been
created, it can be read, copied, or deleted, but not modified. Snapshots provide a way to back up a blob as it
appears at a moment in time. Until REST version 2015-04-05, you had the ability to copy full snapshots. With the
REST version 2015-07-08 and above, you can also copy incremental snapshots.
NOTE
If you copy the base blob to another destination, the snapshots of the blob are not copied along with it. Similarly, if you
overwrite a base blob with a copy, snapshots associated with the base blob are not affected and stay intact under the base
blob name.
Scenario
In this section, we describe a scenario that involves a custom backup strategy for virtual machine disks using
snapshots.
Consider a DS-series Azure VM with a premium storage P30 disk attached. The P30 disk called mypremiumdisk is
stored in a premium storage account called mypremiumaccount. A standard storage account called
mybackupstdaccount is used for storing the backup of mypremiumdisk. We would like to keep a snapshot of
mypremiumdisk every 12 hours.
To learn about creating storage accounts and disks, refer to About Azure storage accounts.
To learn about backing up Azure VMs, refer to Plan Azure VM backups.
Next Steps
Use the following links to learn more about creating snapshots of a blob and planning your VM backup
infrastructure.
Creating a Snapshot of a Blob
Plan your VM Backup Infrastructure
Convert a Linux virtual machine from unmanaged
disks to managed disks
4/9/2018 • 3 min to read • Edit Online
If you have existing Linux virtual machines (VMs) that use unmanaged disks, you can convert the VMs to use
Azure Managed Disks. This process converts both the OS disk and any attached data disks.
This article shows you how to convert VMs by using the Azure CLI. If you need to install or upgrade it, see Install
Azure CLI 2.0.
2. Convert the VM to managed disks by using az vm convert. The following process converts the VM named
myVM , including the OS disk and any data disks:
az vm convert --resource-group myResourceGroup --name myVM
3. Start the VM after the conversion to managed disks by using az vm start. The following example starts the
VM named myVM in the resource group named myResourceGroup .
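az vm start --resource-group myResourceGroup --name myVM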
az vm availability-set show \
--resource-group myResourceGroup \
--name myAvailabilitySet \
--query [virtualMachines[*].id] \
--output table
2. Deallocate all the VMs by using az vm deallocate. The following example deallocates the VM named myVM
in the resource group named myResourceGroup :
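az vm deallocate --resource-group myResourceGroup --name myVM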
3. Convert the availability set by using az vm availability-set convert. The following example converts the
availability set named myAvailabilitySet in the resource group named myResourceGroup :
az vm availability-set convert \
--resource-group myResourceGroup \
--name myAvailabilitySet
4. Convert all the VMs to managed disks by using az vm convert. The following process converts the VM
named myVM , including the OS disk and any data disks:
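az vm convert --resource-group myResourceGroup --name myVM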
5. Start all the VMs after the conversion to managed disks by using az vm start. The following example starts
the VM named myVM in the resource group named myResourceGroup :
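az vm start --resource-group myResourceGroup --name myVM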
Next steps
For more information about storage options, see Azure Managed Disks overview.
Convert Azure managed disks storage from standard
to premium, and vice versa
11/1/2017 • 2 min to read • Edit Online
Managed Disks offers two storage options: Premium (SSD-based) and Standard (HDD-based). It allows you to
easily switch between the two options with minimal downtime, based on your performance needs. This capability is
not available for unmanaged disks, but you can easily convert to managed disks to switch between the two
options.
This article shows you how to convert managed disks from standard to premium, and vice versa by using Azure
CLI. If you need to install or upgrade it, see Install Azure CLI 2.0.
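For example, to move a managed disk from Standard to Premium storage, deallocate the VM, update the disk SKU, and restart the VM. A minimal sketch, assuming a managed disk named myDisk attached to a VM named myVM (the VM size must support Premium storage):
az vm deallocate --resource-group myResourceGroup --name myVM
az disk update --resource-group myResourceGroup --name myDisk --sku Premium_LRS
az vm start --resource-group myResourceGroup --name myVM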
Next steps
Take a read-only copy of a VM by using snapshots.
Move files to and from a Linux VM using SCP
4/9/2018 • 2 min to read • Edit Online
This article shows how to move files from your workstation up to an Azure Linux VM, or from an Azure Linux VM
down to your workstation, using Secure Copy (SCP). Moving files between your workstation and a Linux VM,
quickly and securely, is critical for managing your Azure infrastructure.
For this article, you need a Linux VM deployed in Azure using SSH public and private key files. You also need an
SCP client for your local computer. It is built on top of SSH and included in the default Bash shell of most Linux
and Mac computers and some Windows shells.
Quick commands
Copy a file up to the Linux VM
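A minimal sketch, assuming a VM user azureuser and a VM FQDN of mypublicdns.eastus.cloudapp.azure.com:
scp ~/.azure/config azureuser@mypublicdns.eastus.cloudapp.azure.com:~/.azure/config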
Detailed walkthrough
As examples, we move an Azure configuration file up to a Linux VM and pull down a log file directory, both using
SCP and SSH keys.
The -r CLI flag instructs SCP to recursively copy the files and directories from the point of the directory listed in
the command. Also notice that the command-line syntax is similar to a cp copy command, as shown in the example below.
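For example, to pull down a log directory recursively, a sketch assuming the same VM as above:
scp -r azureuser@mypublicdns.eastus.cloudapp.azure.com:/home/azureuser/logs ~/logs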
Next steps
Manage users, SSH, and check or repair disks on Azure Linux VMs using the VMAccess Extension
Migrate to Premium Storage by using Azure Site
Recovery
5/2/2018 • 11 min to read • Edit Online
Azure Premium Storage delivers high-performance, low-latency disk support for virtual machines (VMs) that are
running I/O-intensive workloads. This guide helps you migrate your VM disks from a standard storage account to
a premium storage account by using Azure Site Recovery.
Site Recovery is an Azure service that contributes to your strategy for business continuity and disaster recovery by
orchestrating the replication of on-premises physical servers and VMs to the cloud (Azure) or to a secondary
datacenter. When outages occur in your primary location, you fail over to the secondary location to keep
applications and workloads available. You fail back to your primary location when it returns to normal operation.
Site Recovery provides test failovers to support disaster recovery drills without affecting production environments.
You can run failovers with minimal data loss (depending on replication frequency) for unexpected disasters. In the
scenario of migrating to Premium Storage, you can use the failover in Site Recovery to migrate target disks to a
premium storage account.
We recommend migrating to Premium Storage by using Site Recovery because this option provides minimal
downtime. This option also avoids the manual execution of copying disks and creating new VMs. Site Recovery will
systematically copy your disks and create new VMs during failover.
Site Recovery supports a number of types of failover with minimal or no downtime. To plan your downtime and
estimate data loss, see the types of failover in Site Recovery. If you prepare to connect to Azure VMs after failover,
you should be able to connect to the Azure VM by using RDP after failover.
NOTE
Site Recovery does not support the migration of Storage Spaces disks.
Azure essentials
These are the Azure requirements for this migration scenario:
An Azure subscription.
An Azure premium storage account to store replicated data.
An Azure virtual network to which VMs will connect when they're created at failover. The Azure virtual network
must be in the same region as the one in which Site Recovery runs.
An Azure standard storage account to store replication logs. This can be the same storage account for the VM
disks that are being migrated.
Prerequisites
Understand the relevant migration scenario components in the preceding section.
Plan your downtime by learning about failover in Site Recovery.
3. Under Protection goal, in the first drop-down list, select To Azure. In the second drop-down list, select
Not virtualized / Other, and then select OK.
Step 3: Set up the source environment (configuration server)
1. Download Azure Site Recovery Unified Setup and the vault registration key by going to the Prepare
infrastructure > Prepare source > Add Server panes.
You will need the vault registration key to run the unified setup. The key is valid for five days after you
generate it.
c. In Environment Details, select whether you're going to replicate VMware VMs. For this migration
scenario, choose No.
4. After the installation is complete, do the following in the Microsoft Azure Site Recovery Configuration
Server window:
a. Use the Manage Accounts tab to create the account that Site Recovery can use for automatic discovery.
(In the scenario about protecting physical machines, setting up the account isn't relevant, but you need at
least one account to enable one of the following steps. In this case, you can use any account name and
password.)
b. Use the Vault Registration tab to upload the vault credential file.
NOTE
If you're using a premium storage account for replicated data, you need to set up an additional standard storage account to
store replication logs.
The failed-over VM will have two temporary disks: one from the primary VM and the other created during
the provisioning of the VM in the recovery region. To exclude the temporary disk before replication, install
the mobility service before you enable replication. To learn more about how to exclude the temporary disk,
see Exclude disks from replication.
2. Enable replication as follows:
a. Select Replicate Application > Source. After you've enabled replication for the first time, select
+Replicate in the vault to enable replication for additional machines.
b. In step 1, set up Source as your process server.
c. In step 2, specify the post-failover deployment model, a premium storage account to migrate to, a
standard storage account to save logs, and a virtual network to fail to.
d. In step 3, add protected VMs by IP address. (You might need an internal IP address to find them.)
e. In step 4, configure the properties by selecting the accounts that you set up previously on the process
server.
f. In step 5, choose the replication policy that you created previously in "Step 5: Set up replication settings."
g. Select OK.
NOTE
When an Azure VM is deallocated and started again, there is no guarantee that it will get the same IP address. If the
IP address of the configuration server/process server or the protected Azure VMs changes, the replication in this
scenario might not work correctly.
When you design your Azure Storage environment, we recommend that you use separate storage accounts for
each VM in an availability set. We recommend that you follow the best practice in the storage layer to use multiple
storage accounts for each availability set. Distributing VM disks to multiple storage accounts helps to improve
storage availability and distributes the I/O across the Azure storage infrastructure.
If your VMs are in an availability set, instead of replicating the disks of all VMs into one storage account, we highly
recommend migrating the VMs in multiple rounds, one VM at a time. That way, the VMs in the same availability set do not share a
single storage account. Use the Enable Replication pane to set up a destination storage account for each VM, one
at a time.
You can choose a post-failover deployment model according to your need. If you choose Azure Resource Manager
as your post-failover deployment model, you can fail over a VM (Resource Manager) to a VM (Resource Manager),
or you can fail over a VM (classic) to a VM (Resource Manager).
Step 8: Run a test failover
To check whether your replication is complete, select your Site Recovery instance and then select Settings >
Replicated Items. You will see the status and percentage of your replication process.
After initial replication is complete, run a test failover to validate your replication strategy. For detailed steps of a
test failover, see Run a test failover in Site Recovery.
NOTE
Before you run any failover, make sure that your VMs and replication strategy meet the requirements. For more information
about running a test failover, see Test failover to Azure in Site Recovery.
You can see the status of your test failover in Settings > Jobs > YOUR_FAILOVER_PLAN_NAME. In the pane, you
can see a breakdown of the steps and success/failure results. If the test failover fails at any step, select the step to
check the error message.
Step 9: Run a failover
After the test failover is completed, run a failover to migrate your disks to Premium Storage and replicate the VM
instances. Follow the detailed steps in Run a failover.
Be sure to select Shut down VMs and synchronize the latest data. This option specifies that Site Recovery
should try to shut down the protected VMs and synchronize the data so that the latest version of the data will be
failed over. If you don't select this option or the attempt doesn't succeed, the failover will be from the latest
available recovery point for the VM.
Site Recovery will create a VM instance whose type is the same as or similar to a Premium Storage-capable VM.
You can check the performance and price of various VM instances by going to Windows Virtual Machines Pricing
or Linux Virtual Machines Pricing.
Post-migration steps
1. Configure replicated VMs to the availability set if applicable. Site Recovery does not support
migrating VMs along with the availability set. Depending on the deployment of your replicated VM, do one
of the following:
For a VM created through the classic deployment model: Add the VM to the availability set in the Azure
portal. For detailed steps, go to Add an existing virtual machine to an availability set.
For a VM created through the Resource Manager deployment model: Save your configuration of the VM
and then delete and re-create the VMs in the availability set. To do so, use the script at Set Azure
Resource Manager VM Availability Set. Before you run this script, check its limitations and plan your
downtime.
2. Delete old VMs and disks. Make sure that the Premium disks are consistent with source disks and that the
new VMs perform the same function as the source VMs. Delete the VM and delete the disks from your
source storage accounts in the Azure portal. If there's a problem in which the disk is not deleted even though
you deleted the VM, see Troubleshoot storage resource deletion errors.
3. Clean the Azure Site Recovery infrastructure. If Site Recovery is no longer needed, you can clean its
infrastructure. Delete replicated items, the configuration server, and the recovery policy, and then delete the
Azure Site Recovery vault.
Troubleshooting
Monitor and troubleshoot protection for virtual machines and physical servers
Microsoft Azure Site Recovery forum
Next steps
For specific scenarios for migrating virtual machines, see the following resources:
Migrate Azure Virtual Machines between Storage Accounts
Upload a Linux virtual hard disk
Migrating Virtual Machines from Amazon AWS to Microsoft Azure
Also, see the following resources to learn more about Azure Storage and Azure Virtual Machines:
Azure Storage
Azure Virtual Machines
Premium Storage: High-performance storage for Azure virtual machine workloads
Find and delete unattached Azure managed and
unmanaged disks
4/9/2018 • 2 min to read • Edit Online
When you delete a virtual machine (VM) in Azure, by default, any disks that are attached to the VM aren't deleted.
This feature helps to prevent data loss due to the unintentional deletion of VMs. After a VM is deleted, you will
continue to pay for unattached disks. This article shows you how to find and delete any unattached disks and
reduce unnecessary costs.
IMPORTANT
First, run the script by setting the deleteUnattachedDisks variable to 0. This action lets you find and view all the unattached
managed disks.
After you review all the unattached disks, run the script again and set the deleteUnattachedDisks variable to 1. This action
lets you delete all the unattached managed disks.
deleteUnattachedDisks=0
for id in $(az disk list --query '[?managedBy==`null`].[id]' --output tsv); do
    if (( deleteUnattachedDisks == 1 )); then
        az disk delete --ids $id --yes
    else
        echo $id
    fi
done
IMPORTANT
First, run the script by setting the deleteUnattachedVHDs variable to 0. This action lets you find and view all the unattached
unmanaged VHDs.
After you review all the unattached disks, run the script again and set the deleteUnattachedVHDs variable to 1. This action
lets you delete all the unattached unmanaged VHDs.
# Sketch: assumes storageAccountIds and deleteUnattachedVHDs were set earlier in the script
for id in ${storageAccountIds[@]}
do
    connectionString=$(az storage account show-connection-string --ids $id --query connectionString -o tsv)
    containers=$(az storage container list --connection-string $connectionString --query '[].name' -o tsv)
    for container in $containers
    do
        blobs=$(az storage blob list -c $container --connection-string $connectionString --query "[?ends_with(name,'.vhd')].name" -o tsv)
        for blob in $blobs
        do
            leaseStatus=$(az storage blob show -n $blob -c $container --connection-string $connectionString --query properties.lease.status -o tsv)
            if [ "$leaseStatus" == "unlocked" ]
            then
                if (( $deleteUnattachedVHDs == 1 ))
                then
                    az storage blob delete -n $blob -c $container --connection-string $connectionString
                else
                    echo $blob
                fi
            fi
        done
    done
done
Next steps
Delete storage account
Mount Azure File storage on Linux VMs using SMB
4/13/2018 • 4 min to read • Edit Online
This article shows you how to utilize the Azure File storage service on a Linux VM using an SMB mount with the
Azure CLI 2.0. Azure File storage offers file shares in the cloud using the standard SMB protocol. You can also
perform these steps with the Azure CLI 1.0. The requirements are:
an Azure account
SSH public and private key files
Quick Commands
A resource group
An Azure virtual network
A network security group with an SSH inbound rule
A subnet
An Azure storage account
Azure storage account keys
An Azure File storage share
A Linux VM
Replace any examples with your own settings.
Create a directory for the local mount
mkdir -p /mnt/mymountpoint
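Mount the share with the cifs file system type. A minimal sketch, assuming a storage account named mystorageaccount, a share named myshare, and the storage account key in the variable $storageAccountKey (the cifs-utils package must be installed):
sudo mount -t cifs //mystorageaccount.file.core.windows.net/myshare /mnt/mymountpoint -o vers=3.0,username=mystorageaccount,password=$storageAccountKey,dir_mode=0777,file_mode=0777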
Detailed walkthrough
File storage offers file shares in the cloud that use the standard SMB protocol. With the latest release of File
storage, you can also mount a file share from any OS that supports SMB 3.0. When you use an SMB mount on
Linux, you get easy backups to a robust, permanent archiving storage location that is supported by an SLA.
Moving files from a VM to an SMB mount that's hosted on File storage is a great way to debug logs. That's
because the same SMB share can be mounted locally to your Mac, Linux, or Windows workstation. SMB isn't the
best solution for streaming Linux or application logs in real time, because the SMB protocol is not built to handle
such heavy logging duties. A dedicated, unified logging layer tool such as Fluentd would be a better choice than
SMB for collecting Linux and application logging output.
For this detailed walkthrough, we create the prerequisites needed to first create the File storage share, and then
mount it via SMB on a Linux VM.
1. Create a resource group with az group create to hold the file share.
To create a resource group named myResourceGroup in the "West US" location, use the following example:
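az group create --name myResourceGroup --location westus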
2. Create an Azure storage account with az storage account create to store the actual files.
To create a storage account named mystorageaccount by using the Standard_LRS storage SKU, use the
following example:
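az storage account create \
--resource-group myResourceGroup \
--name mystorageaccount \
--location westus \
--sku Standard_LRS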
To extract a single key, use the --query flag. The following example extracts the first key ( [0] ):
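storageAccountKey=$(az storage account keys list \
--resource-group myResourceGroup \
--account-name mystorageaccount \
--query '[0].value' \
--output tsv)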
Next steps
Using cloud-init to customize a Linux VM during creation
Add a disk to a Linux VM
Encrypt disks on a Linux VM by using the Azure CLI
Using Managed Disks in Azure Resource Manager
Templates
8/21/2017 • 4 min to read • Edit Online
This document walks through the differences between managed and unmanaged disks when using Azure Resource
Manager templates to provision virtual machines. It helps you update existing templates that use
unmanaged disks to managed disks. For reference, we are using the 101-vm-simple-windows template as a guide.
You can see the template using both managed disks and a prior version using unmanaged disks if you'd like to
compare them directly.
{
"type": "Microsoft.Storage/storageAccounts",
"name": "[variables('storageAccountName')]",
"apiVersion": "2016-01-01",
"location": "[resourceGroup().location]",
"sku": {
"name": "Standard_LRS"
},
"kind": "Storage",
"properties": {}
}
Within the virtual machine object, we need a dependency on the storage account to ensure that it's created before
the virtual machine. Within the storageProfile section, we then specify the full URI of the VHD location, which
references the storage account and is needed for the OS disk and any data disks.
{
"apiVersion": "2015-06-15",
"type": "Microsoft.Compute/virtualMachines",
"name": "[variables('vmName')]",
"location": "[resourceGroup().location]",
"dependsOn": [
"[resourceId('Microsoft.Storage/storageAccounts/', variables('storageAccountName'))]",
"[resourceId('Microsoft.Network/networkInterfaces/', variables('nicName'))]"
],
"properties": {
"hardwareProfile": {...},
"osProfile": {...},
"storageProfile": {
"imageReference": {
"publisher": "MicrosoftWindowsServer",
"offer": "WindowsServer",
"sku": "[parameters('windowsOSVersion')]",
"version": "latest"
},
"osDisk": {
"name": "osdisk",
"vhd": {
"uri": "[concat(reference(resourceId('Microsoft.Storage/storageAccounts/',
variables('storageAccountName'))).primaryEndpoints.blob, 'vhds/osdisk.vhd')]"
},
"caching": "ReadWrite",
"createOption": "FromImage"
},
"dataDisks": [
{
"name": "datadisk1",
"diskSizeGB": 1023,
"lun": 0,
"vhd": {
"uri": "[concat(reference(resourceId('Microsoft.Storage/storageAccounts/',
variables('storageAccountName'))).primaryEndpoints.blob, 'vhds/datadisk1.vhd')]"
},
"createOption": "Empty"
}
]
},
"networkProfile": {...},
"diagnosticsProfile": {...}
}
}
NOTE
It is recommended to use an API version later than 2016-04-30-preview as there were breaking changes between
2016-04-30-preview and 2017-03-30 .
{
"apiVersion": "2017-03-30",
"type": "Microsoft.Compute/virtualMachines",
"name": "[variables('vmName')]",
"location": "[resourceGroup().location]",
"dependsOn": [
"[resourceId('Microsoft.Storage/storageAccounts/', variables('storageAccountName'))]",
"[resourceId('Microsoft.Network/networkInterfaces/', variables('nicName'))]"
],
"properties": {
"hardwareProfile": {...},
"osProfile": {...},
"storageProfile": {
"imageReference": {
"publisher": "MicrosoftWindowsServer",
"offer": "WindowsServer",
"sku": "[parameters('windowsOSVersion')]",
"version": "latest"
},
"osDisk": {
"createOption": "FromImage"
},
"dataDisks": [
{
"diskSizeGB": 1023,
"lun": 0,
"createOption": "Empty"
}
]
},
"networkProfile": {...},
"diagnosticsProfile": {...}
}
}
{
"type": "Microsoft.Compute/disks",
"name": "[concat(variables('vmName'),'-datadisk1')]",
"apiVersion": "2017-03-30",
"location": "[resourceGroup().location]",
"sku": {
"name": "Standard_LRS"
},
"properties": {
"creationData": {
"createOption": "Empty"
},
"diskSizeGB": 1023
}
}
Within the VM object, we can then reference this disk object to be attached. Specifying the resource ID of the
managed disk we created in the managedDisk property allows the attachment of the disk as the VM is created. Note
that the apiVersion for the VM resource is set to 2017-03-30 . Also note that we've created a dependency on the
disk resource to ensure it's successfully created before VM creation.
{
"apiVersion": "2017-03-30",
"type": "Microsoft.Compute/virtualMachines",
"name": "[variables('vmName')]",
"location": "[resourceGroup().location]",
"dependsOn": [
"[resourceId('Microsoft.Storage/storageAccounts/', variables('storageAccountName'))]",
"[resourceId('Microsoft.Network/networkInterfaces/', variables('nicName'))]",
"[resourceId('Microsoft.Compute/disks/', concat(variables('vmName'),'-datadisk1'))]"
],
"properties": {
"hardwareProfile": {...},
"osProfile": {...},
"storageProfile": {
"imageReference": {
"publisher": "MicrosoftWindowsServer",
"offer": "WindowsServer",
"sku": "[parameters('windowsOSVersion')]",
"version": "latest"
},
"osDisk": {
"createOption": "FromImage"
},
"dataDisks": [
{
"lun": 0,
"name": "[concat(variables('vmName'),'-datadisk1')]",
"createOption": "attach",
"managedDisk": {
"id": "[resourceId('Microsoft.Compute/disks/', concat(variables('vmName'),'-
datadisk1'))]"
}
}
]
},
"networkProfile": {...},
"diagnosticsProfile": {...}
}
}
Next steps
For full templates that use managed disks visit the following Azure Quickstart Repo links.
Windows VM with managed disk
Linux VM with managed disk
Full list of managed disk templates
Visit the Azure Managed Disks Overview document to learn more about managed disks.
Review the template reference documentation for virtual machine resources by visiting the
Microsoft.Compute/virtualMachines template reference document.
Review the template reference documentation for disk resources by visiting the Microsoft.Compute/disks
template reference document.
Optimize your Linux VM on Azure
5/10/2018 • 6 min to read • Edit Online
Creating a Linux virtual machine (VM) is easy to do from the command line or from the portal. This tutorial shows
you how to ensure you have set it up to optimize its performance on the Microsoft Azure platform. This topic uses
an Ubuntu Server VM, but you can also create a Linux virtual machine using your own images as templates.
Prerequisites
This topic assumes you already have a working Azure subscription (free trial signup) and have already provisioned
a VM into your Azure subscription. Make sure that you have the latest Azure CLI 2.0 installed and logged in to
your Azure subscription with az login before you create a VM.
Azure OS Disk
Once you create a Linux VM in Azure, it has two disks associated with it. /dev/sda is your OS disk, /dev/sdb is
your temporary disk. Do not use the main OS disk (/dev/sda) for anything except the operating system as it is
optimized for fast VM boot time and does not provide good performance for your workloads. You want to attach
one or more disks to your VM to get persistent and optimized storage for your data.
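A minimal sketch that attaches a new 128 GB managed data disk to an existing VM, assuming a VM named myVM in myResourceGroup:
az vm disk attach \
--resource-group myResourceGroup \
--vm-name myVM \
--name myDataDisk \
--new \
--size-gb 128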
azureuser@myVM:~$ free
total used free shared buffers cached
Mem: 3525156 804168 2720988 408 8428 633192
-/+ buffers/cache: 162548 3362608
Swap: 524284 0 524284
cat /sys/block/sda/queue/scheduler
azureuser@myVM:~$ sudo su -
root@myVM:~# echo "noop" >/sys/block/sda/queue/scheduler
root@myVM:~# sed -i 's/GRUB_CMDLINE_LINUX=""/GRUB_CMDLINE_LINUX_DEFAULT="quiet splash elevator=noop"/g'
/etc/default/grub
root@myVM:~# update-grub
NOTE
Applying this setting for /dev/sda alone is not useful. Set it on all data disks where sequential I/O dominates the I/O pattern.
You should see the following output, indicating that grub.cfg has been rebuilt successfully and that the default
scheduler has been updated to NOOP.
For the Red Hat distribution family, you only need the following command:
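A sketch of one approach, using the grubby utility available on Red Hat-family distributions to set the boot parameter for all installed kernels:
sudo grubby --update-kernel=ALL --args="elevator=noop"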
Next Steps
Remember, as with all optimization discussions, you need to perform tests before and after each change to
measure the impact the change has. Optimization is a step-by-step process that has different results across different
machines in your environment. What works for one configuration may not work for others.
Some useful links to additional resources:
Premium Storage: High-Performance Storage for Azure Virtual Machine Workloads
Azure Linux Agent User Guide
Optimizing MySQL Performance on Azure Linux VMs
Configure Software RAID on Linux
Configure Software RAID on Linux
4/9/2018 • 5 min to read • Edit Online
It's a common scenario to use software RAID on Linux virtual machines in Azure to present multiple attached
data disks as a single RAID device. Typically, this is used to improve performance and provide greater
throughput than a single disk.
Command action
e extended
p primary partition (1-4)
5. Select the starting point of the new partition, or press <enter> to accept the default to place the partition
at the beginning of the free space on the drive:
6. Select the size of the partition, for example type '+10G' to create a 10 gigabyte partition. Or, press
<enter> to create a single partition that spans the entire drive:
7. Next, change the ID and type of the partition from the default ID '83' (Linux) to ID 'fd' (Linux raid auto):
8. Finally, write the partition table to the drive and exit fdisk:
b. SLES 11
NOTE
A reboot may be required after making these changes on SUSE systems. This step is not required on SLES 12.
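With the partitions prepared, you can then create the RAID array and a file system on it. A minimal sketch, assuming three data-disk partitions /dev/sdc1, /dev/sdd1, and /dev/sde1 combined into a RAID 0 array:
sudo mdadm --create /dev/md127 --level 0 --raid-devices 3 /dev/sdc1 /dev/sdd1 /dev/sde1
sudo mkfs -t ext4 /dev/md127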
1. Create the desired mount point for your new file system, for example:
2. When editing /etc/fstab, the UUID should be used to reference the file system rather than the device
name. Use the blkid utility to determine the UUID for the new file system:
sudo /sbin/blkid
...........
/dev/md127: UUID="aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee" TYPE="ext4"
3. Open /etc/fstab in a text editor and add an entry for the new file system, for example:
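UUID=aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee /data ext4 defaults 0 2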
Or on SLES 11:
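/dev/disk/by-uuid/aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee /data ext3 defaults 0 2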
sudo mount -a
If this command results in an error message, please check the syntax in the /etc/fstab file.
Next run the mount command to ensure the file system is mounted:
mount
.................
/dev/md127 on /data type ext4 (rw)
TRIM/UNMAP support
Some Linux kernels support TRIM/UNMAP operations to discard unused blocks on the disk. These operations
are primarily useful in standard storage to inform Azure that deleted pages are no longer valid and can be
discarded. Discarding pages can save cost if you create large files and then delete them.
NOTE
RAID may not issue discard commands if the chunk size for the array is set to less than the default (512KB). This is because
the unmap granularity on the Host is also 512KB. If you modified the array's chunk size via mdadm's --chunk=
parameter, then TRIM/unmap requests may be ignored by the kernel.
There are two ways to enable TRIM support in your Linux VM. As usual, consult your distribution for the
recommended approach:
Use the discard mount option in /etc/fstab , for example:
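UUID=aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee /data ext4 defaults,discard 0 2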
In some cases the discard option may have performance implications. Alternatively, you can run the
fstrim command manually from the command line, or add it to your crontab to run regularly:
Ubuntu
# sudo apt-get install util-linux
# sudo fstrim /data
RHEL/CentOS
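# sudo yum install util-linux
# sudo fstrim /data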
This document will discuss how to configure Logical Volume Manager (LVM) in your Azure virtual machine.
While it is feasible to configure LVM on any disk attached to the virtual machine, by default most cloud images
will not have LVM configured on the OS disk. This is to prevent problems with duplicate volume groups if the OS
disk is ever attached to another VM of the same distribution and type, for example during a recovery scenario. Therefore it
is recommended only to use LVM on the data disks.
SLES 11
sudo zypper install lvm2
On SLES11 you must also edit /etc/sysconfig/lvm and set LVM_ACTIVATED_ON_DISCOVERED to "enable":
LVM_ACTIVATED_ON_DISCOVERED="enable"
Configure LVM
In this guide we will assume you have attached three data disks, which we'll refer to as /dev/sdc , /dev/sdd and
/dev/sde . Note that these may not always be the same path names in your VM. You can run ' sudo fdisk -l ' or
similar command to list your available disks.
1. Prepare the physical volumes:
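sudo pvcreate /dev/sdc /dev/sdd /dev/sde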
2. Create a volume group. In this example we are calling the volume group data-vg01 :
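sudo vgcreate data-vg01 /dev/sdc /dev/sdd /dev/sde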
3. Create the logical volume(s). The following command creates a single logical volume named data-lv01
that spans the entire volume group. Note that it is also feasible to create multiple logical volumes in the
volume group.
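sudo lvcreate --name data-lv01 -l 100%FREE data-vg01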
NOTE
With SLES11 use -t ext3 instead of ext4. SLES11 only supports read-only access to ext4 filesystems.
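For example, to create an ext4 file system on the logical volume:
sudo mkfs -t ext4 /dev/data-vg01/data-lv01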
1. Create the desired mount point for your new file system, for example:
sudo mkdir /data
lvdisplay
--- Logical volume ---
LV Path /dev/data-vg01/data-lv01
....
3. Open /etc/fstab in a text editor and add an entry for the new file system, for example:
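/dev/data-vg01/data-lv01 /data ext4 defaults 0 2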
sudo mount -a
If this command results in an error message please check the syntax in the /etc/fstab file.
Next run the mount command to ensure the file system is mounted:
mount
......
/dev/mapper/data--vg01-data--lv01 on /data type ext4 (rw)
Many distributions include either the nobootwait or nofail mount parameters that may be added to the
/etc/fstab file. These parameters allow for failures when mounting a particular file system and allow the
Linux system to continue to boot even if it is unable to properly mount the RAID file system. Please refer
to your distribution's documentation for more information on these parameters.
Example (Ubuntu):
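/dev/data-vg01/data-lv01 /data ext4 defaults,nofail 0 2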
TRIM/UNMAP support
Some Linux kernels support TRIM/UNMAP operations to discard unused blocks on the disk. These operations
are primarily useful in standard storage to inform Azure that deleted pages are no longer valid and can be
discarded. Discarding pages can save cost if you create large files and then delete them.
There are two ways to enable TRIM support in your Linux VM. As usual, consult your distribution for the
recommended approach:
Use the discard mount option in /etc/fstab , for example:
In some cases the discard option may have performance implications. Alternatively, you can run the
fstrim command manually from the command line, or add it to your crontab to run regularly:
Ubuntu
RHEL/CentOS
You open a port, or create an endpoint, to a virtual machine (VM) in Azure by creating a network filter on a subnet
or VM network interface. You place these filters, which control both inbound and outbound traffic, on a Network
Security Group attached to the resource that receives the traffic. Let's use a common example of web traffic on
port 80. This article shows you how to open a port to a VM with the Azure CLI 2.0. You can also perform these
steps with the Azure CLI 1.0.
To create a Network Security Group and rules you need the latest Azure CLI 2.0 installed and logged in to an
Azure account using az login.
In the following examples, replace example parameter names with your own values. Example parameter names
include myResourceGroup, myNetworkSecurityGroup, and myVnet.
For more control over the rules, such as defining a source IP address range, continue with the additional steps in
this article.
Add a rule with az network nsg rule create to allow HTTP traffic to your webserver (or adjust for your own
scenario, such as SSH access or database connectivity). The following example creates a rule named
myNetworkSecurityGroupRule to allow TCP traffic on port 80:
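az network nsg rule create \
--resource-group myResourceGroup \
--nsg-name myNetworkSecurityGroup \
--name myNetworkSecurityGroupRule \
--protocol tcp \
--priority 100 \
--destination-port-range 80 \
--access allow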
Alternatively, you can associate your Network Security Group with a virtual network subnet with az network vnet
subnet update rather than just to the network interface on a single VM. The following example associates an
existing subnet named mySubnet in the myVnet virtual network with the Network Security Group named
myNetworkSecurityGroup:
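az network vnet subnet update \
--resource-group myResourceGroup \
--vnet-name myVnet \
--name mySubnet \
--network-security-group myNetworkSecurityGroup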
Next steps
In this example, you created a simple rule to allow HTTP traffic. You can find information on creating more detailed
environments in the following articles:
Azure Resource Manager overview
What is a Network Security Group (NSG)?
Create a VM with a static public IP address using the
Azure CLI
4/16/2018 • 5 min to read • Edit Online
You can create virtual machines (VMs) in Azure and expose them to the public Internet by using a public IP
address. By default, Public IPs are dynamic and the address associated to them may change when the VM is
deleted or stopped/deallocated. To guarantee that the VM always uses the same public IP address, you need to
create a static Public IP.
Before you can implement static Public IPs in VMs, it is necessary to understand when you can use static Public
IPs, and how they are used. Read the IP addressing overview to learn more about IP addressing in Azure.
Azure has two different deployment models for creating and working with resources: Resource Manager and
classic. This article covers using the Resource Manager deployment model, which Microsoft recommends for most
new deployments instead of the classic deployment model.
Scenario
This document will walk through a deployment that uses a static public IP address allocated to a virtual machine
(VM). In this scenario, you have a single VM with its own static public IP address. The VM is part of a subnet
named FrontEnd and also has a static private IP address (192.168.1.101) in that subnet.
You may need a static IP address for web servers that require SSL connections in which the SSL certificate is
linked to an IP address.
You can follow the steps below to deploy the environment described in this scenario.
Create the VM
The variable values in the steps that follow create resources with settings from the scenario. Change the
values, as appropriate, for your environment.
1. Install the Azure CLI 2.0 if you don't already have it installed.
2. Create an SSH public and private key pair for Linux VMs by completing the steps in the Create an SSH public
and private key pair for Linux VMs.
3. From a command shell, login with the command az login .
4. Create the VM by executing the script that follows on a Linux or Mac computer. The Azure public IP address,
virtual network, network interface, and VM resources must all exist in the same location. Though the resources
don't all have to exist in the same resource group, in the following script they do.
RgName="IaaSStory"
Location="westus"
az group create \
--name $RgName \
--location $Location
# Create a public IP address resource with a static IP address using the --allocation-method Static option.
# If you do not specify this option, the address is allocated dynamically. The address is assigned to the
# resource from a pool of IP addresses unique to each Azure region. The DnsName must be unique within the
# Azure location it's created in. Download and view the file from
# https://www.microsoft.com/en-us/download/details.aspx?id=41653# that lists the ranges for each region.
PipName="PIPWEB1"
DnsName="iaasstoryws1"
az network public-ip create \
--name $PipName \
--resource-group $RgName \
--location $Location \
--allocation-method Static \
--dns-name $DnsName
VnetName="TestVNet"
VnetPrefix="192.168.0.0/16"
SubnetName="FrontEnd"
SubnetPrefix="192.168.1.0/24"
az network vnet create \
--name $VnetName \
--resource-group $RgName \
--location $Location \
--address-prefix $VnetPrefix \
--subnet-name $SubnetName \
--subnet-prefix $SubnetPrefix
# Create a network interface connected to the VNet with a static private IP address and associate the
# public IP address resource to the NIC.
NicName="NICWEB1"
PrivateIpAddress="192.168.1.101"
az network nic create \
--name $NicName \
--resource-group $RgName \
--location $Location \
--subnet $SubnetName \
--vnet-name $VnetName \
--private-ip-address $PrivateIpAddress \
--public-ip-address $PipName
VmName="WEB1"
# Replace the value for the VmSize variable with a value from the
# https://docs.microsoft.com/azure/virtual-machines/virtual-machines-linux-sizes article.
VmSize="Standard_DS1"
# Replace the value for the OsImage variable with a value for *urn* from the output returned by entering
# the `az vm image list` command.
OsImage="credativ:Debian:8:latest"
Username='adminuser'
# Replace the following value with the path to your public key file.
SshKeyValue="~/.ssh/id_rsa.pub"
az vm create \
--name $VmName \
--resource-group $RgName \
--image $OsImage \
--location $Location \
--size $VmSize \
--nics $NicName \
--admin-username $Username \
--ssh-key-value $SshKeyValue
# If creating a Windows VM, remove the previous line and you'll be prompted for the password you want to
# configure for the VM.
Next steps
Any network traffic can flow to and from the VM created in this article. You can define inbound and outbound
security rules within a network security group that limit the traffic that can flow to and from the network interface,
the subnet, or both. To learn more about network security groups, see Network security group overview.
How to create a Linux virtual machine in Azure with
multiple network interface cards
4/16/2018 • 5 min to read • Edit Online
You can create a virtual machine (VM) in Azure that has multiple virtual network interfaces (NICs) attached to it. A
common scenario is to have different subnets for front-end and back-end connectivity, or a network dedicated to a
monitoring or backup solution. This article details how to create a VM with multiple NICs attached to it and how to
add or remove NICs from an existing VM. Different VM sizes support a varying number of NICs, so size your VM
accordingly.
This article details how to create a VM with multiple NICs with the Azure CLI 2.0. You can also perform these steps
with the Azure CLI 1.0.
Create the virtual network with az network vnet create. The following example creates a virtual network named
myVnet and subnet named mySubnetFrontEnd:
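A sketch, assuming the 192.168.0.0/16 address space and a 192.168.1.0/24 front-end subnet:
az network vnet create \
--resource-group myResourceGroup \
--name myVnet \
--address-prefix 192.168.0.0/16 \
--subnet-name mySubnetFrontEnd \
--subnet-prefix 192.168.1.0/24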
Create a subnet for the back-end traffic with az network vnet subnet create. The following example creates a subnet
named mySubnetBackEnd:
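A sketch, assuming a 192.168.2.0/24 back-end address prefix:
az network vnet subnet create \
--resource-group myResourceGroup \
--vnet-name myVnet \
--name mySubnetBackEnd \
--address-prefix 192.168.2.0/24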
Create a network security group with az network nsg create. The following example creates a network security
group named myNetworkSecurityGroup:
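az network nsg create --resource-group myResourceGroup --name myNetworkSecurityGroup
The VM created below assumes two NICs, myNic1 and myNic2, attached to the front-end and back-end subnets. A sketch of creating them with az network nic create:
az network nic create \
--resource-group myResourceGroup \
--name myNic1 \
--vnet-name myVnet \
--subnet mySubnetFrontEnd \
--network-security-group myNetworkSecurityGroup
az network nic create \
--resource-group myResourceGroup \
--name myNic2 \
--vnet-name myVnet \
--subnet mySubnetBackEnd \
--network-security-group myNetworkSecurityGroup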
az vm create \
--resource-group myResourceGroup \
--name myVM \
--image UbuntuLTS \
--size Standard_DS3_v2 \
--admin-username azureuser \
--generate-ssh-keys \
--nics myNic1 myNic2
Add routing tables to the guest OS by completing the steps in Configure the guest OS for multiple NICs.
Add a NIC to a VM
The previous steps created a VM with multiple NICs. You can also add NICs to an existing VM with the Azure CLI
2.0. Different VM sizes support a varying number of NICs, so size your VM accordingly. If needed, you can resize a
VM.
Create another NIC with az network nic create. The following example creates a NIC named myNic3 connected to
the back-end subnet and network security group created in the previous steps:
To add a NIC to an existing VM, first deallocate the VM with az vm deallocate. The following example deallocates
the VM named myVM:
az vm deallocate --resource-group myResourceGroup --name myVM
Add the NIC with az vm nic add. The following example adds myNic3 to myVM:
az vm nic add \
--resource-group myResourceGroup \
--vm-name myVM \
--nics myNic3
Add routing tables to the guest OS by completing the steps in Configure the guest OS for multiple NICs.
Remove the NIC with az vm nic remove. The following example removes myNic3 from myVM:
az vm nic remove \
--resource-group myResourceGroup \
--vm-name myVM \
--nics myNic3
"copy": {
"name": "multiplenics"
"count": "[parameters('count')]"
}
You can read a complete example of creating multiple NICs using Resource Manager templates.
Add routing tables to the guest OS by completing the steps in Configure the guest OS for multiple NICs.
To make the change persistent and applied during network stack activation, edit
/etc/sysconfig/network-scripts/ifcfg-eth0 and /etc/sysconfig/network-scripts/ifcfg-eth1. Change the line
"NM_CONTROLLED=yes" to "NM_CONTROLLED=no". Without this step, the additional rules/routing are not automatically applied.
Next, extend the routing tables. Let's assume we have the following setup in place:
Routing
Interfaces
You would then create the following files and add the appropriate rules and routes to each:
/etc/sysconfig/network-scripts/rule-eth0
/etc/sysconfig/network-scripts/route-eth0
/etc/sysconfig/network-scripts/rule-eth1
/etc/sysconfig/network-scripts/route-eth1
The routing rules are now correctly in place and you can connect with either interface as needed.
Next steps
Review Linux VM sizes when trying to create a VM with multiple NICs. Pay attention to the maximum number of
NICs each VM size supports.
Create a Linux virtual machine with Accelerated
Networking
5/7/2018 • 9 min to read • Edit Online
In this tutorial, you learn how to create a Linux virtual machine (VM) with Accelerated Networking. To create a
Windows VM with Accelerated Networking, see Create a Windows VM with Accelerated Networking. Accelerated
networking enables single root I/O virtualization (SR-IOV) to a VM, greatly improving its networking performance.
This high-performance path bypasses the host from the datapath, reducing latency, jitter, and CPU utilization, for
use with the most demanding network workloads on supported VM types. The following picture shows
communication between two VMs with and without accelerated networking:
Without accelerated networking, all networking traffic in and out of the VM must traverse the host and the virtual
switch. The virtual switch provides all policy enforcement, such as network security groups, access control lists,
isolation, and other network virtualized services to network traffic. To learn more about virtual switches, read the
Hyper-V network virtualization and virtual switch article.
With accelerated networking, network traffic arrives at the VM's network interface (NIC ), and is then forwarded to
the VM. All network policies that the virtual switch applies are now offloaded and applied in hardware. Applying
policy in hardware enables the NIC to forward network traffic directly to the VM, bypassing the host and the
virtual switch, while maintaining all the policy it applied in the host.
The benefits of accelerated networking only apply to the VM that it is enabled on. For the best results, it is ideal to
enable this feature on at least two VMs connected to the same Azure Virtual Network (VNet). When
communicating across VNets or connecting on-premises, this feature has minimal impact to overall latency.
Benefits
Lower Latency / Higher packets per second (pps): Removing the virtual switch from the datapath removes
the time packets spend in the host for policy processing and increases the number of packets that can be
processed inside the VM.
Reduced jitter: Virtual switch processing depends on the amount of policy that needs to be applied and the
workload of the CPU that is doing the processing. Offloading the policy enforcement to the hardware removes
that variability by delivering packets directly to the VM, removing the host to VM communication and all
software interrupts and context switches.
Decreased CPU utilization: Bypassing the virtual switch in the host leads to less CPU utilization for
processing network traffic.
The network security group contains several default rules, one of which disables all inbound access from the
Internet. Open a port to allow SSH access to the virtual machine with az network nsg rule create:
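A sketch, assuming a network security group named myNetworkSecurityGroup created in an earlier step:
az network nsg rule create \
--resource-group myResourceGroup \
--nsg-name myNetworkSecurityGroup \
--name Allow-SSH-Internet \
--access Allow \
--protocol Tcp \
--direction Inbound \
--priority 100 \
--source-address-prefix Internet \
--source-port-range "*" \
--destination-address-prefix "*" \
--destination-port-range 22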
Create a network interface with az network nic create with accelerated networking enabled. The following example
creates a network interface named myNic in the mySubnet subnet of the myVnet virtual network and associates
the myNetworkSecurityGroup network security group to the network interface:
az network nic create \
--resource-group myResourceGroup \
--name myNic \
--vnet-name myVnet \
--subnet mySubnet \
--accelerated-networking true \
--public-ip-address myPublicIp \
--network-security-group myNetworkSecurityGroup
az vm create \
--resource-group myResourceGroup \
--name myVM \
--image UbuntuLTS \
--size Standard_DS4_v2 \
--admin-username azureuser \
--generate-ssh-keys \
--nics myNic
{
"fqdns": "",
"id": "/subscriptions/<ID>/resourceGroups/myResourceGroup/providers/Microsoft.Compute/virtualMachines/myVM",
"location": "centralus",
"macAddress": "00-0D-3A-23-9A-49",
"powerState": "VM running",
"privateIpAddress": "192.168.0.4",
"publicIpAddress": "40.68.254.142",
"resourceGroup": "myResourceGroup"
}
ssh azureuser@<your-public-ip-address>
From the Bash shell, enter uname -r and confirm that the kernel version is one of the following versions, or
greater:
Ubuntu 16.04: 4.11.0-1013
SLES SP3: 4.4.92-6.18
RHEL: 7.4.2017120423
CentOS: 7.4.20171206
Confirm the Mellanox VF device is exposed to the VM with the lspci command. The returned output is similar to
the following output:
0000:00:00.0 Host bridge: Intel Corporation 440BX/ZX/DX - 82443BX/ZX/DX Host bridge (AGP disabled) (rev 03)
0000:00:07.0 ISA bridge: Intel Corporation 82371AB/EB/MB PIIX4 ISA (rev 01)
0000:00:07.1 IDE interface: Intel Corporation 82371AB/EB/MB PIIX4 IDE (rev 01)
0000:00:07.3 Bridge: Intel Corporation 82371AB/EB/MB PIIX4 ACPI (rev 02)
0000:00:08.0 VGA compatible controller: Microsoft Corporation Hyper-V virtual VGA
0001:00:02.0 Ethernet controller: Mellanox Technologies MT27500/MT27520 Family [ConnectX-3/ConnectX-3 Pro
Virtual Function]
Check for activity on the VF (virtual function) with the ethtool -S eth0 | grep vf_ command. If you receive output
similar to the following sample output, accelerated networking is enabled and working.
vf_rx_packets: 992956
vf_rx_bytes: 2749784180
vf_tx_packets: 2656684
vf_tx_bytes: 1099443970
vf_tx_dropped: 0
az vm deallocate \
--resource-group myResourceGroup \
--name myVM
Important: if your VM was created individually, without an availability set, you only need to
stop/deallocate the individual VM to enable Accelerated Networking. If your VM was created with an availability
set, all VMs contained in the availability set need to be stopped/deallocated before you enable Accelerated
Networking on any of the NICs.
Once stopped, enable Accelerated Networking on the NIC of your VM:
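az network nic update \
--name myNic \
--resource-group myResourceGroup \
--accelerated-networking true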
Restart your VM or, if in an Availability Set, all the VMs in the Set and confirm that Accelerated Networking is
enabled:
az vm start --resource-group myResourceGroup \
--name myVM
VMSS
A virtual machine scale set (VMSS) is slightly different but follows the same workflow. First, stop the VMs:
az vmss deallocate \
--name myvmss \
--resource-group myrg
Once the VMs are stopped, update the Accelerated Networking property under the network interface:
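az vmss update \
--name myvmss \
--resource-group myrg \
--set virtualMachineProfile.networkProfile.networkInterfaceConfigurations[0].enableAcceleratedNetworking=true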
A VMSS applies VM upgrades by using one of three settings: automatic, rolling, or manual. In these instructions,
the policy is set to automatic so that the VMSS picks up the changes immediately after restarting:
az vmss update \
--name myvmss \
--resource-group myrg \
--set upgradePolicy.mode="automatic"
az vmss start \
--name myvmss \
--resource-group myrg
After you restart, wait for the upgrades to finish. Once they are complete, the VF appears inside the VM. (Make
sure you are using a supported OS and VM size.)
Resizing existing VMs with Accelerated Networking
A VM with Accelerated Networking enabled can only be resized to a VM size that also supports Accelerated
Networking; the resize operation cannot move it to a VM instance that does not support Accelerated
Networking. Instead, to resize one of these VMs:
Stop/deallocate the VM or, if in an availability set/VMSS, stop/deallocate all the VMs in the set/VMSS.
Disable Accelerated Networking on the NIC of the VM or, if in an availability set/VMSS, on the NICs of all VMs
in the set/VMSS.
Once Accelerated Networking is disabled, the VM/availability set/VMSS can be moved to a new size that does
not support Accelerated Networking and restarted.
Create a fully qualified domain name in the Azure
portal for a Linux VM
12/14/2017 • 1 min to read • Edit Online
When you create a virtual machine (VM) in the Azure portal, a public IP resource for the virtual machine is
automatically created. You use this IP address to remotely access the VM. Although the portal does not create a
fully qualified domain name, or FQDN, you can add one once the VM is created. This article demonstrates the
steps to create a DNS name or FQDN.
Create a FQDN
This article assumes that you have already created a VM. If needed, you can create a VM in the portal or with the
Azure CLI. Follow these steps once your VM is up and running:
1. Select your VM in the portal. The DNS name is currently blank. Select Public IP address:
3. To return to the VM overview blade, close the Public IP address blade. Verify that the DNS name is now
shown.
You can now connect remotely to the VM using this DNS name such as with
ssh [email protected] .
Next steps
Now that your VM has a public IP and DNS name, you can deploy common application frameworks or services
such as nginx, MongoDB, Docker, etc.
You can also read more about using Resource Manager for tips on building your Azure deployments.
How to find and delete unattached network interface
cards (NICs) for Azure VMs
4/11/2018 • 1 min to read • Edit Online
When you delete a virtual machine (VM) in Azure, the network interface cards (NICs) are not deleted by default. If
you create and delete multiple VMs, the unused NICs continue to use the internal IP address leases. As you create
other VM NICs, they may be unable to obtain an IP lease in the address space of the subnet. This article shows you
how to find and delete unattached NICs.
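A minimal sketch with the Azure CLI 2.0: list the NICs whose virtualMachine property is empty, review them, and then delete by ID (the <nic-id> placeholder stands in for an ID from the list):
# List the IDs of NICs that are not attached to any VM
az network nic list --query '[?virtualMachine==`null`].[id]' --output tsv
# After reviewing the list, delete an unattached NIC by its ID
az network nic delete --ids <nic-id>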
Next steps
For more information on how to create and manage virtual networks in Azure, see create and manage VM
networks.
DNS Name Resolution options for Linux virtual
machines in Azure
4/9/2018 • 7 min to read • Edit Online
Azure provides DNS name resolution by default for all virtual machines that are in a single virtual network. You
can implement your own DNS name resolution solution by configuring your own DNS services on your virtual
machines that Azure hosts. The following scenarios should help you choose the one that works for your situation.
Name resolution that Azure provides
Name resolution using your own DNS server
The type of name resolution that you use depends on how your virtual machines and role instances need to
communicate with each other.
The following table illustrates scenarios and corresponding name resolution solutions:
Scenario: Name resolution between role instances or virtual machines in the same virtual network
Solution: Name resolution that Azure provides
Suffix: hostname or fully qualified domain name (FQDN)

Scenario: Name resolution between role instances or virtual machines in different virtual networks
Solution: Customer-managed DNS servers that forward queries between virtual networks for resolution by Azure (DNS proxy). See Name resolution using your own DNS server.
Suffix: FQDN only

Scenario: Reverse DNS for internal IPs
Solution: Name resolution using your own DNS server
Suffix: n/a
NOTE
The 'dnsmasq' package is only one of the many DNS caches that are available for Linux. Before you use it, check its suitability
for your needs and that no other cache is installed.
Client-side retries
DNS is primarily a UDP protocol. Because the UDP protocol doesn't guarantee message delivery, the DNS
protocol itself handles retry logic. Each DNS client (operating system) can exhibit different retry logic depending
on the creator's preference:
Windows operating systems retry after one second and then again after another two, four, and another four
seconds.
The default Linux setup retries after five seconds. You should change this to retry five times at one-second
intervals.
To check the current settings on a Linux virtual machine, 'cat /etc/resolv.conf', and look at the 'options' line, for
example:
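options timeout:1 attempts:5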
The resolv.conf file is auto-generated and should not be edited. The specific steps that add the 'options' line vary by
distribution:
Ubuntu (uses resolvconf)
1. Add the options line to '/etc/resolvconf/resolv.conf.d/head'.
2. Run 'resolvconf -u' to update.
SUSE (uses netconf)
1. Add 'timeout:1 attempts:5' to the NETCONFIG_DNS_RESOLVER_OPTIONS="" parameter in
'/etc/sysconfig/network/config'.
2. Run 'netconfig update' to update.
CentOS by Rogue Wave Software (formerly OpenLogic) (uses NetworkManager)
1. Add 'RES_OPTIONS="timeout:1 attempts:5"' to '/etc/sysconfig/network'.
2. Run 'service network restart' to update.
When you use name resolution that Azure provides, the internal DNS suffix is provided to each virtual machine by
using DHCP. When you use your own name resolution solution, this suffix is not supplied to virtual machines
because the suffix interferes with other DNS architectures. To refer to machines by FQDN or to configure the suffix
on your virtual machines, you can use PowerShell or the API to determine the suffix:
For virtual networks that are managed by Azure Resource Manager, the suffix is available via the network
interface card resource. You can also run the azure network public-ip show <resource group> <pip name>
command to display the details of your public IP, which includes the FQDN of the NIC.
If forwarding queries to Azure doesn't suit your needs, you need to provide your own DNS solution. Your DNS
solution needs to:
Provide appropriate hostname resolution, for example via DDNS. If you use DDNS, you might need to disable
DNS record scavenging. DHCP leases of Azure are very long and scavenging may remove DNS records
prematurely.
Provide appropriate recursive resolution to allow resolution of external domain names.
Be accessible (TCP and UDP on port 53) from the clients it serves and be able to access the Internet.
Be secured against access from the Internet to mitigate threats posed by external agents.
NOTE
For best performance, when you use Azure virtual machines as DNS servers, disable IPv6 and assign an Instance-Level Public IP to each DNS server virtual machine.
Create virtual network interface cards and use internal
DNS for VM name resolution on Azure
4/9/2018 • 5 min to read
This article shows you how to set static internal DNS names for Linux VMs using virtual network interface cards
(vNics) and DNS label names with the Azure CLI 2.0. You can also perform these steps with the Azure CLI 1.0.
Static DNS names are used for permanent infrastructure services like a Jenkins build server, which is used for this
document, or a Git server.
The requirements are:
an Azure account
SSH public and private key files
Quick commands
If you need to quickly accomplish the task, the following section details the commands needed. More detailed
information and context for each step can be found in the rest of the document, starting here. To perform these
steps, you need the latest Azure CLI 2.0 installed and logged in to an Azure account using az login.
Prerequisites: a resource group, a virtual network and subnet, and a Network Security Group with SSH inbound.
Create a virtual network interface card with a static internal DNS name
Create the vNic with az network nic create. The --internal-dns-name CLI flag sets the DNS label, which provides the static DNS name for the virtual network interface card (vNic). The following example creates a vNic named myNic, connects it to the myVnet virtual network, and creates an internal DNS name record called jenkins:
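A sketch of that command (the subnet name mySubnet is an assumption; see the prerequisites above):
az network nic create \
--resource-group myResourceGroup \
--vnet-name myVnet \
--subnet mySubnet \
--name myNic \
--internal-dns-name jenkins
Then create the VM and attach the existing vNic: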
az vm create \
--resource-group myResourceGroup \
--name myVM \
--nics myNic \
--image UbuntuLTS \
--admin-username azureuser \
--ssh-key-value ~/.ssh/id_rsa.pub
Detailed walkthrough
A full continuous integration and continuous deployment (CI/CD) infrastructure on Azure requires certain servers to be static or long-lived. It is recommended that Azure assets like virtual networks and Network Security Groups are static, long-lived resources that are rarely redeployed. Once a virtual network has been deployed, it can be reused by new deployments without any adverse effects to the infrastructure. You can later add a Git repository server or a Jenkins automation server that delivers CI/CD to this virtual network for your development or test environments.
Internal DNS names are only resolvable inside an Azure virtual network. Because the DNS names are internal, they
are not resolvable to the outside internet, providing additional security to the infrastructure.
In the following examples, replace example parameter names with your own values. Example parameter names
include myResourceGroup , myNic , and myVM .
Create the virtual network interface card and static DNS names
Azure is very flexible, but to use DNS names for VM name resolution, you need to create virtual network interface
cards (vNics) that include a DNS label. vNics are important as you can reuse them by connecting them to different
VMs over the infrastructure lifecycle. This approach keeps the vNic as a static resource while the VMs can be temporary. By using DNS labeling on the vNic, we enable simple name resolution from other VMs in the VNet. Using resolvable names enables other VMs to access the automation server by the DNS name jenkins or the Git server as gitrepo.
Create the vNic with az network nic create, as shown in the Quick commands section earlier. The example creates a vNic named myNic, connects it to the virtual network named myVnet, and creates an internal DNS name record called jenkins.
By using the CLI flags to call out existing resources, we instruct Azure to deploy the VM inside the existing network.
To reiterate, once a VNet and subnet have been deployed, they can be left as static or permanent resources inside
your Azure region.
Next steps
Create your own custom environment for a Linux VM using Azure CLI commands directly
Create a Linux VM on Azure using templates
Use Azure Policy to restrict extensions installation on
Linux VMs
5/10/2018 • 3 min to read
If you want to prevent the use or installation of certain extensions on your Linux VMs, you can create an Azure
policy using the CLI to restrict extensions for VMs within a resource group.
This tutorial uses the CLI within the Azure Cloud Shell, which is constantly updated to the latest version. If you
want to run the Azure CLI locally, you need to install version 2.0.26 or later. Run az --version to find the version. If
you need to install or upgrade, see Install Azure CLI 2.0.
First, create the policy rules file in the Cloud Shell:
vim ~/clouddrive/azurepolicy.rules.json
{
"if": {
"allOf": [
{
"field": "type",
"equals": "Microsoft.OSTCExtensions/virtualMachines/extensions"
},
{
"field": "Microsoft.OSTCExtensions/virtualMachines/extensions/publisher",
"equals": "Microsoft.OSTCExtensions"
},
{
"field": "Microsoft.OSTCExtensions/virtualMachines/extensions/type",
"in": "[parameters('notAllowedExtensions')]"
}
]
},
"then": {
"effect": "deny"
}
}
When you are done, hit the Esc key and then type :wq to save and close the file.
Next, create a parameters file for the list of denied extensions:
vim ~/clouddrive/azurepolicy.parameters.json
{
"notAllowedExtensions": {
"type": "Array",
"metadata": {
"description": "The list of extensions that will be denied. Example: CustomScriptForLinux,
VMAccessForLinux etc.",
"strongType": "type",
"displayName": "Denied extension"
}
}
}
When you are done, hit the Esc key and then type :wq to save and close the file.
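With both files saved, create the policy definition and assign it at the scope of your resource group. The following is a sketch: the definition name, the VMAccessForLinux parameter value, and the <scope> placeholder are illustrative, not prescribed by this article.
az policy definition create \
--name 'not-allowed-vmextension-linux' \
--display-name 'Not allowed VM Extensions' \
--rules '~/clouddrive/azurepolicy.rules.json' \
--params '~/clouddrive/azurepolicy.parameters.json'
az policy assignment create \
--name 'not-allowed-vmextension-linux' \
--scope <scope> \
--policy 'not-allowed-vmextension-linux' \
--params '{ "notAllowedExtensions": { "value": [ "VMAccessForLinux" ] } }'
With the assignment in place, create a test VM: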
az vm create \
--resource-group myResourceGroup \
--name myVM \
--image UbuntuLTS \
--generate-ssh-keys
Try to create a new user named myNewUser using the VM Access extension; if the policy assignment is in effect, the request is denied.
az vm user update \
--resource-group myResourceGroup \
--name myVM \
--username myNewUser \
--password 'mynewuserpwd123!'
Next steps
For more information, see Azure Policy.
Migrate from Amazon Web Services (AWS) and
other platforms to Managed Disks in Azure
4/25/2018 • 4 min to read
You can upload VHD files from AWS or on-premises virtualization solutions to Azure to create VMs that take
advantage of Managed Disks. Azure Managed Disks removes the need to manage storage accounts for Azure IaaS
VMs. You have to only specify the type (Premium or Standard) and size of disk you need, and Azure creates and
manages the disk for you.
You can upload either generalized or specialized VHDs.
Generalized VHD - has had all of your personal account information removed using Sysprep.
Specialized VHD - maintains the user accounts, applications, and other state data from your original VM.
IMPORTANT
Before uploading any VHD to Azure, you should follow Prepare a Windows VHD or VHDX to upload to Azure
Scenario: You have existing AWS EC2 instances that you would like to migrate to Azure VMs using managed disks. Documentation: Move a VM from Amazon Web Services (AWS) to Azure.
Scenario: You have a VM from another virtualization platform that you would like to use as an image to create multiple Azure VMs. Documentation: Upload a generalized VHD and use it to create a new VM in Azure.
Scenario: You have a uniquely customized VM that you would like to recreate in Azure. Documentation: Upload a specialized VHD to Azure and create a new VM.
(Table fragments from the original disk size tables: per-disk throughput of 100, 150, and 200 MB per second, and Standard disk types S4, S6, S10, S20, and S30; the full tables are not reproduced here.)
Next steps
Before uploading any VHD to Azure, you should follow Prepare a Windows VHD or VHDX to upload to Azure
Move a Windows VM from Amazon Web Services
(AWS) to Azure using PowerShell
4/9/2018 • 2 min to read
If you are evaluating Azure virtual machines for hosting your workloads, you can export an existing Amazon Web Services (AWS) EC2 Windows VM instance, then upload the virtual hard disk (VHD) to Azure. Once the VHD is uploaded, you can create a new VM in Azure from the VHD.
This topic covers moving a single VM from AWS to Azure. If you want to move VMs from AWS to Azure at scale, see Migrate virtual machines in Amazon Web Services (AWS) to Azure with Azure Site Recovery.
Prepare the VM
You can upload both generalized and specialized VHDs to Azure. Each type requires that you prepare the VM
before exporting from AWS.
Generalized VHD - a generalized VHD has had all of your personal account information removed using
Sysprep. If you intend to use the VHD as an image to create new VMs from, you should:
Prepare a Windows VM.
Generalize the virtual machine using Sysprep.
Specialized VHD - a specialized VHD maintains the user accounts, applications and other state data from
your original VM. If you intend to use the VHD as-is to create a new VM, ensure the following steps are
completed.
Prepare a Windows VHD to upload to Azure. Do not generalize the VM using Sysprep.
Remove any guest virtualization tools and agents that are installed on the VM (for example, VMware Tools).
Ensure the VM is configured to pull its IP address and DNS settings via DHCP. This ensures that the
server obtains an IP address within the VNet when it starts up.
Once the VHD has been exported, follow the instructions in How Do I Download an Object from an S3 Bucket? to
download the VHD file from the S3 bucket.
IMPORTANT
AWS charges data transfer fees for downloading the VHD. See Amazon S3 Pricing for more information.
Next steps
Now you can upload the VHD to Azure and create a new VM.
If you ran Sysprep on your source to generalize it before exporting, see Upload a generalized VHD and use it to create a new VM in Azure
If you did not run Sysprep before exporting, the VHD is considered specialized, see Upload a specialized VHD
to Azure and create a new VM
Create a Linux VM from custom disk with the Azure
CLI 2.0
4/9/2018 • 6 min to read
This article shows you how to upload a customized virtual hard disk (VHD) or copy an existing VHD in Azure and create new Linux virtual machines (VMs) from the custom disk. You can install and configure a Linux distro to your requirements and then use that VHD to quickly create a new Azure virtual machine.
If you want to create multiple VMs from your customized disk, you should create an image from your VM or
VHD. For more information, see Create a custom image of an Azure VM using the CLI.
You have two options:
Upload a VHD
Copy an existing Azure VM
Quick commands
When creating a new VM using az vm create from a customized or specialized disk, you attach the disk (--attach-os-disk) instead of specifying a custom or marketplace image (--image). The following example creates a VM named myVM using the managed disk named myManagedDisk created from your customized VHD:
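A sketch of that command, mirroring the detailed example later in this article:
az vm create \
--resource-group myResourceGroup \
--name myVM \
--os-type linux \
--attach-os-disk myManagedDisk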
Requirements
To complete the following steps, you need:
A Linux virtual machine that has been prepared for use in Azure. The Prepare the VM section of this article
covers how to find distro specific information on installing the Azure Linux Agent (waagent) which is required
for the VM to work properly in Azure and for you to be able to connect to it using SSH.
The VHD file from an existing Azure-endorsed Linux distribution (or see information for non-endorsed distributions) installed to a virtual disk in the VHD format. Multiple tools exist to create a VM and VHD:
Install and configure QEMU or KVM, taking care to use VHD as your image format. If needed, you can
convert an image using qemu-img convert.
You can also use Hyper-V on Windows 10 or on Windows Server 2012/2012 R2.
NOTE
The newer VHDX format is not supported in Azure. When you create a VM, specify VHD as the format. If needed, you can
convert VHDX disks to VHD using qemu-img convert or the Convert-VHD PowerShell cmdlet. Further, Azure does not
support uploading dynamic VHDs, so you need to convert such disks to static VHDs before uploading. You can use tools
such as Azure VHD Utilities for GO to convert dynamic disks during the process of uploading to Azure.
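For example, a conversion from qcow2 to a fixed-size VHD might look like the following sketch (source.qcow2 and disk.vhd are placeholder file names; the force_size option requires a reasonably recent qemu-img):
qemu-img convert -f qcow2 -O vpc -o subformat=fixed,force_size source.qcow2 disk.vhd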
Make sure that you have the latest Azure CLI 2.0 installed and logged in to an Azure account using az login.
In the following examples, replace example parameter names with your own values. Example parameter names include myResourceGroup, mystorageaccount, and mydisks.
Prepare the VM
Azure supports various Linux distributions (see Endorsed Distributions). The following articles guide you through
how to prepare the various Linux distributions that are supported on Azure:
CentOS-based Distributions
Debian Linux
Oracle Linux
Red Hat Enterprise Linux
SLES & openSUSE
Ubuntu
Other - Non-Endorsed Distributions
Also see the Linux Installation Notes for more general tips on preparing Linux images for Azure.
NOTE
The Azure platform SLA applies to VMs running Linux only when one of the endorsed distributions is used with the
configuration details as specified under 'Supported Versions' in Linux on Azure-Endorsed Distributions.
az group create \
--name myResourceGroup \
--location eastus
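Create a storage account to hold the uploaded VHD and list its keys; a sketch using the example name mystorageaccount (the Standard_LRS SKU is an assumption):
az storage account create \
--resource-group myResourceGroup \
--location eastus \
--name mystorageaccount \
--kind Storage \
--sku Standard_LRS
az storage account keys list \
--resource-group myResourceGroup \
--account-name mystorageaccount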
Make a note of key1 as you will use it to interact with your storage account in the next steps.
Create a storage container
In the same way that you create different directories to logically organize your local file system, you create
containers within a storage account to organize your disks. A storage account can contain any number of
containers. Create a container with az storage container create.
The following example creates a container named mydisks:
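az storage container create \
--account-name mystorageaccount \
--name mydisks
Then upload your VHD into the container as a page blob; a sketch (the local file path is a placeholder, and authentication with the account key noted earlier is assumed, for example via the AZURE_STORAGE_KEY environment variable):
az storage blob upload \
--account-name mystorageaccount \
--container-name mydisks \
--type page \
--file /path/to/disk/myDisk.vhd \
--name myDisk.vhd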
az disk create \
--resource-group myResourceGroup \
--name myManagedDisk \
--source https://mystorageaccount.blob.core.windows.net/mydisks/myDisk.vhd
Option 2: Copy an existing VM
You can also create the customized VM in Azure and then copy the OS disk and attach it to a new VM to create
another copy. This is fine for testing, but if you want to use an existing Azure VM as the model for multiple new
VMs, you really should create an image instead. For more information about creating an image from an existing
Azure VM, see Create a custom image of an Azure VM using the CLI
Create a snapshot
This example creates a snapshot of a VM named myVM in resource group myResourceGroup and creates a
snapshot named osDiskSnapshot.
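A sketch of those steps (the --query path reads the VM's managed OS disk ID):
osDiskId=$(az vm show \
--resource-group myResourceGroup \
--name myVM \
--query "storageProfile.osDisk.managedDisk.id" \
--output tsv)
az snapshot create \
--resource-group myResourceGroup \
--name osDiskSnapshot \
--source $osDiskId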
snapshotId=$(az snapshot show --name osDiskSnapshot --resource-group myResourceGroup --query [id] -o tsv)
Create the managed disk. In this example, we create a managed disk named myManagedDisk from the snapshot; the disk is 128 GB in size and uses standard storage.
az disk create \
--resource-group myResourceGroup \
--name myManagedDisk \
--sku Standard_LRS \
--size-gb 128 \
--source $snapshotId
Create the VM
Now, create your VM with az vm create and attach (--attach-os-disk) the managed disk as the OS disk. The
following example creates a VM named myNewVM using the managed disk created from your uploaded VHD:
az vm create \
--resource-group myResourceGroup \
--location eastus \
--name myNewVM \
--os-type linux \
--attach-os-disk myManagedDisk
You should be able to SSH into the VM using the credentials from the source VM.
Next steps
After you have prepared and uploaded your custom virtual disk, you can read more about using Resource
Manager and templates. You may also want to add a data disk to your new VMs. If you have applications running
on your VMs that you need to access, be sure to open ports and endpoints.
Platform-supported migration of IaaS resources from
classic to Azure Resource Manager
4/9/2018 • 7 min to read
In this article, we describe how we're enabling migration of infrastructure as a service (IaaS ) resources from the
Classic to Resource Manager deployment models. You can read more about Azure Resource Manager features
and benefits. We detail how to connect resources from the two deployment models that coexist in your
subscription by using virtual network site-to-site gateways.
NOTE
In this migration scope, both the management-plane operations and the data-plane operations may not be allowed for a
period of time during the migration.
NOTE
In this migration scope, the management plane may not be allowed for a period of time during the migration. For certain
configurations as described earlier, data-plane downtime occurs.
NOTE
The Resource Manager deployment model doesn't have the concept of Classic images and disks. When the storage account
is migrated, Classic images and disks are not visible in the Resource Manager stack but the backing VHDs remain in the
storage account.
Unattached resources (Network Security Groups, Route Tables & Reserved IPs)
Network Security Groups, Route Tables & Reserved IPs that are not attached to any Virtual Machines and Virtual
Networks can be migrated independently.
Compute (Unassociated virtual machine disks): The VHD blobs behind these disks get migrated when the storage account is migrated.
Compute (Virtual machine images): The VHD blobs behind these images get migrated when the storage account is migrated.
Network (Virtual networks using VNet Peering): Migrate the virtual network to Resource Manager, then peer. Learn more about VNet Peering.
Unsupported configurations
The following configurations are not currently supported.
Resource Manager (Role-Based Access Control (RBAC) for classic resources): Because the URI of the resources is modified after migration, it is recommended that you plan the RBAC policy updates that need to happen after migration.
Compute (Virtual machines that belong to a virtual network but don't have an explicit subnet assigned): You can optionally delete the VM.
Compute (Virtual machines that have alerts or Autoscale policies): The migration goes through, and these settings are dropped. It is highly recommended that you evaluate your environment before you do the migration. Alternatively, you can reconfigure the alert settings after migration is complete.
Compute (Boot diagnostics with Premium storage): Disable the Boot Diagnostics feature for the VMs before continuing with migration. You can re-enable boot diagnostics in the Resource Manager stack after the migration is complete. Additionally, blobs that are being used for screenshots and serial logs should be deleted so you are no longer charged for those blobs.
Compute (Cloud services that contain web/worker roles): This is currently not supported.
Compute (Cloud services that contain more than one availability set or multiple availability sets): This is currently not supported. Please move the Virtual Machines to the same availability set before migrating.
Network (Virtual networks that contain virtual machines and web/worker roles): This is currently not supported. Please move the web/worker roles to their own Virtual Network before migrating. Once the classic Virtual Network is migrated, the migrated Azure Resource Manager Virtual Network can be peered with the classic Virtual Network to achieve a similar configuration as before.
Network (Classic ExpressRoute circuits): This is currently not supported. These circuits need to be migrated to Azure Resource Manager before beginning IaaS migration. To learn more, see Moving ExpressRoute circuits from the classic to the Resource Manager deployment model.
Azure App Service (Virtual networks that contain App Service environments): This is currently not supported.
Azure HDInsight (Virtual networks that contain HDInsight services): This is currently not supported.
Microsoft Dynamics Lifecycle Services (Virtual networks that contain virtual machines that are managed by Dynamics Lifecycle Services): This is currently not supported.
Azure AD Domain Services (Virtual networks that contain Azure AD Domain Services): This is currently not supported.
Azure RemoteApp (Virtual networks that contain Azure RemoteApp deployments): This is currently not supported.
Azure API Management (Virtual networks that contain Azure API Management deployments): This is currently not supported. To migrate the IaaS VNet, please change the VNet of the API Management deployment, which is a no-downtime operation.
Next steps
Technical deep dive on platform-supported migration from classic to Azure Resource Manager
Planning for migration of IaaS resources from classic to Azure Resource Manager
Use PowerShell to migrate IaaS resources from classic to Azure Resource Manager
Use CLI to migrate IaaS resources from classic to Azure Resource Manager
Community tools for assisting with migration of IaaS resources from classic to Azure Resource Manager
Review most common migration errors
Review the most frequently asked questions about migrating IaaS resources from classic to Azure Resource
Manager
Technical deep dive on platform-supported migration
from classic to Azure Resource Manager
4/9/2018 • 13 min to read
Let's take a deep dive into migrating from the Azure classic deployment model to the Azure Resource Manager deployment model. We look at resources at a resource and feature level to help you understand how the Azure platform migrates resources between the two deployment models. For more information, please read the service announcement article: Platform-supported migration of IaaS resources from classic to Azure Resource Manager.
NOTE
The operations described in the following sections are all idempotent. If you have a problem other than an unsupported
feature or a configuration error, retry the prepare, abort, or commit operation. Azure tries the action again.
Validate
The validate operation is the first step in the migration process. The goal of this step is to analyze the state of the
resources you want to migrate in the classic deployment model. The operation evaluates whether the resources
are capable of migration (success or failure).
You select the virtual network or a cloud service (if it’s not in a virtual network) that you want to validate for
migration. If the resource is not capable of migration, Azure lists the reasons why.
Checks not done in the validate operation
The validate operation only analyzes the state of the resources in the classic deployment model. It can check for all
failures and unsupported scenarios due to various configurations in the classic deployment model. It is not
possible to check for all issues that the Azure Resource Manager stack might impose on the resources during
migration. These issues are only checked when the resources undergo transformation in the next step of migration
(the prepare operation). The following table lists all the issues not checked in the validate operation:
Networking checks not in the validate operation:
Azure Resource Manager quota checks for networking resources. For example: static public IP, dynamic public IPs, load balancer,
network security groups, route tables, and network interfaces.
All load balancer rules are valid across deployment and the virtual network.
Conflicting private IPs between stop-deallocated VMs in the same virtual network.
Prepare
The prepare operation is the second step in the migration process. The goal of this step is to simulate the
transformation of the IaaS resources from the classic deployment model to Resource Manager resources. Further,
the prepare operation presents this side-by-side for you to visualize.
NOTE
Your resources in the classic deployment model are not modified during this step. It's a safe step to run if you're trying out
migration.
You select the virtual network or the cloud service (if it’s not a virtual network) that you want to prepare for
migration.
If the resource is not capable of migration, Azure stops the migration process and lists the reason why the
prepare operation failed.
If the resource is capable of migration, Azure locks down the management-plane operations for the resources
under migration. For example, you are not able to add a data disk to a VM under migration.
Azure then starts the migration of metadata from the classic deployment model to Resource Manager for the
migrating resources.
After the prepare operation is complete, you have the option of visualizing the resources in both the classic deployment model and Resource Manager. For every cloud service in the classic deployment model, the Azure platform creates a resource group name that has the pattern <cloud-service-name>-Migrated.
NOTE
It is not possible to select the name of a resource group created for migrated resources (that is, "-Migrated"). After migration
is complete, however, you can use the move feature of Azure Resource Manager to move resources to any resource group
you want. For more information, see Move resources to new resource group or subscription.
Two screenshots in the original article show the result after a successful prepare operation: the first shows a resource group that contains the original cloud service; the second shows the new "-Migrated" resource group that contains the equivalent Azure Resource Manager resources.
Here is a behind-the-scenes look at your resources after the completion of the prepare phase. Note that the
resource in the data plane is the same. It's represented in both the management plane (classic deployment model)
and the control plane (Resource Manager).
NOTE
VMs that are not in a virtual network in the classic deployment model are stopped and deallocated in this phase of
migration.
Commit
After you finish the validation, you can commit the migration. Resources do not appear anymore in the classic
deployment model, and are available only in the Resource Manager deployment model. The migrated resources
can be managed only in the new portal.
NOTE
This is an idempotent operation. If it fails, retry the operation. If it continues to fail, create a support ticket or create a forum
post with a "ClassicIaaSMigration" tag on our VM forum.
Migration flowchart
(The flowchart image showing how to proceed with migration is not reproduced here.) The following list maps each resource in the classic deployment model to its Resource Manager equivalent, with notes on how it is migrated:
Cloud service name → DNS name: During migration, a new resource group is created for every cloud service with the naming pattern <cloudservicename>-migrated. This resource group contains all your resources. The cloud service name becomes a DNS name that is associated with the public IP address.
Disk resources attached to VM → Implicit disks attached to VM: Disks are not modeled as top-level resources in the Resource Manager deployment model. They are migrated as implicit disks under the VM. Only disks that are attached to a VM are currently supported. Resource Manager VMs can now use storage accounts in the classic deployment model, which allows the disks to be easily migrated without any updates.
Virtual machine certificates → Certificates in Azure Key Vault: If a cloud service contains service certificates, the migration creates a new Azure key vault per cloud service, and moves the certificates into the key vault. The VMs are updated to reference the certificates from the key vault.
Load-balanced endpoint set → Load balancer: In the classic deployment model, the platform assigned an implicit load balancer for every cloud service. During migration, a new load-balancer resource is created, and the load-balancing endpoint set becomes load-balancer rules.
Inbound NAT rules → Inbound NAT rules: Input endpoints defined on the VM are converted to inbound network address translation rules under the load balancer during the migration.
VIP address → Public IP address with DNS name: The virtual IP address becomes a public IP address, and is associated with the load balancer. A virtual IP can only be migrated if there is an input endpoint assigned to it.
Virtual network → Virtual network: The virtual network is migrated, with all its properties, to the Resource Manager deployment model. A new resource group is created with the name <virtual-network-name>-migrated.
Reserved IPs → Public IP address with static allocation method: Reserved IPs associated with the load balancer are migrated, along with the migration of the cloud service or the virtual machine. Unassociated reserved IP migration is not currently supported.
Public IP address per VM → Public IP address with dynamic allocation method: The public IP address associated with the VM is converted as a public IP address resource, with the allocation method set to static.
IP forwarding property on a VM's network configuration → IP forwarding property on the NIC: The IP forwarding property on a VM is converted to a property on the network interface during the migration.
Load balancer with multiple IPs → Load balancer with multiple public IP resources: Every public IP associated with the load balancer is converted to a public IP resource, and associated with the load balancer after migration.
Internal DNS names on the VM → Internal DNS names on the NIC: During migration, the internal DNS suffixes for the VMs are migrated to a read-only property named "InternalDomainNameSuffix" on the NIC. The suffix remains unchanged after migration, and VM resolution should continue to work as previously.
Virtual network gateway → Virtual network gateway: Virtual network gateway properties are migrated unchanged. The VIP associated with the gateway does not change either.
Local network site → Local network gateway: Local network site properties are migrated unchanged to a new resource called a local network gateway. This represents on-premises address prefixes and the remote gateway IP.
Next steps
Overview of platform-supported migration of IaaS resources from classic to Azure Resource Manager
Planning for migration of IaaS resources from classic to Azure Resource Manager
Use PowerShell to migrate IaaS resources from classic to Azure Resource Manager
Use CLI to migrate IaaS resources from classic to Azure Resource Manager
Community tools for assisting with migration of IaaS resources from classic to Azure Resource Manager
Review most common migration errors
Review the most frequently asked questions about migrating IaaS resources from classic to Azure Resource
Manager
Planning for migration of IaaS resources from classic
to Azure Resource Manager
4/9/2018 • 13 min to read
While Azure Resource Manager offers a lot of amazing features, it is critical to plan out your migration journey to
make sure things go smoothly. Spending time on planning will ensure that you do not encounter issues while
executing migration activities.
NOTE
The following guidance was heavily contributed to by the Azure Customer Advisory Team and Cloud Solution Architects working with customers on migrating large environments. As such, this document will continue to be updated as new patterns of success emerge, so check back from time to time to see if there are any new recommendations.
Plan
Technical considerations and tradeoffs
Depending on your technical requirements, size, geographies, and operational practices, you might want to consider:
1. Why is Azure Resource Manager desired for your organization? What are the business reasons for a
migration?
2. What are the technical reasons for Azure Resource Manager? What (if any) additional Azure services would
you like to leverage?
3. Which application (or sets of virtual machines) is included in the migration?
4. Which scenarios are supported with the migration API? Review the unsupported features and configurations.
5. Will your operational teams now support applications/VMs in both Classic and Azure Resource Manager?
6. How (if at all) does Azure Resource Manager change your VM deployment, management, monitoring, and
reporting processes? Do your deployment scripts need to be updated?
7. What is the communications plan to alert stakeholders (end users, application owners, and infrastructure
owners)?
8. Depending on the complexity of the environment, should there be a maintenance period where the application
is unavailable to end users and to application owners? If so, for how long?
9. What is the training plan to ensure stakeholders are knowledgeable and proficient in Azure Resource
Manager?
10. What is the program management or project management plan for the migration?
11. What are the timelines for the Azure Resource Manager migration and other related technology road maps?
Are they optimally aligned?
Patterns of success
Successful customers have detailed plans where the above questions are discussed, documented and governed.
Ensure the migration plans are broadly communicated to sponsors and stakeholders. Equip yourself with
knowledge about your migration options; reading through this migration document set below is highly
recommended.
Overview of platform-supported migration of IaaS resources from classic to Azure Resource Manager
Technical deep dive on platform-supported migration from classic to Azure Resource Manager
Planning for migration of IaaS resources from classic to Azure Resource Manager
Use PowerShell to migrate IaaS resources from classic to Azure Resource Manager
Use CLI to migrate IaaS resources from classic to Azure Resource Manager
Community tools for assisting with migration of IaaS resources from classic to Azure Resource Manager
Review most common migration errors
Review the most frequently asked questions about migrating IaaS resources from classic to Azure Resource
Manager
Pitfalls to avoid
Failure to plan. The technology steps of this migration are proven and the outcome is predictable.
Assumption that the platform supported migration API will account for all scenarios. Read the unsupported
features and configurations to understand what scenarios are supported.
Not planning potential application outage for end users. Plan enough buffer to adequately warn end users of
potentially unavailable application time.
Lab Test
Replicate your environment and do a test migration
NOTE
Exact replication of your existing environment is executed by using a community-contributed tool which is not officially
supported by Microsoft Support. Therefore, it is an optional step but it is the best way to find out issues without touching
your production environments. If using a community-contributed tool is not an option, then read about the
Validate/Prepare/Abort Dry Run recommendation below.
Conducting a lab test of your exact scenario (compute, networking, and storage) is the best way to ensure a smooth migration. This will help ensure:
A wholly separate lab or an existing non-production environment to test. We recommend a wholly separate lab that can be migrated repeatedly and can be destructively modified. Scripts to collect/hydrate metadata from the real subscriptions are listed below.
It's a good idea to create the lab in a separate subscription. The reason is that the lab will be torn down repeatedly, and having a separate, isolated subscription will reduce the chance that something real will get accidentally deleted.
This can be accomplished by using the AsmMetadataParser tool. Read more about this tool here
Patterns of success
The following were issues discovered in many of the larger migrations. This is not an exhaustive list and you should refer to the unsupported features and configurations for more detail. You may or may not encounter these technical issues, but if you do, solving them before attempting migration will ensure a smoother experience.
Do a Validate/Prepare/Abort Dry Run - This is perhaps the most important step to ensure Classic to
Azure Resource Manager migration success. The migration API has three main steps: Validate, Prepare and
Commit. Validate will read the state of your classic environment and return a result of all issues. However,
because some issues might exist in the Azure Resource Manager stack, Validate will not catch everything.
The next step in migration process, Prepare will help expose those issues. Prepare will move the metadata
from Classic to Azure Resource Manager, but will not commit the move, and will not remove or change
anything on the Classic side. The dry run involves preparing the migration, then aborting (not
committing) the migration prepare. The goal of validate/prepare/abort dry run is to see all of the
metadata in the Azure Resource Manager stack, examine it (programmatically or in Portal), and verify that
everything migrates correctly, and work through technical issues. It will also give you a sense of migration
duration so you can plan for downtime accordingly. A validate/prepare/abort does not cause any user
downtime; therefore, it is non-disruptive to application usage.
The items below will need to be solved before the dry run, but a dry run test will also safely flush out
these preparation steps if they are missed. During enterprise migration, we've found the dry run to be a
safe and invaluable way to ensure migration readiness.
When prepare is running, the control plane (Azure management operations) will be locked for the whole
virtual network, so no changes can be made to VM metadata during validate/prepare/abort. But
otherwise any application function (RD, VM usage, etc.) will be unaffected. Users of the VMs will not
know that the dry run is being executed.
Express Route Circuits and VPN. Currently Express Route Gateways with authorization links cannot be
migrated without downtime. For the workaround, see Migrate ExpressRoute circuits and associated virtual
networks from the classic to the Resource Manager deployment model.
VM Extensions - Virtual Machine extensions are potentially one of the biggest roadblocks to migrating
running VMs. Remediation of VM Extensions could take upwards of 1-2 days, so plan accordingly. A
working Azure agent is needed to report back VM Extension status of running VMs. If the status comes
back as bad for a running VM, this will halt migration. The agent itself does not need to be in working order
to enable migration, but if extensions exist on the VM, then both a working agent AND outbound internet
connectivity (with DNS) will be needed for migration to move forward.
If connectivity to a DNS server is lost during migration, all VM Extensions except BGInfo v1.* need to
first be removed from every VM before migration prepare, and subsequently re-added back to the VM
after Azure Resource Manager migration. This is only for VMs that are running. If the VMs are
stopped deallocated, VM Extensions do not need to be removed. Note: Many extensions like Azure
diagnostics and security center monitoring will reinstall themselves after migration, so removing them is
not a problem.
In addition, make sure Network Security Groups are not restricting outbound internet access. This can happen with some Network Security Group configurations. Outbound internet access (and DNS) is needed for VM Extensions to be migrated to Azure Resource Manager.
There are two versions of the BGInfo extension: v1 and v2. If the VM was created using the Azure portal or PowerShell, the VM will likely have the v1 extension on it. This extension does not need to be removed and will be skipped (not migrated) by the migration API. However, if the Classic VM was created with the new Azure portal, it will likely have the JSON-based v2 version of BGInfo, which can be migrated to Azure Resource Manager provided the agent is working and has outbound internet access (and DNS).
Remediation Option 1. If you know your VMs will not have outbound internet access, a working DNS
service, and working Azure agents on the VMs, then uninstall all VM extensions as part of the migration
before Prepare, then reinstall the VM Extensions after migration.
Remediation Option 2. If VM extensions are too big of a hurdle, another option is to
shutdown/deallocate all VMs before migration. Migrate the deallocated VMs, then restart them on
the Azure Resource Manager side. The benefit here is that VM extensions will migrate. The downside
is that all public facing Virtual IPs will be lost (this may be a non-starter), and obviously the VMs will
shut down causing a much greater impact on working applications.
NOTE
If an Azure Security Center policy is configured against the running VMs being migrated, the security policy
needs to be stopped before removing extensions, otherwise the security monitoring extension will be
reinstalled automatically on the VM after removing it.
Availability Sets - For a virtual network (vNet) to be migrated to Azure Resource Manager, the VMs contained in the Classic deployment (that is, the cloud service) must all be in one availability set, or must all not be in any availability set. Having more than one availability set in the cloud service is not compatible with Azure Resource Manager and will halt migration. Additionally, there cannot be some VMs in an availability set and some VMs not in an availability set. To resolve this, you will need to remediate or reshuffle your cloud service. Plan accordingly as this might be time consuming.
Web/Worker Role Deployments - Cloud Services containing web and worker roles cannot migrate to
Azure Resource Manager. The web/worker roles must first be removed from the virtual network before
migration can start. A typical solution is to just move web/worker role instances to a separate Classic
virtual network that is also linked to an ExpressRoute circuit, or to migrate the code to newer PaaS App
Services (this discussion is beyond the scope of this document). In the former redeploy case, create a new
Classic virtual network, move/redeploy the web/worker roles to that new virtual network, then delete the
deployments from the virtual network being moved. No code changes required. The new Virtual Network
Peering capability can be used to peer together the classic virtual network containing the web/worker roles
and other virtual networks in the same Azure region such as the virtual network being migrated (after
virtual network migration is completed as peered virtual networks cannot be migrated), hence
providing the same capabilities with no performance loss and no latency/bandwidth penalties. Given the
addition of Virtual Network Peering, web/worker role deployments can now easily be mitigated and not
block the migration to Azure Resource Manager.
Azure Resource Manager Quotas - Azure regions have separate quotas/limits for both Classic and Azure Resource Manager. Even though in a migration scenario new hardware isn't being consumed (we're swapping existing VMs from Classic to Azure Resource Manager), Azure Resource Manager quotas still need to be in place with enough capacity before migration can start. Listed below are the major limits we've seen cause problems. Open a quota support ticket to raise the limits.
NOTE
These limits need to be raised in the same region as your current environment to be migrated.
Network Interfaces
Load Balancers
Public IPs
Static Public IPs
Cores
Network Security Groups
Route Tables
You can check your current Azure Resource Manager quotas for Compute (Cores, Availability Sets) and networking resources using the latest version of Azure CLI 2.0.
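A sketch of the quota checks (eastus is a placeholder for your region):
az vm list-usage --location eastus --output table
and for networking resources:
az network list-usages --location eastus --output table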
Azure Resource Manager API throttling limits - If you have a large enough environment (for example, more than 400 VMs in a VNet), you might hit the default API throttling limits for writes (currently 1200 writes/hour) in Azure Resource Manager. Before starting migration, you should raise a support ticket to increase this limit for your subscription.
Provisioning Timed Out VM Status - If any VM has the status of provisioning timed out, this needs to be resolved pre-migration. The only way to do this is with downtime, by deprovisioning/reprovisioning the VM (delete it, keep the disk, and recreate the VM).
RoleStateUnknown VM Status - If migration halts due to a role state unknown error message, inspect the VM using the portal and ensure it is running. This error typically goes away on its own (no remediation required) after a few minutes and is often transient, commonly seen during Virtual Machine start, stop, and restart operations. Recommended practice: retry migration again after a few minutes.
Fabric Cluster does not exist - In some cases, certain VMs cannot be migrated for various odd reasons. One of these known cases is if the VM was recently created (within the last week or so) and happened to land on an Azure cluster that is not yet equipped for Azure Resource Manager workloads. You will get an error that says fabric cluster does not exist and the VM cannot be migrated. Waiting a couple of days will usually resolve this particular problem as the cluster will soon be Azure Resource Manager enabled. However, one immediate workaround is to stop-deallocate the VM, then continue forward with migration, and start the VM back up in Azure Resource Manager after migrating.
Pitfalls to avoid
Do not take shortcuts and omit the validate/prepare/abort dry run migrations.
Most, if not all, of your potential issues will surface during the validate/prepare/abort steps.
Migration
Technical considerations and tradeoffs
Now you are ready because you have worked through the known issues with your environment.
For the real migrations, you might want to consider:
1. Plan and schedule the virtual network (smallest unit of migration) with increasing priority. Do the simple
virtual networks first, and progress with the more complicated virtual networks.
2. Most customers will have non-production and production environments. Schedule production last.
3. (OPTIONAL) Schedule a maintenance downtime with plenty of buffer in case unexpected issues arise.
4. Communicate with and align with your support teams in case issues arise.
Patterns of success
The technical guidance from the Lab Test section above should be considered and mitigated prior to a real
migration. With adequate testing, the migration is actually a non-event. For production environments, it might be
helpful to have additional support, such as a trusted Microsoft partner or Microsoft Premier services.
Pitfalls to avoid
Not fully testing may cause issues and delay in the migration.
Beyond Migration
Technical considerations and tradeoffs
Now that you are in Azure Resource Manager, maximize the platform. Read the overview of Azure Resource
Manager to find out about additional benefits.
Things to consider:
Bundling the migration with other activities. Most customers opt for an application maintenance window. If so,
you might want to use this downtime to enable other Azure Resource Manager capabilities like encryption and
migration to Managed Disks.
Revisit the technical and business reasons for Azure Resource Manager; enable the additional services
available only on Azure Resource Manager that apply to your environment.
Modernize your environment with PaaS services.
Patterns of success
Be purposeful about which services you now want to enable in Azure Resource Manager. Many customers find the following compelling for their Azure environments:
Role Based Access Control.
Azure Resource Manager templates for easier and more controlled deployment.
Tags.
Activity Control
Azure Policies
Pitfalls to avoid
Remember why you started this Classic to Azure Resource Manager migration journey. What were the original
business reasons? Did you achieve the business reason?
Next steps
Overview of platform-supported migration of IaaS resources from classic to Azure Resource Manager
Technical deep dive on platform-supported migration from classic to Azure Resource Manager
Planning for migration of IaaS resources from classic to Azure Resource Manager
Use PowerShell to migrate IaaS resources from classic to Azure Resource Manager
Community tools for assisting with migration of IaaS resources from classic to Azure Resource Manager
Review most common migration errors
Review the most frequently asked questions about migrating IaaS resources from classic to Azure Resource
Manager
Migrate IaaS resources from classic to Azure
Resource Manager by using Azure CLI
4/9/2018 • 6 min to read
These steps show you how to use Azure command-line interface (CLI) commands to migrate infrastructure as a
service (IaaS ) resources from the classic deployment model to the Azure Resource Manager deployment model.
The article requires the Azure CLI 1.0. Since Azure CLI 2.0 is only applicable for Azure Resource Manager
resources, it cannot be used for this migration.
NOTE
All the operations described here are idempotent. If you have a problem other than an unsupported feature or a
configuration error, we recommend that you retry the prepare, abort, or commit operation. The platform will then try the
action again.
The original article includes a flowchart identifying the order in which steps need to be executed during a migration process; the image is not reproduced here.
IMPORTANT
Application Gateways are not currently supported for migration from classic to Resource Manager. To migrate a classic virtual
network with an Application gateway, remove the gateway before running a Prepare operation to move the network. After
you complete the migration, reconnect the gateway in Azure Resource Manager.
ExpressRoute gateways connecting to ExpressRoute circuits in another subscription cannot be migrated automatically. In
such cases, remove the ExpressRoute gateway, migrate the virtual network and recreate the gateway. Please see Migrate
ExpressRoute circuits and associated virtual networks from the classic to the Resource Manager deployment model for more
information.
First, sign in to your account:
azure login
NOTE
Registration is a one-time step, but it needs to be done before attempting migration. Without registering, you'll see the following error message:
BadRequest : Subscription is not registered for migration.
Register with the migration resource provider by using the following command. Note that in some cases, this
command times out. However, the registration will be successful.
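A sketch of that registration (Microsoft.ClassicInfrastructureMigrate is the provider namespace used by the platform migration):
azure provider register Microsoft.ClassicInfrastructureMigrate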
Please wait five minutes for the registration to finish. You can check the status of the approval by using the
following command. Make sure that RegistrationState is Registered before you proceed.
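A sketch of the status check:
azure provider show Microsoft.ClassicInfrastructureMigrate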
Step 3: Make sure you have enough Azure Resource Manager Virtual
Machine vCPUs in the Azure region of your current deployment or
VNET
For this step you'll need to switch to arm mode. Do this with the following command.
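For example:
azure config mode arm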
You can use the following CLI command to check the current number of vCPUs you have in Azure Resource
Manager. To learn more about vCPU quotas, see Limits and the Azure Resource Manager
Once you're done verifying this step, you can switch back to asm mode.
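For example:
azure config mode asm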
Run the following command to get the deployment name for the cloud service from the verbose output. In most
cases, the deployment name is the same as the cloud service name.
First, validate if you can migrate the cloud service using the following commands:
azure service deployment validate-migration <serviceName> <deploymentName> new "" "" ""
Prepare the virtual machines in the cloud service for migration. You have two options to choose from.
If you want to migrate the VMs to a platform-created virtual network, use the following command.
azure service deployment prepare-migration <serviceName> <deploymentName> new "" "" ""
If you want to migrate to an existing virtual network in the Resource Manager deployment model, use the
following command.
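A sketch of that command (the argument order for the existing-network form is an assumption; check azure service deployment prepare-migration --help for the exact usage):
azure service deployment prepare-migration <serviceName> <deploymentName> existing <destVnetResourceGroupName> <subnetName> <vnetName>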
After the prepare operation is successful, you can look through the verbose output to get the migration state of
the VMs and ensure that they are in the Prepared state.
If the prepared configuration looks good, you can move forward and commit the resources by using the following
command.
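A sketch, following the pattern of the validate and prepare commands above:
azure service deployment commit-migration <serviceName> <deploymentName>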
In the above example, the virtualNetworkName is the entire name "Group classicubuntu16
classicubuntu16".
First, validate if you can migrate the virtual network using the following command:
Prepare the virtual network of your choice for migration by using the following command.
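Sketches of the validate and prepare commands for a virtual network (following the naming pattern of the service deployment commands above):
azure network vnet validate-migration <virtualNetworkName>
azure network vnet prepare-migration <virtualNetworkName>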
Check the configuration for the prepared virtual machines by using either CLI or the Azure portal. If you are not
ready for migration and you want to go back to the old state, use the following command.
If the prepared configuration looks good, you can move forward and commit the resources by using the following
command.
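Sketches of the abort and commit commands for the virtual network:
azure network vnet abort-migration <virtualNetworkName>
azure network vnet commit-migration <virtualNetworkName>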
Check the configuration for the prepared storage account by using either CLI or the Azure portal. If you are not
ready for migration and you want to go back to the old state, use the following command.
If the prepared configuration looks good, you can move forward and commit the resources by using the following
command.
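Sketches of the corresponding storage account commands (the prepare step, not shown in the text above, precedes the abort/commit choice):
azure storage account prepare-migration <storageAccountName>
azure storage account abort-migration <storageAccountName>
azure storage account commit-migration <storageAccountName>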
Next steps
Overview of platform-supported migration of IaaS resources from classic to Azure Resource Manager
Technical deep dive on platform-supported migration from classic to Azure Resource Manager
Planning for migration of IaaS resources from classic to Azure Resource Manager
Use PowerShell to migrate IaaS resources from classic to Azure Resource Manager
Community tools for assisting with migration of IaaS resources from classic to Azure Resource Manager
Review most common migration errors
Review the most frequently asked questions about migrating IaaS resources from classic to Azure Resource
Manager
Common errors during Classic to Azure Resource
Manager migration
4/9/2018 • 9 min to read
This article catalogs the most common errors and mitigations during the migration of IaaS resources from Azure
classic deployment model to the Azure Resource Manager stack.
List of errors
ERROR STRING MITIGATION
Internal server error In some cases, this is a transient error that goes away with a
retry. If it continues to persist, contact Azure support as it
needs investigation of platform logs.
Migration is not supported for Deployment {deployment- This happens when a deployment contains a web/worker role.
name} in HostedService {hosted-service-name} because it is a Since migration is only supported for Virtual Machines, please
PaaS deployment (Web/Worker). remove the web/worker role from the deployment and try
migration again.
Template {template-name} deployment failed. CorrelationId= In the backend of migration service, we use Azure Resource
{guid} Manager templates to create resources in the Azure Resource
Manager stack. Since templates are idempotent, usually you
can safely retry the migration operation to get past this error.
If this error continues to persist, please contact Azure support
and give them the CorrelationId.
The virtual network {virtual-network-name} does not exist. This can happen if you created the Virtual Network in the new
Azure portal. The actual Virtual Network name follows the
pattern "Group * "
VM {vm-name} in HostedService {hosted-service-name} XML extensions such as BGInfo 1.* are not supported in
contains Extension {extension-name} which is not supported Azure Resource Manager. Therefore, these extensions cannot
in Azure Resource Manager. It is recommended to uninstall it be migrated. If these extensions are left installed on the
from the VM before continuing with migration. virtual machine, they are automatically uninstalled before
completing the migration.
VM {vm-name} in HostedService {hosted-service-name} This is the scenario where the virtual machine is configured
contains Extension VMSnapshot/VMSnapshotLinux, which is for Azure Backup. Since this is currently an unsupported
currently not supported for Migration. Uninstall it from the scenario, please follow the workaround at
VM and add it back using Azure Resource Manager after the https://aka.ms/vmbackupmigration
Migration is Complete
ERROR STRING MITIGATION
VM {vm-name} in HostedService {hosted-service-name} Azure guest agent & VM Extensions need outbound internet
contains Extension {extension-name} whose Status is not access to the VM storage account to populate their status.
being reported from the VM. Hence, this VM cannot be Common causes of status failure include
migrated. Ensure that the Extension status is being reported a Network Security Group that blocks outbound access to
or uninstall the extension from the VM and retry migration. the internet
If the VNET has on-prem DNS servers and DNS
VM {vm-name} in HostedService {hosted-service-name} connectivity is lost
contains Extension {extension-name} reporting Handler
Status: {handler-status}. Hence, the VM cannot be migrated. If you continue to see an unsupported status, you can
Ensure that the Extension handler status being reported is uninstall the extensions to skip this check and move forward
{handler-status} or uninstall it from the VM and retry with migration.
migration.
ERROR STRING: Migration is not supported for Deployment {deployment-name} in HostedService {hosted-service-name} because it has multiple Availability Sets.
MITIGATION: Currently, only hosted services that have one or fewer Availability Sets can be migrated. To work around this problem, please move the additional Availability Sets, and the Virtual Machines in those Availability Sets, to a different hosted service.

ERROR STRING: Migration is not supported for Deployment {deployment-name} in HostedService {hosted-service-name} because it has VMs that are not part of the Availability Set even though the HostedService contains one.
MITIGATION: The workaround for this scenario is to either move all the virtual machines into a single Availability Set or remove all virtual machines from the Availability Set in the hosted service.
ERROR STRING: Storage account/HostedService/Virtual Network {virtual-network-name} is in the process of being migrated and hence cannot be changed.
MITIGATION: This error happens when the "Prepare" migration operation has been completed on the resource and an operation that would make a change to the resource is triggered. Because of the lock on the management plane after the "Prepare" operation, any changes to the resource are blocked. To unlock the management plane, you can run the "Commit" migration operation to complete migration or the "Abort" migration operation to roll back the "Prepare" operation.
ERROR STRING: Migration is not allowed for HostedService {hosted-service-name} because it has VM {vm-name} in State: RoleStateUnknown. Migration is allowed only when the VM is in one of the following states - Running, Stopped, Stopped Deallocated.
MITIGATION: The VM might be going through a state transition, which usually happens during an update operation on the HostedService such as a reboot or an extension installation. It is recommended to let the update operation complete on the HostedService before trying migration.
ERROR STRING: Deployment {deployment-name} in HostedService {hosted-service-name} contains a VM {vm-name} with Data Disk {data-disk-name} whose physical blob size {size-of-the-vhd-blob-backing-the-data-disk} bytes does not match the VM Data Disk logical size {size-of-the-data-disk-specified-in-the-vm-api} bytes. Migration will proceed without specifying a size for the data disk for the Azure Resource Manager VM.
MITIGATION: This error happens if you've resized the VHD blob without updating the size in the VM API model. Detailed mitigation steps are outlined in the "Detailed mitigations" section below.
ERROR STRING: A storage exception occurred while validating data disk {data disk name} with media link {data disk Uri} for VM {VM name} in Cloud Service {Cloud Service name}. Please ensure that the VHD media link is accessible for this virtual machine.
MITIGATION: This error can happen if the disks of the VM have been deleted or are no longer accessible. Please make sure the disks for the VM exist.
ERROR STRING: VM {vm-name} in HostedService {cloud-service-name} contains Disk with MediaLink {vhd-uri} which has blob name {vhd-blob-name} that is not supported in Azure Resource Manager.
MITIGATION: This error occurs when the name of the blob contains a "/", which is not currently supported by the Compute Resource Provider.
ERROR STRING: Migration is not allowed for Deployment {deployment-name} in HostedService {cloud-service-name} as it is not in the regional scope. Please refer to http://aka.ms/regionalscope for moving this deployment to regional scope.
MITIGATION: In 2014, Azure announced that networking resources would move from a cluster-level scope to a regional scope. See http://aka.ms/regionalscope for more details. This error happens when the deployment being migrated has not had an update operation, which automatically moves it to a regional scope. The best workaround is to either add an endpoint to a VM or add a data disk to the VM, and then retry migration. See How to set up endpoints on a classic Windows virtual machine in Azure or Attach a data disk to a Windows virtual machine created with the classic deployment model.
ERROR STRING: Migration is not supported for Virtual Network {vnet-name} because it has non-gateway PaaS deployments.
MITIGATION: This error occurs when you have non-gateway PaaS deployments, such as Application Gateway or API Management services, that are connected to the Virtual Network.
Detailed mitigations
VM with Data Disk whose physical blob size bytes does not match the VM Data Disk logical size bytes.
This happens when the data disk logical size gets out of sync with the actual VHD blob size. This can be easily verified using the following commands:
Verifying the issue
# Store the VM details in the VM object
$vm = Get-AzureVM -ServiceName $servicename -Name $vmname

# Display the data disk properties of the VHD; the output below comes from this call
$vm.VM.DataVirtualHardDisks[0]
HostCaching : None
DiskLabel :
DiskName : coreosvm-coreosvm-0-201611230636240687
Lun : 0
LogicalDiskSizeInGB : 11
MediaLink : https://contosostorage.blob.core.windows.net/vhds/coreosvm-dd1.vhd
SourceMediaLink :
IOType : Standard
ExtensionData :
# Now get the properties of the blob backing the data disk above
# NOTE the size of the blob is about 15 GB which is different from LogicalDiskSizeInGB above
$blob = Get-AzureStorageBlob -Blob "coreosvm-dd1.vhd" -Container vhds
$blob
ICloudBlob : Microsoft.WindowsAzure.Storage.Blob.CloudPageBlob
BlobType : PageBlob
Length : 16106127872
ContentType : application/octet-stream
LastModified : 11/23/2016 7:16:22 AM +00:00
SnapshotTime :
ContinuationToken :
Context : Microsoft.WindowsAzure.Commands.Common.Storage.AzureStorageContext
Name : coreosvm-dd1.vhd
# Convert the blob size in bytes to GB into a variable which we'll use later
$newSize = [int]($blob.Length / 1GB)
$newSize
15
# Store the disk name of the data disk as we'll use this to identify the disk to be updated
$diskName = $vm.VM.DataVirtualHardDisks[0].DiskName
# Store the LUN of the data disk to remove (0 in the example output above)
$lunToRemove = $vm.VM.DataVirtualHardDisks[0].Lun

# Now remove the data disk from the VM so that the disk isn't leased by the VM and its size can be updated
Remove-AzureDataDisk -LUN $lunToRemove -VM $vm | Update-AzureVm -Name $vmname -ServiceName $servicename

# Check the detached disk's properties; the output below shows AttachedTo as empty
Get-AzureDisk -DiskName $diskName
AffinityGroup :
AttachedTo :
IsCorrupted : False
Label :
Location : East US
DiskSizeInGB : 11
MediaLink : https://contosostorage.blob.core.windows.net/vhds/coreosvm-dd1.vhd
DiskName : coreosvm-coreosvm-0-201611230636240687
SourceImageName :
OS :
IOType : Standard
OperationDescription : Get-AzureDisk
OperationId : 0c56a2b7-a325-123b-7043-74c27d5a61fd
OperationStatus : Succeeded
# Update the disk's size in the VM API model to match the size of the backing blob
Update-AzureDisk -DiskName $diskName -ResizedSizeInGB $newSize -Label $diskName

# Now verify that the "DiskSizeInGB" property of the disk matches the size of the blob
Get-AzureDisk -DiskName $diskName
AffinityGroup :
AttachedTo :
IsCorrupted : False
Label : coreosvm-coreosvm-0-201611230636240687
Location : East US
DiskSizeInGB : 15
MediaLink : https://contosostorage.blob.core.windows.net/vhds/coreosvm-dd1.vhd
DiskName : coreosvm-coreosvm-0-201611230636240687
SourceImageName :
OS :
IOType : Standard
OperationDescription : Get-AzureDisk
OperationId : 1v53bde5-cv56-5621-9078-16b9c8a0bad2
OperationStatus : Succeeded
# Now we'll add the disk back to the VM as a data disk. First we need to get an updated VM object
$vm = Get-AzureVM -ServiceName $servicename -Name $vmname
Add-AzureDataDisk -Import -DiskName $diskName -LUN 0 -VM $vm -HostCaching ReadWrite | Update-AzureVm -Name $vmname -ServiceName $servicename
Next steps
Overview of platform-supported migration of IaaS resources from classic to Azure Resource Manager
Technical deep dive on platform-supported migration from classic to Azure Resource Manager
Planning for migration of IaaS resources from classic to Azure Resource Manager
Use PowerShell to migrate IaaS resources from classic to Azure Resource Manager
Use CLI to migrate IaaS resources from classic to Azure Resource Manager
Community tools for assisting with migration of IaaS resources from classic to Azure Resource Manager
Review the most frequently asked questions about migrating IaaS resources from classic to Azure Resource
Manager
Community tools to migrate IaaS resources from
classic to Azure Resource Manager
4/9/2018 • 1 min to read • Edit Online
This article catalogs the tools that have been provided by the community to assist with migration of IaaS
resources from classic to the Azure Resource Manager deployment model.
NOTE
These tools are not officially supported by Microsoft Support. Therefore they are open sourced on GitHub and we're happy
to accept PRs for fixes or additional scenarios. To report an issue, use the GitHub issues feature.
Migrating with these tools will cause downtime for your classic Virtual Machine. If you're looking for platform supported
migration, visit
Platform supported migration of IaaS resources from Classic to Azure Resource Manager stack
Technical Deep Dive on Platform supported migration from Classic to Azure Resource Manager
Migrate IaaS resources from Classic to Azure Resource Manager using Azure PowerShell
AsmMetadataParser
This is a collection of helper tools created as part of enterprise migrations from Azure Service Management to Azure Resource Manager. This tool allows you to replicate your infrastructure into another subscription, which can be used to test migration and iron out any issues before running the migration on your Production subscription.
Link to the tool documentation
migAz
migAz is an additional option to migrate a complete set of classic IaaS resources to Azure Resource Manager
IaaS resources. The migration can occur within the same subscription or between different subscriptions and
subscription types (ex: CSP subscriptions).
Link to the tool documentation
Next Steps
Overview of platform-supported migration of IaaS resources from classic to Azure Resource Manager
Technical deep dive on platform-supported migration from classic to Azure Resource Manager
Planning for migration of IaaS resources from classic to Azure Resource Manager
Use PowerShell to migrate IaaS resources from classic to Azure Resource Manager
Use CLI to migrate IaaS resources from classic to Azure Resource Manager
Review most common migration errors
Review the most frequently asked questions about migrating IaaS resources from classic to Azure Resource
Manager
Frequently asked questions about classic to Azure
Resource Manager migration
4/9/2018 • 5 min to read • Edit Online
the VM from classic to Resource Manager mode. Follow these steps to transfer your VM backups:
1. In the Backup vault, go to the Protected Items tab and select the VM. Click Stop Protection. Leave Delete
associated backup data option unchecked.
2. Delete the backup/snapshot extension from the VM.
3. Migrate the virtual machine from classic mode to Resource Manager mode. Make sure the storage and
network information corresponding to the virtual machine is also migrated to Resource Manager mode.
4. Create a Recovery Services vault and configure backup on the migrated virtual machine using Backup action
on top of vault dashboard. For detailed information on backing up a VM to a Recovery Services vault, see the
article, Protect Azure VMs with a Recovery Services vault.
What happens if I run into a quota error while preparing the IaaS
resources for migration?
We recommend that you abort your migration and then log a support request to increase the quotas in the
region where you are migrating the VMs. After the quota request is approved, you can start executing the
migration steps again.
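Before filing the request, it can help to see how close you are to the regional limits. A minimal sketch, assuming Azure CLI 2.0 is installed and the region name is illustrative:

# List current resource usage against quota for a region
az vm list-usage --location eastus --output table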
What if I don't like the names of the resources that the platform chose
during migration?
All the resources that you explicitly provide names for in the classic deployment model are retained during
migration. In some cases, new resources are created. For example: a network interface is created for every VM.
We currently don't support the ability to control the names of these new resources created during migration. Log
your votes for this feature on the Azure feedback forum.
I got the message "VM is reporting the overall agent status as Not
Ready. Hence, the VM cannot be migrated. Ensure that the VM Agent is
reporting overall agent status as Ready" or "VM contains Extension
whose Status is not being reported from the VM. Hence, this VM cannot
be migrated."
This message is received when the VM does not have outbound connectivity to the internet. The VM agent uses
outbound connectivity to reach the Azure storage account for updating the agent status every five minutes.
Next steps
Overview of platform-supported migration of IaaS resources from classic to Azure Resource Manager
Technical deep dive on platform-supported migration from classic to Azure Resource Manager
Planning for migration of IaaS resources from classic to Azure Resource Manager
Use PowerShell to migrate IaaS resources from classic to Azure Resource Manager
Use CLI to migrate IaaS resources from classic to Azure Resource Manager
Community tools for assisting with migration of IaaS resources from classic to Azure Resource Manager
Review most common migration errors
Troubleshoot SSH connections to an Azure Linux VM
that fails, errors out, or is refused
5/10/2018 • 10 min to read • Edit Online
There are various reasons why you might encounter Secure Shell (SSH) errors, SSH connection failures, or refused SSH connections when you try to connect to a Linux virtual machine (VM). This article helps you find and correct the problems. You can use the Azure portal, Azure CLI, or the VM Access Extension for Linux to troubleshoot and resolve connection problems.
NOTE
Azure has two different deployment models for creating and working with resources: Resource Manager and classic. This
article covers using both models, but Microsoft recommends that most new deployments use the Resource Manager
model.
If you need more help at any point in this article, you can contact the Azure experts on the MSDN Azure and
Stack Overflow forums. Alternatively, you can file an Azure support incident. Go to the Azure support site and
select Get support. For information about using Azure Support, read the Microsoft Azure support FAQ.
If using SSH key authentication, you can reset the SSH key for a given user. The following example uses az vm
access set-linux-user to update the SSH key stored in ~/.ssh/id_rsa.pub for the user named myUsername , on
the VM named myVM in myResourceGroup . Use your own values as follows:
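A minimal sketch of that command follows. In recent Azure CLI 2.0 builds the same operation is exposed as az vm user update (the successor to az vm access set-linux-user); reading the key with cat is an assumption about where your public key lives:

# Update the stored SSH public key for myUsername on myVM
az vm user update --resource-group myResourceGroup --name myVM \
  --username myUsername --ssh-key-value "$(cat ~/.ssh/id_rsa.pub)"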
Alternatively, you can call the VMAccessForLinux extension directly. To reset SSHD this way, first create a file named settings.json that contains:

{
"reset_ssh":"True"
}
Using the Azure CLI, you then call the VMAccessForLinux extension to reset your SSHD connection by specifying
your json file. The following example uses az vm extension set to reset SSHD on the VM named myVM in
myResourceGroup . Use your own values as follows:
az vm extension set --resource-group myResourceGroup --vm-name myVM \
--name VMAccessForLinux --publisher Microsoft.OSTCExtensions --version 1.2 --settings settings.json
To reset a user's credentials instead, create a file named settings.json that specifies the username and password. The following example resets the credentials for myUsername to the value specified in myPassword. Enter the following lines into your settings.json file, using your own values:

{
"username":"myUsername", "password":"myPassword"
}

Or, to reset the SSH key for a user, specify the username and the public key instead:

{
"username":"myUsername", "ssh_key":"mySSHKey"
}
After creating your json file, use the Azure CLI to call the VMAccessForLinux extension to reset your SSH user
credentials by specifying your json file. The following example resets credentials on the VM named myVM in
myResourceGroup . Use your own values as follows:
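The call is the same az vm extension set invocation shown earlier, pointed at your updated settings.json (a sketch, assuming the same resource names):

az vm extension set --resource-group myResourceGroup --vm-name myVM \
  --name VMAccessForLinux --publisher Microsoft.OSTCExtensions --version 1.2 --settings settings.json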
If you created and uploaded a custom Linux disk image, make sure the Microsoft Azure Linux Agent version 2.0.5
or later is installed. For VMs created using Gallery images, this access extension is already installed and
configured for you.
Reset SSH configuration
The SSHD configuration itself may be misconfigured or the service encountered an error. You can reset SSHD to
make sure the SSH configuration itself is valid. Resetting SSHD should be the first troubleshooting step you take.
The following example resets SSHD on a VM named myVM in the resource group named myResourceGroup . Use
your own VM and resource group names as follows:
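A minimal sketch of the reset, assuming Azure CLI 2.0:

# Reset the SSHD configuration on the VM to a known-good default
az vm user reset-ssh --resource-group myResourceGroup --name myVM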
If using SSH key authentication, you can also reset the SSH key for a given user, as shown in the earlier example that updates the SSH key stored in ~/.ssh/id_rsa.pub for the user named myUsername, on the VM named myVM in myResourceGroup.
Restart a VM
If you have reset the SSH configuration and user credentials, or encountered an error in doing so, you can try
restarting the VM to address underlying compute issues.
Azure portal
To restart a VM using the Azure portal, select your VM and click the Restart button as in the following example:
Redeploy a VM
You can redeploy a VM to another node within Azure, which may correct any underlying networking issues. For
information about redeploying a VM, see Redeploy virtual machine to new Azure node.
NOTE
After this operation finishes, ephemeral disk data will be lost and dynamic IP addresses that are associated with the virtual
machine will be updated.
Azure portal
To redeploy a VM using the Azure portal, select your VM and scroll down to the Support + Troubleshooting
section. Click the Redeploy button as in the following example:
Additional resources
If you are still unable to SSH to your VM after following the above steps, see the detailed troubleshooting steps for additional ways to resolve your issue.
For more information about troubleshooting application access, see Troubleshoot access to an application
running on an Azure virtual machine
For more information about troubleshooting virtual machines that were created by using the classic
deployment model, see How to reset a password or SSH for Linux-based virtual machines.
Detailed SSH troubleshooting steps for issues
connecting to a Linux VM in Azure
1/22/2018 • 6 min to read • Edit Online
There are many possible reasons that the SSH client might not be able to reach the SSH service on the VM. If you
have followed through the more general SSH troubleshooting steps, you need to further troubleshoot the
connection issue. This article guides you through detailed troubleshooting steps to determine where the SSH
connection is failing and how to resolve it.
The following steps help you isolate the source of the failure and figure out solutions or workarounds.
1. Check the status of the VM in the portal. In the Azure portal, select Virtual machines > VM name.
The status pane for the VM should show Running. Scroll down to show recent activity for compute,
storage, and network resources.
2. Select Settings to examine endpoints, IP addresses, network security groups, and other settings.
The VM should have an endpoint defined for SSH traffic that you can view in Endpoints or Network
security group. Endpoints in VMs that were created by using Resource Manager are stored in a network
security group. Verify that the rules have been applied to the network security group and are referenced in
the subnet.
To verify network connectivity, check the configured endpoints and see if you can connect to the VM through
another protocol, such as HTTP or another service.
After these steps, try the SSH connection again.
Find the source of the issue
The SSH client on your computer might fail to connect to the SSH service on the Azure VM due to issues or
misconfigurations in the following areas:
SSH client computer
Organization edge device
Cloud service endpoint and access control list (ACL)
Network security groups
Linux-based Azure VM
If the connection fails, check for the following issues on your computer:
A local firewall setting that is blocking inbound or outbound SSH traffic (TCP 22)
Locally installed client proxy software that is preventing SSH connections
Locally installed network monitoring software that is preventing SSH connections
Other types of security software that either monitor traffic or allow/disallow specific types of traffic
If one of these conditions apply, temporarily disable the software and try an SSH connection to an on-premises
computer to find out the reason the connection is being blocked on your computer. Then work with your network
administrator to correct the software settings to allow SSH connections.
If you are using certificate authentication, verify that you have these permissions to the .ssh folder in your home
directory:
chmod 700 ~/.ssh
chmod 644 ~/.ssh/*.pub
chmod 600 ~/.ssh/id_rsa (or any other files that have your private keys stored in them)
chmod 644 ~/.ssh/known_hosts (contains hosts that you've connected to via SSH)
Source 2: Organization edge device
To eliminate your organization edge device as the source of the failure, verify that a computer directly connected to
the Internet can make SSH connections to your Azure VM. If you are accessing the VM over a site-to-site VPN or
an Azure ExpressRoute connection, skip to Source 4: Network security groups.
If you don't have a computer that is directly connected to the Internet, create a new Azure VM in its own resource
group or cloud service and use that new VM. For more information, see Create a virtual machine running Linux in
Azure. Delete the resource group or VM and cloud service when you're done with your testing.
If you can create an SSH connection with a computer that's directly connected to the Internet, check your
organization edge device for:
An internal firewall that's blocking SSH traffic with the Internet
A proxy server that's preventing SSH connections
Intrusion detection or network monitoring software running on devices in your edge network that's preventing
SSH connections
Work with your network administrator to correct the settings of your organization edge devices to allow SSH
traffic with the Internet.
To eliminate the cloud service endpoint and ACL as the source of the failure, verify that another Azure VM in the
same virtual network can connect using SSH.
If you don't have another VM in the same virtual network, you can easily create one. For more information, see
Create a Linux VM on Azure using the CLI. Delete the extra VM when you are done with your testing.
If you can create an SSH connection with a VM in the same virtual network, check the following areas:
The endpoint configuration for SSH traffic on the target VM. The private TCP port of the endpoint should
match the TCP port on which the SSH service on the VM is listening. (The default port is 22). Verify the SSH
TCP port number in the Azure portal by selecting Virtual machines > VM name > Settings > Endpoints.
The ACL for the SSH traffic endpoint on the target virtual machine. An ACL enables you to specify
allowed or denied incoming traffic from the Internet, based on its source IP address. Misconfigured ACLs can
prevent incoming SSH traffic to the endpoint. Check your ACLs to ensure that incoming traffic from the public
IP addresses of your proxy or other edge server is allowed. For more information, see About network access
control lists (ACLs).
To eliminate the endpoint as a source of the problem, remove the current endpoint, create another endpoint, and specify TCP port 22 for both the public and private port numbers. For more information, see Set up endpoints on a virtual machine in Azure.
Additional resources
For more information about troubleshooting application access, see Troubleshoot access to an application running
on an Azure virtual machine
How to reset local Linux password on Azure VMs
1/3/2018 • 1 min to read • Edit Online
This article introduces several methods to reset local Linux Virtual Machine (VM) passwords. If the user account is expired or you just want to create a new account, you can use the following methods to create a new local admin account and regain access to the VM.
Symptoms
You can't log in to the VM, and you receive a message that indicates that the password that you used is incorrect.
Additionally, you can't use VMAgent to reset your password on the Azure Portal.
Become root on the VM where the broken machine's OS disk is attached:

sudo su

1. Run fdisk -l or look at system logs to find the newly attached disk. Locate the drive name to mount. Then, on the temporary VM, look in the relevant log file.

2. Create a mount point for the disk:

mkdir /tempmount

3. Mount the OS disk on the mount point. You usually need to mount sdc1 or sdc2, depending on which partition hosts the /etc directory on the broken machine's disk. For example (the device name is an assumption; substitute the one found in step 1):

mount /dev/sdc1 /tempmount

4. Back up the temporary VM's credential files and copy over the files from the broken machine's disk:

cp /etc/passwd /etc/passwd_orig
cp /etc/shadow /etc/shadow_orig
cp /tempmount/etc/passwd /etc/passwd
cp /tempmount/etc/shadow /etc/shadow

5. Reset the password for the affected user:

passwd <<USER>>

6. Move the modified files to the correct location on the broken machine's disk, and restore the temporary VM's original files:

cp /etc/passwd /tempmount/etc/passwd
cp /etc/shadow /tempmount/etc/shadow
cp /etc/passwd_orig /etc/passwd
cp /etc/shadow_orig /etc/shadow

7. Unmount the disk:

cd /
umount /tempmount
Next steps
Troubleshoot Azure VM by attaching OS disk to another Azure VM
Azure CLI: How to delete and re-deploy a VM from VHD
Understand a system reboot for Azure VM
5/11/2018 • 7 min to read • Edit Online
Azure virtual machines (VMs) might sometimes reboot for no apparent reason, without evidence of your having
initiated the reboot operation. This article lists the actions and events that can cause VMs to reboot and provides
insight into how to avoid unexpected reboot issues or reduce the impact of such issues.
NOTE
Linux machines that have old kernel versions are affected by a kernel panic during this update method. To avoid this issue,
update to kernel version 3.10.0-327.10.1 or later. For more information, see An Azure Linux VM on a 3.10-based kernel
panics after a host node upgrade.
Support for two debugging features is now available in Azure: Console Output and Screenshot support for Azure
Virtual Machines Resource Manager deployment model.
When bringing your own image to Azure or even booting one of the platform images, there can be many reasons
why a Virtual Machine gets into a non-bootable state. These features enable you to easily diagnose and recover
your Virtual Machines from boot failures.
For Linux Virtual Machines, you can easily view the output of your console log from the Portal:
However, for both Windows and Linux Virtual Machines, Azure also enables you to see a screenshot of the VM
from the hypervisor:
Both of these features are supported for Azure Virtual Machines in all regions. Note that screenshots and output can take up to 10 minutes to appear in your storage account.
NOTE
The Boot diagnostics feature does not support premium storage accounts. If you use a premium storage account for Boot diagnostics, you might receive the StorageAccountTypeNotSupported error when you start the VM.
3. If you are deploying from an Azure Resource Manager template, navigate to your virtual machine resource
and append the diagnostics profile section. Remember to use the “2015-06-15” API version header.
{
  "apiVersion": "2015-06-15",
  "type": "Microsoft.Compute/virtualMachines",
  …
4. The diagnostics profile enables you to select the storage account where you want to put these logs.
"diagnosticsProfile": {
"bootDiagnostics": {
"enabled": true,
"storageUri": "[concat('http://', parameters('newStorageAccountName'),
'.blob.core.windows.net')]"
}
}
}
}
To deploy a sample virtual machine with boot diagnostics enabled, check out our repo here.
Enable Boot diagnostics on existing virtual machine
To enable Boot diagnostics on an existing virtual machine, follow these steps (an Azure CLI alternative is shown after the list):
1. Log in to the Azure portal, and then select the virtual machine.
2. In Support + troubleshooting, select Boot diagnostics > Settings, change the status to On, and then select
a storage account.
3. Make sure that the Boot diagnostics option is selected and then save the change.
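The same setting can be applied from the command line. A sketch, assuming Azure CLI 2.0 and an existing storage account named mystorageaccount:

# Enable boot diagnostics, writing logs and screenshots to the given storage account
az vm boot-diagnostics enable --resource-group myResourceGroup --name myVM \
  --storage https://mystorageaccount.blob.core.windows.net/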
Next steps
If you see a "Failed to get contents of the log" error when you use VM Boot Diagnostics, see Failed to get contents
of the log error in VM Boot Diagnostics.
Virtual machine serial console (preview)
4/11/2018 • 7 min to read • Edit Online
The virtual machine serial console on Azure provides access to a text-based console for Linux and Windows virtual machines. The serial connection is to the COM1 serial port of the virtual machine and provides access that is independent of the virtual machine's network or operating system state. Access to the serial console for a virtual machine is currently available only through the Azure portal, and is allowed only for users who have VM Contributor or higher access to the virtual machine.
NOTE
Previews are made available to you on the condition that you agree to the terms of use. For more information, see Microsoft
Azure Supplemental Terms of Use for Microsoft Azure Previews. Currently, this service is in public preview, and access to the serial console for virtual machines is available in global Azure regions. At this point, the serial console is not available in the Azure Government, Azure Germany, and Azure China clouds.
Prerequisites
The virtual machine MUST have boot diagnostics enabled.
The account using the serial console must have the Contributor role for the VM and for the boot diagnostics storage account.
For settings specific to your Linux distribution, see Accessing the serial console for Linux.
Disable feature
The serial console functionality can be deactivated for specific VMs by disabling that VM's boot diagnostics setting.
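A sketch of doing so from Azure CLI 2.0 (resource names are illustrative):

# Disabling boot diagnostics also disables the serial console for the VM
az vm boot-diagnostics disable --resource-group myResourceGroup --name myVM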
While no access passwords for the console are logged, if commands run within the console contain or output passwords, secrets, user names, or any other form of personally identifiable information (PII), those will be written to the virtual machine boot diagnostics logs, along with all other visible text, as part of the implementation of the serial console's scrollback functionality. These logs are circular, and only individuals with read permissions to the diagnostics storage account have access to them. However, we recommend following the best practice of using an SSH connection for anything that may involve secrets and/or PII.
Concurrent usage
If a user is connected to serial console and another user successfully requests access to that same virtual machine,
the first user will be disconnected and the second user connected in a manner akin to the first user standing up and
leaving the physical console and a new user sitting down.
Caution
This means that the user who gets disconnected will not be logged out! The ability to enforce a logout upon disconnect (via SIGHUP or a similar mechanism) is still on the roadmap. For Windows, there is an automatic timeout enabled in SAC; for Linux, you can configure a terminal timeout setting. To do this, add export TMOUT=600 to the .bash_profile or .profile of the user you log on with in the console, to time out the session after 10 minutes.
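A one-line sketch of that change, assuming a bash login shell:

# Append a 10-minute idle timeout to the login shell profile
echo 'export TMOUT=600' >> ~/.bash_profile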
USE CASE: Broken FSTAB file. ACTION: Press the Enter key to continue and fix the fstab file using a text editor (see how to fix fstab issues). OS: Linux
USE CASE: Incorrect firewall rules. ACTION: Access the serial console and fix iptables or Windows firewall rules. OS: Linux/Windows
USE CASE: Network lock down system. ACTION: Access the serial console via the portal to manage the system. OS: Linux/Windows
USE CASE: Interacting with the bootloader. ACTION: Access GRUB/BCD via the serial console. OS: Linux/Windows
Now, if the system boots into single user mode, you can log in using the root password. Alternatively, for RHEL 7.4+ or 6.9+, you can enable single user mode from the GRUB prompts; see the instructions here.
Access for Ubuntu
Ubuntu images available on Azure have console access enabled by default. If the system boots into single user mode, you can access it without additional credentials.
Access for CoreOS
CoreOS images available on Azure have console access enabled by default. If necessary, the system can be booted into single user mode by changing the GRUB parameters; adding coreos.autologin=ttyS0 enables the core user to log in and be available in the serial console.
Access for SUSE
SLES images available on Azure have console access enabled by default. If you are using an older version of SLES on Azure, follow the KB article to enable the serial console. Newer images of SLES 12 SP3+ also allow access via the serial console in case the system boots into emergency mode.
Access for CentOS
CentOS images available on Azure have console access enabled by default. For Single User Mode, follow
instructions similar to Red Hat Images above.
Access for Oracle Linux
Oracle Linux images available on Azure have console access enabled by default. For Single User Mode, follow
instructions similar to Red Hat Images above.
Access for custom Linux image
To enable the serial console for your custom Linux VM image, enable console access in /etc/inittab to run a terminal on ttyS0. The following is an example entry for the inittab file:
S0:12345:respawn:/sbin/agetty -L 115200 console vt102
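To pick up the change without a reboot on a SysV-style init system (an assumption; systemd-based distributions ignore /etc/inittab and configure a getty on ttyS0 differently), you can ask init to re-read its configuration:

# Reload /etc/inittab
sudo telinit q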
Errors
Most errors are transient in nature, and retrying the connection addresses them. The following table shows a list of errors and mitigations:
ERROR: Unable to retrieve boot diagnostics settings for ''. To use the serial console, ensure that boot diagnostics is enabled for this VM.
MITIGATION: Ensure that the VM has boot diagnostics enabled.

ERROR: The VM is in a stopped deallocated state. Start the VM and retry the serial console connection.
MITIGATION: The virtual machine must be in a started state to access the serial console.

ERROR: You do not have the required permissions to use this VM's serial console. Ensure you have at least VM Contributor role permissions.
MITIGATION: Serial console access requires certain permissions. See the access requirements for details.

ERROR: Unable to determine the resource group for the boot diagnostics storage account ''. Verify that boot diagnostics is enabled for this VM and you have access to this storage account.
MITIGATION: Serial console access requires certain permissions. See the access requirements for details.
Known issues
As we are still in the preview stages for serial console access, we are working through some known issues. The following is a list of these issues with possible workarounds:

ISSUE: There is no serial console option for virtual machine scale set instances.
MITIGATION: At the time of preview, access to the serial console for virtual machine scale set instances is not supported.

ISSUE: Hitting enter after the connection banner does not show a login prompt.
MITIGATION: Hitting enter does nothing.
Next steps
The serial console is also available for Windows VMs.
Learn more about boot diagnostics.
Troubleshoot application connectivity issues on a
Linux virtual machine in Azure
5/11/2018 • 5 min to read • Edit Online
There are various reasons why you cannot start or connect to an application running on an Azure virtual machine (VM). Reasons include the application not running or listening on the expected ports, the listening port being blocked, or networking rules not correctly passing traffic to the application. This article describes a methodical approach to finding and correcting the problem.
If you are having issues connecting to your VM using RDP or SSH, see one of the following articles first:
Troubleshoot Remote Desktop connections to a Windows-based Azure Virtual Machine
Troubleshoot Secure Shell (SSH) connections to a Linux-based Azure virtual machine.
NOTE
Azure has two different deployment models for creating and working with resources: Resource Manager and classic. This
article covers using both models, but Microsoft recommends that most new deployments use the Resource Manager model.
If you need more help at any point in this article, you can contact the Azure experts on the MSDN Azure and the
Stack Overflow forums. Alternatively, you can also file an Azure support incident. Go to the Azure support site and
select Get Support.
For example, if the application is a web server, try to access the web page from a browser running on a computer
that is not in the virtual network.
If you cannot access the application, verify the following settings:
For VMs created using the classic deployment model:
Verify that the endpoint configuration for the VM is allowing the incoming traffic, especially the protocol (TCP or UDP) and the public and private port numbers.
Verify that access control lists (ACLs) on the endpoint are not preventing incoming traffic from the
Internet.
For more information, see How to Set Up Endpoints to a Virtual Machine.
For VMs created using the Resource Manager deployment model:
Verify that the inbound NAT rule configuration for the VM is allowing the incoming traffic, especially the protocol (TCP or UDP) and the public and private port numbers.
Verify that Network Security Groups are allowing the inbound request and outbound response traffic.
For more information, see What is a Network Security Group (NSG)?
If the virtual machine or endpoint is a member of a load-balanced set:
Verify that the probe protocol (TCP or UDP) and port number are correct.
If the probe protocol and port are different from the load-balanced set protocol and port:
Verify that the application is listening on the probe protocol (TCP or UDP) and port number (use netstat -a on the target VM).
Verify that the host firewall on the target VM is allowing the inbound probe request and outbound probe
response traffic.
If you can access the application, ensure that your Internet edge device is allowing:
The outbound application request traffic from your client computer to the Azure virtual machine.
The inbound application response traffic from the Azure virtual machine.
Step 4: If you cannot access the application, use IP Verify to check the settings.
For more information, see Azure network monitoring overview.
Additional resources
Troubleshoot Remote Desktop connections to a Windows-based Azure Virtual Machine
Troubleshoot Secure Shell (SSH) connections to a Linux-based Azure virtual machine
Troubleshoot allocation failures when you create,
restart, or resize Linux VMs in Azure
4/16/2018 • 6 min to read • Edit Online
When you create a virtual machine (VM), restart stopped (deallocated) VMs, or resize a VM, Microsoft Azure
allocates compute resources to your subscription. We are continually investing in additional infrastructure and
features to make sure that we always have all VM types available to support customer demand. However, you may
occasionally experience resource allocation failures because of unprecedented growth in demand for Azure
services in specific regions. This problem can occur when you try to create or start VMs in a region while the VMs
display the following error code and message:
Error code: AllocationFailed or ZonalAllocationFailed
Error message: "Allocation failed. We do not have sufficient capacity for the requested VM size in this region. Read
more about improving likelihood of allocation success at http://aka.ms/allocation-guidance"
This article explains the causes of some of the common allocation failures and suggests possible remedies.
If your Azure issue is not addressed in this article, visit the Azure forums on MSDN and Stack Overflow. You can
post your issue on these forums or to @AzureSupport on Twitter. Also, you can file an Azure support request by
selecting Get support on the Azure support site.
Until your preferred VM type is available in your preferred region, we advise customers who encounter
deployment issues to consider the guidance in the following table as a temporary workaround.
Identify the scenario that best matches your case, and then retry the allocation request by using the corresponding
suggested workaround to increase the likelihood of allocation success. Alternatively, you can always retry later. This
is because enough resources may have been freed in the cluster, region, or zone to accommodate your request.
Allocation failures for older VM sizes (Av1, Dv1, DSv1, D15v2, DS15v2,
etc.)
As we expand Azure infrastructure, we deploy newer-generation hardware that’s designed to support the latest
virtual machine types. Some of the older series VMs do not run on our latest generation infrastructure. For this
reason, customers may occasionally experience allocation failures for these legacy SKUs. To avoid this problem, we
encourage customers who are using legacy series virtual machines to consider moving to the equivalent newer
VMs per the following recommendations. These VMs are optimized for the latest hardware and will let you take advantage of better pricing and performance.
Background information
How allocation works
The servers in Azure datacenters are partitioned into clusters. Normally, an allocation request is attempted in
multiple clusters, but it's possible that certain constraints from the allocation request force the Azure platform to
attempt the request in only one cluster. In this article, we'll refer to this as "pinned to a cluster." Diagram 1 below
illustrates the case of a normal allocation that is attempted in multiple clusters. Diagram 2 illustrates the case of an
allocation that's pinned to Cluster 2 because that's where the existing Cloud Service CS_1 or availability set is
hosted.
Why allocation failures happen
When an allocation request is pinned to a cluster, there's a higher chance of failing to find free resources since the
available resource pool is smaller. Furthermore, if your allocation request is pinned to a cluster but the type of
resource you requested is not supported by that cluster, your request will fail even if the cluster has free resources.
The following Diagram 3 illustrates the case where a pinned allocation fails because the only candidate cluster does
not have free resources. Diagram 4 illustrates the case where a pinned allocation fails because the only candidate
cluster does not support the requested VM size, even though the cluster has free resources.
Troubleshooting steps specific to allocation failure
scenarios in the classic deployment model
4/16/2018 • 5 min to read • Edit Online
The following are common allocation scenarios that cause an allocation request to be pinned. We'll dive into each
scenario later in this article.
Resize a VM or add VMs or role instances to an existing cloud service
Restart partially stopped (deallocated) VMs
Restart fully stopped (deallocated) VMs
Staging and production deployments (platform as a service only)
Affinity group (VM or service proximity)
Affinity–group-based virtual network
When you receive an allocation error, check whether any of the listed scenarios apply to your error. Use the
allocation error that’s returned by the Azure platform to identify the corresponding scenario. If your request is
pinned, remove some of the pinning constraints to open your request to more clusters, thereby increasing the
chance of allocation success. In general, if the error does not state that "the requested VM size is not supported,"
you can always retry at a later time. This is because enough resources may have been freed in the cluster to
accommodate your request. If the problem is that the requested VM size is not supported, try a different VM size.
Otherwise, the only option is to remove the pinning constraint.
Two common failure scenarios are related to affinity groups. In the past, an affinity group was used to provide close
proximity to VMs and service instances, or it was used to enable the creation of a virtual network. With the
introduction of regional virtual networks, affinity groups are no longer required to create a virtual network. With
the reduction of network latency in Azure infrastructure, the recommendation to use affinity groups for VMs or
service proximity has changed.
The following diagram presents the taxonomy of the (pinned) allocation scenarios.
To troubleshoot virtual machine (VM ) deployment issues in Azure, review the top issues for common failures and
resolutions.
If you need more help at any point in this article, you can contact the Azure experts on the MSDN Azure and Stack
Overflow forums. Alternatively, you can file an Azure support incident. Go to the Azure support site and select Get
Support.
Top issues
The following top issues may help resolve your issue. To start troubleshooting, review these steps:
The cluster cannot support the requested VM size
The cluster does not have free resources
Why can I not install the GPU driver for an Ubuntu NV VM?
Currently, Linux GPU support is only available on Azure NC VMs running Ubuntu Server 16.04 LTS. For more information, see Set up GPU drivers for N-series VMs running Linux.
I am not able to see the VM size family that I want when resizing my VM.
When a VM is running, it is deployed to a physical server. The physical servers in Azure regions are grouped in
clusters of common physical hardware. Resizing a VM that requires the VM to be moved to different hardware
clusters is different depending on which deployment model was used to deploy the VM.
For VMs deployed in the classic deployment model, the cloud service deployment must be removed and redeployed to change the VMs to a size in another size family.
For VMs deployed in the Resource Manager deployment model, you must stop all VMs in the availability set before changing the size of any VM in the availability set (see the sketch after this list).
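A minimal sketch of the Resource Manager resize flow with Azure CLI 2.0 (the size name is illustrative; repeat the deallocate step for every VM in the availability set first):

# Deallocate, resize, and restart the VM
az vm deallocate --resource-group myResourceGroup --name myVM
az vm resize --resource-group myResourceGroup --name myVM --size Standard_DS3_v2
az vm start --resource-group myResourceGroup --name myVM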
Next steps
If you need more help at any point in this article, you can contact the Azure experts on the MSDN Azure and Stack
Overflow forums.
Alternatively, you can file an Azure support incident. Go to the Azure support site and select Get Support.
Troubleshoot Resource Manager deployment issues
with creating a new Linux virtual machine in Azure
1/3/2018 • 4 min to read • Edit Online
When you try to create a new Azure Virtual Machine (VM), the common errors you encounter are provisioning
failures or allocation failures.
A provisioning failure happens when the OS image fails to load either due to incorrect preparatory steps or
because of selecting the wrong settings during the image capture from the portal.
An allocation failure results when the cluster or region either does not have resources available or cannot
support the requested VM size.
If your Azure issue is not addressed in this article, visit the Azure forums on MSDN and Stack Overflow. You can
post your issue in these forums, or post to @AzureSupport on Twitter. You also can submit an Azure support
request. To submit a support request, on the Azure support page, select Get support.
Top issues
The following top issues may help resolve your issue. To start troubleshooting, review these steps:
The cluster cannot support the requested VM size
The cluster does not have free resources
For other VM deployment issues and questions, see Troubleshoot deploying Linux virtual machine issues in Azure.
OS TYPE | UPLOADED SPECIALIZED | UPLOADED GENERALIZED | CAPTURED SPECIALIZED | CAPTURED GENERALIZED
Linux generalized | N1 | Y | N3 | Y
Linux specialized | Y | N2 | Y | N4
Y: If the OS is Linux generalized, and it is uploaded and/or captured with the generalized setting, then there won’t
be any errors. Similarly, if the OS is Linux specialized, and it is uploaded and/or captured with the specialized
setting, then there won’t be any errors.
Upload Errors:
N1: If the OS is Linux generalized, and it is uploaded as specialized, you will get a provisioning timeout error because the VM is stuck at the provisioning stage.
N2: If the OS is Linux specialized, and it is uploaded as generalized, you will get a provisioning failure error because the new VM is running with the original computer name, username and password.
Resolution:
To resolve both these errors, upload the original VHD, available on-premises, with the same setting as that for the OS (generalized/specialized). To upload as generalized, remember to run waagent -deprovision first.
Capture Errors:
N3: If the OS is Linux generalized, and it is captured as specialized, you will get a provisioning timeout error because the original VM is not usable as it is marked as generalized.
N4: If the OS is Linux specialized, and it is captured as generalized, you will get a provisioning failure error because the new VM is running with the original computer name, username and password. Also, the original VM is not usable because it is marked as specialized.
Resolution:
To resolve both these errors, delete the current image from the portal, and recapture it from the current VHDs with
the same setting as that for the OS (generalized/specialized).
Next steps
If you encounter issues when you start a stopped Linux VM or resize an existing Linux VM in Azure, see
Troubleshoot Resource Manager deployment issues with restarting or resizing an existing Linux Virtual Machine in
Azure.
Troubleshoot Linux VM device name changes
5/11/2018 • 3 min to read • Edit Online
This article explains why device names change after you restart a Linux VM or reattach the data disks. The article
also provides solutions for this problem.
Symptoms
You may experience the following problems when running Linux VMs in Microsoft Azure:
The VM fails to boot after a restart.
When data disks are detached and reattached, the disk device names are changed.
An application or script that references a disk by using the device name fails because the device name has
changed.
Cause
Device paths in Linux aren't guaranteed to be consistent across restarts. Device names consist of major numbers
(letters) and minor numbers. When the Linux storage device driver detects a new device, the driver assigns major
and minor numbers from the available range to the device. When a device is removed, the device numbers are
freed for reuse.
The problem occurs because device scanning in Linux is scheduled by the SCSI subsystem to happen
asynchronously. As a result, a device path name can vary across restarts.
Solution
To resolve this problem, use persistent naming. There are four ways to use persistent naming: by filesystem label,
by UUID, by ID, or by path. We recommend using the filesystem label or UUID for Azure Linux VMs.
Most distributions provide the fstab nofail or nobootwait parameters. These parameters enable a system to
boot when the disk fails to mount at startup. Check your distribution documentation for more information about
these parameters. For information on how to configure a Linux VM to use a UUID when you add a data disk, see
Connect to the Linux VM to mount the new disk.
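For reference, a UUID-based /etc/fstab entry might look like the following sketch (the UUID, mount point, and filesystem type are illustrative; nofail lets the system boot even if the disk is missing):

# Append a persistent, UUID-based mount entry
echo "UUID=b0048738-4ecc-4837-9793-49ce296d2692 /datadrive ext4 defaults,nofail 0 2" | sudo tee -a /etc/fstab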
When the Azure Linux agent is installed on a VM, the agent uses Udev rules to construct a set of symbolic links
under the /dev/disk/azure path. Applications and scripts use Udev rules to identify disks that are attached to the
VM, along with the disk type and disk LUNs.
Identify disk LUNs
Applications use LUNs to find all of the attached disks and to construct symbolic links. The Azure Linux agent
includes Udev rules that set up symbolic links from a LUN to the devices:
$ tree /dev/disk/azure
/dev/disk/azure
├── resource -> ../../sdb
├── resource-part1 -> ../../sdb1
├── root -> ../../sda
├── root-part1 -> ../../sda1
└── scsi1
├── lun0 -> ../../../sdc
├── lun0-part1 -> ../../../sdc1
├── lun1 -> ../../../sdd
├── lun1-part1 -> ../../../sdd1
├── lun1-part2 -> ../../../sdd2
└── lun1-part3 -> ../../../sdd3
From within the Linux guest, LUN information is retrieved by using lsscsi or a similar tool:

$ sudo lsscsi
The guest LUN information is used with Azure subscription metadata to locate the VHD in Azure Storage that contains the partition data (for example, by using the az CLI to list the disks in the subscription). The blkid utility lists the UUIDs of the attached partitions:

$ sudo blkid
/dev/sr0: UUID="120B021372645f72"
/dev/sda1: UUID="52c6959b-79b0-4bdd-8ed6-71e0ba782fb4"
/dev/sdb1: UUID="176250df-9c7c-436f-94e4-d13f9bdea744"
/dev/sdc1: UUID="b0048738-4ecc-4837-9793-49ce296d2692"
The Azure Linux agent Udev rules construct a set of symbolic links under the /dev/disk/azure path:
$ ls -l /dev/disk/azure
total 0
lrwxrwxrwx 1 root root 9 Jun 2 23:17 resource -> ../../sdb
lrwxrwxrwx 1 root root 10 Jun 2 23:17 resource-part1 -> ../../sdb1
lrwxrwxrwx 1 root root 9 Jun 2 23:17 root -> ../../sda
lrwxrwxrwx 1 root root 10 Jun 2 23:17 root-part1 -> ../../sda1
Applications use the links to identify the boot disk device and the resource (ephemeral) disk. In Azure, applications should look in the /dev/disk/azure/root-part1 or /dev/disk/azure/resource-part1 paths to discover these partitions.
Any additional partitions from the blkid list reside on a data disk. Applications maintain the UUID for these
partitions and use a path to discover the device name at runtime:
$ ls -l /dev/disk/by-uuid/b0048738-4ecc-4837-9793-49ce296d2692
See also
For more information, see the following articles:
Ubuntu: Using UUID
Red Hat: Persistent naming
Linux: What UUIDs can do for you
Udev: Introduction to device management in a modern Linux system
Redeploy Linux virtual machine to new Azure node
3/8/2018 • 1 min to read • Edit Online
If you face difficulties troubleshooting SSH or application access to a Linux virtual machine (VM) in Azure,
redeploying the VM may help. When you redeploy a VM, it moves the VM to a new node within the Azure
infrastructure and then powers it back on. All your configuration options and associated resources are retained.
This article shows you how to redeploy a VM using Azure CLI or the Azure portal.
NOTE
After you redeploy a VM, the temporary disk is lost and dynamic IP addresses associated with virtual network interface are
updated.
You can redeploy a VM using one of the following options; an Azure CLI 2.0 sketch follows the list. You only need to choose one option to redeploy your VM:
Azure CLI 2.0
Azure CLI 1.0
Azure portal
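With Azure CLI 2.0, the redeploy is a single command (a sketch, assuming the VM is named myVM in myResourceGroup):

# Move the VM to a new host node and power it back on
az vm redeploy --resource-group myResourceGroup --name myVM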
3. The Status of the VM changes to Updating as the VM prepares to redeploy, as shown in the following
example:
4. The Status then changes to Starting as the VM boots up on a new Azure host, as shown in the following
example:
5. After the VM finishes the boot process, the Status then returns to Running, indicating the VM has been
successfully redeployed:
Next steps
If you are having issues connecting to your VM, you can find specific help on troubleshooting SSH connections or
detailed SSH troubleshooting steps. If you cannot access an application running on your VM, you can also read
application troubleshooting issues.
Understand common error messages when you
manage Linux virtual machines in Azure
4/9/2018 • 15 min to read • Edit Online
This article describes some of the most common error codes and messages you may encounter when you create or
manage Linux virtual machines (VMs) in Azure.
NOTE
You can leave feedback in the comments on this page or through Azure feedback with the #azerrormessage tag.
{
  "status": "status code",
  "error": {
    "code": "Top level error code",
    "message": "Top level error message",
    "details": [
      {
        "code": "Inner level error code",
        "message": "Inner level error message"
      }
    ]
  }
}
An error response always includes a status code and an error object. Each error object contains an error code and a message. If the VM is created with a template, the error object also contains a details section with an inner level of error codes and messages. Normally, the innermost error message indicates the root failure.
AcquireDiskLeaseFailed: Failed to acquire lease while creating disk '{0}' using blob with URI {1}. Blob is already in use.
ArtifactNotFound: The VM extension with publisher '{0}' and type '{1}' could not be found in location '{2}'.
ArtifactNotFound: Extension with publisher '{0}', type '{1}', and type handler version '{2}' could not be found in the extension repository.
AttachDiskWhileBeingDetached: Cannot attach data disk '{0}' to VM '{1}' because the disk is currently being detached. Please wait until the disk is completely detached and then try again.
BadRequest: 'Aligned' Availability Sets are not yet supported in this region.
CertificateImproperlyFormatted: The secret's JSON representation retrieved from {0} has a data field which is not a properly formatted PFX file, or the password provided does not decode the PFX file correctly.
CertificateImproperlyFormatted: The data retrieved from {0} is not deserializable into JSON.
ConflictingUserInput: Source and destination storage accounts for disk {0} are different.
DiskBlobNotFound: Unable to find VHD blob with URI {0} for disk '{1}'.
DiskEncryptionKeySecretMissingTags: {0} secret doesn't have the {1} tags. Please update the secret version, add the required tags and retry.
DiskImageNotReady: Disk image {0} is in {1} state. Please retry when image is ready.
ImageBlobNotFound: Unable to find VHD blob with URI {0} for disk '{1}'.
IncorrectDiskBlobType: Disk blobs can only be of type page blob. Blob {0} for disk '{1}' is of type block blob.
IncorrectDiskBlobType: Disk blobs can only be of type page blob. Blob {0} is of type '{1}'.
IncorrectImageBlobType: Disk blobs can only be of type page blob. Blob {0} for disk '{1}' is of type block blob.
IncorrectImageBlobType: Disk blobs can only be of type page blob. Blob {0} is of type '{1}'.
InternalOperationError: Could not resolve storage account {0}. Please ensure it was created through the Storage Resource Provider in the same location as the compute resource.
InvalidParameter: The blob name in URL {0} contains a slash. This is presently not supported for disks.
InvalidParameter: The URI {0} does not look to be correct blob URI.
InvalidParameter: A disk named '{0}' already uses the same LUN: {1}.
InvalidParameter: Cannot specify user image overrides for a disk already defined in the specified image reference.
InvalidParameter: A disk named '{0}' already uses the same VHD URL {1}.
InvalidParameter: The specified fault domain count {0} must fall in the range {1} to {2}.
InvalidParameter: The license type {0} is invalid. Valid license types are: Windows_Client or Windows_Server, case sensitive.
InvalidParameter: Destination path for SSH public keys is currently limited to its default value {0} due to a known issue in the Linux provisioning agent.
InvalidParameter: Blob name in URL {0} must end with '{1}' extension.
InvalidParameter: '{0}' is not a valid captured VHD blob name prefix. A valid prefix matches regex '{1}'.
InvalidParameter: Unable to create the VM because the requested size {0} is not available in the cluster where the availability set is currently allocated. The available sizes are: {1}. Read more on VM resizing strategy at https://aka.ms/azure-resizevm.
MissingMoveDependentResources: The move resources request does not contain all the dependent resources. Please check error details for missing resource ids.
NotFound: Source Virtual Machine '{0}' specified in the request does not exist in this Azure location.
NotSupported: The license type is {0}, but the image blob {1} is not from on-premises.
OperationNotAllowed: The resource {0} cannot be created from Image {1} until Image has been successfully created.
OperationNotAllowed: Operation '{0}' is not allowed on Image '{1}' since the Image is marked for deletion. You can only retry the Delete operation (or wait for an ongoing one to complete).
OperationNotAllowed: Operation '{0}' is not allowed since the Virtual Machines '{1}' are being provisioned using the Image '{2}'.
OperationNotAllowed: Disk with size {0}GB, which is smaller than the size {1}GB of corresponding disk in Image, is not allowed.
OperationNotAllowed: VM created from Image cannot have blob based disks. All disks have to be managed disks.
OperationNotAllowed: Unable to add or update the VM. The requested VM size {0} may not be available in the existing allocation unit. Read more on VM resizing strategy at https://aka.ms/azure-resizevm.
OperationNotAllowed: Unable to resize the VM because the requested size {0} is not available in the cluster where the availability set is currently allocated. The available sizes are: {1}. Read more on VM resizing strategy at https://aka.ms/azure-resizevm.
OperationNotAllowed: Unable to resize the VM because the requested size {0} is not available in the cluster where the VM is currently allocated. To resize your VM to {1} please deallocate (this is the Stop operation in the Azure portal) and try the resize operation again. Read more on VM resizing strategy at https://aka.ms/azure-resizevm.
OSProvisioningClientError: OS provisioning for VM '{0}' failed. Error details: {1} Make sure the image has been properly prepared (generalized). Instructions for Windows: https://azure.microsoft.com/documentation/articles/virtual-machines-windows-upload-image/
OSProvisioningClientError: SSH host key generation failed. Error details: {0}. To resolve this issue, verify that the Linux agent is set up properly. You can check the instructions at: https://azure.microsoft.com/documentation/articles/virtual-machines-linux-agent-user-guide/
OSProvisioningTimedOut: OS Provisioning for VM '{0}' did not finish in the allotted time. The VM may still finish provisioning successfully. Please check provisioning state later.
OSProvisioningTimedOut: OS Provisioning for VM '{0}' did not finish in the allotted time. The VM may still finish provisioning successfully. Please check provisioning state later. Also, make sure the image has been properly prepared (generalized). Instructions for Windows: https://azure.microsoft.com/documentation/articles/virtual-machines-windows-upload-image/ Instructions for Linux: https://azure.microsoft.com/documentation/articles/virtual-machines-linux-capture-image/
OSProvisioningTimedOut: OS Provisioning for VM '{0}' did not finish in the allotted time. However, the VM guest agent was detected running. This suggests the guest OS has not been properly prepared to be used as a VM image (with CreateOption=FromImage). To resolve this issue, either use the VHD as is with CreateOption=Attach or prepare it properly for use as an image. Instructions for Windows: https://azure.microsoft.com/documentation/articles/virtual-machines-windows-upload-image/ Instructions for Linux: https://azure.microsoft.com/documentation/articles/virtual-machines-linux-capture-image/
StorageAccountLimitation: Storage account '{0}' does not support page blobs, which are required to create disks.
StorageAccountLocationMismatch: Could not resolve storage account {0}. Please ensure it was created through the Storage Resource Provider in the same location as the compute resource.
StorageAccountNotFound: Storage account {0} not found. Ensure the storage account is not deleted and belongs to the same Azure location as the VM.
StorageAccountTypeNotSupported: Disk {0} uses {1}, which is a Blob storage account. Please retry with a General purpose storage account.
TargetDiskBlobAlreadyExists: Blob {0} already exists. Please provide a different blob URI to create a new blank data disk '{1}'.
TargetDiskBlobAlreadyExists: Blob {0} already exists. Please provide a different blob URI as target for disk '{1}'.
VHDSizeInvalid: The specified disk size value of {0} for disk '{1}' with blob {2} is invalid. Disk size must be between {3} and {4}.
VMExtensionHandlerNonTransientError: Handler '{0}' has reported failure for VM Extension '{1}' with terminal error code '{2}' and error message: '{3}'
VMStartTimedOut: VM '{0}' did not start in the allotted time. The VM may still start successfully. Please check the power state later.
Next steps
If you need more help, you can contact the Azure experts on the MSDN Azure and Stack Overflow forums.
Alternatively, you can file an Azure support incident. Go to the Azure support site and select Get Support.
Troubleshoot a Linux VM by attaching the OS disk to
a recovery VM with the Azure CLI 2.0
4/9/2018 • 7 min to read
If your Linux virtual machine (VM) encounters a boot or disk error, you may need to perform troubleshooting steps
on the virtual hard disk itself. A common example would be an invalid entry in /etc/fstab that prevents the VM
from being able to boot successfully. This article details how to use the Azure CLI 2.0 to connect your virtual hard
disk to another Linux VM to fix any errors, then re-create your original VM. You can also perform these steps with
the Azure CLI 1.0.
Review the serial output to determine why the VM is failing to boot. If the serial output isn't providing any
indication, you may need to review log files in /var/log once you have the virtual hard disk connected to a
troubleshooting VM.
Delete existing VM
Virtual hard disks and VMs are two distinct resources in Azure. A virtual hard disk is where the operating system
itself, applications, and configurations are stored. The VM itself is just metadata that defines the size or location, and
references resources such as a virtual hard disk or virtual network interface card (NIC). Each virtual hard disk has a
lease assigned when attached to a VM. Although data disks can be attached and detached even while the VM is
running, the OS disk cannot be detached unless the VM resource is deleted. The lease continues to associate the
OS disk with a VM even when that VM is in a stopped and deallocated state.
The first step to recover your VM is to delete the VM resource itself. Deleting the VM leaves the virtual hard disks
in your storage account. After the VM is deleted, you attach the virtual hard disk to another VM to troubleshoot
and resolve the errors.
Delete the VM with az vm delete. The following example deletes the VM named myVM from the resource group
named myResourceGroup:
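az vm delete --resource-group myResourceGroup --name myVM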
Wait until the VM has finished deleting before you attach the virtual hard disk to another VM. The lease on the
virtual hard disk that associates it with the VM needs to be released before you can attach the virtual hard disk to
another VM.
1. SSH to your troubleshooting VM using the appropriate credentials. If this disk is the first data disk attached
to your troubleshooting VM, the disk is likely connected to /dev/sdc. Use dmesg to view attached disks:
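# Filtering for SCSI messages is a convenience; the unfiltered dmesg output also lists the disks
dmesg | grep SCSI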
In the preceding example, the OS disk is at /dev/sda and the temporary disk provided for each VM is at
/dev/sdb. If you had multiple data disks, they should be at /dev/sdd, /dev/sde, and so on.
2. Create a directory to mount your existing virtual hard disk. The following example creates a directory named
troubleshootingdisk:
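# The mount point location under /mnt is illustrative; any empty directory works
sudo mkdir /mnt/troubleshootingdisk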
3. If you have multiple partitions on your existing virtual hard disk, mount the required partition. The following
example mounts the first primary partition at /dev/sdc1:
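sudo mount /dev/sdc1 /mnt/troubleshootingdisk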
NOTE
Best practice is to mount data disks on VMs in Azure using the universally unique identifier (UUID) of the virtual hard
disk. For this short troubleshooting scenario, mounting the virtual hard disk using the UUID is not necessary.
However, under normal use, editing /etc/fstab to mount virtual hard disks using device name rather than UUID
may cause the VM to fail to boot.
1. When you have resolved the errors on the virtual hard disk, change out of the mounted directory so that it
can be unmounted:
cd /
Now unmount the existing virtual hard disk. The following example unmounts the device at /dev/sdc1:
sudo umount /dev/sdc1
2. Now detach the virtual hard disk from the VM. Exit the SSH session to your troubleshooting VM. List the
data disks attached to your troubleshooting VM with az vm unmanaged-disk list. The following example lists
the data disks attached to the VM named myVMRecovery in the resource group named myResourceGroup:
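az vm unmanaged-disk list --resource-group myResourceGroup --vm-name myVMRecovery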
Note the name for your existing virtual hard disk. For example, the name of a disk with the URI of
https://mystorageaccount.blob.core.windows.net/vhds/myVHD.vhd is myVHD.
Detach the data disk from your VM with az vm unmanaged-disk detach. The following example detaches the
disk named myVHD from the VM named myVMRecovery in the myResourceGroup resource group:
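az vm unmanaged-disk detach --resource-group myResourceGroup --vm-name myVMRecovery --name myVHD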
Next steps
If you are having issues connecting to your VM, see Troubleshoot SSH connections to an Azure VM. For issues
with accessing applications running on your VM, see Troubleshoot application connectivity issues on a Linux VM.
Troubleshoot a Linux VM by attaching the OS disk to
a recovery VM using the Azure portal
4/9/2018 • 7 min to read
If your Linux virtual machine (VM) encounters a boot or disk error, you may need to perform troubleshooting steps
on the virtual hard disk itself. A common example would be an invalid entry in /etc/fstab that prevents the VM
from being able to boot successfully. This article details how to use the Azure portal to connect your virtual hard
disk to another Linux VM to fix any errors, then re-create your original VM.
Typically, you have a container named vhds that stores your virtual hard disks. Select the container to view a list of
virtual hard disks. Note the name of your VHD (the prefix is usually the name of your VM):
Select your existing virtual hard disk from the list and copy the URL for use in the following steps:
Delete existing VM
Virtual hard disks and VMs are two distinct resources in Azure. A virtual hard disk is where the operating system
itself, applications, and configurations are stored. The VM itself is just metadata that defines the size or location, and
references resources such as a virtual hard disk or virtual network interface card (NIC). Each virtual hard disk has a
lease assigned when attached to a VM. Although data disks can be attached and detached even while the VM is
running, the OS disk cannot be detached unless the VM resource is deleted. The lease continues to associate the
OS disk with a VM even when that VM is in a stopped and deallocated state.
The first step to recover your VM is to delete the VM resource itself. Deleting the VM leaves the virtual hard disks
in your storage account. After the VM is deleted, you attach the virtual hard disk to another VM to troubleshoot
and resolve the errors.
Select your VM in the portal, then click Delete:
Wait until the VM has finished deleting before you attach the virtual hard disk to another VM. The lease on the
virtual hard disk that associates it with the VM needs to be released before you can attach the virtual hard disk to
another VM.
4. With your VHD now selected, click OK to attach the existing virtual hard disk:
5. After a few seconds, the Disks pane for your VM lists your existing virtual hard disk connected as a data
disk:
Mount the attached data disk
NOTE
The following examples detail the steps required on an Ubuntu VM. If you are using a different Linux distro, such as Red Hat
Enterprise Linux or SUSE, the log file locations and mount commands may be a little different. Refer to the documentation for
your specific distro for the appropriate changes in commands.
1. SSH to your troubleshooting VM using the appropriate credentials. If this disk is the first data disk attached
to your troubleshooting VM, it is likely connected to /dev/sdc. Use dmesg to list attached disks:
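# Filtering for SCSI messages is a convenience; the unfiltered dmesg output also lists the disks
dmesg | grep SCSI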
In the preceding example, the OS disk is at /dev/sda and the temporary disk provided for each VM is at
/dev/sdb. If you had multiple data disks, they should be at /dev/sdd, /dev/sde, and so on.
2. Create a directory to mount your existing virtual hard disk. The following example creates a directory named
troubleshootingdisk:
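# The mount point location under /mnt is illustrative; any empty directory works
sudo mkdir /mnt/troubleshootingdisk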
3. If you have multiple partitions on your existing virtual hard disk, mount the required partition. The following
example mounts the first primary partition at /dev/sdc1:
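sudo mount /dev/sdc1 /mnt/troubleshootingdisk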
1. When you have resolved the errors on the virtual hard disk, change out of the mounted directory so that it
can be unmounted:
cd /
Now unmount the existing virtual hard disk. The following example unmounts the device at /dev/sdc1:
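sudo umount /dev/sdc1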
2. Now detach the virtual hard disk from the VM. Select your VM in the portal and click Disks. Select your
existing virtual hard disk and then click Detach:
Wait until the VM has successfully detached the data disk before continuing.
The template is loaded into the Azure portal for deployment. Enter the names for your new VM and existing Azure
resources, and paste the URL to your existing virtual hard disk. To begin the deployment, click Purchase:
Re-enable boot diagnostics
When you create your VM from the existing virtual hard disk, boot diagnostics may not automatically be enabled.
To check the status of boot diagnostics and turn it on if needed, select your VM in the portal. Under Monitoring,
click Diagnostics settings. Ensure the status is On, and the check mark next to Boot diagnostics is selected. If
you make any changes, click Save:
Next steps
If you are having issues connecting to your VM, see Troubleshoot SSH connections to an Azure VM. For issues
with accessing applications running on your VM, see Troubleshoot application connectivity issues on a Linux VM.
For more information about using Resource Manager, see Azure Resource Manager overview.
Understand the structure and syntax of Azure
Resource Manager templates
5/7/2018 • 6 min to read
This article describes the structure of an Azure Resource Manager template. It presents the different sections of a
template and the properties that are available in those sections. The template consists of JSON and expressions
that you can use to construct values for your deployment. For a step-by-step tutorial on creating a template, see
Create your first Azure Resource Manager template.
Template format
In its simplest structure, a template contains the following elements:
{
"$schema": "http://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
"contentVersion": "",
"parameters": { },
"variables": { },
"functions": { },
"resources": [ ],
"outputs": { }
}
Each element contains properties you can set. The following example contains the full syntax for a template:
{
"$schema": "http://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
"contentVersion": "",
"parameters": {
"<parameter-name>" : {
"type" : "<type-of-parameter-value>",
"defaultValue": "<default-value-of-parameter>",
"allowedValues": [ "<array-of-allowed-values>" ],
"minValue": <minimum-value-for-int>,
"maxValue": <maximum-value-for-int>,
"minLength": <minimum-length-for-string-or-array>,
"maxLength": <maximum-length-for-string-or-array-parameters>,
"metadata": {
"description": "<description-of-the parameter>"
}
}
},
"variables": {
"<variable-name>": "<variable-value>",
"<variable-object-name>": {
<variable-complex-type-value>
},
"<variable-object-name>": {
"copy": [
{
"name": "<name-of-array-property>",
"count": <number-of-iterations>,
"input": {
<properties-to-repeat>
}
}
]
},
"copy": [
{
"name": "<variable-array-name>",
"count": <number-of-iterations>,
"input": {
<properties-to-repeat>
}
}
]
},
"functions": [
{
"namespace": "<namespace-for-your-function>",
"members": {
"<function-name>": {
"parameters": [
{
"name": "<parameter-name>",
"type": "<type-of-parameter-value>"
}
],
"output": {
"type": "<type-of-output-value>",
"value": "<function-expression>"
}
}
}
}
],
"resources": [
{
"condition": "<boolean-value-whether-to-deploy>",
"apiVersion": "<api-version-of-resource>",
"type": "<resource-provider-namespace/resource-type-name>",
"name": "<name-of-the-resource>",
"location": "<location-of-resource>",
"tags": {
"<tag-name1>": "<tag-value1>",
"<tag-name2>": "<tag-value2>"
},
"comments": "<your-reference-notes>",
"copy": {
"name": "<name-of-copy-loop>",
"count": "<number-of-iterations>",
"mode": "<serial-or-parallel>",
"batchSize": "<number-to-deploy-serially>"
},
"dependsOn": [
"<array-of-related-resource-names>"
],
"properties": {
"<settings-for-the-resource>",
"copy": [
{
"name": ,
"count": ,
"input": {}
}
]
},
"resources": [
"<array-of-child-resources>"
]
}
],
"outputs": {
"<outputName>" : {
"type" : "<type-of-output-value>",
"value": "<output-value-expression>"
}
}
}
Syntax
The basic syntax of the template is JSON. However, expressions and functions extend the JSON values available
within the template. Expressions are written within JSON string literals whose first and last characters are the
brackets [ and ], respectively. The value of the expression is evaluated when the template is deployed. While
written as a string literal, the result of evaluating the expression can be of a different JSON type, such as an array
or integer, depending on the actual expression. To have a literal string start with a bracket [, but not have it
interpreted as an expression, add an extra bracket to start the string with [[.
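For example, the following variable (an illustrative snippet; the variable name is hypothetical) deploys as the
literal string [not-an-expression] rather than being evaluated:
"variables": {
"literalValue": "[[not-an-expression]"
}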
Typically, you use expressions with functions to perform operations for configuring the deployment. Just like in
JavaScript, function calls are formatted as functionName(arg1,arg2,arg3) . You reference properties by using the dot
and [index] operators.
The following example shows how to use several functions when constructing a value:
"variables": {
"storageName": "[concat(toLower(parameters('storageNamePrefix')), uniqueString(resourceGroup().id))]"
}
For the full list of template functions, see Azure Resource Manager template functions.
Parameters
In the parameters section of the template, you specify which values you can input when deploying the resources.
These parameter values enable you to customize the deployment by providing values that are tailored for a
particular environment (such as dev, test, and production). You do not have to provide parameters in your template,
but without parameters your template would always deploy the same resources with the same names, locations,
and properties.
The following example shows a simple parameter definition:
"parameters": {
"siteNamePrefix": {
"type": "string",
"metadata": {
"description": "The name prefix of the web app that you wish to create."
}
},
},
For information about defining parameters, see Parameters section of Azure Resource Manager templates.
Variables
In the variables section, you construct values that can be used throughout your template. You do not need to define
variables, but they often simplify your template by reducing complex expressions.
The following example shows a simple variable definition:
"variables": {
"webSiteName": "[concat(parameters('siteNamePrefix'), uniqueString(resourceGroup().id))]",
},
For information about defining variables, see Variables section of Azure Resource Manager templates.
Functions
Within your template, you can create your own functions. These functions are available for use in your template.
Typically, you define complicated expressions that you do not want to repeat throughout your template. You create
the user-defined functions from expressions and functions that are supported in templates.
When defining a user function, there are some restrictions:
The function can't access variables.
The function can't use the reference function.
Parameters for the function can't have default values.
Your functions require a namespace value to avoid naming conflicts with template functions. The following
example shows a function that returns a storage account name:
"functions": [
{
"namespace": "contoso",
"members": {
"uniqueName": {
"parameters": [
{
"name": "namePrefix",
"type": "string"
}
],
"output": {
"type": "string",
"value": "[concat(toLower(parameters('namePrefix')), uniqueString(resourceGroup().id))]"
}
}
}
}
],
"resources": [
{
"name": "[contoso.uniqueName(parameters('storageNamePrefix'))]",
"type": "Microsoft.Storage/storageAccounts",
"apiVersion": "2016-01-01",
"sku": {
"name": "Standard_LRS"
},
"kind": "Storage",
"location": "South Central US",
"tags": {},
"properties": {}
}
]
Resources
In the resources section, you define the resources that are deployed or updated. This section can get complicated
because you must understand the types you are deploying to provide the right values.
"resources": [
{
"apiVersion": "2016-08-01",
"name": "[variables('webSiteName')]",
"type": "Microsoft.Web/sites",
"location": "[resourceGroup().location]",
"properties": {
"serverFarmId": "/subscriptions/<subscription-id>/resourcegroups/<resource-group-
name>/providers/Microsoft.Web/serverFarms/<plan-name>"
}
}
],
For more information, see Resources section of Azure Resource Manager templates.
Outputs
In the Outputs section, you specify values that are returned from deployment. For example, you could return the
URI to access a deployed resource.
"outputs": {
"newHostName": {
"type": "string",
"value": "[reference(variables('webSiteName')).defaultHostName]"
}
}
For more information, see Outputs section of Azure Resource Manager templates.
Template limits
Limit the size of your template to 1 MB, and each parameter file to 64 KB. The 1-MB limit applies to the final state
of the template after it has been expanded with iterative resource definitions, and values for variables and
parameters.
You are also limited to:
256 parameters
256 variables
800 resources (including copy count)
64 output values
24,576 characters in a template expression
You can exceed some template limits by using a nested template. For more information, see Using linked templates
when deploying Azure resources. To reduce the number of parameters, variables, or outputs, you can combine
several values into an object. For more information, see Objects as parameters.
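As an illustrative sketch (the parameter and property names here are hypothetical), a single object parameter can
stand in for several related string parameters:
"parameters": {
"deploymentSettings": {
"type": "object",
"defaultValue": {
"vmSize": "Standard_DS1_v2",
"adminUsername": "azureuser"
}
}
}
Individual values are then read with expressions such as [parameters('deploymentSettings').vmSize].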
Next steps
To view complete templates for many different types of solutions, see the Azure Quickstart Templates.
For details about the functions you can use from within a template, see Azure Resource Manager Template
Functions.
To combine multiple templates during deployment, see Using linked templates with Azure Resource Manager.
You may need to use resources that exist within a different resource group. This scenario is common when
working with storage accounts or virtual networks that are shared across multiple resource groups. For more
information, see the resourceId function.
Frequently asked questions about Linux Virtual
Machines
3/23/2018 • 3 min to read
This article addresses some common questions about Linux virtual machines created in Azure using the Resource
Manager deployment model. For the Windows version of this topic, see Frequently asked questions about Windows
Virtual Machines.
Why am I not seeing Canada Central and Canada East regions through
Azure Resource Manager?
The two new regions of Canada Central and Canada East are not automatically registered for virtual machine
creation for existing Azure subscriptions. This registration is done automatically when a virtual machine is
deployed through the Azure portal to any other region using Azure Resource Manager. After a virtual machine is
deployed to any other Azure region, the new regions should be available for subsequent virtual machines.