Azure SQL
Applies to: Azure SQL Database Azure SQL Managed Instance SQL Server on Azure VM
Azure SQL is a family of managed, secure, and intelligent products that use the SQL Server database engine in
the Azure cloud.
Azure SQL Database: Support modern cloud applications on an intelligent, managed database service that includes serverless compute.
Azure SQL Managed Instance: Modernize your existing SQL Server applications at scale with an intelligent, fully managed instance as a service, with almost 100% feature parity with the SQL Server database engine. Best for most migrations to the cloud.
SQL Server on Azure VMs: Lift and shift your SQL Server workloads with ease and maintain 100% SQL Server compatibility and operating system-level access.
Azure SQL is built upon the familiar SQL Server engine, so you can migrate applications with ease and continue
to use the tools, languages, and resources you're familiar with. Your skills and experience transfer to the cloud,
so you can do even more with what you already have.
Learn how each product fits into Microsoft's Azure SQL data platform to match the right option for your
business requirements. Whether you prioritize cost savings or minimal administration, this article can help you
decide which approach delivers against the business requirements you care about most.
If you're new to Azure SQL, check out the What is Azure SQL video from our in-depth Azure SQL video series.
Overview
In today's data-driven world, driving digital transformation increasingly depends on our ability to manage
massive amounts of data and harness its potential. But today's data estates are increasingly complex, with data
hosted on-premises, in the cloud, or at the edge of the network. Developers who are building intelligent and
immersive applications can find themselves constrained by limitations that can ultimately impact their
experience. Limitations arising from incompatible platforms, inadequate data security, insufficient resources and
price-performance barriers create complexity that can inhibit app modernization and development.
One of the first things to understand in any discussion of Azure versus on-premises SQL Server databases is
that you can use it all. Microsoft's data platform leverages SQL Server technology and makes it available across
physical on-premises machines, private cloud environments, third-party hosted private cloud environments, and
the public cloud.
Fully managed and always up to date
Spend more time innovating and less time patching, updating, and backing up your databases. Azure is the only
cloud with evergreen SQL that automatically applies the latest updates and patches so that your databases are
always up to date—eliminating end-of-support hassle. Even complex tasks like performance tuning, high
availability, disaster recovery, and backups are automated, freeing you to focus on applications.
Protect your data with built-in intelligent security
Azure constantly monitors your data for threats. With Azure SQL, you can:
Remediate potential threats in real time with intelligent advanced threat detection and proactive vulnerability
assessment alerts.
Get industry-leading, multi-layered protection with built-in security controls including T-SQL, authentication,
networking, and key management.
Take advantage of the most comprehensive compliance coverage of any cloud database service.
Business motivations
There are several factors that can influence your decision to choose between the different data offerings:
Cost: Both platform as a service (PaaS) and infrastructure as a service (IaaS) options include a base price that covers the underlying infrastructure and licensing. However, with the IaaS option you need to invest additional time and resources to manage your database, while with PaaS these administration features are included in the price. IaaS enables you to shut down resources while you are not using them to decrease cost, while PaaS is always running unless you drop and re-create your resources when they are needed.
Administration: PaaS options reduce the amount of time that you need to invest to administer the database.
However, it also limits the range of custom administration tasks and scripts that you can perform or run. For
example, the CLR is not supported with SQL Database, but is supported for an instance of SQL Managed
Instance. Also, no deployment options in PaaS support the use of trace flags.
Service-level agreement: Both IaaS and PaaS provide a high, industry-standard SLA. The PaaS option guarantees a 99.99% SLA, while IaaS guarantees a 99.95% SLA for the infrastructure only, meaning that you need to implement additional mechanisms to ensure the availability of your databases. You can attain a 99.99% SLA by creating an additional SQL virtual machine and implementing the SQL Server Always On availability group high availability solution.
Time to move to Azure: SQL Server on Azure VM is an exact match of your environment, so migration from
on-premises to the Azure VM is no different than moving the databases from one on-premises server to
another. SQL Managed Instance also enables easy migration; however, there might be some changes that you
need to apply before your migration.
Service comparison
Each service offering can be characterized by the level of administration you have over the infrastructure and by the degree of cost efficiency.
In Azure, you can have your SQL Server workloads running as a hosted service (PaaS), or a hosted infrastructure
(IaaS). Within PaaS, you have multiple product options, and service tiers within each option. The key question
that you need to ask when deciding between PaaS or IaaS is do you want to manage your database, apply
patches, and take backups, or do you want to delegate these operations to Azure?
Azure SQL Database
Azure SQL Database is a relational database-as-a-service (DBaaS) hosted in Azure that falls into the industry
category of Platform-as-a-Service (PaaS).
Best for modern cloud applications that want to use the latest stable SQL Server features and have time
constraints in development and marketing.
A fully managed SQL Server database engine, based on the latest stable Enterprise Edition of SQL Server.
SQL Database has two deployment options built on standardized hardware and software that is owned,
hosted, and maintained by Microsoft.
SQL Database includes built-in features and functionality that would require extensive configuration in SQL Server (either on-premises or in an Azure virtual machine). When using SQL Database, you pay as you go, with options to scale up or out for greater power with no interruption. SQL Database has some additional features that are not available in SQL Server, such as built-in high availability, intelligence, and management.
Azure SQL Database offers the following deployment options:
As a single database with its own set of resources managed via a logical SQL server. A single database is
similar to a contained database in SQL Server. This option is optimized for modern application development
of new cloud-born applications. Hyperscale and serverless options are available.
An elastic pool, which is a collection of databases with a shared set of resources managed via a logical server.
Single databases can be moved into and out of an elastic pool. This option is optimized for modern
application development of new cloud-born applications using the multi-tenant SaaS application pattern.
Elastic pools provide a cost-effective solution for managing the performance of multiple databases that have
variable usage patterns.
Azure SQL Managed Instance
Azure SQL Managed Instance falls into the industry category of Platform-as-a-Service (PaaS), and is best for
most migrations to the cloud. SQL Managed Instance is a collection of system and user databases with a shared
set of resources that is lift-and-shift ready.
Best for new applications or existing on-premises applications that want to use the latest stable SQL Server
features and that are migrated to the cloud with minimal changes. An instance of SQL Managed Instance is
similar to an instance of the Microsoft SQL Server database engine offering shared resources for databases
and additional instance-scoped features.
SQL Managed Instance supports database migration from on-premises with minimal to no database change.
This option provides all of the PaaS benefits of Azure SQL Database but adds capabilities that were
previously only available in SQL Server VMs. This includes a native virtual network and near 100%
compatibility with on-premises SQL Server. Instances of SQL Managed Instance provide full SQL Server
access and feature compatibility for migrating SQL Servers to Azure.
SQL Server on Azure VM
SQL Server on Azure VM falls into the industry category Infrastructure-as-a-Service (IaaS) and allows you to
run SQL Server inside a fully managed virtual machine (VM) in Azure.
SQL Server installed and hosted in the cloud runs on Windows Server or Linux virtual machines running on
Azure, also known as an infrastructure as a service (IaaS). SQL virtual machines are a good option for
migrating on-premises SQL Server databases and applications without any database change. All recent
versions and editions of SQL Server are available for installation in an IaaS virtual machine.
Best for migrations and applications requiring OS-level access. SQL virtual machines in Azure are lift-and-
shift ready for existing applications that require fast migration to the cloud with minimal changes or no
changes. SQL virtual machines offer full administrative control over the SQL Server instance and underlying
OS for migration to Azure.
The most significant difference from SQL Database and SQL Managed Instance is that SQL Server on Azure
Virtual Machines allows full control over the database engine. You can choose when to start
maintenance/patching, change the recovery model to simple or bulk-logged, pause or start the service when
needed, and you can fully customize the SQL Server database engine. With this additional control comes the
added responsibility to manage the virtual machine.
Rapid development and test scenarios when you do not want to buy on-premises non-production SQL
Server hardware. SQL virtual machines also run on standardized hardware that is owned, hosted, and
maintained by Microsoft. When using SQL virtual machines, you can either pay-as-you-go for a SQL Server
license already included in a SQL Server image or easily use an existing license. You can also stop or resume
the VM as needed.
Optimized for migrating existing applications to Azure or extending existing on-premises applications to the
cloud in hybrid deployments. In addition, you can use SQL Server in a virtual machine to develop and test
traditional SQL Server applications. With SQL virtual machines, you have the full administrative rights over a
dedicated SQL Server instance and a cloud-based VM. It is a perfect choice when an organization already has
IT resources available to maintain the virtual machines. These capabilities allow you to build a highly
customized system to address your application’s specific performance and availability requirements.
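As an illustration of the engine-level control described above (recovery model, maintenance, instance configuration), the following T-SQL sketch is only an example; the database name and the choice of trace flag are placeholders, not prescriptions from this article.

```sql
-- Full engine control available on SQL Server on Azure VM
-- (not possible in SQL Database or SQL Managed Instance):

-- Switch a database to the SIMPLE recovery model, for example during a bulk-load window.
ALTER DATABASE [StagingDB] SET RECOVERY SIMPLE;

-- Enable a global trace flag to change instance-level behavior.
DBCC TRACEON (3226, -1);   -- suppress successful-backup messages in the error log
```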
Comparison table
Additional differences are listed in the following table, but both SQL Database and SQL Managed Instance are
optimized to reduce overall management costs to a minimum for provisioning and managing many databases.
Ongoing administration costs are reduced since you do not have to manage any virtual machines, operating
system, or database software. You do not have to manage upgrades, high availability, or backups.
In general, SQL Database and SQL Managed Instance can dramatically increase the number of databases
managed by a single IT or development resource. Elastic pools also support SaaS multi-tenant application
architectures with features including tenant isolation and the ability to scale to reduce costs by sharing
resources across databases. SQL Managed Instance provides support for instance-scoped features enabling
easy migration of existing applications, as well as sharing resources among databases. SQL Server on Azure VMs, in contrast, provides DBAs with an experience most similar to the on-premises environment they're familiar with.
| Azure SQL Database | Azure SQL Managed Instance | SQL Server on Azure VM |
|---|---|---|
| Supports most on-premises database-level capabilities. The most commonly used SQL Server features are available. 99.995% availability guaranteed. Built-in backups, patching, recovery. Latest stable Database Engine version. Ability to assign necessary resources (CPU/storage) to individual databases. Built-in advanced intelligence and security. Online change of resources (CPU/storage). | Supports almost all on-premises instance-level and database-level capabilities. High compatibility with SQL Server. 99.99% availability guaranteed. Built-in backups, patching, recovery. Latest stable Database Engine version. Easy migration from SQL Server. Private IP address within Azure Virtual Network. Built-in advanced intelligence and security. Online change of resources (CPU/storage). | You have full control over the SQL Server engine. Supports all on-premises capabilities. Up to 99.99% availability. Full parity with the matching version of on-premises SQL Server. Fixed, well-known Database Engine version. Easy migration from SQL Server. Private IP address within Azure Virtual Network. You have the ability to deploy applications or services on the host where SQL Server is placed. |
| Migration from SQL Server might be challenging. Some SQL Server features are not available. Configurable maintenance windows. Compatibility with the SQL Server version can be achieved only using database compatibility levels. Private IP address support with Azure Private Link. | There is still a minimal number of SQL Server features that are not available. Configurable maintenance windows. Compatibility with the SQL Server version can be achieved only using database compatibility levels. | You may use manual or automated backups. You need to implement your own high-availability solution. There is downtime while changing resources (CPU/storage). |
| On-premises applications can access data in Azure SQL Database. | Native virtual network implementation and connectivity to your on-premises environment using Azure ExpressRoute or VPN Gateway. | With SQL virtual machines, you can have applications that run partly in the cloud and partly on-premises. For example, you can extend your on-premises network and Active Directory Domain to the cloud via Azure Virtual Network. For more information on hybrid cloud solutions, see Extending on-premises data solutions to the cloud. |
Cost
Whether you're a startup that is strapped for cash, or a team in an established company that operates under
tight budget constraints, limited funding is often the primary driver when deciding how to host your databases.
In this section, you learn about the billing and licensing basics in Azure associated with the Azure SQL family of
services. You also learn about calculating the total application cost.
Billing and licensing basics
Currently, both SQL Database and SQL Managed Instance are sold as a service and are available with
several options and in several service tiers with different prices for resources, all of which are billed hourly at a
fixed rate based on the service tier and compute size you choose. For the latest information on the current
supported service tiers, compute sizes, and storage amounts, see DTU-based purchasing model for SQL
Database and vCore-based purchasing model for both SQL Database and SQL Managed Instance.
With SQL Database, you can choose a service tier that fits your needs, with a wide range of prices starting at $5/month for the Basic tier, and you can create elastic pools to share resources among databases to reduce costs and accommodate usage spikes.
With SQL Managed Instance, you can also bring your own license. For more information on bring-your-own
licensing, see License Mobility through Software Assurance on Azure or use the Azure Hybrid Benefit
calculator to see how to save up to 40%.
In addition, you are billed for outgoing Internet traffic at regular data transfer rates. You can dynamically adjust
service tiers and compute sizes to match your application’s varied throughput needs.
With SQL Database and SQL Managed Instance , the database software is automatically configured, patched,
and upgraded by Azure, which reduces your administration costs. In addition, its built-in backup capabilities help
you achieve significant cost savings, especially when you have a large number of databases.
With SQL on Azure VMs , you can use any of the platform-provided SQL Server images (which includes a
license) or bring your SQL Server license. All the supported SQL Server versions (2008R2, 2012, 2014, 2016,
2017, 2019) and editions (Developer, Express, Web, Standard, Enterprise) are available. In addition, Bring-Your-
Own-License versions (BYOL) of the images are available. When using the Azure provided images, the
operational cost depends on the VM size and the edition of SQL Server you choose. Regardless of VM size or
SQL Server edition, you pay per-minute licensing cost of SQL Server and the Windows or Linux Server, along
with the Azure Storage cost for the VM disks. The per-minute billing option allows you to use SQL Server for as
long as you need without buying additional SQL Server licenses. If you bring your own SQL Server license to
Azure, you are charged for server and storage costs only. For more information on bring-your-own licensing,
see License Mobility through Software Assurance on Azure. In addition, you are billed for outgoing Internet
traffic at regular data transfer rates.
Calculating the total application cost
When you start using a cloud platform, the cost of running your application includes the cost for new
development and ongoing administration costs, plus the public cloud platform service costs.
For more information on pricing, see the following resources:
SQL Database & SQL Managed Instance pricing
Virtual machine pricing for SQL and for Windows
Azure Pricing Calculator
Administration
For many businesses, the decision to transition to a cloud service is as much about offloading complexity of
administration as it is cost. With IaaS and PaaS, Azure administers the underlying infrastructure and
automatically replicates all data to provide disaster recovery, configures and upgrades the database software,
manages load balancing, and does transparent failover if there is a server failure within a data center.
With SQL Database and SQL Managed Instance , you can continue to administer your database, but you
no longer need to manage the database engine, the operating system, or the hardware. Examples of items
you can continue to administer include databases and logins, index and query tuning, and auditing and
security. Additionally, configuring high availability to another data center requires minimal configuration and
administration.
With SQL on Azure VM , you have full control over the operating system and SQL Server instance
configuration. With a VM, it's up to you to decide when to update/upgrade the operating system and
database software and when to install any additional software such as anti-virus. Some automated features
are provided to dramatically simplify patching, backup, and high availability. In addition, you can control the
size of the VM, the number of disks, and their storage configurations. Azure allows you to change the size of
a VM as needed. For information, see Virtual Machine and Cloud Service Sizes for Azure.
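As an example of the administration that remains with you in SQL Database and SQL Managed Instance, logins and users are still managed with T-SQL. The following sketch is illustrative only; the names and password are placeholders.

```sql
-- Run in the master database of the logical server (Azure SQL Database):
CREATE LOGIN app_login WITH PASSWORD = '<strong password>';

-- Run in the user database: map a user to the login,
CREATE USER app_user FOR LOGIN app_login;

-- or create a contained database user without a server login.
CREATE USER reporting_user WITH PASSWORD = '<strong password>';
ALTER ROLE db_datareader ADD MEMBER reporting_user;
```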
Create and manage Azure SQL resources with the Azure portal
The Azure portal provides a single page where you can manage all of your Azure SQL resources including your
SQL Server on Azure virtual machines (VMs).
To access the Azure SQL page, from the Azure portal menu, select Azure SQL or search for and select Azure
SQL in any page.
NOTE
Azure SQL provides a quick and easy way to access all of your SQL resources in the Azure portal, including single and
pooled databases in Azure SQL Database as well as the logical server hosting them, SQL Managed Instances, and SQL
Server on Azure VMs. Azure SQL is not a service or resource, but rather a family of SQL-related services.
To manage existing resources, select the desired item in the list. To create new Azure SQL resources, select +
Create .
After selecting + Create , view additional information about the different options by selecting Show details on
any tile.
For details, see:
Create a single database
Create an elastic pool
Create a managed instance
Create a SQL virtual machine
Next steps
See Your first Azure SQL Database to get started with SQL Database.
See Your first Azure SQL Managed Instance to get started with SQL Managed Instance.
See SQL Database pricing.
See Azure SQL Managed Instance pricing.
See Provision a SQL Server virtual machine in Azure to get started with SQL Server on Azure VMs.
Identify the right SQL Database or SQL Managed Instance SKU for your on-premises database.
Migrate SQL Server workloads (FAQ)
Applies to: SQL Server Azure SQL Database Azure SQL Managed Instance SQL Server on
Azure VM
Migrating on-premises SQL Server workloads and associated applications to the cloud usually brings a wide
range of questions which go beyond mere product feature information.
This article provides a holistic view and helps you understand how to fully unlock the value when migrating to Azure
SQL. The Modernize applications and SQL section covers questions about Azure SQL in general as well as
common application and SQL modernization scenarios. The Business and technical evaluation section
covers cost saving, licensing, minimizing migration risk, business continuity, security, workloads and
architecture, performance and similar business and technical evaluation questions. The last section covers the
actual Migration and modernization process , including guidance on migration tools.
See also
Frequently asked questions for SQL Server on Azure VMs
Azure SQL Managed Instance frequently asked questions (FAQ)
Azure SQL Database Hyperscale FAQ
Azure Hybrid Benefit FAQ
vCore purchasing model overview - Azure SQL
Database and Azure SQL Managed Instance
Overview
A virtual core (vCore) represents a logical CPU and offers you the option to choose the physical characteristics
of the hardware (for example, the number of cores, the memory, and the storage size). The vCore-based
purchasing model gives you flexibility, control, transparency of individual resource consumption, and a
straightforward way to translate on-premises workload requirements to the cloud. This model optimizes price,
and allows you to choose compute, memory, and storage resources based on your workload needs.
In the vCore-based purchasing model, your costs depend on the choice and usage of:
Service tier
Hardware configuration
Compute resources (the number of vCores and the amount of memory)
Reserved database storage
Actual backup storage
IMPORTANT
In Azure SQL Database, compute resources (CPU and memory), I/O, and data and log storage are charged per database
or elastic pool. Backup storage is charged for each database.
The vCore purchasing model provides transparency in database CPU, memory, and storage resource allocation,
hardware configuration, higher scaling granularity, and pricing discounts with the Azure Hybrid Benefit (AHB)
and Reserved Instance (RI).
In the case of Azure SQL Database, the vCore purchasing model provides higher compute, memory, I/O, and
storage limits than the DTU model.
Service tiers
Two vCore service tiers are available in both Azure SQL Database and Azure SQL Managed Instance:
General Purpose is a budget-friendly tier designed for most workloads with common performance and availability requirements.
The Business Critical tier is designed for performance-sensitive workloads with strict availability requirements.
The Hyperscale service tier is also available for single databases in Azure SQL Database. This service tier is
designed for most business workloads, providing highly scalable storage, read scale-out, fast scaling, and fast
database restore capabilities.
Resource limits
For more information on resource limits, see:
Azure SQL Database: logical server, single databases, pooled databases
Azure SQL Managed Instance
Compute cost
The vCore-based purchasing model has a provisioned compute tier for both Azure SQL Database and Azure
SQL Managed Instance, and a serverless compute tier for Azure SQL Database.
In the provisioned compute tier, the compute cost reflects the total compute capacity continuously provisioned
for the application independent of workload activity. Choose the resource allocation that best suits your
business needs based on vCore and memory requirements, then scale resources up and down as needed by
your workload.
In the serverless compute tier for Azure SQL Database, compute resources are auto-scaled based on workload
capacity and billed for the amount of compute used, per second.
Since three additional replicas are automatically allocated in the Business Critical service tier, the price is
approximately 2.7 times higher than it is in the General Purpose service tier. Likewise, the higher storage price
per GB in the Business Critical service tier reflects the higher IO limits and lower latency of the local SSD storage.
TIP
Under some circumstances, you may need to shrink a database to reclaim unused space. For more information, see
Manage file space in Azure SQL Database.
Backup storage
Storage for database backups is allocated to support the point-in-time restore (PITR) and long-term retention
(LTR) capabilities of SQL Database and SQL Managed Instance. This storage is separate from data and log file
storage, and is billed separately.
PITR : In General Purpose and Business Critical tiers, individual database backups are copied to Azure storage
automatically. The storage size increases dynamically as new backups are created. The storage is used by full,
differential, and transaction log backups. The storage consumption depends on the rate of change of the
database and the retention period configured for backups. You can configure a separate retention period for
each database between 1 and 35 days for SQL Database, and 0 to 35 days for SQL Managed Instance. A
backup storage amount equal to the configured maximum data size is provided at no extra charge.
LTR : You also have the option to configure long-term retention of full backups for up to 10 years. If you set
up an LTR policy, these backups are stored in Azure Blob storage automatically, but you can control how
often the backups are copied. To meet different compliance requirements, you can select different retention
periods for weekly, monthly, and/or yearly backups. The configuration you choose determines how much
storage will be used for LTR backups. For more information, see Long-term backup retention.
Next steps
To get started, see:
Creating a SQL Database using the Azure portal
Creating a SQL Managed Instance using the Azure portal
For pricing details, see
Azure SQL Database pricing page
Azure SQL Managed Instance single instance pricing page
Azure SQL Managed Instance pools pricing page
For details about the specific compute and storage sizes available in the General Purpose and Business Critical
service tiers, see:
vCore-based resource limits for Azure SQL Database.
vCore-based resource limits for pooled Azure SQL Database.
vCore-based resource limits for Azure SQL Managed Instance.
Azure Hybrid Benefit - Azure SQL Database & SQL
Managed Instance
Overview
(Diagram: vCore pricing structure for SQL Database. 'License Included' pricing is made up of base compute and SQL license components. Azure Hybrid Benefit pricing is made up of base compute and Software Assurance components.)
With Azure Hybrid Benefit, you pay only for the underlying Azure infrastructure by using your existing SQL
Server license for the SQL Server database engine itself (Base Compute pricing). If you do not use Azure Hybrid
Benefit, you pay for both the underlying infrastructure and the SQL Server license (License-Included pricing).
For Azure SQL Database, Azure Hybrid Benefit is only available when using the provisioned compute tier of the
vCore-based purchasing model. Azure Hybrid Benefit doesn't apply to DTU-based purchasing models or the
serverless compute tier.
SQL Server Enterprise Edition core customers with SA: can pay the base rate on the Hyperscale, General Purpose, or Business Critical SKU.
SQL Server Standard Edition core customers with SA: can pay the base rate on the Hyperscale, General Purpose, or Business Critical SKU.
Next steps
For help with choosing an Azure SQL deployment option, see Service comparison.
For a comparison of SQL Database and SQL Managed Instance features, see Features of SQL Database and
SQL Managed Instance.
Save costs for resources with reserved capacity -
Azure SQL Database & SQL Managed Instance
NOTE
Purchasing reserved capacity does not pre-allocate or reserve specific infrastructure resources (virtual machines or nodes)
for your use.
Deployment Type: the SQL resource type that you want to buy the reservation for.
Performance Tier: the service tier for the databases or managed instances.
Limitation
You cannot reserve DTU-based (basic, standard, or premium) databases in SQL Database. Reserved capacity
pricing is only supported for features and products that are in General Availability state.
Next steps
The vCore reservation discount is applied automatically to the number of databases or managed instances that
match the capacity reservation scope and attributes. You can update the scope of the capacity reservation
through the Azure portal, PowerShell, Azure CLI, or the API.
For information on Azure SQL Database service tiers for the vCore model, see vCore model overview - Azure
SQL Database.
For information on Azure SQL Managed Instance service tiers for the vCore model, see vCore model
overview - Azure SQL Managed Instance.
To learn how to manage the capacity reservation, see manage reserved capacity.
To learn more about Azure Reservations, see the following articles:
What are Azure Reservations?
Manage Azure Reservations
Understand Azure Reservations discount
Understand reservation usage for your Pay-As-You-Go subscription
Understand reservation usage for your Enterprise enrollment
Azure Reservations in Partner Center Cloud Solution Provider (CSP) program
General Purpose service tier - Azure SQL Database
and Azure SQL Managed Instance
Overview
The architectural model for the General Purpose service tier is based on a separation of compute and storage.
This architectural model relies on the high availability and reliability of Azure Blob storage, which transparently replicates database files and guarantees no data loss if an underlying infrastructure failure happens. The standard architectural model has four nodes with separated compute and storage layers.
In the architectural model for the General Purpose service tier, there are two layers:
A stateless compute layer that is running the sqlservr.exe process and contains only transient and cached
data (for example – plan cache, buffer pool, column store pool). This stateless node is operated by Azure
Service Fabric, which initializes the process, controls the health of the node, and performs failover to another place if
necessary.
A stateful data layer with database files (.mdf/.ldf) that are stored in Azure Blob storage. Azure Blob storage
guarantees that there will be no data loss of any record placed in any database file. Azure Storage has built-in data availability and redundancy that ensure every record in the log file or page in the data file is preserved even if the process crashes.
Whenever the database engine or operating system is upgraded, part of the underlying infrastructure fails, or a critical issue is detected in the sqlservr.exe process, Azure Service Fabric moves the stateless process to another stateless compute node. A set of spare nodes waits to run the new compute service if a failover of the primary node happens, in order to minimize failover time. Data in the Azure Storage layer is not affected, and data/log files are attached to the newly initialized process. This process guarantees 99.99% availability by default and 99.995% availability when zone redundancy is enabled. There may be some performance impact on heavy workloads during the transition because the new node starts with a cold cache.
| Category | Azure SQL Database | Azure SQL Managed Instance |
|---|---|---|
| Storage size | 1 GB - 4 TB | 2 GB - 16 TB |
| Log write throughput | Single databases: 4.5 MB/s per vCore (max 50 MB/s). Elastic pools: 6 MB/s per vCore (max 62.5 MB/s) | General Purpose: 3 MB/s per vCore (max 120 MB/s). Business Critical: 4 MB/s per vCore (max 96 MB/s) |
Next steps
Find resource characteristics (number of cores, I/O, memory) of the General Purpose/standard tier in SQL
Managed Instance, single database in vCore model or DTU model, or elastic pool in vCore model and DTU
model.
Learn about Business Critical and Hyperscale service tiers.
Learn about Service Fabric.
For more options for high availability and disaster recovery, see Business Continuity.
Business Critical tier - Azure SQL Database and
Azure SQL Managed Instance
Overview
The Business Critical service tier model is based on a cluster of database engine processes. This architectural
model relies on the fact that there's always a quorum of available database engine nodes, and it has minimal performance impact on your workload even during maintenance activities.
Azure upgrades and patches the underlying operating system, drivers, and SQL Server database engine transparently, with minimal downtime for end users.
In the Business Critical model, compute and storage is integrated on each node. High availability is achieved by
replication of data between database engine processes on each node of a four node cluster, with each node
using locally attached SSD as data storage. This technology is similar to SQL Server Always On availability
groups.
Both the SQL Server database engine process and underlying .mdf/.ldf files are placed on the same node with
locally attached SSD storage providing low latency to your workload. High availability is implemented using
technology similar to SQL Server Always On availability groups. Every database is a cluster of database nodes
with one primary database that is accessible for customer workloads, and three secondary processes
containing copies of data. The primary node constantly pushes changes to the secondary nodes in order to
ensure that the data is available on secondary replicas if the primary node fails for any reason. Failover is
handled by the SQL Server database engine – one secondary replica becomes the primary node and a new
secondary replica is created to ensure there are enough nodes in the cluster. The workload is automatically
redirected to the new primary node.
In addition, the Business Critical cluster has built-in Read Scale-Out capability that provides a free-of-charge built-
in read-only replica that can be used to run read-only queries (for example reports) that shouldn't affect
performance of your primary workload.
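Clients typically reach the read-only replica by adding ApplicationIntent=ReadOnly to the connection string. The following illustrative snippet shows one way to confirm, once connected, which kind of replica the session reached; it is a sketch, not part of the original table that follows.

```sql
-- After connecting with ApplicationIntent=ReadOnly in the connection string,
-- verify whether the session landed on a read-only replica.
SELECT DATABASEPROPERTYEX(DB_NAME(), 'Updateability');
-- Returns READ_ONLY on the secondary replica and READ_WRITE on the primary.
```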
| Category | Azure SQL Database | Azure SQL Managed Instance |
|---|---|---|
| Compute size | 1 to 128 vCores | 4, 8, 16, 24, 32, 40, 64, 80 vCores |
| Storage size | 1 GB – 4 TB | 32 GB – 16 TB |
| Log write throughput | Single databases: 12 MB/s per vCore (max 96 MB/s). Elastic pools: 15 MB/s per vCore (max 120 MB/s) | 4 MB/s per vCore (max 48 MB/s) |
| Backups | RA-GRS, 1-35 days (7 days by default) | RA-GRS, 1-35 days (7 days by default) |
| Read-only replicas | 1 built-in high availability replica is readable. 0 - 4 geo-replicas | 1 built-in high availability replica is readable. 0 - 1 geo-replicas using auto-failover groups |
Next steps
Find resource characteristics (number of cores, I/O, memory) of Business Critical tier in SQL Managed
Instance, Single database in vCore model or DTU model, or Elastic pool in vCore model and DTU model.
Learn about General Purpose and Hyperscale service tiers.
Learn about Service Fabric.
For more options for high availability and disaster recovery, see Business Continuity.
Features comparison: Azure SQL Database and
Azure SQL Managed Instance
| Feature | Azure SQL Database | Azure SQL Managed Instance |
|---|---|---|
| Always Encrypted | Yes - see Cert store and Key vault | Yes - see Cert store and Key vault |
| Attach a database | No | No |
| Azure Active Directory (Azure AD) authentication | Yes. Azure AD users only. | Yes. Including server-level Azure AD logins. |
| BACKUP command | No, only system-initiated automatic backups - see Automated backups | Yes, user-initiated copy-only backups to Azure Blob storage (automatic system backups can't be initiated by user) - see Backup differences |
| Built-in functions | Most - see individual functions | Yes - see Stored procedures, functions, triggers differences |
| BULK INSERT statement | Yes, but just from Azure Blob storage as a source. | Yes, but just from Azure Blob Storage as a source - see differences. |
| Certificates and asymmetric keys | Yes, without access to file system for BACKUP and CREATE operations. | Yes, without access to file system for BACKUP and CREATE operations - see certificate differences. |
| Change data capture - CDC | Yes, for S3 tier and above. Basic, S0, S1, S2 are not supported. | Yes |
| Collation - server/instance | No, default server collation SQL_Latin1_General_CP1_CI_AS is always used. | Yes, can be set when the instance is created and can't be updated later. |
| Common language runtime - CLR | No | Yes, but without access to file system in CREATE ASSEMBLY statement - see CLR differences |
| Credentials | Yes, but only database scoped credentials. | Yes, but only Azure Key Vault and SHARED ACCESS SIGNATURE are supported - see details |
| Database mirroring | No | No |
| Database snapshots | No | No |
| DBCC statements | Most - see individual statements | Yes - see DBCC differences |
| DDL statements | Most - see individual statements | Yes - see T-SQL differences |
| Elastic query (in public preview) | Yes, with required RDBMS type. | No, use native cross-DB queries and Linked Server instead |
| Extended events (XEvent) | Some - see Extended events in SQL Database | Yes - see Extended events differences |
| Files and file groups | Primary file group only | Yes. File paths are automatically assigned and the file location can't be specified in the ALTER DATABASE ADD FILE statement. |
| Filestream | No | No |
| Full-text search (FTS) | Yes, but third-party word breakers are not supported | Yes, but third-party word breakers are not supported |
| Functions | Most - see individual functions | Yes - see Stored procedures, functions, triggers differences |
| In-memory optimization | Yes in Premium and Business Critical service tiers. Limited support for non-persistent In-Memory OLTP objects such as memory-optimized table variables in Hyperscale service tier. | Yes in Business Critical service tier |
| Language elements | Most - see individual elements | Yes - see T-SQL differences |
| Ledger | Yes | No |
| Linked servers | No - see Elastic query | Yes. Only to SQL Server and SQL Database without distributed transactions. |
| Linked servers that read from files (CSV, Excel) | No. Use BULK INSERT or OPENROWSET as an alternative for CSV format. | No. Use BULK INSERT or OPENROWSET as an alternative for CSV format. Track these requests on SQL Managed Instance feedback item |
| Log shipping | High availability is included with every database. Disaster recovery is discussed in Overview of business continuity. | Natively built in as a part of the Azure Data Migration Service (DMS) migration process. Natively built for custom data migration projects as an external Log Replay Service (LRS). Not available as a high availability solution, because other high availability methods are included with every database and it is not recommended to use log shipping as an HA alternative. Disaster recovery is discussed in Overview of business continuity. Not available as a replication mechanism between databases - use secondary replicas on Business Critical tier, auto-failover groups, or transactional replication as the alternatives. |
| Logins and users | Yes, but CREATE and ALTER login statements do not offer all the options (no Windows and server-level Azure Active Directory logins). EXECUTE AS LOGIN is not supported - use EXECUTE AS USER instead. | Yes, with some differences. Windows logins are not supported and they should be replaced with Azure Active Directory logins. |
| Minimal logging in bulk import | No, only Full Recovery model is supported. | No, only Full Recovery model is supported. |
| OLE Automation | No | No |
| OPENROWSET | Yes, only to import from Azure Blob storage. | Yes, only to SQL Database, SQL Managed Instance and SQL Server, and to import from Azure Blob storage. See T-SQL differences |
| Polybase | No. You can query data in the files placed on Azure Blob Storage using the OPENROWSET function or use an external table that references a serverless SQL pool in Synapse Analytics. | Yes, for Azure Data Lake Storage (ADLS) and Azure Blob Storage as data source. See Data Virtualization with Azure SQL Managed Instance for more details. |
| Recovery models | Only Full Recovery that guarantees high availability is supported. Simple and Bulk Logged recovery models are not available. | Only Full Recovery that guarantees high availability is supported. Simple and Bulk Logged recovery models are not available. |
| Restore database from backup | From automated backups only - see SQL Database recovery | From automated backups - see SQL Database recovery, and from full backups placed on Azure Blob Storage - see Backup differences |
| Restore database to SQL Server | No. Use BACPAC or BCP instead of native restore. | No, because the SQL Server database engine used in SQL Managed Instance has a higher version than any RTM version of SQL Server used on-premises. Use BACPAC, BCP, or Transactional replication instead. |
| Semantic search | No | No |
| Set statements | Most - see individual statements | Yes - see T-SQL differences |
| SQL Server Agent | No - see Elastic jobs (preview) | Yes - see SQL Server Agent differences |
| SQL Server Auditing | No - see SQL Database auditing | Yes - see Auditing differences |
| System stored functions | Most - see individual functions | Yes - see Stored procedures, functions, triggers differences |
| System stored procedures | Some - see individual stored procedures | Yes - see Stored procedures, functions, triggers differences |
| System tables | Some - see individual tables | Yes - see T-SQL differences |
| System catalog views | Some - see individual views | Yes - see T-SQL differences |
| TempDB | Yes. 32-GB size per core for every database. | Yes. 24-GB size per vCore for the entire GP tier, and limited by instance size on BC tier |
| Temporary tables | Local and database-scoped global temporary tables | Local and instance-scoped global temporary tables |
| Transactional Replication | Yes, Transactional and snapshot replication subscriber only | Yes, in public preview. See the constraints here. |
| Windows Server Failover Clustering | No. Other techniques that provide high availability are included with every database. Disaster recovery is discussed in Overview of business continuity with Azure SQL Database. | No. Other techniques that provide high availability are included with every database. Disaster recovery is discussed in Overview of business continuity with Azure SQL Database. |
Platform capabilities
The Azure platform provides a number of PaaS capabilities that are added as additional value to the standard database features. There are also a number of external services that can be used with Azure SQL Database.
| Platform feature | Azure SQL Database | Azure SQL Managed Instance |
|---|---|---|
| Active geo-replication | Yes - all service tiers. | No, see Auto-failover groups as an alternative. |
| Auto-failover groups | Yes - all service tiers. | Yes, see Auto-failover groups. |
| Auto-scale | Yes, but only in serverless model. In the non-serverless model, the change of service tier (change of vCore, storage, or DTU) is fast and online. The service tier change requires minimal or no downtime. | No, you need to choose reserved compute and storage. The change of service tier (vCore or max storage) is online and requires minimal or no downtime. |
| Automatic backups | Yes. Full backups are taken every 7 days, differential 12 hours, and log backups every 5-10 min. | Yes. Full backups are taken every 7 days, differential 12 hours, and log backups every 5-10 min. |
| Short-term backup retention | Yes. 7 days default, max 35 days. | Yes. 7 days default, max 35 days. |
| Elastic jobs | Yes - see Elastic jobs (preview) | No (SQL Agent can be used instead). |
| File system access | No. Use BULK INSERT or OPENROWSET to access and load data from Azure Blob Storage as an alternative. | No. Use BULK INSERT or OPENROWSET to access and load data from Azure Blob Storage as an alternative. |
| Long-term backup retention - LTR | Yes, keep automatically taken backups up to 10 years. Long-term retention policies are not yet supported for Hyperscale databases. | Yes, keep automatically taken backups up to 10 years. |
| Policy-based management | No | No |
| Public IP address | Yes. The access can be restricted using firewall or service endpoints. | Yes. Needs to be explicitly enabled and port 3342 must be enabled in NSG rules. Public IP can be disabled if needed. See Public endpoint for more details. |
| Point in time database restore | Yes - all service tiers. See SQL Database recovery | Yes - see SQL Database recovery |
| Resource pools | Yes, as Elastic pools | Yes. A single instance of SQL Managed Instance can have multiple databases that share the same pool of resources. In addition, you can deploy multiple instances of SQL Managed Instance in instance pools (preview) that can share the resources. |
| Scaling up or down (online) | Yes, you can either change DTU or reserved vCores or max storage with minimal downtime. | Yes, you can change reserved vCores or max storage with minimal downtime. |
| SQL Alias | No, use DNS Alias | No, use Cliconfg to set up an alias on the client machines. |
| SQL Server Analysis Services (SSAS) | No, Azure Analysis Services is a separate Azure cloud service. | No, Azure Analysis Services is a separate Azure cloud service. |
| SQL Server Integration Services (SSIS) | Yes, with a managed SSIS in Azure Data Factory (ADF) environment, where packages are stored in SSISDB hosted by Azure SQL Database and executed on Azure-SSIS Integration Runtime (IR), see Create Azure-SSIS IR in ADF. To compare the SSIS features in SQL Database and SQL Managed Instance, see Compare SQL Database to SQL Managed Instance. | Yes, with a managed SSIS in Azure Data Factory (ADF) environment, where packages are stored in SSISDB hosted by SQL Managed Instance and executed on Azure-SSIS Integration Runtime (IR), see Create Azure-SSIS IR in ADF. To compare the SSIS features in SQL Database and SQL Managed Instance, see Compare SQL Database to SQL Managed Instance. |
| SQL Server Reporting Services (SSRS) | No - see Power BI | No - use Power BI paginated reports instead or host SSRS on an Azure VM. While SQL Managed Instance cannot run SSRS as a service, it can host SSRS catalog databases for a reporting server installed on an Azure Virtual Machine, using SQL Server authentication. |
| Query Performance Insights (QPI) | Yes | No. Use built-in reports in SQL Server Management Studio and Azure Data Studio. |
| VNet | Partial, it enables restricted access using VNet Endpoints | Yes, SQL Managed Instance is injected in the customer's VNet. See subnet and VNet |
| VNet Global peering | Yes, using Private IP and service endpoints | Yes, using Virtual network peering. |
Tools
Azure SQL Database and Azure SQL Managed Instance support various data tools that can help you manage
your data.
| Tool | Azure SQL Database | Azure SQL Managed Instance |
|---|---|---|
| BACPAC file (export) | Yes - see SQL Database export | Yes - see SQL Managed Instance export |
| BACPAC file (import) | Yes - see SQL Database import | Yes - see SQL Managed Instance import |
| Master Data Services (MDS) | No | No. Host MDS on an Azure VM. While SQL Managed Instance cannot run MDS as a service, it can host MDS databases for an MDS service installed on an Azure Virtual Machine, using SQL Server authentication. |
| SQL Server Management Studio (SSMS) | Yes | Yes, version 18.0 and higher |
Migration methods
You can use different migration methods to move your data between SQL Server, Azure SQL Database and
Azure SQL Managed Instance. Some methods are online, picking up all the changes made on the source while the migration is running, while with offline methods you need to stop the workload that is modifying data on the source while the migration is in progress.
| Source | Azure SQL Database | Azure SQL Managed Instance |
|---|---|---|
| SQL Server (on-premises, Azure VM, Amazon RDS) | Online: Transactional Replication. Offline: Data Migration Service (DMS), BACPAC file (import), BCP | Online: Data Migration Service (DMS), Transactional Replication. Offline: Native backup/restore, BACPAC file (import), BCP, Snapshot replication |
| Single database | Offline: BACPAC file (import), BCP | Offline: BACPAC file (import), BCP |
Next steps
Microsoft continues to add features to Azure SQL Database. Visit the Service Updates webpage for Azure for the
newest updates using these filters:
Filtered to Azure SQL Database.
Filtered to General Availability (GA) announcements for SQL Database features.
For more information about Azure SQL Database and Azure SQL Managed Instance, see:
What is Azure SQL Database?
What is Azure SQL Managed Instance?
What is an Azure SQL Managed Instance pool?
Multi-model capabilities of Azure SQL Database
and SQL Managed Instance
NOTE
You can use JSONPath expressions, XQuery/XPath expressions, spatial functions, and graph query expressions in the same
Transact-SQL query to access any data that you stored in the database. Any tool or programming language that can
execute Transact-SQL queries can also use that query interface to access multi-model data. This is the key difference from
multi-model databases such as Azure Cosmos DB, which provide specialized APIs for data models.
Graph features
Azure SQL products offer graph database capabilities to model many-to-many relationships in a database. A
graph is a collection of nodes (or vertices) and edges (or relationships). A node represents an entity (for
example, a person or an organization). An edge represents a relationship between the two nodes that it connects
(for example, likes or friends).
Here are some features that make a graph database unique:
Edges are first-class entities in a graph database. They can have attributes or properties associated with them.
A single edge can flexibly connect multiple nodes in a graph database.
You can express pattern matching and multi-hop navigation queries easily.
You can express transitive closure and polymorphic queries easily.
Graph relationships and graph query capabilities are integrated into Transact-SQL and receive the benefits of
using the SQL Server database engine as the foundational database management system. Graph features use
standard Transact-SQL queries enhanced with the graph MATCH operator to query the graph data.
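As a hedged example (the tables and data below are illustrative, not taken from this article), node and edge tables and a MATCH query look roughly like this:

```sql
-- Node and edge tables
CREATE TABLE Person (ID INT PRIMARY KEY, Name NVARCHAR(100)) AS NODE;
CREATE TABLE Friends (StartDate DATE) AS EDGE;

-- Find the friends of Alice with a graph pattern inside a standard T-SQL query
SELECT p2.Name
FROM Person AS p1, Friends, Person AS p2
WHERE MATCH(p1-(Friends)->p2)
  AND p1.Name = N'Alice';
```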
A relational database can achieve anything that a graph database can. However, a graph database can make it
easier to express certain queries. Your decision to choose one over the other can be based on the following
factors:
You need to model hierarchical data where one node can have multiple parents, so you can't use the
hierarchyId data type.
Your application has complex many-to-many relationships. As the application evolves, new relationships are
added.
You need to analyze interconnected data and relationships.
You want to use graph-specific T-SQL search conditions such as SHORTEST_PATH.
JSON features
In Azure SQL products, you can parse and query data represented in JavaScript Object Notation (JSON) format,
and export your relational data as JSON text. JSON is a core feature of the SQL Server database engine.
JSON features enable you to put JSON documents in tables, transform relational data into JSON documents,
and transform JSON documents into relational data. You can use the standard Transact-SQL language enhanced
with JSON functions for parsing documents. You can also use non-clustered indexes, columnstore indexes, or
memory-optimized tables to optimize your queries.
JSON is a popular data format for exchanging data in modern web and mobile applications. JSON is also used
for storing semistructured data in log files or in NoSQL databases. Many REST web services return results
formatted as JSON text or accept data formatted as JSON.
Most Azure services have REST endpoints that return or consume JSON. These services include Azure Cognitive
Search, Azure Storage, and Azure Cosmos DB.
If you have JSON text, you can extract data from JSON or verify that JSON is properly formatted by using the
built-in functions JSON_VALUE, JSON_QUERY, and ISJSON. The other functions are:
JSON_MODIFY: Lets you update values inside JSON text.
OPENJSON: Can transform an array of JSON objects into a set of rows, for more advanced querying and
analysis. Any SQL query can be executed on the returned result set.
FOR JSON: Lets you format data stored in your relational tables as JSON text.
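A brief illustrative sketch of these functions (the JSON text and column names are made-up examples):

```sql
DECLARE @info NVARCHAR(MAX) =
    N'{"customer":{"name":"Contoso","orders":[{"id":1,"total":42.5}]}}';

SELECT ISJSON(@info)                            AS is_valid,
       JSON_VALUE(@info, '$.customer.name')     AS customer_name,
       JSON_QUERY(@info, '$.customer.orders')   AS orders_json;

-- Expand the JSON array into rows for further querying
SELECT o.id, o.total
FROM OPENJSON(@info, '$.customer.orders')
     WITH (id INT '$.id', total DECIMAL(10,2) '$.total') AS o;

-- Format relational data as JSON text
SELECT name, database_id FROM sys.databases FOR JSON PATH;
```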
XML features
XML features enable you to store and index XML data in your database and use native XQuery/XPath operations
to work with XML data. Azure SQL products have a specialized, built-in XML data type and query functions that
process XML data.
The SQL Server database engine provides a powerful platform for developing applications to manage
semistructured data. Support for XML is integrated into all the components of the database engine and includes:
The ability to store XML values natively in an XML data-type column that can be typed according to a
collection of XML schemas or left untyped. You can index the XML column.
The ability to specify an XQuery query against XML data stored in columns and variables of the XML type.
You can use XQuery functionalities in any Transact-SQL query that accesses a data model that you use in
your database.
Automatic indexing of all elements in XML documents by using the primary XML index. Or you can specify
the exact paths that should be indexed by using the secondary XML index.
OPENROWSET , which allows the bulk loading of XML data.
The ability to transform relational data into XML format.
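A minimal illustrative sketch of the XML data type and XQuery (the table and element names are examples only):

```sql
CREATE TABLE ProductCatalog (
    Id      INT IDENTITY PRIMARY KEY,
    Details XML   -- untyped XML column; it can also be typed with an XML schema collection
);

INSERT INTO ProductCatalog (Details)
VALUES (N'<product id="1"><name>Widget</name><price currency="USD">9.99</price></product>');

-- XQuery/XPath against the XML column
SELECT Details.value('(/product/name/text())[1]',  'NVARCHAR(50)')   AS ProductName,
       Details.value('(/product/price/text())[1]', 'DECIMAL(10,2)')  AS Price
FROM ProductCatalog
WHERE Details.exist('/product[@id="1"]') = 1;
```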
You can use document models instead of the relational models in some specific scenarios:
High normalization of the schema doesn't bring significant benefits because you access all the fields of the
objects at once, or you never update normalized parts of the objects. However, the normalized model
increases the complexity of your queries because you need to join a large number of tables to get the data.
You're working with applications that natively use XML documents for communication or data models, and
you don't want to introduce more layers that transform relational data into JSON and vice versa.
You need to simplify your data model by denormalizing child tables or Entity-Object-Value patterns.
You need to load or export data stored in XML format without an additional tool that parses the data.
Spatial features
Spatial data represents information about the physical location and shape of objects. These objects can be point
locations or more complex objects such as countries/regions, roads, or lakes.
Azure SQL supports two spatial data types:
The geometry type represents data in a Euclidean (flat) coordinate system.
The geography type represents data in a round-earth coordinate system.
Spatial features in Azure SQL enable you to store and query geometrical and geographical data. These spatial objects include Point, LineString, and Polygon. Azure SQL also provides specialized spatial indexes that you can use to improve the performance of your spatial queries.
Spatial support is a core feature of the SQL Server database engine.
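A brief illustrative sketch of both types (the coordinates and names are arbitrary examples):

```sql
-- geography: round-earth coordinates (latitude/longitude, SRID 4326)
DECLARE @seattle GEOGRAPHY = GEOGRAPHY::Point(47.6062, -122.3321, 4326);
DECLARE @redmond GEOGRAPHY = GEOGRAPHY::Point(47.6740, -122.1215, 4326);
SELECT @seattle.STDistance(@redmond) AS DistanceInMeters;

-- geometry: flat (Euclidean) coordinates
DECLARE @shape GEOMETRY = GEOMETRY::STGeomFromText('POLYGON((0 0, 10 0, 10 10, 0 10, 0 0))', 0);
SELECT @shape.STArea() AS Area;
```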
Key-value pairs
Azure SQL products don't have specialized types or structures that support key-value pairs, because key-value
structures can be natively represented as standard relational tables:
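For example, a minimal key-value table might look like the following sketch (the table and constraint names are illustrative; the CHECK constraint is the optional JSON validation described just below):

```sql
CREATE TABLE KeyValuePairs (
    [key]   NVARCHAR(256) PRIMARY KEY,
    [value] NVARCHAR(MAX)
        CONSTRAINT CK_value_is_json CHECK (ISJSON([value]) = 1)  -- optional: enforce JSON values
);
```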
You can customize this key-value structure to fit your needs without any constraints. As an example, the value
can be an XML document instead of the nvarchar(max) type. If the value is a JSON document, you can use a
CHECK constraint that verifies the validity of JSON content. You can put any number of values related to one key
in the additional columns. For example:
Add computed columns and indexes to simplify and optimize data access.
Define the table as a memory-optimized, schema-only table to get better performance.
For an example of how a relational model can be effectively used as a key-value pair solution in practice, see
How bwin is using SQL Server 2016 In-Memory OLTP to achieve unprecedented performance and scale. In this
case study, bwin used a relational model for its ASP.NET caching solution to achieve 1.2 million batches per
second.
Next steps
Multi-model capabilities are core SQL Server database engine features that are shared among Azure SQL
products. To learn more about these features, see these articles:
Graph processing with SQL Server and Azure SQL Database
JSON data in SQL Server
Spatial data in SQL Server
XML data in SQL Server
Key-value store performance in Azure SQL Database
Optimize performance by using in-memory
technologies in Azure SQL Database and Azure
SQL Managed Instance
Overview
Azure SQL Database and Azure SQL Managed Instance have the following in-memory technologies:
In-Memory OLTP increases the number of transactions per second and reduces latency for transaction
processing. Scenarios that benefit from In-Memory OLTP are: high-throughput transaction processing such
as trading and gaming, data ingestion from events or IoT devices, caching, data load, and temporary table
and table variable scenarios.
Clustered columnstore indexes reduce your storage footprint (up to 10 times) and improve performance for reporting and analytics queries. You can use them with fact tables in your data marts to fit more data in your database and improve performance. Also, you can use them with historical data in your operational database to archive and query up to 10 times more data.
Nonclustered columnstore indexes for HTAP help you to gain real-time insights into your business through
querying the operational database directly, without the need to run an expensive extract, transform, and load
(ETL) process and wait for the data warehouse to be populated. Nonclustered columnstore indexes allow fast
execution of analytics queries on the OLTP database, while reducing the impact on the operational workload.
Memory-optimized clustered columnstore indexes for HTAP enable you to perform fast transaction
processing, and to concurrently run analytics queries very quickly on the same data.
Both columnstore indexes and In-Memory OLTP have been part of the SQL Server product since 2012 and
2014, respectively. Azure SQL Database, Azure SQL Managed Instance, and SQL Server share the same
implementation of in-memory technologies.
Benefits of in-memory technology
Because of the more efficient query and transaction processing, in-memory technologies also help you to
reduce cost. You typically don't need to upgrade the pricing tier of the database to achieve performance gains. In
some cases, you might even be able to reduce the pricing tier, while still seeing performance improvements with
in-memory technologies.
By using In-Memory OLTP, Quorum Business Solutions was able to double their workload while lowering DTU utilization by 70%. For more information, see the blog post: In-Memory OLTP.
NOTE
In-memory technologies are available in the Premium and Business Critical tiers.
This article describes aspects of In-Memory OLTP and columnstore indexes that are specific to Azure SQL
Database and Azure SQL Managed Instance, and also includes samples:
You'll see the impact of these technologies on storage and data size limits.
You'll see how to manage the movement of databases that use these technologies between the different
pricing tiers.
You'll see two samples that illustrate the use of In-Memory OLTP, as well as columnstore indexes.
For more information about in-memory in SQL Server, see:
In-Memory OLTP Overview and Usage Scenarios (includes references to customer case studies and
information to get started)
Documentation for In-Memory OLTP
Columnstore Indexes Guide
Hybrid transactional/analytical processing (HTAP), also known as real-time operational analytics
In-Memory OLTP
In-Memory OLTP technology provides extremely fast data access operations by keeping all data in memory. It
also uses specialized indexes, native compilation of queries, and latch-free data-access to improve performance
of the OLTP workload. There are two ways to organize your In-Memory OLTP data:
Memory-optimized rowstore format where every row is a separate memory object. This is a classic
In-Memory OLTP format optimized for high-performance OLTP workloads. There are two types of
memory-optimized tables that can be used in the memory-optimized rowstore format:
Durable tables (SCHEMA_AND_DATA) where the rows placed in memory are preserved after server restart. This type of table behaves like a traditional rowstore table with the additional benefits of in-memory optimizations.
Non-durable tables (SCHEMA_ONLY) where the rows are not preserved after restart. This type of table is designed for temporary data (for example, as a replacement for temp tables), or tables where you need to quickly load data before you move it to a persisted table (so-called staging tables). A brief sketch of both table types appears after this list.
Memory-optimized columnstore format where data is organized in a columnar format. This structure
is designed for HTAP scenarios where you need to run analytic queries on the same data structure where
your OLTP workload is running.
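The following sketch shows, with illustrative names, how durable and non-durable memory-optimized tables are declared:
-- Durable memory-optimized table: rows survive a restart (SCHEMA_AND_DATA).
CREATE TABLE dbo.ShoppingCart (
    CartId int NOT NULL PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1000000),
    UserId int NOT NULL,
    CreatedDate datetime2 NOT NULL
) WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);

-- Non-durable staging table: only the schema survives a restart (SCHEMA_ONLY).
CREATE TABLE dbo.StagingEvents (
    EventId bigint NOT NULL PRIMARY KEY NONCLUSTERED,
    Payload nvarchar(max) NULL
) WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_ONLY);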
NOTE
In-Memory OLTP technology is designed for data structures that can fully reside in memory. Because in-memory data cannot be offloaded to disk, make sure that you are using a database that has enough memory. See Data size and storage cap for In-Memory OLTP for more details.
A quick primer on In-Memory OLTP: Quickstart 1: In-Memory OLTP Technologies for Faster T-SQL
Performance.
There is a programmatic way to understand whether a given database supports In-Memory OLTP. You can
execute the following Transact-SQL query:
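(A sketch; the IsXTPSupported database property indicates In-Memory OLTP support.)
SELECT DATABASEPROPERTYEX(DB_NAME(), 'IsXTPSupported') AS IsXTPSupported;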
If the query returns 1 , In-Memory OLTP is supported in this database. The following queries identify all objects
that need to be removed before a database can be downgraded to General Purpose, Standard, or Basic:
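(A sketch based on the catalog views that expose memory-optimized objects and natively compiled modules.)
-- Memory-optimized tables.
SELECT name FROM sys.tables WHERE is_memory_optimized = 1;

-- Memory-optimized table types.
SELECT name FROM sys.table_types WHERE is_memory_optimized = 1;

-- Natively compiled T-SQL modules.
SELECT OBJECT_NAME(object_id) AS module_name
FROM sys.sql_modules
WHERE uses_native_compilation = 1;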
IMPORTANT
In-Memory OLTP isn't supported in the General Purpose, Standard or Basic tier. Therefore, it isn't possible to move a
database that has any In-Memory OLTP objects to one of these tiers.
Before you downgrade the database to General Purpose, Standard, or Basic, remove all memory-optimized
tables and table types, as well as all natively compiled T-SQL modules.
Scaling down resources in the Business Critical tier: Data in memory-optimized tables must fit within the In-Memory OLTP storage that is associated with the tier of the database or managed instance, or that is available in the elastic pool. If you try to scale down the tier or move the database into a pool that doesn't have enough available In-Memory OLTP storage, the operation fails.
In-memory columnstore
In-memory columnstore technology enables you to store and query large amounts of data in your tables. Columnstore technology uses a column-based data storage format and batch query processing to achieve query performance gains of up to 10 times over traditional row-oriented storage in OLAP workloads. You can also achieve data compression gains of up to 10 times over the uncompressed data size. There are two types of columnstore models that you can use to organize your data:
Clustered columnstore where all data in the table is organized in columnar format. In this model, all rows in the table are placed in columnar format, which highly compresses the data and enables you to execute fast analytical queries and reports on the table. Depending on the nature of your data, the size of your data might be decreased 10x-100x. The clustered columnstore model also enables fast ingestion of large amounts of data (bulk load), because large batches of more than 100,000 rows are compressed before they are stored on disk. This model is a good choice for classic data warehouse scenarios.
Non-clustered columnstore where the data is stored in a traditional rowstore table, with an index in the columnstore format that is used for the analytical queries. This model enables hybrid transactional/analytical processing (HTAP): the ability to run performant real-time analytics on a transactional workload. OLTP queries are executed on the rowstore table, which is optimized for accessing a small set of rows, while OLAP queries are executed on the columnstore index, which is a better choice for scans and analytics. The query optimizer dynamically chooses the rowstore or columnstore format based on the query. Non-clustered columnstore indexes don't decrease the size of the data, because the original data set is kept in the original rowstore table without any change. However, the additional columnstore index should be an order of magnitude smaller than the equivalent B-tree index. A brief sketch of both models follows.
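A brief sketch of both models, using illustrative table names:
-- Clustered columnstore: the whole table is stored in columnar format (typical for fact tables).
CREATE TABLE dbo.SalesFact (
    SaleDate date NOT NULL,
    ProductId int NOT NULL,
    Quantity int NOT NULL,
    Amount decimal(18, 2) NOT NULL,
    INDEX CCI_SalesFact CLUSTERED COLUMNSTORE
);

-- Non-clustered columnstore on a rowstore OLTP table enables HTAP-style analytics.
CREATE TABLE dbo.SalesOrders (
    OrderId int IDENTITY PRIMARY KEY,
    CustomerId int NOT NULL,
    OrderDate datetime2 NOT NULL,
    Amount decimal(18, 2) NOT NULL
);

CREATE NONCLUSTERED COLUMNSTORE INDEX NCCI_SalesOrders
    ON dbo.SalesOrders (CustomerId, OrderDate, Amount);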
NOTE
In-memory columnstore technology keeps only the data that is needed for processing in memory, while the data that cannot fit into memory is stored on disk. Therefore, the amount of data in in-memory columnstore structures can exceed the amount of available memory.
NOTE
SQL Managed Instance supports Columnstore indexes in all tiers.
Next steps
Quickstart 1: In-Memory OLTP Technologies for faster T-SQL Performance
Use In-Memory OLTP in an existing Azure SQL application
Monitor In-Memory OLTP storage for In-Memory OLTP
Try in-memory features
Additional resources
Deeper information
Learn how Quorum doubles key database's workload while lowering DTU by 70% with In-Memory OLTP in
SQL Database
In-Memory OLTP Blog Post
Learn about In-Memory OLTP
Learn about columnstore indexes
Learn about real-time operational analytics
See Common Workload Patterns and Migration Considerations (which describes workload patterns where
In-Memory OLTP commonly provides significant performance gains)
Application design
In-Memory OLTP (in-memory optimization)
Use In-Memory OLTP in an existing Azure SQL application
Tools
Azure portal
SQL Server Management Studio (SSMS)
SQL Server Data Tools (SSDT)
Getting started with temporal tables in Azure SQL
Database and Azure SQL Managed Instance
Temporal scenario
This article illustrates the steps to utilize temporal tables in an application scenario. Suppose that you want to
track user activity on a new website that is being developed from scratch or on an existing website that you
want to extend with user activity analytics. In this simplified example, we assume that the number of visited web
pages during a period of time is an indicator that needs to be captured and monitored in the website database
that is hosted on Azure SQL Database or Azure SQL Managed Instance. The goal of the historical analysis of user activity is to get inputs for redesigning the website and providing a better experience for visitors.
The database model for this scenario is very simple: the user activity metric is represented by a single integer field, PageVisited, and is captured along with basic information about the user profile. Additionally, for time-based analysis, you keep a series of rows for each user, where every row represents the number of pages a particular user visited within a specific period of time.
Fortunately, you do not need to put any effort into your app to maintain this activity information. With temporal tables, this process is automated, giving you full flexibility during website design and more time to focus on the data analysis itself. The only thing you have to do is ensure that the WebsiteUserInfo table is configured as a system-versioned temporal table. The exact steps to utilize temporal tables in this scenario are described below.
In SSDT, choose the "Temporal Table (System-Versioned)" template when adding new items to the database project. That opens the table designer and enables you to easily specify the table layout.
You can also create a temporal table by specifying the Transact-SQL statements directly, as shown in the example below. The mandatory elements of every temporal table are the PERIOD definition and the SYSTEM_VERSIONING clause with a reference to another user table that will store historical row versions:
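(A sketch using the illustrative table from this scenario; column names and sizes are assumptions.)
CREATE TABLE dbo.WebsiteUserInfo
(
    [UserID] int NOT NULL PRIMARY KEY CLUSTERED,
    [UserName] nvarchar(100) NOT NULL,
    [PageVisited] int NOT NULL,
    [ValidFrom] datetime2(0) GENERATED ALWAYS AS ROW START,
    [ValidTo] datetime2(0) GENERATED ALWAYS AS ROW END,
    PERIOD FOR SYSTEM_TIME ([ValidFrom], [ValidTo])
)
WITH (SYSTEM_VERSIONING = ON (HISTORY_TABLE = dbo.WebsiteUserInfoHistory));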
When you create a system-versioned temporal table, an accompanying history table with the default configuration is automatically created. The default history table contains a clustered B-tree index on the period
columns (end, start) with page compression enabled. This configuration is optimal for the majority of scenarios
in which temporal tables are used, especially for data auditing.
In this particular case, we aim to perform time-based trend analysis over a longer data history and with bigger
data sets, so the storage choice for the history table is a clustered columnstore index. A clustered columnstore
provides very good compression and performance for analytical queries. Temporal tables give you the flexibility
to configure indexes on the current and temporal tables completely independently.
NOTE
Columnstore indexes are available in the Business Critical, General Purpose, and Premium tiers and in the Standard tier, S3
and above.
The following script shows how the default index on the history table can be changed to a clustered columnstore index:
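(A sketch that assumes the default history table and index names.)
-- Replace the default clustered rowstore index on the history table with a clustered columnstore index.
DROP INDEX IF EXISTS ix_WebsiteUserInfoHistory ON dbo.WebsiteUserInfoHistory;
GO
CREATE CLUSTERED COLUMNSTORE INDEX ix_WebsiteUserInfoHistory
    ON dbo.WebsiteUserInfoHistory;
GO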
Temporal tables are represented in Object Explorer with a specific icon for easier identification, and the history table is displayed as a child node.
Alter existing table to temporal
Let's cover the alternative scenario in which the WebsiteUserInfo table already exists, but was not designed to
keep a history of changes. In this case, you can simply extend the existing table to become temporal, as shown in
the following example:
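(The following is a sketch; the defaults populate the new period columns for existing rows, and the constraint names are illustrative.)
ALTER TABLE dbo.WebsiteUserInfo
ADD
    ValidFrom datetime2(0) GENERATED ALWAYS AS ROW START HIDDEN
        CONSTRAINT DF_WebsiteUserInfo_ValidFrom DEFAULT SYSUTCDATETIME(),
    ValidTo datetime2(0) GENERATED ALWAYS AS ROW END HIDDEN
        CONSTRAINT DF_WebsiteUserInfo_ValidTo DEFAULT CONVERT(datetime2(0), '9999-12-31 23:59:59'),
    PERIOD FOR SYSTEM_TIME (ValidFrom, ValidTo);
GO
ALTER TABLE dbo.WebsiteUserInfo
    SET (SYSTEM_VERSIONING = ON (HISTORY_TABLE = dbo.WebsiteUserInfoHistory));
GO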
It is important to note that an update query doesn't need to know the exact time when the actual operation occurred, nor how historical data will be preserved for future analysis. Both aspects are automatically handled by
Azure SQL Database and Azure SQL Managed Instance. The following diagram illustrates how history data is
being generated on every update.
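The history table then lets you query the data as it existed at any previous point in time. For example, a sketch like the following returns the state of the data as it was one hour ago (using the illustrative table and columns from earlier):
-- Return row versions exactly as they were at a specific point in time.
DECLARE @asOf datetime2 = DATEADD(HOUR, -1, SYSUTCDATETIME());
SELECT [UserID], [UserName], [PageVisited]
FROM dbo.WebsiteUserInfo
FOR SYSTEM_TIME AS OF @asOf;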
You can easily modify this query to analyze the site visits as of a day ago, a month ago or at any point in the past
you wish.
To perform basic statistical analysis for the previous day, use the following example:
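(A sketch, again assuming the illustrative table and columns.)
-- Aggregate all row versions recorded during the previous day.
DECLARE @yesterdayStart datetime2 = CAST(DATEADD(DAY, -1, CAST(SYSUTCDATETIME() AS date)) AS datetime2);
DECLARE @todayStart datetime2 = CAST(CAST(SYSUTCDATETIME() AS date) AS datetime2);
SELECT [UserID],
       AVG(CAST([PageVisited] AS float)) AS AvgPageVisited,
       MAX([PageVisited]) AS MaxPageVisited
FROM dbo.WebsiteUserInfo
FOR SYSTEM_TIME BETWEEN @yesterdayStart AND @todayStart
GROUP BY [UserID];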
To search for activities of a specific user, within a period of time, use the CONTAINED IN clause:
DECLARE @hourAgo datetime2 = DATEADD(HOUR, -1, SYSUTCDATETIME());
DECLARE @twoHoursAgo datetime2 = DATEADD(HOUR, -2, SYSUTCDATETIME());
SELECT * FROM dbo.WebsiteUserInfo
FOR SYSTEM_TIME CONTAINED IN (@twoHoursAgo, @hourAgo)
WHERE [UserID] = 1;
Graphic visualization is especially convenient for temporal queries because you can show trends and usage patterns in an intuitive way.
Similarly, you can change a column definition while your workload is active, and finally, you can remove a column that you do not need anymore.
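Both operations are ordinary ALTER TABLE statements; a minimal sketch, assuming a hypothetical TemporaryFlag column that is no longer needed:
-- Change a column definition while the table remains system-versioned.
ALTER TABLE dbo.WebsiteUserInfo
    ALTER COLUMN [UserName] nvarchar(256) NOT NULL;

-- Remove a column that is no longer needed; it is dropped from the history table as well.
ALTER TABLE dbo.WebsiteUserInfo
    DROP COLUMN [TemporaryFlag];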
Next steps
For more information on temporal tables, see Temporal Tables.
Dynamically scale database resources with minimal
downtime
Overview
When demand for your app grows from a handful of devices and customers to millions, Azure SQL Database
and SQL Managed Instance scale on the fly with minimal downtime. Scalability is one of the most important
characteristics of platform as a service (PaaS) that enables you to dynamically add more resources to your
service when needed. Azure SQL Database enables you to easily change resources (CPU power, memory, IO
throughput, and storage) allocated to your databases.
You can mitigate performance issues due to increased usage of your application that cannot be fixed using
indexing or query rewrite methods. Adding more resources enables you to quickly react when your database
hits the current resource limits and needs more power to handle the incoming workload. Azure SQL Database
also enables you to scale-down the resources when they are not needed to lower the cost.
You don't need to worry about purchasing hardware and changing underlying infrastructure. Scaling a database
can be easily done via the Azure portal using a slider.
Azure SQL Database offers the DTU-based purchasing model and the vCore-based purchasing model, while
Azure SQL Managed Instance offers just the vCore-based purchasing model.
The DTU-based purchasing model offers a blend of compute, memory, and I/O resources in three service
tiers to support lightweight to heavyweight database workloads: Basic, Standard, and Premium. Performance
levels within each tier provide a different mix of these resources, to which you can add additional storage
resources.
The vCore-based purchasing model lets you choose the number of vCores, the amount of memory, and the
amount and speed of storage. This purchasing model offers three service tiers: General Purpose, Business
Critical, and Hyperscale.
The service tier, compute tier, and resource limits for a database, elastic pool, or managed instance can be
changed at any time. For example, you can build your first app on a single database using the serverless
compute tier and then change its service tier manually or programmatically at any time, to the provisioned
compute tier, to meet the needs of your solution.
NOTE
Notable exceptions where you cannot change the service tier of a database are:
Databases using features which are only available in the Business Critical / Premium service tiers, cannot be changed
to use the General Purpose / Standard service tier.
Databases originally created in the Hyperscale service tier cannot be migrated to other service tiers. If you migrate an
existing database in Azure SQL Database to the Hyperscale service tier, you can reverse migrate to the General
Purpose service tier within 45 days of the original migration to Hyperscale. If you wish to migrate the database to
another service tier, such as Business Critical, first reverse migrate to the General Purpose service tier, then perform a
further migration. Learn more in How to reverse migrate from Hyperscale.
You can adjust the resources allocated to your database by changing the service objective (scaling) to meet workload demands. This also enables you to pay only for the resources that you need, when you need them. Refer to the note below on the potential impact that a scale operation might have on an application.
NOTE
Dynamic scalability is different from autoscale. Autoscale is when a service scales automatically based on criteria, whereas
dynamic scalability allows for manual scaling with minimal downtime. Single databases in Azure SQL Database can be
scaled manually, or in the case of the Serverless tier, set to automatically scale the compute resources. Elastic pools, which
allow databases to share resources in a pool, can currently only be scaled manually.
Azure SQL Database offers the ability to dynamically scale your databases:
With a single database, you can use either DTU or vCore models to define the maximum amount of resources that will be assigned to each database.
Elastic pools enable you to define a maximum resource limit for a group of databases in the pool.
Azure SQL Managed Instance allows you to scale as well:
SQL Managed Instance uses the vCore model and enables you to define the maximum number of CPU cores and the maximum storage allocated to your instance. All databases within the managed instance share the resources allocated to the instance.
NOTE
Scaling your managed instance is not recommended if a long-running operation, such as a data import, data processing job, or index rebuild, is running, or if you have any active connections on the instance. To prevent the scaling from taking longer than usual to complete, scale the instance after all long-running operations have completed.
NOTE
You can expect a short connection break when the scale up/scale down process is finished. If you have implemented retry logic for standard transient errors, you will not notice the failover.
Next steps
For information about improving database performance by changing database code, see Find and apply
performance recommendations.
For information about letting built-in database intelligence optimize your database, see Automatic tuning.
For information about read scale-out in Azure SQL Database, see how to use read-only replicas to load
balance read-only query workloads.
For information about database sharding, see Scaling out with Azure SQL Database.
For an example of using scripts to monitor and scale a single database, see Use PowerShell to monitor and
scale a single SQL Database.
Use read-only replicas to offload read-only query
workloads
NOTE
Read scale-out is always enabled in the Business Critical service tier of Managed Instance, and for Hyperscale databases
with at least one secondary replica.
If your SQL connection string is configured with ApplicationIntent=ReadOnly , the application will be redirected
to a read-only replica of that database or managed instance. For information on how to use the
ApplicationIntent property, see Specifying Application Intent.
If you wish to ensure that the application connects to the primary replica regardless of the ApplicationIntent
setting in the SQL connection string, you must explicitly disable read scale-out when creating the database or
when altering its configuration. For example, if you upgrade your database from Standard or General Purpose
tier to Premium or Business Critical and want to make sure all your connections continue to go to the primary
replica, disable read scale-out. For details on how to disable it, see Enable and disable read scale-out.
NOTE
Query Store and SQL Profiler features are not supported on read-only replicas.
Data consistency
Data changes made on the primary replica are persisted on read-only replicas synchronously or asynchronously
depending on replica type. However, for all replica types, reads from a read-only replica are always
asynchronous with respect to the primary. Within a session connected to a read-only replica, reads are always
transactionally consistent. Because data propagation latency is variable, different replicas can return data at
slightly different points in time relative to the primary and each other. If a read-only replica becomes unavailable
and a session reconnects, it may connect to a replica that is at a different point in time than the original replica.
Likewise, if an application changes data using a read-write session on the primary and immediately reads it
using a read-only session on a read-only replica, it is possible that the latest changes will not be immediately
visible.
Typical data propagation latency between the primary replica and read-only replicas varies in the range from
tens of milliseconds to single-digit seconds. However, there is no fixed upper bound on data propagation latency.
Conditions such as high resource utilization on the replica can increase latency substantially. Applications that
require guaranteed data consistency across sessions, or require committed data to be readable immediately
should use the primary replica.
NOTE
To monitor data propagation latency, see Monitoring and troubleshooting read-only replica.
For example, the following connection string connects the client to a read-only replica (replacing the items in the
angle brackets with the correct values for your environment and dropping the angle brackets):
Server=tcp:<server>.database.windows.net;Database=<mydatabase>;ApplicationIntent=ReadOnly;User ID=
<myLogin>;Password=<myPassword>;Trusted_Connection=False; Encrypt=True;
To connect to a read-only replica using SQL Server Management Studio (SSMS), select Options, select Additional Connection Parameters, enter ApplicationIntent=ReadOnly, and then select Connect.
Either of the following connection strings connects the client to a read-write replica (replacing the items in the
angle brackets with the correct values for your environment and dropping the angle brackets):
Server=tcp:<server>.database.windows.net;Database=<mydatabase>;ApplicationIntent=ReadWrite;User ID=
<myLogin>;Password=<myPassword>;Trusted_Connection=False; Encrypt=True;
Server=tcp:<server>.database.windows.net;Database=<mydatabase>;User ID=<myLogin>;Password=
<myPassword>;Trusted_Connection=False; Encrypt=True;
NOTE
In Premium and Business Critical service tiers, only one of the read-only replicas is accessible at any given time.
Hyperscale supports multiple read-only replicas.
NOTE
The sys.resource_stats and sys.elastic_pool_resource_stats DMVs in the logical master database return
resource utilization data of the primary replica.
NOTE
If you receive error 3961, 1219, or 3947 when running queries against a read-only replica, retry the query. Alternatively,
avoid operations that modify object metadata (schema changes, index maintenance, statistics updates, etc.) on the
primary replica while long-running queries execute on secondary replicas.
TIP
In Premium and Business Critical service tiers, when connected to a read-only replica, the redo_queue_size and
redo_rate columns in the sys.dm_database_replica_states DMV may be used to monitor data synchronization process,
serving as indicators of data propagation latency on the read-only replica.
NOTE
For single databases and elastic pool databases, the ability to disable read scale-out is provided for backward
compatibility. Read scale-out cannot be disabled on Business Critical managed instances.
Azure portal
You can manage the read scale-out setting on the Configure database blade.
PowerShell
IMPORTANT
The PowerShell Azure Resource Manager module is still supported, but all future development is for the Az.Sql module.
The Azure Resource Manager module will continue to receive bug fixes until at least December 2020. The arguments for
the commands in the Az module and in the Azure Resource Manager modules are substantially identical. For more
information about their compatibility, see Introducing the new Azure PowerShell Az module.
Managing read scale-out in Azure PowerShell requires the December 2016 Azure PowerShell release or newer.
For the newest PowerShell release, see Azure PowerShell.
You can disable or re-enable read scale-out in Azure PowerShell by invoking the Set-AzSqlDatabase cmdlet and
passing in the desired value ( Enabled or Disabled ) for the -ReadScale parameter.
To disable read scale-out on an existing database (replacing the items in the angle brackets with the correct
values for your environment and dropping the angle brackets):
To disable read scale-out on a new database (replacing the items in the angle brackets with the correct values for
your environment and dropping the angle brackets):
To re-enable read scale-out on an existing database (replacing the items in the angle brackets with the correct
values for your environment and dropping the angle brackets):
REST API
To create a database with read scale-out disabled, or to change the setting for an existing database, use the
following method with the readScale property set to Enabled or Disabled , as in the following sample request.
Method: PUT
URL: https://management.azure.com/subscriptions/{SubscriptionId}/resourceGroups/{GroupName}/providers/Microsoft.Sql/servers/{ServerName}/databases/{DatabaseName}?api-version=2014-04-01-preview
Body:
{
  "properties": {
    "readScale": "Disabled"
  }
}
NOTE
There is no automatic round-robin or any other load-balanced routing between the replicas of a geo-replicated secondary
database, with the exception of a Hyperscale geo-replica with more than one HA replica. In that case, sessions with read-
only intent are distributed over all HA replicas of a geo-replica.
Next steps
For information about SQL Database Hyperscale offering, see Hyperscale service tier.
Distributed transactions across cloud databases
Common scenarios
Elastic database transactions enable applications to make atomic changes to data stored in several different
databases. Both SQL Database and SQL Managed Instance support client-side development experiences in C#
and .NET. A server-side experience (code written in stored procedures or server-side scripts) using Transact-SQL
is available for SQL Managed Instance only.
IMPORTANT
Running elastic database transactions between Azure SQL Database and Azure SQL Managed Instance is not supported.
Elastic database transactions can only span a set of databases in SQL Database or a set of databases across managed instances.
<LocalResources>
...
<LocalStorage name="TEMP" sizeInMB="5000" cleanOnRoleRecycle="false" />
<LocalStorage name="TMP" sizeInMB="5000" cleanOnRoleRecycle="false" />
</LocalResources>
<Startup>
<Task commandLine="install.cmd" executionContext="elevated" taskType="simple">
<Environment>
...
<Variable name="TEMP">
<RoleInstanceValue
xpath="/RoleEnvironment/CurrentInstance/LocalResources/LocalResource[@name='TEMP']/@path" />
</Variable>
<Variable name="TMP">
<RoleInstanceValue
xpath="/RoleEnvironment/CurrentInstance/LocalResources/LocalResource[@name='TMP']/@path" />
</Variable>
</Environment>
</Task>
</Startup>
USE AdventureWorks2012;
GO
SET XACT_ABORT ON;
GO
BEGIN DISTRIBUTED TRANSACTION;
-- Delete candidate from local instance.
DELETE AdventureWorks2012.HumanResources.JobCandidate
WHERE JobCandidateID = 13;
-- Delete candidate from remote instance.
DELETE RemoteServer.AdventureWorks2012.HumanResources.JobCandidate
WHERE JobCandidateID = 13;
COMMIT TRANSACTION;
GO
s.Complete();
}
The following example shows a transaction that is implicitly promoted to a distributed transaction once the second SqlConnection is opened within the TransactionScope.
s.Complete();
}
The following diagram shows a Server Trust Group with managed instances that can execute distributed
transactions with .NET or Transact-SQL:
Monitoring transaction status
Use Dynamic Management Views (DMVs) to monitor status and progress of your ongoing elastic database
transactions. All DMVs related to transactions are relevant for distributed transactions in SQL Database and SQL
Managed Instance. You can find the corresponding list of DMVs here: Transaction Related Dynamic Management
Views and Functions (Transact-SQL).
These DMVs are particularly useful:
sys.dm_tran_active_transactions : Lists currently active transactions and their status. The UOW (Unit Of
Work) column can identify the different child transactions that belong to the same distributed transaction. All
transactions within the same distributed transaction carry the same UOW value. For more information, see
the DMV documentation.
sys.dm_tran_database_transactions : Provides additional information about transactions, such as
placement of the transaction in the log. For more information, see the DMV documentation.
sys.dm_tran_locks : Provides information about the locks that are currently held by ongoing transactions.
For more information, see the DMV documentation.
Limitations
The following limitations currently apply to elastic database transactions in SQL Database:
Only transactions across databases in SQL Database are supported. Other X/Open XA resource providers and
databases outside of SQL Database can't participate in elastic database transactions. That means that elastic
database transactions can't stretch across on-premises SQL Server and Azure SQL Database. For distributed
transactions on premises, continue to use MSDTC.
Only client-coordinated transactions from a .NET application are supported. Server-side support for T-SQL
such as BEGIN DISTRIBUTED TRANSACTION is planned, but not yet available.
Transactions across WCF services aren't supported. For example, you have a WCF service method that
executes a transaction. Enclosing the call within a transaction scope will fail as a
System.ServiceModel.ProtocolException.
The following limitations currently apply to distributed transactions in SQL Managed Instance:
Only transactions across databases in managed instances are supported. Other X/Open XA resource
providers and databases outside of Azure SQL Managed Instance can't participate in distributed transactions.
That means that distributed transactions can't stretch across on-premises SQL Server and Azure SQL
Managed Instance. For distributed transactions on premises, continue to use MSDTC.
Transactions across WCF services aren't supported. For example, you have a WCF service method that
executes a transaction. Enclosing the call within a transaction scope will fail as a
System.ServiceModel.ProtocolException.
Azure SQL Managed Instance must be part of a Server Trust Group in order to participate in distributed transactions.
Limitations of Server trust groups affect distributed transactions.
Managed Instances that participate in distributed transactions need to have connectivity over private
endpoints (using private IP address from the virtual network where they are deployed) and need to be
mutually referenced using private FQDNs. Client applications can use distributed transactions on private
endpoints. Additionally, in cases when Transact-SQL leverages linked servers referencing private endpoints,
client applications can use distributed transactions on public endpoints as well. This limitation is explained on
the following diagram.
Next steps
For questions, reach out to us on the Microsoft Q&A question page for SQL Database.
For feature requests, add them to the SQL Database feedback forum or SQL Managed Instance forum.
Maintenance window
NOTE
The maintenance window feature only protects from planned impact from upgrades or scheduled maintenance. It does
not protect from all failover causes; exceptions that may cause short connection interruptions outside of a maintenance
window include hardware failures, cluster load balancing, and database reconfigurations due to events like a change in
database Service Level Objective.
Advance notifications (Preview) are available for databases configured to use a non-default maintenance
window. Advance notifications enable customers to configure notifications to be sent up to 24 hours in advance
of any planned event.
Overview
Azure periodically performs planned maintenance of SQL Database and SQL Managed Instance resources. During an Azure SQL maintenance event, databases are fully available but can be subject to short reconfigurations within the respective availability SLAs for SQL Database and SQL Managed Instance.
The maintenance window is intended for production workloads that are not resilient to database or instance reconfigurations and cannot absorb short connection interruptions caused by planned maintenance events. By choosing a maintenance window you prefer, you can minimize the impact of planned maintenance because it will occur outside your peak business hours. Resilient workloads and non-production workloads may rely on Azure SQL's default maintenance policy.
The maintenance window is free of charge and can be configured on creation or for existing Azure SQL resources. It can be configured using the Azure portal, PowerShell, CLI, or Azure API.
IMPORTANT
Configuring the maintenance window is a long-running asynchronous operation, similar to changing the service tier of the Azure SQL resource. The resource is available during the operation, except for a short reconfiguration that happens at the end of the operation and typically lasts up to 8 seconds, even in case of interrupted long-running transactions. To minimize the impact of the reconfiguration, you should perform the operation outside of peak hours.
IMPORTANT
In very rare circumstances where any postponement of action could cause serious impact, such as applying a critical security patch, the configured maintenance window may be temporarily overridden.
Advance notifications
Maintenance notifications can be configured to alert you of upcoming planned maintenance events for your
Azure SQL Database and Azure SQL Managed Instance. The alerts arrive 24 hours in advance, at the time of
maintenance, and when the maintenance is complete. For more information, see Advance Notifications.
Feature availability
Supported subscription types
Configuring and using maintenance window is available for the following offer types: Pay-As-You-Go, Cloud
Solution Provider (CSP), Microsoft Enterprise Agreement, or Microsoft Customer Agreement.
Offers restricted to dev/test usage only are not eligible (like Pay-As-You-Go Dev/Test or Enterprise Dev/Test as
examples).
NOTE
An Azure offer is the type of the Azure subscription you have. For example, a subscription with pay-as-you-go rates,
Azure in Open, and Visual Studio Enterprise are all Azure offers. Each offer or plan has different terms and benefits. Your
offer or plan is shown on the subscription's Overview. For more information on switching your subscription to a different
offer, see Change your Azure subscription to a different offer.
Gateway maintenance
To get the maximum benefit from maintenance windows, make sure your client applications are using the
redirect connection policy. Redirect is the recommended connection policy, where clients establish connections
directly to the node hosting the database, leading to reduced latency and improved throughput.
In Azure SQL Database, any connections using the proxy connection policy could be affected by both the
chosen maintenance window and a gateway node maintenance window. However, client connections
using the recommended redirect connection policy are unaffected by a gateway node maintenance
reconfiguration.
In Azure SQL Managed Instance, the gateway nodes are hosted within the virtual cluster and have the
same maintenance window as the managed instance, but using the redirect connection policy is still
recommended to minimize the number of disruptions during the maintenance event.
For more on the client connection policy in Azure SQL Database, see Azure SQL Database Connection policy.
For more on the client connection policy in Azure SQL Managed Instance, see Azure SQL Managed Instance
connection types.
IMPORTANT
A short reconfiguration happens at the end of the operation of configuring the maintenance window and typically lasts up to 8 seconds, even in case of interrupted long-running transactions. To minimize the impact of the reconfiguration, initiate the operation outside of peak hours.
IMPORTANT
Make sure that NSG and firewall rules won't block data traffic after IP address change.
servicehealthresources
| where type =~ 'Microsoft.ResourceHealth/events'
| extend impact = properties.Impact
| extend impactedService = parse_json(impact[0]).ImpactedService
| where impactedService =~ 'SQL Database'
| extend eventType = properties.EventType, status = properties.Status, description = properties.Title,
trackingId = properties.TrackingId, summary = properties.Summary, priority = properties.Priority,
impactStartTime = todatetime(tolong(properties.ImpactStartTime)), impactMitigationTime =
todatetime(tolong(properties.ImpactMitigationTime))
| where eventType == 'PlannedMaintenance'
| order by impactStartTime desc
To check for the maintenance events for all managed instances in your subscription, use the following sample
query in Azure Resource Graph Explorer:
servicehealthresources
| where type =~ 'Microsoft.ResourceHealth/events'
| extend impact = properties.Impact
| extend impactedService = parse_json(impact[0]).ImpactedService
| where impactedService =~ 'SQL Managed Instance'
| extend eventType = properties.EventType, status = properties.Status, description = properties.Title,
trackingId = properties.TrackingId, summary = properties.Summary, priority = properties.Priority,
impactStartTime = todatetime(tolong(properties.ImpactStartTime)), impactMitigationTime =
todatetime(tolong(properties.ImpactMitigationTime))
| where eventType == 'PlannedMaintenance'
| order by impactStartTime desc
For the full reference of the sample queries and how to use them across tools like PowerShell or Azure CLI, visit
Azure Resource Graph sample queries for Azure Service Health.
Next steps
Configure maintenance window
Advance notifications
Learn more
Maintenance window FAQ
Azure SQL Database
Azure SQL Managed Instance
Plan for Azure maintenance events in Azure SQL Database and Azure SQL Managed Instance
Configure maintenance window
IMPORTANT
Configuring the maintenance window is a long-running asynchronous operation, similar to changing the service tier of the Azure SQL resource. The resource is available during the operation, except for a short reconfiguration that happens at the end of the operation and typically lasts up to 8 seconds, even in case of interrupted long-running transactions. To minimize the impact of the reconfiguration, you should perform the operation outside of peak hours.
To configure the maintenance window when you create a database, elastic pool, or managed instance, set the
desired Maintenance window on the Additional settings page.
Set the maintenance window while creating a single database or elastic pool
For step-by-step information on creating a new database or pool, see Create an Azure SQL Database single
database.
Set the maintenance window while creating a managed instance
For step-by-step information on creating a new managed instance, see Create an Azure SQL Managed Instance.
Configure maintenance window for existing databases
When applying a maintenance window selection to a database, a brief reconfiguration (several seconds) may be
experienced in some cases as Azure applies the required changes.
Portal
PowerShell
CLI
The following steps set the maintenance window on an existing database, elastic pool, or managed instance
using the Azure portal:
Set the maintenance window for an existing database or elastic pool
1. Navigate to the SQL database or elastic pool you want to set the maintenance window for.
2. In the Settings menu select Maintenance , then select the desired maintenance window.
Cleanup resources
Be sure to delete unneeded resources after you're finished with them to avoid unnecessary charges.
Portal
PowerShell
CLI
Next steps
To learn more about maintenance window, see Maintenance window.
For more information, see Maintenance window FAQ.
To learn about optimizing performance, see Monitoring and performance tuning in Azure SQL Database and
Azure SQL Managed Instance.
Advance notifications for planned maintenance
events (Preview)
IMPORTANT
For Azure SQL Database, advance notifications cannot be configured for the System default maintenance window
option. Choose a maintenance window other than the System default to configure and enable Advance notifications.
NOTE
While maintenance windows are generally available, advance notifications for maintenance windows are in public preview
for Azure SQL Database and Azure SQL Managed Instance.
3. Complete the Create action group form, then select Next: Notifications .
4. On the Notifications tab, select the Notification type . The Email/SMS message/Push/Voice option
offers the most flexibility and is the recommended option. Select the pen to configure the notification.
a. Complete the Add or edit notification form that opens and select OK :
b. Actions and Tags are optional. Here you can configure additional actions to be triggered or use
tags to categorize and organize your Azure resources.
c. Check the details on the Review + create tab and select Create .
5. After selecting create, the alert rule configuration screen opens and the action group will be selected. Give a name to your new alert rule, then choose the resource group for it, and select Create alert rule.
6. Click the Health alerts menu item again, and the list of alerts now contains your new alert.
You're all set. Next time there's a planned Azure SQL maintenance event, you'll receive an advance notification.
Receiving notifications
The following table shows the general-information notifications you may receive:
The following table shows additional notifications that may be sent while maintenance is ongoing:
Permissions
While Advance Notifications can be sent to any email address, Azure subscription RBAC (role-based access
control) policy determines who can access the links in the email. Querying resource graph is covered by Azure
RBAC access management. To enable read access, each recipient should have resource group level read access.
For more information, see Steps to assign an Azure role.
In Azure Resource Graph (ARG) explorer you might find values for the status of deployment that are a bit different from the ones displayed in the notification content.
RetryLater : Planned maintenance for resource xyz has started but couldn't progress to completion and will continue in the next maintenance window.
For the full reference of the sample queries and how to use them across tools like PowerShell or Azure CLI, visit
Azure Resource Graph sample queries for Azure Service Health.
Next steps
Maintenance window
Maintenance window FAQ
Overview of alerts in Microsoft Azure
Email Azure Resource Manager Role
An overview of Azure SQL Database and SQL
Managed Instance security capabilities
Applies to: Azure SQL Database Azure SQL Managed Instance Azure Synapse Analytics
This article outlines the basics of securing the data tier of an application using Azure SQL Database, Azure SQL
Managed Instance, and Azure Synapse Analytics. The security strategy described follows the layered defense-in-
depth approach as shown in the picture below, and moves from the outside in:
Network security
Microsoft Azure SQL Database, SQL Managed Instance, and Azure Synapse Analytics provide a relational
database service for cloud and enterprise applications. To help protect customer data, firewalls prevent network
access to the server until access is explicitly granted based on IP address or Azure Virtual network traffic origin.
IP firewall rules
IP firewall rules grant access to databases based on the originating IP address of each request. For more
information, see Overview of Azure SQL Database and Azure Synapse Analytics firewall rules.
Virtual network firewall rules
Virtual network service endpoints extend your virtual network connectivity over the Azure backbone and enable
Azure SQL Database to identify the virtual network subnet that traffic originates from. To allow traffic to reach
Azure SQL Database, use the SQL service tags to allow outbound traffic through Network Security Groups.
Virtual network rules enable Azure SQL Database to only accept communications that are sent from selected
subnets inside a virtual network.
NOTE
Controlling access with firewall rules does not apply to SQL Managed Instance . For more information about the
networking configuration needed, see Connecting to a managed instance
Access management
IMPORTANT
Managing databases and servers within Azure is controlled by your portal user account's role assignments. For more
information on this article, see Azure role-based access control in the Azure portal.
Authentication
Authentication is the process of proving the user is who they claim to be. Azure SQL Database and SQL
Managed Instance support SQL authentication and Azure AD authentication. SQL Managed Instance additionally
supports Windows Authentication for Azure AD principals.
SQL authentication :
SQL authentication refers to the authentication of a user when connecting to Azure SQL Database or
Azure SQL Managed Instance using username and password. A server admin login with a username and password must be specified when the server is being created. Using these credentials, a server admin can authenticate to any database on that server or instance as the database owner. After that,
additional SQL logins and users can be created by the server admin, which enable users to connect using
username and password.
Azure Active Directory authentication :
Azure Active Directory authentication is a mechanism of connecting to Azure SQL Database, Azure SQL
Managed Instance and Azure Synapse Analytics by using identities in Azure Active Directory (Azure AD).
Azure AD authentication allows administrators to centrally manage the identities and permissions of
database users along with other Azure services in one central location. This includes the minimization of
password storage and enables centralized password rotation policies.
A server admin called the Active Directory administrator must be created to use Azure AD
authentication with SQL Database. For more information, see Connecting to SQL Database By Using
Azure Active Directory Authentication. Azure AD authentication supports both managed and federated
accounts. The federated accounts support Windows users and groups for a customer domain federated
with Azure AD.
Additional Azure AD authentication options available are Active Directory Universal Authentication for
SQL Server Management Studio connections including multi-factor authentication and Conditional
Access.
Windows Authentication for Azure AD Principals (Preview) :
Kerberos authentication for Azure AD Principals (Preview) enables Windows Authentication for Azure
SQL Managed Instance. Windows Authentication for managed instances empowers customers to move
existing services to the cloud while maintaining a seamless user experience and provides the basis for
infrastructure modernization.
To enable Windows Authentication for Azure Active Directory (Azure AD) principals, you will turn your
Azure AD tenant into an independent Kerberos realm and create an incoming trust in the customer
domain. Learn how Windows Authentication for Azure SQL Managed Instance is implemented with Azure
Active Directory and Kerberos.
IMPORTANT
Managing databases and servers within Azure is controlled by your portal user account's role assignments. For more
information on this article, see Azure role-based access control in Azure portal. Controlling access with firewall rules does
not apply to SQL Managed Instance. Please see the following article on connecting to a managed instance for more
information about the networking configuration needed.
Authorization
Authorization refers to controlling access on resources and commands within a database. This is done by
assigning permissions to a user within a database in Azure SQL Database or Azure SQL Managed Instance.
Permissions are ideally managed by adding user accounts to database roles and assigning database-level
permissions to those roles. Alternatively an individual user can also be granted certain object-level permissions.
For more information, see Logins and users
As a best practice, create custom roles when needed. Add users to the role with the least privileges required to
do their job function. Do not assign permissions directly to users. The server admin account is a member of the
built-in db_owner role, which has extensive permissions and should only be granted to few users with
administrative duties. To further limit the scope of what a user can do, the EXECUTE AS clause can be used to specify
the execution context of the called module. Following these best practices is also a fundamental step towards
Separation of Duties.
Row-level security
Row-Level Security enables customers to control access to rows in a database table based on the characteristics
of the user executing a query (for example, group membership or execution context). Row-Level Security can
also be used to implement custom Label-based security concepts. For more information, see Row-Level security.
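A minimal sketch of a filter predicate bound to a table by a security policy; all schema, table, and column names here are illustrative, and the example assumes the tenant identifier is carried in SESSION_CONTEXT:
CREATE SCHEMA Security;
GO

-- Inline table-valued function used as the security predicate.
CREATE FUNCTION Security.fn_TenantAccessPredicate(@TenantId int)
    RETURNS TABLE
    WITH SCHEMABINDING
AS
    RETURN SELECT 1 AS fn_result
           WHERE @TenantId = CAST(SESSION_CONTEXT(N'TenantId') AS int);
GO

-- Bind the predicate to a table so queries only return rows for the caller's tenant.
CREATE SECURITY POLICY Security.TenantFilter
    ADD FILTER PREDICATE Security.fn_TenantAccessPredicate(TenantId)
    ON dbo.Sales
    WITH (STATE = ON);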
Threat protection
SQL Database and SQL Managed Instance secure customer data by providing auditing and threat detection
capabilities.
SQL auditing in Azure Monitor logs and Event Hubs
SQL Database and SQL Managed Instance auditing tracks database activities and helps maintain compliance
with security standards by recording database events to an audit log in a customer-owned Azure storage
account. Auditing allows users to monitor ongoing database activities, as well as analyze and investigate
historical activity to identify potential threats or suspected abuse and security violations. For more information,
see Get started with SQL Database Auditing.
Advanced Threat Protection
Advanced Threat Protection analyzes your logs to detect unusual behavior and potentially harmful attempts to access or exploit databases. Alerts are created for suspicious activities such as SQL injection, potential data
infiltration, and brute force attacks or for anomalies in access patterns to catch privilege escalations and
breached credentials use. Alerts are viewed from the Microsoft Defender for Cloud, where the details of the
suspicious activities are provided and recommendations for further investigation given along with actions to
mitigate the threat. Advanced Threat Protection can be enabled per server for an additional fee. For more
information, see Get started with SQL Database Advanced Threat Protection.
IMPORTANT
Note that some non-Microsoft drivers may not use TLS by default or rely on an older version of TLS (<1.2) in order to
function. In this case the server still allows you to connect to your database. However, we recommend that you evaluate
the security risks of allowing such drivers and applications to connect to SQL Database, especially if you store sensitive
data.
For further information about TLS and connectivity, see TLS considerations
Always Encrypted
Always Encrypted is a feature designed to protect sensitive data stored in specific database columns from access
(for example, credit card numbers, national identification numbers, or data on a need to know basis). This
includes database administrators or other privileged users who are authorized to access the database to
perform management tasks, but have no business need to access the particular data in the encrypted columns.
The data is always encrypted, which means the encrypted data is decrypted only for processing by client
applications with access to the encryption key. The encryption key is never exposed to SQL Database or SQL
Managed Instance and can be stored either in the Windows Certificate Store or in Azure Key Vault.
Dynamic data masking
Dynamic data masking limits sensitive data exposure by masking it to non-privileged users. Dynamic data
masking automatically discovers potentially sensitive data in Azure SQL Database and SQL Managed Instance
and provides actionable recommendations to mask these fields, with minimal impact to the application layer. It
works by obfuscating the sensitive data in the result set of a query over designated database fields, while the
data in the database is not changed. For more information, see Get started with SQL Database and SQL
Managed Instance dynamic data masking.
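A short sketch of masking rules applied to existing columns (table, column, and user names are illustrative):
-- Mask an email address for non-privileged users; the stored data itself is unchanged.
ALTER TABLE dbo.Customers
    ALTER COLUMN Email ADD MASKED WITH (FUNCTION = 'email()');

-- Show only the last four digits of a credit card number.
ALTER TABLE dbo.Customers
    ALTER COLUMN CreditCardNumber ADD MASKED WITH (FUNCTION = 'partial(0, "XXXX-XXXX-XXXX-", 4)');

-- Grant a specific user the right to see unmasked data when needed.
GRANT UNMASK TO ReportingUser;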
Security management
Vulnerability assessment
Vulnerability assessment is an easy-to-configure service that can discover, track, and help remediate potential
database vulnerabilities with the goal to proactively improve overall database security. Vulnerability assessment
(VA) is part of the Microsoft Defender for SQL offering, which is a unified package for advanced SQL security
capabilities. Vulnerability assessment can be accessed and managed via the central Microsoft Defender for SQL
portal.
Data discovery and classification
Data discovery and classification (currently in preview) provides basic capabilities built into Azure SQL Database
and SQL Managed Instance for discovering, classifying and labeling the sensitive data in your databases.
Discovering and classifying your most sensitive data (business/financial, healthcare, personal data, etc.) can play a pivotal role in your organizational information protection posture. It can serve as infrastructure for:
Various security scenarios, such as monitoring (auditing) and alerting on anomalous access to sensitive data.
Controlling access to, and hardening the security of, databases containing highly sensitive data.
Helping meet data privacy standards and regulatory compliance requirements.
For more information, see Get started with data discovery and classification.
Compliance
In addition to the above features and functionality that can help your application meet various security
requirements, Azure SQL Database also participates in regular audits, and has been certified against a number
of compliance standards. For more information, see the Microsoft Azure Trust Center where you can find the
most current list of SQL Database compliance certifications.
Next steps
For a discussion of the use of logins, user accounts, database roles, and permissions in SQL Database and
SQL Managed Instance, see Manage logins and user accounts.
For a discussion of database auditing, see auditing.
For a discussion of threat detection, see threat detection.
Playbook for addressing common security
requirements with Azure SQL Database and Azure
SQL Managed Instance
Authentication
Authentication is the process of proving the user is who they claim to be. Azure SQL Database and SQL
Managed Instance support two types of authentication:
SQL authentication
Azure Active Directory authentication
NOTE
Azure Active Directory authentication may not be supported for all tools and third-party applications.
NOTE
In SQL Managed Instance, you can also create logins that map to Azure AD principals in the master database.
See CREATE LOGIN (Transact-SQL).
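As a hedged illustration (the Azure AD principal names below are placeholders), the contained-user and login patterns look roughly like this in T-SQL:

-- Azure SQL Database: contained database user mapped directly to an Azure AD identity.
CREATE USER [appteam@contoso.com] FROM EXTERNAL PROVIDER;
-- SQL Managed Instance: server-level login in the master database mapped to an Azure AD principal,
-- plus a database user mapped to that login in a user database.
CREATE LOGIN [dbadmins@contoso.com] FROM EXTERNAL PROVIDER;
CREATE USER [dbadmins@contoso.com] FROM LOGIN [dbadmins@contoso.com];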
Using Azure AD groups simplifies permission management; both the group owner and the resource
owner can add or remove members of the group.
Create a separate group for Azure AD administrators for each server or managed instance.
See the article, Provision an Azure Active Directory administrator for your server.
Monitor Azure AD group membership changes using Azure AD audit activity reports.
For a managed instance, a separate step is required to create an Azure AD admin.
See the article, Provision an Azure Active Directory administrator for your managed instance.
NOTE
Azure AD authentication is recorded in Azure SQL audit logs, but not in Azure AD sign-in logs.
Azure RBAC permissions granted in Azure do not apply to Azure SQL Database or SQL Managed Instance permissions.
Such permissions must be created and mapped manually using existing SQL permissions.
On the client side, Azure AD authentication needs internet access, or access through a User Defined Route (UDR) to a
virtual network.
The Azure AD access token is cached on the client side and its lifetime depends on token configuration. See the article,
Configurable token lifetimes in Azure Active Directory.
For guidance on troubleshooting Azure AD Authentication issues, see the following blog: Troubleshooting Azure AD.
Azure AD Multi-Factor Authentication helps provide additional security by requiring more than one form of
authentication.
How to implement
Enable Multi-Factor Authentication in Azure AD using Conditional Access and use interactive
authentication.
The alternative is to enable Multi-Factor Authentication for the entire Azure AD or AD domain.
Best practices
Activate Conditional Access in Azure AD (requires Premium subscription).
See the article, Conditional Access in Azure AD.
Create Azure AD group(s) and enable Multi-Factor Authentication policy for selected groups using Azure
AD Conditional Access.
See the article, Plan Conditional Access Deployment.
Multi-Factor Authentication can be enabled for the entire Azure AD or for the whole Active Directory
federated with Azure AD.
Use Azure AD Interactive authentication mode for Azure SQL Database and Azure SQL Managed Instance
where a password is requested interactively, followed by Multi-Factor Authentication:
Use Universal Authentication in SSMS. See the article, Using Multi-factor Azure AD authentication with
Azure SQL Database, SQL Managed Instance, Azure Synapse (SSMS support for Multi-Factor
Authentication).
Use Interactive Authentication supported in SQL Server Data Tools (SSDT). See the article, Azure Active
Directory support in SQL Server Data Tools (SSDT).
Use other SQL tools supporting Multi-Factor Authentication.
SSMS Wizard support for export/extract/deploy database
sqlpackage.exe: option '/ua'
sqlcmd Utility: option -G (interactive)
bcp Utility: option -G (interactive)
Implement your applications to connect to Azure SQL Database or Azure SQL Managed Instance using
interactive authentication with Multi-Factor Authentication support.
See the article, Connect to Azure SQL Database with Azure AD Multi-Factor Authentication.
NOTE
This authentication mode requires user-based identities. In cases where a trusted identity model is used that is
bypassing individual Azure AD user authentication (e.g. using managed identity for Azure resources), Multi-Factor
Authentication does not apply.
Minimize the use of password-based authentication for users
Password-based authentication methods are a weaker form of authentication. Credentials can be compromised
or mistakenly given away.
How to implement
Use an Azure AD integrated authentication that eliminates the use of passwords.
Best practices
Use single sign-on authentication using Windows credentials. Federate the on-premises AD domain with
Azure AD and use integrated Windows authentication (for domain-joined machines with Azure AD).
See the article, SSMS support for Azure AD Integrated authentication.
Minimize the use of password-based authentication for applications
Mentioned in: OSA Practice #4, ISO Access Control (AC)
How to implement
Enable Azure Managed Identity. You can also use integrated or certificate-based authentication.
Best practices
Use managed identities for Azure resources (a minimal T-SQL sketch follows this list).
System-assigned managed identity
User-assigned managed identity
Use Azure SQL Database from Azure App Service with managed identity (without code changes)
Use cert-based authentication for an application.
See this code sample.
Use Azure AD authentication for integrated federated domain and domain-joined machine (see section
above).
See the sample application for integrated authentication.
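For the managed-identity option above, a minimal sketch (run in the target user database by an Azure AD admin; the App Service name my-api-app is hypothetical) could look like this:

-- Create a contained user for the managed identity and grant only the needed database roles.
CREATE USER [my-api-app] FROM EXTERNAL PROVIDER;
ALTER ROLE db_datareader ADD MEMBER [my-api-app];
ALTER ROLE db_datawriter ADD MEMBER [my-api-app];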
Protect passwords and secrets
For cases when passwords aren't avoidable, make sure they're secured.
How to implement
Use Azure Key Vault to store passwords and secrets. Whenever applicable, use Multi-Factor Authentication
for Azure SQL Database with Azure AD users.
Best practices
If avoiding passwords or secrets isn't possible, store user passwords and application secrets in Azure
Key Vault, and manage access through Key Vault access policies.
Various app development frameworks may also offer framework-specific mechanisms for protecting
secrets in the app. For example: ASP.NET core app.
Use SQL authentication for legacy applications
SQL authentication refers to the authentication of a user when connecting to Azure SQL Database or SQL
Managed Instance using username and password. A login will need to be created in each server or managed
instance, and a user created in each database.
How to implement
Use SQL authentication.
Best practices
As a server or instance admin, create logins and users (see the T-SQL sketch after this list). Unless you use
contained database users with passwords, all passwords are stored in the master database.
See the article, Controlling and granting database access to SQL Database, SQL Managed Instance and
Azure Synapse Analytics.
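A minimal sketch of both patterns, with hypothetical names and placeholder passwords:

-- Server-level login (run in the master database); the password is a placeholder.
CREATE LOGIN legacy_app_login WITH PASSWORD = '<strong password here>';
-- User in the target database mapped to that login.
CREATE USER legacy_app_user FOR LOGIN legacy_app_login;
-- Alternative: a contained database user with its own password (no server-level login).
CREATE USER contained_app_user WITH PASSWORD = '<strong password here>';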
Access management
Access management (also called Authorization) is the process of controlling and managing authorized users'
access and privileges to Azure SQL Database or SQL Managed Instance.
Implement principle of least privilege
Mentioned in: FedRamp controls AC-06, NIST: AC-6, OSA Practice #3
The principle of least privilege states that users shouldn't have more privileges than needed to complete their
tasks. For more information, see the article Just enough administration.
How to implement
Assign only the necessary permissions to complete the required tasks:
In SQL Databases:
Use granular permissions and user-defined database roles (or server-roles in SQL Managed Instance):
1. Create the required roles
CREATE ROLE
CREATE SERVER ROLE
2. Create required users
CREATE USER
3. Add users as members to roles
ALTER ROLE
ALTER SERVER ROLE
4. Then assign permissions to roles.
GRANT
Make sure not to assign users to unnecessary roles (a minimal T-SQL sketch of these steps follows this list).
In Azure Resource Manager:
Use built-in roles if available or Azure custom roles and assign the necessary permissions.
Azure built-in roles
Azure custom roles
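A minimal T-SQL sketch of steps 1 through 4 above, using hypothetical role, user, and schema names:

CREATE ROLE SalesReaders;                          -- 1. create the required role
CREATE USER report_user WITHOUT LOGIN;             -- 2. create the user (or FOR LOGIN / FROM EXTERNAL PROVIDER)
ALTER ROLE SalesReaders ADD MEMBER report_user;    -- 3. add the user as a member of the role
GRANT SELECT ON SCHEMA::Sales TO SalesReaders;     -- 4. grant only the needed permission, scoped to a schema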
Best practices
The following best practices are optional but will result in better manageability and supportability of your
security strategy:
If possible, start with the smallest possible set of permissions and add permissions one by one only when
there's a real necessity (and justification), rather than taking the opposite approach of removing permissions
step by step.
Refrain from assigning permissions to individual users. Use roles (database or server roles) consistently
instead. Roles help greatly with reporting and troubleshooting permissions. (Azure RBAC only supports
permission assignment via roles.)
Create and use custom roles with the exact permissions needed. Typical roles that are used in practice:
Security deployment
Administrator
Developer
Support personnel
Auditor
Automated processes
End user
Use built-in roles only when the permissions of the roles match exactly the needed permissions for the
user. You can assign users to multiple roles.
Remember that permissions in the database engine can be applied within the following scopes (the
smaller the scope, the smaller the impact of the granted permissions):
Server (special roles in the master database) in Azure
Database
Schema
It is a best practice to use schemas to grant permissions inside a database. (Also see Schema
design: recommendations for schema design with security in mind.)
Object (table, view, procedure, etc.)
NOTE
It is not recommended to apply permissions at the object level, because this level adds unnecessary complexity to
the overall implementation. If you decide to use object-level permissions, those should be clearly documented. The
same applies to column-level permissions, which are even less advisable for the same reasons. Also be
aware that, by default, a table-level DENY does not override a column-level GRANT; overriding it requires the
common criteria compliance Server Configuration option to be activated (see the sketch after this list).
Perform regular checks using Vulnerability Assessment (VA) to test for too many permissions.
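The following sketch illustrates the behavior called out in the note above, with hypothetical object and user names. Unless the common criteria compliance server configuration option is enabled, the column-level GRANT still allows the SELECT on that column despite the table-level DENY.

CREATE USER limited_user WITHOUT LOGIN;
GRANT SELECT (Email) ON dbo.Customers TO limited_user;   -- column-level GRANT
DENY SELECT ON dbo.Customers TO limited_user;            -- table-level DENY
-- By default, limited_user can still run: SELECT Email FROM dbo.Customers;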
Implement Separation of Duties
Mentioned in: FedRamp: AC-04, NIST: AC-5, ISO: A.6.1.2, PCI 6.4.2, SOC: CM-3, SDL-3
Separation of Duties, also called Segregation of Duties, describes the requirement to split sensitive tasks into
multiple tasks that are assigned to different users. Separation of Duties helps prevent data breaches.
How to implement
Identify the required level of Separation of Duties. Examples:
Between Development/Test and Production environments
Security-sensitive tasks vs. Database Administrator (DBA) management-level tasks vs. developer
tasks.
Examples: Auditor, creation of a security policy for Row-Level Security (RLS), implementing SQL
Database objects with DDL permissions.
Identify a comprehensive hierarchy of users (and automated processes) that access the system.
Create roles according to the needed user-groups and assign permissions to roles.
For management-level tasks in the Azure portal or via PowerShell automation, use Azure roles. Either find
a built-in role matching the requirement, or create an Azure custom role using the available
permissions.
Create Server roles for server-wide tasks (creating new logins, databases) in a managed instance.
Create Database Roles for database-level tasks.
For certain sensitive tasks, consider creating special stored procedures signed by a certificate to execute
the tasks on behalf of the users. One important advantage of digitally signed stored procedures is that if
the procedure is changed, the permissions that were granted to the previous version of the procedure are
immediately removed.
Example: Tutorial: Signing Stored Procedures with a Certificate (a condensed T-SQL sketch follows this list).
Implement Transparent Data Encryption (TDE) with customer-managed keys in Azure Key Vault to enable
Separation of Duties between data owner and security owner.
See the article, Configure customer-managed keys for Azure Storage encryption from the Azure
portal.
To ensure that a DBA can't see data that is considered highly sensitive and can still do DBA tasks, you can
use Always Encrypted with role separation.
See the articles, Overview of Key Management for Always Encrypted, Key Provisioning with Role
Separation, and Column Master Key Rotation with Role Separation.
In cases where the use of Always Encrypted isn't feasible, or at least not without major costs and effort
that may even render the system nearly unusable, compromises can be made and mitigated through the
use of compensating controls such as:
Human intervention in processes.
Audit trails – for more information on auditing, see Audit critical security events.
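A condensed, hypothetical sketch of the certificate-signing pattern from that tutorial (the certificate, user, permission, and procedure names are placeholders, and the procedure is assumed to already exist):

CREATE CERTIFICATE TaskSigningCert
    ENCRYPTION BY PASSWORD = '<strong password here>'
    WITH SUBJECT = 'Signs sensitive maintenance procedures';
-- Map the certificate to a database user and grant that user the elevated permission.
CREATE USER TaskSigningUser FROM CERTIFICATE TaskSigningCert;
GRANT ALTER ANY USER TO TaskSigningUser;
-- Sign the existing procedure; callers gain the certificate user's permission only while it executes,
-- and the signature is dropped automatically if the procedure is later altered.
ADD SIGNATURE TO dbo.usp_SensitiveMaintenanceTask
    BY CERTIFICATE TaskSigningCert WITH PASSWORD = '<strong password here>';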
Best practices
Make sure that different accounts are used for Development/Test and Production environments. Different
accounts help to comply with separation of Test and Production systems.
Refrain from assigning permissions to individual users. Use roles (database or server roles) consistently
instead. Having roles helps greatly with reporting and troubleshooting permissions.
Use built-in roles when the permissions match exactly the needed permissions – if the union of all
permissions from multiple built-in roles leads to a 100% match, you can assign multiple roles
concurrently as well.
Create and use user-defined roles when built-in roles grant too many permissions or insufficient
permissions.
Role assignments can also be done temporarily, also known as Dynamic Separation of Duties (DSD),
either within SQL Agent Job steps in T-SQL or using Azure PIM for Azure roles.
Make sure that DBAs don't have access to the encryption keys or key stores, and that Security
Administrators with access to the keys have no access to the database in turn. The use of Extensible Key
Management (EKM) can make this separation easier to achieve. Azure Key Vault can be used to
implement EKM.
Always make sure to have an Audit trail for security-related actions.
You can retrieve the definition of the Azure built-in roles to see the permissions used and create a custom
role based on excerpts and cumulations of these via PowerShell.
Because any member of the db_owner database role can change security settings like Transparent Data
Encryption (TDE), or change the SLO, this membership should be granted with care. However, there are
many tasks that require db_owner privileges, such as changing database settings and options. Auditing
plays a key role in any solution.
It is not possible to restrict permissions of a db_owner, and therefore prevent an administrative account
from viewing user data. If there's highly sensitive data in a database, Always Encrypted can be used to
safely prevent db_owners or any other DBA from viewing it.
NOTE
Achieving Separation of Duties (SoD) is challenging for security-related or troubleshooting tasks. Other areas like
development and end-user roles are easier to segregate. Most compliance related controls allow the use of alternate
control functions such as Auditing when other solutions aren't practical.
For readers who want to dive deeper into SoD, we recommend the following resources:
For Azure SQL Database and SQL Managed Instance:
Controlling and granting database access
Engine Separation of Duties for the Application Developer
Separation of Duties
Signing Stored Procedures
For Azure Resource Management:
Azure built-in roles
Azure custom roles
Using Azure AD Privileged Identity Management for elevated access
Perform regular code reviews
Mentioned in: PCI: 6.3.2, SOC: SDL-3
Separation of Duties is not limited to the data in a database, but includes application code. Malicious code can
potentially circumvent security controls. Before deploying custom code to production, it is essential to review
what's being deployed.
How to implement
Use a database tool like Azure Data Studio that supports source control.
Implement a segregated code deployment process.
Before committing to the main branch, a person other than the author of the code has to inspect the
code for potential elevation-of-privilege risks as well as malicious data modifications, to protect against
fraud and rogue access. This can be done using source control mechanisms.
Best practices
Standardization: It helps to implement a standard procedure that is to be followed for any code updates.
Vulnerability Assessment contains rules that check for excessive permissions, the use of old encryption
algorithms, and other security problems within a database schema.
Further checks can be done in a QA or test environment using Advanced Threat Protection that scans for
code that is vulnerable to SQL-injection.
Examples of what to look out for:
Creation of a user or changing security settings from within an automated SQL-code-update
deployment.
A stored procedure, which, depending on the parameters provided, updates a monetary value in a cell
in a non-conforming way.
Make sure the person conducting the review is an individual other than the originating code author and
knowledgeable in code-reviews and secure coding.
Be sure to know all sources of code changes. Code can be in T-SQL scripts. It can be ad hoc commands
to be executed, or it can be deployed in the form of views, functions, triggers, and stored procedures. It can be
part of SQL Agent job definitions (steps). It can also be executed from within SSIS packages, Azure Data
Factory, and other services.
Data protection
Data protection is a set of capabilities for safeguarding important information from compromise by encryption
or obfuscation.
NOTE
Microsoft attests to Azure SQL Database and SQL Managed Instance as being FIPS 140-2 Level 1 compliant. This is done
after verifying the strict use of FIPS 140-2 Level 1 acceptable algorithms and FIPS 140-2 Level 1 validated instances of
those algorithms including consistency with required key lengths, key management, key generation, and key storage. This
attestation is meant to allow our customers to respond to the need or requirement for the use of FIPS 140-2 Level 1
validated instances in the processing of data or delivery of systems or applications. We define the terms "FIPS 140-2 Level
1 compliant" and "FIPS 140-2 Level 1 compliance" used in the above statement to demonstrate their intended
applicability to U.S. and Canadian government use of the different term "FIPS 140-2 Level 1 validated."
Protect data in transit
Protect your data while it moves between your client and the server. Refer to Network Security.
Encrypt data at rest
Mentioned in: OSA Practice #6, ISO Control Family: Cryptography
Encryption at rest is the cryptographic protection of data when it is persisted in database, log, and backup files.
How to implement
Transparent Data Encryption (TDE) with service-managed keys is enabled by default for any database
created after 2017 in Azure SQL Database and SQL Managed Instance.
In a managed instance, if the database is created from a restore operation using an on-premises server, the
TDE setting of the original database will be honored. If the original database doesn't have TDE enabled, we
recommend that TDE be manually turned on for the managed instance.
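A small sketch for checking and enabling TDE with T-SQL; the database name is hypothetical:

-- Check the current encryption state of databases (encryption_state 3 = encrypted).
SELECT DB_NAME(database_id) AS database_name, encryption_state
FROM sys.dm_database_encryption_keys;
-- Turn TDE on for a managed instance database that was restored without it.
ALTER DATABASE [RestoredFromOnPrem] SET ENCRYPTION ON;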
Best practices
Don't store data that requires encryption-at-rest in the master database. The master database can't be
encrypted with TDE.
Use customer-managed keys in Azure Key Vault if you need increased transparency and granular control
over the TDE protector. Azure Key Vault lets you revoke permissions at any time to render
the database inaccessible. You can centrally manage TDE protectors along with other keys, or rotate the
TDE protector on your own schedule using Azure Key Vault.
If you're using customer-managed keys in Azure Key Vault, follow the articles, Guidelines for configuring
TDE with Azure Key Vault and How to configure Geo-DR with Azure Key Vault.
NOTE
Some items considered customer content, such as table names, object names, and index names, may be transmitted in
log files for support and troubleshooting by Microsoft.
NOTE
Always Encrypted does not work with Dynamic Data Masking. It is not possible to encrypt and mask the same column,
which implies that you need to prioritize protecting data in use vs. masking the data for your app users via Dynamic Data
Masking.
Best practices
NOTE
Dynamic Data Masking cannot be used to protect data from high-privilege users. Masking policies do not apply to users
with administrative access like db_owner.
Don't permit app users to run ad-hoc queries (as they may be able to work around Dynamic Data
Masking).
See the article, Bypassing masking using inference or brute-force techniques for details.
Use a proper access control policy (via SQL permissions, roles, RLS) to limit user permissions to make
updates in the masked columns. Creating a mask on a column doesn't prevent updates to that column.
Users that receive masked data when querying the masked column can update the data if they have
write permissions.
Dynamic Data Masking doesn't preserve the statistical properties of the masked values. This may impact
query results (for example, queries containing filtering predicates or joins on the masked data).
Network security
Network security refers to access controls and best practices to secure your data in transit to Azure SQL
Database.
Configure my client to connect securely to SQL Database/SQL Managed Instance
Best practices on how to prevent client machines and applications with well-known vulnerabilities (for example,
using older TLS protocols and cipher suites) from connecting to Azure SQL Database and SQL Managed
Instance.
How to implement
Ensure that client machines connecting to Azure SQL Database and SQL Managed Instance are using the
latest Transport Layer Security (TLS) version.
Best practices
Enforce a minimal TLS version at the SQL Database server or SQL Managed Instance level using the
minimal TLS version setting. We recommend setting the minimal TLS version to 1.2, after testing to
confirm your applications support it. TLS 1.2 includes fixes for vulnerabilities found in previous versions.
Configure all your apps and tools to connect to SQL Database with encryption enabled:
Encrypt = On, TrustServerCertificate = Off (or the equivalent with non-Microsoft drivers).
If your app uses a driver that doesn't support TLS, or supports only an older version of TLS, replace the driver
if possible. If that isn't possible, carefully evaluate the security risks.
Reduce attack vectors via vulnerabilities in SSL 2.0, SSL 3.0, TLS 1.0, and TLS 1.1 by disabling them on
client machines connecting to Azure SQL Database per Transport Layer Security (TLS) registry
settings.
Check cipher suites available on the client: Cipher Suites in TLS/SSL (Schannel SSP). Specifically,
disable 3DES per Configuring TLS Cipher Suite Order.
Minimize attack surface
Minimize the number of features that can be attacked by a malicious user. Implement network access controls
for Azure SQL Database.
How to implement
In SQL Database:
Set Allow Access to Azure services to OFF at the server level.
Use VNet Service endpoints and VNet Firewall Rules.
Use Private Link.
In SQL Managed Instance:
Follow the guidelines in Network requirements.
Best practices
Restrict access to Azure SQL Database and SQL Managed Instance by connecting over a private
endpoint (for example, using a private data path):
A managed instance can be isolated inside a virtual network to prevent external access. Applications
and tools that are in the same or a peered virtual network in the same region can access it directly.
Applications and tools that are in a different region can use a virtual-network-to-virtual-network
connection or ExpressRoute circuit peering to establish the connection. Customers should use Network
Security Groups (NSGs) to restrict access over port 1433 to only the resources that require access to a
managed instance.
For a SQL Database, use the Private Link feature that provides a dedicated private IP for the server
inside your virtual network. You can also use Virtual network service endpoints with virtual network
firewall rules to restrict access to your servers.
Mobile users should use point-to-site VPN connections to connect over the data path.
Users connected to their on-premises network should use site-to-site VPN connection or
ExpressRoute to connect over the data path.
You can access Azure SQL Database and SQL Managed Instance by connecting to a public endpoint (for
example, using a public data path). The following best practices should be considered:
For a server in SQL Database, use IP firewall rules to restrict access to only authorized IP addresses (a T-SQL sketch follows this list).
For SQL Managed Instance, use Network Security Groups (NSG) to restrict access over port 3342 only
to required resources. For more information, see Use a managed instance securely with public
endpoints.
NOTE
The SQL Managed Instance public endpoint is not enabled by default and must be explicitly enabled. If
company policy disallows the use of public endpoints, use Azure Policy to prevent enabling public endpoints in the
first place.
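For the IP firewall rules mentioned above, server-level and database-level rules can also be managed with T-SQL; the rule names and IP range below are hypothetical examples:

-- Server-level rule: run in the master database of the logical server.
EXECUTE sp_set_firewall_rule
    @name = N'AllowedAppRange',
    @start_ip_address = '203.0.113.10',
    @end_ip_address = '203.0.113.20';
-- Database-level rule: run in the user database itself.
EXECUTE sp_set_database_firewall_rule
    @name = N'AllowedAppRangeDbRule',
    @start_ip_address = '203.0.113.10',
    @end_ip_address = '203.0.113.20';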
Protect against Distributed Denial of Service (DDoS) attacks
How to implement
DDoS protection is automatically enabled as part of the Azure Platform. It includes always-on traffic monitoring
and real-time mitigation of network-level attacks on public endpoints.
Use Azure DDoS Protection to monitor public IP addresses associated to resources deployed in virtual
networks.
Use Advanced Threat Protection for Azure SQL Database to detect Denial of Service (DoS) attacks against
databases.
Best practices
Following the practices described in Minimize Attack Surface helps minimize DDoS attack threats.
The Advanced Threat Protection Brute force SQL credentials alert helps to detect brute force attacks.
In some cases, the alert can even distinguish penetration testing workloads.
For Azure VM hosting applications connecting to SQL Database:
Follow recommendation to Restrict access through Internet-facing endpoints in Microsoft Defender
for Cloud.
Use virtual machine scale sets to run multiple instances of your application on Azure VMs.
Disable RDP and SSH access from the Internet to prevent brute-force attacks.
NOTE
Enabling auditing to Log Analytics will incur costs based on ingestion rates. Be aware of the associated cost of
using this option, or consider storing the audit logs in an Azure storage account.
Security Management
This section describes the different aspects of, and best practices for, managing your database security posture. It
includes best practices for ensuring your databases are configured to meet security standards, for discovering and
classifying potentially sensitive data, and for tracking access to that data in your databases.
Ensure that the databases are configured to meet security best practices
Proactively improve your database security by discovering and remediating potential database vulnerabilities.
How to implement
Enable SQL Vulnerability Assessment (VA) to scan your databases for security issues, and configure it to run
periodically on your databases automatically.
Best practices
Initially, run VA on your databases and iterate by remediating failing checks that oppose security best
practices. Set up baselines for acceptable configurations until the scan comes out clean, or all checks have
passed.
Configure periodic recurring scans to run once a week and configure the relevant person to receive
summary emails.
Review the VA summary following each weekly scan. For any vulnerabilities found, evaluate the drift from
the previous scan result and determine if the check should be resolved. Review if there's a legitimate
reason for the change in configuration.
Resolve checks and update baselines where relevant. Create ticket items for resolving actions and track
these until they're resolved.
Further resources
SQL Vulnerability Assessment
SQL Vulnerability Assessment service helps you identify database vulnerabilities
Identify and tag sensitive data
Discover columns that potentially contain sensitive data. What is considered sensitive data heavily depends on
the customer, compliance regulation, etc., and needs to be evaluated by the users in charge of that data. Classify
the columns to use advanced sensitivity-based auditing and protection scenarios.
How to implement
Use SQL Data Discovery and Classification to discover, classify, label, and protect the sensitive data in your
databases.
View the classification recommendations that are created by the automated discovery in the SQL Data
Discovery and Classification dashboard. Accept the relevant classifications, such that your sensitive
data is persistently tagged with classification labels.
Manually add classifications for any additional sensitive data fields that were not discovered by the
automated mechanism.
For more information, see SQL Data Discovery and Classification.
Best practices
Monitor the classification dashboard on a regular basis for an accurate assessment of the database's
classification state. A report on the database classification state can be exported or printed to share for
compliance and auditing purposes.
Continuously monitor the status of recommended sensitive data in SQL Vulnerability Assessment. Track
the sensitive data discovery rule and identify any drift in the recommended columns for classification.
Use classification in a way that is tailored to the specific needs of your organization. Customize your
Information Protection policy (sensitivity labels, information types, discovery logic) in the SQL
Information Protection policy in Microsoft Defender for Cloud.
Track access to sensitive data
Monitor who accesses sensitive data and capture queries on sensitive data in audit logs.
How to implement
Use SQL Audit and Data Classification in combination.
In your SQL Database Audit log, you can track access specifically to sensitive data. You can also view
information such as the data that was accessed, as well as its sensitivity label. For more information,
see Data Discovery and Classification and Auditing access to sensitive data.
Best practices
See best practices for the Auditing and Data Classification sections:
Audit critical security events
Identify and tag sensitive data
Visualize security and compliance status
Use a unified infrastructure security management system that strengthens the security posture of your data
centers (including databases in SQL Database). View a list of recommendations concerning the security of your
databases and compliance status.
How to implement
Monitor SQL-related security recommendations and active threats in Microsoft Defender for Cloud.
Next steps
See An overview of Azure SQL Database security capabilities
Azure Policy Regulatory Compliance controls for
Azure SQL Database & SQL Managed Instance
IMPORTANT
Each control below is associated with one or more Azure Policy definitions. These policies may help you assess compliance
with the control; however, there often is not a one-to-one or complete match between a control and one or more policies.
As such, Compliant in Azure Policy refers only to the policies themselves; this doesn't ensure you're fully compliant with
all requirements of a control. In addition, the compliance standard includes controls that aren't addressed by any Azure
Policy definitions at this time. Therefore, compliance in Azure Policy is only a partial view of your overall compliance status.
The associations between controls and Azure Policy Regulatory Compliance definitions for these compliance standards
may change over time.
Domain | Control ID | Control title | Policy (Azure portal) | Policy version (GitHub)
Guidelines for System Monitoring - Event logging and auditing | 1537 | Events to be logged - 1537 | Azure Defender for SQL should be enabled for unprotected Azure SQL servers | 2.0.1
Guidelines for System Monitoring - Event logging and auditing | 1537 | Events to be logged - 1537 | Azure Defender for SQL should be enabled for unprotected SQL Managed Instances | 1.0.2
Network Security | NS-2 | Secure cloud services with network controls | Public network access on Azure SQL Database should be disabled | 1.1.0
Logging and Threat Detection | LT-1 | Enable threat detection capabilities | Azure Defender for SQL should be enabled for unprotected Azure SQL servers | 2.0.1
Logging and Threat Detection | LT-1 | Enable threat detection capabilities | Azure Defender for SQL should be enabled for unprotected SQL Managed Instances | 1.0.2
Logging and Threat Detection | LT-2 | Enable threat detection for identity and access management | Azure Defender for SQL should be enabled for unprotected Azure SQL servers | 2.0.1
Logging and Threat Detection | LT-2 | Enable threat detection for identity and access management | Azure Defender for SQL should be enabled for unprotected SQL Managed Instances | 1.0.2
Logging and Threat Detection | LT-3 | Enable logging for security investigation | Auditing on SQL server should be enabled | 2.0.0
Logging and Threat Detection | LT-6 | Configure log storage retention | SQL servers with auditing to storage account destination should be configured with 90 days retention or higher | 3.0.0
Logging and Monitoring | 2.7 | Enable alerts for anomalous activity | Azure Defender for SQL should be enabled for unprotected Azure SQL servers | 2.0.1
Logging and Monitoring | 2.7 | Enable alerts for anomalous activity | Azure Defender for SQL should be enabled for unprotected SQL Managed Instances | 1.0.2
Identity and Access Control | 3.9 | Use Azure Active Directory | An Azure Active Directory administrator should be provisioned for SQL servers | 1.0.0
Security Center | CIS Microsoft Azure Foundations Benchmark recommendation 2.14 | Ensure ASC Default policy setting "Monitor SQL Auditing" is not "Disabled" | Auditing on SQL server should be enabled | 2.0.0
Security Center | CIS Microsoft Azure Foundations Benchmark recommendation 2.15 | Ensure ASC Default policy setting "Monitor SQL Encryption" is not "Disabled" | Transparent Data Encryption on SQL databases should be enabled | 2.0.0
Database Services | CIS Microsoft Azure Foundations Benchmark recommendation 4.1 | Ensure that 'Auditing' is set to 'On' | Auditing on SQL server should be enabled | 2.0.0
Database Services | CIS Microsoft Azure Foundations Benchmark recommendation 4.10 | Ensure SQL server's TDE protector is encrypted with BYOK (Use your own key) | SQL managed instances should use customer-managed keys to encrypt data at rest | 2.0.0
Database Services | CIS Microsoft Azure Foundations Benchmark recommendation 4.10 | Ensure SQL server's TDE protector is encrypted with BYOK (Use your own key) | SQL servers should use customer-managed keys to encrypt data at rest | 2.0.1
Database Services | CIS Microsoft Azure Foundations Benchmark recommendation 4.2 | Ensure that 'AuditActionGroups' in 'auditing' policy for a SQL server is set properly | SQL Auditing settings should have Action-Groups configured to capture critical activities | 1.0.0
Database Services | CIS Microsoft Azure Foundations Benchmark recommendation 4.3 | Ensure that 'Auditing' Retention is 'greater than 90 days' | SQL servers with auditing to storage account destination should be configured with 90 days retention or higher | 3.0.0
Database Services | CIS Microsoft Azure Foundations Benchmark recommendation 4.4 | Ensure that 'Advanced Data Security' on a SQL server is set to 'On' | Azure Defender for SQL should be enabled for unprotected Azure SQL servers | 2.0.1
Database Services | CIS Microsoft Azure Foundations Benchmark recommendation 4.4 | Ensure that 'Advanced Data Security' on a SQL server is set to 'On' | Azure Defender for SQL should be enabled for unprotected SQL Managed Instances | 1.0.2
Database Services | CIS Microsoft Azure Foundations Benchmark recommendation 4.8 | Ensure that Azure Active Directory Admin is configured | An Azure Active Directory administrator should be provisioned for SQL servers | 1.0.0
Database Services | CIS Microsoft Azure Foundations Benchmark recommendation 4.9 | Ensure that 'Data encryption' is set to 'On' on a SQL Database | Transparent Data Encryption on SQL databases should be enabled | 2.0.0
Database Services | CIS Microsoft Azure Foundations Benchmark recommendation 4.1.1 | Ensure that 'Auditing' is set to 'On' | Auditing on SQL server should be enabled | 2.0.0
Database Services | CIS Microsoft Azure Foundations Benchmark recommendation 4.1.2 | Ensure that 'Data encryption' is set to 'On' on a SQL Database | Transparent Data Encryption on SQL databases should be enabled | 2.0.0
Database Services | CIS Microsoft Azure Foundations Benchmark recommendation 4.1.3 | Ensure that 'Auditing' Retention is 'greater than 90 days' | SQL servers with auditing to storage account destination should be configured with 90 days retention or higher | 3.0.0
Database Services | CIS Microsoft Azure Foundations Benchmark recommendation 4.2.1 | Ensure that Advanced Threat Protection (ATP) on a SQL server is set to 'Enabled' | Azure Defender for SQL should be enabled for unprotected Azure SQL servers | 2.0.1
Database Services | CIS Microsoft Azure Foundations Benchmark recommendation 4.2.1 | Ensure that Advanced Threat Protection (ATP) on a SQL server is set to 'Enabled' | Azure Defender for SQL should be enabled for unprotected SQL Managed Instances | 1.0.2
Database Services | CIS Microsoft Azure Foundations Benchmark recommendation 4.4 | Ensure that Azure Active Directory Admin is configured | An Azure Active Directory administrator should be provisioned for SQL servers | 1.0.0
Database Services | CIS Microsoft Azure Foundations Benchmark recommendation 4.5 | Ensure SQL server's TDE protector is encrypted with Customer-managed key | SQL managed instances should use customer-managed keys to encrypt data at rest | 2.0.0
Database Services | CIS Microsoft Azure Foundations Benchmark recommendation 4.5 | Ensure SQL server's TDE protector is encrypted with Customer-managed key | SQL servers should use customer-managed keys to encrypt data at rest | 2.0.1
CMMC Level 3
To review how the available Azure Policy built-ins for all Azure services map to this compliance standard, see
Azure Policy Regulatory Compliance - CMMC Level 3. For more information about this compliance standard, see
Cybersecurity Maturity Model Certification (CMMC).
Domain | Control ID | Control title | Policy (Azure portal) | Policy version (GitHub)
Access Control | AC.2.016 | Control the flow of CUI in accordance with approved authorizations. | Public network access on Azure SQL Database should be disabled | 1.1.0
Audit and Accountability | AU.2.041 | Ensure that the actions of individual system users can be uniquely traced to those users so they can be held accountable for their actions. | Azure Defender for SQL should be enabled for unprotected Azure SQL servers | 2.0.1
Audit and Accountability | AU.2.041 | Ensure that the actions of individual system users can be uniquely traced to those users so they can be held accountable for their actions. | Azure Defender for SQL should be enabled for unprotected SQL Managed Instances | 1.0.2
Audit and Accountability | AU.2.042 | Create and retain system audit logs and records to the extent needed to enable the monitoring, analysis, investigation, and reporting of unlawful or unauthorized system activity. | Azure Defender for SQL should be enabled for unprotected Azure SQL servers | 2.0.1
Audit and Accountability | AU.2.042 | Create and retain system audit logs and records to the extent needed to enable the monitoring, analysis, investigation, and reporting of unlawful or unauthorized system activity. | Azure Defender for SQL should be enabled for unprotected SQL Managed Instances | 1.0.2
Audit and Accountability | AU.3.046 | Alert in the event of an audit logging process failure. | Azure Defender for SQL should be enabled for unprotected Azure SQL servers | 2.0.1
Audit and Accountability | AU.3.046 | Alert in the event of an audit logging process failure. | Azure Defender for SQL should be enabled for unprotected SQL Managed Instances | 1.0.2
System and Communications Protection | SC.1.175 | Monitor, control, and protect communications (i.e., information transmitted or received by organizational systems) at the external boundaries and key internal boundaries of organizational systems. | Public network access on Azure SQL Database should be disabled | 1.1.0
FedRAMP High
To review how the available Azure Policy built-ins for all Azure services map to this compliance standard, see
Azure Policy Regulatory Compliance - FedRAMP High. For more information about this compliance standard,
see FedRAMP High.
Domain | Control ID | Control title | Policy (Azure portal) | Policy version (GitHub)
Access Control | AC-2 (12) | Account Monitoring / Atypical Usage | Azure Defender for SQL should be enabled for unprotected SQL Managed Instances | 1.0.2
Audit and Accountability | AU-6 (4) | Central Review and Analysis | Auditing on SQL server should be enabled | 2.0.0
Audit and Accountability | AU-6 (4) | Central Review and Analysis | Azure Defender for SQL should be enabled for unprotected Azure SQL servers | 2.0.1
Audit and Accountability | AU-6 (4) | Central Review and Analysis | Azure Defender for SQL should be enabled for unprotected SQL Managed Instances | 1.0.2
Audit and Accountability | AU-12 (1) | System-wide / Time-correlated Audit Trail | Azure Defender for SQL should be enabled for unprotected Azure SQL servers | 2.0.1
Audit and Accountability | AU-12 (1) | System-wide / Time-correlated Audit Trail | Azure Defender for SQL should be enabled for unprotected SQL Managed Instances | 1.0.2
System and Communications Protection | SC-7 (3) | Access Points | Public network access on Azure SQL Database should be disabled | 1.1.0
FedRAMP Moderate
To review how the available Azure Policy built-ins for all Azure services map to this compliance standard, see
Azure Policy Regulatory Compliance - FedRAMP Moderate. For more information about this compliance
standard, see FedRAMP Moderate.
Domain | Control ID | Control title | Policy (Azure portal) | Policy version (GitHub)
Access Control | AC-2 (12) | Account Monitoring / Atypical Usage | Azure Defender for SQL should be enabled for unprotected SQL Managed Instances | 1.0.2
System and Communications Protection | SC-7 (3) | Access Points | Public network access on Azure SQL Database should be disabled | 1.1.0
ISO 27001:2013
To review how the available Azure Policy built-ins for all Azure services map to this compliance standard, see
Azure Policy Regulatory Compliance - ISO 27001:2013. For more information about this compliance standard,
see ISO 27001:2013.
Domain | Control ID | Control title | Policy (Azure portal) | Policy version (GitHub)
Data management | DM-6 | 20.4.4 Database files | Azure Defender for SQL should be enabled for unprotected Azure SQL servers | 2.0.1
Data management | DM-6 | 20.4.4 Database files | Azure Defender for SQL should be enabled for unprotected SQL Managed Instances | 1.0.2
Domain | Control ID | Control title | Policy (Azure portal) | Policy version (GitHub)
Access Control | AC-2 (12) | Account Monitoring for Atypical Usage | Azure Defender for SQL should be enabled for unprotected SQL Managed Instances | 1.0.2
Access Control | AC-16 | Security and Privacy Attributes | Azure Defender for SQL should be enabled for unprotected Azure SQL servers | 2.0.1
Access Control | AC-16 | Security and Privacy Attributes | Azure Defender for SQL should be enabled for unprotected SQL Managed Instances | 1.0.2
Audit and Accountability | AU-6 | Audit Record Review, Analysis, and Reporting | Azure Defender for SQL should be enabled for unprotected Azure SQL servers | 2.0.1
Audit and Accountability | AU-6 | Audit Record Review, Analysis, and Reporting | Azure Defender for SQL should be enabled for unprotected SQL Managed Instances | 1.0.2
Audit and Accountability | AU-6 (4) | Central Review and Analysis | Auditing on SQL server should be enabled | 2.0.0
Audit and Accountability | AU-6 (4) | Central Review and Analysis | Azure Defender for SQL should be enabled for unprotected Azure SQL servers | 2.0.1
Audit and Accountability | AU-6 (4) | Central Review and Analysis | Azure Defender for SQL should be enabled for unprotected SQL Managed Instances | 1.0.2
Audit and Accountability | AU-6 (5) | Integrated Analysis of Audit Records | Azure Defender for SQL should be enabled for unprotected Azure SQL servers | 2.0.1
Audit and Accountability | AU-6 (5) | Integrated Analysis of Audit Records | Azure Defender for SQL should be enabled for unprotected SQL Managed Instances | 1.0.2
Audit and Accountability | AU-12 (1) | System-wide and Time-correlated Audit Trail | Azure Defender for SQL should be enabled for unprotected Azure SQL servers | 2.0.1
Audit and Accountability | AU-12 (1) | System-wide and Time-correlated Audit Trail | Azure Defender for SQL should be enabled for unprotected SQL Managed Instances | 1.0.2
System and Communications Protection | SC-7 (3) | Access Points | Public network access on Azure SQL Database should be disabled | 1.1.0
Domain | Control ID | Control title | Policy (Azure portal) | Policy version (GitHub)
Requirement 1 | PCI DSS v3.2.1 1.3.4 | PCI DSS requirement 1.3.4 | Auditing on SQL server should be enabled | 2.0.0
Requirement 10 | PCI DSS v3.2.1 10.5.4 | PCI DSS requirement 10.5.4 | Auditing on SQL server should be enabled | 2.0.0
Requirement 11 | PCI DSS v3.2.1 11.2.1 | PCI DSS requirement 11.2.1 | SQL databases should have vulnerability findings resolved | 4.0.0
Requirement 3 | PCI DSS v3.2.1 3.2 | PCI DSS requirement 3.2 | An Azure Active Directory administrator should be provisioned for SQL servers | 1.0.0
Requirement 3 | PCI DSS v3.2.1 3.4 | PCI DSS requirement 3.4 | Transparent Data Encryption on SQL databases should be enabled | 2.0.0
Requirement 4 | PCI DSS v3.2.1 4.1 | PCI DSS requirement 4.1 | Transparent Data Encryption on SQL databases should be enabled | 2.0.0
Requirement 5 | PCI DSS v3.2.1 5.1 | PCI DSS requirement 5.1 | SQL databases should have vulnerability findings resolved | 4.0.0
Requirement 6 | PCI DSS v3.2.1 6.2 | PCI DSS requirement 6.2 | SQL databases should have vulnerability findings resolved | 4.0.0
Requirement 6 | PCI DSS v3.2.1 6.5.3 | PCI DSS requirement 6.5.3 | Transparent Data Encryption on SQL databases should be enabled | 2.0.0
Requirement 6 | PCI DSS v3.2.1 6.6 | PCI DSS requirement 6.6 | SQL databases should have vulnerability findings resolved | 4.0.0
Requirement 7 | PCI DSS v3.2.1 7.2.1 | PCI DSS requirement 7.2.1 | An Azure Active Directory administrator should be provisioned for SQL servers | 1.0.0
Requirement 8 | PCI DSS v3.2.1 8.3.1 | PCI DSS requirement 8.3.1 | An Azure Active Directory administrator should be provisioned for SQL servers | 1.0.0
RMIT Malaysia
To review how the available Azure Policy built-ins for all Azure services map to this compliance standard, see
Azure Policy Regulatory Compliance - RMIT Malaysia. For more information about this compliance standard, see
RMIT Malaysia.
Domain | Control ID | Control title | Policy (Azure portal) | Policy version (GitHub)
Network Resilience | RMiT 10.33 | Network Resilience - 10.33 | Configure Azure SQL Server to disable public network access | 1.0.0
Network Resilience | RMiT 10.33 | Network Resilience - 10.33 | Configure Azure SQL Server to enable private endpoint connections | 1.0.0
Network Resilience | RMiT 10.39 | Network Resilience - 10.39 | SQL Server should use a virtual network service endpoint | 1.0.0
Cloud Services | RMiT 10.49 | Cloud Services - 10.49 | SQL Database should avoid using GRS backup redundancy | 2.0.0
Cloud Services | RMiT 10.53 | Cloud Services - 10.53 | SQL servers should use customer-managed keys to encrypt data at rest | 2.0.1
Data Loss Prevention (DLP) | RMiT 11.15 | Data Loss Prevention (DLP) - 11.15 | Configure Azure SQL Server to disable public network access | 1.0.0
Data Loss Prevention (DLP) | RMiT 11.15 | Data Loss Prevention (DLP) - 11.15 | SQL managed instances should use customer-managed keys to encrypt data at rest | 2.0.0
Data Loss Prevention (DLP) | RMiT 11.15 | Data Loss Prevention (DLP) - 11.15 | Transparent Data Encryption on SQL databases should be enabled | 2.0.0
Control Measures on Cybersecurity | RMiT Appendix 5.6 | Control Measures on Cybersecurity - Appendix 5.6 | Azure SQL Database should be running TLS version 1.2 or newer | 2.0.0
Control Measures on Cybersecurity | RMiT Appendix 5.6 | Control Measures on Cybersecurity - Appendix 5.6 | Public network access on Azure SQL Database should be disabled | 1.1.0
Control Measures on Cybersecurity | RMiT Appendix 5.6 | Control Measures on Cybersecurity - Appendix 5.6 | SQL Managed Instance should have the minimal TLS version of 1.2 | 1.0.1
Control Measures on Cybersecurity | RMiT Appendix 5.6 | Control Measures on Cybersecurity - Appendix 5.6 | Virtual network firewall rule on Azure SQL Database should be enabled to allow traffic from the specified subnet | 1.0.0
Control Measures on Cybersecurity | RMiT Appendix 5.7 | Control Measures on Cybersecurity - Appendix 5.7 | Configure Azure SQL Server to enable private endpoint connections | 1.0.0
Domain | Control ID | Control title | Policy (Azure portal) | Policy version (GitHub)
Audit information for users | 13 | Audit information for users | Azure Defender for SQL should be enabled for unprotected Azure SQL servers | 2.0.1
Next steps
Learn more about Azure Policy Regulatory Compliance.
See the built-ins on the Azure Policy GitHub repo.
Microsoft Defender for SQL
Applies to: Azure SQL Database Azure SQL Managed Instance Azure Synapse Analytics
Microsoft Defender for SQL is a Defender plan in Microsoft Defender for Cloud. Microsoft Defender for SQL
includes functionality for surfacing and mitigating potential database vulnerabilities, and detecting anomalous
activities that could indicate a threat to your database. It provides a single go-to location for enabling and
managing these capabilities.
5. Select Save.
Enable Microsoft Defender plans programmatically
The flexibility of Azure allows for a number of programmatic methods for enabling Microsoft Defender plans.
Use any of the following tools to enable Microsoft Defender for your subscription:
Method | Instructions
PowerShell | Set-AzSecurityPricing
Enable Microsoft Defender for Azure SQL Database at the resource level
We recommend enabling Microsoft Defender plans at the subscription level so that new resources are
automatically protected. However, if you have an organizational reason to enable Microsoft Defender for Cloud
at the server level, use the following steps:
1. From the Azure portal, open your server or managed instance.
2. Under the Security heading, select Defender for Cloud.
3. Select Enable Microsoft Defender for SQL.
NOTE
A storage account is automatically created and configured to store your Vulnerability Assessment scan results. If
you've already enabled Microsoft Defender for another server in the same resource group and region, then the existing
storage account is used.
The cost of Microsoft Defender for SQL is aligned with Microsoft Defender for Cloud standard tier pricing per node, where
a node is the entire server or managed instance. You are thus paying only once for protecting all databases on the server
or managed instance with Microsoft Defender for SQL. You can evaluate Microsoft Defender for Cloud with a free trial.
Next steps
Learn more about Vulnerability Assessment
Learn more about Advanced Threat Protection
Learn more about Microsoft Defender for Cloud
SQL Advanced Threat Protection
Applies to: Azure SQL Database Azure SQL Managed Instance Azure Synapse Analytics SQL
Server on Azure VM Azure Arc-enabled SQL Server
Advanced Threat Protection for Azure SQL Database, Azure SQL Managed Instance, Azure Synapse Analytics,
SQL Server on Azure Virtual Machines and Azure Arc-enabled SQL Server detects anomalous activities
indicating unusual and potentially harmful attempts to access or exploit databases.
Advanced Threat Protection is part of the Microsoft Defender for SQL offering, which is a unified package for
advanced SQL security capabilities. Advanced Threat Protection can be accessed and managed via the central
Microsoft Defender for SQL portal.
Overview
Advanced Threat Protection provides a new layer of security, which enables customers to detect and respond to
potential threats as they occur by providing security alerts on anomalous activities. Users receive an alert upon
suspicious database activities, potential vulnerabilities, and SQL injection attacks, as well as anomalous database
access and query patterns. Advanced Threat Protection integrates alerts with Microsoft Defender for Cloud,
which includes details of the suspicious activity and recommended actions on how to investigate and mitigate the threat.
Advanced Threat Protection makes it simple to address potential threats to the database without the need to be
a security expert or manage advanced security monitoring systems.
For a full investigation experience, it is recommended to enable auditing, which writes database events to an
audit log in your Azure storage account. To enable auditing, see Auditing for Azure SQL Database and Azure
Synapse or Auditing for Azure SQL Managed Instance.
Alerts
Advanced Threat Protection detects anomalous activities indicating unusual and potentially harmful attempts to
access or exploit databases. For a list of alerts, see the Alerts for SQL Database and Azure Synapse Analytics in
Microsoft Defender for Cloud.
2. Click a specific alert to get additional details and actions for investigating this threat and remediating
future threats.
For example, SQL injection is one of the most common Web application security issues on the Internet
that is used to attack data-driven applications. Attackers take advantage of application vulnerabilities to
inject malicious SQL statements into application entry fields, breaching or modifying data in the
database. For SQL Injection alerts, the alert’s details include the vulnerable SQL statement that was
exploited.
Data Discovery & Classification
Applies to: Azure SQL Database Azure SQL Managed Instance Azure Synapse Analytics
Data Discovery & Classification is built into Azure SQL Database, Azure SQL Managed Instance, and Azure
Synapse Analytics. It provides basic capabilities for discovering, classifying, labeling, and reporting the sensitive
data in your databases.
Your most sensitive data might include business, financial, healthcare, or personal information. It can serve as
infrastructure for:
Helping to meet standards for data privacy and requirements for regulatory compliance.
Various security scenarios, such as monitoring (auditing) access to sensitive data.
Controlling access to and hardening the security of databases that contain highly sensitive data.
NOTE
For information about SQL Server on-premises, see SQL Data Discovery & Classification.
NOTE
The below example uses Azure SQL Database, but you should select the appropriate product that you want to configure
Data Discovery & Classification.
7. To complete your classification and persistently label (tag) the database columns with the new
classification metadata, select Save in the Classification page.
Microsoft Information Protection policy
Microsoft Information Protection (MIP) labels provide a simple and uniform way for users to classify sensitive
data consistently across different Microsoft applications. MIP sensitivity labels are created and managed in
Microsoft 365 compliance center. To learn how to create and publish MIP sensitivity labels in Microsoft 365
compliance center, see the article, Create and publish sensitivity labels.
Prerequisites to switch to MIP policy
The current user has tenant-wide security admin permissions to apply policy at the tenant root management
group level. For more information, see Grant tenant-wide permissions to yourself.
Your tenant has an active Microsoft 365 subscription and you have labels published for the current user. For
more information, see Create and configure sensitivity labels and their policies.
Classify database in Microsoft Information Protection policy mode
1. Go to the Azure portal.
2. Navigate to your database in Azure SQL Database
3. Go to Data Discover y & Classification under the Security heading in your database pane.
4. To select Microsoft Information Protection policy , select the Over view tab, and select Configure .
5. Select Microsoft Information Protection policy in the Information Protection policy options, and
select Save .
6. If you go to the Classification tab, or select Add classification , you will now see M365 sensitivity
labels appear in the Sensitivity label dropdown.
Information type is [n/a] while you are in MIP policy mode, and automatic data discovery & recommendations remain disabled.
A warning icon may appear against an already classified column if the column was classified using a different Information Protection policy than the currently active policy. For example, if the column was classified earlier with a label from the SQL Information Protection policy and you are now in Microsoft Information Protection policy mode, a warning icon appears against that specific column. The warning icon does not indicate a problem; it is shown only for informational purposes.
The following activities are auditable with sensitivity information:
ALTER TABLE ... DROP COLUMN
BULK INSERT
DELETE
INSERT
MERGE
UPDATE
UPDATETEXT
WRITETEXT
DROP TABLE
BACKUP
DBCC CloneDatabase
SELECT INTO
INSERT INTO EXEC
TRUNCATE TABLE
DBCC SHOW_STATISTICS
sys.dm_db_stats_histogram
Use sys.fn_get_audit_file to return information from an audit file stored in an Azure Storage account.
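As a minimal sketch, the query below reads recent audit events that touched classified columns by using the data_sensitivity_information field recorded in Azure SQL audit logs; the storage URL, container, and folder layout are placeholders you must replace with your own audit target.

-- Read the latest audit records that carry sensitivity information.
-- The blob path below is a placeholder; point it at your configured audit log container.
SELECT TOP (100)
       event_time,
       server_principal_name,
       statement,
       data_sensitivity_information   -- sensitivity labels of the classified columns touched
FROM sys.fn_get_audit_file(
         'https://<storage-account>.blob.core.windows.net/sqldbauditlogs/<server>/<database>/',
         DEFAULT,
         DEFAULT)
WHERE data_sensitivity_information IS NOT NULL
ORDER BY event_time DESC;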
Permissions
These built-in roles can read the data classification of a database:
Owner
Reader
Contributor
SQL Security Manager
User Access Administrator
The following actions are required to read the data classification of a database:
Microsoft.Sql/servers/databases/currentSensitivityLabels/*
Microsoft.Sql/servers/databases/recommendedSensitivityLabels/*
Microsoft.Sql/servers/databases/schemas/tables/columns/sensitivityLabels/*
These built-in roles can modify the data classification of a database:
Owner
Contributor
SQL Security Manager
The following action is required to modify the data classification of a database:
Microsoft.Sql/servers/databases/schemas/tables/columns/sensitivityLabels/*
Learn more about role-based permissions in Azure RBAC.
NOTE
The Azure SQL built-in roles in this section apply to a dedicated SQL pool (formerly SQL DW) but are not available for
dedicated SQL pools and other SQL resources within Azure Synapse workspaces. For SQL resources in Azure Synapse
workspaces, use the available actions for data classification to create custom Azure roles as needed for labelling. For more
information on the Microsoft.Synapse/workspaces/sqlPools provider operations, see Microsoft.Synapse.
Manage classifications
You can use T-SQL, a REST API, or PowerShell to manage classifications.
Use T-SQL
You can use T-SQL to add or remove column classifications, and to retrieve all classifications for the entire
database.
NOTE
When you use T-SQL to manage labels, there's no validation that labels that you add to a column exist in the
organization's information-protection policy (the set of labels that appear in the portal recommendations). So, it's up to
you to validate this.
For information about using T-SQL for classifications, see the following references:
To add or update the classification of one or more columns: ADD SENSITIVITY CLASSIFICATION
To remove the classification from one or more columns: DROP SENSITIVITY CLASSIFICATION
To view all classifications on the database: sys.sensitivity_classifications
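For illustration, here is a minimal sketch that classifies a column, views the result, and then removes the classification; the table, column, and label names (dbo.Customers.Email, 'Confidential', 'Contact Info') are hypothetical.

-- Add or update a classification on a column.
ADD SENSITIVITY CLASSIFICATION TO dbo.Customers.Email
WITH (LABEL = 'Confidential', INFORMATION_TYPE = 'Contact Info', RANK = MEDIUM);

-- View all classifications defined in the database.
SELECT * FROM sys.sensitivity_classifications;

-- Remove the classification from the column.
DROP SENSITIVITY CLASSIFICATION FROM dbo.Customers.Email;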
Use PowerShell cmdlets
Manage classifications and recommendations for Azure SQL Database and Azure SQL Managed Instance using
PowerShell.
PowerShell cmdlets for Azure SQL Database
Get-AzSqlDatabaseSensitivityClassification
Set-AzSqlDatabaseSensitivityClassification
Remove-AzSqlDatabaseSensitivityClassification
Get-AzSqlDatabaseSensitivityRecommendation
Enable-AzSqlDatabaseSensitivityRecommendation
Disable-AzSqlDatabaseSensitivityRecommendation
PowerShell cmdlets for Azure SQL Managed Instance
Get-AzSqlInstanceDatabaseSensitivityClassification
Set-AzSqlInstanceDatabaseSensitivityClassification
Remove-AzSqlInstanceDatabaseSensitivityClassification
Get-AzSqlInstanceDatabaseSensitivityRecommendation
Enable-AzSqlInstanceDatabaseSensitivityRecommendation
Disable-AzSqlInstanceDatabaseSensitivityRecommendation
Use the REST API
You can use the REST API to programmatically manage classifications and recommendations. The published
REST API supports the following operations:
Create Or Update: Creates or updates the sensitivity label of the specified column.
Delete: Deletes the sensitivity label of the specified column.
Disable Recommendation: Disables sensitivity recommendations on the specified column.
Enable Recommendation: Enables sensitivity recommendations on the specified column. (Recommendations
are enabled by default on all columns.)
Get: Gets the sensitivity label of the specified column.
List Current By Database: Gets the current sensitivity labels of the specified database.
List Recommended By Database: Gets the recommended sensitivity labels of the specified database.
Next steps
Consider configuring Azure SQL Auditing for monitoring and auditing access to your classified sensitive data.
For a presentation that includes Data Discovery & Classification, see Discovering, classifying, labeling &
protecting SQL data | Data Exposed.
To classify your Azure SQL Databases and Azure Synapse Analytics with Microsoft Purview labels using T-
SQL commands, see Classify your Azure SQL data using Microsoft Purview labels.
Dynamic data masking
Applies to: Azure SQL Database Azure SQL Managed Instance Azure Synapse Analytics
Azure SQL Database, Azure SQL Managed Instance, and Azure Synapse Analytics support dynamic data
masking. Dynamic data masking limits sensitive data exposure by masking it to non-privileged users.
Dynamic data masking helps prevent unauthorized access to sensitive data by enabling customers to designate
how much of the sensitive data to reveal with minimal impact on the application layer. It’s a policy-based
security feature that hides the sensitive data in the result set of a query over designated database fields, while
the data in the database is not changed.
For example, a service representative at a call center might identify a caller by confirming several characters of
their email address, but the complete email address shouldn't be revealed to the service representative. A
masking rule can be defined that masks email addresses in the result set of any query. As another example,
an appropriate data mask can be defined to protect personal data, so that a developer can query production
environments for troubleshooting purposes without violating compliance regulations.
MASKING FUNCTION | MASKING LOGIC
Credit card | Masking method that exposes the last four digits of the designated fields and adds a constant string as a prefix in the form of a credit card: XXXX-XXXX-XXXX-1234
Custom text | Masking method that exposes the first and last characters and adds a custom padding string in the middle. If the original string is shorter than the exposed prefix and suffix, only the padding string is used: prefix[padding]suffix
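As a hedged sketch of how masks are applied with T-SQL, the following statements add a built-in email mask and a custom text (partial) mask to hypothetical columns, and grant the UNMASK permission to a user who must see the real values; the table, column, and user names are assumptions for illustration.

-- Apply the built-in email mask to an existing column.
ALTER TABLE Data.Membership
    ALTER COLUMN Email ADD MASKED WITH (FUNCTION = 'email()');

-- Apply a custom text mask: expose 1 leading and 1 trailing character, pad with "xxx".
ALTER TABLE Data.Membership
    ALTER COLUMN DiscountCode ADD MASKED WITH (FUNCTION = 'partial(1,"xxx",1)');

-- Allow a specific database user to see unmasked data.
GRANT UNMASK TO ServiceManager;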
Set up dynamic data masking for your database using the REST API
You can use the REST API to programmatically manage data masking policy and rules. The published REST API
supports the following operations:
Data masking policies
Create Or Update: Creates or updates a database data masking policy.
Get: Gets a database data masking policy.
Data masking rules
Create Or Update: Creates or updates a database data masking rule.
List By Database: Gets a list of database data masking rules.
Permissions
These built-in roles can configure dynamic data masking:
SQL Security Manager
SQL DB Contributor
SQL Server Contributor
The following actions are required to use dynamic data masking:
Read/Write: Microsoft.Sql/servers/databases/dataMaskingPolicies/*
Read: Microsoft.Sql/servers/databases/dataMaskingPolicies/read
Write: Microsoft.Sql/servers/databases/dataMaskingPolicies/write
To learn more about permissions when using dynamic data masking with T-SQL commands, see Permissions
-- Query as each role's user to see how the masking rules affect the returned values.
EXECUTE AS USER = 'ServiceAttendant';
SELECT MemberID, FirstName, LastName, Phone, Email, BirthDay FROM Data.Membership;
SELECT MemberID, Feedback, Rating FROM Service.Feedback;
REVERT;

EXECUTE AS USER = 'ServiceLead';
SELECT MemberID, FirstName, LastName, Phone, Email, BirthDay FROM Data.Membership;
SELECT MemberID, Feedback, Rating FROM Service.Feedback;
REVERT;

EXECUTE AS USER = 'ServiceManager';
SELECT MemberID, FirstName, LastName, Phone, Email FROM Data.Membership;
SELECT MemberID, Feedback, Rating FROM Service.Feedback;
REVERT;

EXECUTE AS USER = 'ServiceHead';
SELECT MemberID, FirstName, LastName, Phone, Email, BirthDay FROM Data.Membership;
SELECT MemberID, Feedback, Rating FROM Service.Feedback;
REVERT;
See also
Dynamic Data Masking for SQL Server.
Data Exposed episode about Granular Permissions for Azure SQL Dynamic Data Masking on Channel 9.
SQL vulnerability assessment helps you identify
database vulnerabilities
Applies to: Azure SQL Database Azure SQL Managed Instance Azure Synapse Analytics
SQL vulnerability assessment is an easy-to-configure service that can discover, track, and help you remediate
potential database vulnerabilities. Use it to proactively improve your database security.
Vulnerability assessment is part of the Microsoft Defender for SQL offering, which is a unified package for
advanced SQL security capabilities. Vulnerability assessment can be accessed and managed via the central
Microsoft Defender for SQL portal.
NOTE
Vulnerability assessment is supported for Azure SQL Database, Azure SQL Managed Instance, and Azure Synapse
Analytics. Databases in Azure SQL Database, Azure SQL Managed Instance, and Azure Synapse Analytics are referred to
collectively in the remainder of this article as databases, and server refers to the server that hosts databases for
Azure SQL Database and Azure Synapse.
NOTE
SQL vulnerability assessment requires the Microsoft Defender for SQL plan to run scans. For more
information about how to enable Microsoft Defender for SQL, see Microsoft Defender for SQL.
4. In the Server settings page, define the Microsoft Defender for SQL settings:
a. Configure a storage account where your scan results for all databases on the server or managed
instance will be stored. For information about storage accounts, see About Azure storage accounts.
TIP
For more information about storing vulnerability assessment scans behind firewalls and VNets, see Store
vulnerability assessment scan results in a storage account accessible behind firewalls and VNets.
NOTE
Each database is randomly assigned a scan time on a set day of the week. Email notifications are
scheduled randomly per server on a set day of the week. The email notification report includes data from
all recurring database scans that were executed during the preceding week (does not include on-demand
scans).
b. To run an on-demand scan to scan your database for vulnerabilities, select Scan from the toolbar:
NOTE
The scan is lightweight and safe. It takes a few seconds to run and is entirely read-only. It doesn't make any changes to
your database.
Remediate vulnerabilities
When a vulnerability scan completes, the report is displayed in the Azure portal. The report presents:
An overview of your security state
The number of issues that were found
A summary by severity of the risks
A list of the findings for further investigations
3. As you review your assessment results, you can mark specific results as being an acceptable baseline in
your environment. A baseline is essentially a customization of how the results are reported. In
subsequent scans, results that match the baseline are considered as passes. After you've established your
baseline security state, vulnerability assessment only reports on deviations from the baseline. In this way,
you can focus your attention on the relevant issues.
4. If you change the baselines, use the Scan button to run an on-demand scan and view the customized
report. Any findings you've added to the baseline will now appear in Passed with an indication that
they've passed because of the baseline changes.
Your vulnerability assessment scans can now be used to ensure that your database maintains a high level of
security, and that your organizational policies are met.
Advanced capabilities
View scan history
Select Scan History in the vulnerability assessment pane to view a history of all scans previously run on this
database. Select a particular scan in the list to view the detailed results of that scan.
Disable specific findings from Microsoft Defender for Cloud (preview)
If you have an organizational need to ignore a finding, rather than remediate it, you can optionally disable it.
Disabled findings don't impact your secure score or generate unwanted noise.
When a finding matches the criteria you've defined in your disable rules, it won't appear in the list of findings.
Typical scenarios may include:
Disable findings with severity below medium
Disable findings that are non-patchable
Disable findings from benchmarks that aren't of interest for a defined scope
IMPORTANT
1. To disable specific findings, you need permissions to edit a policy in Azure Policy. Learn more in Azure RBAC
permissions in Azure Policy.
2. Disabled findings will still be included in the weekly SQL Vulnerability Assessment email report.
To create a rule:
1. From the recommendations detail page for Vulnerability assessment findings on your SQL servers on machines should be remediated, select Disable rule.
2. Select the relevant scope.
3. Define your criteria. You can use any of the following criteria:
Finding ID
Severity
Benchmarks
IMPORTANT
The PowerShell Azure Resource Manager module is still supported, but all future development is for the Az.Sql module.
For these cmdlets, see AzureRM.Sql. The arguments for the commands in the Az module and in the AzureRm modules are
substantially identical.
You can use Azure PowerShell cmdlets to programmatically manage your vulnerability assessments. The
supported cmdlets are:
CMDLET NAME AS A LINK | DESCRIPTION
For a script example, see Azure SQL vulnerability assessment PowerShell support.
Using Resource Manager templates
To configure vulnerability assessment baselines by using Azure Resource Manager templates, use the
Microsoft.Sql/servers/databases/vulnerabilityAssessments/rules/baselines type.
Ensure that you have enabled vulnerabilityAssessments before you add baselines.
Here's an example for defining Baseline Rule VA2065 to master database and VA1143 to user database as
resources in a Resource Manager template:
"resources": [
{
"type": "Microsoft.Sql/servers/databases/vulnerabilityAssessments/rules/baselines",
"apiVersion": "2018-06-01-preview",
"name": "[concat(parameters('server_name'),'/', parameters('database_name') ,
'/default/VA2065/master')]",
"properties": {
"baselineResults": [
{
"result": [
"FirewallRuleName3",
"StartIpAddress",
"EndIpAddress"
]
},
{
"result": [
"FirewallRuleName4",
"62.92.15.68",
"62.92.15.68"
]
}
]
}
},
{
"type": "Microsoft.Sql/servers/databases/vulnerabilityAssessments/rules/baselines",
"apiVersion": "2018-06-01-preview",
"name": "[concat(parameters('server_name'),'/', parameters('database_name'),
'/default/VA2130/Default')]",
"dependsOn": [
"[resourceId('Microsoft.Sql/servers/vulnerabilityAssessments', parameters('server_name'),
'Default')]"
],
"properties": {
"baselineResults": [
{
"result": [
"dbo"
]
}
]
}
}
]
For master database and user database, the resource names are defined differently:
Master database - "name": "[concat(parameters('server_name'),'/', parameters('database_name') ,
'/default/VA2065/master ')]",
User database - "name": "[concat(parameters('server_name'),'/', parameters('database_name') ,
'/default/VA2065/default ')]",
To handle Boolean types as true/false, set the baseline result with binary input like "1"/"0".
{
"type": "Microsoft.Sql/servers/databases/vulnerabilityAssessments/rules/baselines",
"apiVersion": "2018-06-01-preview",
"name": "[concat(parameters('server_name'),'/', parameters('database_name'),
'/default/VA1143/Default')]",
"dependsOn": [
"[resourceId('Microsoft.Sql/servers/vulnerabilityAssessments', parameters('server_name'),
'Default')]"
],
"properties": {
"baselineResults": [
{
"result": [
"1"
]
}
]
}
}
Permissions
One of the following permissions is required to see vulnerability assessment results in the Microsoft Defender
for Cloud recommendation SQL databases should have vulnerability findings resolved:
Security Admin
Security Reader
The following permissions are required to change vulnerability assessment settings:
SQL Security Manager
Storage Blob Data Reader
Owner role on the storage account
The following permissions are required to open links in email notifications about scan results or to view scan
results at the resource-level:
SQL Security Manager
Storage Blob Data Reader
Data residency
SQL Vulnerability Assessment queries the SQL server using publicly available queries under Defender for Cloud
recommendations for SQL Vulnerability Assessment, and stores the query results. The data is stored in the
configured user-owned storage account.
SQL Vulnerability Assessment allows you to specify the region where your data will be stored by choosing the
location of the storage account. The user is responsible for the security and data resiliency of the storage
account.
Next steps
Learn more about Microsoft Defender for SQL.
Learn more about data discovery and classification.
Learn more about Storing vulnerability assessment scan results in a storage account accessible behind
firewalls and VNets.
SQL Vulnerability Assessment rules reference guide
Applies to: Azure SQL Database Azure SQL Managed Instance Azure Synapse Analytics SQL
Server (all supported versions)
This article lists the set of built-in rules that are used to flag security vulnerabilities and highlight deviations from
best practices, such as misconfigurations and excessive permissions. The rules are based on Microsoft's best
practices and focus on the security issues that present the biggest risks to your database and its valuable data.
They cover both database-level and server-level security issues, like server firewall settings and
server-level permissions. These rules also represent many of the requirements from various regulatory bodies
to meet their compliance standards.
The rules shown in your database scans depend on the SQL version and platform that was scanned.
To learn about how to implement Vulnerability Assessment in Azure, see Implement Vulnerability Assessment.
For a list of changes to these rules, see SQL Vulnerability Assessment rules changelog.
Rule categories
SQL Vulnerability Assessment rules have five categories, which are in the following sections:
Authentication and Authorization
Auditing and Logging
Data Protection
Installation Updates and Patches
Surface Area Reduction
(1) SQL Server 2012+ refers to all versions of SQL Server 2012 and above.
(2) SQL Server 2017+ refers to all versions of SQL Server 2017 and above.
(3) SQL Server 2016+ refers to all versions of SQL Server 2016 and above.
RULE ID | RULE TITLE | RULE SEVERITY | RULE DESCRIPTION | PLATFORM
VA1020 | Database user GUEST should not be a member of any role | High | The guest user permits access to a database for any logins that are not mapped to a specific database user. This rule checks that no database roles are assigned to the Guest user. | SQL Server 2012+, SQL Database
VA1043 | Principal GUEST should not have access to any user database | Medium | The guest user permits access to a database for any logins that are not mapped to a specific database user. This rule checks that the guest user cannot connect to any database. | SQL Server 2012+, SQL Managed Instance
VA1054 | Excessive permissions should not be granted to PUBLIC role on objects or columns | Low | Every SQL Server login belongs to the public server role. When a server principal has not been granted or denied specific permissions on a securable object, the user inherits the permissions granted to public on that object. This rule displays a list of all securable objects or columns that are accessible to all users through the PUBLIC role. | SQL Server 2012+, SQL Database
VA1067 | Database Mail XPs should be disabled when it is not in use | Medium | This rule checks that Database Mail is disabled when no database mail profile is configured. Database Mail can be used for sending e-mail messages from the SQL Server Database Engine and is disabled by default. If you are not using this feature, it is recommended to disable it to reduce the surface area. | SQL Server 2012+
VA1070 | Database users shouldn't share the same name as a server login | Low | Database users may share the same name as a server login. This rule validates that there are no such users. | SQL Server 2012+, SQL Managed Instance
VA1072 | Authentication mode should be Windows Authentication | Medium | There are two possible authentication modes: Windows Authentication mode and mixed mode. Mixed mode means that SQL Server enables both Windows authentication and SQL Server authentication. This rule checks that the authentication mode is set to Windows Authentication. | SQL Server 2012+
VA1095 | Excessive permissions should not be granted to PUBLIC role | Medium | Every SQL Server login belongs to the public server role. When a server principal has not been granted or denied specific permissions on a securable object, the user inherits the permissions granted to public on that object. This displays a list of all permissions that are granted to the PUBLIC role. | SQL Server 2012+, SQL Managed Instance, SQL Database
VA1099 | GUEST user should not be granted permissions on database securables | Low | Each database includes a user called GUEST. Permissions granted to GUEST are inherited by users who have access to the database but who do not have a user account in the database. This rule checks that all permissions have been revoked from the GUEST user. | SQL Server 2012+, SQL Managed Instance, SQL Database
VA1267 | Contained users should use Windows Authentication | Medium | Contained users are users that exist within the database and do not require a login mapping. This rule checks that contained users use Windows Authentication. | SQL Server 2012+, SQL Managed Instance
VA1280 | Server Permissions granted to public should be minimized | Medium | Every SQL Server login belongs to the public server role. When a server principal has not been granted or denied specific permissions on a securable object, the user inherits the permissions granted to public on that object. This rule checks that server permissions granted to public are minimized. | SQL Server 2012+, SQL Managed Instance
VA1282 | Orphan roles should be removed | Low | Orphan roles are user-defined roles that have no members. Eliminate orphaned roles as they are not needed on the system. This rule checks whether there are any orphan roles. | SQL Server 2012+, SQL Managed Instance, SQL Database, Azure Synapse
VA2020 | Minimal set of principals should be granted ALTER or ALTER ANY USER database-scoped permissions | High | Every SQL Server securable has permissions associated with it that can be granted to principals. Permissions can be scoped at the server level (assigned to logins and server roles) or at the database level (assigned to database users and database roles). These rules check that only a minimal set of principals are granted ALTER or ALTER ANY USER database-scoped permissions. | SQL Server 2012+, SQL Managed Instance, SQL Database, Azure Synapse
VA2033 | Minimal set of principals should be granted database-scoped EXECUTE permission on objects or columns | Low | This rule checks which principals are granted EXECUTE permission on objects or columns to ensure this permission is granted to a minimal set of principals. Every SQL Server securable has permissions associated with it that can be granted to principals. Permissions can be scoped at the server level (assigned to logins and server roles) or at the database level (assigned to database users, database roles, or application roles). The EXECUTE permission applies to both stored procedures and scalar functions, which can be used in computed columns. | SQL Server 2012+, SQL Managed Instance, SQL Database, Azure Synapse
VA2108 | Minimal set of principals should be members of fixed high impact database roles | High | SQL Server provides roles to help manage the permissions. Roles are security principals that group other principals. Database-level roles are database-wide in their permission scope. This rule checks that a minimal set of principals are members of the fixed database roles. | SQL Server 2012+, SQL Managed Instance, SQL Database, Azure Synapse
VA2109 | Minimal set of principals should be members of fixed low impact database roles | Low | SQL Server provides roles to help manage the permissions. Roles are security principals that group other principals. Database-level roles are database-wide in their permission scope. This rule checks that a minimal set of principals are members of the fixed database roles. | SQL Server 2012+, SQL Managed Instance, SQL Database, Azure Synapse
VA2114 | Minimal set of principals should be members of high impact fixed server roles | High | SQL Server provides roles to help manage permissions. Roles are security principals that group other principals. Server-level roles are server-wide in their permission scope. This rule checks that a minimal set of principals are members of the fixed server roles. | SQL Server 2012+, SQL Managed Instance
VA2129 | Changes to signed modules should be authorized | High | You can sign a stored procedure, function, or trigger with a certificate or an asymmetric key. This is designed for scenarios when permissions cannot be inherited through ownership chaining or when the ownership chain is broken, such as dynamic SQL. This rule checks for changes made to signed modules, which could be an indication of malicious use. | SQL Server 2012+, SQL Database, SQL Managed Instance
VA2130 | Track all users with access to the database | Low | This check tracks all users with access to a database. Make sure that these users are authorized according to their current role in the organization. | SQL Database, Azure Synapse
VA2201 | SQL logins with commonly used names should be disabled | High | This rule checks the accounts with database owner permission for commonly used names. Assigning commonly used names to accounts with database owner permission increases the likelihood of successful brute force attacks. | SQL Server 2012+
VA1045 | Default trace should be enabled | Medium | Default trace provides troubleshooting assistance to database administrators by ensuring that they have the log data necessary to diagnose problems the first time they occur. This rule checks that the default trace is enabled. | SQL Server 2012+, SQL Managed Instance
VA1091 | Auditing of both successful and failed login attempts (default trace) should be enabled when 'Login auditing' is set up to track logins | Low | SQL Server Login auditing configuration enables administrators to track the users logging into SQL Server instances. If the user chooses to count on 'Login auditing' to track users logging into SQL Server instances, then it is important to enable it for both successful and failed login attempts. | SQL Server 2012+
VA1093 | Maximum number of error logs should be 12 or more | Low | Each SQL Server Error log will have all the information related to failures / errors that have occurred since SQL Server was last restarted or since the last time you have recycled the error logs. This rule checks that the maximum number of error logs is 12 or more. | SQL Server 2012+
VA1258 | Database owners are as expected | High | Database owners can perform all configuration and maintenance activities on the database and can also drop databases in SQL Server. Tracking database owners is important to avoid having excessive permission for some principals. Create a baseline that defines the expected database owners for the database. This rule checks whether the database owners are as defined in the baseline. | SQL Server 2016+ (3), SQL Database, Azure Synapse
VA1264 | Auditing of both successful and failed login attempts should be enabled | Low | SQL Server auditing configuration enables administrators to track the users logging into SQL Server instances that they're responsible for. This rule checks that auditing is enabled for both successful and failed login attempts. | SQL Server 2012+, SQL Managed Instance
VA1265 | Auditing of both successful and failed login attempts for contained DB authentication should be enabled | Medium | SQL Server auditing configuration enables administrators to track users logging in to SQL Server instances that they're responsible for. This rule checks that auditing is enabled for both successful and failed login attempts for contained DB authentication. | SQL Server 2012+, SQL Managed Instance
VA1281 | All memberships for user-defined roles should be intended | Medium | User-defined roles are security principals defined by the user to group principals to easily manage permissions. Monitoring these roles is important to avoid having excessive permissions. Create a baseline that defines expected membership for each user-defined role. This rule checks whether all memberships for user-defined roles are as defined in the baseline. | SQL Server 2012+, SQL Managed Instance, SQL Database, Azure Synapse
Data Protection
RULE ID | RULE TITLE | RULE SEVERITY | RULE DESCRIPTION | PLATFORM
VA1098 | Any Existing SSB or Mirroring endpoint should require AES connection | High | Service Broker and Mirroring endpoints support different encryption algorithms, including no-encryption. This rule checks that any existing endpoint requires AES encryption. | SQL Server 2012+
VA1221 | Database Encryption Symmetric Keys should use AES algorithm | High | SQL Server uses encryption keys to help secure data, credentials, and connection information that is stored in a server database. SQL Server has two kinds of keys: symmetric and asymmetric. This rule checks that Database Encryption Symmetric Keys use AES algorithm. | SQL Server 2012+, SQL Managed Instance, SQL Database, Azure Synapse
VA1223 | Certificate keys should use at least 2048 bits | High | Certificate keys are used in RSA and other encryption algorithms to protect data. These keys need to be of enough length to secure the user's data. This rule checks that the key's length is at least 2048 bits for all certificates. | SQL Server 2012+, SQL Managed Instance, SQL Database, Azure Synapse
VA1279 | Force encryption should be enabled for TDS | High | When the Force Encryption option for the Database Engine is enabled, all communications between client and server are encrypted regardless of whether the 'Encrypt connection' option (such as from SSMS) is checked or not. This rule checks that the Force Encryption option is enabled. | SQL Server 2012+
VA1023 | CLR should be disabled | High | The CLR allows managed code to be hosted by and run in the Microsoft SQL Server environment. This rule checks that CLR is disabled. | SQL Server 2012+
VA1026 | CLR should be disabled | Medium | The CLR allows managed code to be hosted by and run in the Microsoft SQL Server environment. CLR strict security treats SAFE and EXTERNAL_ACCESS assemblies as if they were marked UNSAFE and requires all assemblies be signed by a certificate or asymmetric key with a corresponding login that has been granted UNSAFE ASSEMBLY permission in the master database. This rule checks that CLR is disabled. | SQL Server 2017+ (2), SQL Managed Instance
VA1044 | Remote Admin Connections should be disabled unless specifically required | Medium | This rule checks that remote dedicated admin connections are disabled if they are not being used for clustering, to reduce attack surface area. SQL Server provides a dedicated administrator connection (DAC). The DAC lets an administrator access a running server to execute diagnostic functions or Transact-SQL statements, or to troubleshoot problems on the server, and it becomes an attractive target to attack when it is enabled remotely. | SQL Server 2012+, SQL Managed Instance
VA1071 | 'Scan for startup stored procedures' option should be disabled | Medium | When 'Scan for startup procs' is enabled, SQL Server scans for and runs all automatically run stored procedures defined on the server. This rule checks that this option is disabled. | SQL Server 2012+
VA1092 | SQL Server instance shouldn't be advertised by the SQL Server Browser service | Low | SQL Server uses the SQL Server Browser service to enumerate instances of the Database Engine installed on the computer. This enables client applications to browse for a server and helps clients distinguish between multiple instances of the Database Engine on the same computer. This rule checks that the SQL instance is hidden. | SQL Server 2012+
VA1102 | The Trustworthy bit should be disabled on all databases except MSDB | High | The TRUSTWORTHY database property is used to indicate whether the instance of SQL Server trusts the database and the contents within it. If this option is enabled, database modules (for example, user-defined functions or stored procedures) that use an impersonation context can access resources outside the database. This rule verifies that the TRUSTWORTHY bit is disabled on all databases except MSDB. | SQL Server 2012+, SQL Managed Instance
VA1143 | 'dbo' user should not be used for normal service operation | Medium | The 'dbo' or database owner is a user account that has implied permissions to perform all activities in the database. Members of the sysadmin fixed server role are automatically mapped to dbo. This rule checks that dbo is not the only account allowed to access this database. Note that on a newly created clean database this rule will fail until additional roles are created. | SQL Server 2012+, SQL Managed Instance, SQL Database, Azure Synapse
VA1144 | Model database should only be accessible by 'dbo' | Medium | The Model database is used as the template for all databases created on the instance of SQL Server. Modifications made to the model database, such as database size, recovery model, and other database options, are applied to any databases created afterward. This rule checks that dbo is the only account allowed to access the model database. | SQL Server 2012+, SQL Managed Instance
VA1244 | Orphaned users should be removed from SQL server databases | Medium | A database user that exists on a database but has no corresponding login in the master database or as an external resource (for example, a Windows user) is referred to as an orphaned user, and it should either be removed or remapped to a valid login. This rule checks that there are no orphaned users. | SQL Server 2012+, SQL Managed Instance
VA1245 | The dbo information should be consistent between the target DB and master | High | There is redundant information about the dbo identity for any database: metadata stored in the database itself and metadata stored in master DB. This rule checks that this information is consistent between the target DB and master. | SQL Server 2012+, SQL Managed Instance
VA1247 | There should be no SPs marked as auto-start | High | When SQL Server has been configured to 'scan for startup procs', the server will scan master DB for stored procedures marked as auto-start. This rule checks that there are no SPs marked as auto-start. | SQL Server 2012+
VA1256 | User CLR assemblies should not be defined in the database | High | CLR assemblies can be used to execute arbitrary code on the SQL Server process. This rule checks that there are no user-defined CLR assemblies in the database. | SQL Server 2012+, SQL Managed Instance
VA1278 | Create a baseline of External Key Management Providers | Medium | The SQL Server Extensible Key Management (EKM) enables third-party EKM / Hardware Security Modules (HSM) vendors to register their modules in SQL Server. When registered, SQL Server users can use the encryption keys stored on EKM modules. This rule displays a list of EKM providers being used in the system. | SQL Server 2012+, SQL Managed Instance
VA2111 | Sample databases should be removed | Low | Microsoft SQL Server comes shipped with several sample databases. This rule checks whether the sample databases have been removed. | SQL Server 2012+, SQL Managed Instance
VA2120 | Features that may affect security should be disabled | High | SQL Server is capable of providing a wide range of features and services. Some of the features and services provided by default may not be necessary, and enabling them could adversely affect the security of the system. This rule checks that these features are disabled. | SQL Server 2012+, SQL Managed Instance
VA2121 | 'OLE Automation Procedures' feature should be disabled | High | SQL Server is capable of providing a wide range of features and services. Some of the features and services, provided by default, may not be necessary, and enabling them could adversely affect the security of the system. The OLE Automation Procedures option controls whether OLE Automation objects can be instantiated within Transact-SQL batches. These are extended stored procedures that allow SQL Server users to execute functions external to SQL Server. Regardless of its benefits, it can also be used for exploits, and is known as a popular mechanism to plant files on the target machines. It is advised to use PowerShell as a replacement for this tool. This rule checks that the 'OLE Automation Procedures' feature is disabled. | SQL Server 2012+, SQL Managed Instance
VA2122 | 'User Options' feature should be disabled | Medium | SQL Server is capable of providing a wide range of features and services. Some of the features and services provided by default may not be necessary, and enabling them could adversely affect the security of the system. The user options specifies global defaults for all users. A list of default query processing options is established for the duration of a user's work session. The user options allows you to change the default values of the SET options (if the server's default settings are not appropriate). This rule checks that the 'user options' feature is disabled. | SQL Server 2012+, SQL Managed Instance
Removed rules
RULE ID | RULE TITLE
VA1090 | Ensure all Government Off The Shelf (GOTS) and Custom Stored Procedures are encrypted
Next steps
Vulnerability Assessment
SQL Vulnerability Assessment rules changelog
SQL Vulnerability assessment rules changelog
This article details the changes made to the SQL Vulnerability Assessment service rules. Rules that are updated,
removed, or added will be outlined below. For an updated list of SQL Vulnerability assessment rules, see SQL
Vulnerability Assessment rules.
June 2022
RULE ID | RULE TITLE | CHANGE DETAILS
January 2022
RULE ID | RULE TITLE | CHANGE DETAILS
June 2021
RULE ID | RULE TITLE | CHANGE DETAILS
December 2020
RULE ID | RULE TITLE | CHANGE DETAILS
VA1067 | Database Mail XPs should be disabled when it is not in use | Title and description change
VA1235 | Replication XPs should be disabled | Title, description, and logic change
VA1263 | List all the active audits in the system | Removed rule
VA2126 | Features that may affect security should be disabled | Title, description, and logic change
VA2130 | Track all users with access to the database | Description and logic change
Next steps
SQL Vulnerability Assessment rules
SQL Vulnerability Assessment overview
Store Vulnerability Assessment scan results in a storage account accessible behind firewalls and VNets
Store Vulnerability Assessment scan results in a
storage account accessible behind firewalls and
VNets
Applies to: Azure SQL Database Azure SQL Managed Instance Azure Synapse Analytics
If you are limiting access to your storage account in Azure to certain VNets or services, you'll need to enable the appropriate configuration so that Vulnerability Assessment (VA) scanning for SQL databases or managed instances has access to that storage account.
Prerequisites
The SQL Vulnerability Assessment service needs permission to the storage account to save baseline and scan
results. There are three methods:
Use Storage Account key: Azure creates the SAS key and saves it (though we don't save the account key).
Use Storage SAS key: The SAS key must have Write | List | Read | Delete permissions.
Use SQL Server managed identity: The SQL Server must have a managed identity. The storage account must have a role assignment for the SQL managed identity as Storage Blob Data Contributor. When you apply the settings, the VA fields storageContainerSasKey and storageAccountAccessKey must be empty. When storage is behind a firewall or virtual network, the SQL managed identity is required.
When you use the Azure portal to save SQL VA settings, Azure checks if you have permission to assign a new
role assignment for the managed identity as Storage Blob Data Contributor on the storage. If permissions are
assigned, Azure uses SQL Server managed identity, otherwise Azure uses the key method.
NOTE
The vulnerability assessment service can't access storage accounts protected with firewalls or VNets if they require storage
access keys.
Go to your Resource group that contains the storage account and access the Storage account pane. Under Settings, select Firewall and virtual networks.
Ensure that Allow trusted Microsoft services access to this storage account is checked.
To find out which storage account is being used, go to your SQL server pane in the Azure portal, under Security, and then select Defender for Cloud.
NOTE
You can set up email alerts to notify users in your organization to view or access the scan reports. To do this, ensure that
you have SQL Security Manager and Storage Blob Data Reader permissions.
3. In your Virtual network pane, under Settings, select Service endpoints. Click Add in the new pane, and add the Microsoft.Storage service as a new service endpoint. Make sure the ManagedInstance subnet is selected. Click Add.
4. Go to your Storage account that you've selected to store your VA scans. Under Settings, select Firewall and virtual networks. Click on Add existing virtual network. Select your managed instance virtual network and subnet, and click Add.
You should now be able to store your VA scans for Managed Instances in your storage account.
Next steps
Vulnerability Assessment
Create an Azure Storage account
Microsoft Defender for SQL
Authorize database access to SQL Database, SQL
Managed Instance, and Azure Synapse Analytics
Applies to: Azure SQL Database Azure SQL Managed Instance Azure Synapse Analytics
In this article, you learn about:
Options for configuring Azure SQL Database, Azure SQL Managed Instance, and Azure Synapse Analytics to
enable users to perform administrative tasks and to access the data stored in these databases.
The access and authorization configuration after initially creating a new server.
How to add logins and user accounts in the master database and then grant these accounts administrative permissions.
How to add user accounts in user databases, either associated with logins or as contained user accounts.
How to configure user accounts with permissions in user databases by using database roles and explicit permissions.
IMPORTANT
Databases in Azure SQL Database, Azure SQL Managed Instance, and Azure Synapse are referred to collectively in the
remainder of this article as databases, and server refers to the server that manages databases for Azure SQL
Database and Azure Synapse.
NOTE
dbmanager and loginmanager roles do not pertain to SQL Managed Instance deployments.
Members of these special master database roles for Azure SQL Database have authority to create and
manage databases or to create and manage logins. In databases created by a user that is a member of the
dbmanager role, the member is mapped to the db_owner fixed database role and can log into and
manage that database using the dbo user account. These roles have no explicit permissions outside of
the master database.
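As a minimal sketch (with a hypothetical login name and placeholder password), the following statements, run in the master database of the server, create a login and a master database user and then add that user to the dbmanager role:

-- Run in the master database of the logical server.
CREATE LOGIN appprovisioner WITH PASSWORD = '<strong-password-here>';
CREATE USER appprovisioner FOR LOGIN appprovisioner;
ALTER ROLE dbmanager ADD MEMBER appprovisioner;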
IMPORTANT
You can't create an additional SQL login with full administrative permissions in SQL Database.
TIP
For a security tutorial that includes creating users in Azure SQL Database, see Tutorial: Secure Azure SQL Database.
Using groups
Efficient access management uses permissions assigned to Active Directory security groups and fixed or custom
roles instead of to individual users.
When using Azure Active Directory authentication, put Azure Active Directory users into an Azure Active
Directory security group. Create a contained database user for the group. Add one or more database users as members of custom or built-in database roles with the specific permissions appropriate to that group of users.
When using SQL authentication, create contained database users in the database. Place one or more
database users into a custom database role with specific permissions appropriate to that group of users.
NOTE
You can also use groups for non-contained database users.
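For example, assuming an Azure AD security group named SalesAnalysts (a hypothetical name), a contained database user for the group can be created and added to a database role like this:

-- Run in the user database while connected with an Azure AD identity.
CREATE USER [SalesAnalysts] FROM EXTERNAL PROVIDER;
ALTER ROLE db_datareader ADD MEMBER [SalesAnalysts];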
You should familiarize yourself with the following features that can be used to limit or elevate permissions:
Impersonation and module-signing can be used to securely elevate permissions temporarily.
Row-Level Security can be used to limit which rows a user can access.
Data Masking can be used to limit exposure of sensitive data.
Stored procedures can be used to limit the actions that can be taken on the database.
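As a minimal sketch of the last two points (the object and role names are hypothetical), a stored procedure created WITH EXECUTE AS OWNER temporarily elevates permissions for the work it performs, and granting only EXECUTE on the procedure limits what callers can do:

-- The procedure runs under the owner's permissions, so callers don't need DELETE rights on the table.
CREATE PROCEDURE dbo.PurgeOldStagingRows
WITH EXECUTE AS OWNER
AS
BEGIN
    DELETE FROM dbo.Staging
    WHERE LoadDate < DATEADD(DAY, -30, SYSUTCDATETIME());
END;
GO

-- Callers only get permission to run the procedure, not to modify the table directly.
GRANT EXECUTE ON dbo.PurgeOldStagingRows TO [ReportingUsers];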
Next steps
For an overview of all Azure SQL Database and SQL Managed Instance security features, see Security overview.
Use Azure Active Directory authentication
Applies to: Azure SQL Database Azure SQL Managed Instance Azure Synapse Analytics
Azure Active Directory (Azure AD) authentication is a mechanism for connecting to Azure SQL Database, Azure
SQL Managed Instance, and Synapse SQL in Azure Synapse Analytics by using identities in Azure AD.
NOTE
This article applies to Azure SQL Database, SQL Managed Instance, and Azure Synapse Analytics.
With Azure AD authentication, you can centrally manage the identities of database users and other Microsoft
services in one central location. Central ID management provides a single place to manage database users and
simplifies permission management. Benefits include the following:
It provides an alternative to SQL Server authentication.
It helps stop the proliferation of user identities across servers.
It allows password rotation in a single place.
Customers can manage database permissions using external (Azure AD) groups.
It can eliminate storing passwords by enabling integrated Windows authentication and other forms of
authentication supported by Azure Active Directory.
Azure AD authentication uses contained database users to authenticate identities at the database level.
Azure AD supports token-based authentication for applications connecting to SQL Database and SQL
Managed Instance.
Azure AD authentication supports:
Azure AD cloud-only identities.
Azure AD hybrid identities that support:
Cloud authentication with two options coupled with seamless single sign-on (SSO): pass-through authentication and password hash authentication.
Federated authentication.
For more information on Azure AD authentication methods and which one to choose, see the
following article:
Choose the right authentication method for your Azure Active Directory hybrid identity
solution
Azure AD supports connections from SQL Server Management Studio that use Active Directory Universal
Authentication, which includes Multi-Factor Authentication. Multi-Factor Authentication includes strong
authentication with a range of easy verification options — phone call, text message, smart cards with PIN,
or mobile app notification. For more information, see SSMS support for Azure AD Multi-Factor
Authentication with Azure SQL Database, SQL Managed Instance, and Azure Synapse
Azure AD supports similar connections from SQL Server Data Tools (SSDT) that use Active Directory
Interactive Authentication. For more information, see Azure Active Directory support in SQL Server Data
Tools (SSDT)
NOTE
Connecting to a SQL Server instance that's running on an Azure virtual machine (VM) is not supported using Azure
Active Directory or Azure Active Directory Domain Services. Use an Active Directory domain account instead.
The configuration steps include the following procedures to configure and use Azure Active Directory
authentication.
1. Create and populate Azure AD.
2. Optional: Associate or change the active directory that is currently associated with your Azure Subscription.
3. Create an Azure Active Directory administrator.
4. Configure your client computers.
5. Create contained database users in your database mapped to Azure AD identities.
6. Connect to your database by using Azure AD identities.
NOTE
To learn how to create and populate Azure AD, and then configure Azure AD with Azure SQL Database, SQL Managed
Instance, and Synapse SQL in Azure Synapse Analytics, see Configure Azure AD with Azure SQL Database.
Trust architecture
Only the cloud portion of Azure AD, SQL Database, SQL Managed Instance, and Azure Synapse is considered
to support Azure AD native user passwords.
To support Windows single sign-on credentials (or user/password for Windows credential), use Azure Active
Directory credentials from a federated or managed domain that is configured for seamless single sign-on for
pass-through and password hash authentication. For more information, see Azure Active Directory Seamless
Single Sign-On.
To support Federated authentication (or user/password for Windows credentials), the communication with
ADFS block is required.
For more information on Azure AD hybrid identities, the setup, and synchronization, see the following articles:
Password hash authentication - Implement password hash synchronization with Azure AD Connect sync
Pass-through authentication - Azure Active Directory Pass-through Authentication
Federated authentication - Deploying Active Directory Federation Services in Azure and Azure AD Connect
and federation
For a sample federated authentication with ADFS infrastructure (or user/password for Windows credentials), see
the diagram below. The arrows indicate communication pathways.
The following diagram indicates the federation, trust, and hosting relationships that allow a client to connect to a
database by submitting a token. The token is authenticated by an Azure AD, and is trusted by the database.
Customer 1 can represent an Azure Active Directory with native users or an Azure AD with federated users.
Customer 2 represents a possible solution including imported users, in this example coming from a federated
Azure Active Directory with ADFS being synchronized with Azure Active Directory. It's important to understand
that access to a database using Azure AD authentication requires that the hosting subscription is associated to
the Azure AD. The same subscription must be used to create the Azure SQL Database, SQL Managed Instance, or
Azure Synapse resources.
Administrator structure
When using Azure AD authentication, there are two Administrator accounts: the original Azure SQL Database
administrator and the Azure AD administrator. The same concepts apply to Azure Synapse. Only the
administrator based on an Azure AD account can create the first Azure AD contained database user in a user
database. The Azure AD administrator login can be an Azure AD user or an Azure AD group. When the
administrator is a group account, it can be used by any group member, enabling multiple Azure AD
administrators for the server. Using a group account as an administrator enhances manageability by allowing you
to centrally add and remove group members in Azure AD without changing the users or permissions in SQL
Database or Azure Synapse. Only one Azure AD administrator (a user or group) can be configured at any time.
Permissions
To create new users, you must have the ALTER ANY USER permission in the database. The ALTER ANY USER
permission can be granted to any database user. The ALTER ANY USER permission is also held by the server
administrator accounts, and database users with the CONTROL ON DATABASE or ALTER ON DATABASE permission for
that database, and by members of the db_owner database role.
To create a contained database user in Azure SQL Database, SQL Managed Instance, or Azure Synapse, you must
connect to the database or instance using an Azure AD identity. To create the first contained database user, you
must connect to the database by using an Azure AD administrator (who is the owner of the database). This is
demonstrated in Configure and manage Azure Active Directory authentication with SQL Database or Azure
Synapse. Azure AD authentication is only possible if the Azure AD admin was created for Azure SQL Database,
SQL Managed Instance, or Azure Synapse. If the Azure Active Directory admin was removed from the server,
existing Azure Active Directory users created previously inside SQL Server can no longer connect to the
database using their Azure Active Directory credentials.
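For illustration (the principal names are hypothetical), once connected as the Azure AD administrator you can create the first contained database user from an Azure AD identity and, if needed, allow that user to manage further users:

-- Run in the user database while connected as the Azure AD admin.
CREATE USER [dba-team@contoso.com] FROM EXTERNAL PROVIDER;
GRANT ALTER ANY USER TO [dba-team@contoso.com];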
Grant the db_owner role directly to the individual Azure AD user to mitigate the CREATE DATABASE
SCOPED CREDENTIAL issue.
These system functions return NULL values when executed under Azure AD principals:
SUSER_ID()
SUSER_NAME(<admin ID>)
SUSER_SNAME(<admin SID>)
SUSER_ID(<admin name>)
SUSER_SID(<admin name>)
Next steps
To learn how to create and populate an Azure AD instance and then configure it with Azure SQL Database,
SQL Managed Instance, or Azure Synapse, see Configure and manage Azure Active Directory authentication
with SQL Database, SQL Managed Instance, or Azure Synapse.
For a tutorial of using Azure AD server principals (logins) with SQL Managed Instance, see Azure AD server
principals (logins) with SQL Managed Instance
For an overview of logins, users, database roles, and permissions in SQL Database, see Logins, users,
database roles, and permissions.
For more information about database principals, see Principals.
For more information about database roles, see Database roles.
For syntax on creating Azure AD server principals (logins) for SQL Managed Instance, see CREATE LOGIN.
For more information about firewall rules in SQL Database, see SQL Database firewall rules.
Configure and manage Azure AD authentication
with Azure SQL
Applies to: Azure SQL Database Azure SQL Managed Instance Azure Synapse Analytics
This article shows you how to create and populate an Azure Active Directory (Azure AD) instance, and then use
Azure AD with Azure SQL Database, Azure SQL Managed Instance, and Azure Synapse Analytics. For an
overview, see Azure Active Directory authentication.
IMPORTANT
Every Azure subscription has a trust relationship with an Azure AD instance. This means that it trusts that
directory to authenticate users, services, and devices. Multiple subscriptions can trust the same directory, but a
subscription trusts only one directory. This trust relationship that a subscription has with a directory is unlike the
relationship that a subscription has with all other resources in Azure (websites, databases, and so on), which are
more like child resources of a subscription. If a subscription expires, then access to those other resources
associated with the subscription also stops. But the directory remains in Azure, and you can associate another
subscription with that directory and continue to manage the directory users. For more information about
resources, see Understanding resource access in Azure. To learn more about this trusted relationship see How to
associate or add an Azure subscription to Azure Active Directory.
NOTE
Users that are not based on an Azure AD account (including the server administrator account) cannot create Azure AD-
based users, because they do not have permission to validate proposed database users with the Azure AD.
Your SQL Managed Instance needs permissions to read Azure AD to successfully accomplish tasks such as
authentication of users through security group membership or creation of new users. For this to work, you need
to grant the SQL Managed Instance permission to read Azure AD. You can do this using the Azure portal or
PowerShell.
Azure portal
To grant your SQL Managed Instance Azure AD read permission using the Azure portal, log in as Global
Administrator in Azure AD and follow these steps:
1. In the Azure portal, in the upper-right corner select your account, and then choose Switch directories to
confirm which Active Directory is currently your active directory. Switch directories, if necessary.
4. Select the banner on top of the Active Directory admin page and grant permission to the current user.
5. After the operation succeeds, the following notification will show up in the top-right corner:
6. Now you can choose your Azure AD admin for your SQL Managed Instance. For that, on the Active
Directory admin page, select Set admin command.
7. On the Azure AD admin page, search for a user, select the user or group to be an administrator, and then
select Select .
The Active Directory admin page shows all members and groups of your Active Directory. Users or
groups that are grayed out can't be selected because they aren't supported as Azure AD administrators.
See the list of supported admins in Azure AD Features and Limitations. Azure role-based access control
(Azure RBAC) applies only to the Azure portal and isn't propagated to SQL Database, SQL Managed
Instance, or Azure Synapse.
TIP
To later remove an Admin, at the top of the Active Directory admin page, select Remove admin , and then select Save .
PowerShell
To grant your SQL Managed Instance Azure AD read permission by using the PowerShell, run this script:
# Gives Azure Active Directory read permission to a Service Principal representing the SQL Managed Instance.
# Can be executed only by a "Global Administrator" or "Privileged Role Administrator" type of user.
To run PowerShell cmdlets, you need to have Azure PowerShell installed and running. For detailed information,
see How to install and configure Azure PowerShell.
IMPORTANT
The PowerShell Azure Resource Manager (RM) module is still supported by Azure SQL Managed Instance, but all future
development is for the Az.Sql module. The AzureRM module will continue to receive bug fixes until at least December
2020. The arguments for the commands in the Az module and in the AzureRm modules are substantially identical. For
more about their compatibility, see Introducing the new Azure PowerShell Az module.
CMDLET NAME    DESCRIPTION
Get-AzSqlInstanceActiveDirectoryAdministrator    Gets the Azure AD administrator for a SQL Managed Instance
Set-AzSqlInstanceActiveDirectoryAdministrator    Provisions an Azure AD administrator for a SQL Managed Instance
Remove-AzSqlInstanceActiveDirectoryAdministrator    Removes the Azure AD administrator from a SQL Managed Instance
The following command gets information about an Azure AD administrator for a SQL Managed Instance named
ManagedInstance01 that is associated with a resource group named ResourceGroup01.
The following command provisions an Azure AD administrator group named DBAs for the SQL Managed
Instance named ManagedInstance01. This server is associated with resource group ResourceGroup01.
The following command removes the Azure AD administrator for the SQL Managed Instance named
ManagedInstanceName01 associated with the resource group ResourceGroup01.
The following two procedures show you how to provision an Azure Active Directory administrator for your
server in the Azure portal and by using PowerShell.
Azure portal
1. In the Azure portal, in the upper-right corner, select your connection to drop down a list of possible Active
Directories. Choose the correct Active Directory as the default Azure AD. This step links the subscription-
associated Active Directory with the server, making sure that the same subscription is used for both Azure AD
and the server.
2. Search for and select SQL servers.
NOTE
On this page, before you select SQL servers, you can select the star next to the name to favorite the category
and add SQL servers to the left navigation bar.
5. In the Add admin page, search for a user, select the user or group to be an administrator, and then select
Select. The Active Directory admin page shows all members and groups of your Active Directory. Users
or groups that are grayed out cannot be selected because they are not supported as Azure AD
administrators. (See the list of supported admins in the Azure AD Features and Limitations section of
Use Azure Active Directory Authentication for authentication with SQL Database or Azure Synapse.)
Azure role-based access control (Azure RBAC) applies only to the portal and is not propagated to SQL
Database, SQL Managed Instance, or Azure Synapse.
6. At the top of the Active Directory admin page, select Save.
For Azure AD users and groups, the Object ID is displayed next to the admin name. For applications
(service principals), the Application ID is displayed.
The process of changing the administrator may take several minutes. Then the new administrator appears in the
Active Directory admin box.
NOTE
When setting up the Azure AD admin, the new admin name (user or group) cannot already be present in the virtual
master database as a server authentication user. If present, the Azure AD admin setup will fail, rolling back its creation and
indicating that such an admin (name) already exists. Since such a server authentication user is not part of the Azure AD,
any effort to connect to the server using Azure AD authentication fails.
To later remove an Admin, at the top of the Active Directory admin page, select Remove admin, and then
select Save.
PowerShell for SQL Database and Azure Synapse
To run PowerShell cmdlets, you need to have Azure PowerShell installed and running. For detailed information,
see How to install and configure Azure PowerShell. To provision an Azure AD admin, execute the following Azure
PowerShell commands:
Connect-AzAccount
Select-AzSubscription
Cmdlets used to provision and manage Azure AD admin for SQL Database and Azure Synapse:
CMDLET NAME    DESCRIPTION
Get-AzSqlServerActiveDirectoryAdministrator    Gets the Azure AD administrator for a server hosting SQL Database or Azure Synapse
Set-AzSqlServerActiveDirectoryAdministrator    Provisions an Azure AD administrator for a server hosting SQL Database or Azure Synapse
Remove-AzSqlServerActiveDirectoryAdministrator    Removes the Azure AD administrator from a server hosting SQL Database or Azure Synapse
Use PowerShell command get-help to see more information for each of these commands. For example,
get-help Set-AzSqlServerActiveDirectoryAdministrator .
The following script provisions an Azure AD administrator group named DBA_Group (object ID
40b79501-b343-44ed-9ce7-da4c8cc7353f) for the demo_server server in a resource group named Group-23:
The DisplayName input parameter accepts either the Azure AD display name or the User Principal Name. For
example, DisplayName="John Smith" and DisplayName="[email protected]" . For Azure AD groups only the Azure
AD display name is supported.
NOTE
The Azure PowerShell command Set-AzSqlServerActiveDirectoryAdministrator does not prevent you from
provisioning Azure AD admins for unsupported users. An unsupported user can be provisioned, but cannot connect to a
database.
NOTE
The Azure AD ObjectID is required when the DisplayName is not unique. To retrieve the ObjectID and DisplayName
values, use the Active Directory section of Azure Classic Portal, and view the properties of a user or group.
The following example returns information about the current Azure AD admin for the server:
Get-AzSqlServerActiveDirectoryAdministrator -ResourceGroupName "Group-23" -ServerName "demo_server" |
Format-List
NOTE
You can also provision an Azure Active Directory Administrator by using the REST APIs. For more information, see Service
Management REST API Reference and Operations for Azure SQL Database.
On all client machines, from which your applications or users connect to SQL Database or Azure Synapse using
Azure AD identities, you must install the following software:
.NET Framework 4.6 or later from https://msdn.microsoft.com/library/5a4x27ek.aspx.
Microsoft Authentication Library (MSAL) or Azure Active Directory Authentication Library for SQL Server
(ADAL.DLL). Below are the download links to install the latest SSMS, ODBC, and OLE DB driver that contains
the ADAL.DLL library.
SQL Server Management Studio
ODBC Driver 17 for SQL Server
OLE DB Driver 18 for SQL Server
You can meet these requirements as follows:
Installing the latest version of SQL Server Management Studio or SQL Server Data Tools meets the .NET
Framework 4.6 requirement.
SSMS installs the x86 version of ADAL.DLL.
SSDT installs the amd64 version of ADAL.DLL.
The latest Visual Studio from Visual Studio Downloads also meets the .NET Framework 4.6 requirement,
but does not install the required amd64 version of ADAL.DLL.
NOTE
Database users (with the exception of administrators) cannot be created using the Azure portal. Azure roles are not
propagated to the database in SQL Database, the SQL Managed Instance, or Azure Synapse. Azure roles are used for
managing Azure Resources, and do not apply to database permissions. For example, the SQL Ser ver Contributor role
does not grant access to connect to the database in SQL Database, the SQL Managed Instance, or Azure Synapse. The
access permission must be granted directly in the database using Transact-SQL statements.
WARNING
Special characters like colon : or ampersand & when included as user names in the T-SQL CREATE LOGIN and
CREATE USER statements are not supported.
IMPORTANT
Azure AD users and service principals (Azure AD applications) that are members of more than 2048 Azure AD security
groups can't sign in to the database in SQL Database, SQL Managed Instance, or Azure Synapse.
To create an Azure AD-based contained database user (other than the server administrator that owns the
database), connect to the database with an Azure AD identity, as a user with at least the ALTER ANY USER
permission. Then use the following Transact-SQL syntax:
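Based on the surrounding description and CREATE USER (Transact-SQL), the general form is:
CREATE USER [<Azure_AD_principal_name>] FROM EXTERNAL PROVIDER;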
Azure_AD_principal_name can be the user principal name of an Azure AD user or the display name for an Azure
AD group.
Examples: The following statements create contained database users representing, in order, an Azure AD federated or
managed domain user, an Azure AD or federated domain group (using the display name of a security group), and an
application that connects using an Azure AD token, as shown in the sketch below.
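A minimal sketch of the three cases; the principal names are placeholders rather than values from the original article:
CREATE USER [alice@contoso.com] FROM EXTERNAL PROVIDER;   -- federated or managed domain user
CREATE USER [ICU Nurses] FROM EXTERNAL PROVIDER;          -- Azure AD security group, by display name
CREATE USER [appName] FROM EXTERNAL PROVIDER;             -- application that connects with an Azure AD token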
TIP
You cannot directly create a user from an Azure Active Directory other than the Azure Active Directory that is associated
with your Azure subscription. However, members of other Active Directories that are imported users in the associated
Active Directory (known as external users) can be added to an Active Directory group in the tenant Active Directory. By
creating a contained database user for that AD group, the users from the external Active Directory can gain access to SQL
Database.
For more information about creating contained database users based on Azure Active Directory identities, see
CREATE USER (Transact-SQL).
NOTE
Removing the Azure Active Directory administrator for the server prevents any Azure AD authentication user from
connecting to the server. If necessary, unusable Azure AD users can be dropped manually by a SQL Database
administrator.
NOTE
If you receive a Connection Timeout Expired , you may need to set the TransparentNetworkIPResolution parameter
of the connection string to false. For more information, see Connection timeout issue with .NET Framework 4.6.1 -
TransparentNetworkIPResolution.
When you create a database user, that user receives the CONNECT permission and can connect to that
database as a member of the PUBLIC role. Initially the only permissions available to the user are any
permissions granted to the PUBLIC role, or any permissions granted to any Azure AD groups that they are a
member of. Once you provision an Azure AD-based contained database user, you can grant the user additional
permissions, the same way as you grant permission to any other type of user. Typically, you grant permissions to
database roles and then add users to those roles. For more information, see Database Engine Permission Basics. For more
information about special SQL Database roles, see Managing Databases and Logins in Azure SQL Database. A
federated domain user account that is imported into a managed domain as an external user must use the
managed domain identity.
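For example (a hedged sketch; the user name is a placeholder), a contained Azure AD user is typically given access by adding it to a database role or granting a permission directly:
-- Add the contained Azure AD user to a built-in database role.
ALTER ROLE db_datareader ADD MEMBER [aaduser@contoso.com];
-- Or grant a specific permission directly.
GRANT SELECT ON SCHEMA::dbo TO [aaduser@contoso.com];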
NOTE
Azure AD users are marked in the database metadata with type E (EXTERNAL_USER) and for groups with type X
(EXTERNAL_GROUPS). For more information, see sys.database_principals.
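A quick way to see these markings (an illustrative query, not from the original article) is to list the external principals in the database:
SELECT name, type, type_desc
FROM sys.database_principals
WHERE type IN ('E', 'X');   -- E = external user, X = external group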
Connect to the database using SSMS or SSDT
To confirm the Azure AD administrator is properly set up, connect to the master database using the Azure AD
administrator account. To provision an Azure AD-based contained database user (other than the server
administrator that owns the database), connect to the database with an Azure AD identity that has access to the
database.
IMPORTANT
Support for Azure Active Directory authentication is available with SQL Server Management Studio (SSMS) starting in
2016 and SQL Server Data Tools starting in 2015. The August 2016 release of SSMS also includes support for Active
Directory Universal Authentication, which allows administrators to require Multi-Factor Authentication using a phone call,
text message, smart cards with pin, or mobile app notification.
2. Select the Options button, and on the Connection Properties page, in the Connect to database box,
type the name of the user database you want to connect to. For more information, see the article Multi-
factor Azure AD auth on the differences between the Connection Properties for SSMS 17.x and 18.x.
Active Directory password authentication
Use this method when connecting with an Azure AD principal name using the Azure AD managed domain. You
can also use it for federated accounts without access to the domain, for example, when working remotely.
Use this method to authenticate to the database in SQL Database or the SQL Managed Instance with Azure AD
cloud-only identity users, or those who use Azure AD hybrid identities. This method supports users who want to
use their Windows credential, but their local machine is not joined with the domain (for example, using remote
access). In this case, a Windows user can indicate their domain account and password, and can authenticate to
the database in SQL Database, the SQL Managed Instance, or Azure Synapse.
1. Start Management Studio or Data Tools and in the Connect to Server (or Connect to Database
Engine) dialog box, in the Authentication box, select Azure Active Directory - Password.
2. In the User name box, type your Azure Active Directory user name in the format
[email protected] . User names must be an account from Azure Active Directory or an account
from a managed or federated domain with Azure Active Directory.
3. In the Password box, type your user password for the Azure Active Directory account or
managed/federated domain account.
4. Select the Options button, and on the Connection Properties page, in the Connect to database box,
type the name of the user database you want to connect to. (See the graphic in the previous option.)
Active Directory interactive authentication
Use this method for interactive authentication with or without Multi-Factor Authentication (MFA), with password
being requested interactively. This method can be used to authenticate to the database in SQL Database, the SQL
Managed Instance, and Azure Synapse for Azure AD cloud-only identity users, or those who use Azure AD
hybrid identities.
For more information, see Using multi-factor Azure AD authentication with SQL Database and Azure Synapse
(SSMS support for MFA).
The connection string keyword Integrated Security=True is not supported for connecting to Azure SQL
Database. When making an ODBC connection, you will need to remove spaces and set Authentication to
'ActiveDirectoryIntegrated'.
Active Directory password authentication
To connect to a database using Azure AD cloud-only identity user accounts, or those who use Azure AD hybrid
identities, the Authentication keyword must be set to Active Directory Password . The connection string must
contain User ID/UID and Password/PWD keywords and values. The following C# code sample uses ADO .NET.
string ConnectionString =
@"Data Source=n9lxnyuzhv.database.windows.net; Authentication=Active Directory Password; Initial
Catalog=testdb; [email protected]; PWD=MyPassWord!";
SqlConnection conn = new SqlConnection(ConnectionString);
conn.Open();
Learn more about Azure AD authentication methods using the demo code samples available at Azure AD
Authentication GitHub Demo.
Azure AD token
This authentication method allows middle-tier services to obtain JSON Web Tokens (JWT) to connect to the
database in SQL Database, the SQL Managed Instance, or Azure Synapse by obtaining a token from Azure AD.
This method enables various application scenarios including service identities, service principals, and
applications using certificate-based authentication. You must complete four basic steps to use Azure AD token
authentication:
1. Register your application with Azure Active Directory and get the client ID for your code.
2. Create a database user representing the application. (Completed earlier in step 6.)
3. Create a certificate on the client computer that runs the application.
4. Add the certificate as a key for your application.
Sample connection string:
For more information, see SQL Server Security Blog. For information about adding a certificate, see Get started
with certificate-based authentication in Azure Active Directory.
sqlcmd
The following statements connect using version 13.1 of sqlcmd, which is available from the Download Center.
NOTE
sqlcmd with the -G command does not work with system identities, and requires a user principal login.
sqlcmd -S Target_DB_or_DW.testsrv.database.windows.net -G
sqlcmd -S Target_DB_or_DW.testsrv.database.windows.net -U [email protected] -P MyAADPassword -G -l 30
Next steps
For an overview of logins, users, database roles, and permissions in SQL Database, see Logins, users,
database roles, and user accounts.
For more information about database principals, see Principals.
For more information about database roles, see Database roles.
For more information about firewall rules in SQL Database, see SQL Database firewall rules.
For information about how to set an Azure AD guest user as the Azure AD admin, see Create Azure AD guest
users and set as an Azure AD admin.
For information on how to use service principals with Azure SQL, see Create Azure AD users using Azure AD
applications.
Using multi-factor Azure Active Directory
authentication
Applies to: Azure SQL Database Azure SQL Managed Instance Azure Synapse Analytics
Azure SQL Database, Azure SQL Managed Instance, and Azure Synapse Analytics support connections from SQL
Server Management Studio (SSMS) using Azure Active Directory - Universal with MFA authentication. This
article discusses the differences between the various authentication options, and also the limitations associated
with using Universal Authentication in Azure Active Directory (Azure AD) for Azure SQL.
Download the latest SSMS - On the client computer, download the latest version of SSMS, from Download
SQL Server Management Studio (SSMS).
NOTE
In December 2021, releases of SSMS prior to 18.6 will no longer authenticate through Azure Active Directory with MFA.
To continue utilizing Azure Active Directory authentication with MFA, you need SSMS 18.6 or later.
For all the features discussed in this article, use at least the July 2017 release, version 17.2, which includes the most
recent connection dialog box.
Authentication options
There are two non-interactive authentication models for Azure AD, which can be used in many different
applications (ADO.NET, JDBC, ODBC, and so on). These two methods never result in pop-up dialog boxes:
Azure Active Directory - Password
Azure Active Directory - Integrated
The interactive method that also supports Azure AD multi-factor authentication (MFA) is:
Azure Active Directory - Universal with MFA
Azure AD MFA helps safeguard access to data and applications while meeting user demand for a simple sign-in
process. It delivers strong authentication with a range of easy verification options (phone call, text message,
smart cards with pin, or mobile app notification), allowing users to choose the method they prefer. Interactive
MFA with Azure AD can result in a pop-up dialog box for validation.
For a description of Azure AD multi-factor authentication, see multi-factor authentication. For configuration
steps, see Configure Azure SQL Database multi-factor authentication for SQL Server Management Studio.
Azure AD domain name or tenant ID parameter
Beginning with SSMS version 17, users that are imported into the current Azure AD from other Azure Active
Directories as guest users, can provide the Azure AD domain name, or tenant ID when they connect. Guest users
include users invited from other Azure ADs, Microsoft accounts such as outlook.com, hotmail.com, live.com, or
other accounts like gmail.com. This information allows Azure Active Directory - Universal with MFA
authentication to identify the correct authenticating authority. This option is also required to support Microsoft
accounts (MSA) such as outlook.com, hotmail.com, live.com, or non-MSA accounts.
All guest users who want to be authenticated using Universal Authentication must enter their Azure AD domain
name or tenant ID. This parameter represents the current Azure AD domain name or tenant ID that the Azure
SQL logical server is associated with. For example, if the SQL logical server is associated with the Azure AD
domain contosotest.onmicrosoft.com , where user [email protected] is hosted as an imported
user from the Azure AD domain contosodev.onmicrosoft.com , the domain name required to authenticate this
user is contosotest.onmicrosoft.com . When the user is a native user of the Azure AD associated to SQL logical
server, and is not an MSA account, no domain name or tenant ID is required. To enter the parameter (beginning
with SSMS version 17.2):
1. Open a connection in SSMS. Input your server name, and select Azure Active Directory - Universal
with MFA authentication. Add the User name that you want to sign in with.
2. Select the Options box, and go to the Connection Properties tab. In the Connect to Database
dialog box, complete the dialog box for your database. Check the AD domain name or tenant ID box,
and provide the authenticating authority, such as the domain name (contosotest.onmicrosoft.com) or the
GUID of the tenant ID.
If you are running SSMS 18.x or later, the AD domain name or tenant ID is no longer needed for guest users
because 18.x or later automatically recognizes it.
Azure AD business to business support
Azure AD users that are supported for Azure AD B2B scenarios as guest users (see What is Azure B2B
collaboration) can connect to SQL Database and Azure Synapse as individual users or members of an Azure AD
group created in the associated Azure AD, and mapped manually using the CREATE USER (Transact-SQL)
statement in a given database.
For example, if [email protected] is invited to Azure AD contosotest (with the Azure AD domain
contosotest.onmicrosoft.com ), a user [email protected] must be created for a specific database (such as
MyDatabase ) by an Azure AD SQL administrator or Azure AD DBO by executing the Transact-SQL
create user [[email protected]] FROM EXTERNAL PROVIDER statement. If [email protected] is part of an Azure AD
group, such as usergroup then this group must be created for a specific database (such as MyDatabase ) by an
Azure AD SQL administrator, or Azure AD DBO by executing the Transact-SQL statement
create user [usergroup] FROM EXTERNAL PROVIDER .
After the database user or group is created, then the user [email protected] can sign into MyDatabase using the
SSMS authentication option Azure Active Directory – Universal with MFA . By default, the user or group only
has connect permission. Any further data access will need to be granted in the database by a user with enough
privilege.
NOTE
For SSMS 17.x, using [email protected] as a guest user, you must check the AD domain name or tenant ID box and
add the AD domain name contosotest.onmicrosoft.com in the Connection Property dialog box. The AD domain
name or tenant ID option is only supported for the Azure Active Directory - Universal with MFA authentication.
Otherwise, the check box is greyed out.
Next steps
For configuration steps, see Configure Azure SQL Database multi-factor authentication for SQL Server
Management Studio.
Grant others access to your database: SQL Database Authentication and Authorization: Granting Access
Make sure others can connect through the firewall: Configure a server-level firewall rule using the Azure
portal
Configure and manage Azure Active Directory authentication with SQL Database or Azure Synapse
Create Azure AD guest users and set as an Azure AD admin
Microsoft SQL Server Data-Tier Application Framework (17.0.0 GA)
SQLPackage.exe
Import a BACPAC file to a new database
Export a database to a BACPAC file
C# interface IUniversalAuthProvider Interface
Configure multi-factor authentication for SQL
Server Management Studio and Azure AD
Applies to: Azure SQL Database Azure SQL Managed Instance Azure Synapse Analytics
This article shows you how to use Azure Active Directory (Azure AD) multi-factor authentication (MFA) with SQL
Server Management Studio (SSMS). Azure AD MFA can be used when connecting SSMS or SqlPackage.exe to
Azure SQL Database, Azure SQL Managed Instance and Azure Synapse Analytics. For an overview of multi-
factor authentication, see Universal Authentication with SQL Database, SQL Managed Instance, and Azure
Synapse (SSMS support for MFA).
IMPORTANT
Databases in Azure SQL Database, Azure SQL Managed Instance, and Azure Synapse are referred to collectively in the
remainder of this article as databases, and the term server refers to the server that hosts databases for Azure SQL
Database and Azure Synapse.
Configuration steps
1. Configure an Azure Active Directory - For more information, see Administering your Azure AD
directory, Integrating your on-premises identities with Azure Active Directory, Add your own domain name
to Azure AD, Microsoft Azure now supports federation with Windows Server Active Directory, and Manage
Azure AD using Windows PowerShell.
2. Configure MFA - For step-by-step instructions, see What is Azure AD Multi-Factor Authentication?,
Conditional Access (MFA) with Azure SQL Database and Data Warehouse. (Full Conditional Access requires a
Premium Azure Active Directory. Limited MFA is available with a standard Azure AD.)
3. Configure Azure AD Authentication - For step-by-step instructions, see Connecting to SQL Database,
SQL Managed Instance, or Azure Synapse using Azure Active Directory Authentication.
4. Download SSMS - On the client computer, download the latest SSMS, from Download SQL Server
Management Studio (SSMS).
NOTE
In December 2021, releases of SSMS prior to 18.6 will no longer authenticate through Azure Active Directory with MFA.
To continue utilizing Azure Active Directory authentication with MFA, you need SSMS 18.6 or later.
1. To connect using Universal Authentication, on the Connect to Server dialog box in SQL Server
Management Studio (SSMS), select Active Directory - Universal with MFA support. (If you see
Active Directory Universal Authentication, you are not on the latest version of SSMS.)
2. Complete the User name box with the Azure Active Directory credentials, in the format
[email protected] .
3. If you are connecting as a guest user, you no longer need to complete the AD domain name or tenant ID
field for guest users because SSMS 18.x or later automatically recognizes it. For more information, see
Universal Authentication with SQL Database, SQL Managed Instance, and Azure Synapse (SSMS support
for MFA).
However, if you are connecting as a guest user using SSMS 17.x or older, you must click Options, and on
the Connection Property dialog box, complete the AD domain name or tenant ID box.
4. Select Options and specify the database in the Options dialog box. If the connected user is a guest
user (for example, [email protected]), you must check the box and add the current AD domain name or tenant ID
as part of Options. See Universal Authentication with SQL Database and Azure Synapse Analytics (SSMS
support for MFA). Then click Connect.
5. When the Sign in to your account dialog box appears, provide the account and password of your
Azure Active Directory identity. No password is required if a user is part of a domain federated with
Azure AD.
NOTE
For Universal Authentication with an account that does not require MFA, you connect at this point. For users
requiring MFA, continue with the following steps:
6. Two MFA setup dialog boxes might appear. This one-time operation depends on the MFA administrator
setting, and therefore may be optional. For an MFA-enabled domain, this step is sometimes pre-defined
(for example, the domain requires users to use a smart card and pin).
7. The second possible one-time dialog box allows you to select the details of your authentication method.
The possible options are configured by your administrator.
8. Azure Active Directory sends the confirmation information to you. When you receive the verification
code, enter it into the Enter verification code box, and click Sign in.
When verification is complete, SSMS connects normally, assuming valid credentials and firewall access.
Next steps
For an overview of multi-factor authentication, see Universal Authentication with SQL Database, SQL
Managed Instance, and Azure Synapse (SSMS support for MFA).
Grant others access to your database: SQL Database Authentication and Authorization: Granting Access
Make sure others can connect through the firewall: Configure a server-level firewall rule using the Azure
portal
Conditional Access with Azure SQL Database and
Azure Synapse Analytics
Applies to: Azure SQL Database Azure SQL Managed Instance Azure Synapse Analytics
Azure SQL Database, Azure SQL Managed Instance, and Azure Synapse Analytics support Microsoft Conditional
Access.
The following steps show how to configure Azure SQL Database, SQL Managed Instance, or Azure Synapse to
enforce a Conditional Access policy.
Prerequisites
You must configure Azure SQL Database, Azure SQL Managed Instance, or dedicated SQL pool in Azure
Synapse to support Azure Active Directory (Azure AD) authentication. For specific steps, see Configure and
manage Azure Active Directory authentication with SQL Database or Azure Synapse.
When Multi-Factor Authentication is enabled, you must connect with a supported tool, such as the latest SQL
Server Management Studio (SSMS). For more information, see Configure Azure SQL Database multi-factor
authentication for SQL Server Management Studio.
1. Sign in to the Azure portal, select Azure Active Directory, and then select Conditional Access. For
more information, see Azure Active Directory Conditional Access technical reference.
2. In the Conditional Access-Policies blade, click New policy , provide a name, and then click Configure
rules .
3. Under Assignments , select Users and groups , check Select users and groups , and then select the
user or group for Conditional Access. Click Select , and then click Done to accept your selection.
4. Select Cloud apps , click Select apps . You see all apps available for Conditional Access. Select Azure
SQL Database , at the bottom click Select , and then click Done .
If you can't find Azure SQL Database in the list of available apps, complete the following
steps:
Connect to your database in Azure SQL Database by using SSMS with an Azure AD admin account.
Execute CREATE USER [[email protected]] FROM EXTERNAL PROVIDER .
Sign into Azure AD and verify that Azure SQL Database, SQL Managed Instance, or Azure Synapse are
listed in the applications in your Azure AD instance.
5. Select Access controls , select Grant , and then check the policy you want to apply. For this example, we
select Require multi-factor authentication .
Summary
The selected application (Azure SQL Database), using Azure AD Premium, now enforces the selected Conditional
Access policy: Require multi-factor authentication.
For questions about Azure SQL Database and Azure Synapse regarding multi-factor authentication, contact
[email protected].
Next steps
For a tutorial, see Secure your database in SQL Database.
Azure Active Directory server principals
Applies to: Azure SQL Database Azure SQL Managed Instance Azure Synapse Analytics
(dedicated SQL pools only)
NOTE
Azure Active Directory (Azure AD) server principals (logins) are currently in public preview for Azure SQL Database. Azure
SQL Managed Instance can already utilize Azure AD logins.
You can now create and utilize Azure AD server principals, which are logins in the virtual master database of a
SQL Database. There are several benefits of using Azure AD server principals for SQL Database:
Support Azure SQL Database server roles for permission management.
Support multiple Azure AD users with special roles for SQL Database, such as the loginmanager and
dbmanager roles.
Functional parity between SQL logins and Azure AD logins.
Support for functional improvements, such as Azure AD-only authentication. Azure AD-only
authentication allows SQL authentication to be disabled, which includes the SQL server admin, SQL logins,
and users.
Allows Azure AD principals to support geo-replicas. Azure AD principals will be able to connect to the geo-
replica of a user database, with a read-only permission and deny permission to the primary server.
Ability to use Azure AD service principal logins with special roles to execute a full automation of user and
database creation, as well as maintenance provided by Azure AD applications.
Closer functionality between Managed Instance and SQL Database, as Managed Instance already supports
Azure AD logins in the master database.
For more information on Azure AD authentication in Azure SQL, see Use Azure Active Directory authentication
Permissions
The following permissions are required to utilize or create Azure AD logins in the virtual master database.
Azure AD admin permission or membership in the loginmanager server role. The first Azure AD login can
only be created by the Azure AD admin.
Must be a member of Azure AD within the same directory used for Azure SQL Database
By default, the standard permission granted to a newly created Azure AD login in the master database is VIEW
ANY DATABASE.
CREATE LOGIN login_name { FROM EXTERNAL PROVIDER | WITH <option_list> [, ...] }
<option_list> ::=
    PASSWORD = { 'password' }
    [ , SID = sid ]
The login_name specifies the Azure AD principal, which is an Azure AD user, group, or application.
For more information, see CREATE LOGIN (Transact-SQL).
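For illustration (a hedged sketch; the login name is a placeholder), an Azure AD login is created in the virtual master database as follows:
-- Run in the virtual master database as the Azure AD admin.
CREATE LOGIN [aadlogin@contoso.com] FROM EXTERNAL PROVIDER;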
Create user syntax
The below T-SQL syntax is already available in SQL Database, and can be used for creating database-level Azure
AD principals mapped to Azure AD logins in the virtual master database.
To create an Azure AD user from an Azure AD login, use the following syntax. Only the Azure AD admin can
execute this command in the virtual master database.
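A hedged sketch of that syntax, with placeholder names and consistent with CREATE USER (Transact-SQL):
CREATE USER [aaduser@contoso.com] FROM LOGIN [aaduser@contoso.com];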
The Azure AD principal login_name won't be able to log into any user database in the SQL Database logical
server where an Azure AD user principal (user_name) mapped to the login (login_name) was created.
NOTE
ALTER LOGIN login_name DISABLE is not supported for contained users.
ALTER LOGIN login_name DISABLE is not supported for Azure AD groups.
An individual disabled login cannot belong to a user who is part of a login group created in the master database
(for example, an Azure AD admin group).
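For example (a hedged sketch with a placeholder login name), an individual Azure AD login is disabled with:
ALTER LOGIN [aadlogin@contoso.com] DISABLE;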
For the DISABLE or ENABLE changes to take immediate effect, the authentication cache and the
TokenAndPermUserStore cache must be cleared using the following T-SQL commands:
DBCC FLUSHAUTHCACHE
DBCC FREESYSTEMCACHE('TokenAndPermUserStore') WITH NO_INFOMSGS
Next steps
Tutorial: Create and utilize Azure Active Directory server logins
Azure Active Directory service principal with Azure
SQL
For more information, see the New-AzSqlServer command, or New-AzSqlInstance command for SQL
Managed Instance.
For existing Azure SQL Logical servers, execute the following command:
For more information, see the Set-AzSqlServer command, or Set-AzSqlInstance command for SQL
Managed Instance.
To check if the server identity is assigned to the server, execute the Get-AzSqlServer command.
NOTE
Server identity can be assigned using REST API and CLI commands as well. For more information, see az sql server
create, az sql server update, and Servers - REST API.
2. Grant the Azure AD Directory Readers permission to the server identity created or assigned to the
server.
To grant this permission, follow the description used for SQL Managed Instance that is available in the
following article: Provision Azure AD admin (SQL Managed Instance)
The Azure AD user who is granting this permission must be part of the Azure AD Global
Administrator or Privileged Roles Administrator role.
For dedicated SQL pools in an Azure Synapse workspace, use the workspace's managed identity
instead of the Azure SQL server identity.
IMPORTANT
With Microsoft Graph support for Azure SQL, the Directory Readers role can be replaced with using lower level
permissions. For more information, see User-assigned managed identity in Azure AD for Azure SQL
Steps 1 and 2 must be executed in the above order. First, create or assign the server identity, followed by granting the
Directory Readers permission, or the lower-level permissions discussed in User-assigned managed identity in Azure AD for
Azure SQL. Omitting one or both of these steps will cause an execution error during an Azure AD object creation in Azure
SQL on behalf of an Azure AD application.
You can assign the Directory Readers role to a group in Azure AD. The group owners can then add the managed
identity as a member of this group, which would bypass the need for a Global Administrator or Privileged Roles
Administrator to grant the Directory Readers role. For more information on this feature, see Directory Readers role in
Azure Active Directory for Azure SQL.
Next steps
Tutorial: Create Azure AD users using Azure AD applications
Directory Readers role in Azure Active Directory for
Azure SQL
Applies to: Azure SQL Database Azure SQL Managed Instance Azure Synapse Analytics
Azure Active Directory (Azure AD) has introduced using Azure AD groups to manage role assignments. This
allows for Azure AD roles to be assigned to groups.
NOTE
With Microsoft Graph support for Azure SQL, the Directory Readers role can be replaced with using lower level
permissions. For more information, see User-assigned managed identity in Azure AD for Azure SQL.
When enabling a managed identity for Azure SQL Database, Azure SQL Managed Instance, or Azure Synapse
Analytics, the Azure AD Directory Readers role can be assigned to the identity to allow read access to the
Microsoft Graph API. The managed identity of SQL Database and Azure Synapse is referred to as the server
identity. The managed identity of SQL Managed Instance is referred to as the managed instance identity, and is
automatically assigned when the instance is created. For more information on assigning a server identity to SQL
Database or Azure Synapse, see Enable service principals to create Azure AD users.
When assigned to the server or instance identity, the Directory Readers role can help:
Create Azure AD logins for SQL Managed Instance
Impersonate Azure AD users in Azure SQL
Migrate SQL Server users that use Windows authentication to SQL Managed Instance with Azure AD
authentication (using the ALTER USER (Transact-SQL) command)
Change the Azure AD admin for SQL Managed Instance
Allow service principals (Applications) to create Azure AD users in Azure SQL
Next steps
Tutorial: Assign Directory Readers role to an Azure AD group and manage role assignments
Azure AD-only authentication with Azure SQL
Applies to: Azure SQL Database Azure SQL Managed Instance Azure Synapse Analytics
(dedicated SQL pools only)
Azure AD-only authentication is a feature within Azure SQL that allows the service to only support Azure AD
authentication, and is supported for Azure SQL Database and Azure SQL Managed Instance.
Azure AD-only authentication is also available for dedicated SQL pools (formerly SQL DW) in standalone
servers. Azure AD-only authentication can be enabled for the Azure Synapse workspace. For more information,
see Azure AD-only authentication with Azure Synapse workspaces.
SQL authentication is disabled when enabling Azure AD-only authentication in the Azure SQL environment,
including connections from SQL server administrators, logins, and users. Only users using Azure AD
authentication are authorized to connect to the server or database.
Azure AD-only authentication can be enabled or disabled using the Azure portal, Azure CLI, PowerShell, or REST
API. Azure AD-only authentication can also be configured during server creation with an Azure Resource
Manager (ARM) template.
For more information on Azure SQL authentication, see Authentication and authorization.
Feature description
When enabling Azure AD-only authentication, SQL authentication is disabled at the server or managed instance
level and prevents any authentication based on any SQL authentication credentials. SQL authentication users
won't be able to connect to the logical server for Azure SQL Database or managed instance, including all of its
databases. Although SQL authentication is disabled, new SQL authentication logins and users can still be created
by Azure AD accounts with proper permissions. Newly created SQL authentication accounts won't be allowed to
connect to the server. Enabling Azure AD-only authentication doesn't remove existing SQL authentication login
and user accounts. The feature only prevents these accounts from connecting to the server, and any database
created for this server.
You can also force servers to be created with Azure AD-only authentication enabled using Azure Policy. For more
information, see Azure Policy for Azure AD-only authentication.
Permissions
Azure AD-only authentication can be enabled or disabled by Azure AD users who are members of high
privileged Azure AD built-in roles, such as Azure subscription Owners, Contributors, and Global Administrators.
Additionally, the role SQL Security Manager can also enable or disable the Azure AD-only authentication feature.
The SQL Server Contributor and SQL Managed Instance Contributor roles won't have permissions to enable or
disable the Azure AD-only authentication feature. This is consistent with the Separation of Duties approach,
where users who can create an Azure SQL server or create an Azure AD admin, can't enable or disable security
features.
Actions required
The following actions are added to the SQL Security Manager role to allow management of the Azure AD-only
authentication feature.
Microsoft.Sql/servers/azureADOnlyAuthentications/*
Microsoft.Sql/servers/administrators/read - required only for users accessing the Azure portal Azure Active
Directory menu
Microsoft.Sql/managedInstances/azureADOnlyAuthentications/*
Microsoft.Sql/managedInstances/read
The above actions can also be added to a custom role to manage Azure AD-only authentication. For more
information, see Create and assign a custom role in Azure Active Directory.
Azure CLI
Disable
az sql mi ad-only-auth disable --resource-group myresource --name myserver
To check whether Azure AD-only authentication is enabled for your server or instance, run the following T-SQL; it
returns 1 when the feature is enabled:
SELECT SERVERPROPERTY('IsExternalAuthenticationOnly')
Remarks
A SQL Server Contributor can set or remove an Azure AD admin, but can't set the Azure Active Directory
authentication only setting. The SQL Security Manager can't set or remove an Azure AD admin, but can set
the Azure Active Directory authentication only setting. Only accounts with higher Azure RBAC roles or
custom roles that contain both permissions can set or remove an Azure AD admin and set the Azure Active
Directory authentication only setting. One such role is the Contributor role.
After enabling or disabling Azure Active Directory authentication only in the Azure portal, an Activity
log entry can be seen in the SQL server menu.
The Azure Active Directory authentication only setting can only be enabled or disabled by users with
the right permissions if the Azure Active Directory admin is specified. If the Azure AD admin isn't set, the
Azure Active Directory authentication only setting remains inactive and cannot be enabled or disabled.
Using APIs to enable Azure AD-only authentication will also fail if the Azure AD admin hasn't been set.
Changing an Azure AD admin when Azure AD-only authentication is enabled is supported for users with the
appropriate permissions.
Changing an Azure AD admin and enabling or disabling Azure AD-only authentication is allowed in the Azure
portal for users with the appropriate permissions. Both operations can be completed with one Save in the
Azure portal. The Azure AD admin must be set in order to enable Azure AD-only authentication.
Removing an Azure AD admin when the Azure AD-only authentication feature is enabled isn't supported.
Using an API to remove an Azure AD admin will fail if Azure AD-only authentication is enabled.
If the Azure Active Directory authentication only setting is enabled, the Remove admin button
is inactive in the Azure portal.
Removing an Azure AD admin and disabling the Azure Active Directory authentication only setting is
allowed, but requires the right user permission to complete the operations. Both operations can be
completed with one Save in the Azure portal.
Azure AD users with proper permissions can impersonate existing SQL users.
Impersonation continues working between SQL authentication users even when the Azure AD-only
authentication feature is enabled.
Limitations for Azure AD-only authentication in SQL Database
When Azure AD-only authentication is enabled for SQL Database, the following features aren't supported:
Azure SQL Database server roles are supported for Azure AD server principals, but not if the Azure AD login
is a group.
Elastic jobs
SQL Data Sync
Change data capture (CDC) - If you create a database in Azure SQL Database as an Azure AD user and enable
change data capture on it, a SQL user will not be able to disable or make changes to CDC artifacts. However,
another Azure AD user will be able to enable or disable CDC on the same database. Similarly, if you create an
Azure SQL Database as a SQL user, enabling or disabling CDC as an Azure AD user won't work
Transactional replication - Since SQL authentication is required for connectivity between replication
participants, when Azure AD-only authentication is enabled, transactional replication is not supported for
SQL Database for scenarios where transactional replication is used to push changes made in an Azure SQL
Managed Instance, on-premises SQL Server, or an Azure VM SQL Server instance to a database in Azure SQL
Database
SQL Insights (preview)
EXEC AS statement for Azure AD group member accounts
Limitations for Azure AD-only authentication in Managed Instance
When Azure AD-only authentication is enabled for Managed Instance, the following features aren't supported:
Transactional replication
SQL Agent jobs in Managed Instance support Azure AD-only authentication. However, an Azure AD user
who is a member of an Azure AD group that has access to the managed instance cannot own SQL Agent jobs.
SQL Insights (preview)
EXEC AS statement for Azure AD group member accounts
For more limitations, see T-SQL differences between SQL Server & Azure SQL Managed Instance.
Next steps
Tutorial: Enable Azure Active Directory only authentication with Azure SQL
Create server with Azure AD-only authentication enabled in Azure SQL
Azure Policy for Azure Active Directory only
authentication with Azure SQL
Permissions
For an overview of the permissions needed to manage Azure Policy, see Azure RBAC permissions in Azure
Policy.
Actions
If you're using a custom role to manage Azure Policy, the following Actions are needed.
*/read
Microsoft.Authorization/policyassignments/*
Microsoft.Authorization/policydefinitions/*
Microsoft.Authorization/policyexemptions/*
Microsoft.Authorization/policysetdefinitions/*
Microsoft.PolicyInsights/*
For more information on custom roles, see Azure custom roles.
For a guide on how to add an Azure Policy for Azure AD-only authentication, see Using Azure Policy to enforce
Azure Active Directory only authentication with Azure SQL.
There are three effects for these policies:
Audit - The default setting, and will only capture an audit report in the Azure Policy activity logs
Deny - Prevents logical server or managed instance creation without Azure AD-only authentication enabled
Disabled - Will disable the policy, and won't restrict users from creating a logical server or managed
instance without Azure AD-only authentication enabled
If the Azure Policy for Azure AD-only authentication is set to Deny , Azure SQL logical server or managed
instance creation will fail. The details of this failure will be recorded in the Activity log of the resource group.
Policy compliance
You can view the Compliance setting under the Policy service to see the compliance state. The Compliance
state will tell you whether the server or managed instance is currently in compliance with having Azure AD-
only authentication enabled.
The Azure Policy can prevent a new logical server or managed instance from being created without having
Azure AD-only authentication enabled, but the feature can be changed after server or managed instance
creation. If a user has disabled Azure AD-only authentication after the server or managed instance was created,
the compliance state will be Non-compliant if the Azure Policy is set to Deny .
Limitations
Azure Policy enforces Azure AD-only authentication during logical server or managed instance creation.
Once the server is created, authorized Azure AD users with special roles (for example, SQL Security Manager)
can disable the Azure AD-only authentication feature. The Azure Policy allows it, but in this case, the server or
managed instance will be listed in the compliance report as Non-compliant and the report will indicate the
server or managed instance name.
For more remarks, known issues, and permissions needed, see Azure AD-only authentication.
Next steps
Using Azure Policy to enforce Azure Active Directory only authentication with Azure SQL
User-assigned managed identity in Azure AD for
Azure SQL
Permissions
Once the UMI is created, some permissions are needed to allow the UMI to read from Microsoft Graph as the
server identity. Grant the permissions below, or give the UMI the Directory Readers role. These permissions
should be granted before provisioning an Azure SQL logical server or managed instance. Once the permissions
are granted to the UMI, they're enabled for all servers or instances that are created with the UMI assigned as a
server identity.
IMPORTANT
Only a Global Administrator or Privileged Role Administrator can grant these permissions.
# Minimal sketch (an assumption, not the original script): assign the Directory Readers role to the UMI with the
# AzureAD module, the alternative mentioned above. Replace the placeholder values before running.
Import-Module AzureAD
$tenantId = '<tenantId>' # Your Azure AD tenant ID
Connect-AzureAD -TenantId $tenantId
# Look up the UMI's service principal by display name.
$MSI = Get-AzureADServicePrincipal -SearchString '<UMI display name>'
$role = Get-AzureADDirectoryRole | Where-Object { $_.DisplayName -eq 'Directory Readers' }
Add-AzureADDirectoryRoleMember -ObjectId $role.ObjectId -RefObjectId $MSI.ObjectId
If you have more UMIs with similar names, you have to use the proper array index, for example $MSI[0] (the full
script refers to this as $AAD_SP.ObjectId[0]).
NOTE
You can't change the SQL server administrator or password, nor the Azure AD admin by re-running the provisioning
command for the ARM template.
Next steps
Create an Azure SQL logical server using a user-assigned managed identity
Create an Azure SQL Managed Instance with a user-assigned managed identity
Transparent data encryption for SQL Database, SQL
Managed Instance, and Azure Synapse Analytics
Applies to: Azure SQL Database Azure SQL Managed Instance Azure Synapse Analytics
Transparent data encryption (TDE) helps protect Azure SQL Database, Azure SQL Managed Instance, and Azure
Synapse Analytics against the threat of malicious offline activity by encrypting data at rest. It performs real-time
encryption and decryption of the database, associated backups, and transaction log files at rest without
requiring changes to the application. By default, TDE is enabled for all newly deployed Azure SQL Databases and
must be manually enabled for older databases of Azure SQL Database. For Azure SQL Managed Instance, TDE is
enabled at the instance level and for newly created databases. TDE must be manually enabled for Azure Synapse
Analytics.
NOTE
This article applies to Azure SQL Database, Azure SQL Managed Instance, and Azure Synapse Analytics (dedicated SQL
pools (formerly SQL DW)). For documentation on Transparent Data Encryption for dedicated SQL pools inside Synapse
workspaces, see Azure Synapse Analytics encryption.
NOTE
Some items considered customer content, such as table names, object names, and index names, may be transmitted in
log files for support and troubleshooting by Microsoft.
TDE performs real-time I/O encryption and decryption of the data at the page level. Each page is decrypted
when it's read into memory and then encrypted before being written to disk. TDE encrypts the storage of an
entire database by using a symmetric key called the Database Encryption Key (DEK). On database startup, the
encrypted DEK is decrypted and then used for decryption and re-encryption of the database files in the SQL
Server database engine process. DEK is protected by the TDE protector. TDE protector is either a service-
managed certificate (service-managed transparent data encryption) or an asymmetric key stored in Azure Key
Vault (customer-managed transparent data encryption).
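To see which databases are encrypted and inspect the state of their encryption keys, an illustrative T-SQL check (not part of the original article) is:
-- Encryption flag per database.
SELECT name, is_encrypted FROM sys.databases;
-- DEK details for encrypted databases (encryption_state 3 = encrypted).
SELECT DB_NAME(database_id) AS database_name, encryption_state, key_algorithm, key_length
FROM sys.dm_database_encryption_keys;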
For Azure SQL Database and Azure Synapse, the TDE protector is set at the server level and is inherited by all
databases associated with that server. For Azure SQL Managed Instance, the TDE protector is set at the instance
level and it is inherited by all encrypted databases on that instance. The term server refers both to server and
instance throughout this document, unless stated differently.
IMPORTANT
All newly created databases in SQL Database are encrypted by default by using service-managed transparent data
encryption. Existing SQL databases created before May 2017 and SQL databases created through restore, geo-replication,
and database copy are not encrypted by default. Existing SQL Managed Instance databases created before February 2019
are not encrypted by default. SQL Managed Instance databases created through restore inherit encryption status from
the source. To restore an existing TDE-encrypted database, the required TDE certificate must first be imported into the
SQL Managed Instance.
NOTE
TDE cannot be used to encrypt system databases, such as the master database, in Azure SQL Database and Azure SQL
Managed Instance. The master database contains objects that are needed to perform the TDE operations on the user
databases. It is recommended to not store any sensitive data in the system databases. Infrastructure encryption is now
being rolled out, which encrypts the system databases, including master.
When you export a TDE-protected database, the exported content of the database isn't encrypted. This exported
content is stored in unencrypted BACPAC files. Be sure to protect the BACPAC files appropriately and enable TDE
after import of the new database is finished.
For example, if the BACPAC file is exported from a SQL Server instance, the imported content of the new
database isn't automatically encrypted. Likewise, if the BACPAC file is imported to a SQL Server instance, the
new database also isn't automatically encrypted.
The one exception is when you export a database to and from SQL Database. TDE is enabled on the new
database, but the BACPAC file itself still isn't encrypted.
You set the TDE master key, known as the TDE protector, at the server or instance level. To use TDE with BYOK
support and protect your databases with a key from Key Vault, open the TDE settings under your server.
Azure SQL transparent data encryption with
customer-managed key
Applies to: Azure SQL Database Azure SQL Managed Instance Azure Synapse Analytics
Azure SQL transparent data encryption (TDE) with customer-managed key enables the Bring Your Own Key (BYOK) scenario for data protection at rest, and allows organizations to implement separation of duties in the management of keys and data. With customer-managed TDE, the customer is responsible for, and in full control of, key lifecycle management (key creation, upload, rotation, deletion), key usage permissions, and auditing of operations on keys.
In this scenario, the key used for encryption of the Database Encryption Key (DEK), called TDE protector, is a
customer-managed asymmetric key stored in a customer-owned and customer-managed Azure Key Vault (AKV),
a cloud-based external key management system. Key Vault is highly available and scalable secure storage for
RSA cryptographic keys, optionally backed by FIPS 140-2 Level 2 validated hardware security modules (HSMs).
It doesn't allow direct access to a stored key, but provides encryption and decryption services using the key to authorized entities. The key can be generated by the key vault, imported, or transferred to the key vault from an on-premises HSM device.
For Azure SQL Database and Azure Synapse Analytics, the TDE protector is set at the server level and is inherited
by all encrypted databases associated with that server. For Azure SQL Managed Instance, the TDE protector is set
at the instance level and is inherited by all encrypted databases on that instance. The term server refers both to
a server in SQL Database and Azure Synapse and to a managed instance in SQL Managed Instance throughout
this document, unless stated differently.
NOTE
This article applies to Azure SQL Database, Azure SQL Managed Instance, and Azure Synapse Analytics (dedicated SQL
pools (formerly SQL DW)). For documentation on transparent data encryption for dedicated SQL pools inside Synapse
workspaces, see Azure Synapse Analytics encryption.
IMPORTANT
For those using service-managed TDE who would like to start using customer-managed TDE, data remains encrypted
during the process of switching over, and there is no downtime nor re-encryption of the database files. Switching from a
service-managed key to a customer-managed key only requires re-encryption of the DEK, which is a fast and online
operation.
NOTE
To provide Azure SQL customers with two layers of encryption of data at rest, infrastructure encryption (using AES-256
encryption algorithm) with platform-managed keys is being rolled out. This provides an additional layer of encryption at rest along with TDE with customer-managed keys, which is already available. For Azure SQL Database and Managed
Instance, all databases, including the master database and other system databases, will be encrypted when infrastructure
encryption is turned on. At this time, customers must request access to this capability. If you are interested in this
capability, contact [email protected].
Benefits of the customer-managed TDE
Customer-managed TDE provides the following benefits to the customer:
Full and granular control over usage and management of the TDE protector;
Transparency of the TDE protector usage;
Ability to implement separation of duties in the management of keys and data within the organization;
Key Vault administrator can revoke key access permissions to make encrypted database inaccessible;
Central management of keys in AKV;
Greater trust from your end customers, since AKV is designed such that Microsoft can't see or extract encryption keys.
In order for the Azure SQL server to use a TDE protector stored in AKV for encryption of the DEK, the key vault administrator needs to grant the following access rights to the server using its unique Azure Active Directory (Azure AD) identity:
get - for retrieving the public part and properties of the key in the Key Vault
wrapKey - to be able to protect (encrypt) DEK
unwrapKey - to be able to unprotect (decrypt) DEK
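As a sketch, these permissions might be granted with Azure PowerShell against an access-policy key vault. The resource names below are placeholders, and the server's Azure AD identity is assumed to already be enabled.
# Grant the server's identity the get, wrapKey, and unwrapKey key permissions on the vault.
$server = Get-AzSqlServer -ResourceGroupName "ContosoRG" -ServerName "contososerver"
Set-AzKeyVaultAccessPolicy -VaultName "ContosoKeyVault" `
    -ObjectId $server.Identity.PrincipalId `
    -PermissionsToKeys get,wrapKey,unwrapKey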
The key vault administrator can also enable logging of key vault audit events, so they can be audited later.
When the server is configured to use a TDE protector from AKV, the server sends the DEK of each TDE-enabled database to the key vault for encryption. The key vault returns the encrypted DEK, which is then stored in the user database.
When needed, the server sends the protected DEK to the key vault for decryption.
Auditors can use Azure Monitor to review key vault AuditEvent logs, if logging is enabled.
NOTE
It may take around 10 minutes for any permission changes to take effect for the key vault. This includes revoking access
permissions to the TDE protector in AKV, and users within this time frame may still have access permissions.
IMPORTANT
Both soft-delete and purge protection must be enabled on the key vault when configuring customer-managed TDE
on a new or existing server or managed instance.
Soft-delete and purge protection are important features of Azure Key Vault that allow recovery of deleted vaults
and deleted key vault objects, reducing the risk of a user accidentally or maliciously deleting a key or a key vault.
Soft-deleted resources are retained for 90 days, unless recovered or purged by the customer. The recover
and purge actions have their own permissions associated in a key vault access policy. The soft-delete
feature is on by default for new key vaults and can also be enabled using the Azure portal, PowerShell or
Azure CLI.
Purge protection can be turned on using Azure CLI or PowerShell. When purge protection is enabled, a
vault or an object in the deleted state can't be purged until the retention period has passed. The default
retention period is 90 days, but is configurable from 7 to 90 days through the Azure portal.
Azure SQL requires soft-delete and purge protection to be enabled on the key vault containing the
encryption key being used as the TDE protector for the server or managed instance. This helps prevent
the scenario of accidental or malicious key vault or key deletion that can lead to the database going into
Inaccessible state.
When configuring the TDE protector on an existing server or during server creation, Azure SQL validates
that the key vault being used has soft-delete and purge protection turned on. If soft-delete and purge
protection aren't enabled on the key vault, the TDE protector setup fails with an error. In this case, soft-
delete and purge protection must first be enabled on the key vault and then the TDE protector setup
should be performed.
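As a sketch, soft-delete and purge protection can be enabled and verified with Azure PowerShell. The names below are placeholders; note that purge protection can't be turned off once enabled.
# Enable purge protection on an existing key vault, then confirm both properties.
Update-AzKeyVault -ResourceGroupName "ContosoRG" -VaultName "ContosoKeyVault" -EnablePurgeProtection
Get-AzKeyVault -VaultName "ContosoKeyVault" | Select-Object VaultName, EnableSoftDelete, EnablePurgeProtection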
Requirements for configuring TDE protector
The TDE protector can only be an asymmetric RSA or RSA HSM key. The supported key lengths are 2048 bits and 3072 bits.
The key activation date (if set) must be a date and time in the past. Expiration date (if set) must be a future
date and time.
The key must be in the Enabled state.
If you're importing an existing key into the key vault, make sure to provide it in one of the supported file formats (.pfx, .byok, or .backup).
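For example, a key that meets these requirements could be created directly in the vault with Azure PowerShell. This is a minimal sketch with placeholder names.
# Create a 3072-bit RSA software-protected key in the Enabled state, with no activation
# or expiration dates, for use as the TDE protector.
$key = Add-AzKeyVaultKey -VaultName "ContosoKeyVault" -Name "ContosoTdeKey" -Destination Software -KeyType RSA -Size 3072
$key.Id   # key URI to supply when configuring the TDE protector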
NOTE
Azure SQL now supports using an RSA key stored in a Managed HSM as the TDE protector. Azure Key Vault Managed HSM is
a fully managed, highly available, single-tenant, standards-compliant cloud service that enables you to safeguard
cryptographic keys for your cloud applications, using FIPS 140-2 Level 3 validated HSMs. Learn more about Managed
HSMs.
NOTE
An issue with Thales CipherTrust Manager versions prior to v2.8.0 prevents keys newly imported into Azure Key Vault
from being used with Azure SQL Database or Azure SQL Managed Instance for customer-managed TDE scenarios. More
details about this issue can be found here. For such cases, please wait 24 hours after importing the key into key vault to
begin using it as TDE protector for the server or managed instance. This issue has been resolved in Thales CipherTrust
Manager v2.8.0.
NOTE
Automated rotation of the TDE protector feature is currently in public preview for SQL Database and Managed Instance.
Geo -replication considerations when configuring automated rotation of the TDE protector
To avoid issues while establishing or during geo-replication, when automatic rotation of the TDE protector is
enabled on the primary or secondary server, it's important to follow these rules when configuring geo-
replication:
Both the primary and secondary servers must have Get, wrapKey and unwrapKey permissions to the
primary server's key vault (key vault that holds the primary server's TDE protector key).
For a server with automated key rotation enabled, before initiating geo-replication, add the encryption
key being used as TDE protector on the primary server to the secondary server. The secondary server
requires access to the key in the same key vault being used with the primary server (and not another key
with the same key material). Alternatively, before initiating geo-replication, ensure that the secondary
server's managed identity (user-assigned or system-assigned) has required permissions on the primary
server's key vault, and the system will attempt to add the key to the secondary server.
For an existing geo-replication setup, prior to enabling automated key rotation on the primary server, add
the encryption key being used as TDE protector on the primary server to the secondary server. The
secondary server requires access to the key in the same key vault being used with the primary server
(and not another key with the same key material). Alternatively, before enabling automated key, ensure
that the secondary server's managed identity (user-assigned or system-assigned) has required
permissions on the primary server's key vault, and the system will attempt to add the key to the
secondary server.
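As a sketch, the key used as the primary server's TDE protector can be added to the secondary server with Azure PowerShell before geo-replication is established or rotation is enabled. The names and key URI below are placeholders.
# Give the secondary server access to the same key (same key vault URI) that the primary
# server uses as its TDE protector.
Add-AzSqlServerKeyVaultKey -ResourceGroupName "SecondaryRG" -ServerName "secondaryserver" `
    -KeyId "https://contosokeyvault.vault.azure.net/keys/ContosoTdeKey/<key-version>"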
NOTE
If the database is inaccessible due to an intermittent networking outage, there is no action required and the databases will
come back online automatically.
After access to the key is restored, taking the database back online requires extra time and steps, which may vary based on the time elapsed without access to the key and the size of the data in the database:
If key access is restored within 30 minutes, the database will autoheal within the next hour.
If key access is restored after more than 30 minutes, autoheal isn't possible, and bringing the database back online requires extra steps on the portal and can take a significant amount of time depending on the size of the database. Once the database is back online, previously configured server-level settings such as failover group configuration, point-in-time restore history, and tags will be lost. Therefore, it's recommended to implement a notification system that allows you to identify and address the underlying key access issues within 30 minutes.
Below is a view of the extra steps required on the portal to bring an inaccessible database back online.
Accidental TDE protector access revocation
It may happen that someone with sufficient access rights to the key vault accidentally disables server access to
the key by:
revoking the key vault's get, wrapKey, unwrapKey permissions from the server
deleting the key
deleting the key vault
changing the key vault's firewall rules
deleting the managed identity of the server in Azure Active Directory
Learn more about the common causes for database to become inaccessible.
IMPORTANT
At any given moment, no more than one TDE protector can be set for a server. It's the key marked with "Make the key the default TDE protector" in the Azure portal blade. However, multiple additional keys can be linked to a server without marking them as the TDE protector. These keys aren't used for protecting the DEK, but can be used during a restore from a backup, if the backup file is encrypted with a key with the corresponding thumbprint.
If the key that is needed for restoring a backup is no longer available to the target server, the following error message is returned on the restore attempt: "Target server <Servername> doesn't have access to all AKV URIs created between <Timestamp #1> and <Timestamp #2>. Retry operation after restoring all AKV URIs."
To mitigate it, run the Get-AzSqlServerKeyVaultKey cmdlet for the target server or Get-
AzSqlInstanceKeyVaultKey for the target managed instance to return the list of available keys and identify the
missing ones. To ensure all backups can be restored, make sure the target server for the restore has access to all
of the keys needed. These keys don't need to be marked as the TDE protector.
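For example, the keys currently available to the target can be listed as follows. This is a minimal sketch with placeholder names.
# List the Key Vault keys the target logical server can use, then compare against the keys
# referenced in the error message to find the missing ones.
Get-AzSqlServerKeyVaultKey -ResourceGroupName "ContosoRG" -ServerName "contososerver"

# Equivalent check for a target managed instance.
Get-AzSqlInstanceKeyVaultKey -ResourceGroupName "ContosoRG" -InstanceName "contosomi"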
To learn more about backup recovery for SQL Database, see Recover a database in SQL Database. To learn more
about backup recovery for dedicated SQL pools in Azure Synapse Analytics, see Recover a dedicated SQL pool.
For SQL Server's native backup/restore with SQL Managed Instance, see Quickstart: Restore a database to SQL
Managed Instance.
Another consideration for log files: Backed up log files remain encrypted with the original TDE protector, even if
it was rotated and the database is now using a new TDE protector. At restore time, both keys will be needed to
restore the database. If the log file is using a TDE protector stored in Azure Key Vault, this key will be needed at
restore time, even if the database has been changed to use service-managed TDE in the meantime.
To test a failover, follow the steps in Active geo-replication overview. Testing failover should be done regularly to
validate that SQL Database has maintained access permission to both key vaults.
An Azure SQL Database server and a managed instance in one region can now be linked to a key vault in any other region. The server and key vault don't have to be co-located in the same region. With this, for simplicity, the primary and secondary servers can be connected to the same key vault (in any region). This will help avoid scenarios where key material may be out of sync if separate key vaults are used for both the servers.
Azure Key Vault has multiple layers of redundancy in place to make sure that your keys and key vaults remain available in case of service or region failures. For more information, see Azure Key Vault availability and redundancy.
Azure Policy for customer-managed TDE
Azure Policy can be used to enforce customer-managed TDE during the creation or update of an Azure SQL
Database server or Azure SQL Managed Instance. With this policy in place, any attempts to create or update a
logical server in Azure or managed instance will fail if it isn't configured with a customer-managed key. The
Azure Policy can be applied to the whole Azure subscription, or just within a resource group.
For more information on Azure Policy, see What is Azure Policy? and Azure Policy definition structure.
The following two built-in policies are supported for customer-managed TDE in Azure Policy:
SQL servers should use customer-managed keys to encrypt data at rest
SQL managed instances should use customer-managed keys to encrypt data at rest
The customer-managed TDE policy can be managed by going to the Azure portal, and searching for the Policy
service. Under Definitions , search for customer-managed key.
There are three effects for these policies:
Audit - The default setting, and will only capture an audit report in the Azure Policy activity logs
Deny - Prevents logical server or managed instance creation or update without a customer-managed key
configured
Disabled - Will disable the policy, and won't restrict users from creating or updating a logical server or
managed instance without customer-managed TDE enabled
If the Azure Policy for customer-managed TDE is set to Deny , Azure SQL logical server or managed instance
creation will fail. The details of this failure will be recorded in the Activity log of the resource group.
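As a sketch, the built-in policy can be assigned at a resource group scope with Azure PowerShell. The subscription ID and resource group below are placeholders, and the property path used to match the display name can vary by Az.Resources module version.
# Find the built-in policy definition by display name and assign it to a resource group.
$definition = Get-AzPolicyDefinition |
    Where-Object { $_.Properties.DisplayName -eq 'SQL servers should use customer-managed keys to encrypt data at rest' }

New-AzPolicyAssignment -Name 'Require-CMK-for-SQL' `
    -Scope '/subscriptions/<subscription-id>/resourceGroups/ContosoRG' `
    -PolicyDefinition $definition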
IMPORTANT
Earlier versions of built-in policies for customer-managed TDE containing the AuditIfNotExist effect have been
deprecated. Existing policy assignments using the deprecated policies are not impacted and will continue to work as
before.
Next steps
You may also want to check the following PowerShell sample scripts for the common operations with customer-
managed TDE:
Rotate the transparent data encryption protector for SQL Database
Remove a transparent data encryption (TDE) protector for SQL Database
Manage transparent data encryption in SQL Managed Instance with your own key using PowerShell
Additionally, enable Microsoft Defender for SQL to secure your databases and their data, with functionalities for
discovering and mitigating potential database vulnerabilities, and detecting anomalous activities that could
indicate a threat to your databases.
Managed identities for transparent data encryption
with BYOK
NOTE
Assigning a user-assigned managed identity for Azure SQL logical servers and Managed Instances is in public preview .
Managed identities in Azure Active Directory (Azure AD) provide Azure services with an automatically managed
identity in Azure AD. This identity can be used to authenticate to any service that supports Azure AD
authentication, such as Azure Key Vault, without any credentials in the code. For more information, see Managed
identity types in Azure.
Managed Identities can be of two types:
System-assigned
User-assigned
Enabling a system-assigned managed identity for Azure SQL logical servers and managed instances is already supported today. Assigning a user-assigned managed identity to the server is now in public preview.
For TDE with customer-managed key (CMK) in Azure SQL, a managed identity on the server is used for
providing access rights to the server on the key vault. For instance, the system-assigned managed identity of the
server should be provided with key vault permissions prior to enabling TDE with CMK on the server.
In addition to the system-assigned managed identity that is already supported for TDE with CMK, a user-
assigned managed identity (UMI) that is assigned to the server can be used to allow the server to access the key
vault. A prerequisite to enable key vault access is to ensure the user-assigned managed identity has been
provided the Get, wrapKey and unwrapKey permissions on the key vault. Since the user-assigned managed
identity is a standalone resource that can be created and granted access to the key vault, TDE with a customer-
managed key can now be enabled at creation time for the server or database.
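As a sketch, a logical server can be created with a user-assigned managed identity and a customer-managed TDE protector in a single step. The names, key URI, and identity resource ID below are placeholders; the identity is assumed to already have the required key vault permissions, and these parameters assume a recent Az.Sql module version.
$umi = "/subscriptions/<subscription-id>/resourceGroups/ContosoRG/providers/Microsoft.ManagedIdentity/userAssignedIdentities/ContosoUMI"

# Create the server with the user-assigned identity as its primary identity and a CMK TDE protector.
New-AzSqlServer -ResourceGroupName "ContosoRG" -ServerName "contososerver" -Location "westus2" `
    -SqlAdministratorCredentials (Get-Credential) `
    -IdentityType "UserAssigned" `
    -UserAssignedIdentityId $umi `
    -PrimaryUserAssignedIdentityId $umi `
    -KeyId "https://contosokeyvault.vault.azure.net/keys/ContosoTdeKey/<key-version>"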
NOTE
For assigning a user-assigned managed identity to the logical server or managed instance, a user must have the SQL
Server Contributor or SQL Managed Instance Contributor Azure RBAC role along with any other Azure RBAC role
containing the Microsoft.ManagedIdentity/userAssignedIdentities/*/assign/action action.
IMPORTANT
The primary user-assigned managed identity being used for TDE with CMK should not be deleted from Azure. Deleting
this identity will lead to the server losing access to key vault and databases becoming inaccessible.
Next steps
Create Azure SQL database configured with user-assigned managed identity and customer-managed TDE
Overview of business continuity with Azure SQL
Database & Azure SQL Managed Instance
Auto-failover groups: RTO 1 h, RPO 5 s
NOTE
Manual database failover refers to failover of a single database to its geo-replicated secondary using the unplanned
mode. See the table earlier in this article for details of the auto-failover RTO and RPO.
NOTE
When the datacenter comes back online the old primaries automatically reconnect to the new primary and become
secondary databases. If you need to relocate the primary back to the original region, you can initiate a planned failover
manually (failback).
NOTE
If the datacenter comes back online before you switch your application over to the recovered database, you can cancel
the recovery.
NOTE
If you are using a failover group and connect to the databases using the read-write listener, the redirection after failover
will happen automatically and transparently to the application.
Next steps
For a discussion of application design considerations for single databases and for elastic pools, see Design an
application for cloud disaster recovery and Elastic pool disaster recovery strategies.
High availability for Azure SQL Database and SQL
Managed Instance
Whenever the database engine or the operating system is upgraded, or a failure is detected, Azure Service
Fabric will move the stateless sqlservr.exe process to another stateless compute node with sufficient free
capacity. Data in Azure Blob storage is not affected by the move, and the data/log files are attached to the newly
initialized sqlservr.exe process. This process guarantees 99.99% availability, but a heavy workload may
experience some performance degradation during the transition since the new sqlservr.exe process starts with a cold cache.
IMPORTANT
For General Purpose tier the zone-redundant configuration is Generally Available in the following regions: West Europe,
North Europe, West US 2, and France Central. This is in preview in the following regions: East US, East US 2, Southeast
Asia, Australia East, Japan East, and UK South.
NOTE
Zone-redundant configuration is not available in SQL Managed Instance. In SQL Database this feature is only available
when the Gen5 hardware is selected.
The zone-redundant version of the high availability architecture is illustrated by the following diagram:
IMPORTANT
At least 1 high availability compute replica and the use of zone-redundant or geo-zone-redundant backup storage is
required for enabling the zone redundant configuration for Hyperscale.
Azure PowerShell
Azure CLI
Specify the -ZoneRedundant parameter to enable zone redundancy for your Hyperscale database by using Azure
PowerShell. The database must have at least 1 high availability replica and zone-redundant backup storage must
be specified.
To enable zone redundancy using Azure PowerShell, use the following example command:
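A minimal sketch of such a command is shown here; the resource names are placeholders, and parameter availability assumes a recent Az.Sql module version.
# Create a zone-redundant Hyperscale database with one high availability replica and
# zone-redundant backup storage.
New-AzSqlDatabase -ResourceGroupName "myResourceGroup" -ServerName $sourceserver -DatabaseName "mySampleDatabase" `
    -Edition "Hyperscale" -Vcore 4 -ComputeGeneration "Gen5" `
    -ZoneRedundant -BackupStorageRedundancy Zone -HighAvailabilityReplicaCount 1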
Azure PowerShell
Azure CLI
Specify the -ZoneRedundant parameter to enable zone redundancy for your Hyperscale database secondary. The
secondary database must have at least 1 high availability replica and zone-redundant backup storage must be
specified.
To create your zone redundant database using Azure PowerShell, use the following example command:
New-AzSqlDatabaseSecondary -ResourceGroupName "myResourceGroup" -ServerName $sourceserver -DatabaseName "databaseName" `
    -PartnerResourceGroupName "myPartnerResourceGroup" -PartnerServerName $targetserver `
    -PartnerDatabaseName "zoneRedundantCopyOfMySampleDatabase" -ZoneRedundant -BackupStorageRedundancy Zone `
    -HighAvailabilityReplicaCount 1
Azure PowerShell
Azure CLI
Specify the -ZoneRedundant parameter to enable zone redundancy for your Hyperscale database copy. The
database copy must have at least 1 high availability replica and zone-redundant backup storage must be
specified.
To create your zone redundant database using Azure PowerShell, use the following example command:
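A minimal sketch of such a copy command is shown here; the resource names are placeholders, and parameter availability assumes a recent Az.Sql module version.
# Create a zone-redundant copy of a Hyperscale database with one high availability replica
# and zone-redundant backup storage.
New-AzSqlDatabaseCopy -ResourceGroupName "myResourceGroup" -ServerName $sourceserver -DatabaseName "mySampleDatabase" `
    -CopyResourceGroupName "myPartnerResourceGroup" -CopyServerName $targetserver -CopyDatabaseName "zoneRedundantCopyOfMySampleDatabase" `
    -ZoneRedundant -BackupStorageRedundancy Zone -HighAvailabilityReplicaCount 1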
Azure PowerShell
Azure CLI
Use the following example command to check the value of "ZoneRedundant" property for master database.
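A minimal sketch of such a check (placeholder names):
# Return the ZoneRedundant property for the master database on the logical server.
Get-AzSqlDatabase -ResourceGroupName "myResourceGroup" -ServerName $sourceserver -DatabaseName "master" |
    Select-Object DatabaseName, ZoneRedundant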
IMPORTANT
The Failover command is not available for readable secondary replicas of Hyperscale databases.
Conclusion
Azure SQL Database and Azure SQL Managed Instance feature a built-in high availability solution that is deeply integrated with the Azure platform. It is dependent on Service Fabric for failure detection and recovery, on Azure Blob storage for data protection, and on Availability Zones for higher fault tolerance (as mentioned earlier in this document, not yet applicable to Azure SQL Managed Instance). In addition, SQL Database and SQL Managed
Instance use the Always On availability group technology from the SQL Server instance for replication and
failover. The combination of these technologies enables applications to fully realize the benefits of a mixed
storage model and support the most demanding SLAs.
Next steps
Learn about Azure Availability Zones
Learn about Service Fabric
Learn about Azure Traffic Manager
Learn How to initiate a manual failover on SQL Managed Instance
For more options for high availability and disaster recovery, see Business Continuity
Accelerated Database Recovery in Azure SQL
NOTE
ADR is enabled by default in Azure SQL Database and Azure SQL Managed Instance. Disabling ADR in Azure SQL
Database and Azure SQL Managed Instance is not supported.
Overview
The primary benefits of ADR are:
Fast and consistent database recovery
With ADR, long running transactions do not impact the overall recovery time, enabling fast and
consistent database recovery irrespective of the number of active transactions in the system or their
sizes.
Instantaneous transaction rollback
With ADR, transaction rollback is instantaneous, irrespective of the time that the transaction has been active or the number of updates that it has performed.
Aggressive log truncation
With ADR, the transaction log is aggressively truncated, even in the presence of active long-running
transactions, which prevents it from growing out of control.
Analysis phase
The process remains the same as before with the addition of reconstructing SLOG and copying log
records for non-versioned operations.
Redo phase
Broken into two phases:
Phase 1
Redo from SLOG (oldest uncommitted transaction up to last checkpoint). Redo is a fast operation
as it only needs to process a few records from the SLOG.
Phase 2
Redo from Transaction Log starts from last checkpoint (instead of oldest uncommitted transaction)
Undo phase
The Undo phase with ADR completes almost instantaneously by using SLOG to undo non-versioned
operations and Persisted Version Store (PVS) with Logical Revert to perform row level version-based
Undo.
Next steps
Accelerated database recovery
Troubleshooting Accelerated Database Recovery (ADR) on SQL Server.
Long-term retention - Azure SQL Database and
Azure SQL Managed Instance
Many applications have regulatory, compliance, or other business purposes that require you to retain database
backups beyond the 7-35 days provided by Azure SQL Database and Azure SQL Managed Instance automatic
backups. By using the long-term retention (LTR) feature, you can store specified SQL Database and SQL
Managed Instance full backups in Azure Blob storage with configured redundancy for up to 10 years. LTR
backups can then be restored as a new database.
Long-term retention can be enabled for Azure SQL Database and for Azure SQL Managed Instance. This article
provides a conceptual overview of long-term retention. To configure long-term retention, see Configure Azure
SQL Database LTR and Configure Azure SQL Managed Instance LTR.
In Azure SQL Managed Instance, you can use SQL Agent jobs to schedule copy-only database backups as an
alternative to LTR beyond 35 days.
NOTE
Any change to the LTR policy applies only to future backups. For example, if weekly backup retention (W), monthly backup
retention (M), or yearly backup retention (Y) is modified, the new retention setting will only apply to new backups. The
retention of existing backups will not be modified. If your intention is to delete old LTR backups before their retention
period expires, you will need to manually delete the backups.
If you modify the above policy and set W=0 (no weekly backups), Azure only retains the monthly and yearly
backups. No weekly backups are stored under the LTR policy. The storage amount needed to keep these backups
reduces accordingly.
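As a sketch, an LTR policy can be configured with Azure PowerShell. The resource names below are placeholders, and retention values are ISO 8601 durations.
# Keep weekly backups for 4 weeks, monthly backups for 12 months, and the backup taken in
# week 1 of each year for 5 years.
Set-AzSqlDatabaseBackupLongTermRetentionPolicy -ResourceGroupName "myResourceGroup" -ServerName "contososerver" `
    -DatabaseName "mydatabase" -WeeklyRetention P4W -MonthlyRetention P12M -YearlyRetention P5Y -WeekOfYear 1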
IMPORTANT
The timing of individual LTR backups is controlled by Azure. You cannot manually create an LTR backup or control the
timing of the backup creation. After configuring an LTR policy, it may take up to 7 days before the first LTR backup will
show up on the list of available backups.
If you delete a server or a managed instance, all databases on that server or managed instance are also deleted and can't
be recovered. You can't restore a deleted server or managed instance. However, if you had configured LTR for a database
or managed instance, LTR backups are not deleted, and they can be used to restore databases on a different server or
managed instance in the same subscription, to a point in time when an LTR backup was taken.
NOTE
When the original primary database recovers from an outage that caused the failover, it will become a new secondary.
Therefore, the backup creation will not resume and the existing LTR policy will not take effect until it becomes the primary
again.
NOTE
If you are using LTR backups to meet compliance or other mission-critical requirements, consider conducting periodic
recovery drills to verify that LTR backups can be restored, and that the restore results in expected database state.
Next steps
Because database backups protect data from accidental corruption or deletion, they're an essential part of any
business continuity and disaster recovery strategy.
To learn about the other SQL Database business-continuity solutions, see Business continuity overview.
To learn about service-generated automatic backups, see automatic backups
Monitoring and performance tuning in Azure SQL
Database and Azure SQL Managed Instance
* For solutions requiring low latency monitoring, Azure SQL Analytics (preview) is not recommended.
NOTE
Databases with extremely low usage may show in the portal with less than actual usage. Due to the way telemetry is emitted, when a double value is converted to the nearest integer, certain usage amounts less than 0.5 are rounded to 0, which causes a loss of granularity in the emitted telemetry. For details, see Low database and elastic pool metrics rounding to zero.
Azure SQL Database and Azure SQL Managed Instance resource monitoring
You can quickly monitor a variety of resource metrics in the Azure portal in the Metrics view. These metrics
enable you to see if a database is approaching the limits of CPU, memory, IO, or storage resources. High DTU,
CPU or IO utilization may indicate that your workload needs more resources. It might also indicate that queries
need to be optimized. See Microsoft.Sql/servers/databases, Microsoft.Sql/servers/elasticPools and
Microsoft.Sql/managedInstances for supported metrics in Azure SQL Database and Azure SQL Managed
Instance.
Database advisors in Azure SQL Database
Azure SQL Database includes database advisors that provide performance tuning recommendations for single
and pooled databases. These recommendations are available in the Azure portal as well as by using PowerShell.
You can also enable automatic tuning so that Azure SQL Database can automatically implement these tuning
recommendations.
Query Performance Insight in Azure SQL Database
Query Performance Insight shows the performance in the Azure portal of top consuming and longest running
queries for single and pooled databases.
Low database and elastic pool metrics rounding to zero
Starting in September 2020, databases with extremely low usage may show in the portal with less than actual usage. Due to the way telemetry is emitted, when a double value is converted to the nearest integer, certain usage amounts less than 0.5 are rounded to 0, which causes a loss of granularity in the emitted telemetry.
For example, consider a 1-minute window with the following four data points: 0.1, 0.1, 0.1, and 0.1. These low values are rounded down to 0, 0, 0, 0 and present an average of 0. If any of the data points are greater than 0.5, for example 0.1, 0.1, 0.9, and 0.1, they are rounded to 0, 0, 1, 0 and show an average of 0.25.
Affected database metrics:
cpu_percent
log_write_percent
workers_percent
sessions_percent
physical_data_read_percent
dtu_consumption_percent
xtp_storage_percent
Affected elastic pool metrics:
cpu_percent
physical_data_read_percent
log_write_percent
memory_usage_percent
data_storage_percent
peak_worker_percent
peak_session_percent
xtp_storage_percent
allocated_data_storage_percent
NOTE
Azure SQL Analytics (preview) is an integration with Azure Monitor, where many monitoring solutions are no longer in
active development. Monitor your SQL deployments with SQL Insights (preview).
Next steps
For more information about intelligent performance recommendations for single and pooled databases, see
Database advisor performance recommendations.
For more information about automatically monitoring database performance with automated diagnostics
and root cause analysis of performance issues, see Azure SQL Intelligent Insights.
Monitor your SQL deployments with SQL Insights (preview)
Monitor Azure SQL Database with Azure Monitor
Monitor Azure SQL Managed Instance with Azure Monitor
Intelligent Insights using AI to monitor and
troubleshoot database performance (preview)
PROPERTY | DETAILS
Observed time range | Start and end time for the period of the detected insight.
Impacted queries and error codes | Query hash or error code. These can be used to easily correlate to affected queries. Metrics that consist of either query duration increase, waiting time, timeout counts, or error codes are provided.
Root cause analysis | Root cause analysis of the issue identified in a human-readable format. Some insights might contain a performance improvement recommendation where possible.
Intelligent Insights shines in discovering and troubleshooting database performance issues. In order to use
Intelligent Insights to troubleshoot database performance issues, see Troubleshoot performance issues with
Intelligent Insights.
NOTE
Intelligent insights is a preview feature, not available in the following regions: West Europe, North Europe, West US 1 and
East US 1.
Detection metrics
Metrics used for detection models that generate Intelligent Insights are based on monitoring:
Query duration
Timeout requests
Excessive wait time
Errored out requests
Query duration and timeout requests are used as primary models in detecting issues with database workload
performance. They're used because they directly measure what is happening with the workload. To detect all
possible cases of workload performance degradation, excessive wait time and errored-out requests are used as
additional models to indicate issues that affect the workload performance.
The system automatically considers changes to the workload and changes in the number of query requests
made to the database to dynamically determine normal and out-of-the-ordinary database performance
thresholds.
All of the metrics are considered together in various relationships through a scientifically derived data model
that categorizes each performance issue detected. Information provided through an intelligent insight includes:
Details of the performance issue detected.
A root cause analysis of the issue detected.
Recommendations on how to improve the performance of the monitored database, where possible.
Query duration
The query duration degradation model analyzes individual queries and detects the increase in the time it takes
to compile and execute a query compared to the performance baseline.
If built-in intelligence detects a significant increase in query compile or query execution time that affects
workload performance, these queries are flagged as query duration performance degradation issues.
The Intelligent Insights diagnostics log outputs the query hash of the query degraded in performance. The query
hash indicates whether the performance degradation was related to query compile or execution time increase,
which increased query duration time.
Timeout requests
The timeout requests degradation model analyzes individual queries and detects any increase in timeouts at the
query execution level and the overall request timeouts at the database level compared to the performance
baseline period.
Some of the queries might time out even before they reach the execution stage. By comparing aborted workers against requests made, built-in intelligence measures and analyzes all queries that reached the database, whether or not they got to the execution stage.
After the number of timeouts for executed queries or the number of aborted request workers crosses the
system-managed threshold, a diagnostics log is populated with intelligent insights.
The insights generated contain the number of timed-out requests and the number of timed-out queries. An indication of whether the performance degradation is related to a timeout increase at the execution stage or at the overall database level is provided. When the increase in timeouts is deemed significant to database performance, these queries are flagged as timeout performance degradation issues.
Errored requests
The errored requests degradation model monitors individual queries and detects an increase in the number of
queries that errored out compared to the baseline period. This model also monitors critical exceptions that
crossed absolute thresholds managed by built-in intelligence. The system automatically considers the number of
query requests made to the database and accounts for any workload changes in the monitored period.
When the measured increase in errored requests relative to the overall number of requests made is deemed
significant to workload performance, affected queries are flagged as errored requests performance degradation
issues.
The Intelligent Insights log outputs the count of errored requests. It indicates whether the performance degradation was related to an increase in errored requests or to crossing a monitored critical exception threshold, along with the measured time of the performance degradation.
If any of the monitored critical exceptions cross the absolute thresholds managed by the system, an intelligent
insight is generated with critical exception details.
Next steps
Learn how to Monitor databases by using SQL Analytics.
Learn how to Troubleshoot performance issues with Intelligent Insights.
Monitor your SQL deployments with SQL Insights
(preview)
Pricing
There is no direct cost for SQL Insights (preview). All costs are incurred by the virtual machines that gather the
data, the Log Analytics workspaces that store the data, and any alert rules configured on the data.
Virtual machines
For virtual machines, you're charged based on the pricing published on the virtual machines pricing page. The
number of virtual machines that you need will vary based on the number of connection strings you want to
monitor. We recommend allocating one virtual machine of size Standard_B2s for every 100 connection strings.
For more information, see Azure virtual machine requirements.
Log Analytics workspaces
For the Log Analytics workspaces, you're charged based on the pricing published on the Azure Monitor pricing
page. The Log Analytics workspaces that SQL Insights uses will incur costs for data ingestion, data retention, and
(optionally) data export.
Exact charges will vary based on the amount of data ingested, retained, and exported. The amount of this data
will vary based on your database activity and the collection settings defined in your monitoring profiles.
Alert rules
For alert rules in Azure Monitor, you're charged based on the pricing published on the Azure Monitor pricing
page. If you choose to create alerts with SQL Insights (preview), you're charged for any alert rules created and
any notifications sent.
Supported versions
SQL Insights (preview) supports the following versions of SQL Server:
SQL Server 2012 and newer
SQL Insights (preview) supports SQL Server running in the following environments:
Azure SQL Database
Azure SQL Managed Instance
SQL Server on Azure Virtual Machines (SQL Server running on virtual machines registered with the SQL
virtual machine provider)
Azure VMs (SQL Server running on virtual machines not registered with the SQL virtual machine provider)
SQL Insights (preview) has no support or has limited support for the following:
Non-Azure instances: SQL Server running on virtual machines outside Azure is not supported.
Azure SQL Database elastic pools: Metrics can't be gathered for elastic pools or for databases within elastic pools.
Azure SQL Database low service tiers: Metrics can't be gathered for databases on Basic, S0, S1, and S2 service tiers.
Azure SQL Database serverless tier: Metrics can be gathered for databases through the serverless compute tier. However, the process of gathering metrics will reset the auto-pause delay timer, preventing the database from entering an auto-paused state.
Secondary replicas: Metrics can be gathered for only a single secondary replica per database. If a database has more than one secondary replica, only one can be monitored.
Authentication with Azure Active Directory: The only supported method of authentication for monitoring is SQL authentication. For SQL Server on Azure Virtual Machines, authentication through Active Directory on a custom domain controller is not supported.
Regional availability
SQL Insights (preview) is available in all Azure regions where Azure Monitor is available, with the exception of
Azure Government and national clouds.
Collected data
SQL Insights performs all monitoring remotely. No agents are installed on the virtual machines running SQL
Server.
SQL Insights uses dedicated monitoring virtual machines to remotely collect data from your SQL resources.
Each monitoring virtual machine has the Azure Monitor agent and the Workload Insights (WLI) extension
installed.
The WLI extension includes the open-source Telegraf agent. SQL Insights uses data collection rules to specify the
data collection settings for Telegraf's SQL Server plug-in.
Different sets of data are available for Azure SQL Database, Azure SQL Managed Instance, and SQL Server. The
following tables describe the available data. You can customize which datasets to collect and the frequency of
collection when you create a monitoring profile.
The tables have the following columns:
Friendly name: Name of the query as shown in the Azure portal when you're creating a monitoring profile.
Configuration name: Name of the query as shown in the Azure portal when you're editing a monitoring profile.
Namespace: Name of the query as found in a Log Analytics workspace. This identifier appears in the InsightsMetrics table on the Namespace property in the Tags column.
DMVs: Dynamic management views that are used to produce the dataset.
Enabled by default: Whether the data is collected by default.
Default collection frequency: How often the data is collected by default.
Data for Azure SQL Database
FRIENDLY NAME | CONFIGURATION NAME | NAMESPACE | DMVS | ENABLED BY DEFAULT | DEFAULT COLLECTION FREQUENCY
Server properties | AzureSQLDBServerProperties | sqlserver_server_properties | sys.dm_os_job_object, sys.database_files, sys.databases, sys.database_service_objectives | Yes | 60 seconds
Performance counters | AzureSQLDBPerformanceCounters | sqlserver_performance | sys.dm_os_performance_counters, sys.databases | Yes | 60 seconds
Resource governance | AzureSQLDBResourceGovernance | sqlserver_db_resource_governance | sys.dm_user_db_resource_governance | Yes | 60 seconds
Requests | AzureSQLDBRequests | sqlserver_requests | sys.dm_exec_sessions, sys.dm_exec_requests, sys.dm_exec_sql_text | No | Not applicable
Schedulers | AzureSQLDBSchedulers | sqlserver_schedulers | sys.dm_os_schedulers | No | Not applicable
Data for Azure SQL Managed Instance
FRIENDLY NAME | CONFIGURATION NAME | NAMESPACE | DMVS | ENABLED BY DEFAULT | DEFAULT COLLECTION FREQUENCY
Server properties | AzureSQLMIServerProperties | sqlserver_server_properties | sys.server_resource_stats | Yes | 60 seconds
Performance counters | AzureSQLMIPerformanceCounters | sqlserver_performance | sys.dm_os_performance_counters, sys.databases | Yes | 60 seconds
Requests | AzureSQLMIRequests | sqlserver_requests | sys.dm_exec_sessions, sys.dm_exec_requests, sys.dm_exec_sql_text | No | Not applicable
Schedulers | AzureSQLMISchedulers | sqlserver_schedulers | sys.dm_os_schedulers | No | Not applicable
Data for SQL Server
FRIENDLY NAME | CONFIGURATION NAME | NAMESPACE | DMVS | ENABLED BY DEFAULT | DEFAULT COLLECTION FREQUENCY
Performance counters | SQLServerPerformanceCounters | sqlserver_performance | sys.dm_os_performance_counters | Yes | 60 seconds
Schedulers | SQLServerSchedulers | sqlserver_schedulers | sys.dm_os_schedulers | No | Not applicable
Requests | SQLServerRequests | sqlserver_requests | sys.dm_exec_sessions, sys.dm_exec_requests, sys.dm_exec_sql_text | No | Not applicable
Availability replica states | SQLServerAvailabilityReplicaStates | sqlserver_hadr_replica_states | sys.dm_hadr_availability_replica_states, sys.availability_replicas, sys.availability_groups, sys.dm_hadr_availability_group_states | No | 60 seconds
Availability database replicas | SQLServerDatabaseReplicaStates | sqlserver_hadr_dbreplica_states | sys.dm_hadr_database_replica_states, sys.availability_replicas | No | 60 seconds
Next steps
For frequently asked questions about SQL Insights (preview), see Frequently asked questions.
Monitoring and performance tuning in Azure SQL Database and Azure SQL Managed Instance
Enable SQL Insights (preview)
NOTE
To enable SQL Insights (preview) by creating the monitoring profile and virtual machine using a resource manager
template, see Resource Manager template samples for SQL Insights (preview).
To learn how to enable SQL Insights (preview), you can also refer to this Data Exposed episode.
NOTE
SQL Insights (preview) does not support the following Azure SQL Database scenarios:
Elastic pools: Metrics cannot be gathered for elastic pools. Metrics cannot be gathered for databases within elastic pools.
Low service tiers: Metrics cannot be gathered for databases on Basic, S0, S1, and S2 service tiers.
SQL Insights (preview) has limited support for the following Azure SQL Database scenarios:
Serverless tier: Metrics can be gathered for databases using the serverless compute tier. However, the process of gathering metrics will reset the auto-pause delay timer, preventing the database from entering an auto-paused state.
Connect to an Azure SQL database with SQL Server Management Studio, Query Editor (preview) in the Azure
portal, or any other SQL client tool.
Run the following script to create a user with the required permissions. Replace user with a username and
mystrongpassword with a strong password.
USE master;
GO
CREATE LOGIN [user] WITH PASSWORD = N'mystrongpassword';
GO
GRANT VIEW SERVER STATE TO [user];
GO
GRANT VIEW ANY DEFINITION TO [user];
GO
SQL Server
Connect to SQL Server on your Azure virtual machine and use SQL Server Management Studio or a similar tool
to run the following script to create the monitoring user with the permissions needed. Replace user with a
username and mystrongpassword with a strong password.
USE master;
GO
CREATE LOGIN [user] WITH PASSWORD = N'mystrongpassword';
GO
GRANT VIEW SERVER STATE TO [user];
GO
GRANT VIEW ANY DEFINITION TO [user];
GO
NOTE
The monitoring profile specifies what data you will collect from the different types of SQL you want to monitor. Each monitoring virtual machine can have only one monitoring profile associated with it. If you need multiple monitoring profiles, you need to create a virtual machine for each.
Depending upon the network settings of your SQL resources, the virtual machines may need to be placed in the
same virtual network as your SQL resources so they can make network connections to collect monitoring data.
IMPORTANT
You need to ensure that network and security configuration allows the monitoring VM to access Key Vault. For more
information, see Access Azure Key Vault behind a firewall and Configure Azure Key Vault networking settings.
The profile will store the information that you want to collect from your SQL systems. It has specific settings for:
Azure SQL Database
Azure SQL Managed Instance
SQL Server running on virtual machines
For example, you might create one profile named SQL Production and another named SQL Staging with
different settings for frequency of data collection, what data to collect, and which workspace to send the data to.
The profile is stored as a data collection rule resource in the subscription and resource group you select. Each
profile needs the following:
Name. Cannot be edited once created.
Location. This is an Azure region.
Log Analytics workspace to store the monitoring data.
Collection settings for the frequency and type of SQL monitoring data to collect.
NOTE
The location of the profile should be in the same location as the Log Analytics workspace you plan to send the monitoring
data to.
Select Create monitoring profile once you've entered the details for your monitoring profile. It can take up to
a minute for the profile to be deployed. If you don't see the new profile listed in Monitoring profile combo
box, select the refresh button and it should appear once the deployment is completed. Once you've selected the
new profile, select the Manage profile tab to add a monitoring machine that will be associated with the profile.
Add monitoring machine
Select Add monitoring machine to open a context panel to choose the virtual machine from which to monitor
your SQL instances and provide the connection strings.
Select the subscription and name of your monitoring virtual machine. If you're using Key Vault to store your
password for the monitoring user, select the Key Vault resources with these secrets and enter the URI and secret
name for the password to be used in the connection strings. See the next section for details on identifying the
connection string for different SQL deployments.
"sqlAzureConnections": [
"Server=mysqlserver1.database.windows.net;Port=1433;Database=mydatabase;User
Id=$username;Password=$password;",
"Server=mysqlserver2.database.windows.net;Port=1433;Database=mydatabase;User
Id=$username;Password=$password;"
]
Get the details from the Connection strings page and the appropriate ADO.NET endpoint for the database.
To monitor a readable secondary, append ;ApplicationIntent=ReadOnly to the connection string. SQL Insights
supports monitoring a single secondary. The collected data will be tagged to reflect primary or secondary.
Azure SQL Managed Instance
TCP connections from the monitoring machine to the IP address and port used by the managed instance must
be allowed by any firewalls or network security groups (NSGs) that may exist on the network path. For details
on IP addresses and ports, see Azure SQL Managed Instance connection types.
Enter the connection string in the form:
"sqlManagedInstanceConnections": [
"Server= mysqlserver1.<dns_zone>.database.windows.net;Port=1433;User Id=$username;Password=$password;",
"Server= mysqlserver2.<dns_zone>.database.windows.net;Port=1433;User Id=$username;Password=$password;"
]
Get the details from the Connection strings page and the appropriate ADO.NET endpoint for the managed
instance. If using managed instance public endpoint, replace port 1433 with 3342.
To monitor a readable secondary, append ;ApplicationIntent=ReadOnly to the connection string. SQL Insights
supports monitoring of a single high-availability (HA) secondary replica for a given primary database. Collected
data will be tagged to reflect Primary or Secondary.
SQL Server
The TCP/IP protocol must be enabled for the SQL Server instance you want to monitor. TCP connections from
the monitoring machine to the IP address and port used by the SQL Server instance must be allowed by any
firewalls or network security groups (NSGs) that may exist on the network path.
If you want to monitor SQL Server configured for high availability (using either availability groups or failover
cluster instances), we recommend monitoring each SQL Server instance in the cluster individually rather than
connecting via an availability group listener or a failover cluster name. This ensures that monitoring data is
collected regardless of the current instance role (primary or secondary).
Enter the connection string in the form:
"sqlVmConnections": [
"Server=SQLServerInstanceIPAddress1;Port=1433;User Id=$username;Password=$password;",
"Server=SQLServerInstanceIPAddress2;Port=1433;User Id=$username;Password=$password;"
]
Use the IP address that the SQL Server instance listens on.
If your SQL Server instance is configured to listen on a non-default port, replace 1433 with that port number in
the connection string. If you're using SQL Server on Azure Virtual Machine, you can see which port to use on the
Security page for the resource.
For any SQL Server instance, you can determine all IP addresses and ports it is listening on by connecting to the
instance and executing the following T-SQL query, as long as there is at least one TCP connection to the instance:
SELECT DISTINCT local_net_address, local_tcp_port
FROM sys.dm_exec_connections
WHERE net_transport = 'TCP'
  AND protocol_type = 'TSQL';
NOTE
If you need to update your monitoring profile or the connection strings on your monitoring VMs, you may do so via the
SQL Insights (preview) Manage profile tab. Once your updates have been saved, the changes will be applied in approximately 5 minutes.
Next steps
See Troubleshooting SQL Insights (preview) if SQL Insights isn't working properly after being enabled.
Create alerts with SQL Insights (preview)
NOTE
To create an alert for SQL Insights (preview) using a resource manager template, see Resource Manager template samples
for SQL Insights (preview).
NOTE
If you have requests for more SQL Insights (preview) alert rule templates, please send feedback using the link at the
bottom of this page or using the SQL Insights (preview) feedback link in the Azure portal.
NOTE
You can also create custom log alert rules by running queries on the data sets in the InsightsMetrics table and then
saving those queries as an alert rule.
Select SQL (preview) from the Insights section of the Azure Monitor menu in the Azure portal. Select Alerts.
The Alerts pane opens on the right side of the page. By default, it will display fired alerts for SQL resources in the selected monitoring profile based on the alert rules you've already created. Select Alert templates, which will display the list of available templates you can use to create an alert rule.
On the Create Alert rule page, review the default settings for the rule and edit them as needed. You can also select an action group to create notifications and actions when the alert rule is triggered. Select Enable alert rule to create the alert rule once you've verified all of its properties.
To deploy the alert rule immediately, select Deploy alert rule. Select View Template if you want to view the rule template before actually deploying it.
If you choose to view the templates, select Deploy from the template page to create the alert rule.
Next steps
Learn more about alerts in Azure Monitor.
Troubleshoot SQL Insights (preview)
NOTE
Make sure that you're trying to collect data from a supported version of SQL. For example, trying to collect data with a
valid profile and connection string but from an unsupported version of Azure SQL Database will result in a Not
collecting status.
SQL Insights (preview) uses the following query to retrieve this information:
InsightsMetrics
| extend Tags = todynamic(Tags)
| extend SqlInstance = tostring(Tags.sql_instance)
| where TimeGenerated > ago(10m) and isnotempty(SqlInstance) and Namespace ==
'sqlserver_server_properties' and Name == 'uptime'
Check if any logs from Telegraf help identify the root cause of the problem. If there are log entries, you can select
Not collecting and check the logs and troubleshooting info for common problems.
If there are no log entries, check the logs on the monitoring virtual machine for the following services installed
by two virtual machine extensions:
Microsoft.Azure.Monitor.AzureMonitorLinuxAgent
Service: mdsd
Microsoft.Azure.Monitor.Workloads.Workload.WLILinuxExtension
Service: wli
Service: ms-telegraf
Service: td-agent-bit-wli
Extension log to check installation failures:
/var/log/azure/Microsoft.Azure.Monitor.Workloads.Workload.WLILinuxExtension/wlilogs.log
If you see the following error log, there's a problem with the mdsd service:
2021-01-27T06:09:28Z [Error] Failed to get config data. Error message: dial unix
/var/run/mdsd/default_fluent.socket: connect: no such file or directory
Telegraf service logs
Service logs: /var/log/ms-telegraf/telegraf.log
To see recent logs: tail -n 100 -f /var/log/ms-telegraf/telegraf.log
To see recent error and warning logs: tail -n 1000 /var/log/ms-telegraf/telegraf.log | grep "E\!\|W!"
The configuration that telegraf uses is generated by the wli service and placed in:
/etc/ms-telegraf/telegraf.d/wli
If a bad configuration is generated, the ms-telegraf service might fail to start. Check if the ms-telegraf service is
running by using this command: service ms-telegraf status
To see error messages from the telegraf service, run it manually by using the following command:
{
"version": 1,
"secrets": {
"telegrafPassword": {
"keyvault": "https://mykeyvault.vault.azure.net/",
"name": "sqlPassword"
}
},
"parameters": {
"sqlAzureConnections": [
"Server=mysqlserver.database.windows.net;Port=1433;Database=mydatabase;User
Id=telegraf;Password=$telegrafPassword;"
],
"sqlVmConnections": [
],
"sqlManagedInstanceConnections": [
]
}
}
This configuration specifies the replacement tokens to be used in the profile configuration on your monitoring
virtual machine. It also allows you to reference secrets from Azure Key Vault so that you don't have to keep secret
values in any configuration, which we strongly recommend.
In this configuration, the database connection string includes a $telegrafPassword replacement token. SQL
Insights replaces this token with the SQL authentication password retrieved from Key Vault. The Key Vault URI is
specified in the telegrafPassword entry under secrets .
Secrets
Secrets are tokens whose values are retrieved at runtime from an Azure key vault. A secret is defined by a pair of
values: the key vault URI and the secret name. This definition allows SQL Insights to get the value of the
secret at runtime and use it in downstream configuration.
You can define as many secrets as needed, including secrets stored in multiple key vaults.
"secrets": {
"<secret-token-name-1>": {
"keyvault": "<key-vault-uri>",
"name": "<key-vault-secret-name>"
},
"<secret-token-name-2>": {
"keyvault": "<key-vault-uri-2>",
"name": "<key-vault-secret-name-2>"
}
}
The permission to access the key vault is provided to a managed identity on the monitoring virtual machine.
This managed identity must be granted the Get permission on all Key Vault secrets referenced in the monitoring
profile configuration. This can be done from the Azure portal, PowerShell, the Azure CLI, or an Azure Resource
Manager template.
Parameters
Parameters are tokens that can be referenced in the profile configuration via JSON templates. Parameters have a
name and a value. Values can be any JSON type, including objects and arrays. A parameter is referenced in the
profile configuration by its name, using this convention: .Parameters.<name> .
Parameters can reference secrets in Key Vault by using the same convention. For example, sqlAzureConnections
references the secret telegrafPassword by using the convention $telegrafPassword .
At runtime, all parameters and secrets will be resolved and merged with the profile configuration to construct
the actual configuration to be used on the machine.
NOTE
The parameter names of sqlAzureConnections , sqlVmConnections , and sqlManagedInstanceConnections are all
required in configuration, even if you don't provide connection strings for some of them.
InsightsMetrics
| extend Tags = todynamic(Tags)
| extend SqlInstance = tostring(Tags.sql_instance)
| where TimeGenerated > ago(240m) and isnotempty(SqlInstance) and Namespace == 'sqlserver_server_properties' and Name == 'uptime'
WorkloadDiagnosticLogs
| summarize Errors = countif(Status == 'Error')
NOTE
If you don't see any data in WorkloadDiagnosticLogs , you might need to update your monitoring profile. From within
SQL Insights in Azure portal, select Manage profile > Edit profile > Update monitoring profile .
Best practices
Ensure access to Key Vault from the monitoring VM . If you use Key Vault to store SQL
authentication passwords (strongly recommended), you need to ensure that network and security
configuration allows the monitoring VM to access Key Vault. For more information, see Access Azure Key
Vault behind a firewall and Configure Azure Key Vault networking settings. To verify that the monitoring
VM can access Key Vault, you can execute the following commands from an SSH session connected to the
VM. You should be able to successfully retrieve the access token and the secret. Replace
[YOUR-KEY-VAULT-URL] , [YOUR-KEY-VAULT-SECRET] , and [YOUR-KEY-VAULT-ACCESS-TOKEN] with actual values.
Next steps
Get details on enabling SQL Insights (preview).
Automatic tuning in Azure SQL Database and
Azure SQL Managed Instance
[Table: automatic tuning options, with a description of each option and its support for single and pooled databases and for instance databases.]
IMPORTANT
If you apply tuning recommendations through T-SQL, the automatic performance validation and reversal
mechanisms are not available. Recommendations applied in this way remain active and are shown in the list of tuning
recommendations for 24-48 hours before the system automatically withdraws them. If you want to remove a
recommendation sooner, you can discard it from the Azure portal.
Automatic tuning options can be independently enabled or disabled for each database, or they can be
configured at the server level and applied to every database that inherits settings from the server. By default,
new servers inherit the Azure defaults for automatic tuning settings: FORCE_LAST_GOOD_PLAN enabled,
CREATE_INDEX disabled, and DROP_INDEX disabled.
Configuring automatic tuning options on a server and inheriting settings for databases belonging to the parent
server is the recommended method for configuring automatic tuning. It simplifies management of automatic
tuning options for a large number of databases.
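For example, you can configure the options for an individual database with T-SQL. The following is a minimal sketch; the option values shown are illustrative, and the INHERIT mode reverts the database to the server-level settings:
ALTER DATABASE CURRENT SET AUTOMATIC_TUNING (FORCE_LAST_GOOD_PLAN = ON, CREATE_INDEX = ON, DROP_INDEX = OFF);
-- Or let the database inherit the automatic tuning settings from its server:
ALTER DATABASE CURRENT SET AUTOMATIC_TUNING = INHERIT;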
To learn about building email notifications for automatic tuning recommendations, see Email notifications for
automatic tuning.
Automatic tuning for Azure SQL Managed Instance
Automatic tuning for SQL Managed Instance only supports FORCE LAST GOOD PLAN . For more information
about configuring automatic tuning options through T-SQL, see Automatic tuning introduces automatic plan
correction and Automatic plan correction.
Next steps
Read the blog post Artificial Intelligence tunes Azure SQL Database.
Learn how automatic tuning works under the hood in Automatically indexing millions of databases in
Microsoft Azure SQL Database.
Learn how automatic tuning can proactively help you Diagnose and troubleshoot high CPU on Azure SQL
Database
Optimize performance by using in-memory
technologies in Azure SQL Database and Azure
SQL Managed Instance
Overview
Azure SQL Database and Azure SQL Managed Instance have the following in-memory technologies:
In-Memory OLTP increases the number of transactions per second and reduces latency for transaction
processing. Scenarios that benefit from In-Memory OLTP are: high-throughput transaction processing such
as trading and gaming, data ingestion from events or IoT devices, caching, data load, and temporary table
and table variable scenarios.
Clustered columnstore indexes reduce your storage footprint (up to 10 times) and improve performance for
reporting and analytics queries. You can use it with fact tables in your data marts to fit more data in your
database and improve performance. Also, you can use it with historical data in your operational database to
archive and be able to query up to 10 times more data.
Nonclustered columnstore indexes for HTAP help you to gain real-time insights into your business through
querying the operational database directly, without the need to run an expensive extract, transform, and load
(ETL) process and wait for the data warehouse to be populated. Nonclustered columnstore indexes allow fast
execution of analytics queries on the OLTP database, while reducing the impact on the operational workload.
Memory-optimized clustered columnstore indexes for HTAP enable you to perform fast transaction
processing and to concurrently run analytics queries very quickly on the same data.
Both columnstore indexes and In-Memory OLTP have been part of the SQL Server product since 2012 and
2014, respectively. Azure SQL Database, Azure SQL Managed Instance, and SQL Server share the same
implementation of in-memory technologies.
Benefits of in-memory technology
Because of the more efficient query and transaction processing, in-memory technologies also help you to
reduce cost. You typically don't need to upgrade the pricing tier of the database to achieve performance gains. In
some cases, you might even be able to reduce the pricing tier while still seeing performance improvements with
in-memory technologies.
By using In-Memory OLTP, Quorum Business Solutions was able to double their workload while lowering DTU
consumption by 70%. For more information, see the blog post: In-Memory OLTP.
NOTE
In-memory technologies are available in the Premium and Business Critical tiers.
This article describes aspects of In-Memory OLTP and columnstore indexes that are specific to Azure SQL
Database and Azure SQL Managed Instance, and also includes samples:
You'll see the impact of these technologies on storage and data size limits.
You'll see how to manage the movement of databases that use these technologies between the different
pricing tiers.
You'll see two samples that illustrate the use of In-Memory OLTP, as well as columnstore indexes.
For more information about in-memory in SQL Server, see:
In-Memory OLTP Overview and Usage Scenarios (includes references to customer case studies and
information to get started)
Documentation for In-Memory OLTP
Columnstore Indexes Guide
Hybrid transactional/analytical processing (HTAP), also known as real-time operational analytics
In-Memory OLTP
In-Memory OLTP technology provides extremely fast data access operations by keeping all data in memory. It
also uses specialized indexes, native compilation of queries, and latch-free data-access to improve performance
of the OLTP workload. There are two ways to organize your In-Memory OLTP data:
Memory-optimized rowstore format, where every row is a separate memory object. This is a classic
In-Memory OLTP format optimized for high-performance OLTP workloads. There are two types of
memory-optimized tables that can be used in the memory-optimized rowstore format (see the sketch after this list):
Durable tables (SCHEMA_AND_DATA), where the rows placed in memory are preserved after server
restart. This type of table behaves like a traditional rowstore table with the additional benefits of in-memory
optimizations.
Non-durable tables (SCHEMA_ONLY), where the rows are not preserved after restart. This type of
table is designed for temporary data (for example, as a replacement for temp tables), or for tables where you
need to quickly load data before you move it to a persisted table (so-called staging tables).
Memory-optimized columnstore format, where data is organized in a columnar format. This structure
is designed for HTAP scenarios where you need to run analytic queries on the same data structure where
your OLTP workload is running.
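As a minimal sketch of the durable rowstore format described above (the table and column names are illustrative):
CREATE TABLE dbo.ShoppingCart
(
    CartId INT NOT NULL PRIMARY KEY NONCLUSTERED,
    UserId INT NOT NULL,
    CreatedDate DATETIME2 NOT NULL
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);
-- For a non-durable staging table, use DURABILITY = SCHEMA_ONLY instead.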
NOTE
In-Memory OLTP technology is designed for data structures that can fully reside in memory. Because in-memory
data cannot be offloaded to disk, make sure that you are using a database that has enough memory. See Data size and
storage cap for In-Memory OLTP for more details.
A quick primer on In-Memory OLTP: Quickstart 1: In-Memory OLTP Technologies for Faster T-SQL
Performance.
There is a programmatic way to understand whether a given database supports In-Memory OLTP. You can
execute the following Transact-SQL query:
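One such check uses the DATABASEPROPERTYEX function; a minimal sketch:
SELECT DATABASEPROPERTYEX(DB_NAME(), 'IsXTPSupported') AS IsXTPSupported;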
If the query returns 1 , In-Memory OLTP is supported in this database. The following queries identify all objects
that need to be removed before a database can be downgraded to General Purpose, Standard, or Basic:
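A sketch of such queries, based on the is_memory_optimized and uses_native_compilation catalog columns:
-- Memory-optimized tables
SELECT SCHEMA_NAME(schema_id) AS [schema], name FROM sys.tables WHERE is_memory_optimized = 1;
-- Memory-optimized table types
SELECT SCHEMA_NAME(schema_id) AS [schema], name FROM sys.table_types WHERE is_memory_optimized = 1;
-- Natively compiled T-SQL modules
SELECT SCHEMA_NAME(o.schema_id) AS [schema], o.name, o.type_desc
FROM sys.sql_modules AS m
JOIN sys.objects AS o ON m.object_id = o.object_id
WHERE m.uses_native_compilation = 1;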
IMPORTANT
In-Memory OLTP isn't supported in the General Purpose, Standard or Basic tier. Therefore, it isn't possible to move a
database that has any In-Memory OLTP objects to one of these tiers.
Before you downgrade the database to General Purpose, Standard, or Basic, remove all memory-optimized
tables and table types, as well as all natively compiled T-SQL modules.
Scaling down resources in the Business Critical tier: Data in memory-optimized tables must fit within the In-
Memory OLTP storage that is associated with the tier of the database or the managed instance, or that is available
in the elastic pool. If you try to scale down the tier or move the database into a pool that doesn't have enough
available In-Memory OLTP storage, the operation fails.
In-memory columnstore
In-memory columnstore technology enables you to store and query large amounts of data in your tables.
Columnstore technology uses a column-based data storage format and batch query processing to achieve gains of
up to 10 times the query performance in OLAP workloads over traditional row-oriented storage. You can also
achieve data compression of up to 10 times the uncompressed data size. There are two types of
columnstore models that you can use to organize your data:
Clustered columnstore, where all data in the table is organized in the columnar format. In this model, all
rows in the table are placed in a columnar format that highly compresses the data and enables you to execute
fast analytical queries and reports on the table. Depending on the nature of your data, the size of your data
might be decreased 10x-100x. The clustered columnstore model also enables fast ingestion of large amounts of
data (bulk load), because large batches of more than 100K rows are compressed before they are stored
on disk. This model is a good choice for classic data warehouse scenarios.
Non-clustered columnstore, where the data is stored in a traditional rowstore table and there is an index in
the columnstore format that is used for analytical queries. This model enables hybrid transactional/analytical
processing (HTAP): the ability to run performant real-time analytics on a transactional workload.
OLTP queries are executed on the rowstore table, which is optimized for accessing a small set of rows, while OLAP
queries are executed on the columnstore index, which is a better choice for scans and analytics. The query optimizer
dynamically chooses the rowstore or columnstore format based on the query. Non-clustered columnstore
indexes don't decrease the size of the data, because the original data set is kept in the original rowstore table
without any change. However, the size of the additional columnstore index should be an order of magnitude
smaller than the equivalent B-tree index.
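As a sketch, adding HTAP-style analytics to an existing rowstore table only requires creating the index (the table and column names here are illustrative):
CREATE NONCLUSTERED COLUMNSTORE INDEX NCCI_SalesOrders
    ON dbo.SalesOrders (OrderDate, ProductId, Quantity, UnitPrice);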
NOTE
In-memory columnstore technology keeps in memory only the data that is needed for processing; data that
cannot fit into memory is stored on disk. Therefore, the amount of data in in-memory columnstore structures can
exceed the amount of available memory.
NOTE
SQL Managed Instance supports Columnstore indexes in all tiers.
Next steps
Quickstart 1: In-Memory OLTP Technologies for faster T-SQL Performance
Use In-Memory OLTP in an existing Azure SQL application
Monitor In-Memory OLTP storage for In-Memory OLTP
Try in-memory features
Additional resources
Deeper information
Learn how Quorum doubles key database's workload while lowering DTU by 70% with In-Memory OLTP in
SQL Database
In-Memory OLTP Blog Post
Learn about In-Memory OLTP
Learn about columnstore indexes
Learn about real-time operational analytics
See Common Workload Patterns and Migration Considerations (which describes workload patterns where
In-Memory OLTP commonly provides significant performance gains)
Application design
In-Memory OLTP (in-memory optimization)
Use In-Memory OLTP in an existing Azure SQL application
Tools
Azure portal
SQL Server Management Studio (SSMS)
SQL Server Data Tools (SSDT)
Extended events in Azure SQL Database
Prerequisites
This article assumes you already have some knowledge of:
Azure SQL Database
Extended events
The bulk of our documentation about extended events applies to SQL Server, Azure SQL Database, and
Azure SQL Managed Instance.
Prior exposure to the following items is helpful when choosing the Event File as the target:
Azure Storage service
Azure PowerShell with Azure Storage
Code samples
Related articles provide two code samples:
Ring Buffer target code for extended events in Azure SQL Database
Short simple Transact-SQL script.
We emphasize in the code sample article that, when you are done with a Ring Buffer target, you
should release its resources by executing an alter-drop
ALTER EVENT SESSION ... ON DATABASE DROP TARGET ...; statement. Later you can add another instance
of Ring Buffer by ALTER EVENT SESSION ... ON DATABASE ADD TARGET ... .
Event File target code for extended events in Azure SQL Database
Phase 1 is PowerShell to create an Azure Storage container.
Phase 2 is Transact-SQL that uses the Azure Storage container.
Transact-SQL differences
When you execute the CREATE EVENT SESSION command on SQL Server, you use the ON SERVER
clause. But on Azure SQL Database you use the ON DATABASE clause instead.
The ON DATABASE clause also applies to the ALTER EVENT SESSION and DROP EVENT SESSION
Transact-SQL commands.
A best practice is to include the event session option of STARTUP_STATE = ON in your CREATE EVENT
SESSION or ALTER EVENT SESSION statements.
The = ON value supports an automatic restart after a reconfiguration of the logical database due to a
failover.
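A minimal sketch of setting this option on an existing session (the session name is illustrative):
ALTER EVENT SESSION [my_xe_session] ON DATABASE
    WITH (STARTUP_STATE = ON);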
sys.database_event_session_targets: Returns a row for each event target for an event session.
In Microsoft SQL Server, similar catalog views have names that include .server_ instead of .database_. The name
pattern is like sys.server_event_% .
sys.dm_xe_database_session_object_columns: Shows the configuration values for objects that are bound to a session.
sys.dm_xe_database_sessions: Returns a row for each event session that is scoped to the current database.
In Microsoft SQL Server, similar catalog views are named without the _database portion of the name, such as:
sys.dm_xe_sessions instead of sys.dm_xe_database_sessions .
DMVs common to both
For extended events there are additional DMVs that are common to Azure SQL Database, Azure SQL Managed
Instance, and Microsoft SQL Server:
sys.dm_xe_map_values
sys.dm_xe_object_columns
sys.dm_xe_objects
sys.dm_xe_packages
SELECT
o.object_type,
p.name AS [package_name],
o.name AS [db_object_name],
o.description AS [db_obj_description]
FROM
sys.dm_xe_objects AS o
INNER JOIN sys.dm_xe_packages AS p ON p.guid = o.package_guid
WHERE
o.object_type in
(
'action', 'event', 'target'
)
ORDER BY
o.object_type,
p.name,
o.name;
Restrictions
There are a couple of security-related differences befitting the cloud environment of Azure SQL Database:
Extended events are founded on the single-tenant isolation model. An event session in one database cannot
access data or events from another database.
You cannot issue a CREATE EVENT SESSION statement in the context of the master database.
Permission model
You must have Control permission on the database to issue a CREATE EVENT SESSION statement. The database
owner (dbo) has Control permission.
Storage container authorizations
The SAS token you generate for your Azure Storage container must specify rwl for the permissions. The rwl
value provides the following permissions:
Read
Write
List
Performance considerations
There are scenarios where intensive use of extended events can accumulate more active memory than is healthy
for the overall system. Therefore Azure SQL Database dynamically sets and adjusts limits on the amount of
active memory that can be accumulated by an event session. Many factors go into the dynamic calculation.
There is a cap on memory available to XEvent sessions in Azure SQL Database:
In single Azure SQL Database in the DTU purchasing model, each database can use up to 128 MB. This is
raised to 256 MB only in the Premium tier.
In single Azure SQL Database in the vCore purchasing model, each database can use up to 128 MB.
In an elastic pool, individual databases are limited by the single database limits, and in total they cannot
exceed 512 MB.
If you receive an error message that says a memory maximum was enforced, some corrective actions you can
take are:
Run fewer concurrent event sessions.
Through your CREATE and ALTER statements for event sessions, reduce the amount of memory you specify
on the MAX_MEMORY clause.
Network latency
The Event File target might experience network latency or failures while persisting data to Azure Storage blobs.
Other events in Azure SQL Database might be delayed while they wait for the network communication to
complete. This delay can slow your workload.
To mitigate this performance risk, avoid setting the EVENT_RETENTION_MODE option to
NO_EVENT_LOSS in your event session definitions.
Related links
Azure Storage Cmdlets
Using Azure PowerShell with Azure Storage
How to use Blob storage from .NET
CREATE CREDENTIAL (Transact-SQL)
CREATE EVENT SESSION (Transact-SQL)
The Azure Service Updates webpage, narrowed by parameter to Azure SQL Database:
https://azure.microsoft.com/updates/?service=sql-database
Event File target code for extended events in Azure
SQL Database and SQL Managed Instance
Prerequisites
NOTE
This article uses the Azure Az PowerShell module, which is the recommended PowerShell module for interacting with
Azure. To get started with the Az PowerShell module, see Install Azure PowerShell. To learn how to migrate to the Az
PowerShell module, see Migrate Azure PowerShell from AzureRM to Az.
IMPORTANT
The PowerShell Azure Resource Manager module is still supported by Azure SQL Database, but all future development is
for the Az.Sql module. For these cmdlets, see AzureRM.Sql. The arguments for the commands in the Az module and in the
AzureRm modules are substantially identical.
An Azure account and subscription. You can sign up for a free trial.
Any database you can create a table in.
Optionally you can create an AdventureWorksLT demonstration database in minutes.
SQL Server Management Studio (ssms.exe), ideally its latest monthly update version: Download SQL
Server Management Studio
You must have the Azure PowerShell modules installed.
The modules provide commands, such as - New-AzStorageAccount .
PowerShell code
This PowerShell script assumes you've already installed the Az module. For information, see Install the Azure
PowerShell module.
## TODO: Before running, find all 'TODO' and make each edit!!
cls;
#--------------- 1 -----------------------
'Script assumes you have already logged your PowerShell session into Azure.
But if not, run Connect-AzAccount, just one time.';
#Connect-AzAccount;  # Uncomment and run once per session, if needed.
#-------------- 2 ------------------------
'
TODO: Edit the values assigned to these variables, especially the first few!
';
$subscriptionName = 'YOUR_SUBSCRIPTION_NAME';
$resourceGroupName = 'YOUR_RESOURCE-GROUP-NAME';
$policySasExpiryTime = '2018-08-28T23:44:56Z';
$policySasStartTime = '2017-10-01';
$policySasPermission = 'rwl';  # Read, Write, List; referenced later by New-AzStorageContainerStoredAccessPolicy.
$storageAccountLocation = 'YOUR_STORAGE_ACCOUNT_LOCATION';
$storageAccountName = 'YOUR_STORAGE_ACCOUNT_NAME';
$containerName = 'YOUR_CONTAINER_NAME';
$policySasToken = ' ? ';
#--------------- 3 -----------------------
# Assumed step: select the subscription to use for this session.
Set-AzContext -Subscription $subscriptionName;
#-------------- 4 ------------------------
'
Clean up the old Azure Storage Account after any previous run,
before continuing this new run.';
if ($storageAccountName) {
Remove-AzStorageAccount `
-Name $storageAccountName `
-ResourceGroupName $resourceGroupName;
}
#--------------- 5 -----------------------
[System.DateTime]::Now.ToString();
'
Create a storage account.
This might take several minutes, will beep when ready.
...PLEASE WAIT...';
New-AzStorageAccount `
-Name $storageAccountName `
-Location $storageAccountLocation `
-ResourceGroupName $resourceGroupName `
-SkuName 'Standard_LRS';
[System.DateTime]::Now.ToString();
[System.Media.SystemSounds]::Beep.Play();
'
Get the access key for your storage account.
';
$accessKey_ForStorageAccount = `
(Get-AzStorageAccountKey `
-Name $storageAccountName `
-ResourceGroupName $resourceGroupName
).Value[0];
"`$accessKey_ForStorageAccount = $accessKey_ForStorageAccount";
#--------------- 6 -----------------------
# The context will be needed to create a container within the storage account.
'Create a context object from the storage account and its primary access key.
';
$context = New-AzStorageContext `
-StorageAccountName $storageAccountName `
-StorageAccountKey $accessKey_ForStorageAccount;
$containerObjectInStorageAccount = New-AzStorageContainer `
-Name $containerName `
-Context $context;
New-AzStorageContainerStoredAccessPolicy `
-Container $containerName `
-Context $context `
-Policy $policySasToken `
-Permission $policySasPermission `
-ExpiryTime $policySasExpiryTime `
-StartTime $policySasStartTime;
'
Generate a SAS token for the container.
';
try {
$sasTokenWithPolicy = New-AzStorageContainerSASToken `
-Name $containerName `
-Context $context `
-Policy $policySasToken;
}
catch {
$Error[0].Exception.ToString();
}
#-------------- 7 ------------------------
'Display the values that YOU must edit into the Transact-SQL script next!:
';
"storageAccountName: $storageAccountName";
"containerName: $containerName";
"sasTokenWithPolicy: $sasTokenWithPolicy";
'
REMINDER: sasTokenWithPolicy here might start with "?" character, which you must exclude from Transact-SQL.
';
'
(Later, return here to delete your Azure Storage account. See the preceding Remove-AzStorageAccount -Name
$storageAccountName)';
'
Now shift to the Transact-SQL portion of the two-part code sample!';
# EOFile
Take note of the few named values that the PowerShell script prints when it ends. You must edit those values
into the Transact-SQL script that follows as phase 2.
NOTE
SQL extended events are not compatible with ADLS Gen2 storage accounts, which is why the preceding PowerShell
code example creates a standard storage account.
WARNING
The SAS key value generated by the preceding PowerShell script might begin with a '?' (question mark). When you use the
SAS key in the following T-SQL script, you must remove the leading '?'. Otherwise your efforts might be blocked by
security.
Transact-SQL code
---- TODO: First, run the earlier PowerShell portion of this two-part code sample.
---- TODO: Second, find every 'TODO' in this Transact-SQL file, and edit each.
---- Transact-SQL code for Event File target on Azure SQL Database or SQL Managed Instance.
IF EXISTS
(SELECT * FROM sys.objects
WHERE type = 'U' and name = 'gmTabEmployee')
BEGIN
DROP TABLE gmTabEmployee;
END
GO
IF NOT EXISTS
(SELECT * FROM sys.symmetric_keys
WHERE symmetric_key_id = 101)
BEGIN
CREATE MASTER KEY ENCRYPTION BY PASSWORD = '0C34C960-6621-4682-A123-C7EA08E3FC46' -- Or any newid().
END
GO
IF EXISTS
(SELECT * FROM sys.database_scoped_credentials
-- TODO: Assign AzureStorageAccount name, and the associated Container name.
WHERE name = 'https://gmstorageaccountxevent.blob.core.windows.net/gmcontainerxevent')
BEGIN
DROP DATABASE SCOPED CREDENTIAL
-- TODO: Assign AzureStorageAccount name, and the associated Container name.
[https://gmstorageaccountxevent.blob.core.windows.net/gmcontainerxevent] ;
END
GO
CREATE
DATABASE SCOPED
CREDENTIAL
-- use '.blob.', and not '.queue.' or '.table.' etc.
-- TODO: Assign AzureStorageAccount name, and the associated Container name.
[https://gmstorageaccountxevent.blob.core.windows.net/gmcontainerxevent]
WITH
IDENTITY = 'SHARED ACCESS SIGNATURE', -- "SAS" token.
-- TODO: Paste in the long SasToken string here for Secret, but exclude any leading '?'.
SECRET = 'sv=2014-02-14&sr=c&si=gmpolicysastoken&sig=EjAqjo6Nu5xMLEZEkMkLbeF7TD9v1J8DNB2t8gOKTts%3D'
;
GO
IF EXISTS
(SELECT * from sys.database_event_sessions
WHERE name = 'gmeventsessionname240b')
BEGIN
DROP
EVENT SESSION
gmeventsessionname240b
ON DATABASE;
END
GO
CREATE
EVENT SESSION
gmeventsessionname240b
ON DATABASE
ADD EVENT
sqlserver.sql_statement_starting
(
ACTION (sqlserver.sql_text)
WHERE statement LIKE 'UPDATE gmTabEmployee%'
)
ADD TARGET
package0.event_file
(
-- TODO: Assign AzureStorageAccount name, and the associated Container name.
-- Also, tweak the .xel file name at end, if you like.
SET filename =
'https://gmstorageaccountxevent.blob.core.windows.net/gmcontainerxevent/anyfilenamexel242b.xel'
)
WITH
(MAX_MEMORY = 10 MB,
MAX_DISPATCH_LATENCY = 3 SECONDS)
;
GO
ALTER
EVENT SESSION
gmeventsessionname240b
ON DATABASE
STATE = START;
GO
UPDATE gmTabEmployee
SET EmployeeKudosCount = EmployeeKudosCount + 2
WHERE EmployeeDescr = 'Jane Doe';
UPDATE gmTabEmployee
SET EmployeeKudosCount = EmployeeKudosCount + 13
WHERE EmployeeDescr = 'Jane Doe';
ALTER
EVENT SESSION
gmeventsessionname240b
ON DATABASE
STATE = STOP;
GO
SELECT
*, 'CLICK_NEXT_CELL_TO_BROWSE_ITS_RESULTS!' as [CLICK_NEXT_CELL_TO_BROWSE_ITS_RESULTS],
CAST(event_data AS XML) AS [event_data_XML]  -- TODO: In the ssms.exe results grid, double-click this cell!
FROM
sys.fn_xe_file_target_read_file
(
-- TODO: Fill in the Storage Account name and the associated Container name.
-- TODO: The name of the .xel file needs to be an exact match to the files in the storage account
-- container. (You can use the Storage Account explorer from the portal to find the exact file names, or
-- retrieve the name by using the following DMV query: SELECT target_data FROM sys.dm_xe_database_session_targets.
-- The third XML node, "File name", contains the name of the file currently being written to.)
'https://gmstorageaccountxevent.blob.core.windows.net/gmcontainerxevent/anyfilenamexel242b',
null, null, null
);
GO
DROP
EVENT SESSION
gmeventsessionname240b
ON DATABASE;
GO
If the target fails to attach when you run the script, you must stop and restart the event session:
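A sketch of that stop-and-restart sequence for the session created above:
ALTER EVENT SESSION gmeventsessionname240b ON DATABASE STATE = STOP;
GO
ALTER EVENT SESSION gmeventsessionname240b ON DATABASE STATE = START;
GO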
Output
When the Transact-SQL script completes, select a cell under the event_data_XML column header. One
<event> element is displayed which shows one UPDATE statement.
Here is one <event> element that was generated during testing:
<event name="sql_statement_starting" package="sqlserver" timestamp="2015-09-22T19:18:45.420Z">
<data name="state">
<value>0</value>
<text>Normal</text>
</data>
<data name="line_number">
<value>5</value>
</data>
<data name="offset">
<value>148</value>
</data>
<data name="offset_end">
<value>368</value>
</data>
<data name="statement">
<value>UPDATE gmTabEmployee
SET EmployeeKudosCount = EmployeeKudosCount + 2
WHERE EmployeeDescr = 'Jane Doe'</value>
</data>
<action name="sql_text" package="sqlserver">
<value>
UPDATE gmTabEmployee
SET EmployeeKudosCount = EmployeeKudosCount + 2
WHERE EmployeeDescr = 'Jane Doe';
UPDATE gmTabEmployee
SET EmployeeKudosCount = EmployeeKudosCount + 13
WHERE EmployeeDescr = 'Jane Doe';
        </value>
    </action>
</event>
The preceding Transact-SQL script used the following system function to read the event_file:
sys.fn_xe_file_target_read_file (Transact-SQL)
An explanation of advanced options for the viewing of data from extended events is available at:
Advanced Viewing of Target Data from Extended Events
Next steps
For more info about accounts and containers in the Azure Storage service, see:
How to use Blob storage from .NET
Naming and Referencing Containers, Blobs, and Metadata
Working with the Root Container
Lesson 1: Create a stored access policy and a shared access signature on an Azure container
Lesson 2: Create a SQL Server credential using a shared access signature
Extended Events for Microsoft SQL Server
Ring Buffer target code for extended events in
Azure SQL Database
Prerequisites
An Azure account and subscription. You can sign up for a free trial.
Any database you can create a table in.
Optionally you can create an AdventureWorksLT demonstration database in minutes.
SQL Server Management Studio (ssms.exe), ideally its latest monthly update version. You can download
the latest ssms.exe from:
Topic titled Download SQL Server Management Studio.
A direct link to the download.
Code sample
With very minor modification, the following Ring Buffer code sample can be run on either Azure SQL Database
or Microsoft SQL Server. The difference is the presence of the node '_database' in the name of some dynamic
management views (DMVs), used in the FROM clause in Step 5. For example:
sys.dm_xe_database_session_targets (Azure SQL Database)
sys.dm_xe_session_targets (SQL Server)
GO
---- Transact-SQL.
---- Step set 1.
IF EXISTS
(SELECT * FROM sys.objects
WHERE type = 'U' and name = 'tabEmployee')
BEGIN
DROP TABLE tabEmployee;
END
GO
IF EXISTS
(SELECT * from sys.database_event_sessions
WHERE name = 'eventsession_gm_azuresqldb51')
BEGIN
DROP EVENT SESSION eventsession_gm_azuresqldb51
ON DATABASE;
END
GO
CREATE
EVENT SESSION eventsession_gm_azuresqldb51
ON DATABASE
ADD EVENT
sqlserver.sql_statement_starting
(
ACTION (sqlserver.sql_text)
WHERE statement LIKE '%UPDATE tabEmployee%'
)
ADD TARGET
package0.ring_buffer
(SET
max_memory = 500 -- Units of KB.
);
GO
UPDATE tabEmployee
SET EmployeeKudosCount = EmployeeKudosCount + 102;
UPDATE tabEmployee
SET EmployeeKudosCount = EmployeeKudosCount + 1015;
SELECT
se.name AS [session-name],
ev.event_name,
ac.action_name,
st.target_name,
se.session_source,
st.target_data,
CAST(st.target_data AS XML) AS [target_data_XML]
FROM
sys.dm_xe_database_session_event_actions AS ac
UPDATE tabEmployee
SET EmployeeKudosCount = EmployeeKudosCount + 102;
UPDATE tabEmployee
SET EmployeeKudosCount = EmployeeKudosCount + 1015;
UPDATE tabEmployee
SET EmployeeKudosCount = EmployeeKudosCount + 102;
UPDATE tabEmployee
SET EmployeeKudosCount = EmployeeKudosCount + 1015;
The definition of your event session is updated, but not dropped. Later you can add another instance of the Ring
Buffer to your event session:
ALTER EVENT SESSION eventsession_gm_azuresqldb51
ON DATABASE
ADD TARGET
package0.ring_buffer
(SET
max_memory = 500 -- Units of KB.
);
More information
The primary topic for extended events on Azure SQL Database is:
Extended event considerations in Azure SQL Database, which contrasts some aspects of extended events that
differ between Azure SQL Database versus Microsoft SQL Server.
Other code sample topics for extended events are available at the following links. However, you must routinely
check any sample to see whether the sample targets Microsoft SQL Server versus Azure SQL Database. Then
you can decide whether minor changes are needed to run the sample.
Code sample for Azure SQL Database: Event File target code for extended events in Azure SQL Database
Quickstart: Use .NET and C# in Visual Studio to
connect to and query a database
Applies to: Azure SQL Database Azure SQL Managed Instance Azure Synapse Analytics
This quickstart shows how to use the .NET and C# code in Visual Studio to query a database in Azure SQL or
Synapse SQL with Transact-SQL statements.
Prerequisites
To complete this quickstart, you need:
An Azure account with an active subscription. Create an account for free.
Visual Studio 2022 Community, Professional, or Enterprise edition.
A database where you can run a query.
You can use one of these quickstarts to create and then configure a database:
[Table: options for creating and configuring the database, such as the Azure CLI or a deployment template.]
using System;
using Microsoft.Data.SqlClient;
using System.Text;
namespace sqltest
{
    class Program
    {
        static void Main(string[] args)
        {
            try
            {
                SqlConnectionStringBuilder builder = new SqlConnectionStringBuilder();
                builder.DataSource = "<your_server>.database.windows.net";
                builder.UserID = "<your_username>";
                builder.Password = "<your_password>";
                builder.InitialCatalog = "<your_database>";
                // Minimal sketch of the remainder: connect, run a simple query, and print each row.
                using (SqlConnection connection = new SqlConnection(builder.ConnectionString))
                {
                    connection.Open();
                    string sql = "SELECT name, collation_name FROM sys.databases;";
                    using (SqlCommand command = new SqlCommand(sql, connection))
                    using (SqlDataReader reader = command.ExecuteReader())
                    {
                        while (reader.Read()) { Console.WriteLine("{0} {1}", reader[0], reader[1]); }
                    }
                }
            }
            catch (SqlException e) { Console.WriteLine(e.ToString()); }
        }
    }
}
Next steps
Learn how to connect and query a database in Azure SQL Database by using .NET from the command line on
Windows/Linux/macOS.
Learn about Getting started with .NET on Windows/Linux/macOS using VS Code.
Learn more about developing with .NET and SQL.
Learn how to Design your first database in Azure SQL Database by using SSMS.
For more information about .NET, see .NET documentation.
Retry logic example: Connect resiliently to Azure SQL with ADO.NET.
Quickstart: Use .NET (C#) to query a database
Applies to: Azure SQL Database Azure SQL Managed Instance Azure Synapse Analytics
In this quickstart, you'll use .NET and C# code to connect to a database. You'll then run a Transact-SQL statement
to query data. This quickstart is applicable to Windows, Linux, and macOS and leverages the unified .NET
platform.
TIP
This free Learn module shows you how to Develop and configure an ASP.NET application that queries a database in Azure
SQL Database
Prerequisites
To complete this quickstart, you need:
An Azure account with an active subscription. Create an account for free.
.NET SDK for your operating system installed.
A database where you can run your query.
You can use one of these quickstarts to create and then configure a database:
[Table: options for creating and configuring the database, such as the Azure CLI or a deployment template.]
This command creates new app project files, including an initial C# code file (Program.cs ), an XML
configuration file (sqltest.csproj ), and needed binaries.
2. At the command prompt used above, run this command.
NOTE
To use an ADO.NET connection string, replace the 4 lines in the code setting the server, database, username, and
password with the line below. In the string, set your username and password.
builder.ConnectionString="<your_ado_net_connection_string>";
using Microsoft.Data.SqlClient;
namespace sqltest
{
    class Program
    {
        static void Main(string[] args)
        {
            try
            {
                SqlConnectionStringBuilder builder = new SqlConnectionStringBuilder();
                builder.DataSource = "<your_server.database.windows.net>";
                builder.UserID = "<your_username>";
                builder.Password = "<your_password>";
                builder.InitialCatalog = "<your_database>";
                // Minimal sketch of the remainder: connect, query sys.databases, and print each row.
                using (var connection = new SqlConnection(builder.ConnectionString))
                {
                    connection.Open();
                    var sql = "SELECT name, collation_name FROM sys.databases;";
                    using (var command = new SqlCommand(sql, connection))
                    using (var reader = command.ExecuteReader())
                    {
                        while (reader.Read()) { System.Console.WriteLine("{0} {1}", reader[0], reader[1]); }
                    }
                }
            }
            catch (SqlException e) { System.Console.WriteLine(e.ToString()); }
        }
    }
}
dotnet restore
dotnet run
2. Verify that the rows are returned; your output may include other values.
Query data example:
=========================================
master SQL_Latin1_General_CP1_CI_AS
tempdb SQL_Latin1_General_CP1_CI_AS
WideWorldImporters Latin1_General_100_CI_AS
Next steps
Getting started with .NET on Windows/Linux/macOS using VS Code.
Learn how to connect to Azure SQL Database using Azure Data Studio on Windows/Linux/macOS.
Learn more about developing with .NET and SQL.
Learn how to connect and query Azure SQL Database or Azure SQL Managed Instance, by using .NET in
Visual Studio.
Learn how to Design your first database with SSMS.
For more information about .NET, see .NET documentation.
Quickstart: Use Golang to query a database in
Azure SQL Database or Azure SQL Managed
Instance
Prerequisites
To complete this quickstart, you need:
An Azure account with an active subscription. Create an account for free.
A database in Azure SQL Database or Azure SQL Managed Instance. You can use one of these quickstarts
to create a database:
Load data:
SQL Database: Adventure Works loaded per quickstart
SQL Managed Instance: Restore Wide World Importers
SQL Server on Azure VM: Restore Wide World Importers
IMPORTANT
The scripts in this article are written to use the Adventure Works database. With a SQL Managed Instance, you
must either import the Adventure Works database into an instance database or modify the scripts in this article
to use the Wide World Importers database.
NOTE
For connection information for SQL Server on Azure VM, see Connect to a SQL Server instance.
mkdir SqlServerSample
2. Navigate to SqlSer verSample and install the SQL Server driver for Go.
cd SqlServerSample
go get github.com/microsoft/go-mssqldb
2. Use sqlcmd to connect to the database and run your newly created Azure SQL script. Replace the
appropriate values for your server, database, username, and password.
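As a sketch, a script matching the code and expected output later in this quickstart could create and seed the table like this (the schema, table, and seed values are inferred from the sample):
CREATE SCHEMA TestSchema;
GO
CREATE TABLE TestSchema.Employees
(
    Id INT IDENTITY(1, 1) NOT NULL PRIMARY KEY,
    Name NVARCHAR(50),
    Location NVARCHAR(50)
);
GO
INSERT INTO TestSchema.Employees (Name, Location) VALUES
    (N'Jared', N'Australia'),
    (N'Nikita', N'India'),
    (N'Tom', N'Germany');
GO
SELECT Id, Name, Location FROM TestSchema.Employees;
GO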
package main
import (
_ "github.com/microsoft/go-mssqldb"
"database/sql"
"context"
"log"
"fmt"
"errors"
)
var db *sql.DB
func main() {
// Build connection string
connString := fmt.Sprintf("server=%s;user id=%s;password=%s;port=%d;database=%s;",
server, user, password, port, database)
// Create employee
createID, err := CreateEmployee("Jake", "United States")
if err != nil {
log.Fatal("Error creating Employee: ", err.Error())
}
fmt.Printf("Inserted ID: %d successfully.\n", createID)
// Read employees
count, err := ReadEmployees()
if err != nil {
log.Fatal("Error reading Employees: ", err.Error())
}
fmt.Printf("Read %d row(s) successfully.\n", count)
if db == nil {
err = errors.New("CreateEmployee: db is null")
return -1, err
}
tsql := `
INSERT INTO TestSchema.Employees (Name, Location) VALUES (@Name, @Location);
select isNull(SCOPE_IDENTITY(), -1);
`
row := stmt.QueryRowContext(
ctx,
sql.Named("Name", name),
sql.Named("Location", location))
var newID int64
err = row.Scan(&newID)
err = row.Scan(&newID)
if err != nil {
return -1, err
}
// Execute query
rows, err := db.QueryContext(ctx, tsql)
if err != nil {
return -1, err
}
defer rows.Close()
return result.RowsAffected()
}
go run sample.go
Connected!
Inserted ID: 4 successfully.
ID: 1, Name: Jared, Location: Australia
ID: 2, Name: Nikita, Location: India
ID: 3, Name: Tom, Location: Germany
ID: 4, Name: Jake, Location: United States
Read 4 row(s) successfully.
Updated 1 row(s) successfully.
Deleted 1 row(s) successfully.
Next steps
Design your first database in Azure SQL Database
Golang driver for SQL Server
Report issues or ask questions
Quickstart: Use Node.js to query a database in
Azure SQL Database or Azure SQL Managed
Instance
Prerequisites
To complete this quickstart, you need:
An Azure account with an active subscription. Create an account for free.
Create: CLI (SQL Database), CLI (SQL Managed Instance)
Load data:
SQL Database: Adventure Works loaded per quickstart
SQL Managed Instance: Restore Wide World Importers
SQL Server on Azure VM: Restore Wide World Importers
Node.js-related software
macOS
Ubuntu
Windows
Install Homebrew and Node.js, and then install the ODBC driver and SQLCMD using steps 1.2 and 1.3 in
Create Node.js apps using SQL Server on macOS.
IMPORTANT
The scripts in this article are written to use the Adventure Works database.
NOTE
You can optionally choose to use an Azure SQL Managed Instance.
To create and configure, use the Azure portal, PowerShell, or CLI, and then set up on-premises or VM connectivity.
To load data, see restore with BACPAC with the Adventure Works file, or see restore the Wide World Importers database.
NOTE
For connection information for SQL Server on Azure VM, see Connect to SQL Server.
npm init -y
npm install tedious
/*
//Use Azure VM Managed Identity to connect to the SQL database
const config = {
server: process.env["db_server"],
authentication: {
type: 'azure-active-directory-msi-vm',
},
options: {
database: process.env["db_database"],
encrypt: true,
port: 1433
}
};
//Use Azure App Service Managed Identity to connect to the SQL database
const config = {
server: process.env["db_server"],
authentication: {
type: 'azure-active-directory-msi-app-service',
},
options: {
database: process.env["db_database"],
encrypt: true,
port: 1433
}
});
*/
connection.connect();
function queryDatabase() {
console.log("Reading rows from the Table...");
NOTE
For more information about using managed identity for authentication, complete the tutorial to access data via managed
identity.
NOTE
The code example uses the AdventureWorksLT sample database in Azure SQL Database.
node sqltest.js
2. Verify the top 20 rows are returned and close the application window.
Next steps
Microsoft Node.js Driver for SQL Server
Connect and query on Windows/Linux/macOS with .NET core, Visual Studio Code, or SSMS (Windows
only)
Get started with .NET Core on Windows/Linux/macOS using the command line
Design your first database in Azure SQL Database using .NET or SSMS
Quickstart: Use PHP to query a database in Azure
SQL Database
Prerequisites
To complete this quickstart, you need:
An Azure account with an active subscription. Create an account for free.
A database in Azure SQL Database or Azure SQL Managed Instance. You can use one of these quickstarts
to create and then configure a database:
Create: CLI (SQL Database), CLI (SQL Managed Instance)
Load data:
SQL Database: Adventure Works loaded per quickstart
SQL Managed Instance: Restore Wide World Importers
SQL Server on Azure VM: Restore Wide World Importers
IMPORTANT
The scripts in this article are written to use the Adventure Works database. With a SQL Managed Instance, you
must either import the Adventure Works database into an instance database or modify the scripts in this article
to use the Wide World Importers database.
NOTE
For connection information for SQL Server on Azure VM, see Connect to a SQL Server instance.
<?php
$serverName = "your_server.database.windows.net"; // update me
$connectionOptions = array(
"Database" => "your_database", // update me
"Uid" => "your_username", // update me
"PWD" => "your_password" // update me
);
//Establishes the connection
$conn = sqlsrv_connect($serverName, $connectionOptions);
$tsql= "SELECT TOP 20 pc.Name as CategoryName, p.name as ProductName
FROM [SalesLT].[ProductCategory] pc
JOIN [SalesLT].[Product] p
ON pc.productcategoryid = p.productcategoryid";
$getResults= sqlsrv_query($conn, $tsql);
echo ("Reading data from table" . PHP_EOL);
if ($getResults == FALSE)
echo (sqlsrv_errors());
while ($row = sqlsrv_fetch_array($getResults, SQLSRV_FETCH_ASSOC)) {
echo ($row['CategoryName'] . " " . $row['ProductName'] . PHP_EOL);
}
sqlsrv_free_stmt($getResults);
?>
php sqltest.php
2. Verify the top 20 rows are returned and close the app window.
Next steps
Design your first database in Azure SQL Database
Microsoft PHP Drivers for SQL Server
Report issues or ask questions
Retry logic example: Connect resiliently to Azure SQL with PHP
Quickstart: Use Python to query a database
Applies to: Azure SQL Database Azure SQL Managed Instance Azure Synapse Analytics
In this quickstart, you use Python to connect to Azure SQL Database, Azure SQL Managed Instance, or Synapse
SQL database and use T-SQL statements to query data.
Prerequisites
To complete this quickstart, you need:
An Azure account with an active subscription. Create an account for free.
A database where you will run a query.
You can use one of these quickstarts to create and then configure a database:
[Table: options for creating and configuring the database, such as the Azure CLI or a deployment template.]
Install the ODBC driver, SQLCMD, and the Python driver for SQL Server:
macOS: Use steps 1.2, 1.3, and 2.1 in Create Python apps using SQL Server on macOS. This also installs Homebrew and Python.
Ubuntu: Configure an environment for pyodbc Python development.
Windows: Configure an environment for pyodbc Python development.
Further information:
macOS: Microsoft ODBC driver on macOS
Ubuntu: Microsoft ODBC driver on Linux
Windows: Microsoft ODBC driver on Linux
To further explore Python and the database in Azure SQL Database, see Azure SQL Database libraries for Python,
the pyodbc repository, and a pyodbc sample.
import pyodbc
server = '<server>.database.windows.net'
database = '<database>'
username = '<username>'
password = '{<password>}'
driver= '{ODBC Driver 17 for SQL Server}'
with pyodbc.connect('DRIVER='+driver+';SERVER=tcp:'+server+';PORT=1433;DATABASE='+database+';UID='+username+';PWD='+ password) as conn:
with conn.cursor() as cursor:
cursor.execute("SELECT TOP 3 name, collation_name FROM sys.databases")
row = cursor.fetchone()
while row:
print (str(row[0]) + " " + str(row[1]))
row = cursor.fetchone()
python sqltest.py
2. Verify that the databases and their collations are returned, and then close the command window.
Next steps
Design your first database in Azure SQL Database
Microsoft Python drivers for SQL Server
Python developer center
Quickstart: Use Ruby to query a database in Azure
SQL Database or Azure SQL Managed Instance
Prerequisites
To complete this quickstart, you need the following prerequisites:
A database. You can use one of these quickstarts to create and then configure the database:
Create: CLI (SQL Database), CLI (SQL Managed Instance)
Load data:
SQL Database: Adventure Works loaded per quickstart
SQL Managed Instance: Restore Wide World Importers
SQL Server on Azure Virtual Machines: Restore Wide World Importers
IMPORTANT
The scripts in this article are written to use the Adventure Works database. With a SQL Managed Instance, either
import the Adventure Works database into an instance database or modify the scripts in this article to use the
Wide World Importers database.
NOTE
For connection information for SQL Server on Azure Virtual Machines, see Connect to a SQL Server instance.
require 'tiny_tds'
server = '<server>.database.windows.net'
database = '<database>'
username = '<username>'
password = '<password>'
client = TinyTds::Client.new username: username, password: password,
host: server, port: 1433, database: database, azure: true
IMPORTANT
This example uses the sample AdventureWorksLT data, which you can choose as source when creating your
database. If your database has different data, use tables from your own database in the SELECT query.
ruby sqltest.rb
2. Verify that the top 20 Category/Product rows from your database are returned.
Next steps
Design your first database in Azure SQL Database
GitHub repository for TinyTDS
Report issues or ask questions about TinyTDS
Ruby driver for SQL Server
Manage historical data in Temporal tables with
retention policy
In the preceding example, we assumed that the ValidTo column corresponds to the end of the SYSTEM_TIME period.
The database flag is_temporal_history_retention_enabled is set to ON by default, but users can change it with
the ALTER DATABASE statement. It is also automatically set to OFF after a point-in-time restore operation. To
enable temporal history retention cleanup for your database, execute the following statement:
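A minimal sketch of that statement (substitute your own database name):
ALTER DATABASE [<myDB>] SET TEMPORAL_HISTORY_RETENTION ON;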
IMPORTANT
You can configure retention for temporal tables even if is_temporal_history_retention_enabled is OFF, but automatic
cleanup for aged rows is not triggered in that case.
The retention policy is configured during table creation by specifying a value for the HISTORY_RETENTION_PERIOD
parameter:
CREATE TABLE dbo.WebsiteUserInfo
(
[UserID] int NOT NULL PRIMARY KEY CLUSTERED
, [UserName] nvarchar(100) NOT NULL
, [PagesVisited] int NOT NULL
, [ValidFrom] datetime2 (0) GENERATED ALWAYS AS ROW START
, [ValidTo] datetime2 (0) GENERATED ALWAYS AS ROW END
, PERIOD FOR SYSTEM_TIME (ValidFrom, ValidTo)
)
WITH
(
SYSTEM_VERSIONING = ON
(
HISTORY_TABLE = dbo.WebsiteUserInfoHistory,
HISTORY_RETENTION_PERIOD = 6 MONTHS
)
);
Azure SQL Database and Azure SQL Managed Instance allow you to specify the retention period by using different
time units: DAYS, WEEKS, MONTHS, and YEARS. If HISTORY_RETENTION_PERIOD is omitted, INFINITE retention
is assumed. You can also use the INFINITE keyword explicitly.
In some scenarios, you may want to configure retention after table creation, or to change a previously configured
value. In that case, use the ALTER TABLE statement:
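A sketch, using the table from the earlier example and an illustrative nine-month period:
ALTER TABLE dbo.WebsiteUserInfo
SET (SYSTEM_VERSIONING = ON (HISTORY_RETENTION_PERIOD = 9 MONTHS));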
IMPORTANT
Setting SYSTEM_VERSIONING to OFF does not preserve retention period value. Setting SYSTEM_VERSIONING to ON
without HISTORY_RETENTION_PERIOD specified explicitly results in the INFINITE retention period.
To review the current state of the retention policy, use the following query, which joins the temporal retention
enablement flag at the database level with the retention periods for individual tables:
SELECT DB.is_temporal_history_retention_enabled,
SCHEMA_NAME(T1.schema_id) AS TemporalTableSchema,
T1.name as TemporalTableName, SCHEMA_NAME(T2.schema_id) AS HistoryTableSchema,
T2.name as HistoryTableName,T1.history_retention_period,
T1.history_retention_period_unit_desc
FROM sys.tables T1
OUTER APPLY (select is_temporal_history_retention_enabled from sys.databases
where name = DB_NAME()) AS DB
LEFT JOIN sys.tables T2
ON T1.history_table_id = T2.object_id WHERE T1.temporal_type = 2
Excellent data compression and efficient retention cleanup make the clustered columnstore index a perfect choice
for scenarios in which your workload rapidly generates a high amount of historical data. That pattern is typical for
intensive transactional processing workloads that use temporal tables for change tracking and auditing, trend
analysis, or IoT data ingestion.
Index considerations
The cleanup task for tables with a rowstore clustered index requires the index to start with the column
corresponding to the end of the SYSTEM_TIME period. If such an index doesn't exist, you cannot configure a
finite retention period, and the following error is raised:
Msg 13765, Level 16, State 1
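As a sketch (assuming a rowstore history table that doesn't already have a clustered index), an index that leads with the period-end column from the earlier example satisfies this requirement:
CREATE CLUSTERED INDEX IX_WebsiteUserInfoHistory
    ON dbo.WebsiteUserInfoHistory (ValidTo, ValidFrom);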
When a finite retention period is configured for a history table with a clustered columnstore index, you cannot
create additional non-clustered B-tree indexes on that table:
Cannot create non-clustered index on a temporal history table 'WebsiteUserInfoHistory' since it has finite
retention period and clustered columnstore index defined.
The query plan includes an additional filter applied to the end-of-period column (ValidTo) in the Clustered Index
Scan operator on the history table. This example assumes that a one-MONTH retention period was set on the
WebsiteUserInfo table.
However, if you query the history table directly, you may see rows that are older than the specified retention
period, but without any guarantee of repeatable query results. The following picture shows the query execution
plan for the query on the history table without additional filters applied:
Do not rely on reading the history table beyond the retention period in your business logic, because you may get
inconsistent or unexpected results. We recommend that you use temporal queries with the FOR SYSTEM_TIME
clause for analyzing data in temporal tables.
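For example, a point-in-time query against the table from the earlier example might look like this sketch (the timestamp is illustrative):
SELECT UserID, UserName, PagesVisited, ValidFrom, ValidTo
FROM dbo.WebsiteUserInfo
    FOR SYSTEM_TIME AS OF '2022-01-01T00:00:00'
WHERE UserID = 1;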
Next steps
To learn how to use temporal tables in your applications, check out Getting Started with Temporal Tables.
For detailed information about temporal tables, review Temporal tables.
Manage Azure SQL Database long-term backup
retention
Prerequisites
Portal
Azure CLI
PowerShell
You can configure SQL Database to retain automated backups for a period longer than the retention period for
your service tier.
1. In the Azure portal, navigate to your server and then select Backups . Select the Retention policies tab
to modify your backup retention settings.
2. On the Retention policies tab, select the database(s) on which you want to set or modify long-term
backup retention policies. Unselected databases will not be affected.
3. In the Configure policies pane, specify your desired retention period for weekly, monthly, or yearly
backups. Choose a retention period of '0' to indicate that no long-term backup retention should be set.
4. Select Apply to apply the chosen retention settings to all selected databases.
IMPORTANT
When you enable a long-term backup retention policy, it may take up to 7 days for the first backup to become visible and
available to restore. For details of the LTR backup cadence, see long-term backup retention.
View backups and restore from a backup
View the backups that are retained for a specific database with an LTR policy, and restore from those backups.
Portal
Azure CLI
PowerShell
1. In the Azure portal, navigate to your server and then select Backups . To view the available LTR backups
for a specific database, select Manage under the Available LTR backups column. A pane will appear with
a list of the available LTR backups for the selected database.
2. In the Available LTR backups pane that appears, review the available backups. You may select a backup
to restore from or to delete.
3. To restore from an available LTR backup, select the backup from which you want to restore, and then
select Restore .
4. Choose a name for your new database, then select Review + Create to review the details of your
Restore. Select Create to restore your database from the chosen backup.
5. On the toolbar, select the notification icon to view the status of the restore job.
6. When the restore job is completed, open the SQL databases page to view the newly restored database.
NOTE
From here, you can connect to the restored database using SQL Server Management Studio to perform needed tasks,
such as to extract a bit of data from the restored database to copy into the existing database or to delete the existing
database and rename the restored database to the existing database name.
Limitations
When restoring from an LTR backup, the read scale property is disabled. To enable read scale on the restored
database, update the database after it has been created.
When restoring from an LTR backup that was created while the database was in an elastic pool, you need to
specify the target service level objective.
Next steps
To learn about service-generated automatic backups, see automatic backups
To learn about long-term backup retention, see long-term backup retention
Create Azure AD guest users and set as an Azure
AD admin
Feature description
This feature lifts the current limitation that only allows guest users to connect to Azure SQL Database, SQL
Managed Instance, or Azure Synapse Analytics when they're members of a group created in Azure AD. The
group needed to be mapped to a user manually using the CREATE USER (Transact-SQL) statement in a given
database. Once a database user has been created for the Azure AD group containing the guest user, the guest
user can sign into the database using Azure Active Directory with MFA authentication. Guest users can be
created and connect directly to SQL Database, SQL Managed Instance, or Azure Synapse without the
requirement of adding them to an Azure AD group first, and then creating a database user for that Azure AD
group.
As part of this feature, you also have the ability to set the Azure AD guest user directly as an Azure AD admin for
logical server or for a managed instance. The existing functionality (which allows the guest user to be part of an
Azure AD group that can then be set as the Azure AD admin for the logical server or managed instance) is not
impacted. Guest users in the database that are a part of an Azure AD group are also not impacted by this
change.
For more information about existing support for guest users using Azure AD groups, see Using multi-factor
Azure Active Directory authentication.
Prerequisite
Az.Sql 2.9.0 module or higher is needed when using PowerShell to set a guest user as an Azure AD admin for
the logical server or managed instance.
3. There should now be a database user created for the guest user [email protected] .
4. Run the below command to verify the database user got created successfully:
5. Disconnect and sign in to the database as the guest user [email protected] with SQL Server Management
Studio (SSMS), using the authentication method Azure Active Directory - Universal with MFA . For
more information, see Using multi-factor Azure Active Directory authentication.
Create guest user in SQL Managed Instance
NOTE
SQL Managed Instance supports logins for Azure AD users, as well as Azure AD contained database users. The below
steps show how to create a login and user for an Azure AD guest user in SQL Managed Instance. You can also choose to
create a contained database user in SQL Managed Instance by using the method in the Create guest user in SQL
Database and Azure Synapse section.
1. Ensure that the guest user (for example, [email protected] ) is already added into your Azure AD and an
Azure AD admin has been set for the SQL Managed Instance server. Having an Azure AD admin is
required for Azure Active Directory authentication.
2. Connect to the SQL Managed Instance server as the Azure AD admin or an Azure AD user with sufficient
SQL permissions to create users, and run the following command on the master database to create a
login for the guest user:
3. There should now be a login created for the guest user [email protected] in the master database.
4. Run the below command to verify the login got created successfully:
5. Run the below command on the database where the guest user needs to be added:
6. There should now be a database user created for the guest user [email protected] .
7. Disconnect and sign in to the database as the guest user [email protected] with SQL Server Management
Studio (SSMS), using the authentication method Azure Active Directory - Universal with MFA . For
more information, see Using multi-factor Azure Active Directory authentication.
You can also use the Azure CLI command az sql server ad-admin to set the guest user as an Azure AD admin for
your logical server.
Azure PowerShell (SQL Managed Instance)
To set up an Azure AD guest user for a managed instance, follow these steps:
1. Ensure that the guest user (for example, [email protected] ) is already added into your Azure AD.
2. Go to the Azure portal, and go to your Azure Active Directory resource. Under Manage , go to the
Users pane. Select your guest user, and record the Object ID .
3. Run the following PowerShell command to add the guest user as the Azure AD admin for your SQL
Managed Instance (a minimal sketch is shown after the placeholder descriptions below):
Replace <ResourceGroupName> with your Azure Resource Group name that contains the SQL Managed
Instance.
Replace <ManagedInstanceName> with your SQL Managed Instance name.
Replace <DisplayNameOfGuestUser> with your guest user name.
Replace <AADObjectIDOfGuestUser> with the Object ID gathered earlier.
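A minimal sketch of the command referenced in step 3, assuming the Az.Sql cmdlet Set-AzSqlInstanceActiveDirectoryAdministrator and the placeholders described above:

Set-AzSqlInstanceActiveDirectoryAdministrator -ResourceGroupName "<ResourceGroupName>" `
    -InstanceName "<ManagedInstanceName>" `
    -DisplayName "<DisplayNameOfGuestUser>" -ObjectId "<AADObjectIDOfGuestUser>"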
You can also use the Azure CLI command az sql mi ad-admin to set the guest user as an Azure AD admin for
your managed instance.
Next steps
Configure and manage Azure AD authentication with Azure SQL
Using multi-factor Azure Active Directory authentication
CREATE USER (Transact-SQL)
Tutorial: Assign Directory Readers role to an Azure
AD group and manage role assignments
Applies to: Azure SQL Database Azure SQL Managed Instance Azure Synapse Analytics
This article guides you through creating a group in Azure Active Directory (Azure AD), and assigning that group
the Directory Readers role. The Directory Readers permissions allow the group owners to add additional
members to the group, such as a managed identity of Azure SQL Database, Azure SQL Managed Instance, and
Azure Synapse Analytics. This bypasses the need for a Global Administrator or Privileged Role Administrator to
assign the Directory Readers role directly for each Azure SQL logical server identity in the tenant.
This tutorial uses the feature introduced in Use Azure AD groups to manage role assignments.
For more information on the benefits of assigning the Directory Readers role to an Azure AD group for Azure
SQL, see Directory Readers role in Azure Active Directory for Azure SQL.
NOTE
With Microsoft Graph support for Azure SQL, the Directory Readers role can be replaced with using lower level
permissions. For more information, see User-assigned managed identity in Azure AD for Azure SQL.
Prerequisites
An Azure AD instance. For more information, see Configure and manage Azure AD authentication with Azure
SQL.
A SQL Database, SQL Managed Instance, or Azure Synapse.
NOTE
Make sure that the Group Type is Security . Microsoft 365 groups are not supported for Azure SQL.
To check and manage the group that was created, go back to the Groups pane in the Azure portal, and search
for your group name. Additional owners and members can be added under the Owners and Members menu
of Manage setting after selecting your group. You can also review the Assigned roles for the group.
Add Azure SQL managed identity to the group
NOTE
We're using SQL Managed Instance for this example, but similar steps can be applied for SQL Database or Azure Synapse
to achieve the same results.
For subsequent steps, the Global Administrator or Privileged Role Administrator user is no longer needed.
1. Log into the Azure portal as the user who manages SQL Managed Instance and is an owner of the group
created earlier.
2. Find the name of your SQL managed instance resource in the Azure portal.
During the creation of your SQL Managed Instance, an Azure identity was created for your instance. The
created identity has the same name as the prefix of your SQL Managed Instance name. You can find the
service principal for your SQL Managed Instance identity, which was created as an Azure AD application, by
following these steps:
Go to the Azure Active Directory resource. Under the Manage setting, select Enterprise
applications . The Object ID is the identity of the instance.
3. Go to the Azure Active Directory resource. Under Manage , go to Groups . Select the group that you
created. Under the Manage setting of your group, select Members . Select Add members and add
your SQL Managed Instance service principal as a member of the group by searching for the name found
above.
NOTE
It can take a few minutes to propagate the service principal permissions through the Azure system, and allow access to
Microsoft Graph API. You may have to wait a few minutes before you provision an Azure AD admin for SQL Managed
Instance.
Remarks
For SQL Database and Azure Synapse, the server identity can be created during the Azure SQL logical server
creation or after the server was created. For more information on how to create or set the server identity in SQL
Database or Azure Synapse, see Enable service principals to create Azure AD users.
For SQL Managed Instance, the Directory Readers role must be assigned to the managed instance identity before
you can set up an Azure AD admin for the managed instance.
Assigning the Directory Readers role to the server identity isn't required for SQL Database or Azure Synapse
when setting up an Azure AD admin for the logical server. However, to enable Azure AD object creation in
SQL Database or Azure Synapse on behalf of an Azure AD application, the Directory Readers role is required.
If the role isn't assigned to the SQL logical server identity, creating Azure AD users in Azure SQL will fail. For
more information, see Azure Active Directory service principal with Azure SQL.
1. Download the Azure AD PowerShell module using the following commands. You may need to run
PowerShell as an administrator.
Install-Module azuread
Import-Module azuread
#To verify that the module is ready to use, use the following command:
Get-Module azuread
Connect-AzureAD
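With a session established via Connect-AzureAD, a minimal sketch of assigning and then checking an owner for the group created earlier might look like this, assuming the AzureAD cmdlets Get-AzureADGroup, Get-AzureADUser, Add-AzureADGroupOwner, and Get-AzureADGroupOwner (the group and user names are placeholders):

$group = Get-AzureADGroup -SearchString "<GroupName>"
$owner = Get-AzureADUser -SearchString "<OwnerUserName>"
# Make the user an owner of the group
Add-AzureADGroupOwner -ObjectId $group.ObjectId -RefObjectId $owner.ObjectId
# List the owners of the group to verify
Get-AzureADGroupOwner -ObjectId $group.ObjectId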
You can also verify owners of the group in the Azure portal. Follow the steps in Checking the group that
was created.
Assigning the service principal as a member of the group
For subsequent steps, the Global Administrator or Privileged Role Administrator user is no longer needed.
1. Using an owner of the group that also manages the Azure SQL resource, run the following command to
connect to your Azure AD.
Connect-AzureAD
2. Assign the service principal as a member of the group that was created.
Replace <ServerName> with your Azure SQL logical server name, or your Managed Instance name. For
more information, see the section Add Azure SQL managed identity to the group earlier in this article.
The following command will return the service principal Object ID indicating that it has been added to the
group:
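A minimal sketch of the commands above, assuming the AzureAD cmdlets Get-AzureADServicePrincipal, Get-AzureADGroup, Add-AzureADGroupMember, and Get-AzureADGroupMember (names are placeholders):

# Find the server identity (service principal) by its name, and the group created earlier
$sp = Get-AzureADServicePrincipal -SearchString "<ServerName>"
$group = Get-AzureADGroup -SearchString "<GroupName>"
# Add the service principal to the group
Add-AzureADGroupMember -ObjectId $group.ObjectId -RefObjectId $sp.ObjectId
# List group members; the service principal Object ID should be returned
(Get-AzureADGroupMember -ObjectId $group.ObjectId).ObjectId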
Next steps
Directory Readers role in Azure Active Directory for Azure SQL
Tutorial: Create Azure AD users using Azure AD applications
Configure and manage Azure AD authentication with Azure SQL
Tutorial: Enable Azure Active Directory only
authentication with Azure SQL
Prerequisites
An Azure AD instance. For more information, see Configure and manage Azure AD authentication with Azure
SQL.
A SQL Database or SQL Managed Instance with a database, and logins or users. See Quickstart: Create an
Azure SQL Database single database if you haven't already created an Azure SQL Database, or Quickstart:
Create an Azure SQL Managed Instance.
4. Click Save .
3. If you haven't added an Azure Active Directory admin , you'll need to set this before you can enable
Azure AD-only authentication.
4. Select the Support only Azure Active Directory authentication for this server checkbox.
5. The Enable Azure AD authentication only popup will show. Click Yes to enable the feature and Save
the setting.
Portal
The Azure CLI
PowerShell
Portal
The Azure CLI
PowerShell
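If you're using the PowerShell tabs above, a minimal sketch of enabling and then verifying Azure AD-only authentication on an existing logical server might look like this, assuming the Az.Sql cmdlets Enable-AzSqlServerActiveDirectoryOnlyAuthentication and Get-AzSqlServerActiveDirectoryOnlyAuthentication (names are placeholders):

# Enable Azure AD-only authentication for the logical server
Enable-AzSqlServerActiveDirectoryOnlyAuthentication -ServerName "<ServerName>" -ResourceGroupName "<ResourceGroupName>"
# Check whether Azure AD-only authentication is enabled
Get-AzSqlServerActiveDirectoryOnlyAuthentication -ServerName "<ServerName>" -ResourceGroupName "<ResourceGroupName>"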
Next steps
Azure AD-only authentication with Azure SQL
Create server with Azure AD-only authentication enabled in Azure SQL
Using Azure Policy to enforce Azure Active Directory only authentication with Azure SQL
Using Azure Policy to enforce Azure Active
Directory only authentication with Azure SQL
NOTE
The Azure AD-only authentication and associated Azure Policy feature discussed in this article is in public preview .
This article guides you through creating an Azure Policy that would enforce Azure AD-only authentication when
users create an Azure SQL Managed Instance, or a logical server for Azure SQL Database. To learn more about
Azure AD-only authentication during resource creation, see Create server with Azure AD-only authentication
enabled in Azure SQL.
In this article, you learn how to:
Create an Azure Policy that enforces logical server or managed instance creation with Azure AD-only
authentication enabled
Check Azure Policy compliance
Prerequisite
Have permissions to manage Azure Policy. For more information, see Azure RBAC permissions in Azure
Policy.
NOTE
The JSON script in the menu shows the built-in policy definition that can be used as a template to build a custom
Azure Policy for SQL Database. The default is set to Audit .
7. In the Basics tab, add a Scope by using the selector (...) on the side of the box.
8. In the Scope pane, select your Subscription from the drop-down menu, and select a Resource Group
for this policy. Once you're done, use the Select button to save the selection.
NOTE
If you do not select a resource group, the policy will apply to the whole subscription.
9. Once you're back on the Basics tab, customize the Assignment name and provide an optional
Description . Make sure the Policy enforcement is Enabled .
10. Go over to the Parameters tab. Unselect the option Only show parameters that require input .
11. Under Effect , select Deny . This setting will prevent a logical server creation without Azure AD-only
authentication enabled.
12. In the Non-compliance messages tab, you can customize the policy message that displays if a
violation of the policy has occurred. The message will let users know what policy was enforced during
server creation.
13. Select Review + create . Review the policy and select the Create button.
NOTE
It may take some time for the newly created policy to be enforced.
NOTE
Updating the compliance report may take some time. Changes related to resource creation or Azure AD-only
authentication settings are not reported immediately.
Provision a server
You can then try to provision a logical server or managed instance in the resource group where you assigned the
Azure Policy. If Azure AD-only authentication is enabled during server creation, provisioning will succeed. If
Azure AD-only authentication isn't enabled, provisioning will fail.
For more information, see Create server with Azure AD-only authentication enabled in Azure SQL.
Next steps
Overview of Azure Policy for Azure AD-only authentication
Create server with Azure AD-only authentication enabled in Azure SQL
Overview of Azure AD-only authentication
Create server with Azure AD-only authentication
enabled in Azure SQL
Prerequisites
Version 2.26.1 or later is needed when using the Azure CLI. For more information on the installation and the
latest version, see Install the Azure CLI.
Az 6.1.0 module or higher is needed when using PowerShell.
If you're provisioning a managed instance using the Azure CLI, PowerShell, or REST API, a virtual network and
subnet need to be created before you begin. For more information, see Create a virtual network for Azure
SQL Managed Instance.
Permissions
To provision a logical server or managed instance, you'll need to have the appropriate permissions to create
these resources. Azure users with higher permissions, such as subscription Owners, Contributors, Service
Administrators, and Co-Administrators have the privilege to create a SQL server or managed instance. To create
these resources with the least privileged Azure RBAC role, use the SQL Server Contributor role for SQL
Database and SQL Managed Instance Contributor role for SQL Managed Instance.
The SQL Security Manager Azure RBAC role doesn't have enough permissions to create a server or instance
with Azure AD-only authentication enabled. The SQL Security Manager role will be required to manage the
Azure AD-only authentication feature after server or instance creation.
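If you're provisioning with PowerShell rather than the portal, a minimal sketch of creating a logical server with Azure AD-only authentication enabled might look like this, assuming Az 6.1.0 or later and the New-AzSqlServer parameters -ExternalAdminName and -EnableActiveDirectoryOnlyAuthentication (names are placeholders):

New-AzSqlServer -ResourceGroupName "<ResourceGroupName>" -Location "<Location>" -ServerName "<ServerName>" `
    -ServerVersion "12.0" -ExternalAdminName "<AzureADAccountDisplayName>" -EnableActiveDirectoryOnlyAuthentication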
1. Browse to the Select SQL deployment option page in the Azure portal.
2. If you aren't already signed in to Azure portal, sign in when prompted.
3. Under SQL databases , leave Resource type set to Single database , and select Create .
4. On the Basics tab of the Create SQL Database form, under Project details , select the desired Azure
Subscription .
5. For Resource group , select Create new , enter a name for your resource group, and select OK .
6. For Database name , enter a name for your database.
7. For Server , select Create new , and fill out the new server form with the following values:
Server name : Enter a unique server name. Server names must be globally unique for all servers in
Azure, not just unique within a subscription. Enter a value, and the Azure portal will let you know if it's
available or not.
Location : Select a location from the dropdown list.
Authentication method : Select Use only Azure Active Directory (Azure AD) authentication .
Select Set admin , which brings up a menu to select an Azure AD principal as your logical server
Azure AD administrator. When you're finished, use the Select button to set your admin.
8. Select Next: Networking at the bottom of the page.
9. On the Networking tab, for Connectivity method , select Public endpoint .
10. For Firewall rules , set Add current client IP address to Yes . Leave Allow Azure services and
resources to access this server set to No .
11. Leave Connection policy and Minimum TLS version settings as their default value.
12. Select Next: Security at the bottom of the page. Configure any of the settings for Microsoft Defender
for SQL , Ledger , Identity , and Transparent data encryption for your environment. You can also skip
these settings.
NOTE
Using a user-assigned managed identity (UMI) is not supported with Azure AD-only authentication. Do not set
the server identity in the Identity section as a UMI.
For more information on the configuration options, see Quickstart: Create an Azure SQL Managed
Instance.
5. Under Authentication , select Use only Azure Active Directory (Azure AD) authentication for the
Authentication method .
6. Select Set admin , which brings up a menu to select an Azure AD principal as your managed instance
Azure AD administrator. When you're finished, use the Select button to set your admin.
7. You can leave the rest of the settings default. For more information on the Networking , Security , or
other tabs and settings, follow the guide in the article Quickstart: Create an Azure SQL Managed Instance.
8. Once you're done with configuring your settings, select Review + create to proceed. Select Create to
start provisioning the managed instance.
Grant Directory Readers permissions
Once the deployment of your managed instance is complete, you may notice that the SQL Managed Instance
needs Read permissions to access Azure Active Directory. A person with sufficient privileges can grant these
permissions by selecting the displayed message in the Azure portal. For more information, see Directory
Readers role in Azure Active Directory for Azure SQL.
Limitations
To reset the server administrator password, Azure AD-only authentication must be disabled.
If Azure AD-only authentication is disabled, you must create a server with a server admin and password
when using all APIs.
Next steps
If you already have a SQL server or managed instance, and just want to enable Azure AD-only authentication,
see Tutorial: Enable Azure Active Directory only authentication with Azure SQL.
For more information on the Azure AD-only authentication feature, see Azure AD-only authentication with
Azure SQL.
If you're looking to enforce server creation with Azure AD-only authentication enabled, see Azure Policy for
Azure Active Directory only authentication with Azure SQL
Tutorial: Create and utilize Azure Active Directory
server logins
Applies to: Azure SQL Database Azure SQL Managed Instance Azure Synapse Analytics
(dedicated SQL pools only)
NOTE
Azure Active Directory (Azure AD) server principals (logins) are currently in public preview for Azure SQL Database. Azure
SQL Managed Instance can already utilize Azure AD logins.
This article guides you through creating and utilizing Azure Active Directory (Azure AD) principals (logins) in the
virtual master database of Azure SQL.
In this tutorial, you learn how to:
Create an Azure AD login in the virtual master database with the new syntax extension for Azure SQL
Database
Create a user mapped to an Azure AD login in the virtual master database
Grant server roles to an Azure AD user
Disable an Azure AD login
Prerequisites
A SQL Database or SQL Managed Instance with a database. See Quickstart: Create an Azure SQL Database
single database if you haven't already created an Azure SQL Database, or Quickstart: Create an Azure SQL
Managed Instance.
Azure AD authentication set up for SQL Database or Managed Instance. For more information, see Configure
and manage Azure AD authentication with Azure SQL.
This article instructs you on creating an Azure AD login and user within the virtual master database. Only an
Azure AD admin can create a user within the virtual master database, so we recommend you use the Azure
AD admin account when going through this tutorial. An Azure AD principal with the loginmanager role can
create a login, but not a user within the virtual master database.
NOTE
The first Azure AD login must be created by the Azure Active Directory admin. A SQL login cannot create Azure
AD logins.
2. Using SQL Server Management Studio (SSMS), log into your SQL Database with the Azure AD admin
account set up for the server.
3. Run the following query:
Use master
CREATE LOGIN [[email protected]] FROM EXTERNAL PROVIDER
GO
5. The login [email protected] has been created in the virtual master database.
Use master
CREATE USER [[email protected]] FROM LOGIN [[email protected]]
TIP
Although it is not required to use Azure AD user aliases (for example, [email protected] ), it is a recommended
best practice to use the same alias for Azure AD users and Azure AD logins.
NOTE
The server-level roles mentioned here are not supported for Azure AD groups.
Permissions aren't effective until the user reconnects. Flush the DBCC cache as well:
DBCC FLUSHAUTHCACHE
DBCC FREESYSTEMCACHE('TokenAndPermUserStore') WITH NO_INFOMSGS
To check which Azure AD logins are part of server-level roles, run the following query:
DatabaseRoleName DatabaseUserName
dbmanager [email protected]
loginmanager [email protected]
For the DISABLE or ENABLE changes to take immediate effect, the authentication cache and the
TokenAndPermUserStore cache must be cleared using the following T-SQL commands:
DBCC FLUSHAUTHCACHE
DBCC FREESYSTEMCACHE('TokenAndPermUserStore') WITH NO_INFOMSGS
Check that the login has been disabled by executing the following query:
A use case for this would be to allow read-only access on geo-replicas but deny connections on a primary server.
See also
For more information and examples, see:
Azure Active Directory server principals
CREATE LOGIN (Transact-SQL)
CREATE USER (Transact-SQL)
PowerShell and Azure CLI: Enable Transparent Data
Encryption with customer-managed key from Azure
Key Vault
Applies to: Azure SQL Database Azure SQL Managed Instance Azure Synapse Analytics
This article walks through how to use a key from Azure Key Vault for Transparent Data Encryption (TDE) on
Azure SQL Database or Azure Synapse Analytics. To learn more about the TDE with Azure Key Vault integration -
Bring Your Own Key (BYOK) Support, visit TDE with customer-managed keys in Azure Key Vault.
NOTE
Azure SQL now supports using an RSA key stored in a Managed HSM as TDE Protector. Azure Key Vault Managed HSM is
a fully managed, highly available, single-tenant, standards-compliant cloud service that enables you to safeguard
cryptographic keys for your cloud applications, using FIPS 140-2 Level 3 validated HSMs. Learn more about Managed
HSMs.
NOTE
This article applies to Azure SQL Database, Azure SQL Managed Instance, and Azure Synapse Analytics (dedicated SQL
pools (formerly SQL DW)). For documentation on Transparent Data Encryption for dedicated SQL pools inside Synapse
workspaces, see Azure Synapse Analytics encryption.
PowerShell
The Azure CLI
For Az module installation instructions, see Install Azure PowerShell. For specific cmdlets, see AzureRM.Sql.
For specifics on Key Vault, see PowerShell instructions from Key Vault and How to use Key Vault soft-delete with
PowerShell.
IMPORTANT
The PowerShell Azure Resource Manager (RM) module is still supported, but all future development is for the Az.Sql
module. The AzureRM module will continue to receive bug fixes until at least December 2020. The arguments for the
commands in the Az module and in the AzureRm modules are substantially identical. For more about their compatibility,
see Introducing the new Azure PowerShell Az module.
If you are creating a server, use the New-AzSqlServer cmdlet with the tag -Identity to add an Azure AD identity
during server creation:
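A minimal sketch of that command, assuming the Az.Sql identity switch is -AssignIdentity and using placeholder names and credentials:

New-AzSqlServer -ResourceGroupName "<SQLDatabaseResourceGroupName>" -Location "<RegionName>" `
    -ServerName "<LogicalServerName>" -ServerVersion "12.0" `
    -SqlAdministratorCredentials (Get-Credential) -AssignIdentity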
For adding permissions to your server on a Managed HSM, add the 'Managed HSM Crypto Service Encryption
User' local RBAC role to the server. This enables the server to perform get, wrap key, and unwrap key operations
on the keys in the Managed HSM. For details, see the instructions for provisioning server access on Managed HSM.
Add the Key Vault key to the server and set the TDE Protector
Use the Get-AzKeyVaultKey cmdlet to retrieve the key ID from key vault
Use the Add-AzSqlServerKeyVaultKey cmdlet to add the key from the Key Vault to the server.
Use the Set-AzSqlServerTransparentDataEncryptionProtector cmdlet to set the key as the TDE protector for
all server resources.
Use the Get-AzSqlServerTransparentDataEncryptionProtector cmdlet to confirm that the TDE protector was
configured as intended.
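A minimal sketch of retrieving the key, adding it to the server, and confirming the protector, assuming placeholder resource names (setting the protector itself is shown in the Set-AzSqlServerTransparentDataEncryptionProtector example further below):

# Retrieve the key ID from Key Vault
$key = Get-AzKeyVaultKey -VaultName "<KeyVaultName>" -Name "<KeyName>"
# Add the Key Vault key to the server
Add-AzSqlServerKeyVaultKey -ResourceGroupName "<SQLDatabaseResourceGroupName>" `
    -ServerName "<LogicalServerName>" -KeyId $key.Id
# Confirm which key is configured as the TDE protector
Get-AzSqlServerTransparentDataEncryptionProtector -ResourceGroupName "<SQLDatabaseResourceGroupName>" `
    -ServerName "<LogicalServerName>"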
NOTE
For Managed HSM keys, use Az.Sql 2.11.1 version of PowerShell.
NOTE
The combined length for the key vault name and key name cannot exceed 94 characters.
TIP
An example KeyId from Key Vault:
https://contosokeyvault.vault.azure.net/keys/Key1/1a1a2b2b3c3c4d4d5e5e6f6f7g7g8h8h
# set the key as the TDE protector for all resources under the server
Set-AzSqlServerTransparentDataEncryptionProtector -ResourceGroupName <SQLDatabaseResourceGroupName> `
    -ServerName <LogicalServerName> `
    -Type AzureKeyVault -KeyId <KeyVaultKeyId>
Turn on TDE
Use the Set-AzSqlDatabaseTransparentDataEncryption cmdlet to turn on TDE.
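A minimal sketch, assuming placeholder names:

Set-AzSqlDatabaseTransparentDataEncryption -ResourceGroupName "<SQLDatabaseResourceGroupName>" `
    -ServerName "<LogicalServerName>" -DatabaseName "<DatabaseName>" -State "Enabled"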
Now the database or data warehouse has TDE enabled with an encryption key in Key Vault.
Use the Get-AzSqlServerKeyVaultKey cmdlet to return the list of Key Vault keys added to the server.
Use the Remove-AzSqlServerKeyVaultKey to remove a Key Vault key from the server.
Troubleshooting
Check the following if an issue occurs:
If the key vault cannot be found, make sure you're in the right subscription.
PowerShell
Azure CLI
If the new key cannot be added to the server, or the new key cannot be updated as the TDE Protector, check
the following:
The key should not have an expiration date
The key must have the get, wrap key, and unwrap key operations enabled.
Next steps
Learn how to rotate the TDE Protector of a server to comply with security requirements: Rotate the
Transparent Data Encryption protector Using PowerShell.
In case of a security risk, learn how to remove a potentially compromised TDE Protector: Remove a
potentially compromised key.
Configure Always Encrypted by using Azure Key
Vault
Prerequisites
An Azure account and subscription. If you don't have one, sign up for a free trial.
A database in Azure SQL Database or Azure SQL Managed Instance.
SQL Server Management Studio version 13.0.700.242 or later.
.NET Framework 4.6 or later (on the client computer).
Visual Studio.
Azure PowerShell or Azure CLI
IMPORTANT
The PowerShell Azure Resource Manager (RM) module is still supported by Azure SQL Database, but all future
development is for the Az.Sql module. The AzureRM module will continue to receive bug fixes until at least December
2020. The arguments for the commands in the Az module and in the AzureRm modules are substantially identical. For
more about their compatibility, see Introducing the new Azure PowerShell Az module.
$subscriptionName = '<subscriptionName>'
$userPrincipalName = '<[email protected]>'
$applicationId = '<applicationId from AAD application>'
$resourceGroupName = '<resourceGroupName>' # use the same resource group name when creating your SQL Database below
$location = '<datacenterLocation>'
$vaultName = '<vaultName>'
Connect-AzAccount
$subscriptionId = (Get-AzSubscription -SubscriptionName $subscriptionName).Id
Set-AzContext -SubscriptionId $subscriptionId
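A minimal sketch of creating the key vault and granting the key permissions that Always Encrypted needs, using the variables defined above and assuming the Az.Resources and Az.KeyVault cmdlets New-AzResourceGroup, New-AzKeyVault, and Set-AzKeyVaultAccessPolicy:

# Create the resource group and key vault (skip the resource group step if it already exists)
New-AzResourceGroup -Name $resourceGroupName -Location $location
New-AzKeyVault -VaultName $vaultName -ResourceGroupName $resourceGroupName -Location $location
# Grant the signed-in user the key permissions needed to create and use column master keys
Set-AzKeyVaultAccessPolicy -VaultName $vaultName -ResourceGroupName $resourceGroupName `
    -UserPrincipalName $userPrincipalName -PermissionsToKeys create,get,list,sign,verify,wrapKey,unwrapKey
# Grant the Azure AD application the key permissions it needs at run time
Set-AzKeyVaultAccessPolicy -VaultName $vaultName -ResourceGroupName $resourceGroupName `
    -ServicePrincipalName $applicationId -PermissionsToKeys get,sign,verify,wrapKey,unwrapKey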
Create a table
In this section, you will create a table to hold patient data. It's not initially encrypted--you will configure
encryption in the next section.
1. Expand Databases .
2. Right-click the database and click New Query .
3. Paste the following Transact-SQL (T-SQL) into the new query window and Execute it.
IMPORTANT
Your application must use SqlParameter objects when passing plaintext data to the server with Always Encrypted columns.
Passing literal values without using SqlParameter objects will result in an exception.
1. Open Visual Studio and create a new C# Console Application (Visual Studio 2015 and earlier) or Console
App (.NET Framework) (Visual Studio 2017 and later). Make sure your project is set to .NET Framework
4.6 or later.
2. Name the project AlwaysEncryptedConsoleAKVApp and click OK .
3. Install the following NuGet packages by going to Tools > NuGet Package Manager > Package Manager
Console .
Run these two lines of code in the Package Manager Console:
Install-Package Microsoft.SqlServer.Management.AlwaysEncrypted.AzureKeyVaultProvider
Install-Package Microsoft.IdentityModel.Clients.ActiveDirectory
// Instantiate a SqlConnectionStringBuilder.
SqlConnectionStringBuilder connStringBuilder = new SqlConnectionStringBuilder("replace with your connection
string");
providers.Add(SqlColumnEncryptionAzureKeyVaultProvider.ProviderName, azureKeyVaultProvider);
SqlConnection.RegisterColumnEncryptionKeyStoreProviders(providers);
}
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
using System.Data;
using System.Data.SqlClient;
using Microsoft.IdentityModel.Clients.ActiveDirectory;
using Microsoft.SqlServer.Management.AlwaysEncrypted.AzureKeyVaultProvider;
namespace AlwaysEncryptedConsoleAKVApp {
class Program {
// Update this line with your Clinic database connection string from the Azure portal.
static string connectionString = @"<connection string from the portal>";
static string applicationId = @"<application ID from your AAD application>";
static string clientKey = "<key from your AAD application>";
// Create a SqlConnectionStringBuilder.
SqlConnectionStringBuilder connStringBuilder =
new SqlConnectionStringBuilder(connectionString);
InsertPatient(new Patient() {
SSN = "999-99-0001",
FirstName = "Orlando",
LastName = "Gee",
BirthDate = DateTime.Parse("01/04/1964")
});
InsertPatient(new Patient() {
SSN = "999-99-0002",
FirstName = "Keith",
LastName = "Harris",
BirthDate = DateTime.Parse("06/20/1977")
});
InsertPatient(new Patient() {
SSN = "999-99-0003",
FirstName = "Donna",
LastName = "Carreras",
BirthDate = DateTime.Parse("02/09/1973")
});
InsertPatient(new Patient() {
SSN = "999-99-0004",
FirstName = "Janet",
LastName = "Gates",
BirthDate = DateTime.Parse("08/31/1985")
});
InsertPatient(new Patient() {
SSN = "999-99-0005",
FirstName = "Lucy",
LastName = "Harrington",
BirthDate = DateTime.Parse("05/06/1993")
});
string ssn;
// This very simple validation only checks that the user entered 11 characters.
// In production be sure to check all user input and use the best validation for your specific application.
do {
Console.WriteLine("Please enter a valid SSN (ex. 999-99-0003):");
ssn = Console.ReadLine();
} while (ssn.Length != 11);
// The example allows duplicate SSN entries so we will return all records
// that match the provided value and store the results in selectedPatients.
Patient selectedPatient = SelectPatientBySSN(ssn);
// Check if any records were returned and display our query results.
if (selectedPatient != null) {
Console.WriteLine("Patient found with SSN = " + ssn);
Console.WriteLine(selectedPatient.FirstName + " " + selectedPatient.LastName + "\tSSN: "
+ selectedPatient.SSN + "\tBirthdate: " + selectedPatient.BirthDate);
}
else {
Console.WriteLine("No patients found with SSN = " + ssn);
}
SqlColumnEncryptionAzureKeyVaultProvider azureKeyVaultProvider =
new SqlColumnEncryptionAzureKeyVaultProvider(GetToken);
providers.Add(SqlColumnEncryptionAzureKeyVaultProvider.ProviderName, azureKeyVaultProvider);
SqlConnection.RegisterColumnEncryptionKeyStoreProviders(providers);
}
public async static Task<string> GetToken(string authority, string resource, string scope) {
var authContext = new AuthenticationContext(authority);
AuthenticationResult result = await authContext.AcquireTokenAsync(resource, _clientCredential);
if (result == null)
throw new InvalidOperationException("Failed to obtain the access token");
return result.AccessToken;
}
sqlCmd.Parameters.Add(paramSSN);
sqlCmd.Parameters.Add(paramFirstName);
sqlCmd.Parameters.Add(paramLastName);
sqlCmd.Parameters.Add(paramBirthDate);
if (reader.HasRows) {
while (reader.Read()) {
patients.Add(new Patient() {
SSN = reader[0].ToString(),
FirstName = reader[1].ToString(),
LastName = reader["LastName"].ToString(),
BirthDate = (DateTime)reader["BirthDate"]
});
}
}
}
catch (Exception ex) {
throw;
}
}
return patients;
}
sqlCmd.Parameters.Add(paramSSN);
if (reader.HasRows) {
while (reader.Read()) {
patient = new Patient() {
SSN = reader[0].ToString(),
FirstName = reader[1].ToString(),
LastName = reader["LastName"].ToString(),
BirthDate = (DateTime)reader["BirthDate"]
};
}
}
else {
patient = null;
}
}
catch (Exception ex) {
throw;
}
}
return patient;
}
// This method simply deletes all records in the Patients table to reset our demo.
static int ResetPatientsTable() {
int returnValue = 0;
}
catch (Exception ex) {
returnValue = 1;
}
}
return returnValue;
}
}
class Patient {
public string SSN { get; set; }
public string FirstName { get; set; }
public string LastName { get; set; }
public DateTime BirthDate { get; set; }
}
}
You can see that the encrypted columns do not contain any plaintext data.
To use SSMS to access the plaintext data, you first need to ensure that the user has proper permissions to the
Azure Key Vault: get, unwrapKey, and verify. For detailed information, see Create and Store Column Master Keys
(Always Encrypted).
Then add the Column Encryption Setting=enabled parameter during your connection.
1. In SSMS, right-click your server in Object Explorer and choose Disconnect .
2. Click Connect > Database Engine to open the Connect to Server window and click Options .
3. Click Additional Connection Parameters and type Column Encryption Setting=enabled .
You can now see the plaintext data in the encrypted columns.
Next steps
After your database is configured to use Always Encrypted, you may want to do the following:
Rotate and clean up your keys.
Migrate data that is already encrypted with Always Encrypted.
Related information
Always Encrypted (client development)
Transparent data encryption
SQL Server encryption
Always Encrypted wizard
Always Encrypted blog
Configure Always Encrypted by using the Windows
certificate store
Prerequisites
For this tutorial, you'll need:
An Azure account and subscription. If you don't have one, sign up for a free trial.
A database in Azure SQL Database or Azure SQL Managed Instance.
SQL Server Management Studio version 13.0.700.242 or later.
.NET Framework 4.6 or later (on the client computer).
Visual Studio.
If the New Firewall Rule window opens, sign in to Azure and let SSMS create a new firewall rule for you.
Create a table
In this section, you will create a table to hold patient data. This will be a normal table initially--you will configure
encryption in the next section.
1. Expand Databases .
2. Right-click the Clinic database and click New Query .
3. Paste the following Transact-SQL (T-SQL) into the new query window and Execute it.
IMPORTANT
Your application must use SqlParameter objects when passing plaintext data to the server with Always Encrypted columns.
Passing literal values without using SqlParameter objects will result in an exception.
1. Open Visual Studio and create a new C# console application. Make sure your project is set to .NET
Framework 4.6 or later.
2. Name the project AlwaysEncryptedConsoleApp and click OK .
NOTE
This is the only change required in a client application specific to Always Encrypted. If you have an existing application that
stores its connection string externally (that is, in a config file), you might be able to enable Always Encrypted without
changing any code.
// Instantiate a SqlConnectionStringBuilder.
SqlConnectionStringBuilder connStringBuilder =
new SqlConnectionStringBuilder("replace with your connection string");
using System;
using System.Collections.Generic;
using System.Data;
using System.Data.SqlClient;
using System.Globalization;
namespace AlwaysEncryptedConsoleApp
{
class Program
{
// Update this line with your Clinic database connection string from the Azure portal.
static string connectionString = @"Data Source = SPE-T640-01.sys-sqlsvr.local; Initial Catalog =
Clinic; Integrated Security = true";
// Create a SqlConnectionStringBuilder.
SqlConnectionStringBuilder connStringBuilder =
new SqlConnectionStringBuilder(connectionString);
string ssn;
// This very simple validation only checks that the user entered 11 characters.
// In production be sure to check all user input and use the best validation for your specific application.
do
{
Console.WriteLine("Please enter a valid SSN (ex. 123-45-6789):");
ssn = Console.ReadLine();
} while (ssn.Length != 11);
// The example allows duplicate SSN entries so we will return all records
// that match the provided value and store the results in selectedPatients.
Patient selectedPatient = SelectPatientBySSN(ssn);
// Check if any records were returned and display our query results.
if (selectedPatient != null)
{
Console.WriteLine("Patient found with SSN = " + ssn);
Console.WriteLine(selectedPatient.FirstName + " " + selectedPatient.LastName + "\tSSN: "
+ selectedPatient.SSN + "\tBirthdate: " + selectedPatient.BirthDate);
}
else
{
Console.WriteLine("No patients found with SSN = " + ssn);
}
sqlCmd.Parameters.Add(paramSSN);
sqlCmd.Parameters.Add(paramFirstName);
sqlCmd.Parameters.Add(paramLastName);
sqlCmd.Parameters.Add(paramBirthDate);
if (reader.HasRows)
{
while (reader.Read())
{
patients.Add(new Patient()
{
SSN = reader[0].ToString(),
FirstName = reader[1].ToString(),
LastName = reader["LastName"].ToString(),
BirthDate = (DateTime)reader["BirthDate"]
});
}
}
}
catch (Exception ex)
{
throw;
}
}
}
return patients;
}
sqlCmd.Parameters.Add(paramSSN);
if (reader.HasRows)
{
while (reader.Read())
{
patient = new Patient()
{
SSN = reader[0].ToString(),
FirstName = reader[1].ToString(),
LastName = reader["LastName"].ToString(),
BirthDate = (DateTime)reader["BirthDate"]
};
}
}
else
{
patient = null;
}
}
catch (Exception ex)
{
throw;
}
}
return patient;
}
// This method simply deletes all records in the Patients table to reset our demo.
static int ResetPatientsTable()
{
int returnValue = 0;
class Patient
{
public string SSN { get; set; }
public string FirstName { get; set; }
public string LastName { get; set; }
public DateTime BirthDate { get; set; }
}
}
You can see that the encrypted columns do not contain any plaintext data.
To use SSMS to access the plaintext data, you can add the Column Encryption Setting=enabled parameter
to the connection.
1. In SSMS, right-click your server in Object Explorer , and then click Disconnect .
2. Click Connect > Database Engine to open the Connect to Server window, and then click Options .
3. Click Additional Connection Parameters and type Column Encryption Setting=enabled .
4. Run the following query on the Clinic database.
You can now see the plaintext data in the encrypted columns.
NOTE
If you connect with SSMS (or any client) from a different computer, it will not have access to the encryption keys and will
not be able to decrypt the data.
Next steps
After you create a database that uses Always Encrypted, you may want to do the following:
Run this sample from a different computer. It won't have access to the encryption keys, so it will not have
access to the plaintext data and will not run successfully.
Rotate and clean up your keys.
Migrate data that is already encrypted with Always Encrypted.
Deploy Always Encrypted certificates to other client machines (see the "Making Certificates Available to
Applications and Users" section).
Related information
Always Encrypted (client development)
Transparent Data Encryption
SQL Server Encryption
Always Encrypted Wizard
Always Encrypted Blog
Detectable types of query performance bottlenecks
Applies to: SQL Server Azure SQL Database Azure SQL Managed Instance
This content is split into two articles:
Detectable types of query performance bottlenecks in Azure SQL Database
Detectable types of query performance bottlenecks in SQL Server and Azure SQL Managed Instance
Next steps
More resources for Azure SQL Database:
Configure the max degree of parallelism (MAXDOP) in Azure SQL Database
Understand and resolve Azure SQL Database blocking problems in Azure SQL Database
Diagnose and troubleshoot high CPU on Azure SQL Database
SQL Database monitoring and tuning overview
Monitoring Microsoft Azure SQL Database performance using dynamic management views
Tune nonclustered indexes with missing index suggestions
Resource management in Azure SQL Database
Resource limits for single databases using the vCore purchasing model
Resource limits for elastic pools using the vCore purchasing model
Resource limits for single databases using the DTU purchasing model
Resource limits for elastic pools using the DTU purchasing model
More resources for SQL Server and Azure SQL Managed Instance:
Configure the max degree of parallelism Server Configuration Option
Understand and resolve SQL Server blocking problems
Monitoring Microsoft Azure SQL Managed Instance performance using dynamic management views
Tune nonclustered indexes with missing index suggestions
sys.server_resource_stats (Azure SQL Managed Instance)
Overview of Azure SQL Managed Instance resource limits
Troubleshoot Azure SQL Database and Azure SQL
Managed Instance performance issues with
Intelligent Insights
NOTE
For a quick performance troubleshooting guide using Intelligent Insights, see the Recommended troubleshooting flow
flowchart in this document.
Intelligent Insights is a preview feature, and isn't available in the following regions: West Europe, North Europe, West US 1, and
East US 1.
Detectable performance patterns, as reported for Azure SQL Database and Azure SQL Managed Instance:

Workload increase
Azure SQL Database: Workload increase or continuous accumulation of workload on the database was detected. This is affecting performance.
Azure SQL Managed Instance: Workload increase has been detected. This is affecting the database performance.

Memory pressure
Azure SQL Database: Workers that requested memory grants have to wait for memory allocations for statistically significant amounts of time, or an increased accumulation of workers that requested memory grants exists. This is affecting performance.
Azure SQL Managed Instance: Workers that have requested memory grants are waiting for memory allocations for a statistically significant amount of time. This is affecting the database performance.

Increased MAXDOP
Azure SQL Database: The maximum degree of parallelism option (MAXDOP) has changed, affecting the query execution efficiency. This is affecting performance.
Azure SQL Managed Instance: The maximum degree of parallelism option (MAXDOP) has changed, affecting the query execution efficiency. This is affecting performance.

Pagelatch contention
Azure SQL Database: Multiple threads are concurrently attempting to access the same in-memory data buffer pages, resulting in increased wait times and causing pagelatch contention. This is affecting performance.
Azure SQL Managed Instance: Multiple threads are concurrently attempting to access the same in-memory data buffer pages, resulting in increased wait times and causing pagelatch contention. This is affecting the database performance.

Missing Index
Azure SQL Database: Missing index was detected affecting performance.
Azure SQL Managed Instance: Missing index was detected affecting the database performance.

New Query
Azure SQL Database: New query was detected affecting the overall performance.
Azure SQL Managed Instance: New query was detected affecting the overall database performance.

Increased Wait Statistic
Azure SQL Database: Increased database wait times were detected affecting performance.
Azure SQL Managed Instance: Increased database wait times were detected affecting the database performance.

TempDB Contention
Azure SQL Database: Multiple threads are trying to access the same TempDB resource, causing a bottleneck. This is affecting performance.
Azure SQL Managed Instance: Multiple threads are trying to access the same TempDB resource, causing a bottleneck. This is affecting the database performance.

Elastic pool DTU shortage
Azure SQL Database: Shortage of available eDTUs in the elastic pool is affecting performance.
Azure SQL Managed Instance: Not available for Azure SQL Managed Instance as it uses the vCore model.

Plan Regression
Azure SQL Database: New plan, or a change in the workload of an existing plan, was detected. This is affecting performance.
Azure SQL Managed Instance: New plan, or a change in the workload of an existing plan, was detected. This is affecting the database performance.

Database-scoped configuration value change
Azure SQL Database: Configuration change on the database was detected affecting the database performance.
Azure SQL Managed Instance: Configuration change on the database was detected affecting the database performance.

Slow client
Azure SQL Database: Slow application client is unable to consume output from the database fast enough. This is affecting performance.
Azure SQL Managed Instance: Slow application client is unable to consume output from the database fast enough. This is affecting the database performance.

Pricing tier downgrade
Azure SQL Database: Pricing tier downgrade action decreased available resources. This is affecting performance.
Azure SQL Managed Instance: Pricing tier downgrade action decreased available resources. This is affecting the database performance.
TIP
For continuous performance optimization of databases, enable automatic tuning. This built-in intelligence feature
continuously monitors your database, automatically tunes indexes, and applies query execution plan corrections.
Workload increase
What is happening
This performance pattern identifies issues caused by a workload increase or, in its more severe form, a workload
pile-up.
This detection is made through a combination of several metrics. The basic metric measured is detecting an
increase in workload compared with the past workload baseline. The other form of detection is based on
measuring a large increase in active worker threads that is large enough to affect the query performance.
In its more severe form, the workload might continuously pile up due to the inability of a database to handle the
workload. The result is a continuously growing workload size, which is the workload pile-up condition. Due to
this condition, the time that the workload waits for execution grows. This condition represents one of the most
severe database performance issues. This issue is detected through monitoring the increase in the number of
aborted worker threads.
Troubleshooting
The diagnostics log outputs the number of queries whose execution has increased and the query hash of the
query with the largest contribution to the workload increase. You can use this information as a starting point for
optimizing the workload. The query identified as the largest contributor to the workload increase is especially
useful as your starting point.
You might consider distributing the workloads more evenly to the database. Consider optimizing the query that
is affecting the performance by adding indexes. You also might distribute your workload among multiple
databases. If these solutions aren't possible, consider increasing the pricing tier of your database subscription to
increase the amount of resources available.
Memory pressure
What is happening
This performance pattern indicates degradation in the current database performance caused by memory
pressure, or in its more severe form a memory pile-up condition, compared to the past seven-day performance
baseline.
Memory pressure denotes a performance condition in which there is a large number of worker threads
requesting memory grants. The high volume causes a high memory utilization condition in which the database
is unable to efficiently allocate memory to all workers that request it. One of the most common reasons for this
issue is related to the amount of memory available to the database on one hand. On the other hand, an increase
in workload causes the increase in worker threads and the memory pressure.
The more severe form of memory pressure is the memory pile-up condition. This condition indicates that a
higher number of worker threads are requesting memory grants than there are queries releasing the memory.
This number of worker threads requesting memory grants also might be continuously increasing (piling up)
because the database engine is unable to allocate memory efficiently enough to meet the demand. The memory
pile-up condition represents one of the most severe database performance issues.
Troubleshooting
The diagnostics log outputs the memory object store details with the clerk (that is, worker thread) marked as the
highest reason for high memory usage and relevant time stamps. You can use this information as the basis for
troubleshooting.
You can optimize or remove queries related to the clerks with the highest memory usage. You also can make
sure that you aren't querying data that you don't plan to use. Good practice is to always use a WHERE clause in
your queries. In addition, we recommend that you create nonclustered indexes to seek the data rather than scan
it.
You also can reduce the workload by optimizing or distributing it over multiple databases. Or you can distribute
your workload among multiple databases. If these solutions aren't possible, consider increasing the pricing tier
of your database to increase the amount of memory resources available to the database.
For additional troubleshooting suggestions, see Memory grants meditation: The mysterious SQL Server
memory consumer with many names. For more information on out of memory errors in Azure SQL Database,
see Troubleshoot out of memory errors with Azure SQL Database.
Locking
What is happening
This performance pattern indicates degradation in the current database performance in which excessive
database locking is detected compared to the past seven-day performance baseline.
In modern RDBMS, locking is essential for implementing multithreaded systems in which performance is
maximized by running multiple simultaneous workers and parallel database transactions where possible.
Locking in this context refers to the built-in access mechanism in which only a single transaction can exclusively
access the rows, pages, tables, and files that are required and not compete with another transaction for
resources. When the transaction that locked the resources for use is done with them, the lock on those resources
is released, which allows other transactions to access required resources. For more information on locking, see
Lock in the database engine.
If transactions executed by the SQL engine are waiting for prolonged periods of time to access resources locked
for use, this wait time causes the slowdown of the workload execution performance.
Troubleshooting
The diagnostics log outputs locking details that you can use as the basis for troubleshooting. You can analyze the
reported blocking queries, that is, the queries that introduce the locking performance degradation, and remove
them. In some cases, you might be successful in optimizing the blocking queries.
The simplest and safest way to mitigate the issue is to keep transactions short and to reduce the lock footprint
of the most expensive queries. You can break up a large batch of operations into smaller operations. Good
practice is to reduce the query lock footprint by making the query as efficient as possible. Reduce large scans
because they increase chances of deadlocks and adversely affect overall database performance. For identified
queries that cause locking, you can create new indexes or add columns to the existing index to avoid the table
scans.
For more suggestions, see:
Understand and resolve Azure SQL blocking problems
How to resolve blocking problems that are caused by lock escalation in SQL Server
Increased MAXDOP
What is happening
This detectable performance pattern indicates a condition in which a chosen query execution plan was
parallelized more than it should have been. The query optimizer can enhance the workload performance by
executing queries in parallel to speed up things where possible. In some cases, parallel workers processing a
query spend more time waiting on each other to synchronize and merge results compared to executing the
same query with fewer parallel workers, or even in some cases compared to a single worker thread.
The expert system analyzes the current database performance compared to the baseline period. It determines if
a previously running query is running slower than before because the query execution plan is more parallelized
than it should be.
The MAXDOP server configuration option is used to control how many CPU cores can be used to execute the
same query in parallel.
Troubleshooting
The diagnostics log outputs query hashes related to queries for which the duration of execution increased
because they were parallelized more than they should have been. The log also outputs CXP wait times. This time
represents the time a single organizer/coordinator thread (thread 0) is waiting for all other threads to finish
before merging the results and moving ahead. In addition, the diagnostics log outputs the wait times that the
poor-performing queries were waiting in execution overall. You can use this information as the basis for
troubleshooting.
First, optimize or simplify complex queries. Good practice is to break up long batch jobs into smaller ones. In
addition, ensure that you created indexes to support your queries. You can also manually enforce the maximum
degree of parallelism (MAXDOP) for a query that was flagged as poor performing. To configure this operation
by using T-SQL, see Configure the MAXDOP server configuration option.
Setting the MAXDOP server configuration option to zero (0) as a default value denotes that the database can use all
available CPU cores to parallelize threads for executing a single query. Setting MAXDOP to one (1) denotes that
only one core can be used for a single query execution. In practical terms, this means that parallelism is turned
off. Depending on the case-per-case basis, available cores to the database, and diagnostics log information, you
can tune the MAXDOP option to the number of cores used for parallel query execution that might resolve the
issue in your case.
Pagelatch contention
What is happening
This performance pattern indicates the current database workload performance degradation due to pagelatch
contention compared to the past seven-day workload baseline.
Latches are lightweight synchronization mechanisms used to enable multithreading. They guarantee consistency
of in-memory structures that include indices, data pages, and other internal structures.
There are many types of latches available. For simplicity purposes, buffer latches are used to protect in-memory
pages in the buffer pool. IO latches are used to protect pages not yet loaded into the buffer pool. Whenever data
is written to or read from a page in the buffer pool, a worker thread needs to acquire a buffer latch for the page
first. Whenever a worker thread attempts to access a page that isn't already available in the in-memory buffer
pool, an IO request is made to load the required information from the storage. This sequence of events indicates
a more severe form of performance degradation.
Contention on the page latches occurs when multiple threads concurrently attempt to acquire latches on the
same in-memory structure, which introduces an increased wait time to query execution. In the case of pagelatch
IO contention, when data needs to be accessed from storage, this wait time is even larger. It can affect workload
performance considerably. Pagelatch contention is the most common scenario of threads waiting on each other
and competing for resources on multiple CPU systems.
Troubleshooting
The diagnostics log outputs pagelatch contention details. You can use this information as the basis for
troubleshooting.
Because a pagelatch is an internal control mechanism, it automatically determines when to use them.
Application decisions, including schema design, can affect pagelatch behavior due to the deterministic behavior
of latches.
One method for handling latch contention is to replace a sequential index key with a nonsequential key to
evenly distribute inserts across an index range. Typically, a leading column in the index distributes the workload
proportionally. Another method to consider is table partitioning. Creating a hash partitioning scheme with a
computed column on a partitioned table is a common approach for mitigating excessive latch contention. In the
case of pagelatch IO contention, introducing indexes helps to mitigate this performance issue.
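A rough sketch of the hash-partitioning approach mentioned above, in which every object name and the choice of 16 partitions are illustrative, might look like this:

-- Spread inserts across 16 partitions keyed on a persisted computed hash column,
-- so that concurrent inserts no longer contend for latches on the same last page.
CREATE PARTITION FUNCTION pf_hash16 (tinyint)
    AS RANGE LEFT FOR VALUES (0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14);

CREATE PARTITION SCHEME ps_hash16
    AS PARTITION pf_hash16 ALL TO ([PRIMARY]);

CREATE TABLE dbo.EventLog
(
    EventId bigint IDENTITY NOT NULL,
    Payload nvarchar(400) NOT NULL,
    HashKey AS CAST(EventId % 16 AS tinyint) PERSISTED NOT NULL,
    CONSTRAINT PK_EventLog PRIMARY KEY CLUSTERED (EventId, HashKey)
        ON ps_hash16 (HashKey)
);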
For more information, see Diagnose and resolve latch contention on SQL Server (PDF download).
Missing index
What is happening
This performance pattern indicates the current database workload performance degradation compared to the
past seven-day baseline due to a missing index.
An index is used to speed up the performance of queries. It provides quick access to table data by reducing the
number of dataset pages that need to be visited or scanned.
This detection identifies the specific queries that caused the performance degradation and that would benefit
from having indexes created.
Troubleshooting
The diagnostics log outputs query hashes for the queries that were identified to affect the workload
performance. You can build indexes for these queries. You also can optimize or remove these queries if they
aren't required. A good performance practice is to avoid querying data that you don't use.
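For example, a covering nonclustered index for a flagged query might be sketched like this (the table and column names are illustrative and should be derived from the identified queries):

CREATE NONCLUSTERED INDEX IX_Orders_CustomerID
ON dbo.Orders (CustomerID)
INCLUDE (OrderDate, Status);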
TIP
Did you know that built-in intelligence can automatically manage the best-performing indexes for your databases?
For continuous performance optimization, we recommend that you enable automatic tuning. This unique built-in
intelligence feature continuously monitors your database and automatically tunes and creates indexes for your databases.
New query
What is happening
This performance pattern indicates that a new query is detected that is performing poorly and affecting the
workload performance compared to the seven-day performance baseline.
Writing a good-performing query sometimes can be a challenging task. For more information on writing
queries, see Writing SQL queries. To optimize existing query performance, see Query tuning.
Troubleshooting
The diagnostics log outputs information for up to two of the new queries that consume the most CPU, including
their query hashes. Because the detected query affects the workload performance, you can optimize your query.
Good practices include retrieving only the data you need, using a WHERE clause to restrict results, simplifying
complex queries by breaking them into smaller ones, and breaking large batch queries into smaller batches.
Introducing indexes for new queries is typically a good way to mitigate this performance issue.
In Azure SQL Database, consider using Query Performance Insight.
TempDB contention
What is happening
This detectable performance pattern indicates a database performance condition in which a bottleneck of
threads trying to access tempDB resources exists. (This condition isn't IO-related.) The typical scenario for this
performance issue is hundreds of concurrent queries that all create, use, and then drop small tempDB tables.
The system detected that the number of concurrent queries using the same tempDB tables increased with
sufficient statistical significance to affect database performance compared to the past seven-day performance
baseline.
Troubleshooting
The diagnostics log outputs tempDB contention details. You can use the information as the starting point for
troubleshooting. There are two things you can do to alleviate this kind of contention and increase the throughput
of the overall workload: stop using the temporary tables, or use memory-optimized tables instead.
For more information, see Introduction to memory-optimized tables.
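For example, where In-Memory OLTP is available in your service tier, a small temporary table that is created and dropped by many concurrent queries can sometimes be replaced with a table variable based on a memory-optimized table type. A minimal sketch, with an illustrative type name, column, and bucket count:

CREATE TYPE dbo.SessionKeys AS TABLE
(
    SessionKey int NOT NULL,
    INDEX ix_SessionKey NONCLUSTERED HASH (SessionKey) WITH (BUCKET_COUNT = 1024)
)
WITH (MEMORY_OPTIMIZED = ON);

-- Used in place of a #temp table inside a query or procedure:
DECLARE @keys dbo.SessionKeys;
INSERT INTO @keys (SessionKey) VALUES (1), (2), (3);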
Plan regression
What is happening
This detectable performance pattern denotes a condition in which the database uses a suboptimal query
execution plan. The suboptimal plan typically causes increased query execution times, which leads to longer wait
times for the current and other queries.
The database engine chooses the query execution plan with the lowest estimated cost for a query. As the types of
queries and workloads change, sometimes the existing plans are no longer efficient, or perhaps the database
engine didn't make a good assessment. As a matter of correction, query execution plans can be manually forced.
This detectable performance pattern combines three different cases of plan regression: new plan regression, old
plan regression, and existing plans changed workload. The particular type of plan regression that occurred is
provided in the details property in the diagnostics log.
The new plan regression condition refers to a state in which the database engine starts executing a new query
execution plan that isn't as efficient as the old plan. The old plan regression condition refers to the state when
the database engine switches from using a new, more efficient plan to the old plan, which isn't as efficient as the
new plan. The existing plans changed workload regression refers to the state in which the old and the new plans
continuously alternate, with the balance going more toward the poor-performing plan.
For more information on plan regressions, see What is plan regression in SQL Server?.
Troubleshooting
The diagnostics log outputs the query hashes, good plan ID, bad plan ID, and query IDs. You can use this
information as the basis for troubleshooting.
You can analyze which plan is better performing for your specific queries that you can identify with the query
hashes provided. After you determine which plan works better for your queries, you can manually force it.
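For example, plans can be forced through the Query Store procedures, using the query and plan IDs reported in the diagnostics log (the numeric values below are placeholders):

-- Force the better-performing plan for the identified query.
EXEC sp_query_store_force_plan @query_id = 42, @plan_id = 417;

-- Remove the forcing later if the workload changes.
EXEC sp_query_store_unforce_plan @query_id = 42, @plan_id = 417;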
For more information, see Learn how SQL Server prevents plan regressions.
TIP
Did you know that the built-in intelligence feature can automatically manage the best-performing query execution plans
for your databases?
For continuous performance optimization, we recommend that you enable automatic tuning. This built-in intelligence
feature continuously monitors your database and automatically tunes and creates best-performing query execution plans
for your databases.
Intelligent Insights usually needs about one hour to perform the root cause analysis of a performance issue.
If you can't locate your issue in Intelligent Insights and it's critical to you, use the Query Store to manually
identify the root cause of the performance issue. (Typically, these issues are less than one hour old.) For more
information, see Monitor performance by using the Query Store.
Next steps
Learn Intelligent Insights concepts.
Use the Intelligent Insights performance diagnostics log.
Monitor using Azure SQL Analytics.
Learn to collect and consume log data from your Azure resources.
How to use batching to improve Azure SQL
Database and Azure SQL Managed Instance
application performance
9/13/2022 • 26 minutes to read • Edit Online
Why is batching important for Azure SQL Database and Azure SQL
Managed Instance?
Batching calls to a remote service is a well-known strategy for increasing performance and scalability. There are
fixed processing costs to any interactions with a remote service, such as serialization, network transfer, and
deserialization. Packaging many separate transactions into a single batch minimizes these costs.
In this article, we want to examine various batching strategies and scenarios. Although these strategies are also
important for on-premises applications that use SQL Server, there are several reasons for highlighting the use
of batching for Azure SQL Database and Azure SQL Managed Instance:
There is potentially greater network latency in accessing Azure SQL Database and Azure SQL Managed
Instance, especially if you are accessing Azure SQL Database or Azure SQL Managed Instance from outside
the same Microsoft Azure datacenter.
The multitenant characteristics of Azure SQL Database and Azure SQL Managed Instance mean that the
efficiency of the data access layer correlates to the overall scalability of the database. In response to usage in
excess of predefined quotas, Azure SQL Database and Azure SQL Managed Instance can reduce throughput
or respond with throttling exceptions. Efficiencies, such as batching, enable you to do more work before
reaching these limits.
Batching is also effective for architectures that use multiple databases (sharding). The efficiency of your
interaction with each database unit is still a key factor in your overall scalability.
One of the benefits of using Azure SQL Database or Azure SQL Managed Instance is that you don't have to
manage the servers that host the database. However, this managed infrastructure also means that you have to
think differently about database optimizations. You can no longer look to improve the database hardware or
network infrastructure. Microsoft Azure controls those environments. The main area that you can control is how
your application interacts with Azure SQL Database and Azure SQL Managed Instance. Batching is one of these
optimizations.
The first part of this article examines various batching techniques for .NET applications that use Azure SQL
Database or Azure SQL Managed Instance. The last two sections cover batching guidelines and scenarios.
Batching strategies
Note about timing results in this article
NOTE
Results are not benchmarks but are meant to show relative performance . Timings are based on an average of at least
10 test runs. Operations are inserts into an empty table. These tests were measured pre-V12, and they do not necessarily
correspond to throughput that you might experience in a V12 database using the new DTU service tiers or vCore service
tiers. The relative benefit of the batching technique should be similar.
Transactions
It seems strange to begin a review of batching by discussing transactions. But the use of client-side transactions
has a subtle server-side batching effect that improves performance. And transactions can be added with only a
few lines of code, so they provide a fast way to improve performance of sequential operations.
Consider the following C# code that contains a sequence of insert and update operations on a simple table.
The best way to optimize this code is to implement some form of client-side batching of these calls. But there is
a simple way to increase the performance of this code by simply wrapping the sequence of calls in a transaction.
Here is the same code that uses a transaction.
transaction.Commit();
}
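A minimal sketch of this pattern, assuming a valid connectionString and a simple dbo.MyTable with a single nvarchar column named Value, looks like the following:

// Sketch only: the table schema and connection string are assumptions.
using (SqlConnection connection = new SqlConnection(connectionString))
{
    connection.Open();

    using (SqlTransaction transaction = connection.BeginTransaction())
    {
        for (int i = 0; i < 10; i++)
        {
            SqlCommand cmd = new SqlCommand(
                "INSERT INTO dbo.MyTable (Value) VALUES (@value)",
                connection,
                transaction);
            cmd.Parameters.AddWithValue("@value", "item " + i);
            cmd.ExecuteNonQuery();
        }

        // A single commit flushes the log writes for all of the inserts together.
        transaction.Commit();
    }
}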
Transactions are actually being used in both of these examples. In the first example, each individual call is an
implicit transaction. In the second example, an explicit transaction wraps all of the calls. Per the documentation
for the write-ahead transaction log, log records are flushed to the disk when the transaction commits. So by
including more calls in a transaction, the write to the transaction log can delay until the transaction is
committed. In effect, you are enabling batching for the writes to the server's transaction log.
The following table shows some ad hoc testing results. The tests performed the same sequential inserts with
and without transactions. For more perspective, the first set of tests ran remotely from a laptop to the database
in Microsoft Azure. The second set of tests ran from a cloud service and database that both resided within the
same Microsoft Azure datacenter (West US). The following table shows the duration in milliseconds of
sequential inserts with and without transactions.
On-premises to Azure:

OPERATIONS    NO TRANSACTION (MS)    TRANSACTION (MS)
1             130                    402
10            1208                   1226

Azure same datacenter:

OPERATIONS    NO TRANSACTION (MS)    TRANSACTION (MS)
1             21                     26
10            220                    56
NOTE
Results are not benchmarks. See the note about timing results in this article.
Based on the previous test results, wrapping a single operation in a transaction actually decreases performance.
But as you increase the number of operations within a single transaction, the performance improvement
becomes more marked. The performance difference is also more noticeable when all operations occur within
the Microsoft Azure datacenter. The increased latency of using Azure SQL Database or Azure SQL Managed
Instance from outside the Microsoft Azure datacenter overshadows the performance gain of using transactions.
Although the use of transactions can increase performance, continue to observe best practices for transactions
and connections. Keep the transaction as short as possible, and close the database connection after the work
completes. The using statement in the previous example ensures that the connection is closed when the
subsequent code block completes.
The previous example demonstrates that you can add a local transaction to any ADO.NET code with two lines.
Transactions offer a quick way to improve the performance of code that makes sequential insert, update, and
delete operations. However, for the fastest performance, consider changing the code further to take advantage
of client-side batching, such as table-valued parameters.
For more information about transactions in ADO.NET, see Local Transactions in ADO.NET.
Table -valued parameters
Table-valued parameters support user-defined table types as parameters in Transact-SQL statements, stored
procedures, and functions. This client-side batching technique allows you to send multiple rows of data within
the table-valued parameter. To use table-valued parameters, first define a table type. The following Transact-SQL
statement creates a table type named MyTableType .
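A minimal sketch of such a type definition, with illustrative columns that should match your destination table, might be:

CREATE TYPE MyTableType AS TABLE
(
    MyText nvarchar(50),
    Num int
);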
In code, you create a DataTable with the exact same names and types of the table type. Pass this DataTable in a
parameter in a text query or stored procedure call. The following example shows this technique:
cmd.Parameters.Add(
new SqlParameter()
{
ParameterName = "@TestTvp",
SqlDbType = SqlDbType.Structured,
TypeName = "MyTableType",
Value = table,
});
cmd.ExecuteNonQuery();
}
In the previous example, the SqlCommand object inserts rows from a table-valued parameter, @TestTvp . The
previously created DataTable object is assigned to this parameter with the SqlCommand.Parameters.Add
method. Batching the inserts in one call significantly increases the performance over sequential inserts.
To improve the previous example further, use a stored procedure instead of a text-based command. The
following Transact-SQL command creates a stored procedure that takes the SimpleTestTableType table-valued
parameter.
In most cases, table-valued parameters have equivalent or better performance than other batching techniques.
Table-valued parameters are often preferable, because they are more flexible than other options. For example,
other techniques, such as SQL bulk copy, only permit the insertion of new rows. But with table-valued
parameters, you can use logic in the stored procedure to determine which rows are updates and which are
inserts. The table type can also be modified to contain an "Operation" column that indicates whether the
specified row should be inserted, updated, or deleted.
The following table shows ad hoc test results for the use of table-valued parameters in milliseconds.
OPERATIONS    ON-PREMISES TO AZURE (MS)    AZURE SAME DATACENTER (MS)
1             124                          32
10            131                          25
100           338                          51
NOTE
Results are not benchmarks. See the note about timing results in this article.
The performance gain from batching is immediately apparent. In the previous sequential test, 1000 operations
took 129 seconds outside the datacenter and 21 seconds from within the datacenter. But with table-valued
parameters, 1000 operations take only 2.6 seconds outside the datacenter and 0.4 seconds within the
datacenter.
For more information on table-valued parameters, see Table-Valued Parameters.
SQL bulk copy
SQL bulk copy is another way to insert large amounts of data into a target database. .NET applications can use
the SqlBulkCopy class to perform bulk insert operations. SqlBulkCopy is similar in function to the command-
line tool, Bcp.exe , or the Transact-SQL statement, BULK INSERT . The following code example shows how to
bulk copy the rows in the source DataTable , table, to the destination table, MyTable.
using (SqlConnection connection = new
SqlConnection(CloudConfigurationManager.GetSetting("Sql.ConnectionString")))
{
connection.Open();
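    // Sketch only: "table" is the previously populated source DataTable, and the
    // destination table MyTable is assumed to have a matching column layout.
    using (SqlBulkCopy bulkCopy = new SqlBulkCopy(connection))
    {
        bulkCopy.DestinationTableName = "MyTable";
        bulkCopy.WriteToServer(table);
    }
}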
There are some cases where bulk copy is preferred over table-valued parameters. See the comparison table of
Table-Valued parameters versus BULK INSERT operations in the article Table-Valued Parameters.
The following ad hoc test results show the performance of batching with SqlBulkCopy in milliseconds.
OPERATIONS    ON-PREMISES TO AZURE (MS)    AZURE SAME DATACENTER (MS)
1             433                          57
10            441                          32
100           636                          53
NOTE
Results are not benchmarks. See the note about timing results in this article.
At smaller batch sizes, the use of table-valued parameters outperformed the SqlBulkCopy class. However,
SqlBulkCopy performed 12-31% faster than table-valued parameters for the tests of 1,000 and 10,000 rows.
Like table-valued parameters, SqlBulkCopy is a good option for batched inserts, especially when compared to
the performance of non-batched operations.
For more information on bulk copy in ADO.NET, see Bulk Copy Operations.
Multiple -row parameterized INSERT statements
One alternative for small batches is to construct a large parameterized INSERT statement that inserts multiple
rows. The following code example demonstrates this technique.
using (SqlConnection connection = new
SqlConnection(CloudConfigurationManager.GetSetting("Sql.ConnectionString")))
{
connection.Open();
cmd.ExecuteNonQuery();
}
This example is meant to show the basic concept. A more realistic scenario would loop through the required
entities to construct the query string and the command parameters simultaneously. You are limited to a total of
2100 query parameters, so this limits the total number of rows that can be processed in this manner.
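A rough sketch of that looping construction, assuming an open connection, an illustrative dbo.MyTable(Value nvarchar(50)) table, and a small list of string values, while staying well under the 2,100-parameter limit:

// Sketch only: requires System.Collections.Generic and System.Text.
var values = new List<string> { "first", "second", "third" };
var sql = new StringBuilder("INSERT INTO dbo.MyTable (Value) VALUES ");
SqlCommand cmd = new SqlCommand();
cmd.Connection = connection;

for (int i = 0; i < values.Count; i++)
{
    string paramName = "@p" + i;
    sql.Append(i == 0 ? "" : ", ").Append("(").Append(paramName).Append(")");
    cmd.Parameters.AddWithValue(paramName, values[i]);
}

cmd.CommandText = sql.ToString();
cmd.ExecuteNonQuery();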
The following ad hoc test results show the performance of this type of insert statement in milliseconds.
OPERATIONS    ON-PREMISES TO AZURE (MS)    AZURE SAME DATACENTER (MS)
1             32                           20
10            30                           25
100           33                           51
NOTE
Results are not benchmarks. See the note about timing results in this article.
This approach can be slightly faster for batches that are less than 100 rows. Although the improvement is small,
this technique is another option that might work well in your specific application scenario.
DataAdapter
The DataAdapter class allows you to modify a DataSet object and then submit the changes as INSERT,
UPDATE, and DELETE operations. If you are using the DataAdapter in this manner, it is important to note that
separate calls are made for each distinct operation. To improve performance, set the UpdateBatchSize
property to the number of operations that should be batched at a time. For more information, see Performing
Batch Operations Using DataAdapters.
Entity Framework
Entity Framework Core supports batching.
XML
For completeness, it is important to mention XML as a batching strategy. However, the use of XML has no
advantages over other methods and several disadvantages. The approach is similar to table-valued parameters,
but an XML file or string is passed to a stored procedure instead of a user-defined table. The stored procedure
then parses the data or the commands contained in the XML.
There are several disadvantages to this approach:
Working with XML can be cumbersome and error prone.
Parsing the XML on the database can be CPU-intensive.
In most cases, this method is slower than table-valued parameters.
For these reasons, the use of XML for batch queries is not recommended.
Batching considerations
The following sections provide more guidance for the use of batching in Azure SQL Database and Azure SQL
Managed Instance applications.
Tradeoffs
Depending on your architecture, batching can involve a tradeoff between performance and resiliency. For
example, consider the scenario where your role unexpectedly goes down. If you lose one row of data, the impact
is smaller than the impact of losing a large batch of unsubmitted rows. There is a greater risk when you buffer
rows before sending them to the database in a specified time window.
Because of this tradeoff, evaluate the type of operations that you batch. Batch more aggressively (larger batches
and longer time windows) with data that is less critical.
Batch size
In our tests, there was typically no advantage to breaking large batches into smaller chunks. In fact, this
subdivision often resulted in slower performance than submitting a single large batch. For example, consider a
scenario where you want to insert 1000 rows. The following table shows how long it takes to use table-valued
parameters to insert 1000 rows when divided into smaller batches.
BATCH SIZE    ITERATIONS    TABLE-VALUED PARAMETERS (MS)
1000          1             347
500           2             355
100           10            465
50            20            630
NOTE
Results are not benchmarks. See the note about timing results in this article.
You can see that the best performance for 1000 rows is to submit them all at once. In other tests (not shown
here), there was a small performance gain from breaking a 10000-row batch into two batches of 5000. But the table
schema for these tests is relatively simple, so you should perform tests on your specific data and batch sizes to
verify these findings.
Another factor to consider is that if the total batch becomes too large, Azure SQL Database or Azure SQL
Managed Instance might throttle and refuse to commit the batch. For the best results, test your specific scenario
to determine if there is an ideal batch size. Make the batch size configurable at runtime to enable quick
adjustments based on performance or errors.
Finally, balance the size of the batch with the risks associated with batching. If there are transient errors or the
role fails, consider the consequences of retrying the operation or of losing the data in the batch.
Parallel processing
What if you took the approach of reducing the batch size but used multiple threads to execute the work? Again,
our tests showed that several smaller multithreaded batches typically performed worse than a single larger
batch. The following test attempts to insert 1000 rows in one or more parallel batches. This test shows how
more simultaneous batches actually decreased performance.
NOTE
Results are not benchmarks. See the note about timing results in this article.
There are several potential reasons for the degradation in performance due to parallelism:
There are multiple simultaneous network calls instead of one.
Multiple operations against a single table can result in contention and blocking.
There are overheads associated with multithreading.
The expense of opening multiple connections outweighs the benefit of parallel processing.
If you target different tables or databases, it is possible to see some performance gain with this strategy.
Database sharding or federations would be a scenario for this approach. Sharding uses multiple databases and
routes different data to each database. If each small batch is going to a different database, then performing the
operations in parallel can be more efficient. However, the performance gain is not significant enough to use as
the basis for a decision to use database sharding in your solution.
In some designs, parallel execution of smaller batches can result in improved throughput of requests in a system
under load. In this case, even though it is quicker to process a single larger batch, processing multiple batches in
parallel might be more efficient.
If you do use parallel execution, consider controlling the maximum number of worker threads. A smaller
number might result in less contention and a faster execution time. Also, consider the additional load that this
places on the target database both in connections and transactions.
Related performance factors
Typical guidance on database performance also affects batching. For example, insert performance is reduced for
tables that have a large primary key or many nonclustered indexes.
If table-valued parameters use a stored procedure, you can use the command SET NOCOUNT ON at the
beginning of the procedure. This statement suppresses the return of the count of the affected rows in the
procedure. However, in our tests, the use of SET NOCOUNT ON either had no effect or decreased
performance. The test stored procedure was simple with a single INSERT command from the table-valued
parameter. It is possible that more complex stored procedures would benefit from this statement. But don't
assume that adding SET NOCOUNT ON to your stored procedure automatically improves performance. To
understand the effect, test your stored procedure with and without the SET NOCOUNT ON statement.
Batching scenarios
The following sections describe how to use table-valued parameters in three application scenarios. The first
scenario shows how buffering and batching can work together. The second scenario improves performance by
performing master-detail operations in a single stored procedure call. The final scenario shows how to use
table-valued parameters in an "UPSERT" operation.
Buffering
Although some scenarios are obvious candidates for batching, many others can take advantage of batching
through delayed processing. However, delayed processing also carries a greater risk that
the data is lost in the event of an unexpected failure. It is important to understand this risk and consider the
consequences.
For example, consider a web application that tracks the navigation history of each user. On each page request,
the application could make a database call to record the user's page view. But higher performance and scalability
can be achieved by buffering the users' navigation activities and then sending this data to the database in
batches. You can trigger the database update by elapsed time and/or buffer size. For example, a rule could
specify that the batch should be processed after 20 seconds or when the buffer reaches 1000 items.
The following code example uses Reactive Extensions - Rx to process buffered events raised by a monitoring
class. When the buffer fills or a timeout is reached, the batch of user data is sent to the database with a table-
valued parameter.
The following NavHistoryData class models the user navigation details. It contains basic information such as the
user identifier, the URL accessed, and the access time.
The NavHistoryDataMonitor class is responsible for buffering the user navigation data to the database. It
contains a method, RecordUserNavigationEntry, which responds by raising an OnAdded event. The following
code shows the constructor logic that uses Rx to create an observable collection based on the event. It then
subscribes to this observable collection with the Buffer method. The overload specifies that the buffer should be
sent every 20 seconds or 1000 entries.
public NavHistoryDataMonitor()
{
var observableData =
Observable.FromEventPattern<NavHistoryDataEventArgs>(this, "OnAdded");
observableData.Buffer(TimeSpan.FromSeconds(20), 1000).Subscribe(Handler);
}
The handler converts all of the buffered items into a table-valued type and then passes this type to a stored
procedure that processes the batch. The following code shows the complete definition for both the
NavHistoryDataEventArgs and the NavHistoryDataMonitor classes.
public class NavHistoryDataEventArgs : System.EventArgs
{
public NavHistoryDataEventArgs(NavHistoryData data) { Data = data; }
public NavHistoryData Data { get; set; }
}
public NavHistoryDataMonitor()
{
var observableData =
Observable.FromEventPattern<NavHistoryDataEventArgs>(this, "OnAdded");
observableData.Buffer(TimeSpan.FromSeconds(20), 1000).Subscribe(Handler);
}
cmd.Parameters.Add(
new SqlParameter()
{
ParameterName = "@NavHistoryBatch",
SqlDbType = SqlDbType.Structured,
TypeName = "NavigationHistoryTableType",
Value = navHistoryBatch,
});
cmd.ExecuteNonQuery();
}
}
}
To use this buffering class, the application creates a static NavHistoryDataMonitor object. Each time a user
accesses a page, the application calls the NavHistoryDataMonitor.RecordUserNavigationEntry method. The
buffering logic proceeds to take care of sending these entries to the database in batches.
Master detail
Table-valued parameters are useful for simple INSERT scenarios. However, it can be more challenging to batch
inserts that involve more than one table. The "master/detail" scenario is a good example. The master table
identifies the primary entity. One or more detail tables store more data about the entity. In this scenario, foreign
key relationships enforce the relationship of details to a unique master entity. Consider a simplified version of a
PurchaseOrder table and its associated OrderDetail table. The following Transact-SQL creates the PurchaseOrder
table with four columns: OrderID, OrderDate, CustomerID, and Status.
Each order contains one or more product purchases. This information is captured in the PurchaseOrderDetail
table. The following Transact-SQL creates the PurchaseOrderDetail table with five columns: OrderID,
OrderDetailID, ProductID, UnitPrice, and OrderQty.
The OrderID column in the PurchaseOrderDetail table must reference an order from the PurchaseOrder table.
The following definition of a foreign key enforces this constraint.
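A sketch of that constraint, using the table and column names above, might be:

ALTER TABLE PurchaseOrderDetail
ADD CONSTRAINT FK_PurchaseOrderDetail_PurchaseOrder
FOREIGN KEY (OrderID) REFERENCES PurchaseOrder (OrderID);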
In order to use table-valued parameters, you must have one user-defined table type for each target table.
Then define a stored procedure that accepts tables of these types. This procedure allows an application to locally
batch a set of orders and order details in a single call. The following Transact-SQL provides the complete stored
procedure declaration for this purchase order example.
CREATE PROCEDURE sp_InsertOrdersBatch (
@orders as PurchaseOrderTableType READONLY,
@details as PurchaseOrderDetailTableType READONLY )
AS
SET NOCOUNT ON;
In this example, the locally defined @IdentityLink table stores the actual OrderID values from the newly inserted
rows. These order identifiers are different from the temporary OrderID values in the @orders and @details
table-valued parameters. For this reason, the @IdentityLink table then connects the OrderID values from the
@orders parameter to the real OrderID values for the new rows in the PurchaseOrder table. After this step, the
@IdentityLink table can facilitate inserting the order details with the actual OrderID that satisfies the foreign key
constraint.
This stored procedure can be used from code or from other Transact-SQL calls. See the table-valued parameters
section of this paper for a code example. The following Transact-SQL shows how to call the
sp_InsertOrdersBatch.
declare @orders as PurchaseOrderTableType
declare @details as PurchaseOrderDetailTableType
INSERT @orders
([OrderID], [OrderDate], [CustomerID], [Status])
VALUES(1, '1/1/2013', 1125, 'Complete'),
(2, '1/13/2013', 348, 'Processing'),
(3, '1/12/2013', 2504, 'Shipped')
INSERT @details
([OrderID], [ProductID], [UnitPrice], [OrderQty])
VALUES(1, 10, $11.50, 1),
(1, 12, $1.58, 1),
(2, 23, $2.57, 2),
(3, 4, $10.00, 1)
This solution allows each batch to use a set of OrderID values that begin at 1. These temporary OrderID values
describe the relationships in the batch, but the actual OrderID values are determined at the time of the insert
operation. You can run the same statements in the previous example repeatedly and generate unique orders in
the database. For this reason, consider adding more code or database logic that prevents duplicate orders when
using this batching technique.
This example demonstrates that even more complex database operations, such as master-detail operations, can
be batched using table-valued parameters.
UPSERT
Another batching scenario involves simultaneously updating existing rows and inserting new rows. This
operation is sometimes referred to as an "UPSERT" (update + insert) operation. Rather than making separate
calls to INSERT and UPDATE, the MERGE statement can be a suitable replacement. The MERGE statement can
perform both insert and update operations in a single call. The MERGE statement locking mechanics work
differently from separate INSERT and UPDATE statements. Test your specific workloads before deploying to
production.
Table-valued parameters can be used with the MERGE statement to perform updates and inserts. For example,
consider a simplified Employee table that contains the following columns: EmployeeID, FirstName, LastName,
SocialSecurityNumber:
In this example, you can use the fact that the SocialSecurityNumber is unique to perform a MERGE of multiple
employees. First, create the user-defined table type:
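A sketch of that type, with assumed data types for the columns named above, might be:

CREATE TYPE EmployeeTableType AS TABLE
(
    EmployeeID int,
    FirstName nvarchar(50),
    LastName nvarchar(50),
    SocialSecurityNumber nvarchar(11)
);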
Next, create a stored procedure or write code that uses the MERGE statement to perform the update and insert.
The following example uses the MERGE statement on a table-valued parameter, @employees, of type
EmployeeTableType. The contents of the @employees table are not shown here.
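A rough sketch of such a MERGE, keyed on SocialSecurityNumber and assuming the Employee table and EmployeeTableType described above, might look like this:

MERGE Employee AS target
USING (SELECT FirstName, LastName, SocialSecurityNumber FROM @employees) AS source
    ON (target.SocialSecurityNumber = source.SocialSecurityNumber)
WHEN MATCHED THEN
    UPDATE SET target.FirstName = source.FirstName,
               target.LastName = source.LastName
WHEN NOT MATCHED THEN
    INSERT (FirstName, LastName, SocialSecurityNumber)
    VALUES (source.FirstName, source.LastName, source.SocialSecurityNumber);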
For more information, see the documentation and examples for the MERGE statement. Although the same work
could be performed in a multiple-step stored procedure call with separate INSERT and UPDATE operations, the
MERGE statement is more efficient. Database code can also construct Transact-SQL calls that use the MERGE
statement directly without requiring two database calls for INSERT and UPDATE.
Recommendation summary
The following list provides a summary of the batching recommendations discussed in this article:
Use buffering and batching to increase the performance and scalability of Azure SQL Database and Azure
SQL Managed Instance applications.
Understand the tradeoffs between batching/buffering and resiliency. During a role failure, the risk of losing
an unprocessed batch of business-critical data might outweigh the performance benefit of batching.
Attempt to keep all calls to the database within a single datacenter to reduce latency.
If you choose a single batching technique, table-valued parameters offer the best performance and flexibility.
For the fastest insert performance, follow these general guidelines but test your scenario:
For < 100 rows, use a single parameterized INSERT command.
For < 1000 rows, use table-valued parameters.
For >= 1000 rows, use SqlBulkCopy.
For update and delete operations, use table-valued parameters with stored procedure logic that determines
the correct operation on each row in the table parameter.
Batch size guidelines:
Use the largest batch sizes that make sense for your application and business requirements.
Balance the performance gain of large batches with the risks of temporary or catastrophic failures.
What is the consequence of retries or loss of the data in the batch?
Test the largest batch size to verify that Azure SQL Database or Azure SQL Managed Instance does not
reject it.
Create configuration settings that control batching, such as the batch size or the buffering time
window. These settings provide flexibility. You can change the batching behavior in production without
redeploying the cloud service.
Avoid parallel execution of batches that operate on a single table in one database. If you do choose to divide
a single batch across multiple worker threads, run tests to determine the ideal number of threads. After an
unspecified threshold, more threads will decrease performance rather than increase it.
Consider buffering on size and time as a way of implementing batching for more scenarios.
Next steps
This article focused on how database design and coding techniques related to batching can improve your
application performance and scalability. But this is just one factor in your overall strategy. For more ways to
improve performance and scalability, see Database performance guidance and Price and performance
considerations for an elastic pool.
Load data from CSV into Azure SQL Database or
SQL Managed Instance (flat files)
9/13/2022 • 2 minutes to read • Edit Online
(Optional) To export your own data from a SQL Server database, open a command prompt and run the
following command. Replace TableName, ServerName, DatabaseName, Username, and Password with your own
information.
sqlcmd.exe -S <server name> -d <database name> -U <username> -P <password> -I -Q "SELECT * FROM DimDate2
ORDER BY 1;"
DATEID      CALENDARQUARTER    FISCALQUARTER
20150101    1                  3
20150201    1                  3
20150301    1                  3
20150401    2                  4
20150501    2                  4
20150601    2                  4
20150701    3                  1
20150801    3                  1
20151001    4                  2
20151101    4                  2
20151201    4                  2
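To import a flat file such as this into the database, one common approach is the bcp command-line utility. A sketch, assuming a comma-delimited file named dimdate2.csv in the current directory and the same placeholder connection values as above:

bcp DimDate2 in .\dimdate2.csv -S <server name> -d <database name> -U <username> -P <password> -q -c -t ','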
Next steps
To migrate a SQL Server database, see SQL Server database migration.
Tune applications and databases for performance in
Azure SQL Database and Azure SQL Managed
Instance
9/13/2022 • 18 minutes to read • Edit Online
SELECT
CONVERT (varchar, getdate(), 126) AS runtime
, mig.index_group_handle
, mid.index_handle
, CONVERT (decimal (28,1), migs.avg_total_user_cost * migs.avg_user_impact *
(migs.user_seeks + migs.user_scans)) AS improvement_measure
, 'CREATE INDEX missing_index_' + CONVERT (varchar, mig.index_group_handle) + '_' +
CONVERT (varchar, mid.index_handle) + ' ON ' + mid.statement + '
(' + ISNULL (mid.equality_columns,'')
+ CASE WHEN mid.equality_columns IS NOT NULL
AND mid.inequality_columns IS NOT NULL
THEN ',' ELSE '' END + ISNULL (mid.inequality_columns, '') + ')'
+ ISNULL (' INCLUDE (' + mid.included_columns + ')', '') AS create_index_statement
, migs.*
, mid.database_id
, mid.[object_id]
FROM sys.dm_db_missing_index_groups AS mig
INNER JOIN sys.dm_db_missing_index_group_stats AS migs
ON migs.group_handle = mig.index_group_handle
INNER JOIN sys.dm_db_missing_index_details AS mid
ON mig.index_handle = mid.index_handle
ORDER BY migs.avg_total_user_cost * migs.avg_user_impact * (migs.user_seeks + migs.user_scans) DESC
After it's created, that same SELECT statement picks a different plan, which uses a seek instead of a scan, and
then executes the plan more efficiently:
The key insight is that the IO capacity of a shared, commodity system is more limited than that of a dedicated
server machine. There's a premium on minimizing unnecessary IO to take maximum advantage of the resources
available at each compute size of the service tiers. Appropriate physical database design choices can
significantly improve the latency for individual queries, improve the throughput of concurrent requests handled
per scale unit, and minimize the costs required to satisfy the query.
For more information about tuning indexes using missing index requests, see Tune nonclustered indexes with
missing index suggestions.
Query tuning and hinting
The query optimizer in Azure SQL Database and Azure SQL Managed Instance is similar to the traditional SQL
Server query optimizer. Most of the best practices for tuning queries and understanding the reasoning model
limitations for the query optimizer also apply to Azure SQL Database and Azure SQL Managed Instance. If you
tune queries in Azure SQL Database and Azure SQL Managed Instance, you might get the additional benefit of
reducing aggregate resource demands. Your application might be able to run at a lower cost than an un-tuned
equivalent because it can run at a lower compute size.
An example that is common in SQL Server and which also applies to Azure SQL Database and Azure SQL
Managed Instance is how the query optimizer "sniffs" parameters. During compilation, the query optimizer
evaluates the current value of a parameter to determine whether it can generate a more optimal query plan.
Although this strategy often can lead to a query plan that is significantly faster than a plan compiled without
known parameter values, currently it works imperfectly in SQL Server, Azure SQL Database, and Azure SQL
Managed Instance. Sometimes the parameter is not sniffed, and sometimes the parameter is sniffed but the
generated plan is suboptimal for the full set of parameter values in a workload. Microsoft includes query hints
(directives) so that you can specify intent more deliberately and override the default behavior of parameter
sniffing. Often, if you use hints, you can fix cases in which the default SQL Server, Azure SQL Database, and
Azure SQL Managed Instance behavior is imperfect for a specific customer workload.
The next example demonstrates how the query processor can generate a plan that is suboptimal both for
performance and resource requirements. This example also shows that if you use a query hint, you can reduce
query run time and resource requirements for your database:
DROP TABLE psptest1;
CREATE TABLE psptest1(col1 int primary key identity, col2 int, col3 binary(200));
DECLARE @a int = 0;
SET NOCOUNT ON;
BEGIN TRANSACTION
WHILE @a < 20000
BEGIN
INSERT INTO psptest1(col2) values (1);
INSERT INTO psptest1(col2) values (@a);
SET @a += 1;
END
COMMIT TRANSACTION
CREATE INDEX i1 on psptest1(col2);
GO
CREATE TABLE t1 (col1 int primary key, col2 int, col3 binary(200));
GO
The setup code creates a table that has skewed data distribution. The optimal query plan differs based on which
parameter is selected. Unfortunately, the plan caching behavior doesn't always recompile the query based on
the most common parameter value. So, it's possible for a suboptimal plan to be cached and used for many
values, even when a different plan might be a better plan choice on average. The example then creates two
stored procedures that are identical, except that one has a special query hint.
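A sketch of two such procedures, consistent with the calls shown later in this example (the procedure bodies and the exact hint placement are assumptions), might be:

CREATE PROCEDURE psp1 @param1 int
AS
BEGIN
    INSERT INTO t1 (col1, col2, col3)
    SELECT col1, col2, col3 FROM psptest1 WHERE col2 = @param1;
END
GO

CREATE PROCEDURE psp2 @param2 int
AS
BEGIN
    INSERT INTO t1 (col1, col2, col3)
    SELECT col1, col2, col3 FROM psptest1 WHERE col2 = @param2
    OPTION (OPTIMIZE FOR (@param2 UNKNOWN));
END
GO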
We recommend that you wait at least 10 minutes before you begin part 2 of the example, so that the results are
distinct in the resulting telemetry data.
EXEC psp2 @param2=1;
TRUNCATE TABLE t1;
DECLARE @i int = 0;
WHILE @i < 1000
BEGIN
EXEC psp2 @param2=2;
TRUNCATE TABLE t1;
SET @i += 1;
END
Each part of this example attempts to run a parameterized insert statement 1,000 times (to generate a sufficient
load to use as a test data set). When it executes stored procedures, the query processor examines the parameter
value that is passed to the procedure during its first compilation (parameter "sniffing"). The processor caches the
resulting plan and uses it for later invocations, even if the parameter value is different. The optimal plan might
not be used in all cases. Sometimes you need to guide the optimizer to pick a plan that is better for the average
case rather than the specific case from when the query was first compiled. In this example, the initial plan
generates a "scan" plan that reads all rows to find each value that matches the parameter:
Because we executed the procedure by using the value 1, the resulting plan was optimal for the value 1 but was
suboptimal for all other values in the table. The result likely isn't what you would want if you were to pick each
plan randomly, because the plan performs more slowly and uses more resources.
If you run the test with SET STATISTICS IO set to ON , the logical scan work in this example is done behind the
scenes. You can see that there are 1,148 reads done by the plan (which is inefficient, if the average case is to
return just one row):
The second part of the example uses a query hint to tell the optimizer to use a specific value during the
compilation process. In this case, it forces the query processor to ignore the value that is passed as the
parameter, and instead to assume UNKNOWN . This refers to a value that has the average frequency in the table
(ignoring skew). The resulting plan is a seek-based plan that is faster and uses fewer resources, on average, than
the plan in part 1 of this example:
You can see the effect in the sys.resource_stats table (there is a delay from the time that you execute the test
and when the data populates the table). For this example, part 1 executed during the 22:25:00 time window, and
part 2 executed at 22:35:00. The earlier time window used more resources than the later one (because of plan
efficiency improvements).
SELECT TOP 1000 *
FROM sys.resource_stats
WHERE database_name = 'resource1'
ORDER BY start_time DESC
NOTE
Although the volume in this example is intentionally small, the effect of suboptimal parameters can be substantial,
especially on larger databases. The difference, in extreme cases, can be between seconds for fast cases and hours for slow
cases.
You can examine sys.resource_stats to determine whether the resource for a test uses more or fewer
resources than another test. When you compare data, separate the timing of tests so that they are not in the
same 5-minute window in the sys.resource_stats view. The goal of the exercise is to minimize the total
amount of resources used, and not to minimize the peak resources. Generally, optimizing a piece of code for
latency also reduces resource consumption. Make sure that the changes you make to an application are
necessary, and that the changes don't negatively affect the customer experience for someone who might be
using query hints in the application.
If a workload has a set of repeating queries, often it makes sense to capture and validate the optimality of your
plan choices because it drives the minimum resource size unit required to host the database. After you validate
it, occasionally reexamine the plans to help you make sure that they have not degraded. You can learn more
about query hints (Transact-SQL).
Very large database architectures
Before the release of Hyperscale service tier for single databases in Azure SQL Database, customers used to hit
capacity limits for individual databases. These capacity limits still exist for pooled databases in Azure SQL
Database elastic pools and instance databases in Azure SQL Managed Instances. The following two sections
discuss two options for solving problems with very large databases in Azure SQL Database and Azure SQL
Managed Instance when you cannot use the Hyperscale service tier.
Cross-database sharding
Because Azure SQL Database and Azure SQL Managed Instance run on commodity hardware, the capacity
limits for an individual database are lower than for a traditional on-premises SQL Server installation. Some
customers use sharding techniques to spread database operations over multiple databases when the operations
don't fit inside the limits of an individual database in Azure SQL Database and Azure SQL Managed Instance.
Most customers who use sharding techniques in Azure SQL Database and Azure SQL Managed Instance split
their data on a single dimension across multiple databases. For this approach, you need to understand that OLTP
applications often perform transactions that apply to only one row or to a small group of rows in the schema.
NOTE
Azure SQL Database now provides a library to assist with sharding. For more information, see Elastic Database client
library overview.
For example, if a database has customer name, order, and order details (like the traditional example Northwind
database that ships with SQL Server), you could split this data into multiple databases by grouping a customer
with the related order and order detail information. You can guarantee that the customer's data stays in an
individual database. The application would split different customers across databases, effectively spreading the
load across multiple databases. With sharding, customers not only can avoid the maximum database size limit,
but Azure SQL Database and Azure SQL Managed Instance also can process workloads that are significantly
larger than the limits of the different compute sizes, as long as each individual database fits into its service tier
limits.
Although database sharding doesn't reduce the aggregate resource capacity for a solution, it's highly effective at
supporting very large solutions that are spread over multiple databases. Each database can run at a different
compute size to support very large, "effective" databases with high resource requirements.
Functional partitioning
Users often combine many functions in an individual database. For example, if an application has logic to
manage inventory for a store, that database might have logic associated with inventory, tracking purchase
orders, stored procedures, and indexed or materialized views that manage end-of-month reporting. This
technique makes it easier to administer the database for operations like backup, but it also requires you to size
the hardware to handle the peak load across all functions of an application.
If you use a scale-out architecture in Azure SQL Database and Azure SQL Managed Instance, it's a good idea to
split different functions of an application into different databases. By using this technique, each application
scales independently. As an application becomes busier (and the load on the database increases), the
administrator can choose independent compute sizes for each function in the application. At the limit, with this
architecture, an application can be larger than a single commodity machine can handle because the load is
spread across multiple machines.
Batch queries
For applications that access data by using high-volume, frequent, ad hoc querying, a substantial amount of
response time is spent on network communication between the application tier and the database tier. Even when
both the application and the database are in the same data center, the network latency between the two might
be magnified by a large number of data access operations. To reduce the network round trips for the data access
operations, consider using the option to either batch the ad hoc queries, or to compile them as stored
procedures. If you batch the ad hoc queries, you can send multiple queries as one large batch in a single trip to
the database. If you compile ad hoc queries in a stored procedure, you could achieve the same result as if you
batch them. Using a stored procedure also gives you the benefit of increasing the chances of caching the query
plans in the database so you can use the stored procedure again.
Some applications are write-intensive. Sometimes you can reduce the total IO load on a database by considering
how to batch writes together. Often, this is as simple as using explicit transactions instead of auto-commit
transactions in stored procedures and ad hoc batches. For an evaluation of different techniques you can use, see
Batching techniques for database applications in Azure. Experiment with your own workload to find the right
model for batching. Be sure to understand that a model might have slightly different transactional consistency
guarantees. Finding the right workload that minimizes resource use requires finding the right combination of
consistency and performance trade-offs.
Application-tier caching
Some database applications have read-heavy workloads. Caching layers might reduce the load on the database
and might potentially reduce the compute size required to support a database by using Azure SQL Database
and Azure SQL Managed Instance. With Azure Cache for Redis, if you have a read-heavy workload, you can read
the data once (or perhaps once per application-tier machine, depending on how it is configured), and then store
that data outside of your database. This is a way to reduce database load (CPU and read IO), but there is an
effect on transactional consistency because the data being read from the cache might be out of sync with the
data in the database. Although in many applications some level of inconsistency is acceptable, that's not true for
all workloads. You should fully understand any application requirements before you implement an application-
tier caching strategy.
Get configuration and design tips
If you use Azure SQL Database, you can execute an open-source T-SQL script for improving database
configuration and design in Azure SQL DB. The script will analyze your database on demand and provide tips to
improve database performance and health. Some tips suggest configuration and operational changes based on
best practices, while other tips recommend design changes suitable for your workload, such as enabling
advanced database engine features.
To learn more about the script and get started, visit the Azure SQL Tips wiki page.
Next steps
Learn about the DTU-based purchasing model
Learn more about the vCore-based purchasing model
Read What is an Azure elastic pool?
Discover When to consider an elastic pool
Read about Monitoring Microsoft Azure SQL Database and Azure SQL Managed Instance performance using
dynamic management views
Learn to Diagnose and troubleshoot high CPU on Azure SQL Database
Tune nonclustered indexes with missing index suggestions
Video: Data Loading Best Practices on Azure SQL Database
Configure streaming export of Azure SQL Database
and SQL Managed Instance diagnostic telemetry
9/13/2022 • 26 minutes to read • Edit Online
NOTE
Diagnostic settings cannot be configured for the system databases , such as master , msdb , model , resource and
tempdb databases.
NOTE
To enable audit log streaming of security telemetry, see Set up auditing for your database and auditing logs in Azure
Monitor logs and Azure Event Hubs.
IMPORTANT
The streaming export of diagnostic telemetry is not enabled by default.
Select one of the following tabs for step-by-step guidance for configuring the streaming export of diagnostic
telemetry in the Azure portal and for scripts for accomplishing the same with PowerShell and the Azure CLI.
Azure portal
PowerShell
Azure CLI
To configure streaming of diagnostic telemetry for elastic pools and pooled databases, you need to configure
each separately:
Enable streaming of diagnostic telemetry for an elastic pool
Enable streaming of diagnostic telemetry for each database in elastic pool
The elastic pool container has its own telemetry separate from each individual pooled database's telemetry.
To enable streaming of diagnostic telemetry for an elastic pool resource, follow these steps:
1. Go to the elastic pool resource in Azure portal.
2. Select Diagnostics settings .
3. Select Turn on diagnostics if no previous settings exist, or select Edit setting to edit a previous setting.
4. Enter a setting name for your own reference.
5. Select a destination resource for the streaming diagnostics data: Archive to storage account , Stream
to an event hub , or Send to Log Analytics .
6. For log analytics, select Configure and create a new workspace by selecting +Create New Workspace ,
or select an existing workspace.
7. Select the check box for elastic pool diagnostic telemetry: Basic metrics.
8. Select Save .
9. In addition, configure streaming of diagnostic telemetry for each database within the elastic pool you
want to monitor by following steps described in the next section.
IMPORTANT
In addition to configuring diagnostic telemetry for an elastic pool, you also need to configure diagnostic telemetry for
each database in the elastic pool.
Single or pooled database Basic metrics contains DTU percentage, DTU used, DTU limit,
CPU percentage, physical data read percentage, log write
percentage, Successful/Failed/Blocked by firewall connections,
sessions percentage, workers percentage, storage, storage
percentage, XTP storage percentage, and deadlocks.
To enable streaming of diagnostic telemetry for a single or a pooled database, follow these steps:
1. Go to Azure SQL database resource.
2. Select Diagnostics settings .
3. Select Turn on diagnostics if no previous settings exist, or select Edit setting to edit a previous setting.
You can create up to three parallel connections to stream diagnostic telemetry.
4. Select Add diagnostic setting to configure parallel streaming of diagnostics data to multiple resources.
TIP
Repeat these steps for each single and pooled database you want to monitor.
To configure streaming of diagnostic telemetry for managed instance and instance databases, you will need to
separately configure each:
Enable streaming of diagnostic telemetry for managed instance
Enable streaming of diagnostic telemetry for each instance database
The managed instance container has its own telemetry separate from each instance database's telemetry.
To enable streaming of diagnostic telemetry for a managed instance resource, follow these steps:
1. Go to the managed instance resource in Azure portal.
2. Select Diagnostics settings .
3. Select Turn on diagnostics if no previous settings exist, or select Edit setting to edit a previous setting.
IMPORTANT
In addition to configuring diagnostic telemetry for a managed instance, you also need to configure diagnostic telemetry
for each instance database.
To enable streaming of diagnostic telemetry for an instance database, follow these steps:
1. Go to instance database resource within managed instance.
2. Select Diagnostics settings .
3. Select Turn on diagnostics if no previous settings exist, or select Edit setting to edit a previous setting.
You can create up to three (3) parallel connections to stream diagnostic telemetry.
Select +Add diagnostic setting to configure parallel streaming of diagnostics data to multiple
resources.
4. Enter a setting name for your own reference.
5. Select a destination resource for the streaming diagnostics data: Archive to storage account , Stream
to an event hub , or Send to Log Analytics .
6. Select the check boxes for database diagnostic telemetry: SQLInsights , QueryStoreRuntimeStatistics ,
QueryStoreWaitStatistics , and Errors .
7. Select Save .
8. Repeat these steps for each instance database you want to monitor.
Installation overview
You can monitor a collection of databases and database collections with Azure SQL Analytics by performing the
following steps:
1. Create an Azure SQL Analytics solution from the Azure Marketplace.
2. Create a Log Analytics workspace in the solution.
3. Configure databases to stream diagnostic telemetry into the workspace.
You can configure the streaming export of this diagnostic telemetry by using the built-in Send to Log
Analytics option in the diagnostics settings tab in the Azure portal. You can also enable streaming into a Log
Analytics workspace by using diagnostics settings via PowerShell cmdlets, the Azure CLI, the Azure Monitor
REST API, or Resource Manager templates.
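For example, a minimal PowerShell sketch of the same configuration might look like the following; the setting name, resource IDs, and category list are placeholders you would adjust for your environment:

# Placeholder resource IDs; replace with your database and Log Analytics workspace.
$dbResourceId = "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Sql/servers/<server>/databases/<database>"
$workspaceId = "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.OperationalInsights/workspaces/<workspace>"

# Enable streaming of the log categories and Basic metrics to the workspace.
Set-AzDiagnosticSetting -Name "sqldb-to-log-analytics" `
    -ResourceId $dbResourceId `
    -WorkspaceId $workspaceId `
    -Enabled $true `
    -Category SQLInsights, QueryStoreRuntimeStatistics, QueryStoreWaitStatistics, Errors `
    -MetricCategory Basic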
Create an Azure SQL Analytics resource
1. Search for Azure SQL Analytics in Azure Marketplace and select it.
2. Select Create on the solution's overview screen.
3. Fill in the Azure SQL Analytics form with the additional information that is required: workspace name,
subscription, resource group, location, and pricing tier.
A blob name for storing data from a database looks like:
insights-metrics-minute/resourceId=/SUBSCRIPTIONS/s1id1234-5679-0123-4567-890123456789/RESOURCEGROUPS/TESTRESOURCEGROUP/PROVIDERS/MICROSOFT.SQL/servers/Server1/databases/database1/y=2016/m=08/d=22/h=18/m=00/PT1H.json
A blob name for storing data from an elastic pool uses the same format, with the elastic pool's resource ID in place of the database's resource ID.
IMPORTANT
Active databases with heavier workloads ingest more data than idle databases. For more information, see Log analytics
pricing.
If you are using Azure SQL Analytics, you can monitor your data ingestion consumption by selecting OMS
Workspace on the navigation menu of Azure SQL Analytics, and then selecting Usage and Estimated Costs .
Elastic pool: eDTU percentage, eDTU used, eDTU limit, CPU percentage, physical data read percentage, log write percentage, sessions percentage, workers percentage, storage, storage percentage, storage limit, XTP storage percentage.
Single and pooled database: DTU percentage, DTU used, DTU limit, CPU percentage, physical data read percentage, log write percentage, Successful/Failed/Blocked by firewall connections, sessions percentage, workers percentage, storage, storage percentage, XTP storage percentage, and deadlocks.
Advanced metrics
Refer to the following table for details about advanced metrics.
sqlserver_process_core_percent: SQL process core percent. CPU usage percentage for the SQL process, as measured by the operating system.
sqlserver_process_memory_percent: SQL process memory percent. Memory usage percentage for the SQL process, as measured by the operating system.
tempdb_data_size: Tempdb Data File Size Kilobytes. Size of the tempdb data file, in kilobytes.
tempdb_log_size: Tempdb Log File Size Kilobytes. Size of the tempdb log file, in kilobytes.
NOTE
Both Basic and Advanced metrics may be unavailable for databases that have been inactive for 7 days or longer.
Basic logs
Details of telemetry available for all logs are documented in the following tables. For more information, see
supported diagnostic telemetry.
Resource usage stats for managed instances
total_query_wait_time_ms_d: Total wait time of the query on the specific wait category.
query_param_type_d: 0
error_state_d: A numeric state value associated with the query timeout (an attention event).
Blockings dataset
Deadlocks dataset
Next steps
To learn how to enable logging and to understand the metrics and log categories supported by the various
Azure services, see:
Overview of metrics in Microsoft Azure
Overview of Azure platform logs
To learn about Event Hubs, read:
What is Azure Event Hubs?
Get started with Event Hubs
To learn how to set up alerts based on telemetry from Log Analytics, see:
Creating alerts for Azure SQL Database and Azure SQL Managed Instance
Use In-Memory OLTP to improve your application
performance in Azure SQL Database and Azure
SQL Managed Instance
9/13/2022 • 4 minutes to read • Edit Online
NOTE
Learn how Quorum doubles key database's workload while lowering DTU by 70% with Azure SQL Database
Step 1: Ensure you are using a Premium or Business Critical tier database
In-Memory OLTP is supported only in Premium and Business Critical tier databases. In-Memory is supported if
the returned result is 1 (not 0):
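For example, a query along these lines, based on the IsXTPSupported database property, performs the check (a sketch; the original article's exact script may differ slightly):

-- Returns 1 if In-Memory OLTP is supported for the current database.
SELECT DATABASEPROPERTYEX(DB_NAME(), 'IsXTPSupported') AS IsXTPSupported;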
For the TRANSACTION_ISOLATION_LEVEL, SNAPSHOT is the most common value for the natively
compiled stored procedure. However, a subset of the other values is also supported:
REPEATABLE READ
SERIALIZABLE
The LANGUAGE value must be present in the sys.languages view.
How to migrate a stored procedure
The migration steps are:
1. Obtain the CREATE PROCEDURE script for the regular interpreted stored procedure.
2. Rewrite its header to match the natively compiled template (a sketch of such a header follows this list).
3. Ascertain whether the stored procedure T-SQL code uses any features that are not supported for natively
compiled stored procedures. Implement workarounds if necessary.
For details see Migration Issues for Natively Compiled Stored Procedures.
4. Rename the old stored procedure by using SP_RENAME. Or simply DROP it.
5. Run your edited CREATE PROCEDURE T-SQL script.
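For reference, the header of a natively compiled stored procedure generally takes a shape like the following sketch; the procedure and parameter names here are hypothetical, and your procedure body replaces the SELECT:

CREATE PROCEDURE dbo.usp_SampleNativeProc
    @OrderId int
WITH NATIVE_COMPILATION, SCHEMABINDING
AS
BEGIN ATOMIC WITH
    (TRANSACTION ISOLATION LEVEL = SNAPSHOT, LANGUAGE = N'us_english')

    -- Procedure body goes here.
    SELECT @OrderId AS OrderId;
END;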
Related links
In-Memory OLTP (In-Memory Optimization)
Introduction to Natively Compiled Stored Procedures
Memory Optimization Advisor
In-Memory sample
9/13/2022 • 9 minutes to read • Edit Online
Error 40536
If you get error 40536 when you run the T-SQL script, run the following T-SQL script to verify whether the
database supports In-Memory:
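A verification query along these lines, based on the IsXTPSupported database property, can be used (a sketch; the original article's exact script may differ slightly):

-- 1 = In-Memory is supported; 0 = not supported.
SELECT DATABASEPROPERTYEX(DB_NAME(), 'IsXTPSupported') AS IsXTPSupported;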
A result of 0 means that In-Memory isn't supported, and 1 means that it is supported. To diagnose the problem,
ensure that the database is at the Premium service tier.
About the created memory-optimized items
Tables : The sample contains the following memory-optimized tables:
SalesLT.Product_inmem
SalesLT.SalesOrderHeader_inmem
SalesLT.SalesOrderDetail_inmem
Demo.DemoSalesOrderHeaderSeed
Demo.DemoSalesOrderDetailSeed
You can inspect memory-optimized tables through the Object Explorer in SSMS. Right-click Tables > Filter >
Filter Settings > Is Memory Optimized. The value equals 1.
Or you can query the catalog views, such as:
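For example, a catalog query along the following lines lists the memory-optimized tables (a sketch; choose whichever columns you need):

SELECT name, type_desc, durability_desc, is_memory_optimized
FROM sys.tables
WHERE is_memory_optimized = 1;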
DECLARE
@i int = 0,
@od SalesLT.SalesOrderDetailType_inmem,
@SalesOrderID int,
@DueDate datetime2 = sysdatetime(),
@CustomerID int = rand() * 8000,
@BillToAddressID int = rand() * 10000,
@ShipToAddressID int = rand() * 10000;
To make the _ondisk version of the preceding T-SQL script for ostress.exe, you would replace both occurrences
of the _inmem substring with _ondisk. These replacements affect the names of tables and stored procedures.
Install RML utilities and ostress
Ideally, you would plan to run ostress.exe on an Azure virtual machine (VM). You would create an Azure VM in
the same Azure geographic region where your AdventureWorksLT database resides. But you can run ostress.exe
on your laptop instead.
On the VM, or on whatever host you choose, install the Replay Markup Language (RML) utilities. The utilities
include ostress.exe.
For more information, see:
The ostress.exe discussion in Sample Database for In-Memory OLTP.
Sample Database for In-Memory OLTP.
The blog for installing ostress.exe.
Run the _inmem stress workload first
You can use an RML Cmd Prompt window to run our ostress.exe command line. The command-line parameters
direct ostress to:
Run 100 connections concurrently (-n100).
Have each connection run the T-SQL script 50 times (-r50).
EXECUTE Demo.usp_DemoReset;
2. Copy the text of the preceding ostress.exe command line to your clipboard.
3. Replace the <placeholders> for the parameters -S -U -P -d with the correct real values.
4. Run your edited command line in an RML Cmd window.
Result is a duration
When ostress.exe finishes, it writes the run duration as its final line of output in the RML Cmd window. For
example, a shorter test run lasted about 1.5 minutes:
11/12/15 00:35:00.873 [0x000030A8] OSTRESS exiting normally, elapsed time: 00:01:31.867
EXECUTE Demo.usp_DemoReset;
2. Edit the ostress.exe command line to replace all _inmem with _ondisk.
3. Rerun ostress.exe a second time, and capture the duration result.
4. Again, reset the database to responsibly delete what can be a large amount of test data.
Expected comparison results
Our In-Memory tests have shown that performance improved by nine times for this simplistic workload, with
ostress running on an Azure VM in the same Azure region as the database.
Level 130 is not directly related to In-Memory features, but level 130 generally provides faster query performance than level 120.
Key tables and columnstore indexes
dbo.FactResellerSalesXL_CCI is a table that has a clustered columnstore index, which has advanced
compression at the data level.
dbo.FactResellerSalesXL_PageCompressed is a table that has an equivalent regular clustered index, which
is compressed only at the page level.
Key queries to compare the columnstore index
There are several T-SQL query types that you can run to see performance improvements. In step 2 in the T-SQL script, pay attention to this pair of queries. They differ by only one line:
FROM FactResellerSalesXL_PageCompressed a
FROM FactResellerSalesXL_CCI a
-- Execute a typical query that joins the Fact Table with dimension tables
-- Note this query will run on the Page Compressed table, Note down the time
SET STATISTICS IO ON
SET STATISTICS TIME ON
GO
SELECT c.Year
,e.ProductCategoryKey
,FirstName + ' ' + LastName AS FullName
,count(SalesOrderNumber) AS NumSales
,sum(SalesAmount) AS TotalSalesAmt
,Avg(SalesAmount) AS AvgSalesAmt
,count(DISTINCT SalesOrderNumber) AS NumOrders
,count(DISTINCT a.CustomerKey) AS CountCustomers
FROM FactResellerSalesXL_PageCompressed a
INNER JOIN DimProduct b ON b.ProductKey = a.ProductKey
INNER JOIN DimCustomer d ON d.CustomerKey = a.CustomerKey
Inner JOIN DimProductSubCategory e on e.ProductSubcategoryKey = b.ProductSubcategoryKey
INNER JOIN DimDate c ON c.DateKey = a.OrderDateKey
GROUP BY e.ProductCategoryKey,c.Year,d.CustomerKey,d.FirstName,d.LastName
GO
SET STATISTICS IO OFF
SET STATISTICS TIME OFF
GO
-- This is the same prior query on a table with a clustered columnstore index (CCI)
-- The comparison numbers are even more dramatic the larger the table is (this is only an 11 million row table)
SET STATISTICS IO ON
SET STATISTICS TIME ON
GO
SELECT c.Year
,e.ProductCategoryKey
,FirstName + ' ' + LastName AS FullName
,count(SalesOrderNumber) AS NumSales
,sum(SalesAmount) AS TotalSalesAmt
,Avg(SalesAmount) AS AvgSalesAmt
,count(DISTINCT SalesOrderNumber) AS NumOrders
,count(DISTINCT a.CustomerKey) AS CountCustomers
FROM FactResellerSalesXL_CCI a
INNER JOIN DimProduct b ON b.ProductKey = a.ProductKey
INNER JOIN DimCustomer d ON d.CustomerKey = a.CustomerKey
Inner JOIN DimProductSubCategory e on e.ProductSubcategoryKey = b.ProductSubcategoryKey
INNER JOIN DimDate c ON c.DateKey = a.OrderDateKey
GROUP BY e.ProductCategoryKey,c.Year,d.CustomerKey,d.FirstName,d.LastName
GO
In a database with the P2 pricing tier, you can expect about nine times the performance gain for this query by
using the clustered columnstore index compared with the traditional index. With P15, you can expect about 57
times the performance gain by using the columnstore index.
Next steps
Quickstart 1: In-Memory OLTP Technologies for faster T-SQL Performance
Use In-Memory OLTP in an existing Azure SQL application
Monitor In-Memory OLTP storage for In-Memory OLTP
Additional resources
Deeper information
Learn how Quorum doubles key database's workload while lowering DTU by 70% with In-Memory OLTP
in Azure SQL Database
In-Memory OLTP in Azure SQL Database Blog Post
Learn about In-Memory OLTP
Learn about columnstore indexes
Learn about real-time operational analytics
See Common Workload Patterns and Migration Considerations (which describes workload patterns
where In-Memory OLTP commonly provides significant performance gains)
Application design
In-Memory OLTP (In-Memory Optimization)
Use In-Memory OLTP in an existing Azure SQL application
Tools
Azure portal
SQL Server Management Studio (SSMS)
SQL Server Data Tools (SSDT)
Monitor In-Memory OLTP storage in Azure SQL
Database and Azure SQL Managed Instance
9/13/2022 • 2 minutes to read • Edit Online
Determine whether data fits within the In-Memory OLTP storage cap
Determine the storage caps of the different service tiers. Each Premium and Business Critical service tier has a
maximum In-Memory OLTP storage size.
DTU-based resource limits - single database
DTU-based resource limits - elastic pools
vCore-based resource limits - single databases
vCore-based resource limits - elastic pools
vCore-based resource limits - managed instance
Estimating memory requirements for a memory-optimized table works the same way for SQL Server as it does
in Azure SQL Database and Azure SQL Managed Instance. Take a few minutes to review Estimate memory
requirements.
Table and table variable rows, as well as indexes, count toward the max user data size. In addition, ALTER TABLE
needs enough room to create a new version of the entire table and its indexes.
Once this limit is exceeded, insert and update operations may start failing with error 41823 for single databases
in Azure SQL Database and databases in Azure SQL Managed Instance, and error 41840 for elastic pools in
Azure SQL Database. At that point you need to either delete data to reclaim memory, or upgrade the service tier
or compute size of your database.
NOTE
In rare cases, errors 41823 and 41840 can be transient, meaning there is enough available In-Memory OLTP storage and retrying the operation succeeds. We therefore recommend both monitoring the overall available In-Memory OLTP storage and retrying when you first encounter error 41823 or 41840. For more information about retry logic, see Conflict Detection and Retry Logic with In-Memory OLTP.
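One way to watch overall In-Memory OLTP storage utilization is to query the xtp_storage_percent column of sys.dm_db_resource_stats; this is a minimal sketch:

-- Recent In-Memory OLTP storage utilization, as a percentage of the cap for the service tier.
SELECT end_time, xtp_storage_percent
FROM sys.dm_db_resource_stats
ORDER BY end_time DESC;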
Next steps
For monitoring guidance, see Monitoring using dynamic management views.
Quickstart: Import a BACPAC file to a database in
Azure SQL Database or Azure SQL Managed
Instance
9/13/2022 • 7 minutes to read • Edit Online
NOTE
The imported database's compatibility level is based on the source database's compatibility level.
IMPORTANT
After importing your database, you can choose to operate the database at its current compatibility level (level 100 for the
AdventureWorks2008R2 database) or at a higher level. For more information on the implications and options for
operating a database at a specific compatibility level, see ALTER DATABASE Compatibility Level. See also ALTER DATABASE
SCOPED CONFIGURATION for information about additional database-level settings related to compatibility levels.
NOTE
Import and Export using Private Link is in preview. Import functionality on Azure SQL Hyperscale databases is now in
preview.
The Azure portal only supports creating a single database in Azure SQL Database and only from a BACPAC file
stored in Azure Blob storage.
To migrate a database into Azure SQL Managed Instance from a BACPAC file, use SQL Server Management Studio or SqlPackage; using the Azure portal or Azure PowerShell is not currently supported.
NOTE
Machines processing import/export requests submitted through the Azure portal or PowerShell need to store the
BACPAC file as well as temporary files generated by the Data-Tier Application Framework (DacFX). The disk space required
varies significantly among databases with the same size and can require disk space up to 3 times the size of the database.
Machines running the import/export request only have 450GB local disk space. As a result, some requests may fail with
the error There is not enough space on the disk . In this case, the workaround is to run sqlpackage.exe on a machine
with enough local disk space. We encourage using SqlPackage to import/export databases larger than 150GB to avoid
this issue.
1. To import from a BACPAC file into a new single database using the Azure portal, open the appropriate server page and then, on the toolbar, select Import database.
2. Select the storage account and the container for the BACPAC file and then select the BACPAC file from
which to import.
3. Specify the new database size (usually the same as origin) and provide the destination SQL Server
credentials. For a list of possible values for a new database in Azure SQL Database, see Create Database.
4. Click OK .
5. To monitor an import's progress, open the database's server page, and, under Settings, select Import/Export history. When successful, the import has a Completed status.
6. To verify the database is live on the server, select SQL databases and verify the new database is Online .
Using SqlPackage
To import a SQL Server database using the SqlPackage command-line utility, see import parameters and
properties. You can download the latest SqlPackage for Windows, macOS, or Linux.
For scale and performance, we recommend using SqlPackage in most production environments rather than
using the Azure portal. For a SQL Server Customer Advisory Team blog about migrating using BACPAC files, see
migrating from SQL Server to Azure SQL Database using BACPAC Files.
The DTU-based provisioning model supports select database max size values for each tier. When importing a database, use one of these supported values.
The following SqlPackage command imports the AdventureWorks2008R2 database from local storage to a
logical SQL server named mynewser ver20170403 . It creates a new database called myMigratedDatabase
with a Premium service tier and a P6 Service Objective. Change these values as appropriate for your
environment.
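A command along the following lines would perform that import; the admin login and password are placeholders, and the BACPAC file is assumed to sit in the current directory:

sqlpackage.exe /a:Import /tcs:"Data Source=mynewserver20170403.database.windows.net;Initial Catalog=myMigratedDatabase;User Id=<server_admin>;Password=<password>" /sf:AdventureWorks2008R2.bacpac /p:DatabaseEdition=Premium /p:DatabaseServiceObjective=P6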
IMPORTANT
To connect to Azure SQL Database from behind a corporate firewall, the firewall must have port 1433 open. To connect to
SQL Managed Instance, you must have a point-to-site connection or an express route connection.
This example shows how to import a database using SqlPackage with Active Directory Universal Authentication.
sqlpackage.exe /a:Import /sf:testExport.bacpac /tdn:NewDacFX /tsn:apptestserver.database.windows.net
/ua:True /tid:"apptest.onmicrosoft.com"
Using PowerShell
NOTE
A SQL Managed Instance does not currently support migrating a database into an instance database from a BACPAC file
using Azure PowerShell. To import into a SQL Managed Instance, use SQL Server Management Studio or SQLPackage.
NOTE
The machines processing import/export requests submitted through the portal or PowerShell need to store the BACPAC file as well as temporary files generated by the Data-Tier Application Framework (DacFX). The disk space required varies significantly among databases of the same size and can be up to three times the size of the database. Machines running the import/export request only have 450 GB of local disk space. As a result, some requests may fail with the error "There is not enough space on the disk". In this case, the workaround is to run sqlpackage.exe on a machine with enough local disk space. When importing or exporting databases larger than 150 GB, use SqlPackage to avoid this issue.
PowerShell
Azure CLI
IMPORTANT
The PowerShell Azure Resource Manager (RM) module is still supported, but all future development is for the Az.Sql
module. The AzureRM module will continue to receive bug fixes until at least December 2020. The arguments for the
commands in the Az module and in the AzureRm modules are substantially identical. For more about their compatibility,
see Introducing the new Azure PowerShell Az module.
Use the New-AzSqlDatabaseImport cmdlet to submit an import database request to Azure. Depending on database size, the import may take some time to complete. The DTU-based provisioning model supports select database max size values for each tier. When importing a database, use one of these supported values.
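A request along these lines submits the import; the resource names, credentials, and storage details are placeholders, and the size, edition, and service objective should be adjusted to values supported for your tier:

$importRequest = New-AzSqlDatabaseImport `
    -ResourceGroupName "<resource-group>" `
    -ServerName "<server-name>" `
    -DatabaseName "myMigratedDatabase" `
    -DatabaseMaxSizeBytes 32GB `
    -StorageKeyType "StorageAccessKey" `
    -StorageKey $(Get-AzStorageAccountKey -ResourceGroupName "<resource-group>" -StorageAccountName "<storage-account>").Value[0] `
    -StorageUri "https://<storage-account>.blob.core.windows.net/<container>/AdventureWorks2008R2.bacpac" `
    -Edition "Standard" `
    -ServiceObjectiveName "S3" `
    -AdministratorLogin "<server-admin>" `
    -AdministratorLoginPassword $(ConvertTo-SecureString -String "<password>" -AsPlainText -Force)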
You can use the Get-AzSqlDatabaseImportExportStatus cmdlet to check the import's progress. Running the
cmdlet immediately after the request usually returns Status: InProgress . The import is complete when you see
Status: Succeeded .
[Console]::Write("Importing")
while ($importStatus.Status -eq "InProgress") {
$importStatus = Get-AzSqlDatabaseImportExportStatus -OperationStatusLink
$importRequest.OperationStatusLink
[Console]::Write(".")
Start-Sleep -s 10
}
[Console]::WriteLine("")
$importStatus
TIP
For another script example, see Import a database from a BACPAC file.
Limitations
Importing to a database in elastic pool isn't supported. You can import data into a single database and then
move the database to an elastic pool.
The Import Export Service does not work when Allow access to Azure services is set to OFF. However, you can work around the problem by manually running sqlpackage.exe from an Azure VM, or by performing the operation directly in your code by using the DacFx API.
Import does not support specifying a backup storage redundancy while creating a new database; it uses the default geo-redundant backup storage redundancy. To work around this, first create an empty database with the desired backup storage redundancy by using the Azure portal or PowerShell, and then import the BACPAC into that empty database.
Storage behind a firewall is currently not supported.
Additional tools
You can also use these wizards.
Import Data-tier Application Wizard in SQL Server Management Studio.
SQL Server Import and Export Wizard.
Next steps
To learn how to connect to and query Azure SQL Database from Azure Data Studio, see Quickstart: Use Azure
Data Studio to connect and query Azure SQL Database.
To learn how to connect to and query a database in Azure SQL Database, see Quickstart: Azure SQL
Database: Use SQL Server Management Studio to connect to and query data.
For a SQL Server Customer Advisory Team blog about migrating using BACPAC files, see Migrating from
SQL Server to Azure SQL Database using BACPAC Files.
For a discussion of the entire SQL Server database migration process, including performance
recommendations, see SQL Server database migration to Azure SQL Database.
To learn how to manage and share storage keys and shared access signatures securely, see Azure Storage
Security Guide.
Export to a BACPAC file - Azure SQL Database and
Azure SQL Managed Instance
9/13/2022 • 6 minutes to read • Edit Online
NOTE
Export functionality on Azure SQL Hyperscale databases is now in preview.
Considerations
For an export to be transactionally consistent, you must ensure either that no write activity is occurring
during the export, or that you are exporting from a transactionally consistent copy of your database.
If you are exporting to blob storage, the maximum size of a BACPAC file is 200 GB. To archive a larger
BACPAC file, export to local storage with SqlPackage.exe.
Exporting a BACPAC file to Azure premium storage using the methods discussed in this article is not
supported.
Storage behind a firewall is currently not supported.
Immutable storage is currently not supported.
The storage file name or the input value for StorageUri should be fewer than 128 characters long, cannot end with '.', and cannot contain special characters such as a space or '<', '>', '*', '%', '&', ':', '/', '?'.
If the export operation exceeds 20 hours, it may be canceled. To increase performance during export, you
can:
Temporarily increase your compute size.
Cease all read and write activity during the export.
Use a clustered index with non-null values on all large tables. Without clustered indexes, an export
may fail if it takes longer than 6-12 hours. This is because the export service needs to complete a table
scan to try to export the entire table. A good way to determine if your tables are optimized for export is to
run DBCC SHOW_STATISTICS and make sure that the RANGE_HI_KEY is not null and its value has
good distribution. For details, see DBCC SHOW_STATISTICS.
Azure SQL Managed Instance does not currently support exporting a database to a BACPAC file using the
Azure portal or Azure PowerShell. To export a managed instance into a BACPAC file, use SQL Server
Management Studio (SSMS) or SQLPackage.
For databases in the Hyperscale service tier, BACPAC export/import from Azure portal, from PowerShell
using New-AzSqlDatabaseExport or New-AzSqlDatabaseImport, from Azure CLI using az sql db export
and az sql db import, and from REST API is not supported. BACPAC import/export for smaller Hyperscale
databases (up to 200 GB) is supported using SSMS and SQLPackage version 18.4 and later. For larger
databases, BACPAC export/import may take a long time, and may fail for various reasons.
NOTE
BACPACs are not intended to be used for backup and restore operations. Azure automatically creates backups for every
user database. For details, see business continuity overview and SQL Database backups.
NOTE
Import and Export using Private Link is in preview.
NOTE
Machines processing import/export requests submitted through the Azure portal or PowerShell need to store the
BACPAC file as well as temporary files generated by the Data-Tier Application Framework (DacFX). The disk space required
varies significantly among databases with the same size and can require disk space up to three times the size of the
database. Machines running the import/export request only have 450GB local disk space. As a result, some requests may
fail with the error There is not enough space on the disk . In this case, the workaround is to run sqlpackage.exe on a
machine with enough local disk space. We encourage using SQLPackage to import/export databases larger than 150GB
to avoid this issue.
1. To export a database using the Azure portal, open the page for your database and select Export on the
toolbar.
2. Specify the BACPAC filename, select an existing Azure storage account and container for the export, and then provide the appropriate credentials for access to the source database. A SQL Server admin login is needed here even if you are the Azure admin, as being an Azure admin does not equate to having admin permissions in Azure SQL Database or Azure SQL Managed Instance.
3. Select OK .
4. To monitor the progress of the export operation, open the page for the server containing the database being exported. Under Data management, select Import/Export history.
SQLPackage utility
We recommend the use of the SQLPackage utility for scale and performance in most production environments.
You can run multiple sqlpackage.exe commands in parallel for subsets of tables to speed up import/export
operations.
To export a database in SQL Database using the SQLPackage command-line utility, see Export parameters and
properties. The SQLPackage utility is available for Windows, macOS, and Linux.
This example shows how to export a database using sqlpackage.exe with Active Directory Universal
Authentication:
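A sketch along these lines, mirroring the import example in the previous article (server, database, and tenant values are placeholders), performs the export with Universal Authentication:

sqlpackage.exe /a:Export /ssn:tcp:<server-name>.database.windows.net,1433 /sdn:<database-name> /tf:testExport.bacpac /ua:True /tid:"<tenant>.onmicrosoft.com"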
PowerShell
Exporting a BACPAC of a database from Azure SQL Managed Instance or from a database in the Hyperscale
service tier using PowerShell is not currently supported. See Considerations.
Use the New-AzSqlDatabaseExport cmdlet to submit an export database request to the Azure SQL Database
service. Depending on the size of your database, the export operation may take some time to complete.
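A request along these lines submits the export; the resource names, credentials, and storage details are placeholders:

$exportRequest = New-AzSqlDatabaseExport `
    -ResourceGroupName "<resource-group>" `
    -ServerName "<server-name>" `
    -DatabaseName "<database-name>" `
    -StorageKeyType "StorageAccessKey" `
    -StorageKey $(Get-AzStorageAccountKey -ResourceGroupName "<resource-group>" -StorageAccountName "<storage-account>").Value[0] `
    -StorageUri "https://<storage-account>.blob.core.windows.net/<container>/<database-name>.bacpac" `
    -AdministratorLogin "<server-admin>" `
    -AdministratorLoginPassword $(ConvertTo-SecureString -String "<password>" -AsPlainText -Force)
$exportRequest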
To check the status of the export request, use the Get-AzSqlDatabaseImportExportStatus cmdlet. Running this
cmdlet immediately after the request usually returns Status: InProgress . When you see Status: Succeeded
the export is complete.
Next steps
To learn about long-term backup retention of a single database and pooled databases as an alternative to
exporting a database for archive purposes, see Long-term backup retention. You can use SQL Agent jobs to
schedule copy-only database backups as an alternative to long-term backup retention.
To learn about importing a BACPAC to a SQL Server database, see Import a BACPAC to a SQL Server
database.
To learn about exporting a BACPAC from a SQL Server database, see Export a Data-tier Application
To learn about using the Data Migration Service to migrate a database, see Migrate from SQL Server to
Azure SQL Database offline using DMS.
If you are exporting from SQL Server as a prelude to migration to Azure SQL Database, see Migrate a SQL
Server database to Azure SQL Database.
To learn how to manage and share storage keys and shared access signatures securely, see Azure Storage
Security Guide.
Move resources to new region - Azure SQL
Database & Azure SQL Managed Instance
9/13/2022 • 10 minutes to read • Edit Online
Overview
There are various scenarios in which you'd want to move your existing database or managed instance from one
region to another. For example, you're expanding your business to a new region and want to optimize it for the
new customer base. Or you need to move operations to a different region for compliance reasons. Or Azure released a new region that provides better proximity and improves the customer experience.
This article provides a general workflow for moving resources to a different region. The workflow consists of the
following steps:
1. Verify the prerequisites for the move.
2. Prepare to move the resources in scope.
3. Monitor the preparation process.
4. Test the move process.
5. Initiate the actual move.
6. Remove the resources from the source region.
NOTE
This article applies to migrations within the Azure public cloud or within the same sovereign cloud.
NOTE
To move Azure SQL databases and elastic pools to a different Azure region, you can also use Azure Resource Mover (recommended). Refer to this tutorial for detailed steps.
NOTE
This article uses the Azure Az PowerShell module, which is the recommended PowerShell module for interacting with
Azure. To get started with the Az PowerShell module, see Install Azure PowerShell. To learn how to migrate to the Az
PowerShell module, see Migrate Azure PowerShell from AzureRM to Az.
Move a database
Verify prerequisites
1. Create a target server for each source server.
2. Configure the firewall with the right exceptions by using PowerShell (see the PowerShell sketch after this list).
3. Configure the servers with the correct logins. If you're not the subscription administrator or SQL server
administrator, work with the administrator to assign the permissions that you need. For more
information, see How to manage Azure SQL Database security after disaster recovery.
4. If your databases are encrypted with transparent data encryption (TDE) and bring your own encryption
key (BYOK or Customer-Managed Key) in Azure Key Vault, ensure that the correct encryption material is
provisioned in the target regions.
The simplest way to do this is to add the encryption key from the existing key vault (that is being used as the TDE protector on the source server) to the target server, and then set the key as the TDE protector on the target server.
NOTE
A server or managed instance in one region can now be connected to a key vault in any other region.
As a best practice to ensure the target server has access to older encryption keys (required for
restoring database backups), run the Get-AzSqlServerKeyVaultKey cmdlet on the source server or Get-
AzSqlInstanceKeyVaultKey cmdlet on the source managed instance to return the list of available keys
and add those keys to the target server.
For more information and best practices on configuring customer-managed TDE on the target server,
see Azure SQL transparent data encryption with customer-managed keys in Azure Key Vault.
To move the key vault to the new region, see Move an Azure key vault across regions
5. If database-level auditing is enabled, disable it and enable server-level auditing instead. After failover, database-level auditing would require cross-region traffic, which isn't desired or possible after the move.
6. For server-level audits, ensure that:
The storage container, Log Analytics, or event hub with the existing audit logs is moved to the target
region.
Auditing is configured on the target server. For more information, see Get started with SQL Database
auditing.
7. If your database has a long-term retention (LTR) policy, the existing LTR backups will remain associated with the current server. Because the target server is different, you'll be able to access the older LTR backups in the source region by using the source server, even if the server is deleted.
NOTE
This will be insufficient for moving between the sovereign cloud and a public region. Such a migration will require
moving the LTR backups to the target server, which is not currently supported.
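As a minimal PowerShell sketch of steps 1 and 2 in the list above (all names, the region, and the client IP address are placeholders):

# Create the target server in the new region.
$cred = Get-Credential   # admin login and password for the target server
New-AzSqlServer -ResourceGroupName "<target-resource-group>" `
    -ServerName "<target-server-name>" `
    -Location "<target-region>" `
    -SqlAdministratorCredentials $cred

# Re-create the firewall exceptions you need on the target server.
New-AzSqlServerFirewallRule -ResourceGroupName "<target-resource-group>" `
    -ServerName "<target-server-name>" `
    -FirewallRuleName "AllowMyClient" `
    -StartIpAddress "<client-ip>" `
    -EndIpAddress "<client-ip>"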
Prepare resources
1. Create a failover group between the server of the source and the server of the target.
2. Add the databases you want to move to the failover group.
Replication of all added databases will be initiated automatically. For more information, see Using failover
groups with SQL Database.
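A PowerShell sketch of these two steps might look like the following; the failover group name, server names, and database name are placeholders:

# Create the failover group between the source and target servers.
New-AzSqlDatabaseFailoverGroup -ResourceGroupName "<source-resource-group>" `
    -ServerName "<source-server-name>" `
    -PartnerResourceGroupName "<target-resource-group>" `
    -PartnerServerName "<target-server-name>" `
    -FailoverGroupName "<failover-group-name>" `
    -FailoverPolicy Manual

# Add the databases you want to move.
$database = Get-AzSqlDatabase -ResourceGroupName "<source-resource-group>" `
    -ServerName "<source-server-name>" -DatabaseName "<database-name>"
Add-AzSqlDatabaseToFailoverGroup -ResourceGroupName "<source-resource-group>" `
    -ServerName "<source-server-name>" `
    -FailoverGroupName "<failover-group-name>" `
    -Database $database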
Monitor the preparation process
You can periodically call Get-AzSqlDatabaseFailoverGroup to monitor replication of your databases from the
source to the target. The output object of Get-AzSqlDatabaseFailoverGroup includes a property for the
ReplicationState :
ReplicationState = 2 (CATCH_UP) indicates the database is synchronized and can be safely failed over.
ReplicationState = 0 (SEEDING) indicates that the database is not yet seeded, and an attempt to fail over
will fail.
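For example, a check along these lines returns the current ReplicationState (names are placeholders):

(Get-AzSqlDatabaseFailoverGroup -ResourceGroupName "<source-resource-group>" `
    -ServerName "<source-server-name>" `
    -FailoverGroupName "<failover-group-name>").ReplicationState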
Test synchronization
After ReplicationState is 2, connect to each database or subset of databases using the secondary endpoint
<fog-name>.secondary.database.windows.net and perform any query against the databases to ensure connectivity,
proper security configuration, and data replication.
Initiate the move
1. Connect to the target server using the secondary endpoint <fog-name>.secondary.database.windows.net .
2. Use Switch-AzSqlDatabaseFailoverGroup to switch the secondary server to be the primary with full synchronization. This operation will either succeed or roll back.
3. Verify that the command has completed successfully by using nslookup <fog-name>.secondary.database.windows.net to ascertain that the DNS CNAME entry points to the target region IP address. If the switch command fails, the CNAME won't be updated.
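The switch in step 2 might look like the following sketch; it must be run against the target (secondary) server, and the names are placeholders:

Switch-AzSqlDatabaseFailoverGroup -ResourceGroupName "<target-resource-group>" `
    -ServerName "<target-server-name>" `
    -FailoverGroupName "<failover-group-name>"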
Remove the source databases
Once the move completes, remove the resources in the source region to avoid unnecessary charges.
1. Delete the failover group using Remove-AzSqlDatabaseFailoverGroup.
2. Delete each source database using Remove-AzSqlDatabase on the source server. This will automatically terminate geo-replication links.
3. Delete the source server using Remove-AzSqlServer.
4. Remove the key vault, audit storage containers, event hub, Azure Active Directory (Azure AD) instance, and
other dependent resources to stop being billed for them.
NOTE
This will be insufficient for moving between the sovereign cloud and a public region. Such a migration will require
moving the LTR backups to the target server, which is not currently supported.
Prepare to move
1. Create a separate failover group between each elastic pool on the source server and its counterpart
elastic pool on the target server.
2. Add all the databases in the pool to the failover group.
Replication of the added databases will be initiated automatically. For more information, see Using
failover groups with SQL Database.
NOTE
While it is possible to create a failover group that includes multiple elastic pools, we strongly recommend that you
create a separate failover group for each pool. If you have a large number of databases across multiple elastic
pools that you need to move, you can run the preparation steps in parallel and then initiate the move step in
parallel. This process will scale better and will take less time compared to having multiple elastic pools in the same
failover group.
NOTE
This will be insufficient for moving between the sovereign cloud and a public region. Such a migration will require moving
the LTR backups to the target instance, which is not currently supported.
Prepare resources
Create a failover group between each source managed instance and the corresponding target instance of SQL
Managed Instance.
Replication of all databases on each instance will be initiated automatically. For more information, see Auto-
failover groups.
Monitor the preparation process
You can periodically call Get-AzSqlDatabaseFailoverGroup to monitor replication of your databases from the
source to the target. The output object of Get-AzSqlDatabaseFailoverGroup includes a property for the
ReplicationState :
ReplicationState = 2 (CATCH_UP) indicates the database is synchronized and can be safely failed over.
ReplicationState = 0 (SEEDING) indicates that the database isn't yet seeded, and an attempt to fail over will
fail.
Test synchronization
Once ReplicationState is 2, connect to each database or subset of databases using the secondary endpoint
<fog-name>.secondary.database.windows.net and perform any query against the databases to ensure connectivity,
proper security configuration, and data replication.
Initiate the move
1. Connect to the target managed instance by using the secondary endpoint
<fog-name>.secondary.database.windows.net .
2. Use Switch-AzSqlDatabaseFailoverGroup to switch the secondary managed instance to be the primary with
full synchronization. This operation will succeed, or it will roll back.
3. Verify that the command has completed successfully by using nslookup <fog-name>.secondary.database.windows.net to ascertain that the DNS CNAME entry points to the target region IP address. If the switch command fails, the CNAME won't be updated.
Remove the source managed instances
Once the move finishes, remove the resources in the source region to avoid unnecessary charges.
1. Delete the failover group using Remove-AzSqlDatabaseFailoverGroup. This will drop the failover group
configuration and terminate geo-replication links between the two instances.
2. Delete the source managed instance using Remove-AzSqlInstance.
3. Remove any additional resources in the resource group, such as the virtual cluster, virtual network, and
security group.
Next steps
Manage your database after it has been migrated.
Application development overview - SQL Database
& SQL Managed Instance
9/13/2022 • 2 minutes to read • Edit Online
Authentication
Access to Azure SQL Database is protected with logins and firewall rules. Azure SQL Database supports both SQL Server authentication and Azure Active Directory authentication for users and logins. Azure Active Directory server principals (logins) are available only in SQL Managed Instance.
Learn more about managing database access and logins.
Connections
In your client connection logic, override the default timeout to be 30 seconds. The default of 15 seconds is too
short for connections that depend on the internet.
If you are using a connection pool, be sure to close the connection the instant your program is not actively using
it, and is not preparing to reuse it.
Avoid long-running transactions because any infrastructure or connection failure might roll back the transaction. If possible, split the transaction into multiple smaller transactions and use batching to improve performance.
Resiliency
Azure SQL Database is a cloud service where you might expect transient errors in the underlying infrastructure or in the communication between cloud entities. Although Azure SQL Database is resilient to transient infrastructure failures, these failures might affect your connectivity. When a transient error occurs while connecting to SQL Database, your code should retry the call. We recommend that retry logic use backoff logic, so that it does not overwhelm the service with multiple clients retrying simultaneously. Retry logic depends on the error messages for SQL Database client programs.
For more information about how to prepare for planned maintenance events on your Azure SQL Database, see
planning for Azure maintenance events in Azure SQL Database.
Network considerations
On the computer that hosts your client program, ensure the firewall allows outgoing TCP communication on
port 1433. More information: Configure an Azure SQL Database firewall.
If your client program connects to SQL Database while your client runs on an Azure virtual machine (VM),
you must open certain port ranges on the VM. More information: Ports beyond 1433 for ADO.NET 4.5 and
SQL Database.
Client connections to Azure SQL Database sometimes bypass the proxy and interact directly with the database. Ports other than 1433 become important. For more information, see Azure SQL Database connectivity architecture and Ports beyond 1433 for ADO.NET 4.5 and SQL Database.
For networking configuration for an instance of SQL Managed Instance, see network configuration for SQL
Managed Instance.
Next steps
Explore all the capabilities of SQL Database and SQL Managed Instance.
To get started, see the guides for Azure SQL Database and Azure SQL Managed Instances.
Getting started with JSON features in Azure SQL
Database and Azure SQL Managed Instance
9/13/2022 • 6 minutes to read • Edit Online
The FOR JSON PATH clause formats the results of the query as JSON text. Column names are used as keys, while
the cell values are generated as JSON values:
[
{"CustomerName":"Eric Torres","PhoneNumber":"(307) 555-0100","FaxNumber":"(307) 555-0101"},
{"CustomerName":"Cosmina Vlad","PhoneNumber":"(505) 555-0100","FaxNumber":"(505) 555-0101"},
{"CustomerName":"Bala Dixit","PhoneNumber":"(209) 555-0100","FaxNumber":"(209) 555-0101"}
]
The result set is formatted as a JSON array where each row is formatted as a separate JSON object.
PATH indicates that you can customize the output format of your JSON result by using dot notation in column
aliases. The following query changes the name of the "CustomerName" key in the output JSON format, and puts
phone and fax numbers in the "Contact" sub-object:
select CustomerName as Name, PhoneNumber as [Contact.Phone], FaxNumber as [Contact.Fax]
from Sales.Customers
where CustomerID = 931
FOR JSON PATH, WITHOUT_ARRAY_WRAPPER
{
"Name":"Nada Jovanovic",
"Contact":{
"Phone":"(215) 555-0100",
"Fax":"(215) 555-0101"
}
}
In this example, we returned a single JSON object instead of an array by specifying the WITHOUT_ARRAY_WRAPPER option. You can use this option if you know that you are returning a single object as the result of the query.
The main value of the FOR JSON clause is that it lets you return complex hierarchical data from your database
formatted as nested JSON objects or arrays. The following example shows how to include the rows from the
Orders table that belong to the Customer as a nested array of Orders :
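A query along these lines, assuming the same sample schema as the earlier examples, produces the nested output shown below:

select CustomerName as Name, PhoneNumber as Phone, FaxNumber as Fax,
       Orders.OrderID, Orders.OrderDate, Orders.ExpectedDeliveryDate
from Sales.Customers Customer
    join Sales.Orders Orders
        on Customer.CustomerID = Orders.CustomerID
where Customer.CustomerID = 931
FOR JSON AUTO, WITHOUT_ARRAY_WRAPPER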
Instead of sending separate queries to get Customer data and then to fetch a list of related Orders, you can get
all the necessary data with a single query, as shown in the following sample output:
{
"Name":"Nada Jovanovic",
"Phone":"(215) 555-0100",
"Fax":"(215) 555-0101",
"Orders":[
{"OrderID":382,"OrderDate":"2013-01-07","ExpectedDeliveryDate":"2013-01-08"},
{"OrderID":395,"OrderDate":"2013-01-07","ExpectedDeliveryDate":"2013-01-08"},
{"OrderID":1657,"OrderDate":"2013-01-31","ExpectedDeliveryDate":"2013-02-01"}
]
}
The JSON data used in this example is represented by using the NVARCHAR(MAX) type. JSON can be inserted
into this table or provided as an argument of the stored procedure using standard Transact-SQL syntax as
shown in the following example:
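For instance, a table and insert along these lines would work; the table definition and the JSON document are illustrative assumptions (the same Products table is referenced by the JSON function examples that follow):

DROP TABLE IF EXISTS Products;
CREATE TABLE Products (
    Id int IDENTITY PRIMARY KEY,
    Data nvarchar(max)
);

INSERT INTO Products (Data)
VALUES (N'{"Name":"Road Bike","Price":50,"Tags":["new","sale"]}');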
Any client-side language or library that works with string data in Azure SQL Database and Azure SQL Managed
Instance will also work with JSON data. JSON can be stored in any table that supports the NVARCHAR type,
such as a Memory-optimized table or a System-versioned table. JSON does not introduce any constraint either
in the client-side code or in the database layer.
update Products
set Data = JSON_MODIFY(Data, '$.Price', 60)
where Id = 1
The JSON_VALUE function extracts a value from JSON text stored in the Data column. This function uses a
JavaScript-like path to reference a value in JSON text to extract. The extracted value can be used in any part of
SQL query.
The JSON_QUERY function is similar to JSON_VALUE. Unlike JSON_VALUE, this function extracts complex sub-
object such as arrays or objects that are placed in JSON text.
The JSON_MODIFY function lets you specify the path of the value in the JSON text that should be updated, as
well as a new value that will overwrite the old one. This way you can easily update JSON text without reparsing
the entire structure.
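Put together, a query along these lines uses all three functions against the hypothetical Products table introduced above:

SELECT Id,
       JSON_VALUE(Data, '$.Name') AS Name,
       JSON_QUERY(Data, '$.Tags') AS Tags,
       JSON_MODIFY(Data, '$.Price', 60) AS UpdatedData
FROM Products
WHERE CAST(JSON_VALUE(Data, '$.Price') AS money) > 20;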
Since JSON is stored in a standard text, there are no guarantees that the values stored in text columns are
properly formatted. You can verify that text stored in JSON column is properly formatted by using standard
Azure SQL Database check constraints and the ISJSON function:
ALTER TABLE Products
ADD CONSTRAINT [Data should be formatted as JSON]
CHECK (ISJSON(Data) > 0)
If the input text is properly formatted JSON, the ISJSON function returns the value 1. On every insert or update
of JSON column, this constraint will verify that new text value is not malformed JSON.
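The OPENJSON function transforms a JSON array into a set of rows. A sketch along these lines (the Products table and its $.Orders array are illustrative assumptions) shows the pattern that the next paragraph describes:

SELECT Id,
       JSON_VALUE(Data, '$.Name') AS ProductName,
       OrderInfo.OrderNumber,
       OrderInfo.Quantity
FROM Products
    CROSS APPLY OPENJSON(Data, '$.Orders')
        WITH (
            OrderNumber varchar(200) '$.Number',
            Quantity int '$.Quantity'
        ) AS OrderInfo;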
In the example above, we can specify where to locate the JSON array that should be opened (in the $.Orders
path), what columns should be returned as result, and where to find the JSON values that will be returned as
cells.
We can transform a JSON array in the @orders variable into a set of rows, analyze this result set, or insert rows
into a standard table:
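As a sketch (the Orders table and the shape of the JSON array are illustrative assumptions):

DECLARE @orders nvarchar(max) = N'[
    {"Number":"SO43659","Date":"2022-05-31T00:00:00","Customer":"Contoso","Quantity":10},
    {"Number":"SO43661","Date":"2022-06-01T00:00:00","Customer":"Fabrikam","Quantity":25}
]';

INSERT INTO Orders (Number, [Date], Customer, Quantity)
SELECT Number, [Date], Customer, Quantity
FROM OPENJSON(@orders)
    WITH (
        Number varchar(200),
        [Date] datetime,
        Customer varchar(200),
        Quantity int
    );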
The collection of orders formatted as a JSON array and provided as a parameter to the stored procedure can be
parsed and inserted into the Orders table.
Accelerate real-time big data analytics using the
Spark connector
9/13/2022 • 5 minutes to read • Edit Online
NOTE
As of September 2020, this connector is not actively maintained. However, the Apache Spark Connector for SQL Server and Azure SQL is now available, with support for Python and R bindings, an easier-to-use interface to bulk insert data, and many other improvements. We strongly encourage you to evaluate and use the new connector instead of this one. The information about the old connector (this page) is retained only for archival purposes.
The Spark connector enables databases in Azure SQL Database, Azure SQL Managed Instance, and SQL Server
to act as the input data source or output data sink for Spark jobs. It allows you to utilize real-time transactional
data in big data analytics and persist results for ad hoc queries or reporting. Compared to the built-in JDBC
connector, this connector provides the ability to bulk insert data into your database. It can outperform row-by-
row insertion with 10x to 20x faster performance. The Spark connector supports Azure Active Directory (Azure
AD) authentication to connect to Azure SQL Database and Azure SQL Managed Instance, allowing you to
connect to your database from Azure Databricks using your Azure AD account. It provides interfaces similar to the built-in JDBC connector, so it's easy to migrate your existing Spark jobs to use this new connector.
The Spark connector utilizes the Microsoft JDBC Driver for SQL Server to move data between Spark worker
nodes and databases:
The dataflow is as follows:
1. The Spark master node connects to databases in SQL Database or SQL Server and loads data from a specific
table or using a specific SQL query.
2. The Spark master node distributes data to worker nodes for transformation.
3. The Spark worker nodes connect to the database in SQL Database or SQL Server and write data to the database. The user can choose row-by-row insertion or bulk insert.
import com.microsoft.azure.sqldb.spark.config.Config
import com.microsoft.azure.sqldb.spark.connect._
Read data from Azure SQL and SQL Server with specified SQL query
import com.microsoft.azure.sqldb.spark.config.Config
import com.microsoft.azure.sqldb.spark.connect._
import com.microsoft.azure.sqldb.spark.config.Config
import com.microsoft.azure.sqldb.spark.connect._
import org.apache.spark.sql.SaveMode
collection.write.mode(SaveMode.Append).sqlDB(config)
import com.microsoft.azure.sqldb.spark.config.Config
import com.microsoft.azure.sqldb.spark.query._
val query = """
|UPDATE Customers
|SET ContactName = 'Alfred Schmidt', City = 'Frankfurt'
|WHERE CustomerID = 1;
""".stripMargin
sqlContext.sqlDBQuery(config)
import com.microsoft.azure.sqldb.spark.config.Config
import com.microsoft.azure.sqldb.spark.connect._
import com.microsoft.azure.sqldb.spark.config.Config
import com.microsoft.azure.sqldb.spark.connect._
/**
Add column metadata.
If not specified, metadata is automatically added
from the destination table, which may hurt performance.
*/
var bulkCopyMetadata = new BulkCopyMetadata
bulkCopyMetadata.addColumnMetadata(1, "Title", java.sql.Types.NVARCHAR, 128, 0)
bulkCopyMetadata.addColumnMetadata(2, "FirstName", java.sql.Types.NVARCHAR, 50, 0)
bulkCopyMetadata.addColumnMetadata(3, "LastName", java.sql.Types.NVARCHAR, 50, 0)
df.bulkCopyToSqlDB(bulkCopyConfig, bulkCopyMetadata)
//df.bulkCopyToSqlDB(bulkCopyConfig) if no metadata is specified.
Next steps
If you haven't already, download the Spark connector from azure-sqldb-spark GitHub repository and explore the
additional resources in the repo:
Sample Azure Databricks notebooks
Sample scripts (Scala)
You might also want to review the Apache Spark SQL, DataFrames, and Datasets Guide and the Azure Databricks
documentation.
Use Java and JDBC with Azure SQL Database
9/13/2022 • 9 minutes to read • Edit Online
This topic demonstrates creating a sample application that uses Java and JDBC to store and retrieve information
in Azure SQL Database.
JDBC is the standard Java API to connect to traditional relational databases.
Prerequisites
An Azure account. If you don't have one, get a free trial.
Azure Cloud Shell or Azure CLI. We recommend Azure Cloud Shell so you'll be logged in automatically and
have access to all the tools you'll need.
A supported Java Development Kit, version 8 (included in Azure Cloud Shell).
The Apache Maven build tool.
AZ_RESOURCE_GROUP=database-workshop
AZ_DATABASE_NAME=<YOUR_DATABASE_NAME>
AZ_LOCATION=<YOUR_AZURE_REGION>
AZ_SQL_SERVER_USERNAME=demo
AZ_SQL_SERVER_PASSWORD=<YOUR_AZURE_SQL_PASSWORD>
AZ_LOCAL_IP_ADDRESS=<YOUR_LOCAL_IP_ADDRESS>
Replace the placeholders with the following values, which are used throughout this article:
<YOUR_DATABASE_NAME> : The name of your Azure SQL Database server. It should be unique across Azure.
<YOUR_AZURE_REGION> : The Azure region you'll use. You can use eastus by default, but we recommend that
you configure a region closer to where you live. You can have the full list of available regions by entering
az account list-locations .
<AZ_SQL_SERVER_PASSWORD> : The password of your Azure SQL Database server. That password should have a
minimum of eight characters. The characters should be from three of the following categories: English
uppercase letters, English lowercase letters, numbers (0-9), and non-alphanumeric characters (!, $, #, %, and
so on).
<YOUR_LOCAL_IP_ADDRESS> : The IP address of your local computer, from which you'll run your Java application.
One convenient way to find it is to point your browser to whatismyip.akamai.com.
Next, create a resource group using the following command:
az group create \
--name $AZ_RESOURCE_GROUP \
--location $AZ_LOCATION \
| jq
NOTE
We use the jq utility to display JSON data and make it more readable. This utility is installed by default on Azure Cloud
Shell. If you don't like that utility, you can safely remove the | jq part of all the commands we'll use.
NOTE
You can read more detailed information about creating Azure SQL Database servers in Quickstart: Create an Azure SQL
Database single database.
az sql db create \
--resource-group $AZ_RESOURCE_GROUP \
--name demo \
--server $AZ_DATABASE_NAME \
| jq
<properties>
<java.version>1.8</java.version>
<maven.compiler.source>1.8</maven.compiler.source>
<maven.compiler.target>1.8</maven.compiler.target>
</properties>
<dependencies>
<dependency>
<groupId>com.microsoft.sqlserver</groupId>
<artifactId>mssql-jdbc</artifactId>
<version>7.4.1.jre8</version>
</dependency>
</dependencies>
</project>
url=jdbc:sqlserver://$AZ_DATABASE_NAME.database.windows.net:1433;database=demo;encrypt=true;trustServerCertificate=false;hostNameInCertificate=*.database.windows.net;loginTimeout=30;
user=demo@$AZ_DATABASE_NAME
password=$AZ_SQL_SERVER_PASSWORD
Replace the two $AZ_DATABASE_NAME variables with the value that you configured at the beginning of this
article.
Replace the $AZ_SQL_SERVER_PASSWORD variable with the value that you configured at the beginning of this
article.
Create an SQL file to generate the database schema
We will use a src/main/resources/schema.sql file in order to create a database schema. Create that file, with the following content:
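A minimal schema consistent with the Todo domain class and the insert statements used later in this article might look like this (the column sizes are assumptions):

DROP TABLE IF EXISTS todo;
CREATE TABLE todo (
    id INT PRIMARY KEY,
    description VARCHAR(255),
    details VARCHAR(4096),
    done BIT
);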
import java.sql.*;
import java.util.*;
import java.util.logging.Logger;
static {
System.setProperty("java.util.logging.SimpleFormatter.format", "[%4$-7s] %5$s %n");
log =Logger.getLogger(DemoApplication.class.getName());
}
properties.load(DemoApplication.class.getClassLoader().getResourceAsStream("application.properties"));
/*
Todo todo = new Todo(1L, "configuration", "congratulations, you have set up JDBC correctly!", true);
insertData(todo, connection);
todo = readData(connection);
todo.setDetails("congratulations, you have updated data!");
updateData(todo, connection);
deleteData(todo, connection);
*/
This Java code will use the application.properties and the schema.sql files that we created earlier, in order to
connect to the SQL Server database and create a schema that will store our data.
In this file, you can see that we commented out the methods that insert, read, update, and delete data. We will code those methods in the rest of this article, and you will be able to uncomment them one after the other.
NOTE
The database credentials are stored in the user and password properties of the application.properties file. Those
credentials are used when executing DriverManager.getConnection(properties.getProperty("url"), properties); ,
as the properties file is passed as an argument.
You can now execute this main class with your favorite tool:
Using your IDE, you should be able to right-click on the DemoApplication class and execute it.
Using Maven, you can run the application by executing:
mvn exec:java -Dexec.mainClass="com.example.demo.DemoApplication" .
The application should connect to the Azure SQL Database, create a database schema, and then close the
connection, as you should see in the console logs:
public Todo() {
}
@Override
public String toString() {
return "Todo{" +
"id=" + id +
", description='" + description + '\'' +
", details='" + details + '\'' +
", done=" + done +
'}';
}
}
This class is a domain model mapped on the todo table that you created when executing the schema.sql script.
Insert data into Azure SQL database
In the src/main/java/DemoApplication.java file, after the main method, add the following method to insert data
into the database:
private static void insertData(Todo todo, Connection connection) throws SQLException {
    log.info("Insert data");
    // Insert the Todo fields into the todo table created by schema.sql.
    PreparedStatement insertStatement = connection
            .prepareStatement("INSERT INTO todo (id, description, details, done) VALUES (?, ?, ?, ?);");
    insertStatement.setLong(1, todo.getId());
    insertStatement.setString(2, todo.getDescription());
    insertStatement.setString(3, todo.getDetails());
    insertStatement.setBoolean(4, todo.isDone());
    insertStatement.executeUpdate();
}
You can now uncomment the two following lines in the main method:
Todo todo = new Todo(1L, "configuration", "congratulations, you have set up JDBC correctly!", true);
insertData(todo, connection);
Executing the main class should now produce the following output:
You can now uncomment the following line in the main method:
todo = readData(connection);
Executing the main class should now produce the following output:
[INFO ] Loading application properties
[INFO ] Connecting to the database
[INFO ] Database connection test: demo
[INFO ] Create database schema
[INFO ] Insert data
[INFO ] Read data
[INFO ] Data read from the database: Todo{id=1, description='configuration', details='congratulations, you
have set up JDBC correctly!', done=true}
[INFO ] Closing database connection
updateStatement.setString(1, todo.getDescription());
updateStatement.setString(2, todo.getDetails());
updateStatement.setBoolean(3, todo.isDone());
updateStatement.setLong(4, todo.getId());
updateStatement.executeUpdate();
readData(connection);
}
You can now uncomment the two following lines in the main method:
Executing the main class should now produce the following output:
You can now uncomment the following line in the main method:
deleteData(todo, connection);
Executing the main class should now produce the following output:
az group delete \
--name $AZ_RESOURCE_GROUP \
--yes
Next steps
Design your first database in Azure SQL Database
Microsoft JDBC Driver for SQL Server
Report issues/ask questions
What is Azure SQL Database?
9/13/2022 • 19 minutes to read • Edit Online
Deployment models
Azure SQL Database provides the following deployment options for a database:
Single database represents a fully managed, isolated database. You might use this option if you have modern
cloud applications and microservices that need a single reliable data source. A single database is similar to a
contained database in the SQL Server database engine.
Elastic pool is a collection of single databases with a shared set of resources, such as CPU or memory. Single
databases can be moved into and out of an elastic pool.
IMPORTANT
To understand the feature differences between SQL Database, SQL Server, and Azure SQL Managed Instance, as well as
the differences among different Azure SQL Database options, see SQL Database features.
SQL Database delivers predictable performance with multiple resource types, service tiers, and compute sizes. It
provides dynamic scalability with no downtime, built-in intelligent optimization, global scalability and
availability, and advanced security options. These capabilities allow you to focus on rapid app development and
accelerating your time-to-market, rather than on managing virtual machines and infrastructure. SQL Database is
currently in 38 datacenters around the world, so you can run your database in a datacenter near you.
With elastic pools, you don't need to focus on dialing database performance up and down as demand for
resources fluctuates. The pooled databases consume the performance resources of the elastic pool as needed.
Pooled databases consume but don't exceed the limits of the pool, so your cost remains predictable even if
individual database usage doesn't.
You can add and remove databases to the pool, scaling your app from a handful of databases to thousands, all
within a budget that you control. You can also control the minimum and maximum resources available to
databases in the pool, to ensure that no database in the pool uses all the pool resources, and that every pooled
database has a guaranteed minimum amount of resources. To learn more about design patterns for software as
a service (SaaS) applications that use elastic pools, see Design patterns for multi-tenant SaaS applications with
SQL Database.
Scripts can help with monitoring and scaling elastic pools. For an example, see Use PowerShell to monitor and
scale an elastic pool in Azure SQL Database.
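As a rough Azure CLI alternative (the names below are placeholders), you can inspect a pool and adjust its capacity in place:
az sql elastic-pool show \
    --resource-group myResourceGroup \
    --server mysqlserver12345 \
    --name myElasticPool

az sql elastic-pool update \
    --resource-group myResourceGroup \
    --server mysqlserver12345 \
    --name myElasticPool \
    --capacity 200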
Blend single databases with pooled databases
You can blend single databases with elastic pools, and change the service tiers of single databases and elastic
pools to adapt to your situation. You can also mix and match other Azure services with SQL Database to meet
your unique app design needs, drive cost and resource efficiencies, and unlock new business opportunities.
Availability capabilities
Azure SQL Database enables your business to continue operating during disruptions. In a traditional SQL Server
environment, you generally have at least two machines locally set up. These machines have exact, synchronously
maintained, copies of the data to protect against a failure of a single machine or component. This environment
provides high availability, but it doesn't protect against a natural disaster destroying your datacenter.
Disaster recovery assumes that a catastrophic event is geographically localized enough to have another
machine or set of machines with a copy of your data far away. In SQL Server, you can use Always On Availability
Groups running in async mode to get this capability. People often don't want to wait for replication to happen
that far away before committing a transaction, so there's potential for data loss when you do unplanned
failovers.
Databases in the Premium and Business Critical service tiers already do something similar to the
synchronization of an availability group. Databases in lower service tiers provide redundancy through storage
by using a different but equivalent mechanism. Built-in logic helps protect against a single machine failure. The
active geo-replication feature gives you the ability to protect against disaster where a whole region is destroyed.
The Azure Availability Zones feature helps protect against the outage of a single datacenter building within a
region, such as the loss of power or network to that building. In SQL Database, you place the different
replicas in different availability zones (different buildings, effectively).
In fact, the service level agreement (SLA) of Azure, powered by a global network of Microsoft-managed
datacenters, helps keep your app running 24/7. The Azure platform fully manages every database, and it
guarantees no data loss and a high percentage of data availability. Azure automatically handles patching,
backups, replication, failure detection, underlying hardware, software, or network failures, deploying
bug fixes, failovers, database upgrades, and other maintenance tasks. Standard availability is achieved by a
separation of compute and storage layers. Premium availability is achieved by integrating compute and storage
on a single node for performance, and then implementing technology similar to Always On Availability Groups.
For a full discussion of the high availability capabilities of Azure SQL Database, see SQL Database availability.
In addition, SQL Database provides built-in business continuity and global scalability features. These include:
Automatic backups:
SQL Database automatically performs full, differential, and transaction log backups of databases to
enable you to restore to any point in time. For single databases and pooled databases, you can configure
SQL Database to store full database backups to Azure Storage for long-term backup retention. For
managed instances, you can also perform copy-only backups for long-term backup retention.
Point-in-time restores:
All SQL Database deployment options support recovery to any point in time within the automatic backup
retention period for any database.
Active geo-replication:
The single database and pooled databases options allow you to configure up to four readable secondary
databases in either the same or globally distributed Azure datacenters. For example, if you have a SaaS
application with a catalog database that has a high volume of concurrent read-only transactions, use
active geo-replication to enable global read scale. This removes bottlenecks on the primary that are due
to read workloads. For managed instances, use auto-failover groups.
Auto-failover groups:
All SQL Database deployment options allow you to use failover groups to enable high availability and
load balancing at global scale. This includes transparent geo-replication and failover of large sets of
databases, elastic pools, and managed instances. Failover groups enable the creation of globally
distributed SaaS applications, with minimal administration overhead. This leaves all the complex
monitoring, routing, and failover orchestration to SQL Database.
Zone-redundant databases:
SQL Database allows you to provision Premium or Business Critical databases or elastic pools across
multiple availability zones. Because these databases and elastic pools have multiple redundant replicas
for high availability, placing these replicas into multiple availability zones provides higher resilience. This
includes the ability to recover automatically from datacenter-scale failures, without data loss.
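Several of these capabilities can also be driven from the Azure CLI. The following is a hedged sketch, not a production script; the resource names are placeholders and the restore time must fall within your backup retention period:
# Restore a database to a new database at a specific point in time
az sql db restore \
    --resource-group myResourceGroup \
    --server mysqlserver12345 \
    --name mySampleDatabase \
    --dest-name mySampleDatabase-restored \
    --time "2022-09-01T13:00:00Z"

# Create an auto-failover group spanning two logical servers
az sql failover-group create \
    --resource-group myResourceGroup \
    --server mysqlserver12345 \
    --partner-server mysqlserver12345-dr \
    --name myFailoverGroup \
    --add-db mySampleDatabase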
Built-in intelligence
With SQL Database, you get built-in intelligence that helps you dramatically reduce the costs of running and
managing databases, and that maximizes both performance and security of your application. Running millions
of customer workloads around the clock, SQL Database collects and processes a massive amount of telemetry
data, while also fully respecting customer privacy. Various algorithms continuously evaluate the telemetry data
so that the service can learn and adapt with your application.
Automatic performance monitoring and tuning
SQL Database provides detailed insight into the queries that you need to monitor. SQL Database learns about
your database patterns, and enables you to adapt your database schema to your workload. SQL Database
provides performance tuning recommendations, where you can review tuning actions and apply them.
However, constantly monitoring a database is a hard and tedious task, especially when you're dealing with many
databases. Intelligent Insights does this job for you by automatically monitoring SQL Database performance at
scale. It informs you of performance degradation issues, it identifies the root cause of each issue, and it provides
performance improvement recommendations when possible.
Managing a huge number of databases might be impossible to do efficiently even with all available tools and
reports that SQL Database and Azure provide. Instead of monitoring and tuning your database manually, you
might consider delegating some of the monitoring and tuning actions to SQL Database by using automatic
tuning. SQL Database automatically applies recommendations, tests, and verifies each of its tuning actions to
ensure the performance keeps improving. This way, SQL Database automatically adapts to your workload in a
controlled and safe way. Automatic tuning means that the performance of your database is carefully monitored
and compared before and after every tuning action. If the performance doesn't improve, the tuning action is
reverted.
Many of our partners that run SaaS multi-tenant apps on top of SQL Database are relying on automatic
performance tuning to make sure their applications always have stable and predictable performance. For them,
this feature tremendously reduces the risk of having a performance incident in the middle of the night. In
addition, because part of their customer base also uses SQL Server, they're using the same indexing
recommendations provided by SQL Database to help their SQL Server customers.
Two automatic tuning aspects are available in SQL Database:
Automatic index management : Identifies indexes that should be added in your database, and indexes that
should be removed.
Automatic plan correction : Identifies problematic plans and fixes SQL plan performance problems.
Adaptive query processing
You can use adaptive query processing, including interleaved execution for multi-statement table-valued
functions, batch mode memory grant feedback, and batch mode adaptive joins. Each of these adaptive query
processing features applies similar "learn and adapt" techniques, helping further address performance issues
related to historically intractable query optimization problems.
IMPORTANT
Microsoft has certified Azure SQL Database (all deployment options) against a number of compliance standards. For more
information, see the Microsoft Azure Trust Center, where you can find the most current list of SQL Database compliance
certifications.
Easy-to-use tools
SQL Database makes building and maintaining applications easier and more productive. SQL Database allows
you to focus on what you do best: building great apps. You can manage and develop in SQL Database by using
tools and skills you already have.
The Azure portal: A web-based application for managing all Azure services.
SQL Server Management Studio: A free, downloadable client application for managing any SQL infrastructure, from SQL Server to SQL Database.
SQL Server Data Tools in Visual Studio: A free, downloadable client application for developing SQL Server relational databases, databases in Azure SQL Database, Integration Services packages, Analysis Services data models, and Reporting Services reports.
Visual Studio Code: A free, downloadable, open-source code editor for Windows, macOS, and Linux. It supports extensions, including the mssql extension for querying Microsoft SQL Server, Azure SQL Database, and Azure Synapse Analytics.
SQL Database supports building applications with Python, Java, Node.js, PHP, Ruby, and .NET on macOS, Linux,
and Windows. SQL Database supports the same connection libraries as SQL Server.
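For example, the Azure CLI can emit a connection string template for the driver you plan to use; the server and database names below are placeholders:
az sql db show-connection-string \
    --server mysqlserver12345 \
    --name mySampleDatabase \
    --client jdbc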
Create and manage Azure SQL resources with the Azure portal
The Azure portal provides a single page where you can manage all of your Azure SQL resources including your
SQL Server on Azure virtual machines (VMs).
To access the Azure SQL page, from the Azure portal menu, select Azure SQL or search for and select Azure
SQL in any page.
NOTE
Azure SQL provides a quick and easy way to access all of your SQL resources in the Azure portal, including single and
pooled databases in Azure SQL Database as well as the logical server hosting them, SQL Managed Instances, and SQL
Server on Azure VMs. Azure SQL is not a service or resource, but rather a family of SQL-related services.
To manage existing resources, select the desired item in the list. To create new Azure SQL resources, select +
Create .
After selecting + Create , view additional information about the different options by selecting Show details on
any tile.
For details, see:
Create a single database
Create an elastic pool
Create a managed instance
Create a SQL virtual machine
Next steps
See the pricing page for cost comparisons and calculators regarding single databases and elastic pools.
See these quickstarts to get started:
Create a database in the Azure portal
Create a database with the Azure CLI
Create a database using PowerShell
For a set of Azure CLI and PowerShell samples, see:
Azure CLI samples for SQL Database
Azure PowerShell samples for SQL Database
For information about new capabilities as they're announced, see Azure Roadmap for SQL Database.
See the Azure SQL Database blog, where SQL Server product team members blog about SQL Database
news and features.
What's new in Azure SQL Database?
Preview
The following features of Azure SQL Database are currently in preview:
Azure Synapse Link for Azure SQL Database: Azure Synapse Link for SQL enables near real time analytics over operational data in Azure SQL Database or SQL Server 2022.
Elastic jobs: The elastic jobs feature is the SQL Server Agent replacement for Azure SQL Database as a PaaS offering.
Elastic queries: The elastic queries feature allows for cross-database queries in Azure SQL Database.
JavaScript & Python bindings: Use JavaScript or Python SQL bindings with Azure Functions.
Maintenance window advance notifications: Advance notifications are available for databases configured to use a non-default maintenance window. Advance notifications for maintenance windows are in public preview for Azure SQL Database.
Query editor in the Azure portal: The query editor in the portal allows you to run queries against your Azure SQL Database directly from the Azure portal.
Reverse migrate from Hyperscale: Reverse migration to the General Purpose service tier allows customers who have recently migrated an existing database in Azure SQL Database to the Hyperscale service tier to move back in an emergency, should Hyperscale not meet their needs. While reverse migration is initiated by a service tier change, it's essentially a size-of-data move between different architectures.
SQL Database emulator: The Azure SQL Database emulator provides the ability to locally validate database and query design together with client application code in a simple and frictionless model as part of the application development process.
SQL Database Projects extension: An extension to develop databases for Azure SQL Database with Azure Data Studio and VS Code. A SQL project is a local representation of SQL objects that comprise the schema for a single database, such as tables, stored procedures, or functions.
General availability (GA)
The following features of Azure SQL Database have transitioned from preview to general availability (GA):
Zone redundant configuration for Hyperscale databases (August 2022): The zone redundant configuration feature utilizes Azure Availability Zones to replicate databases across multiple physical locations within an Azure region. By selecting zone redundancy, you can make your Hyperscale databases resilient to a much larger set of failures, including catastrophic datacenter outages, without any changes to the application logic.
Query Store hints (August 2022): Use query hints to optimize your query execution via the OPTION clause.
Named Replicas for Hyperscale databases (June 2022): Named Replicas enable a broad variety of read scale-out scenarios, and easily implement near-real time hybrid transactional and analytical processing (HTAP) solutions.
Active geo-replication and Auto-failover groups for Hyperscale databases (June 2022): Active geo-replication and Auto-failover groups provide a turn-key business continuity solution for Hyperscale databases, letting you perform quick disaster recovery of databases in case of a regional disaster or a large scale outage.
Change data capture (April 2022): Change data capture (CDC) lets you track all the changes that occur on a database. Though this feature has been available for SQL Server for quite some time, using it with Azure SQL Database is now generally available.
Zone redundant configuration for General Purpose tier (April 2022): The zone redundant configuration feature utilizes Azure Availability Zones to replicate databases across multiple physical locations within an Azure region. By selecting zone redundancy, you can make your provisioned and serverless General Purpose databases and elastic pools resilient to a much larger set of failures, including catastrophic datacenter outages, without any changes to the application logic.
Storage redundancy for Hyperscale databases (March 2022): When creating a Hyperscale database, you can choose your preferred storage type: read-access geo-redundant storage (RA-GRS), zone-redundant storage (ZRS), or locally redundant storage (LRS) Azure standard storage. The selected storage redundancy option will be used for the lifetime of the database for both data storage redundancy and backup storage redundancy.
Azure Active Directory-only authentication (November 2021): It's possible to configure your Azure SQL Database to allow authentication only from Azure Active Directory.
Azure AD service principal (September 2021): Azure Active Directory (Azure AD) supports user creation in Azure SQL Database on behalf of Azure AD applications (service principals).
Audit management operations (March 2021): Azure SQL audit capabilities enable you to audit operations done by Microsoft support engineers when they need to access your SQL assets during a support request, enabling more transparency in your workforce.
Documentation changes
Learn about significant changes to the Azure SQL Database documentation.
August 2022
Zone redundant configuration for Hyperscale databases: The zone redundant configuration feature utilizes Azure Availability Zones to replicate databases across multiple physical locations within an Azure region. By selecting zone redundancy, you can make your Hyperscale databases resilient to a much larger set of failures, including catastrophic datacenter outages, without any changes to the application logic. This configuration option is now generally available. To learn more, review Zone redundant configuration for Hyperscale databases.
June 2022
Named Replicas for Hyperscale databases GA: Named Replicas enable a broad variety of read scale-out scenarios, and easily implement near-real time hybrid transactional and analytical processing (HTAP) solutions. This feature is now generally available. See named replicas to learn more.
Active geo-replication and Auto-failover groups for Hyperscale databases GA: Active geo-replication and Auto-failover groups are now generally available for Hyperscale databases, providing a turn-key business continuity solution, letting you perform quick disaster recovery of databases in case of a regional disaster or a large scale outage.
May 2022
JavaScript & Python bindings: Support for JavaScript and Python SQL bindings for Azure Functions is currently in preview. See Azure SQL bindings for Azure Functions to learn more.
Local development experience: The Azure SQL Database local development experience is a combination of tools and procedures that empowers application developers and database professionals to design, edit, build/validate, publish, and run database schemas for databases directly on their workstation using an Azure SQL Database containerized environment. To learn more, see Local development experience for Azure SQL Database.
SQL Database emulator: The Azure SQL Database emulator provides the ability to locally validate database and query design together with client application code in a simple and frictionless model as part of the application development process. The SQL Database emulator is currently in preview. Review SQL Database emulator to learn more.
SDK-style SQL projects: Use Microsoft.Build.Sql for SDK-style SQL projects in the SQL Database Projects extension in Azure Data Studio or Visual Studio Code. This feature is currently in preview. To learn more, see SDK-style SQL projects.
Azure Synapse Link for Azure SQL Database: Azure Synapse Link enables near real-time analytics over operational data in SQL Server 2022 and Azure SQL Database. With a seamless integration between operational stores and Azure Synapse Analytics dedicated SQL pools, Azure Synapse Link enables you to run analytics, business intelligence and machine learning scenarios on your operational data with minimum impact on source databases with a new change feed technology. For more information, see What is Synapse Link for SQL? (Preview).
April 2022
General Purpose tier Zone redundancy GA: Enabling zone redundancy for your provisioned and serverless General Purpose databases and elastic pools is now generally available in select regions. To learn more, including region availability, see General Purpose zone redundancy.
Change data capture GA: Change data capture (CDC) lets you track all the changes that occur on a database. Though this feature has been available for SQL Server for quite some time, using it with Azure SQL Database is now generally available. To learn more, see Change data capture.
March 2022
GA for maintenance window: The maintenance window feature allows you to configure a maintenance schedule for your Azure SQL Database and receive advance notifications of maintenance windows. Maintenance window advance notifications are in public preview for databases configured to use a non-default maintenance window.
Hyperscale zone redundant configuration preview: It's now possible to create new Hyperscale databases with zone redundancy to make your databases resilient to a much larger set of failures. This feature is currently in preview for the Hyperscale service tier. To learn more, see Hyperscale zone redundancy.
Hyperscale storage redundancy GA: Choosing your storage redundancy for your databases in the Hyperscale service tier is now generally available. See Configure backup storage redundancy to learn more.
February 2022
New Hyperscale articles: We have reorganized some existing content into new articles and added new content for Hyperscale. Learn about Hyperscale distributed functions architecture, how to manage a Hyperscale database, and how to create a Hyperscale database.
Free Azure SQL Database: Try Azure SQL Database for free using the Azure free account. To learn more, review Try SQL Database for free.
2021
Azure AD-only authentication: Restricting authentication to your Azure SQL Database only to Azure Active Directory users is now generally available. To learn more, see Azure AD-only authentication.
Split what's new: The previously combined What's new article has been split by product - What's new in SQL Database and What's new in SQL Managed Instance, making it easier to identify what features are currently in preview, generally available, and significant documentation changes. Additionally, the Known Issues in SQL Managed Instance content has moved to its own page.
Maintenance Window support for availability zones: You can now use the Maintenance Window feature if your Azure SQL Database is deployed to an availability zone. This feature is currently in preview.
Azure AD-only authentication: It's now possible to restrict authentication to your Azure SQL Database to Azure Active Directory users only. This feature is currently in preview. To learn more, see Azure AD-only authentication.
Query store hints: It's now possible to use query hints to optimize your query execution via the OPTION clause. To learn more, see Query store hints.
Change data capture: Using change data capture (CDC) with Azure SQL Database is now in preview. To learn more, see Change data capture.
SQL Database ledger: SQL Database ledger is in preview, and introduces the ability to cryptographically attest to other parties, such as auditors or other business parties, that your data hasn't been tampered with. To learn more, see Ledger.
Contribute to content
To contribute to the Azure SQL documentation, see the Docs contributor guide.
Try Azure SQL Database free with Azure free
account
Azure SQL Database is an intelligent, scalable, relational database service built for the cloud. SQL Database is a
fully managed platform as a service (PaaS) database engine that handles most database management functions
such as upgrading, patching, backups, and monitoring without user involvement.
Using an Azure free account, you can try Azure SQL Database for free for 12 months with the following
monthly limit :
1 S0 database with 10 database transaction units and 250 GB storage
This article shows you how to create and use an Azure SQL Database for free using an Azure free account.
Prerequisites
To try Azure SQL Database for free, you need:
An Azure free account. If you don't have one, create a free account before you begin.
Create a database
This article uses the Azure portal to create a SQL Database with public access. Alternatively, you can create a
SQL Database using PowerShell, the Azure CLI or an ARM template.
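For reference, a minimal Azure CLI sketch of the same configuration (an S0 database with 10 DTUs and 250 GB of storage) might look like the following, assuming a logical server already exists and the names are placeholders:
az sql db create \
    --resource-group myResourceGroup \
    --server mysqlserver12345 \
    --name mySampleDatabase \
    --service-objective S0 \
    --max-size 250GB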
To create your database, follow these steps:
1. Sign in to the Azure portal with your Azure free account.
2. Search for and select SQL databases :
Alternatively, you can search for and navigate to Free Services, and then select the Azure SQL
Database tile from the list:
3. Select Create .
4. On the Basics tab of the Create SQL Database form, under Project details , select the free trial Azure
Subscription .
5. For Resource group , select Create new , enter myResourceGroup, and select OK .
6. For Database name , enter mySampleDatabase.
7. For Server, select Create new, and fill out the New server form with the following values:
Server name: Enter mysqlserver, and add some characters for uniqueness. We can't provide an exact
server name to use because server names must be globally unique for all servers in Azure, not just
unique within a subscription. So enter something like mysqlserver12345, and the portal lets you
know if it's available or not.
Server admin login: Enter azureuser.
Password : Enter a password that meets complexity requirements, and enter it again in the Confirm
password field.
Location : Select a location from the dropdown list.
Select OK .
8. Leave Want to use SQL elastic pool set to No .
9. Under Compute + storage , select Configure database .
10. For the free trial, under Service Tier select Standard (For workloads with typical performance
requirements). Set DTUs to 10 and Data max size (GB) to 250, and then select Apply.
11. Leave Backup storage redundancy set to Geo-redundant backup storage.
12. Select Next: Networking at the bottom of the page.
13. On the Networking tab, for Connectivity method , select Public endpoint .
14. For Firewall rules, set Allow Azure services and resources to access this server to Yes and
set Add current client IP address to Yes.
15. Leave Connection policy set to Default .
16. For Encrypted Connections, leave Minimum TLS version set to TLS 1.2.
17. Select Next: Security at the bottom of the page.
18. Leave the values unchanged on Security tab.
19. Select Next: Additional settings at the bottom of the page.
20. On the Additional settings tab, in the Data source section, for Use existing data , select Sample .
This creates an AdventureWorksLT sample database so there are some tables and data to query and
experiment with, as opposed to an empty blank database.
21. Select Review + create at the bottom of the page.
22. On the Review + create page, after reviewing, select Create .
IMPORTANT
While creating the SQL Database from your Azure free account, you will still see an Estimated cost per month
in the Compute + Storage: Cost Summary blade and Review + Create tab. But, as long as you are using
your Azure free account, and your free service usage is within monthly limits, you won't be charged for the
service. To view usage information, review Monitor and track free services usage later in this article.
5. Select Run , and then review the query results in the Results pane.
6. Close the Query editor page, and select OK when prompted to discard your unsaved edits.
The following table describes the values on the track usage page:
Usage/limit: The usage of the meter for the current month, and the limit for the meter.
IMPORTANT
With an Azure free account, you also get $200 in credit to use in 30 days. During this time, any usage of the service
beyond the free monthly amount is deducted from this credit.
At the end of your first 30 days or after you spend your $200 credit (whichever comes first), you'll only pay for what
you use beyond the free monthly amount of services. To keep getting free services after 30 days, move to pay-as-
you-go pricing. If you don't move to pay as you go, you can't purchase Azure services beyond your $200 credit and
eventually your account and services will be disabled.
For more information, see Azure free account FAQ .
Clean up resources
When you're finished using these resources, you can delete the resource group you created, which will also
delete the server and single database within it.
To delete myResourceGroup and all its resources using the Azure portal:
1. In the portal, search for and select Resource groups , and then select myResourceGroup from the list.
2. On the resource group page, select Delete resource group .
3. Under Type the resource group name , enter myResourceGroup, and then select Delete .
Next steps
Connect and query your database using different tools and languages:
Connect and query using SQL Server Management Studio
Connect and query using Azure Data Studio
Getting started with single databases in Azure SQL
Database
Quickstart overview
In this section, you'll see an overview of available articles that can help you to quickly get started with single
databases. The following quickstarts enable you to quickly create a single database, configure a server-level
firewall rule, and then import a database into the new single database using a .bacpac file:
Create a single database using the Azure portal.
After creating the database, you would need to secure your database by configuring firewall rules.
If you have an existing database on SQL Server that you want to migrate to Azure SQL Database, you should
install Data Migration Assistant (DMA), which analyzes your databases on SQL Server and finds any issues that
could block migration. If you don't find any issues, you can export your database as a .bacpac file and import it
using the Azure portal or SqlPackage.
Next steps
Find a high-level list of supported features in Azure SQL Database.
Learn how to make your database more secure.
Find more advanced how-to's in how to use a single database in Azure SQL Database.
Find more sample scripts written in PowerShell and the Azure CLI.
Learn more about the management API that you can use to configure your databases.
Identify the right Azure SQL Database or Azure SQL Managed Instance SKU for your on-premises database.
Quickstart: Create a single database - Azure SQL
Database
In this quickstart, you create a single database in Azure SQL Database using either the Azure portal, a
PowerShell script, or an Azure CLI script. You then query the database using Query editor in the Azure portal.
Prerequisites
An active Azure subscription. If you don't have one, create a free account.
The latest version of either Azure PowerShell or Azure CLI.
To create a single database in the Azure portal, this quickstart starts at the Azure SQL page.
1. Browse to the Select SQL Deployment option page.
2. Under SQL databases , leave Resource type set to Single database , and select Create .
3. On the Basics tab of the Create SQL Database form, under Project details , select the desired Azure
Subscription .
4. For Resource group , select Create new , enter myResourceGroup, and select OK .
5. For Database name , enter mySampleDatabase.
6. For Server, select Create new, and fill out the New server form with the following values:
Server name: Enter mysqlserver, and add some characters for uniqueness. We can't provide an exact
server name to use because server names must be globally unique for all servers in Azure, not just
unique within a subscription. So enter something like mysqlserver12345 , and the portal lets you know
if it's available or not.
Location : Select a location from the dropdown list.
Authentication method : Select Use SQL authentication .
Server admin login: Enter azureuser.
Password : Enter a password that meets requirements, and enter it again in the Confirm password
field.
Select OK .
7. Leave Want to use SQL elastic pool set to No .
8. Under Compute + storage , select Configure database .
9. This quickstart uses a serverless database, so leave Service tier set to General Purpose (Scalable
compute and storage options) and set Compute tier to Serverless. Select Apply.
10. Under Backup storage redundancy , choose a redundancy option for the storage account where your
backups will be saved. To learn more, see backup storage redundancy.
11. Select Next: Networking at the bottom of the page.
12. On the Networking tab, for Connectivity method , select Public endpoint .
13. For Firewall rules, set Add current client IP address to Yes. Leave Allow Azure services and
resources to access this server set to No.
14. Under Connection policy , choose the Default connection policy, and leave the Minimum TLS
version at the default of TLS 1.2.
15. Select Next: Security at the bottom of the page.
16. On the Security page, you can choose to start a free trial of Microsoft Defender for SQL, as well as
configure Ledger, Managed identities and Transparent data encryption (TDE) if you desire. Select Next:
Additional settings at the bottom of the page.
17. On the Additional settings tab, in the Data source section, for Use existing data , select Sample .
This creates an AdventureWorksLT sample database so there's some tables and data to query and
experiment with, as opposed to an empty blank database. You can also configure database collation and a
maintenance window.
18. Select Review + create at the bottom of the page:
19. On the Review + create page, after reviewing, select Create .
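If you'd rather script this quickstart, a rough Azure CLI equivalent of the portal steps is shown below. The server and database names and the password are placeholders:
az sql server create \
    --name mysqlserver12345 \
    --resource-group myResourceGroup \
    --location eastus \
    --admin-user azureuser \
    --admin-password <complex-password>

az sql db create \
    --resource-group myResourceGroup \
    --server mysqlserver12345 \
    --name mySampleDatabase \
    --edition GeneralPurpose \
    --compute-model Serverless \
    --family Gen5 \
    --capacity 2 \
    --sample-name AdventureWorksLT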
5. Select Run , and then review the query results in the Results pane.
6. Close the Query editor page, and select OK when prompted to discard your unsaved edits.
Clean up resources
Keep the resource group, server, and single database to go on to the next steps, and learn how to connect and
query your database with different methods.
When you're finished using these resources, you can delete the resource group you created, which will also
delete the server and single database within it.
Portal
Azure CLI
Azure CLI (sql up)
PowerShell
To delete myResourceGroup and all its resources using the Azure portal:
1. In the portal, search for and select Resource groups , and then select myResourceGroup from the list.
2. On the resource group page, select Delete resource group .
3. Under Type the resource group name , enter myResourceGroup, and then select Delete .
Next steps
Connect and query your database using different tools and languages:
Connect and query using SQL Server Management Studio
Connect and query using Azure Data Studio
Want to optimize and save on your cloud spending?
Start analyzing costs with Cost Management
Quickstart: Create a Hyperscale database in Azure
SQL Database
In this quickstart, you create a logical server in Azure and a Hyperscale database in Azure SQL Database using
the Azure portal, a PowerShell script, or an Azure CLI script, with the option to create one or more High
Availability (HA) replicas. If you would like to use an existing logical server in Azure, you can also create a
Hyperscale database using Transact-SQL.
Prerequisites
An active Azure subscription. If you don't have one, create a free account.
The latest version of either Azure PowerShell or Azure CLI, if you would like to follow the quickstart
programmatically. Alternately, you can complete the quickstart in the Azure portal.
An existing logical server in Azure is required if you would like to create a Hyperscale database with Transact-
SQL. For this approach, you will need to install SQL Server Management Studio (SSMS), Azure Data Studio,
or the client of your choice to run Transact-SQL commands (sqlcmd, etc.).
To create a single database in the Azure portal, this quickstart starts at the Azure SQL page.
1. Browse to the Select SQL Deployment option page.
2. Under SQL databases , leave Resource type set to Single database , and select Create .
3. On the Basics tab of the Create SQL Database form, under Project details , select the desired Azure
Subscription .
4. For Resource group , select Create new , enter myResourceGroup, and select OK .
5. For Database name , enter mySampleDatabase.
6. For Server, select Create new, and fill out the New server form with the following values:
Server name: Enter mysqlserver, and add some characters for uniqueness. We can't provide an exact
server name to use because server names must be globally unique for all servers in Azure, not just
unique within a subscription. Enter a name such as mysqlserver12345, and the portal will let you
know if it's available.
Server admin login: Enter azureuser.
Password : Enter a password that meets requirements, and enter it again in the Confirm password
field.
Location : Select a location from the dropdown list.
Select OK .
7. Under Compute + storage , select Configure database .
8. This quickstart creates a Hyperscale database. For Service tier, select Hyperscale.
9. Under Compute Hardware , select Change configuration . Review the available hardware
configurations and select the most appropriate configuration for your database. For this example, we will
select the Gen5 configuration.
10. Select OK to confirm the hardware generation.
11. Under Save money , review if you qualify to use Azure Hybrid Benefit for this database. If so, select Yes
and then confirm you have the required license.
12. Optionally, adjust the vCores slider if you would like to increase the number of vCores for your database.
For this example, we will select 2 vCores.
13. Adjust the High-Availability Secondary Replicas slider to create one High Availability (HA) replica.
14. Select Apply .
15. Carefully consider the configuration option for Backup storage redundancy when creating a
Hyperscale database. Storage redundancy can only be specified during the database creation process for
Hyperscale databases. You may choose locally redundant (preview), zone-redundant (preview), or geo-
redundant storage. The selected storage redundancy option will be used for the lifetime of the database
for both data storage redundancy and backup storage redundancy. Existing databases can migrate to
different storage redundancy using database copy or point in time restore.
16. Select Next: Networking at the bottom of the page.
17. On the Networking tab, for Connectivity method , select Public endpoint .
18. For Firewall rules, set Add current client IP address to Yes. Leave Allow Azure services and
resources to access this server set to No.
19. Select Next: Security at the bottom of the page.
20. Optionally, enable Microsoft Defender for SQL.
21. Select Next: Additional settings at the bottom of the page.
22. On the Additional settings tab, in the Data source section, for Use existing data , select Sample .
This creates an AdventureWorksLT sample database so there's some tables and data to query and
experiment with, as opposed to an empty blank database.
23. Select Review + create at the bottom of the page:
24. On the Review + create page, after reviewing, select Create .
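A rough Azure CLI sketch of the same Hyperscale configuration (Gen5 hardware, 2 vCores, one HA replica) follows; the names are placeholders, and the --ha-replicas flag assumes a recent Azure CLI version:
az sql db create \
    --resource-group myResourceGroup \
    --server mysqlserver12345 \
    --name mySampleDatabase \
    --edition Hyperscale \
    --family Gen5 \
    --capacity 2 \
    --ha-replicas 1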
If you created an empty database using the Transact-SQL sample code, enter another example query in
the Query editor pane, such as the following:
ALTER TABLE dbo.TestTable ADD CONSTRAINT DF_TestTable_TestTime DEFAULT (getdate()) FOR TestTime
GO
5. Select Run , and then review the query results in the Results pane.
6. Close the Query editor page, and select OK when prompted to discard your unsaved edits.
Clean up resources
Keep the resource group, server, and single database to go on to the next steps, and learn how to connect and
query your database with different methods.
When you're finished using these resources, you can delete the resource group you created, which will also
delete the server and single database within it.
Portal
Azure CLI
PowerShell
Transact-SQL
To delete myResourceGroup and all its resources using the Azure portal:
1. In the portal, search for and select Resource groups , and then select myResourceGroup from the list.
2. On the resource group page, select Delete resource group .
3. Under Type the resource group name , enter myResourceGroup, and then select Delete .
Next steps
Connect and query your database using different tools and languages:
Connect and query using SQL Server Management Studio
Connect and query using Azure Data Studio
Learn more about Hyperscale databases in the following articles:
Hyperscale service tier
Azure SQL Database Hyperscale FAQ
Hyperscale secondary replicas
Azure SQL Database Hyperscale named replicas FAQ
Quickstart: Create a single database in Azure SQL
Database using Bicep
Creating a single database is the quickest and simplest option for creating a database in Azure SQL Database.
This quickstart shows you how to create a single database using Bicep.
Bicep is a domain-specific language (DSL) that uses declarative syntax to deploy Azure resources. It provides
concise syntax, reliable type safety, and support for code reuse. Bicep offers the best authoring experience for
your infrastructure-as-code solutions in Azure.
Prerequisites
If you don't have an Azure subscription, create a free account.
CLI
PowerShell
NOTE
Replace <admin-login> with the administrator username of the SQL logical server. You'll be prompted to enter
administratorLoginPassword .
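For example, an Azure CLI deployment typically looks like the following, assuming the Bicep file is saved locally as main.bicep:
az group create --name exampleRG --location eastus

az deployment group create \
    --resource-group exampleRG \
    --template-file main.bicep \
    --parameters administratorLogin=<admin-login>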
When the deployment finishes, you should see a message indicating the deployment succeeded.
Review deployed resources
Use the Azure portal, Azure CLI, or Azure PowerShell to list the deployed resources in the resource group.
CLI
PowerShell
Clean up resources
When no longer needed, use the Azure portal, Azure CLI, or Azure PowerShell to delete the resource group and
its resources.
CLI
PowerShell
Next steps
Create a server-level firewall rule to connect to the single database from on-premises or remote tools. For
more information, see Create a server-level firewall rule.
After you create a server-level firewall rule, connect and query your database using several different tools
and languages.
Connect and query using SQL Server Management Studio
Connect and query using Azure Data Studio
To create a single database using the Azure CLI, see Azure CLI samples.
To create a single database using Azure PowerShell, see Azure PowerShell samples.
To learn how to create Bicep files, see Create Bicep files with Visual Studio Code.
Quickstart: Create a single database in Azure SQL
Database using an ARM template
Creating a single database is the quickest and simplest option for creating a database in Azure SQL Database.
This quickstart shows you how to create a single database using an Azure Resource Manager template (ARM
template).
An ARM template is a JavaScript Object Notation (JSON) file that defines the infrastructure and configuration for
your project. The template uses declarative syntax. In declarative syntax, you describe your intended deployment
without writing the sequence of programming commands to create the deployment.
If your environment meets the prerequisites and you're familiar with using ARM templates, select the Deploy to
Azure button. The template will open in the Azure portal.
Prerequisites
If you don't have an Azure subscription, create a free account.
{
"$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
"contentVersion": "1.0.0.0",
"metadata": {
"_generator": {
"name": "bicep",
"version": "0.5.6.12127",
"templateHash": "17606057535442789180"
}
},
"parameters": {
"serverName": {
"type": "string",
"defaultValue": "[uniqueString('sql', resourceGroup().id)]",
"metadata": {
"description": "The name of the SQL logical server."
}
},
"sqlDBName": {
"type": "string",
"defaultValue": "SampleDB",
"metadata": {
"description": "The name of the SQL Database."
}
},
"location": {
"type": "string",
"type": "string",
"defaultValue": "[resourceGroup().location]",
"metadata": {
"description": "Location for all resources."
}
},
"administratorLogin": {
"type": "string",
"metadata": {
"description": "The administrator username of the SQL logical server."
}
},
"administratorLoginPassword": {
"type": "secureString",
"metadata": {
"description": "The administrator password of the SQL logical server."
}
}
},
"resources": [
{
"type": "Microsoft.Sql/servers",
"apiVersion": "2021-08-01-preview",
"name": "[parameters('serverName')]",
"location": "[parameters('location')]",
"properties": {
"administratorLogin": "[parameters('administratorLogin')]",
"administratorLoginPassword": "[parameters('administratorLoginPassword')]"
}
},
{
"type": "Microsoft.Sql/servers/databases",
"apiVersion": "2021-08-01-preview",
"name": "[format('{0}/{1}', parameters('serverName'), parameters('sqlDBName'))]",
"location": "[parameters('location')]",
"sku": {
"name": "Standard",
"tier": "Standard"
},
"dependsOn": [
"[resourceId('Microsoft.Sql/servers', parameters('serverName'))]"
]
}
]
}
$resourceGroupName = "${projectName}rg"
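If you prefer the Azure CLI over PowerShell, a minimal deployment sketch looks like the following; it assumes a projectName shell variable is set and that the template is saved locally as azuredeploy.json:
az group create --name "${projectName}rg" --location eastus

az deployment group create \
    --resource-group "${projectName}rg" \
    --template-file azuredeploy.json \
    --parameters administratorLogin=<admin-login>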
Clean up resources
Keep this resource group, server, and single database if you want to go to the Next steps. The next steps show
you how to connect and query your database using different methods.
To delete the resource group:
Next steps
Create a server-level firewall rule to connect to the single database from on-premises or remote tools. For
more information, see Create a server-level firewall rule.
After you create a server-level firewall rule, connect and query your database using several different tools
and languages.
Connect and query using SQL Server Management Studio
Connect and query using Azure Data Studio
To create a single database using the Azure CLI, see Azure CLI samples.
To create a single database using Azure PowerShell, see Azure PowerShell samples.
To learn how to create ARM templates, see Create your first template.
Quickstart: Create a database in Azure SQL
Database with ledger enabled
Prerequisite
You need an active Azure subscription. If you don't have one, create a free account.
To create a single database in the Azure portal, this quickstart starts at the Azure SQL page.
1. Browse to the Select SQL Deployment option page.
2. Under SQL databases , leave Resource type set to Single database , and select Create .
3. On the Basics tab of the Create SQL Database form, under Project details , select the Azure
subscription you want to use.
4. For Resource group , select Create new , enter myResourceGroup , and select OK .
5. For Database name , enter demo .
6. For Server, select Create new. Fill out the New server form with the following values:
Server name: Enter mysqlserver, and add some characters for uniqueness. We can't provide an
exact server name to use because server names must be globally unique for all servers in Azure, not
just unique within a subscription. Enter something like mysqlserver12345, and the portal lets you
know if it's available or not.
Server admin login: Enter azureuser.
Password : Enter a password that meets requirements. Enter it again in the Confirm password box.
Location : Select a location from the dropdown list.
Allow Azure services to access this server: Select this option to enable access to digest storage.
Select OK .
7. Leave Want to use SQL elastic pool set to No .
8. Under Compute + storage , select Configure database .
9. This quickstart uses a serverless database, so select Serverless, and then select Apply.
10. On the Networking tab, for Connectivity method , select Public endpoint .
11. For Firewall rules, set Add current client IP address to Yes. Leave Allow Azure services and
resources to access this server set to No.
12. Select Next: Security at the bottom of the page.
13. On the Security tab, in the Ledger section, select the Configure ledger option.
14. On the Configure ledger pane, in the Ledger section, select the Enable for all future tables in this
database checkbox. This setting ensures that all future tables in the database will be ledger tables. For
this reason, any tampering with data in the database can be detected. By default, new tables will be
created as updatable ledger tables, even if you don't specify LEDGER = ON in CREATE TABLE. You can also
leave this option unselected. You're then required to enable ledger functionality on a per-table basis when
you create new tables by using Transact-SQL.
15. In the Digest Storage section, Enable automatic digest storage is automatically selected. Then, a
new Azure Storage account and container where your digests are stored is created.
16. Select Apply .
Clean up resources
Keep the resource group, server, and single database for the next steps. You'll learn how to use the ledger feature
of your database with different methods.
When you're finished using these resources, delete the resource group you created. This action also deletes the
server and single database within it, and the storage account.
NOTE
If you've configured and locked a time-based retention policy on the container, you need to wait until the specified
immutability period ends before you can delete the storage account.
Portal
The Azure CLI
PowerShell
To delete myResourceGroup and all its resources by using the Azure portal:
1. In the portal, search for and select Resource groups . Then select myResourceGroup from the list.
2. On the resource group page, select Delete resource group .
3. Under Type the resource group name , enter myResourceGroup , and then select Delete .
Next steps
Connect and query your database by using different tools and languages:
Create and use updatable ledger tables
Create and use append-only ledger tables
Create an Azure SQL Database server with a user-
assigned managed identity
NOTE
If you're looking for a guide on Azure SQL Managed Instance, see Create an Azure SQL Managed Instance with a user-
assigned managed identity.
This how-to guide outlines the steps to create a logical server for Azure SQL Database with a user-assigned
managed identity. For more information on the benefits of using a user-assigned managed identity for the
server identity in Azure SQL Database, see User-assigned managed identity in Azure AD for Azure SQL.
Prerequisites
To provision a SQL Database server with a user-assigned managed identity, the SQL Server Contributor role
(or a role with greater permissions), along with an Azure RBAC role containing the following action is
required:
Microsoft.ManagedIdentity/userAssignedIdentities/*/assign/action - For example, the Managed
Identity Operator has this action.
Create a user-assigned managed identity and assign it the necessary permission to be a server or managed
instance identity. For more information, see Manage user-assigned managed identities and user-assigned
managed identity permissions for Azure SQL.
Az.Sql module 3.4 or higher is required when using PowerShell for user-assigned managed identities.
The Azure CLI 2.26.0 or higher is required to use the Azure CLI with user-assigned managed identities.
For a list of limitations and known issues with using user-assigned managed identity, see User-assigned
managed identity in Azure AD for Azure SQL
NOTE
Multiple user-assigned managed identities can be added to the server, but only one identity can be the primary identity
at any given time. In this example, the system-assigned managed identity is disabled, but it can be enabled as well.
Portal
The Azure CLI
PowerShell
REST API
ARM Template
1. Browse to the Select SQL deployment option page in the Azure portal.
2. If you aren't already signed in to Azure portal, sign in when prompted.
3. Under SQL databases , leave Resource type set to Single database , and select Create .
4. On the Basics tab of the Create SQL Database form, under Project details , select the desired Azure
Subscription .
5. For Resource group , select Create new , enter a name for your resource group, and select OK .
6. For Database name enter your desired database name.
7. For Server, select Create new, and fill out the New server form with the following values:
Server name: Enter a unique server name. Server names must be globally unique for all servers in
Azure, not just unique within a subscription.
Server admin login: Enter an admin login name, for example: azureuser.
Password : Enter a password that meets the password requirements, and enter it again in the
Confirm password field.
Location : Select a location from the dropdown list
8. Select Next: Networking at the bottom of the page.
9. On the Networking tab, for Connectivity method , select Public endpoint .
10. For Firewall rules, set Add current client IP address to Yes. Leave Allow Azure services and
resources to access this server set to No.
11. Select Next: Security at the bottom of the page.
12. On the Security tab, under Identity , select Configure Identities .
13. On the Identity blade, under User assigned managed identity , select Add . Select the desired
Subscription and then under User assigned managed identities select the desired user assigned
managed identity from the selected subscription. Then select the Select button.
14. Under Primary identity, select the same user-assigned managed identity selected in the previous step.
NOTE
If the system-assigned managed identity is the primary identity, the Primary identity field must be empty.
See also
User-assigned managed identity in Azure AD for Azure SQL
Create an Azure SQL Managed Instance with a user-assigned managed identity.
Quickstart: Create a server-level firewall rule in
Azure portal
Prerequisites
We will use the resources developed in Create a single database using the Azure portal as a starting point for
this tutorial.
NOTE
Azure SQL Database communicates over port 1433. When you connect from within a corporate network, outbound
traffic over port 1433 may not be permitted by your network firewall. This means your IT department needs to open port
1433 for you to connect to your server.
IMPORTANT
A firewall rule of 0.0.0.0 enables all Azure services to pass through the server-level firewall rule and attempt to connect to
a database through the server.
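The same rules can be managed with the Azure CLI; the following is a minimal sketch in which the server and rule names are placeholders and 203.0.113.5 stands in for your client IP address:
# The special 0.0.0.0 rule that allows Azure services and resources to access the server
az sql server firewall-rule create \
    --resource-group myResourceGroup \
    --server mysqlserver12345 \
    --name AllowAzureServices \
    --start-ip-address 0.0.0.0 \
    --end-ip-address 0.0.0.0

# A rule that opens port 1433 for a single client IP address
az sql server firewall-rule create \
    --resource-group myResourceGroup \
    --server mysqlserver12345 \
    --name AllowMyClientIP \
    --start-ip-address 203.0.113.5 \
    --end-ip-address 203.0.113.5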
We'll use the following steps to create a server-level IP-based, firewall rule for a specific, client IP address. This
enables external connectivity for that IP address through the Azure SQL Database firewall.
1. After the database deployment completes, select SQL databases from the left-hand menu and then
select mySampleDatabase on the SQL databases page. The overview page for your database opens. It
displays the fully qualified server name (such as mydocssampleserver.database.windows.net) and
provides options for further configuration. You can also find the firewall settings by navigating directly to
your server, and selecting Networking under Security .
2. Copy the fully qualified server name. You will use it when you connect to your server and its databases in
other quickstarts. Select Set server firewall on the toolbar.
3. Set Public network access to Selected networks to reveal the virtual networks and firewall rules.
When set to Disabled , virtual networks and firewall rule settings are hidden.
4. Choose Add your client IP to add your current IP address to a new, server-level, firewall rule. This rule
can open Port 1433 for a single IP address or for a range of IP addresses. You can also configure firewall
settings by choosing Add a firewall rule .
IMPORTANT
By default, access through the Azure SQL Database firewall is disabled for all Azure services. Choose ON on this
page to enable access for all Azure services.
5. Select Save . Port 1433 is now open on the server, and a server-level, IP-based firewall rule is created for
your current IP address.
6. Close the Networking page.
Open SQL Server Management Studio or another tool of your choice. Use the server admin account you
created earlier to connect to the server and its databases from your IP address.
7. Save the resources from this quickstart to complete additional SQL database tutorials.
Clean up resources
Use the following steps to delete the resources that you created during this quickstart:
1. From the left-hand menu in Azure portal, select Resource groups and then select myResourceGroup .
2. On your resource group page, select Delete , type myResourceGroup in the text box, and then select
Delete .
Next steps
Learn how to connect and query your database using your favorite tools or languages, including:
Connect and query using SQL Server Management Studio
Connect and query using Azure Data Studio
To learn how to design your first database, create tables, and insert data, see one of these tutorials:
Design your first single database in Azure SQL Database using SSMS
Design a single database in Azure SQL Database and connect with C# and ADO.NET
Quickstart: Create a local development environment
for Azure SQL Database
Prerequisites
To complete this Quickstart, you must first Set up a local development environment for Azure SQL Database.
6. To set the target platform for your project, right-click the Database Project name and choose Change
Target Platform . Select Azure SQL Database as the target platform for your project.
Setting your target platform provides editing and build-time support for your SQL Database Project
objects and scripts. After you select your target platform, Visual Studio Code highlights syntax issues or
indicates when you're using features that the selected platform doesn't support.
Optionally, SQL Database Project files can be put under source control together with your application
projects.
7. Add objects to your Database Project. You can create or alter database objects such as tables, views,
stored procedures and scripts. For example, right-click the Database Project name and select Add
Table to add a table.
8. Build your Database Project to validate that it will work against the Azure SQL Database platform. To build
your project, right-click the Database Project name and select Build .
9. Once your Database Project is ready to be tested, publish it to a target. To begin the publishing process,
right-click on the name of your Database Project and select Publish .
10. When publishing, you can choose to publish to either a new or existing server. In this example, we choose
Publish to a new Azure SQL Database emulator .
11. When publishing to a new Azure SQL Database emulator, you're prompted to choose between the Lite and
Full images. The Lite image is compatible with most Azure SQL Database capabilities and is a
lightweight image that takes less time to download and instantiate. The Full image gives you access to
advanced features like in-memory optimized tables, geo-spatial data types, and more, but requires more
resources.
You can create as many local instances as necessary based on available resources, and manage their
lifecycle through the Visual Studio Code Docker Extension or CLI commands.
12. Once instances of your Database Projects are running, you can connect from the Visual Studio Code
mssql extension and test your scripts and queries, like any regular database in Azure SQL Database.
13. With each iteration of adding or modifying objects in your Database Project, rebuild and deploy the project
to one of the containerized instances running on your local machine, until it's ready.
14. The final step of the Database Project lifecycle is to publish the finished artifact to a new or existing
database in Azure SQL Database using the mssql extension. Right-click the Database Project name and
choose to Publish . Then select the destination where you want to publish your project, such as a new or
existing logical server in Azure.
Next steps
Learn more about the local development experience for Azure SQL Database:
Set up a local development environment for Azure SQL Database
Create a Database Project for a local Azure SQL Database development environment
Publish a Database Project for Azure SQL Database to the local emulator
Quickstart: Create a local development environment for Azure SQL Database
Introducing the Azure SQL Database emulator
Use GitHub Actions to connect to Azure SQL
Database
Get started with GitHub Actions by using a workflow to deploy database updates to Azure SQL Database.
Prerequisites
You will need:
An Azure account with an active subscription. Create an account for free.
A GitHub repository with a dacpac package ( Database.dacpac ). If you don't have a GitHub account, sign up
for free.
An Azure SQL Database.
Quickstart: Create an Azure SQL Database single database
How to create a dacpac package from your existing SQL Server Database
You can create a service principal with the az ad sp create-for-rbac command in the Azure CLI. Run this
command with Azure Cloud Shell in the Azure portal or by selecting the Try it button.
Replace the placeholder server-name with the name of your SQL server hosted on Azure. Replace
subscription-id and resource-group with the subscription ID and resource group connected to your SQL
server.
The output is a JSON object with the role assignment credentials that provide access to your database similar to
this example. Copy your output JSON object for later.
{
"clientId": "<GUID>",
"clientSecret": "<GUID>",
"subscriptionId": "<GUID>",
"tenantId": "<GUID>",
(...)
}
IMPORTANT
It is always a good practice to grant minimum access. The scope in the previous example is limited to the specific server
and not the entire resource group.
- uses: azure/login@v1
with:
creds: ${{ secrets.AZURE_CREDENTIALS }}
name: CI
on:
push:
branches: [ main ]
pull_request:
branches: [ main ]
4. Rename your workflow SQL for GitHub Actions and add the checkout and login actions. These actions
will check out your site code and authenticate with Azure using the AZURE_CREDENTIALS GitHub secret you
created earlier.
Service principal
OpenID Connect
on:
push:
branches: [ main ]
pull_request:
branches: [ main ]
jobs:
build:
runs-on: windows-latest
steps:
- uses: actions/checkout@v1
- uses: azure/login@v1
with:
creds: ${{ secrets.AZURE_CREDENTIALS }}
1. Use the Azure SQL Deploy action to connect to your SQL instance. Replace SQL_SERVER_NAME with the
name of your server. You should have a dacpac package ( Database.dacpac ) at the root level of your
repository.
- uses: azure/sql-action@v1
with:
server-name: SQL_SERVER_NAME
connection-string: ${{secrets.AZURE_SQL_CONNECTION_STRING }}
dacpac-package: './Database.dacpac'
2. Complete your workflow by adding an action to logout of Azure. Here's the completed workflow. The file
will appear in the .github/workflows folder of your repository.
Service principal
OpenID Connect
name: SQL for GitHub Actions
on:
push:
branches: [ main ]
pull_request:
branches: [ main ]
jobs:
build:
runs-on: windows-latest
steps:
- uses: actions/checkout@v1
- uses: azure/login@v1
with:
creds: ${{ secrets.AZURE_CREDENTIALS }}
- uses: azure/sql-action@v1
with:
server-name: SQL_SERVER_NAME
connection-string: ${{secrets.AZURE_SQL_CONNECTION_STRING }}
dacpac-package: './Database.dacpac'
# Azure logout
- name: logout
run: |
az logout
Clean up resources
When your Azure SQL database and repository are no longer needed, clean up the resources you deployed by
deleting the resource group and your GitHub repository.
Next steps
Learn about Azure and GitHub integration
Tutorial: Design a relational database in Azure SQL
Database using SSMS
TIP
This free Learn module shows you how to Develop and configure an ASP.NET application that queries an Azure SQL
Database, including the creation of a simple database.
NOTE
For the purpose of this tutorial, we are using Azure SQL Database. You could also use a pooled database in an elastic pool
or a SQL Managed Instance. For connectivity to a SQL Managed Instance, see these SQL Managed Instance quickstarts:
Quickstart: Configure Azure VM to connect to an Azure SQL Managed Instance and Quickstart: Configure a point-to-site
connection to an Azure SQL Managed Instance from on-premises.
Prerequisites
To complete this tutorial, make sure you've installed:
SQL Server Management Studio (latest version)
BCP and SQLCMD (latest version)
3. Fill out the SQL Database form with the following information:
4. Click Server to use an existing server or create and configure a new server. Either select an existing
server or click Create a new server and fill out the New server form with the following information:
Server name : Any globally unique name. For valid server names, see Naming rules and restrictions.
Server admin login : Any valid name. For valid login names, see Database identifiers.
5. Click Select .
6. Click Pricing tier to specify the service tier, the number of DTUs or vCores, and the amount of storage.
You may explore the options for the number of DTUs/vCores and storage that is available to you for each
service tier.
After selecting the service tier, the number of DTUs or vCores, and the amount of storage, click Apply .
7. Enter a Collation for the blank database (for this tutorial, use the default value). For more information
about collations, see Collations
8. Now that you've completed the SQL Database form, click Create to provision the database. This step
may take a few minutes.
9. On the toolbar, click Notifications to monitor the deployment process.
Create a server-level IP firewall rule
Azure SQL Database creates an IP firewall at the server-level. This firewall prevents external applications and
tools from connecting to the server and any databases on the server unless a firewall rule allows their IP
through the firewall. To enable external connectivity to your database, you must first add an IP firewall rule for
your IP address (or IP address range). Follow these steps to create a server-level IP firewall rule.
IMPORTANT
Azure SQL Database communicates over port 1433. If you are trying to connect to this service from within a corporate
network, outbound traffic over port 1433 may not be allowed by your network's firewall. If so, you cannot connect to
your database unless your administrator opens port 1433.
1. After the deployment completes, select SQL databases from the Azure portal menu or search for and
select SQL databases from any page.
2. Select yourDatabase on the SQL databases page. The overview page for your database opens, showing
the fully qualified Server name (such as contosodatabaseserver01.database.windows.net ) and
providing options for further configuration.
3. Copy this fully qualified server name to use when connecting to your server and its databases from SQL
Server Management Studio.
4. Click Set server firewall on the toolbar. The Firewall settings page for the server opens.
5. Click Add client IP on the toolbar to add your current IP address to a new IP firewall rule. An IP firewall
rule can open port 1433 for a single IP address or a range of IP addresses.
6. Click Save . A server-level IP firewall rule is created for your current IP address opening port 1433 on the
server.
7. Click OK and then close the Firewall settings page.
Your IP address can now pass through the IP firewall. You can now connect to your database using SQL Server
Management Studio or another tool of your choice. Be sure to use the server admin account you created
previously.
IMPORTANT
By default, access through the SQL Database IP firewall is enabled for all Azure services. Click OFF on this page to disable
access for all Azure services.
Server name : The fully qualified server name, for example yourserver.database.windows.net.
Login : The server admin account. This is the account that you specified when you created the server.
Password : The password for your server admin account. This is the password that you specified when you created the server.
3. Click Options in the Connect to server dialog box. In the Connect to database section, enter
yourDatabase to connect to this database.
NOTE
You can also use the table designer in SQL Server Management Studio to create and design your tables.
1. In Object Explorer , right-click yourDatabase and select New Query . A blank query window opens that
is connected to your database.
2. In the query window, execute the following query to create four tables in your database:
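The tutorial's full table-creation script isn't reproduced here. The following is a minimal sketch of the four
tables, reconstructed from the columns that the later queries reference; the official script may use different
data types and additional columns or constraints.
-- Minimal sketch of the four tables (Person, Student, Course, Credit).
-- Column names come from the queries in the Query data section; data types are assumptions.
CREATE TABLE Person
(
    PersonId INT IDENTITY PRIMARY KEY,
    FirstName NVARCHAR(128) NOT NULL,
    LastName NVARCHAR(128) NOT NULL
);

CREATE TABLE Student
(
    StudentId INT IDENTITY PRIMARY KEY,
    PersonId INT REFERENCES Person (PersonId)
);

CREATE TABLE Course
(
    CourseId INT IDENTITY PRIMARY KEY,
    Name NVARCHAR(50) NOT NULL,
    Teacher NVARCHAR(256) NOT NULL
);

CREATE TABLE Credit
(
    StudentId INT REFERENCES Student (StudentId),
    CourseId INT REFERENCES Course (CourseId),
    Grade DECIMAL(5,2) CHECK (Grade <= 100.0)
);
GO
After the tables exist, the tutorial bulk loads them with BCP before moving on to the queries below.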
You have now loaded sample data into the tables you created earlier.
Query data
Execute the following queries to retrieve information from the database tables. See Write SQL queries to learn
more about writing SQL queries. The first query joins all four tables to find the students taught by 'Dominick
Pope' who have a grade higher than 75%. The second query joins all four tables and finds the courses in which
'Noe Coleman' has ever enrolled.
1. In a SQL Server Management Studio query window, execute the following query:
-- Find the students taught by Dominick Pope who have a grade higher than 75%
SELECT person.FirstName, person.LastName, course.Name, credit.Grade
FROM Person AS person
INNER JOIN Student AS student ON person.PersonId = student.PersonId
INNER JOIN Credit AS credit ON student.StudentId = credit.StudentId
INNER JOIN Course AS course ON credit.CourseId = course.courseId
WHERE course.Teacher = 'Dominick Pope'
AND Grade > 75
-- Find all the courses in which Noe Coleman has ever enrolled
SELECT course.Name, course.Teacher, credit.Grade
FROM Course AS course
INNER JOIN Credit AS credit ON credit.CourseId = course.CourseId
INNER JOIN Student AS student ON student.StudentId = credit.StudentId
INNER JOIN Person AS person ON person.PersonId = student.PersonId
WHERE person.FirstName = 'Noe'
AND person.LastName = 'Coleman'
Next steps
In this tutorial, you learned many basic database tasks. You learned how to:
Create a database using the Azure portal
Set up a server-level IP firewall rule using the Azure portal
Connect to the database with SSMS
Create tables with SSMS
Bulk load data with BCP
Query data with SSMS
Advance to the next tutorial to learn about designing a database using Visual Studio and C#.
Design a relational database within Azure SQL Database C# and ADO.NET
Tutorial: Design a relational database in Azure SQL
Database C# and ADO.NET
TIP
This free Learn module shows you how to Develop and configure an ASP.NET application that queries an Azure SQL
Database, including the creation of a simple database.
Prerequisites
An installation of Visual Studio 2019 or later.
4. Click Server to use an existing server or create and configure a new server. Either select an existing
server or click Create a new server and fill out the New server form with the following information:
Server name : Any globally unique name. For valid server names, see Naming rules and restrictions.
Server admin login : Any valid name. For valid login names, see Database identifiers.
5. Click Select .
6. Click Pricing tier to specify the service tier, the number of DTUs or vCores, and the amount of storage.
You may explore the options for the number of DTUs/vCores and storage that is available to you for each
service tier.
After selecting the service tier, the number of DTUs or vCores, and the amount of storage, click Apply .
7. Enter a Collation for the blank database (for this tutorial, use the default value). For more information
about collations, see Collations
8. Now that you've completed the SQL Database form, click Create to provision the database. This step
may take a few minutes.
9. On the toolbar, click Notifications to monitor the deployment process.
Create a server-level IP firewall rule
SQL Database creates an IP firewall at the server-level. This firewall prevents external applications and tools
from connecting to the server and any databases on the server unless a firewall rule allows their IP through the
firewall. To enable external connectivity to your database, you must first add an IP firewall rule for your IP
address (or IP address range). Follow these steps to create a server-level IP firewall rule.
IMPORTANT
SQL Database communicates over port 1433. If you are trying to connect to this service from within a corporate
network, outbound traffic over port 1433 may not be allowed by your network's firewall. If so, you cannot connect to
your database unless your administrator opens port 1433.
1. After the deployment is complete, click SQL databases from the left-hand menu and then click
yourDatabase on the SQL databases page. The overview page for your database opens, showing
the fully qualified Server name (such as yourserver.database.windows.net) and providing options for
further configuration.
2. Copy this fully qualified server name to use when connecting to your server and its databases from SQL
Server Management Studio.
3. Click Set server firewall on the toolbar. The Firewall settings page for the server opens.
4. Click Add client IP on the toolbar to add your current IP address to a new IP firewall rule. An IP firewall
rule can open port 1433 for a single IP address or a range of IP addresses.
5. Click Save . A server-level IP firewall rule is created for your current IP address opening port 1433 on the
server.
6. Click OK and then close the Firewall settings page.
Your IP address can now pass through the IP firewall. You can now connect to your database using SQL Server
Management Studio or another tool of your choice. Be sure to use the server admin account you created
previously.
IMPORTANT
By default, access through the SQL Database IP firewall is enabled for all Azure services. Click OFF on this page to disable
access for all Azure services.
C# program example
The next sections of this article present a C# program that uses ADO.NET to send Transact-SQL (T-SQL)
statements to SQL Database. The C# program demonstrates the following actions:
Connect to SQL Database using ADO.NET
Methods that return T-SQL statements
Create tables
Populate tables with data
Update, delete, and select data
Submit T-SQL to the database
Entity Relationship Diagram (ERD)
The CREATE TABLE statements use the REFERENCES keyword to create a foreign key (FK) relationship
between two tables. If you're using tempdb, comment out the REFERENCES keyword by adding a pair of
leading dashes (--).
The ERD displays the relationship between the two tables. The values in the tabEmployee.DepartmentCode
child column are limited to values from the tabDepartment.DepartmentCode parent column.
NOTE
You have the option of editing the T-SQL to add a leading # to the table names, which creates them as temporary
tables in tempdb. This is useful for demonstration purposes, when no test database is available. Foreign key
references aren't enforced on temporary tables, and temporary tables are deleted automatically when the connection
closes after the program finishes running.
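The program builds these statements in its helper methods rather than listing them inline. As a rough
illustration of the REFERENCES relationship described above (column types here are assumptions, not the
program's exact definitions):
-- Parent table: the list of valid department codes.
CREATE TABLE tabDepartment
(
    DepartmentCode NCHAR(4) NOT NULL PRIMARY KEY,
    DepartmentName NVARCHAR(128) NOT NULL
);

-- Child table: DepartmentCode is limited to values in tabDepartment.
-- Comment out the REFERENCES clause if you create these as #temporary tables.
CREATE TABLE tabEmployee
(
    EmployeeGuid UNIQUEIDENTIFIER NOT NULL DEFAULT NEWID() PRIMARY KEY,
    EmployeeName NVARCHAR(128) NOT NULL,
    EmployeeLevel INT NOT NULL,
    DepartmentCode NCHAR(4) NULL
        REFERENCES tabDepartment (DepartmentCode)
);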
=================================
T-SQL to 3 - Inserts...
8 = rows affected.
=================================
T-SQL to 4 - Update-Join...
2 = rows affected.
=================================
T-SQL to 5 - Delete-Join...
2 = rows affected.
=================================
Now, SelectEmployees (6)...
8ddeb8f5-9584-4afe-b7ef-d6bdca02bd35 , Alison , 20 , acct , Accounting
9ce11981-e674-42f7-928b-6cc004079b03 , Barbara , 17 , hres , Human Resources
315f5230-ec94-4edd-9b1c-dd45fbb61ee7 , Carol , 22 , acct , Accounting
fcf4840a-8be3-43f7-a319-52304bf0f48d , Elle , 15 , NULL , NULL
View the report output here, then press any key to end the program...
using System;
using System.Data.SqlClient;

namespace csharp_db_test
{
    class Program
    {
        static void Main(string[] args)
        {
            try
            {
                var cb = new SqlConnectionStringBuilder();
                cb.DataSource = "your_server.database.windows.net";
                cb.UserID = "your_user";
                cb.Password = "your_password";
                cb.InitialCatalog = "your_database";

                using (var connection = new SqlConnection(cb.ConnectionString))
                {
                    connection.Open();

                    // The full program calls its other Submit_* helper methods here to
                    // create the tables, insert data, and run the update and delete joins.
                    Submit_6_Tsql_SelectEmployees(connection);
                }
            }
            catch (SqlException e)
            {
                Console.WriteLine(e.ToString());
            }
            Console.WriteLine("View the report output here, then press any key to end the program...");
            Console.ReadKey();
        }

        // Helper methods such as Submit_6_Tsql_SelectEmployees are defined in the full program listing.
    }
}
Next steps
In this tutorial, you learned basic database tasks such as create a database and tables, connect to the database,
load data, and run queries. You learned how to:
Create a database using the Azure portal
Set up a server-level IP firewall rule using the Azure portal
Connect to the database with ADO.NET and Visual Studio
Create tables with ADO.NET
Insert, update, and delete data with ADO.NET
Query data with ADO.NET
Advance to the next tutorial to learn about data migration.
Migrate SQL Server to Azure SQL Database offline using DMS
Tutorial: Add an Azure SQL Database to an auto-
failover group
Prerequisites
Azure portal
PowerShell
Azure CLI
1 - Create a database
In this step, you create a resource group, server, single database, and server-level IP firewall rule for access to
the server.
In this step, you create a logical SQL server and a single database that uses AdventureWorksLT sample data. You
can create the database by using Azure portal menus and screens, or by using an Azure CLI or PowerShell script
in the Azure Cloud Shell.
All the methods include setting up a server-level firewall rule to allow the public IP address of the computer
you're using to access the server. For more information about creating server-level firewall rules, see Create a
server-level firewall. You can also set database-level firewall rules. See Create a database-level firewall rule.
Portal
PowerShell
Azure CLI
To create a resource group, server, and single database in the Azure portal:
1. Sign in to the portal.
2. From the Search bar, search for and select Azure SQL .
3. On the Azure SQL page, select Add .
4. On the Select SQL deployment option page, select the SQL databases tile, with Single database
under Resource type . You can view more information about the different databases by selecting Show
details .
5. Select Create .
6. On the Basics tab of the Create SQL database form, under Project details , select the correct Azure
Subscription if it isn't already selected.
7. Under Resource group , select Create new , enter myResourceGroup, and select OK .
8. Under Database details , for Database name enter mySampleDatabase.
9. For Server , select Create new , and fill out the New server form as follows:
Server name : Enter mysqlserver, and add some characters for uniqueness.
Server admin login : Enter AzureAdmin.
Password : Enter a password that meets requirements, and enter it again in the Confirm password
field.
Location : Drop down and choose a location, such as (US) West US .
Select OK .
Record the server admin login and password so you can sign in to the server and its databases. If you
forget your login or password, you can get the login name or reset the password on the SQL server
page after database creation. To open the SQL server page, select the server name on the database
Overview page.
10. Under Compute + storage , if you want to reconfigure the defaults, select Configure database .
On the Configure page, you can optionally:
Change the Compute tier from Provisioned to Serverless .
Review and change the settings for vCores and Data max size .
Select Change configuration to change hardware configuration.
After making any changes, select Apply .
11. Select Next: Networking at the bottom of the page.
12. On the Networking tab, under Connectivity method , select Public endpoint .
13. Under Firewall rules , set Add current client IP address to Yes .
14. Select Next: Additional settings at the bottom of the page.
For more information about firewall settings, see Allow Azure services and resources to access this server
and Add a private endpoint.
15. On the Additional settings tab, in the Data source section, for Use existing data , select Sample .
16. Optionally, enable Microsoft Defender for SQL.
17. Optionally, set the maintenance window so planned maintenance is performed at the best time for your
database.
18. Select Review + create at the bottom of the page.
19. After reviewing settings, select Create .
Azure portal
PowerShell
Azure CLI
Create your failover group and add your database to it using the Azure portal.
1. Select Azure SQL in the left-hand menu of the Azure portal. If Azure SQL isn't in the list, select All
services , then type Azure SQL in the search box. (Optional) Select the star next to Azure SQL to favorite
it and add it as an item in the left-hand navigation.
2. Select the database created in section 1, such as mySampleDatabase .
3. Failover groups can be configured at the server level. Select the name of the server under Server name
to open the settings for the server.
4. Select Failover groups under the Settings pane, and then select Add group to create a new failover
group.
5. On the Failover Group page, enter or select the following values, and then select Create :
Failover group name : Type in a unique failover group name, such as failovergrouptutorial .
Secondary server : Select the option to configure required settings and then choose to Create a
new server . Alternatively, you can choose an already-existing server as the secondary server.
After entering the following values, select Select .
Server name : Type in a unique name for the secondary server, such as mysqlsecondary .
Server admin login : Type azureuser .
Password : Type a complex password that meets password requirements.
Location : Choose a location from the drop-down, such as East US . This location can't be the
same location as your primary server.
NOTE
The server login and firewall settings must match those of your primary server.
Databases within the group : Once a secondary server is selected, this option becomes
unlocked. Select it to Select databases to add and then choose the database you created in
section 1. Adding the database to the failover group will automatically start the geo-replication
process.
3 - Test failover
In this step, you will fail your failover group over to the secondary server, and then fail back using the Azure
portal.
Azure portal
PowerShell
Azure CLI
Clean up resources
Clean up resources by deleting the resource group.
Azure portal
PowerShell
Azure CLI
IMPORTANT
If you want to keep the resource group but delete the secondary database, remove it from the failover group before
deleting it. Deleting a secondary database before it is removed from the failover group can cause unpredictable behavior.
Full scripts
PowerShell
Azure CLI
Azure portal
Next steps
In this tutorial, you added a database in Azure SQL Database to a failover group, and tested failover. You learned
how to:
Create a database in Azure SQL Database
Create a failover group for the database between two servers.
Test failover.
Advance to the next tutorial on how to add your elastic pool to a failover group.
Tutorial: Add an Azure SQL Database elastic pool to a failover group
Tutorial: Add an Azure SQL Database elastic pool to
a failover group
Prerequisites
Azure portal
PowerShell
Azure CLI
To create a resource group, server, and single database in the Azure portal:
1. Sign in to the portal.
2. From the Search bar, search for and select Azure SQL .
3. On the Azure SQL page, select Add .
4. On the Select SQL deployment option page, select the SQL databases tile, with Single database
under Resource type . You can view more information about the different databases by selecting Show
details .
5. Select Create .
6. On the Basics tab of the Create SQL database form, under Project details , select the correct Azure
Subscription if it isn't already selected.
7. Under Resource group , select Create new , enter myResourceGroup, and select OK .
8. Under Database details , for Database name enter mySampleDatabase.
9. For Server , select Create new , and fill out the New server form as follows:
Server name : Enter mysqlserver, and add some characters for uniqueness.
Server admin login : Enter AzureAdmin.
Password : Enter a password that meets requirements, and enter it again in the Confirm password
field.
Location : Drop down and choose a location, such as (US) West US .
Select OK .
Record the server admin login and password so you can sign in to the server and its databases. If you
forget your login or password, you can get the login name or reset the password on the SQL server
page after database creation. To open the SQL server page, select the server name on the database
Overview page.
10. Under Compute + storage , if you want to reconfigure the defaults, select Configure database .
On the Configure page, you can optionally:
Change the Compute tier from Provisioned to Serverless .
Review and change the settings for vCores and Data max size .
Select Change configuration to change hardware configuration.
After making any changes, select Apply .
11. Select Next: Networking at the bottom of the page.
12. On the Networking tab, under Connectivity method , select Public endpoint .
13. Under Firewall rules , set Add current client IP address to Yes .
14. Select Next: Additional settings at the bottom of the page.
For more information about firewall settings, see Allow Azure services and resources to access this server
and Add a private endpoint.
15. On the Additional settings tab, in the Data source section, for Use existing data , select Sample .
16. Optionally, enable Microsoft Defender for SQL.
17. Optionally, set the maintenance window so planned maintenance is performed at the best time for your
database.
18. Select Review + create at the bottom of the page.
19. After reviewing settings, select Create .
Azure portal
PowerShell
Azure CLI
6. Select Review + create to review your elastic pool settings and then select Create to create your elastic
pool.
4. Select Failover groups under the Settings pane, and then select Add group to create a new failover
group.
5. On the Failover Group page, enter or select the following values, and then select Create :
Failover group name : Type in a unique failover group name, such as failovergrouptutorial .
Secondary server : Select the option to configure required settings and then choose to Create a
new server . Alternatively, you can choose an already-existing server as the secondary server.
After entering the following values for your new secondary server, select Select .
Server name : Type in a unique name for the secondary server, such as mysqlsecondary .
Server admin login : Type azureuser .
Password : Type a complex password that meets password requirements.
Location : Choose a location from the drop-down, such as East US . This location can't be the
same location as your primary server.
NOTE
The server login and firewall settings must match those of your primary server.
6. Select Databases within the group then select the elastic pool you created in section 2. A warning
should appear, prompting you to create an elastic pool on the secondary server. Select the warning, and
then select OK to create the elastic pool on the secondary server.
7. Select Select to apply your elastic pool settings to the failover group, and then select Create to create
your failover group. Adding the elastic pool to the failover group will automatically start the geo-
replication process.
4 - Test failover
In this step, you'll fail your failover group over to the secondary server, and then fail back using the Azure portal.
Azure portal
PowerShell
Azure CLI
4. Select Failover groups under the Settings pane and then choose the failover group you created in
section 2.
5. Review which server is primary, and which server is secondary.
6. Select Failover from the task pane to fail over your failover group containing your elastic pool.
7. Select Yes on the warning that notifies you that TDS sessions will be disconnected.
8. Review which server is primary and which server is secondary. If failover succeeded, the two servers should
have swapped roles.
9. Select Failover again to fail the failover group back to the original settings.
Clean up resources
Clean up resources by deleting the resource group.
Azure portal
PowerShell
Azure CLI
IMPORTANT
If you want to keep the resource group but delete the secondary database, remove it from the failover group before
deleting it. Deleting a secondary database before it is removed from the failover group can cause unpredictable behavior.
Full script
PowerShell
Azure CLI
Azure portal
Next steps
In this tutorial, you added an Azure SQL Database elastic pool to a failover group, and tested failover. You
learned how to:
Create a single database.
Add the database into an elastic pool.
Create a failover group for two elastic pools between two servers.
Test failover.
Advance to the next tutorial on how to migrate using DMS.
Tutorial: Migrate SQL Server to a pooled database using DMS
Configure and manage Azure SQL Database
security for geo-restore or failover
NOTE
It is also possible to use Azure Active Directory (AAD) logins to manage your databases. For more information, see Azure
SQL logins and users.
Setting up logins on the target server involves three steps outlined below:
1. Determine logins with access to the primary database
The first step of the process is to determine which logins must be duplicated on the target server. This is
accomplished with a pair of SELECT statements, one in the logical master database on the source server and one
in the primary database itself.
Only the server admin or a member of the LoginManager server role can determine the logins on the source
server with the following SELECT statement.
Only a member of the db_owner database role, the dbo user, or server admin, can determine all of the database
user principals in the primary database.
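The two SELECT statements themselves aren't reproduced above. A minimal sketch of each, assuming you
only need the principal names and SIDs, looks like this:
-- Run in the master database on the source server: list SQL logins and their SIDs.
SELECT [name], [sid]
FROM sys.sql_logins
WHERE [type_desc] = 'SQL_LOGIN';

-- Run in the primary database: list the database user principals and the SIDs they map to.
SELECT [name], [sid]
FROM sys.database_principals
WHERE [type_desc] = 'SQL_USER';
The SIDs returned by the second query are what you match when you re-create the corresponding logins on the
target server.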
NOTE
The INFORMATION_SCHEMA and sys users have NULL SIDs, and the guest SID is 0x00 . The dbo SID may start with
0x01060000000001648000000000048454, if the database creator was the server admin instead of a member of
DbManager .
DISABLE doesn’t change the password, so you can always enable it if needed.
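For example, a duplicated login on the target server can be disabled until it's needed and re-enabled later
without touching its password (the login name below is a placeholder):
-- Disable the login on the target server until failover or geo-restore makes it needed.
ALTER LOGIN [app_login] DISABLE;

-- Re-enable it later; the original password remains valid.
ALTER LOGIN [app_login] ENABLE;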
Next steps
For more information on managing database access and logins, see SQL Database security: Manage
database access and login security.
For more information on contained database users, see Contained Database Users - Making Your Database
Portable.
To learn about active geo-replication, see Active geo-replication.
To learn about auto-failover groups, see Auto-failover groups.
For information about using geo-restore, see geo-restore
Tutorial: Implement a geo-distributed database
(Azure SQL Database)
Prerequisites
NOTE
This article uses the Azure Az PowerShell module, which is the recommended PowerShell module for interacting with
Azure. To get started with the Az PowerShell module, see Install Azure PowerShell. To learn how to migrate to the Az
PowerShell module, see Migrate Azure PowerShell from AzureRM to Az.
IMPORTANT
The PowerShell Azure Resource Manager module is still supported by Azure SQL Database, but all future development is
for the Az.Sql module. For these cmdlets, see AzureRM.Sql. The arguments for the commands in the Az module and in the
AzureRm modules are substantially identical.
To complete the tutorial, make sure you've installed the following items:
Azure PowerShell
A single database in Azure SQL Database. To create one, use:
The Azure portal
The Azure CLI
PowerShell
NOTE
The tutorial uses the AdventureWorksLT sample database.
Java and Maven. See Build an app using SQL Server, highlight Java, select your environment, and then
follow the steps.
IMPORTANT
Be sure to set up firewall rules to use the public IP address of the computer on which you're performing the steps in this
tutorial. Database-level firewall rules will replicate automatically to the secondary server.
For information see Create a database-level firewall rule or to determine the IP address used for the server-level firewall
rule for your computer see Create a server-level firewall.
IMPORTANT
This sample requires Azure PowerShell Az 1.0 or later. Run Get-Module -ListAvailable Az to see which versions are
installed. If you need to install, see Install Azure PowerShell module.
Run Connect-AzAccount to sign in to Azure.
$admin = "<adminName>"
$password = "<password>"
$resourceGroup = "<resourceGroupName>"
$location = "<resourceGroupLocation>"
$server = "<serverName>"
$database = "<databaseName>"
$drLocation = "<disasterRecoveryLocation>"
$drServer = "<disasterRecoveryServerName>"
$failoverGroup = "<globallyUniqueFailoverGroupName>"
Geo-replication settings can also be changed in the Azure portal, by selecting your database, then Settings >
Geo-Replication .
Run the sample project
1. In the console, create a Maven project with the following command:
cd SqlDbSample
4. Using your favorite editor, open the pom.xml file in your project folder.
5. Add the Microsoft JDBC Driver for SQL Server dependency by adding the following dependency section.
The dependency must be pasted within the larger dependencies section.
<dependency>
<groupId>com.microsoft.sqlserver</groupId>
<artifactId>mssql-jdbc</artifactId>
<version>6.1.0.jre8</version>
</dependency>
6. Specify the Java version by adding the properties section after the dependencies section:
<properties>
<maven.compiler.source>1.8</maven.compiler.source>
<maven.compiler.target>1.8</maven.compiler.target>
</properties>
7. Support manifest files by adding the build section after the properties section:
<build>
<plugins>
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-jar-plugin</artifactId>
<version>3.0.0</version>
<configuration>
<archive>
<manifest>
<mainClass>com.sqldbsamples.App</mainClass>
</manifest>
</archive>
</configuration>
</plugin>
</plugins>
</build>
package com.sqldbsamples;
import java.sql.Connection;
import java.sql.Statement;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.Timestamp;
import java.sql.DriverManager;
import java.util.Date;
import java.util.concurrent.TimeUnit;
private static final String FAILOVER_GROUP_NAME = "<your failover group name>"; // add failover
group name
private static final String DB_NAME = "<your database>"; // add database name
private static final String USER = "<your admin>"; // add database user
private static final String PASSWORD = "<your password>"; // add database password
"sqlserver://%s.secondary.database.windows.net:1433;database=%s;user=%s;password=%s;encrypt=true;" +
"hostNameInCertificate=*.database.windows.net;loginTimeout=30;",
FAILOVER_GROUP_NAME, DB_NAME, USER, PASSWORD);
try {
for(int i = 1; i < 1000; i++) {
// loop will run for about 1 hour
System.out.print(i + ": insert on primary " +
(insertData((highWaterMark + i)) ? "successful" : "failed"));
TimeUnit.SECONDS.sleep(1);
System.out.print(", read from secondary " +
(selectData((highWaterMark + i)) ? "successful" : "failed") + "\n");
TimeUnit.SECONDS.sleep(3);
}
} catch(Exception e) {
e.printStackTrace();
}
}
mvn package
12. Start the application. It will run for about 1 hour until stopped manually, giving you time to run the
failover test.
#######################################
## GEO DISTRIBUTED DATABASE TUTORIAL ##
#######################################
Test failover
Run the following scripts to simulate a failover and observe the application results. Notice how some inserts
and selects fail during the database failover.
PowerShell
The Azure CLI
You can check the role of the disaster recovery server during the test with the following command:
To test a failover:
1. Start a manual failover of the failover group:
Next steps
In this tutorial, you configured a database in Azure SQL Database and an application for failover to a remote
region and tested a failover plan. You learned how to:
Create a geo-replication failover group
Run a Java application to query a database in SQL Database
Test failover
Advance to the next tutorial on how to add an instance of Azure SQL Managed Instance to a failover group:
Add an instance of Azure SQL Managed Instance to a failover group
Tutorial: Configure active geo-replication and
failover (Azure SQL Database)
Prerequisites
Portal
Azure CLI
To configure active geo-replication by using the Azure portal, you need the following resource:
A database in Azure SQL Database: The primary database that you want to replicate to a different
geographical region.
NOTE
When using Azure portal, you can only create a secondary database within the same subscription as the primary. If a
secondary database is required to be in a different subscription, use Create Database REST API or ALTER DATABASE
Transact-SQL API.
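For reference, a rough sketch of the T-SQL form, run in the master database of the primary server (the server
and database names below are placeholders; check the ALTER DATABASE documentation for the exact options
your scenario needs):
-- Run in the master database of the primary server.
-- Creates a readable geo-secondary of MyDatabase on the partner server.
ALTER DATABASE [MyDatabase]
    ADD SECONDARY ON SERVER [mypartnerserver]
    WITH (ALLOW_CONNECTIONS = ALL);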
NOTE
If the partner database already exists (for example, as a result of terminating a previous geo-replication relationship), the
command fails.
Portal
Azure CLI
1. In the Azure portal, browse to the database that you want to set up for geo-replication.
2. On the SQL Database page, select your database, scroll to Data management , select Replicas , and then
select Create replica .
3. Select or create the server for the secondary database, and configure the Compute + storage options if
necessary. You can select any region for your secondary server, but we recommend the paired region.
Optionally, you can add a secondary database to an elastic pool. To create the secondary database in a
pool, select Yes next to Want to use SQL elastic pool? and select a pool on the target server. A pool
must already exist on the target server. This workflow doesn't create a pool.
4. Click Review + create , review the information, and then click Create .
5. The secondary database is created and the deployment process begins.
6. When the deployment is complete, the secondary database displays its status.
7. Return to the primary database page, and then select Replicas . Your secondary database is listed under
Geo replicas .
Initiate a failover
The secondary database can be switched to become the primary.
Portal
Azure CLI
1. In the Azure portal, browse to the primary database in the geo-replication partnership.
2. Scroll to Data management , and then select Replicas .
3. In the Geo replicas list, select the database you want to become the new primary, select the ellipsis, and
then select Forced failover .
Portal
Azure CLI
1. In the Azure portal, browse to the primary database in the geo-replication partnership.
2. Select Replicas .
3. In the Geo replicas list, select the database you want to remove from the geo-replication partnership,
select the ellipsis, and then select Stop replication .
4. A confirmation window opens. Click Yes to remove the database from the geo-replication partnership.
(The database becomes a standalone read-write database that is no longer part of any replication.)
Next steps
To learn more about active geo-replication, see active geo-replication.
To learn about auto-failover groups, see Auto-failover groups
For a business continuity overview and scenarios, see Business continuity overview.
Tutorial: Getting started with Always Encrypted with
secure enclaves in Azure SQL Database
Prerequisites
An active Azure subscription. If you don't have one, create a free account. You need to be a member of the
Contributor role or the Owner role for the subscription to be able to create resources and configure an
attestation policy.
SQL Server Management Studio (SSMS), version 18.9.1 or later. See Download SQL Server Management
Studio (SSMS) for information on how to download SSMS.
PowerShell requirements
NOTE
The prerequisites listed in this section apply only if you choose to use PowerShell for some of the steps in this tutorial. If
you plan to use Azure portal instead, you can skip this section.
Make sure the following PowerShell modules are installed on your machine.
1. Az version 6.5.0 or later. For details on how to install the Az PowerShell module, see Install the Azure Az
PowerShell module. To determine the version the Az module installed on your machine, run the following
command from a PowerShell session.
Get-InstalledModule -Name Az
The PowerShell Gallery has deprecated Transport Layer Security (TLS) versions 1.0 and 1.1. TLS 1.2 or a later
version is recommended. You may receive the following errors if you are using a TLS version lower than 1.2:
WARNING: Unable to resolve package source 'https://www.powershellgallery.com/api/v2'
PackageManagement\Install-Package: No match was found for the specified search criteria and module name.
To continue to interact with the PowerShell Gallery, run the following command before the Install-Module
commands
[Net.ServicePointManager]::SecurityProtocol = [Net.SecurityProtocolType]::Tls12
4. On the Basics tab of the Create SQL Database form, under Project details , select the desired Azure
Subscription .
5. For Resource group , select Create new , enter a name for your resource group, and select OK .
6. For Database name enter ContosoHR .
7. For Server , select Create new , and fill out the New server form with the following values:
Server name : Enter mysqlserver, and add some characters for uniqueness. We can't provide an exact
server name to use because server names must be globally unique for all servers in Azure, not just
unique within a subscription. So enter something like mysqlserver135, and the portal lets you know if
it is available or not.
Server admin login : Enter an admin login name, for example: azureuser.
Password : Enter a password that meets requirements, and enter it again in the Confirm password
field.
Location : Select a location from the dropdown list.
IMPORTANT
You need to select a location (an Azure region) that supports both the DC-series hardware and Microsoft
Azure Attestation. For the list of regions supporting DC-series, see DC-series availability. Here is the
regional availability of Microsoft Azure Attestation.
Select OK .
8. Leave Want to use SQL elastic pool set to No .
9. Under Compute + storage , select Configure database , and click Change configuration .
10. Select the DC-series hardware configuration, and then select OK .
Portal
PowerShell
7. Select Policy on the resource menu on the left side of the window or on the lower pane.
8. Set Attestation Type to SGX-IntelSDK .
9. Select Configure on the upper menu.
10. Set Policy Format to Text . Leave Policy options set to Enter policy .
11. In the Policy text field, replace the default policy with the below policy. For information about the below
policy, see Create and configure an attestation provider.
version= 1.0;
authorizationrules
{
[ type=="x-ms-sgx-is-debuggable", value==false ]
&& [ type=="x-ms-sgx-product-id", value==4639 ]
&& [ type=="x-ms-sgx-svn", value>= 0 ]
&& [ type=="x-ms-sgx-mrsigner",
value=="e31c9e505f37a58de09335075fc8591254313eb20bb1a27e5443cc450b6e33e5"]
=> permit();
};
e. Click Connect .
2. Create a new table, named Employees .
CREATE SCHEMA [HR];
GO
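The full table definition isn't shown above. A minimal sketch, assuming the SSN and Salary columns that the
tutorial encrypts later, could look like this (column sizes and types are assumptions):
-- Minimal sketch of the Employees table; SSN and Salary are encrypted in a later step.
CREATE TABLE [HR].[Employees]
(
    [EmployeeID] INT IDENTITY(1,1) PRIMARY KEY,
    [SSN] CHAR(11) NOT NULL,
    [FirstName] NVARCHAR(50) NOT NULL,
    [LastName] NVARCHAR(50) NOT NULL,
    [Salary] MONEY NOT NULL
);
GO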
3. To verify the SSN and Salary columns are now encrypted, open a new query window in the SSMS
instance without Always Encrypted enabled for the database connection and execute the below
statement. The query window should return encrypted values in the SSN and Salary columns. If you
execute the same query using the SSMS instance with Always Encrypted enabled, you should see the
data decrypted.
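The statement itself isn't reproduced above; a simple check along these lines works, assuming the table
sketched earlier:
-- Without Always Encrypted enabled on the connection, SSN and Salary return ciphertext.
SELECT [EmployeeID], [SSN], [FirstName], [LastName], [Salary]
FROM [HR].[Employees];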
3. Try the same query again in the SSMS instance that doesn't have Always Encrypted enabled. A failure
should occur.
Next steps
After completing this tutorial, you can go to one of the following tutorials:
Tutorial: Develop a .NET application using Always Encrypted with secure enclaves
Tutorial: Develop a .NET Framework application using Always Encrypted with secure enclaves
Tutorial: Creating and using indexes on enclave-enabled columns using randomized encryption
See also
Configure and use Always Encrypted with secure enclaves
Tutorial: Secure a database in Azure SQL Database
NOTE
Azure SQL Managed Instance is secured using network security rules and private endpoints as described in Azure SQL
Managed Instance and connectivity architecture.
To learn more, see the Azure SQL Database security overview and capabilities articles.
TIP
This free Learn module shows you how to Secure your database in Azure SQL Database.
Prerequisites
To complete the tutorial, make sure you have the following prerequisites:
SQL Server Management Studio
A server and a single database
Create them with the Azure portal, CLI, or PowerShell
If you don't have an Azure subscription, create a free account before you begin.
NOTE
SQL Database communicates over port 1433. If you're trying to connect from within a corporate network, outbound
traffic over port 1433 may not be allowed by your network's firewall. If so, you can't connect to the server unless your
administrator opens port 1433.
NOTE
Be sure to copy your fully qualified server name (such as yourserver.database.windows.net) for use later in the
tutorial.
2. On the Overview page, select Set server firewall . The Firewall settings page for the server opens.
a. Select Add client IP on the toolbar to add your current IP address to a new firewall rule. The rule
can open port 1433 for a single IP address or a range of IP addresses. Select Save .
NOTE
You can also create a server-level firewall rule in SSMS by using the sp_set_firewall_rule command, though you must be
connected to the master database.
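As an illustration, the rule name and IP addresses below are placeholders:
-- Run while connected to the master database; creates or updates a server-level firewall rule.
EXECUTE sp_set_firewall_rule
    @name = N'AllowMyClientIP',
    @start_ip_address = '203.0.113.10',
    @end_ip_address = '203.0.113.10';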
2. On the Add admin page, search for and select the AD user or group, and choose Select . All members and
groups of your Active Directory are listed, and grayed-out entries are not supported as Azure AD
administrators. See Azure AD features and limitations.
IMPORTANT
Azure role-based access control (Azure RBAC) only applies to the portal and isn't propagated to SQL Server.
NOTE
Create non-administrator accounts at the database level, unless they need to execute administrator tasks like creating
new users.
Azure AD authentication
Azure Active Directory authentication requires that database users are created as contained. A contained
database user maps to an identity in the Azure AD directory associated with the database and has no login in
the master database. The Azure AD identity can either be for an individual user or a group. For more
information, see Contained database users, make your database portable and review the Azure AD tutorial on
how to authenticate using Azure AD.
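A minimal sketch of creating contained users for Azure AD identities, with placeholder user and group names,
looks like this:
-- Create a contained database user mapped to an Azure AD user, and one mapped to an Azure AD group.
CREATE USER [nina@contoso.com] FROM EXTERNAL PROVIDER;
CREATE USER [HR Admins] FROM EXTERNAL PROVIDER;

-- Grant read access in this database to the new user.
ALTER ROLE db_datareader ADD MEMBER [nina@contoso.com];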
NOTE
Database users (excluding administrators) cannot be created using the Azure portal. Azure roles do not propagate to SQL
servers, databases, or data warehouses. They are only used to manage Azure resources and do not apply to database
permissions.
For example, the SQL Server Contributor role does not grant access to connect to a database or data warehouse. This
permission must be granted within the database using T-SQL statements.
IMPORTANT
Special characters like colon : or ampersand & are not supported in user names in the T-SQL CREATE LOGIN and
CREATE USER statements.
NOTE
Azure AD users are marked in the database metadata with type E (EXTERNAL_USER) and type X (EXTERNAL_GROUPS)
for groups. For more information, see sys.database_principals.
NOTE
An example threat is SQL injection, a process where attackers inject malicious SQL into application inputs. An application
can then unknowingly execute the malicious SQL and allow attackers access to breach or modify data in the database.
If anomalous activities are detected, you receive an email with information on the event. This includes the nature
of the activity, database, server, event time, possible causes, and recommended actions to investigate and
mitigate the potential threat. If such an email is received, select the Azure SQL Auditing Log link to launch the
Azure portal and show relevant auditing records for the time of the event.
Auditing
The auditing feature tracks database events and writes events to an audit log in either Azure storage, Azure
Monitor logs, or to an event hub. Auditing helps maintain regulatory compliance, understand database activity,
and gain insight into discrepancies and anomalies that could indicate potential security violations.
To enable auditing:
1. In the Azure portal, select SQL databases from the left-hand menu, and select your database on the
SQL databases page.
2. In the Security section, select Auditing .
3. Under Auditing settings, set the following values:
a. Set Auditing to ON .
b. Select Audit log destination as any of the following:
Storage , an Azure storage account where event logs are saved and can be downloaded as
.xel files
TIP
Use the same storage account for all audited databases to get the most from auditing report
templates.
Log Analytics , which automatically stores events for query or further analysis
NOTE
A Log Analytics workspace is required to support advanced features such as analytics, custom
alert rules, and Excel or Power BI exports. Without a workspace, only the query editor is available.
Event Hub , which allows events to be routed for use in other applications
c. Select Save .
4. Now you can select View audit logs to view database events data.
IMPORTANT
See SQL Database auditing on how to further customize audit events using PowerShell or REST API.
NOTE
Some items considered customer content, such as table names, object names, and index names, may be transmitted in
log files for support and troubleshooting by Microsoft.
Next steps
In this tutorial, you've learned to improve the security of your database with just a few simple steps. You learned
how to:
Create server-level and database-level firewall rules
Configure an Azure Active Directory (AD) administrator
Manage user access with SQL authentication, Azure AD authentication, and secure connection strings
Enable security features, such as Microsoft Defender for SQL, auditing, data masking, and encryption
Advance to the next tutorial to learn how to implement geo-distribution.
Implement a geo-distributed database
Tutorial: Create Azure AD users using Azure AD
applications
Prerequisites
An existing Azure SQL Database deployment. We assume you have a working SQL Database for this tutorial.
Access to an already existing Azure Active Directory.
Az.Sql 2.9.0 module or higher is needed when using PowerShell to set up an individual Azure AD application
as Azure AD admin for Azure SQL. Ensure you are upgraded to the latest module.
If you used the New-AzSqlServer command with the parameter AssignIdentity for a new SQL server
creation in the past, you'll need to execute the Set-AzSqlServer command afterwards as a separate
command to enable this property in the Azure fabric.
3. Check the server identity was successfully assigned. Execute the following PowerShell command:
Replace <resource group> and <server name> with your resources. If your server name is
myserver.database.windows.net , replace <server name> with myserver .
Your output should show you PrincipalId , Type , and TenantId . The identity assigned is the
PrincipalId .
4. You can also check the identity by going to the Azure portal.
Under the Azure Active Directory resource, go to Enterprise applications . Type in the name of
your SQL logical server. You will see that it has an Object ID attached to the resource.
NOTE
This script must be executed by an Azure AD Global Administrator or a Privileged Roles Administrator .
You can assign the Directory Readers role to a group in Azure AD. The group owners can then add the managed
identity as a member of this group, which would bypass the need for a Global Administrator or
Privileged Roles Administrator to grant the Directory Readers role. For more information on this feature, see
Directory Readers role in Azure Active Directory for Azure SQL.
Replace <TenantId> with your TenantId gathered earlier.
Replace <server name> with your SQL logical server name. If your server name is
myserver.database.windows.net , replace <server name> with myserver .
# This script grants Azure "Directory Readers" permission to a Service Principal representing the Azure SQL
logical server
# It can be executed only by a "Global Administrator" or "Privileged Roles Administrator" type of user.
# To check if the "Directory Readers" permission was granted, execute this script again
Import-Module AzureAD
Connect-AzureAD -TenantId "<TenantId>" #Enter your actual TenantId
$AssignIdentityName = "<server name>" #Enter Azure SQL logical server name
NOTE
The output from the above script indicates whether the Directory Readers permission was granted to the identity. You can
rerun the script if you're unsure whether the permission was granted.
For a similar approach on how to set the Directory Readers permission for SQL Managed Instance, see
Provision Azure AD admin (SQL Managed Instance).
After the app registration is created, the Application ID value is generated and displayed.
2. You'll also need to create a client secret for signing in. Follow the guide here to upload a certificate or
create a secret for signing in.
3. Record the following from your application registration. It should be available from your Overview pane:
Application ID
Tenant ID - This should be the same as before
In this tutorial, we'll be using AppSP as our main service principal, and myapp as the second service principal
user that will be created in Azure SQL by AppSP. You'll need to create two applications, AppSP and myapp.
For more information on how to create an Azure AD application, see the article How to: Use the portal to create
an Azure AD application and service principal that can access resources.
IMPORTANT
Only Azure AD users can create other Azure AD users in Azure SQL Database. No SQL user with SQL authentication,
including a server admin, can create an Azure AD user. The Azure AD admin is the only user who can initially create
Azure AD users in SQL Database. After the Azure AD admin has created other users, any Azure AD user with proper
permissions can create other Azure AD users.
1. Create the user AppSP in the SQL Database using the following T-SQL command:
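A sketch of that command (AppSP is the application's display name; run it while connected to the database as the Azure AD admin):
-- Create a contained database user for the AppSP application
CREATE USER [AppSP] FROM EXTERNAL PROVIDER;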
2. Grant db_owner permission to AppSP, which allows the user to create other Azure AD users in the
database.
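For example, a sketch of the permission grant:
-- Grant db_owner so AppSP can create other Azure AD users in this database
ALTER ROLE db_owner ADD MEMBER [AppSP];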
1. Use the following script to create an Azure AD service principal user myapp using the service principal
AppSP.
Replace <TenantId> with your TenantId gathered earlier.
Replace <ClientId> with your ClientId gathered earlier.
Replace <ClientSecret> with your client secret created earlier.
Replace <server name> with your SQL logical server name. If your server name is
myserver.database.windows.net , replace <server name> with myserver .
Replace <database name> with your SQL Database name.
# PowerShell script for creating a new SQL user called myapp using application AppSP with secret
# AppSP is part of an Azure AD admin for the Azure SQL server below
# (sketch) acquire an access token for Azure SQL with the AppSP client credentials (OAuth2 client_credentials flow)
$body = @{ grant_type = "client_credentials"; client_id = "<ClientId>"; client_secret = "<ClientSecret>"; scope = "https://database.windows.net/.default" }
$result = Invoke-RestMethod -Method Post -Uri "https://login.microsoftonline.com/<TenantId>/oauth2/v2.0/token" -Body $body
$Tok = $result.access_token
#Write-host "token"
$Tok
# connect with the token and create the contained database user myapp
$conn = New-Object System.Data.SqlClient.SqlConnection("Server=tcp:<server name>.database.windows.net,1433;Initial Catalog=<database name>;Encrypt=True;")
$conn.AccessToken = $Tok
$conn.Open()
$command = New-Object System.Data.SqlClient.SqlCommand("CREATE USER [myapp] FROM EXTERNAL PROVIDER;", $conn)
Write-host "results"
$command.ExecuteNonQuery()
$conn.Close()
Alternatively, you can use the code sample in the blog, Azure AD Service Principal authentication to SQL
DB - Code Sample. Modify the script to execute a DDL statement
CREATE USER [myapp] FROM EXTERNAL PROVIDER . The same script can be used to create a regular Azure AD
user or a group in SQL Database.
2. Check if the user myapp exists in the database by executing the following command:
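A sketch of such a check using the sys.database_principals catalog view:
-- Returns a row if the contained user myapp exists in the current database
SELECT name, type_desc, authentication_type
FROM sys.database_principals
WHERE name = 'myapp';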
Next steps
Azure Active Directory service principal with Azure SQL
What are managed identities for Azure resources?
How to use managed identities for App Service and Azure Functions
Azure AD Service Principal authentication to SQL DB - Code Sample
Application and service principal objects in Azure Active Directory
Create an Azure service principal with Azure PowerShell
Directory Readers role in Azure Active Directory for Azure SQL
Rotate the Transparent data encryption (TDE)
protector
NOTE
A paused dedicated SQL pool in Azure Synapse Analytics must be resumed before key rotations.
This article applies to Azure SQL Database, Azure SQL Managed Instance, and Azure Synapse Analytics dedicated SQL
pools (formerly SQL DW). For documentation on transparent data encryption (TDE) for dedicated SQL pools inside
Synapse workspaces, see Azure Synapse Analytics encryption.
IMPORTANT
Do not delete previous versions of the key after a rollover. When keys are rolled over, some data is still encrypted with the
previous keys, such as older database backups, backed-up log files and transaction log files.
Prerequisites
This how-to guide assumes that you're already using a key from Azure Key Vault as the TDE protector for
Azure SQL Database or Azure Synapse Analytics. See Transparent data encryption with BYOK Support.
You must have Azure PowerShell installed and running.
TIP
Recommended but optional - create the key material for the TDE protector in a hardware security module (HSM) or local
key store first, and import the key material to Azure Key Vault. Follow the instructions for using a hardware security
module (HSM) and Key Vault to learn more.
NOTE
If the server or managed instance has geo-replication configured, prior to enabling automatic rotation, additional
guidelines need to be followed as described here.
NOTE
The combined length for the key vault name and key name cannot exceed 94 characters.
Using the Azure portal to switch the TDE protector from Microsoft-managed to BYOK mode:
1. Browse to the Transparent data encryption menu for an existing server.
2. Select the Customer-managed key option.
3. Select the key vault and key to be used as the TDE protector.
4. Select Save .
Next steps
In case of a security risk, learn how to remove a potentially compromised TDE protector: Remove a
potentially compromised key.
Get started with Azure Key Vault integration and Bring Your Own Key support for TDE: Turn on TDE using
your own key from Key Vault using PowerShell.
Remove a Transparent Data Encryption (TDE)
protector using PowerShell
The procedures outlined in this article should only be done in extreme cases or in test environments. Review the
steps carefully, as deleting actively used TDE protectors from Azure Key Vault will result in the database becoming
unavailable.
If a key is ever suspected to be compromised, such that a service or user had unauthorized access to the key, it's
best to delete the key.
Keep in mind that once the TDE protector is deleted in Key Vault, all encrypted databases will, within up to 10 minutes,
start denying all connections with the corresponding error message and change their state to Inaccessible.
This how-to guide goes over the approach to render databases inaccessible after a compromised incident
response.
NOTE
This article applies to Azure SQL Database, Azure SQL Managed Instance, and Azure Synapse Analytics (dedicated SQL
pools (formerly SQL DW)). For documentation on Transparent Data Encryption for dedicated SQL pools inside Synapse
workspaces, see Azure Synapse Analytics encryption.
Prerequisites
You must have an Azure subscription and be an administrator on that subscription
You must have Azure PowerShell installed and running.
This how-to guide assumes that you are already using a key from Azure Key Vault as the TDE protector for an
Azure SQL Database or Azure Synapse. See Transparent Data Encryption with BYOK Support to learn more.
For Az module installation instructions, see Install Azure PowerShell. For specific cmdlets, see AzureRM.Sql.
IMPORTANT
The PowerShell Azure Resource Manager (RM) module is still supported but all future development is for the Az.Sql
module. The AzureRM module will continue to receive bug fixes until at least December 2020. The arguments for the
commands in the Az module and in the AzureRm modules are substantially identical. For more about their compatibility,
see Introducing the new Azure PowerShell Az module.
Check TDE Protector thumbprints
The following steps outline how to check the TDE Protector thumbprints still in use by Virtual Log Files (VLF) of
a given database. The thumbprint of the current TDE protector of the database, and the database ID can be
found by running:
SELECT [database_id],
[encryption_state],
[encryptor_type], /*asymmetric key means AKV, certificate means service-managed keys*/
[encryptor_thumbprint]
FROM [sys].[dm_database_encryption_keys]
The following query returns the VLFs and the respective TDE Protector thumbprints in use. Each distinct
thumbprint refers to a different key in Azure Key Vault (AKV):
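A minimal sketch of such a query, assuming the sys.dm_db_log_info dynamic management function and its vlf_encryptor_thumbprint column (replace <database_id> with the ID returned by the previous query):
-- List each VLF and the thumbprint of the TDE protector that encrypted it
SELECT [vlf_sequence_number],
    [vlf_encryptor_thumbprint]
FROM sys.dm_db_log_info(<database_id>);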
The PowerShell command Get-AzureRmSqlServerKeyVaultKey provides the thumbprint of the TDE Protector
used in the query, so you can see which keys to keep and which keys to delete in AKV. Only keys no longer used
by the database can be safely deleted from Azure Key Vault.
1. Create a new key in Key Vault. Make sure this new key is created in a separate key vault from the
potentially compromised TDE protector, since access control is provisioned on a vault level.
2. Add the new key to the server using the Add-AzSqlServerKeyVaultKey and Set-AzSqlServerTransparentDataEncryptionProtector cmdlets and update it as the server's new TDE protector.
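A sketch of the Add-AzSqlServerKeyVaultKey call, which registers the Key Vault key with the server before the Set cmdlet below makes it the TDE protector (placeholders match the snippet that follows):
# add the new key from Key Vault to the server
Add-AzSqlServerKeyVaultKey -ResourceGroupName <SQLDatabaseResourceGroupName> `
    -ServerName <LogicalServerName> -KeyId <KeyVaultKeyId>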
# set the key as the TDE protector for all resources under the server
Set-AzSqlServerTransparentDataEncryptionProtector -ResourceGroupName <SQLDatabaseResourceGroupName> `
-ServerName <LogicalServerName> -Type AzureKeyVault -KeyId <KeyVaultKeyId>
3. Make sure the server and any replicas have updated to the new TDE protector using the Get-AzSqlServerTransparentDataEncryptionProtector cmdlet.
NOTE
It may take a few minutes for the new TDE protector to propagate to all databases and secondary databases
under the server.
Get-AzSqlServerTransparentDataEncryptionProtector -ServerName <LogicalServerName> `
    -ResourceGroupName <SQLDatabaseResourceGroupName>
5. Delete the compromised key from Key Vault using the Remove-AzKeyVaultKey cmdlet.
6. To restore a key to Key Vault in the future using the Restore-AzKeyVaultKey cmdlet:
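Sketches of both operations (vault name, key name, and backup file path are placeholders):
# delete the potentially compromised key from the key vault
Remove-AzKeyVaultKey -VaultName <KeyVaultName> -Name <CompromisedKeyName>
# restore a key from a previously saved backup file, if it's needed again
Restore-AzKeyVaultKey -VaultName <KeyVaultName> -InputFile "C:\MyKeyBackup.blob"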
NOTE
It may take around 10 minutes for any permission changes to take effect for the key vault. This includes revoking access
permissions to the TDE protector in AKV, and users within this time frame may still have access permissions.
Next steps
Learn how to rotate the TDE protector of a server to comply with security requirements: Rotate the
Transparent Data Encryption protector Using PowerShell
Get started with Bring Your Own Key support for TDE: Turn on TDE using your own key from Key Vault using
PowerShell
Tutorial: Set up SQL Data Sync between databases
in Azure SQL Database and SQL Server
IMPORTANT
SQL Data Sync does not support Azure SQL Managed Instance or Azure Synapse Analytics at this time.
2. Select the database you want to use as the hub database for Data Sync.
NOTE
The hub database is the central endpoint of a sync topology, in which a sync group has multiple database endpoints.
All other member databases with endpoints in the sync group sync with the hub database.
3. On the SQL database menu for the selected database, select Sync to other databases .
4. On the Sync to other databases page, select New Sync Group . The New sync group page opens
with Create sync group .
On the Create Data Sync Group page, change the following settings:
Sync Group Name: Enter a name for the new sync group. This name is distinct from the name of the database itself.
NOTE
Microsoft recommends creating a new, empty database for use as the Sync Metadata Database . Data Sync
creates tables in this database and runs a frequent workload. This database is shared as the Sync Metadata
Database for all sync groups in a selected region and subscription. You can't change the database or its name
without removing all sync groups and sync agents in the region. Additionally, an Elastic jobs database cannot be
used as the SQL Data Sync Metadata database and vice versa.
Select OK and wait for the sync group to be created and deployed.
5. On the New Sync Group page, if you selected Use private link , you will need to approve the private
endpoint connection. The link in the info message will take you to the private endpoint connections
experience where you can approve the connection.
NOTE
The private links for the sync group and the sync members need to be created, approved, and disabled separately.
On the Configure Azure SQL Database page, change the following settings:
Sync Member Name: Provide a name for the new sync member. This name is distinct from the database name itself.
Username and Password: Enter the existing credentials for the server on which the member database is located. Don't enter new credentials in this section.
Select OK and wait for the new sync member to be created and deployed.
2. On the Choose the Sync Agent page, choose whether to use an existing agent or create an agent.
If you choose Existing agents , select the existing agent from the list.
If you choose Create a new agent , do the following things:
a. Download the data sync agent from the link provided and install it on the computer where the SQL
Server is located. You can also download the agent directly from Azure SQL Data Sync Agent.
IMPORTANT
You have to open outbound TCP port 1433 in the firewall to let the client agent communicate with the
server.
a. In the sync agent app, select Submit Agent Key . The Sync Metadata Database Configuration
dialog box opens.
b. In the Sync Metadata Database Configuration dialog box, paste in the agent key copied from
the Azure portal. Also provide the existing credentials for the server on which the metadata
database is located. (If you created a metadata database, this database is on the same server as the
hub database.) Select OK and wait for the configuration to finish.
NOTE
If you get a firewall error, create a firewall rule on Azure to allow incoming traffic from the SQL Server
computer. You can create the rule manually in the portal or in SQL Server Management Studio (SSMS). In
SSMS, connect to the hub database on Azure by entering its name as
<hub_database_name>.database.windows.net.
c. Select Register to register a SQL Server database with the agent. The SQL Server Configuration dialog box opens.
d. In the SQL Server Configuration dialog box, choose to connect using SQL Server
authentication or Windows authentication. If you choose SQL Server authentication, enter the
existing credentials. Provide the SQL Server name and the name of the database that you want to
sync and select Test connection to test your settings. Then select Save and the registered
database appears in the list.
NOTE
To connect to SQL Data Sync and the local agent, add your user name to the role DataSync_Executor. Data Sync creates
this role on the SQL Server instance.
Next steps
Congratulations. You've created a sync group that includes both a SQL Database instance and a SQL Server
database.
For more info about SQL Data Sync, see:
Data Sync Agent for Azure SQL Data Sync
Best practices and How to troubleshoot issues with Azure SQL Data Sync
Monitor SQL Data Sync with Azure Monitor logs
Update the sync schema with Transact-SQL or PowerShell
For more info about SQL Database, see:
SQL Database Overview
Database Lifecycle Management
How to migrate your SQLite database to Azure
SQL Database serverless
Prerequisites
An Azure Subscription
SQLite2 or SQLite3 database that you wish to migrate
A Windows environment
If you do not have a local Windows environment, you can use a Windows VM in Azure for the
migration. Move and make your SQLite database file available on the VM using Azure Files and
Storage Explorer.
Review prerequisites for Azure Data Factory self-hosted integration runtime
Steps
1. Provision a new Azure SQL Database in the serverless compute tier.
2. Ensure you have your SQLite database file available in your Windows environment. Install a SQLite ODBC
Driver if you do not already have one (there are many available in Open Source, for example,
http://www.ch-werner.de/sqliteodbc/).
3. Create a System DSN for the database in your local Windows server. Ensure you use the Data Source
Administrator application that matches your system architecture (32-bit vs 64-bit). You can find which
version you are running in your system settings.
Open ODBC Data Source Administrator in your environment.
Select the system DSN tab and select "Add".
Select the SQLite ODBC connector you installed and give the connection a meaningful name, for
example, sqlitemigrationsource .
Set the database name to the .db file.
Save and exit.
4. Download and install the self-hosted integration runtime on your local Windows server. The easiest way
to do this is the Option 1: Express setup install option, as detailed in the documentation. If you opt for
a manual install, you will need to provide the application with an authentication key, which can be located
in your Azure Data Factory instance by:
Start up the Azure Data Factory UI via the Author and Monitor link from the service in the Azure
portal.
Select the Author tab (Blue pencil) on the menu. For more information, see Visual authoring in Azure
Data Factory.
Select Connections (bottom left), then Integration runtimes .
Add a new Self-Hosted Integration Runtime , give it a name, select Option 2 .
5. Create a new linked service for the source SQLite database in your Data Factory.
6. In Connections , under Linked Service , select New .
7. Search for and select the "ODBC" connector.
8. Give the linked service a meaningful name, for example, sqlite_odbc . Select your integration runtime
from the "Connect via integration runtime" dropdown. Enter the below into the connection string,
replacing the Initial Catalog variable with the filepath for the .db file, and the DSN with the name of the
system DSN connection:
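A sketch of the connection string format, showing only the two values the step asks you to replace (additional ODBC properties may be required depending on the driver):
Initial Catalog=<file path of the .db file>;DSN=<name of the system DSN connection>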
11. Create another linked service for your Serverless SQL target. Select the database using the linked service
wizard, and provide the SQL authentication credentials.
12. Extract the CREATE TABLE statements from your SQLite database. You can do this by executing the below
Python script on your database file.
#!/usr/bin/python
import sqlite3
conn = sqlite3.connect("sqlitemigrationsource.db")
c = conn.cursor()
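# minimal sketch of the remaining logic (an assumption): write every CREATE TABLE
# statement stored in sqlite_master to CreateTables.sql, the file used in the next step
with open("CreateTables.sql", "w") as f:
    for (ddl,) in c.execute("SELECT sql FROM sqlite_master WHERE type = 'table' AND sql IS NOT NULL"):
        f.write(ddl + ";\n")
conn.close()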
13. Create the landing tables in your Serverless SQL target environment by copying the CREATE table
statements from the CreateTables.sql file and running the SQL statements in the Query Editor in the
Azure portal.
14. Return to the home screen of your Data Factory and select Copy Data to run through the job creation
wizard.
15. Select all tables from the source SQLite database using the check boxes, and map them to the target
tables in Azure SQL. Once the job has run, you have successfully migrated your data from SQLite to
Azure SQL!
Next steps
To get started, see Quickstart: Create a single database in Azure SQL Database using the Azure portal.
For resource limits, see Serverless compute tier resource limits.
Configure isolated access to a Hyperscale named
replica
Retrieve the SID hexadecimal value for the created login from the sys.sql_logins system view:
Disable the login. This will prevent this login from accessing any database on the server hosting the primary
replica.
As an optional step, once the database user has been created, you can drop the server login created in the
previous step if there are concerns about the login getting re-enabled in any way. Connect to the master
database on the logical server hosting the primary database, and execute the following:
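A combined sketch of the three commands above, assuming the login is named third-party-login as in the rest of this article:
-- retrieve the SID of the login (copy the hexadecimal value to create the database user)
SELECT [name], [sid] FROM sys.sql_logins WHERE [name] = 'third-party-login';
-- disable the login so it can't access any database through the primary replica
ALTER LOGIN [third-party-login] DISABLE;
-- optional: drop the login once the database user has been created
DROP LOGIN [third-party-login];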
Then, create a named replica for the primary database on this server. For example, using AZ CLI:
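A sketch of the CLI call (resource group, server, and database names are placeholders; --secondary-type Named designates a named replica):
az sql db replica create -g <ResourceGroupName> -s <PrimaryServerName> -n <PrimaryDatabaseName> \
    --partner-server <ServerForNamedReplica> --partner-database <NamedReplicaName> \
    --secondary-type Named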
At this point, users and applications using third-party-login can connect to the named replica, but not to the
primary replica.
As an alternative to granting permissions individually on every table, you can add the user to the
db_datareader database role to allow read access to all tables, or you can use schemas to allow access to all
existing and new tables in a schema.
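For example, a sketch of the role-based approach for the user created from the third-party-login login:
-- allow read access to all tables in the database
ALTER ROLE db_datareader ADD MEMBER [third-party-login];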
Test access
You can test this configuration by using any client tool and attempting to connect to the primary and the named
replica. For example, using sqlcmd , you can try to connect to the primary replica using the third-party-login
user:
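A connection attempt might look like the following sketch (server name, database name, and password are placeholders):
sqlcmd -S tcp:<primary-server-name>.database.windows.net,1433 -d <database-name> -U third-party-login -P <password>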
Sqlcmd: Error: Microsoft ODBC Driver 13 for SQL Server : Login failed for user 'third-party-login'. Reason:
The account is disabled.
When you connect to the named replica with the same user instead, no errors are returned, and queries can be executed on the named replica as allowed by granted object-level
permissions.
For more information:
Azure SQL logical Servers, see What is a server in Azure SQL Database
Managing database access and logins, see SQL Database security: Manage database access and login security
Database engine permissions, see Permissions
Granting object permissions, see GRANT Object Permissions
What is a single database in Azure SQL Database?
Dynamic scalability
You can build your first app on a small, single database at low cost in the serverless compute tier or a small
compute size in the provisioned compute tier. You can change the compute or service tier manually or
programmatically at any time to meet the needs of your solution. You can adjust performance without
downtime to your app or to your customers. Dynamic scalability enables your database to transparently
respond to rapidly changing resource requirements and enables you to only pay for the resources that you need
when you need them.
Availability capabilities
Single databases and elastic pools provide many availability characteristics. For information, see Availability
characteristics.
Transact-SQL differences
Most Transact-SQL features that applications use are fully supported in both Microsoft SQL Server and Azure
SQL Database. For example, the core SQL components such as data types, operators, string, arithmetic, logical,
and cursor functions, work identically in SQL Server and SQL Database. There are, however, a few T-SQL
differences in DDL (data-definition language) and DML (data manipulation language) elements resulting in T-
SQL statements and queries that are only partially supported (which we discuss later in this article).
In addition, there are some features and syntax that are not supported because Azure SQL Database is designed
to isolate features from dependencies on the master database and the operating system. As such, most server-
level activities are inappropriate for SQL Database. T-SQL statements and options are not available if they
configure server-level options, configure operating system components, or specify file system configuration.
When such capabilities are required, an appropriate alternative is often available in some other way from SQL
Database or from another Azure feature or service.
For more information, see Resolving Transact-SQL differences during migration to SQL Database.
Security
SQL Database provides a range of built-in security and compliance features to help your application meet
various security and compliance requirements.
IMPORTANT
Azure SQL Database has been certified against a number of compliance standards. For more information, see the
Microsoft Azure Trust Center, where you can find the most current list of SQL Database compliance certifications.
Next steps
To quickly get started with a single database, start with the Single database quickstart guide.
To learn about migrating a SQL Server database to Azure, see Migrate to Azure SQL Database.
For information about supported features, see Features.
Elastic pools help you manage and scale multiple
databases in Azure SQL Database
IMPORTANT
There's no per-database charge for elastic pools. You're billed for each hour a pool exists at the highest eDTU or vCores,
regardless of usage or whether the pool was active for less than an hour.
Elastic pools enable you to purchase resources for a pool shared by multiple databases to accommodate
unpredictable periods of usage by individual databases. You can configure resources for the pool based either
on the DTU-based purchasing model or the vCore-based purchasing model. The resource requirement for a
pool is determined by the aggregate utilization of its databases.
The amount of resources available to the pool is controlled by your budget. All you have to do is:
Add databases to the pool.
Optionally set the minimum and maximum resources for the databases. These resources are either minimum
and maximum DTUs or minimum and maximum vCores, depending on your choice of resourcing model.
Set the resources of the pool based on your budget.
You can use pools to seamlessly grow your service from a lean startup to a mature business at ever-increasing
scale.
Within the pool, individual databases are given the flexibility to use resources within set parameters. Under
heavy load, a database can consume more resources to meet demand. Databases under light loads consume
less, and databases under no load consume no resources. Provisioning resources for the entire pool rather than
for single databases simplifies your management tasks. Plus, you have a predictable budget for the pool.
More resources can be added to an existing pool with minimum downtime. If extra resources are no longer
needed, they can be removed from an existing pool at any time. You can also add or remove databases from the
pool. If a database is predictably underutilizing resources, you can move it out.
NOTE
When you move databases into or out of an elastic pool, there's no downtime except for a brief period (on the order of
seconds) at the end of the operation when database connections are dropped.
The chart illustrates DTU usage over one hour from 12:00 to 1:00 where each data point has one-minute
granularity. At 12:10, DB1 peaks up to 90 DTUs, but its overall average usage is less than five DTUs. An S3
compute size is required to run this workload in a single database, but this size leaves most of the resources
unused during periods of low activity.
A pool allows these unused DTUs to be shared across multiple databases. A pool reduces the DTUs needed and
the overall cost.
Building on the previous example, suppose there are other databases with similar utilization patterns as DB1. In
the next two figures, the utilization of four databases and 20 databases are layered onto the same graph to
illustrate the nonoverlapping nature of their utilization over time by using the DTU-based purchasing model:
The aggregate DTU utilization across all 20 databases is illustrated by the black line in the preceding chart. This
line shows that the aggregate DTU utilization never exceeds 100 DTUs and indicates that the 20 databases can
share 100 eDTUs over this time period. The result is a 20-time reduction in DTUs and a 13-time price reduction
compared to placing each of the databases in S3 compute sizes for single databases.
This example is ideal because:
There are large differences between peak utilization and average utilization per database.
The peak utilization for each database occurs at different points in time.
eDTUs are shared between many databases.
In the DTU purchasing model, the price of a pool is a function of the pool eDTUs. While the eDTU unit price for a
pool is 1.5 times greater than the DTU unit price for a single database, pool eDTUs can be shared by many
databases and fewer total eDTUs are needed. These distinctions in pricing and eDTU sharing are the basis of the
price savings potential that pools can provide.
In the vCore purchasing model, the vCore unit price for elastic pools is the same as the vCore unit price for
single databases.
How do I choose the correct pool size?
The best size for a pool depends on the aggregate resources needed for all databases in the pool. You need to
determine:
Maximum compute resources utilized by all databases in the pool. Compute resources are indexed by either
eDTUs or vCores depending on your choice of purchasing model.
Maximum storage bytes utilized by all databases in the pool.
For service tiers and resource limits in each purchasing model, see the DTU-based purchasing model or the
vCore-based purchasing model.
The following steps can help you estimate whether a pool is more cost-effective than single databases:
1. Estimate the eDTUs or vCores needed for the pool:
For the DTU-based purchasing model:
MAX(<Total number of DBs × Average DTU utilization per DB>, <Number of concurrently
peaking DBs × Peak DTU utilization per DB>)
For the vCore-based purchasing model:
MAX(<Total number of DBs × Average vCore utilization per DB>, <Number of concurrently
peaking DBs × Peak vCore utilization per DB>)
2. Estimate the total storage space needed for the pool by adding the data size needed for all the databases in
the pool. For the DTU purchasing model, determine the eDTU pool size that provides this amount of storage.
3. For the DTU-based purchasing model, take the larger of the eDTU estimates from step 1 and step 2. For the
vCore-based purchasing model, take the vCore estimate from step 1.
4. See the SQL Database pricing page and find the smallest pool size that's greater than the estimate from step
3.
5. Compare the pool price from step 4 to the price of using the appropriate compute sizes for single databases.
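For example, applying step 1 to the 20-database scenario above, with an average utilization of roughly 5 DTUs per database and a single database peaking at about 90 DTUs at any one time, gives MAX(20 × 5, 1 × 90) = 100 eDTUs, consistent with the 100 eDTUs those 20 databases were shown to share.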
IMPORTANT
If the number of databases in a pool approaches the maximum supported, make sure to consider resource management
in dense elastic pools.
Per-database properties
You can optionally set per-database properties to modify resource consumption patterns in elastic pools. For
more information, see resource limits documentation for DTU and vCore elastic pools.
Create a new SQL Database elastic pool by using the Azure portal
You can create an elastic pool in the Azure portal in two ways:
Create an elastic pool and select an existing or new server.
Create an elastic pool from an existing server.
To create an elastic pool and select an existing or new server:
1. Go to the Azure portal to create an elastic pool. Search for and select Azure SQL .
2. Select Create to open the Select SQL deployment option pane. To view more information about
elastic pools, on the Databases tile, select Show details .
3. On the Databases tile, in the Resource type dropdown, select Elastic pool . Then select Create .
NOTE
You can create multiple pools on a server, but you can't add databases from different servers into the same pool.
The pool's service tier determines the features available to the databases in the pool, and the maximum amount of
resources available to each database. For more information, see resource limits for elastic pools in the DTU
model. For vCore-based resource limits for elastic pools, see vCore-based resource limits - elastic pools.
To configure the resources and pricing of the pool, select Configure pool . Then select a service tier, add
databases to the pool, and configure the resource limits for the pool and its databases.
After you've configured the pool, select Apply , name the pool, and select OK to create the pool.
Next steps
For pricing information, see Elastic pool pricing.
To scale elastic pools, see Scale elastic pools and Scale an elastic pool - sample code.
To learn more about design patterns for SaaS applications by using elastic pools, see Design patterns for
multitenant SaaS applications with SQL Database.
For a SaaS tutorial by using elastic pools, see Introduction to the Wingtip SaaS application.
To learn about resource management in elastic pools with many databases, see Resource management in
dense elastic pools.
What is a logical SQL server in Azure SQL
Database and Azure Synapse?
You can create the resource group for a logical server ahead of time or while creating the server itself. There are
multiple methods for getting to a new SQL server form, either by creating a new SQL server or as part of
creating a new database.
Create a blank server
To create a blank logical server (without a database, elastic pool, or dedicated SQL pool) using the Azure portal,
navigate to a blank SQL server (logical SQL server) form.
Create a blank or sample database in Azure SQL Database
To create a database in SQL Database using the Azure portal, navigate to create a new SQL Database and
provide the requested information. You can create the resource group and server ahead of time or while
creating the database itself. You can create a blank database or create a sample database based on
AdventureWorksLT .
IMPORTANT
For information on selecting the pricing tier for your database, see DTU-based purchasing model and vCore-based
purchasing model.
2. Set Public network access to Selected networks to reveal the virtual networks and firewall rules.
When set to Disabled , virtual networks and firewall rule settings are hidden.
3. Choose Add a firewall rule to configure the firewall.
IMPORTANT
To configure performance properties for a database, see DTU-based purchasing model and vCore-based purchasing
model.
TIP
For an Azure portal quickstart, see Create a database in SQL Database in the Azure portal.
Next steps
To learn about migrating a SQL Server database to Azure SQL Database, see Migrate to Azure SQL Database.
For information about supported features, see Features.
Azure SQL Database serverless
Performance configuration
The minimum vCores and maximum vCores are configurable parameters that define the range of
compute capacity available for the database. Memory and IO limits are proportional to the vCore range
specified.
The auto-pause delay is a configurable parameter that defines the period of time the database must be
inactive before it is automatically paused. The database is automatically resumed when the next login or
other activity occurs. Alternatively, automatic pausing can be disabled.
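For example, a minimal PowerShell sketch of setting these parameters with the Az.Sql Set-AzSqlDatabase cmdlet (an assumption; resource names are placeholders):
# configure a serverless database with 0.5 min vCores, 4 max vCores, and a 60-minute auto-pause delay
Set-AzSqlDatabase -ResourceGroupName "<resource group>" -ServerName "<server name>" -DatabaseName "<database name>" `
    -Edition GeneralPurpose -ComputeModel Serverless -ComputeGeneration Gen5 `
    -MinimumCapacity 0.5 -VCore 4 -AutoPauseDelayInMinutes 60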
Cost
The cost for a serverless database is the summation of the compute cost and storage cost.
When compute usage is between the min and max limits configured, the compute cost is based on vCore and
memory used.
When compute usage is below the min limits configured, the compute cost is based on the min vCores and
min memory configured.
When the database is paused, the compute cost is zero and only storage costs are incurred.
The storage cost is determined in the same way as in the provisioned compute tier.
For more cost details, see Billing.
Scenarios
Serverless is price-performance optimized for single databases with intermittent, unpredictable usage patterns
that can afford some delay in compute warm-up after idle usage periods. In contrast, the provisioned compute
tier is price-performance optimized for single databases or multiple databases in elastic pools with higher
average usage that cannot afford any delay in compute warm-up.
Scenarios well suited for serverless compute
Single databases with intermittent, unpredictable usage patterns interspersed with periods of inactivity, and
lower average compute utilization over time.
Single databases in the provisioned compute tier that are frequently rescaled and customers who prefer to
delegate compute rescaling to the service.
New single databases without usage history where compute sizing is difficult or not possible to estimate
prior to deployment in SQL Database.
Scenarios well suited for provisioned compute
Single databases with more regular, predictable usage patterns and higher average compute utilization over
time.
Databases that cannot tolerate performance trade-offs resulting from more frequent memory trimming or
delays in resuming from a paused state.
Multiple databases with intermittent, unpredictable usage patterns that can be consolidated into elastic pools
for better price-performance optimization.
Database usage pattern
Serverless compute: Intermittent, unpredictable usage with lower average compute utilization over time.
Provisioned compute: More regular usage patterns with higher average compute utilization over time, or multiple databases using elastic pools.
Autoscaling
Scaling responsiveness
In general, serverless databases are run on a machine with sufficient capacity to satisfy resource demand
without interruption for any amount of compute requested within limits set by the max vCores value.
Occasionally, load balancing automatically occurs if the machine is unable to satisfy resource demand within a
few minutes. For example, if the resource demand is 4 vCores, but only 2 vCores are available, then it may take
up to a few minutes to load balance before 4 vCores are provided. The database remains online during load
balancing except for a brief period at the end of the operation when connections are dropped.
Memory management
Memory for serverless databases is reclaimed more frequently than for provisioned compute databases. This
behavior is important to control costs in serverless and can impact performance.
Cache reclamation
Unlike provisioned compute databases, memory from the SQL cache is reclaimed from a serverless database
when CPU or active cache utilization is low.
Active cache utilization is considered low when the total size of the most recently used cache entries falls
below a threshold for a period of time.
When cache reclamation is triggered, the target cache size is reduced incrementally to a fraction of its
previous size and reclaiming only continues if usage remains low.
When cache reclamation occurs, the policy for selecting cache entries to evict is the same selection policy as
for provisioned compute databases when memory pressure is high.
The cache size is never reduced below the min memory limit, as defined by the min vCores that can be configured.
In both serverless and provisioned compute databases, cache entries may be evicted if all available memory is
used.
When CPU utilization is low, active cache utilization can remain high depending on the usage pattern and
prevent memory reclamation. Also, there can be other delays after user activity stops before memory
reclamation occurs due to periodic background processes responding to prior user activity. For example, delete
operations and Query Store cleanup tasks generate ghost records that are marked for deletion, but are not
physically deleted until the ghost cleanup process runs. Ghost cleanup may involve reading additional data
pages into cache.
Cache hydration
The SQL cache grows as data is fetched from disk in the same way and with the same speed as for provisioned
databases. When the database is busy, the cache is allowed to grow unconstrained up to the max memory limit.
SELECT session_id,
host_name,
program_name,
client_interface_name,
login_name,
status,
login_time,
last_request_start_time,
last_request_end_time
FROM sys.dm_exec_sessions AS s
INNER JOIN sys.dm_resource_governor_workload_groups AS wg
ON s.group_id = wg.group_id
WHERE s.session_id <> @@SPID
AND
(
(
wg.name like 'UserPrimaryGroup.DB%'
AND
TRY_CAST(RIGHT(wg.name, LEN(wg.name) - LEN('UserPrimaryGroup.DB') - 2) AS int) = DB_ID()
)
OR
wg.name = 'DACGroup'
);
TIP
After running the query, make sure to disconnect from the database. Otherwise, the open session used by the query will
prevent auto-pausing.
If the result set is non-empty, it indicates that there are sessions currently preventing auto-pausing.
If the result set is empty, it is still possible that sessions were open, possibly for a short time, at some point
earlier during the auto-pause delay period. To see if such activity has occurred during the delay period, you can
use Azure SQL Auditing and examine audit data for the relevant period.
The presence of open sessions, with or without concurrent CPU utilization in the user resource pool, is the most
common reason for a serverless database to not auto-pause as expected.
Auto -resuming
Auto-resuming is triggered if any of the following conditions are true at any time:
Data discovery and classification: Adding, modifying, deleting, or viewing sensitivity labels.
SQL Data Sync: Synchronization between hub and member databases that run on a configurable schedule or are performed manually.
SQL Server Management Studio (SSMS): Using SSMS versions earlier than 18.1 and opening a new query window for any database in the server will resume any auto-paused database in the same server. This behavior does not occur if using SSMS version 18.1 or later.
Monitoring, management, or other solutions performing any of the operations listed above will trigger auto-
resuming.
Auto-resuming is also triggered during the deployment of some service updates that require the database be
online.
Connectivity
If a serverless database is paused, then the first login will resume the database and return an error stating that
the database is unavailable with error code 40613. Once the database is resumed, the login must be retried to
establish connectivity. Database clients with connection retry logic should not need to be modified. For
connection retry logic options that are built-in to the SqlClient driver, see configurable retry logic in SqlClient.
Latency
The latency to auto-resume and auto-pause a serverless database is generally on the order of 1 minute to auto-resume
and 1-10 minutes after the expiration of the delay period to auto-pause.
Customer managed transparent data encryption (BYOK )
If using customer managed transparent data encryption (BYOK) and the serverless database is auto-paused
when key deletion or revocation occurs, then the database remains in the auto-paused state. In this case, after
the database is next resumed, the database becomes inaccessible within approximately 10 minutes. Once the
database becomes inaccessible, the recovery process is the same as for provisioned compute databases. If the
serverless database is online when key deletion or revocation occurs, then the database also becomes
inaccessible within approximately 10 minutes in the same way as with provisioned compute databases.
Monitoring
Resources used and billed
The resources of a serverless database are encapsulated by app package, SQL instance, and user resource pool
entities.
App package
The app package is the outermost resource management boundary for a database, regardless of whether the
database is in a serverless or provisioned compute tier. The app package contains the SQL instance and external
services like Full-text Search that together scope all user and system resources used by a database in SQL
Database. The SQL instance generally dominates the overall resource utilization across the app package.
User resource pool
The user resource pool is an inner resource management boundary for a database, regardless of whether the
database is in a serverless or provisioned compute tier. The user resource pool scopes CPU and IO for user
workload generated by DDL queries such as CREATE and ALTER, DML queries such as INSERT, UPDATE, DELETE,
and MERGE, and SELECT queries. These queries generally represent the most substantial proportion of
utilization within the app package.
Metrics
Metrics for monitoring the resource usage of the app package and user resource pool of a serverless database
are listed in the following table:
Resource limits
For resource limits, see serverless compute tier.
Billing
The amount of compute billed is the maximum of CPU used and memory used each second. If the amount of
CPU used and memory used is less than the minimum amount provisioned for each, then the provisioned
amount is billed. In order to compare CPU with memory for billing purposes, memory is normalized into units
of vCores by rescaling the amount of memory in GB by 3 GB per vCore.
Resource billed : CPU and memory
Amount billed : vCore unit price * max (min vCores, vCores used, min memory GB * 1/3, memory GB used *
1/3)
Billing frequency : Per second
The vCore unit price is the cost per vCore per second. Refer to the Azure SQL Database pricing page for specific
unit prices in a given region.
The amount of compute billed is exposed by the following metric:
Metric : app_cpu_billed (vCore seconds)
Definition : max (min vCores, vCores used, min memory GB * 1/3, memory GB used * 1/3)
Repor ting frequency : Per minute
This quantity is calculated each second and aggregated over 1 minute.
Minimum compute bill
If a serverless database is paused, then the compute bill is zero. If a serverless database is not paused, then the
minimum compute bill is no less than the amount of vCores based on max (min vCores, min memory GB * 1/3).
Examples:
Suppose a serverless database is not paused and configured with 8 max vCores and 1 min vCore
corresponding to 3.0 GB min memory. Then the minimum compute bill is based on max (1 vCore, 3.0 GB * 1
vCore / 3 GB) = 1 vCore.
Suppose a serverless database is not paused and configured with 4 max vCores and 0.5 min vCores
corresponding to 2.1 GB min memory. Then the minimum compute bill is based on max (0.5 vCores, 2.1 GB *
1 vCore / 3 GB) = 0.7 vCores.
The Azure SQL Database pricing calculator for serverless can be used to determine the min memory
configurable based on the number of max and min vCores configured. As a rule, if the min vCores configured is
greater than 0.5 vCores, then the minimum compute bill is independent of the min memory configured and
based only on the number of min vCores configured.
Example scenario
Consider a serverless database configured with 1 min vCore and 4 max vCores. This configuration corresponds
to around 3 GB min memory and 12 GB max memory. Suppose the auto-pause delay is set to 6 hours and the
database workload is active during the first 2 hours of a 24-hour period and otherwise inactive.
In this case, the database is billed for compute and storage during the first 8 hours. Even though the database is
inactive starting after the second hour, it is still billed for compute in the subsequent 6 hours based on the
minimum compute provisioned while the database is online. Only storage is billed during the remainder of the
24-hour period while the database is paused.
More precisely, the compute bill in this example is calculated as follows:
The detailed billing table for this example lists, for each time interval, the vCores used each second, the GB used each second, the compute dimension billed, and the vCore seconds billed over that interval.
Suppose the compute unit price is $0.000145/vCore/second. Then the compute billed for this 24-hour period is
the product of the compute unit price and vCore seconds billed: $0.000145/vCore/second * 50400 vCore
seconds ~ $7.31.
Azure Hybrid Benefit and reserved capacity
Azure Hybrid Benefit (AHB) and reserved capacity discounts do not apply to the serverless compute tier.
Available regions
The serverless compute tier is available worldwide except the following regions: China East, China North,
Germany Central, Germany Northeast, and US Gov Central (Iowa).
Next steps
To get started, see Quickstart: Create a single database in Azure SQL Database using the Azure portal.
For resource limits, see Serverless compute tier resource limits.
Hyperscale service tier
NOTE
For details on the General Purpose and Business Critical service tiers in the vCore-based purchasing model, see
General Purpose and Business Critical service tiers. For a comparison of the vCore-based purchasing model with the
DTU-based purchasing model, see Azure SQL Database purchasing models and resources.
The Hyperscale service tier is currently only available for Azure SQL Database, and not Azure SQL Managed Instance.
IMPORTANT
Elastic pools do not support the Hyperscale service tier.
Best for
General Purpose: Offers budget oriented balanced compute and storage options.
Hyperscale: Most business workloads. Autoscaling storage size up to 100 TB, fast vertical and horizontal compute scaling, fast database restore.
Business Critical: OLTP applications with high transaction rate and low IO latency. Offers highest resilience to failures and fast failovers using multiple synchronously updated replicas.
Storage type
General Purpose: Premium remote storage (per instance).
Hyperscale: De-coupled storage with local SSD cache (per instance).
Business Critical: Super-fast local SSD storage (per instance).
IOPS
General Purpose: 500 IOPS per vCore with 7,000 maximum IOPS.
Hyperscale: Multi-tiered architecture with caching at multiple levels; effective IOPS will depend on the workload.
Business Critical: 5,000 IOPS with 200,000 maximum IOPS.
NOTE
Short-term backup retention for 1-35 days for Hyperscale databases is now in preview.
Create a Hyperscale database: Hyperscale databases are available only using the vCore-based purchasing model. Find examples in Quickstart: Create a Hyperscale database in Azure SQL Database.
Upgrade an existing database to Hyperscale: Migrating an existing database in Azure SQL Database to the Hyperscale tier is a size of data operation. Learn how to migrate an existing database to Hyperscale.
Reverse migrate a Hyperscale database to the General Purpose service tier (preview): If you previously migrated an existing Azure SQL Database to the Hyperscale service tier, you can reverse migrate the database to the General Purpose service tier within 45 days of the original migration to Hyperscale. Learn how to reverse migrate from Hyperscale, including the limitations for reverse migration.
Known limitations
These are the current limitations of the Hyperscale service tier. We're actively working to remove as many of
these limitations as possible.
Short-term backup retention: Short-term backup retention for 1-35 days for Hyperscale databases is now in preview. A non-Hyperscale database can't be restored as a Hyperscale database, and a Hyperscale database can't be restored as a non-Hyperscale database.
Service tier change from Hyperscale to General Purpose tier is supported directly under limited scenarios: Reverse migration from Hyperscale allows customers who have recently migrated an existing Azure SQL Database to the Hyperscale service tier to move to the General Purpose tier, should Hyperscale not meet their needs. While reverse migration is initiated by a service tier change, it's essentially a size-of-data move between different architectures. Databases created in the Hyperscale service tier aren't eligible for reverse migration. Learn the limitations for reverse migration.
Migration of databases with In-Memory OLTP objects: Hyperscale supports a subset of In-Memory OLTP objects, including memory-optimized table types, table variables, and natively compiled modules. However, when any In-Memory OLTP objects are present in the database being migrated, migration from Premium and Business Critical service tiers to Hyperscale isn't supported. To migrate such a database to Hyperscale, all In-Memory OLTP objects and their dependencies must be dropped. After the database is migrated, these objects can be recreated. Durable and non-durable memory-optimized tables aren't currently supported in Hyperscale, and must be changed to disk tables.
Database integrity check: DBCC CHECKDB isn't currently supported for Hyperscale databases. DBCC CHECKTABLE ('TableName') WITH TABLOCK and DBCC CHECKFILEGROUP WITH TABLOCK may be used as a workaround. See Data Integrity in Azure SQL Database for details on data integrity management in Azure SQL Database.
A Hyperscale database contains the following types of components: compute nodes, page servers, the log
service, and Azure storage.
Compute
The compute node is where the relational engine lives. The compute node is where language, query, and
transaction processing occur. All user interactions with a Hyperscale database happen through compute nodes.
Compute nodes have local SSD-based caches called Resilient Buffer Pool Extension (RBPEX Data Cache). RBPEX
Data Cache is an intelligent low latency data cache that minimizes the need to fetch data from remote page
servers.
Hyperscale databases have one primary compute node where the read-write workload and transactions are
processed. One or more secondary compute nodes act as hot standby nodes for failover purposes. Secondary
compute nodes can serve as read-only compute nodes to offload read workloads when desired. Named replicas
are secondary compute nodes designed to enable massive OLTP read scale-out scenarios and to improve
Hybrid Transactional and Analytical Processing (HTAP) workloads.
The database engine running on Hyperscale compute nodes is the same as in other Azure SQL Database service
tiers. When users interact with the database engine on Hyperscale compute nodes, the supported surface area
and engine behavior are the same as in other service tiers, with the exception of known limitations.
Page server
Page servers are systems representing a scaled-out storage engine. Each page server is responsible for a subset
of the pages in the database. Each page server also has a replica that is kept for redundancy and availability.
The job of a page server is to serve database pages out to the compute nodes on demand, and to keep the
pages updated as transactions update data. Page servers are kept up to date by replaying transaction log
records from the log service.
Page servers also maintain covering SSD-based caches to enhance performance. Long-term storage of data
pages is kept in Azure Storage for durability.
Log service
The log service accepts transaction log records that correspond to data changes from the primary compute
replica. Page servers then receive the log records from the log service and apply the changes to their respective
slices of data. Additionally, compute secondary replicas receive log records from the log service and replay only
the changes to pages already in their buffer pool or local RBPEX cache. All data changes from the primary
compute replica are propagated through the log service to all the secondary compute replicas and page servers.
Finally, transaction log records are pushed out to long-term storage in Azure Storage, which is a virtually infinite
storage repository. This mechanism removes the need for frequent log truncation. The log service has local
memory and SSD caches to speed up access to log records.
The log on Hyperscale is practically infinite, with the restriction that a single transaction cannot generate more
than 1 TB of log. Additionally, if using Change Data Capture, at most 1 TB of log can be generated since the start
of the oldest active transaction. Avoid unnecessarily large transactions to stay below this limit.
Azure storage
Azure Storage contains all data files in a database. Page servers keep data files in Azure Storage up to date. This
storage is also used for backup purposes and may be replicated between regions based on choice of storage
redundancy.
Backups are implemented using storage snapshots of data files. Restore operations using snapshots are fast
regardless of data size. A database can be restored to any point in time within its backup retention period.
Hyperscale supports configurable storage redundancy. When creating a Hyperscale database, you can choose
read-access geo-redundant storage (RA-GRS), zone-redundant storage (ZRS)(preview), or locally redundant
storage (LRS)(preview) Azure standard storage. The selected storage redundancy option will be used for the
lifetime of the database for both data storage redundancy and backup storage redundancy.
Next steps
Learn more about Hyperscale in the following articles:
Hyperscale service tier
Azure SQL Database Hyperscale FAQ
Quickstart: Create a Hyperscale database in Azure SQL Database
Azure SQL Database Hyperscale named replicas FAQ
Automated backups for Hyperscale databases
Backup retention
Default short-term retention of backups for Hyperscale databases is 7 days. Long-term retention (LTR) policies
aren't currently supported.
NOTE
Short-term retention of backups in the range of 1 to 35 days for Hyperscale databases is now in preview.
Backup scheduling
There are no traditional full, differential, and transaction log backups for Hyperscale databases. Instead, regular
storage snapshots of data files are taken.
The generated transaction logs are retained as is for the configured retention period. At restore time, relevant
transaction log records are applied to the restored storage snapshot. The result is a transactionally consistent
database without any data loss as of the specified point in time within the retention period.
Data storage size is not included in billable backup storage because it's already billed as allocated database storage.
Deleted Hyperscale databases incur backup costs to support recovery to a point in time before deletion. For a
deleted Hyperscale database, billable backup storage is calculated as follows:
Total billable backup storage size for deleted Hyperscale database = (data storage size + data backup size +
log backup storage size) * (remaining backup retention period after deletion / configured backup retention
period)
Data storage size is included in the formula because allocated database storage is not billed separately for a
deleted database. For a deleted database, data is stored after deletion to enable recovery during the configured
backup retention period.
Billable backup storage for a deleted database reduces gradually over time after it's deleted. It becomes zero
when backups are no longer retained, and then recovery is no longer possible. If it's a permanent deletion and
you no longer need backups, you can optimize costs by reducing retention before deleting the database.
Monitor backup costs
To understand backup storage costs:
1. In the Azure portal, go to Cost Management + Billing .
2. Select Cost Management > Cost analysis .
3. For Scope , select the desired subscription.
4. Filter for the time period and service you're interested in by following these steps:
a. Add a filter for Service name .
b. Choose sql-database from the dropdown list.
c. Add another filter for Meter .
d. To monitor backup costs for point-in-time recovery, select Data Stored - Backup - RA from the
dropdown list.
The following screenshot shows an example cost analysis.
Data and backup storage redundancy
Hyperscale supports configurable storage redundancy. When you're creating a Hyperscale database, you can
choose your preferred storage type: read-access geo-redundant storage (RA-GRS), zone-redundant storage
(ZRS), or locally redundant storage (LRS). The selected storage redundancy option is used for the lifetime of the
database for both data storage redundancy and backup storage redundancy.
Consider backup storage redundancy carefully when you create a Hyperscale database, because you can set it
only during database creation. You can't modify this setting after the resource is provisioned.
Use active geo-replication to update backup storage redundancy settings for an existing Hyperscale database
with minimum downtime. Alternatively, you can use database copy.
WARNING
Geo-restore is disabled as soon as a database is updated to use locally redundant or zone-redundant storage.
Zone-redundant storage is currently available in only certain regions.
If you prefer, you can copy the database to a different region. Use this method if geo-restore is not available
because it's not supported with the selected storage redundancy type. For details, see Database copy for
Hyperscale.
Next steps
Database backups are an essential part of any business continuity and disaster recovery strategy because
they help protect your data from accidental corruption or deletion. To learn about the other SQL Database
business continuity solutions, see Business continuity overview.
For information about how to configure, manage, and restore from long-term retention of automated
backups in Azure Blob Storage by using the Azure portal, see Manage long-term backup retention by using
the Azure portal.
For information about how to configure, manage, and restore from long-term retention of automated
backups in Azure Blob Storage by using PowerShell, see Manage long-term backup retention by using
PowerShell.
Get more information about how to restore a database to a point in time by using the Azure portal.
Get more information about how to restore a database to a point in time by using PowerShell.
Hyperscale secondary replicas
9/13/2022 • 9 minutes to read • Edit Online
All HA replicas are identical in their resource capacity. If more than one HA replica is present, the read-intent
workload is distributed arbitrarily across all available HA replicas. When there are multiple HA replicas, keep in
mind that each one could have different data latency with respect to data changes made on the primary. Each
HA replica uses the same data as the primary on the same set of page servers. However, local data caches on
each HA replica reflect the changes made on the primary via the transaction log service, which forwards log
records from the primary replica to HA replicas. As a result, depending on the workload being processed by
an HA replica, application of log records may happen at different speeds, and thus different replicas could have
different data latency relative to the primary replica.
Named replica
A named replica, just like an HA replica, uses the same page servers as the primary replica. Similar to HA
replicas, there is no data copy needed to add a named replica.
The difference from HA replicas is that named replicas:
appear as regular (read-only) Azure SQL databases in the portal and in API (AZ CLI, PowerShell, T-SQL) calls;
can have a database name different from the primary replica, and can optionally be located on a different logical
server (as long as it is in the same region as the primary replica);
have their own Service Level Objective that can be set and changed independently from the primary replica;
can number up to 30 for each primary replica;
can use different authentication for each named replica by creating different logins on the logical servers
hosting the named replicas.
As a result, named replicas offer several benefits over HA replicas for read-only workloads:
users connected to a named replica experience no disconnection when the primary replica is scaled up or down;
likewise, users connected to the primary replica are unaffected by named replicas scaling up or down
workloads running on any replica, whether primary or named, are unaffected by long-running queries running on
other replicas
The main goal of named replicas is to enable a broad variety of read scale-out scenarios, and to improve Hybrid
Transactional and Analytical Processing (HTAP) workloads. Examples of how to create such solutions are
available here:
OLTP scale-out sample
Aside from the main scenarios listed above, named replicas offer flexibility and elasticity to also satisfy many
other use cases:
Access Isolation: you can grant access to a specific named replica, but not the primary replica or other named
replicas.
Workload-dependent service level objective: as a named replica can have its own service level objective, it is
possible to use different named replicas for different workloads and use cases. For example, one named
replica could be used to serve Power BI requests, while another can be used to serve data to Apache Spark
for Data Science tasks. Each one can have an independent service level objective and scale independently.
Workload-dependent routing: with up to 30 named replicas, it is possible to use named replicas in groups so
that one application can be isolated from another. For example, a group of four named replicas could be used
to serve requests coming from mobile applications, while another group of two named replicas could be used to
serve requests coming from a web application. This approach allows fine-grained tuning of
performance and costs for each group.
The following example creates a named replica WideWorldImporters_NamedReplica for database
WideWorldImporters . The primary replica uses service level objective HS_Gen5_4, while the named replica uses
HS_Gen5_2. Both use the same logical server contosoeast . If you prefer to use the REST API directly, see
Databases - Create A Database As Named Replica Secondary.
Portal
T-SQL
PowerShell
Azure CLI
1. In the Azure portal, browse to the database for which you want to create the named replica.
2. On the SQL Database page, select your database, scroll to Data management , select Replicas , and then
select Create replica .
3. Choose Named replica under Replica configuration, select or create the server for the named replica,
enter the named replica database name, and configure the Compute + storage options if necessary.
4. Click Review + create , review the information, and then click Create .
5. The named replica deployment process begins.
6. When the deployment is complete, the named replica displays its status.
7. Return to the primary database page, and then select Replicas . Your named replica is listed under
Named replicas .
As there is no data movement involved, in most cases a named replica will be created in about a minute. Once
the named replica is available, it will be visible from the portal or any command-line tool like AZ CLI or
PowerShell. A named replica is usable as a regular read-only database.
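For reference, a hedged T-SQL sketch of the same operation, run while connected to the master database of the logical server hosting the primary, could look like this (names and service objective match the example above):

-- Creates the named replica on server contosoeast with its own service objective.
ALTER DATABASE [WideWorldImporters]
ADD SECONDARY ON SERVER [contosoeast]
WITH (SERVICE_OBJECTIVE = 'HS_Gen5_2', SECONDARY_TYPE = Named,
      DATABASE_NAME = [WideWorldImporters_NamedReplica]);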
NOTE
For frequently asked questions on Hyperscale named replicas, see Azure SQL Database Hyperscale named replicas FAQ.
To scale a named replica, open the named replica database page, select Compute + storage , and update the vCores.
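A hedged T-SQL alternative, with an illustrative target service objective, is to run the following while connected to the master database of the logical server hosting the named replica:

-- Scales the named replica independently of the primary replica.
ALTER DATABASE [WideWorldImporters_NamedReplica]
MODIFY (SERVICE_OBJECTIVE = 'HS_Gen5_4');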
Removing a named replica
To remove a named replica, you drop it just like you would a regular database.
Portal
T-SQL
PowerShell
Azure CLI
IMPORTANT
Named replicas will be automatically removed when the primary replica from which they have been created is deleted.
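To remove a named replica yourself, a minimal T-SQL sketch, run while connected to the master database of the logical server hosting the named replica, is:

-- Drops the named replica; the primary database is unaffected.
DROP DATABASE [WideWorldImporters_NamedReplica];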
Known issues
Partially incorrect data returned from sys.databases
For named replicas, row values returned from sys.databases in columns other than name and database_id
may be inconsistent or incorrect. For example, the compatibility_level column for a named replica could be
reported as 140 even if the primary database from which the named replica was created is set to 150. A
workaround, where possible, is to get the same data by using the DATABASEPROPERTYEX() function, which returns
correct data.
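For example, assuming you are connected to the named replica, a query such as the following returns reliable values through DATABASEPROPERTYEX():

-- Returns selected database properties for the current database.
SELECT DATABASEPROPERTYEX(DB_NAME(), 'Edition')          AS Edition,
       DATABASEPROPERTYEX(DB_NAME(), 'ServiceObjective') AS ServiceObjective,
       DATABASEPROPERTYEX(DB_NAME(), 'Updateability')    AS Updateability;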
Geo-replica
With active geo-replication, you can create a readable secondary replica of the primary Hyperscale database in
the same or in a different Azure region. Geo-replicas must be created on a different logical server. The database
name of a geo-replica always matches the database name of the primary.
When creating a geo-replica, all data is copied from the primary to a different set of page servers. A geo-replica
does not share page servers with the primary, even if they are in the same region. This architecture provides the
necessary redundancy for geo-failovers.
Geo-replicas are used to maintain a transactionally consistent copy of the database via asynchronous
replication. If a geo-replica is in a different Azure region, it can be used for disaster recovery in case of a disaster
or outage in the primary region. Geo-replicas can also be used for geographic read scale-out scenarios.
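For example, a hedged T-SQL sketch for creating a geo-replica, run in the master database of the logical server hosting the primary, might look like this; [contosowest] is a hypothetical secondary logical server that must already exist in the target region:

-- Creates a readable geo-secondary of the database on another logical server.
ALTER DATABASE [WideWorldImporters]
ADD SECONDARY ON SERVER [contosowest];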
Geo-replication for Hyperscale databases currently has the following limitations:
Only one geo-replica can be created (in the same or different region).
Point in time restore of the geo-replica is not supported.
Creating a database copy of the geo-replica is not supported.
Secondary of a secondary (also known as "geo-replica chaining") is not supported.
Next steps
Hyperscale service tier
Active geo-replication
Configure Security to allow isolated access to Azure SQL Database Hyperscale Named Replicas
Azure SQL Database Hyperscale named replicas FAQ
Compare vCore and DTU-based purchasing models
of Azure SQL Database
9/13/2022 • 6 minutes to read • Edit Online
Purchasing models
There are two purchasing models:
vCore-based purchasing model is available for both Azure SQL Database and Azure SQL Managed Instance.
The Hyperscale service tier is available for single databases that are using the vCore-based purchasing
model.
DTU-based purchasing model is available for Azure SQL Database.
The following table and chart compare and contrast the vCore-based and the DTU-based purchasing models:
Compute costs
Compute costs are calculated differently based on each purchasing model.
DTU compute costs
In the DTU purchasing model, DTUs are offered in preconfigured bundles of compute resources and included
storage to drive different levels of application performance. You are billed by the number of DTUs you allocate to
your database for your application.
vCore compute costs
In the vCore-based purchasing model, choose between the provisioned compute tier, or the serverless compute
tier. In the provisioned compute tier, the compute cost reflects the total compute capacity that is provisioned for
the application. In the serverless compute tier, compute resources are auto-scaled based on workload capacity
and billed for the amount of compute used, per second.
For single databases, compute resources, I/O, and data and log storage are charged per database. For elastic
pools, these resources are charged per pool. However, backup storage is always charged per database.
Since three additional replicas are automatically allocated in the Business Critical service tier, the price is
approximately 2.7 times higher than it is in the General Purpose service tier. Likewise, the higher storage price
per GB in the Business Critical service tier reflects the higher IO limits and lower latency of the local SSD storage.
Storage costs
Storage costs are calculated differently based on each purchasing model.
DTU storage costs
Storage is included in the price of the DTU. It's possible to add extra storage in the standard and premium tiers.
See the Azure SQL Database pricing options for details on provisioning extra storage. Long-term backup
retention is not included, and is billed separately.
Overview
A virtual core (vCore) represents a logical CPU and offers you the option to choose the physical characteristics
of the hardware (for example, the number of cores, the memory, and the storage size). The vCore-based
purchasing model gives you flexibility, control, transparency of individual resource consumption, and a
straightforward way to translate on-premises workload requirements to the cloud. This model optimizes price,
and allows you to choose compute, memory, and storage resources based on your workload needs.
In the vCore-based purchasing model, your costs depend on the choice and usage of:
Service tier
Hardware configuration
Compute resources (the number of vCores and the amount of memory)
Reserved database storage
Actual backup storage
IMPORTANT
Compute resources, I/O, and data and log storage are charged per database or elastic pool. Backup storage is charged per
each database.
The vCore purchasing model used by Azure SQL Database provides several benefits over the DTU purchasing
model:
Higher compute, memory, I/O, and storage limits.
Choice of hardware configuration to better match compute and memory requirements of the workload.
Pricing discounts for Azure Hybrid Benefit (AHB).
Greater transparency in the hardware details that power the compute, which facilitates planning for migrations
from on-premises deployments.
Reserved instance pricing, which is available only for the vCore purchasing model.
Higher scaling granularity with multiple compute sizes available.
Service tiers
Service tier options in the vCore purchasing model include General Purpose, Business Critical, and Hyperscale.
The service tier generally defines the hardware, storage type and IOPS, high availability and disaster
recovery options, and other features such as memory-optimized object types.
For more details, review the resource limits for logical servers, single databases, and pooled databases.
Best for:
General Purpose: Most business workloads. Offers budget-oriented, balanced, and scalable compute and storage options.
Business Critical: Offers business applications the highest resilience to failures by using several isolated replicas, and provides the highest I/O performance per database replica.
Hyperscale: Most business workloads with highly scalable storage and read-scale requirements. Offers higher resilience to failures by allowing configuration of more than one isolated database replica.
Pricing/billing:
General Purpose: vCore, reserved storage, and backup storage are charged. IOPS is not charged.
Business Critical: vCore, reserved storage, and backup storage are charged. IOPS is not charged.
Hyperscale: vCore for each replica and used storage are charged. IOPS is not yet charged.
Discount models:
General Purpose: Reserved instances; Azure Hybrid Benefit (not available on dev/test subscriptions); Enterprise and Pay-As-You-Go Dev/Test subscriptions.
Business Critical: Reserved instances; Azure Hybrid Benefit (not available on dev/test subscriptions); Enterprise and Pay-As-You-Go Dev/Test subscriptions.
Hyperscale: Azure Hybrid Benefit (not available on dev/test subscriptions); Enterprise and Pay-As-You-Go Dev/Test subscriptions.
NOTE
For more information on the Service Level Agreement (SLA), see SLA for Azure SQL Database
Resource limits
For vCore resource limits, see logical servers, single databases, and pooled databases.
Compute tiers
Compute tier options in the vCore model include the provisioned and serverless compute tiers.
While the provisioned compute tier provides a specific amount of compute resources that are
continuously provisioned independent of workload activity, the serverless compute tier auto-scales
compute resources based on workload activity.
While the provisioned compute tier bills for the amount of compute provisioned at a fixed price per hour,
the serverless compute tier bills for the amount of compute used, per second.
Hardware configuration
Common hardware configurations in the vCore model include standard-series (Gen5), Fsv2-series, and DC-
series. Hardware configuration defines compute and memory limits and other characteristics that impact
workload performance.
Certain hardware configurations such as Gen5 may use more than one type of processor (CPU), as described in
Compute resources (CPU and memory). While a given database or elastic pool tends to stay on the hardware
with the same CPU type for a long time (commonly for multiple months), there are certain events that can cause
a database or pool to be moved to hardware that uses a different CPU type. For example, a database or pool can
be moved if it is scaled up or down to a different service objective, or if the current infrastructure in a datacenter
is approaching its capacity limits, or if the currently used hardware is being decommissioned due to its end of
life.
For some workloads, a move to a different CPU type can change performance. SQL Database configures
hardware with the goal to provide predictable workload performance even if CPU type changes, keeping
performance changes within a narrow band. However, across the wide spectrum of customer workloads running
in SQL Database, and as new types of CPUs become available, it is possible to occasionally see more noticeable
changes in performance if a database or pool moves to a different CPU type.
Regardless of CPU type used, resource limits for a database or elastic pool remain the same as long as the
database stays on the same service objective.
Gen4/Gen5
Gen4/Gen5 hardware provides balanced compute and memory resources, and is suitable for most database
workloads that do not have higher memory, higher vCore, or faster single vCore requirements as provided
by Fsv2-series or M-series.
For regions where Gen4/Gen5 is available, see Gen4/Gen5 availability.
Fsv2-series
Fsv2-series is a compute optimized hardware configuration delivering low CPU latency and high clock speed
for the most CPU demanding workloads.
Depending on the workload, Fsv2-series can deliver more CPU performance per vCore than other types of
hardware. For example, the 72 vCore Fsv2 compute size can provide more CPU performance than 80 vCores
on Gen5, at lower cost.
Fsv2 provides less memory and tempdb per vCore than other hardware, so workloads sensitive to those
limits may perform better on standard-series (Gen5).
Fsv2-series is only supported in the General Purpose tier. For regions where Fsv2-series is available, see Fsv2-
series availability.
M -series
M-series is a memory optimized hardware configuration for workloads demanding more memory and
higher compute limits than provided by other types of hardware.
M-series provides 29 GB per vCore and up to 128 vCores, which increases the memory limit relative to Gen5
by 8x to nearly 4 TB.
M-series is only supported in the Business Critical tier and does not support zone redundancy. For regions
where M-series is available, see M-series availability.
Azure offer types supported by M-series
There are two subscription requirements for M-series hardware:
1. To create databases or elastic pools on M-series hardware, the subscription must be a paid offer type
including Pay-As-You-Go or Enterprise Agreement (EA). For a complete list of Azure offer types supported
by M-series, see current offers without spending limits.
2. To enable M-series hardware for a subscription and region, a support request must be opened. In the
Azure portal, create a New Support Request to request a quota increase for your subscription. Use the
"M-series region access" quota type to request access to M-series hardware.
DC -series
DC-series hardware uses Intel processors with Software Guard Extensions (Intel SGX) technology.
DC-series is required for Always Encrypted with secure enclaves, which is not supported with other hardware
configurations.
DC-series is designed for workloads that process sensitive data and demand confidential query processing
capabilities, provided by Always Encrypted with secure enclaves.
DC-series hardware provides balanced compute and memory resources.
DC-series is only supported for Provisioned compute (Serverless is not supported) and does not support zone
redundancy. For regions where DC-series is available, see DC-series availability.
Azure offer types supported by DC-series
To create databases or elastic pools on DC-series hardware, the subscription must be a paid offer type including
Pay-As-You-Go or Enterprise Agreement (EA). For a complete list of Azure offer types supported by DC-series,
see current offers without spending limits.
Selecting hardware configuration
You can select hardware configuration for a database or elastic pool in SQL Database at the time of creation. You
can also change hardware configuration of an existing database or elastic pool.
To select a hardware configuration when creating a SQL Database or pool
For detailed information, see Create a SQL Database.
On the Basics tab, select the Configure database link in the Compute + storage section, and then select the
Change configuration link:
IMPORTANT
Gen4 hardware is being retired and is not available for new deployments, as announced on December 18, 2019.
Customers using Gen4 for Azure SQL Databases, elastic pools, or SQL managed instances should migrate to currently
available hardware, such as standard-series (Gen5), before January 31, 2023.
For more information on Gen4 hardware retirement and migration to current hardware, see our Blog post on Gen4
retirement. Existing Gen4 databases, elastic pools, and SQL managed instances will be migrated automatically to
equivalent standard-series (Gen5) hardware.
Downtime caused by automatic migration will be minimal and similar to downtime during scaling operations within
selected service tier. To avoid unplanned interruptions to workloads, migrate proactively at the time of your choice before
January 31, 2023.
Next steps
To get started, see Creating a SQL Database using the Azure portal
For pricing details, see the Azure SQL Database pricing page
For details about the specific compute and storage sizes available, see:
vCore-based resource limits for Azure SQL Database
vCore-based resource limits for pooled Azure SQL Database
DTU-based purchasing model overview
9/13/2022 • 9 minutes to read • Edit Online
DTUs are most useful for understanding the relative resources that are allocated for databases at different
compute sizes and service tiers. For example:
Doubling the DTUs by increasing the compute size of a database equates to doubling the set of resources
available to that database.
A premium service tier P11 database with 1750 DTUs provides 350 times more DTU compute power than a
basic service tier database with 5 DTUs.
To gain deeper insight into the resource (DTU) consumption of your workload, use query-performance insights
to:
Identify the top queries by CPU/duration/execution count that can potentially be tuned for improved
performance. For example, an I/O-intensive query might benefit from in-memory optimization techniques to
make better use of the available memory at a certain service tier and compute size.
Drill down into the details of a query to view its text and its history of resource usage.
Access performance-tuning recommendations that show actions taken by SQL Database Advisor.
Elastic database transaction units (eDTUs)
Rather than provide a dedicated set of resources (DTUs) that might not always be needed, you can place these
databases into an elastic pool. The databases in an elastic pool use a single instance of the database engine and
share the same pool of resources.
The shared resources in an elastic pool are measured by elastic database transaction units (eDTUs). Elastic pools
provide a simple, cost-effective solution to manage performance goals for multiple databases that have widely
varying and unpredictable usage patterns. An elastic pool guarantees that all the resources can't be consumed
by one database in the pool, while ensuring that each database in the pool always has a minimum amount of
necessary resources available.
A pool is given a set number of eDTUs for a set price. In the elastic pool, individual databases can autoscale
within the configured boundaries. A database under a heavier load will consume more eDTUs to meet demand.
Databases under lighter loads will consume fewer eDTUs. Databases with no load will consume no eDTUs.
Because resources are provisioned for the entire pool, rather than per database, elastic pools simplify your
management tasks and provide a predictable budget for the pool.
You can add additional eDTUs to an existing pool with minimal database downtime. Similarly, if you no longer
need extra eDTUs, remove them from an existing pool at any time. You can also add databases to or remove
databases from a pool at any time. To reserve eDTUs for other databases, limit the number of eDTUs databases
can use under a heavy load. If a database has consistently high resource utilization that impacts other databases
in the pool, move it out of the pool and configure it as a single database with a predictable amount of required
resources.
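For example, a hedged T-SQL sketch for moving a database out of a pool to a standalone service objective, or into an existing pool, might look like this; the database, pool, and service objective names are hypothetical:

-- Move the database out of its elastic pool to a standalone S3 service objective.
ALTER DATABASE [BusyDb] MODIFY (SERVICE_OBJECTIVE = 'S3');

-- Move a database into an existing elastic pool named MyPool.
ALTER DATABASE [QuietDb] MODIFY (SERVICE_OBJECTIVE = ELASTIC_POOL(name = [MyPool]));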
Workloads that benefit from an elastic pool of resources
Pools are well suited for databases with a low resource-utilization average and relatively infrequent utilization
spikes. For more information, see When should you consider a SQL Database elastic pool?.
The input values for this formula can be obtained from sys.dm_db_resource_stats, sys.resource_stats, and
sys.elastic_pool_resource_stats DMVs. In other words, to determine the percentage of DTU/eDTU utilization
toward the DTU/eDTU limit of a database or an elastic pool, pick the largest percentage value from the following:
avg_cpu_percent , avg_data_io_percent , and avg_log_write_percent at a given point in time.
NOTE
The DTU limit of a database is determined by CPU, reads, writes, and memory available to the database. However,
because the SQL Database engine typically uses all available memory for its data cache to improve performance, the
avg_memory_usage_percent value will usually be close to 100 percent, regardless of current database load. Therefore,
even though memory does indirectly influence the DTU limit, it is not used in the DTU utilization formula.
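For example, a T-SQL query along the following lines computes the DTU utilization of the current database for each reported interval; this is a sketch, and the column names come from sys.dm_db_resource_stats:

-- Per-interval DTU utilization: the largest of CPU, data IO, and log write percentages.
SELECT end_time,
       (SELECT MAX(v)
        FROM (VALUES (avg_cpu_percent),
                     (avg_data_io_percent),
                     (avg_log_write_percent)) AS value(v)) AS avg_DTU_percent
FROM sys.dm_db_resource_stats
ORDER BY end_time DESC;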
Hardware configuration
In the DTU-based purchasing model, customers cannot choose the hardware configuration used for their
databases. While a given database usually stays on a specific type of hardware for a long time (commonly for
multiple months), there are certain events that can cause a database to be moved to different hardware.
For example, a database can be moved to different hardware if it's scaled up or down to a different service
objective, or if the current infrastructure in a datacenter is approaching its capacity limits, or if the currently used
hardware is being decommissioned due to its end of life.
If a database is moved to different hardware, workload performance can change. The DTU model guarantees
that the throughput and response time of the DTU benchmark workload will remain substantially identical as the
database moves to a different hardware type, as long as its service objective (the number of DTUs) stays the
same.
However, across the wide spectrum of customer workloads running in Azure SQL Database, the impact of using
different hardware for the same service objective can be more pronounced. Different workloads may benefit
from different hardware configurations and features. Therefore, for workloads other than the DTU benchmark,
it's possible to see performance differences if the database moves from one type of hardware to another.
Customers can use the vCore model to choose their preferred hardware configuration during database creation
and scaling. In the vCore model, detailed resource limits of each service objective in each hardware
configuration are documented for single databases and elastic pools. For more information about hardware in
the vCore model, see Hardware configuration for SQL Database or Hardware configuration for SQL Managed
Instance.
IOPS (approximate)*
Basic: 1-4 IOPS per DTU
Standard: 1-4 IOPS per DTU
Premium: >25 IOPS per DTU
* All read and write IOPS against data files, including background IO (checkpoint and lazy writer)
IMPORTANT
The Basic, S0, S1 and S2 service objectives provide less than one vCore (CPU). For CPU-intensive workloads, a service
objective of S3 or greater is recommended.
In the Basic, S0, and S1 service objectives, database files are stored in Azure Standard Storage, which uses hard disk drive
(HDD)-based storage media. These service objectives are best suited for development, testing, and other infrequently
accessed workloads that are less sensitive to performance variability.
TIP
To see actual resource governance limits for a database or elastic pool, query the sys.dm_user_db_resource_governance
view.
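For example, the following query, run in the context of the database you want to inspect, returns the governance settings in effect:

-- Actual resource governance configuration applied to the current database or pool.
SELECT * FROM sys.dm_user_db_resource_governance;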
NOTE
You can get a free database in Azure SQL Database at the Basic service tier in conjunction with an Azure free account to
explore Azure. For information, see Create a managed cloud database with your Azure free account.
Resource limits
Resource limits differ for single and pooled databases.
Single database storage limits
Compute sizes are expressed in terms of Database Transaction Units (DTUs) for single databases and elastic
Database Transaction Units (eDTUs) for elastic pools. To learn more, review Resource limits for single databases.
IMPORTANT
Under some circumstances, you may need to shrink a database to reclaim unused space. For more information, see
Manage file space in Azure SQL Database.
IMPORTANT
More than 1 TB of storage in the Premium tier is currently available in all regions except: China East, China North,
Germany Central, and Germany Northeast. In these regions, the storage max in the Premium tier is limited to 1 TB. For
more information, see P11-P15 current limitations.
DTU Benchmark
Physical characteristics (CPU, memory, IO) associated with each DTU measure are calibrated using a benchmark
that simulates real-world database workload.
Learn about the schema, transaction types used, workload mix, users and pacing, scaling rules, and metrics
associated with the DTU benchmark.
Next steps
Learn more about purchasing models and related concepts in the following articles:
For details on specific compute sizes and storage size choices available for single databases, see SQL
Database DTU-based resource limits for single databases.
For details on specific compute sizes and storage size choices available for elastic pools, see SQL Database
DTU-based resource limits.
For information on the benchmark associated with the DTU-based purchasing model, see DTU benchmark.
Compare vCore and DTU-based purchasing models of Azure SQL Database.
Azure SQL Database and Azure Synapse Analytics
connectivity architecture
9/13/2022 • 5 minutes to read • Edit Online
Connectivity architecture
The following diagram provides a high-level overview of the connectivity architecture.
The following steps describe how a connection is established to Azure SQL Database:
Clients connect to the gateway that has a public IP address and listens on port 1433.
The gateway, depending on the effective connection policy, redirects or proxies the traffic to the right
database cluster.
Inside the database cluster, traffic is forwarded to the appropriate database.
Connection policy
Servers in SQL Database and Azure Synapse support the following three options for the server's connection
policy setting:
Redirect (recommended): Clients establish connections directly to the node hosting the database, leading
to reduced latency and improved throughput. For connections to use this mode, clients need to:
Allow outbound communication from the client to all Azure SQL IP addresses in the region on ports in
the range of 11000 to 11999. Use the Service Tags for SQL to make this easier to manage.
Allow outbound communication from the client to Azure SQL Database gateway IP addresses on port
1433.
When using the Redirect connection policy, refer to the Azure IP Ranges and Service Tags – Public
Cloud for a list of your region's IP addresses to allow.
Proxy: In this mode, all connections are proxied via the Azure SQL Database gateways, leading to increased
latency and reduced throughput. For connections to use this mode, clients need to allow outbound
communication from the client to Azure SQL Database gateway IP addresses on port 1433.
When using the Proxy connection policy, refer to the Gateway IP addresses list later in this article for
your region's IP addresses to allow.
Default: This is the connection policy in effect on all servers after creation unless you explicitly alter the
connection policy to either Proxy or Redirect . The default policy is Redirect for all client connections
originating inside of Azure (for example, from an Azure Virtual Machine) and Proxy for all client connections
originating outside (for example, connections from your local workstation).
We highly recommend the Redirect connection policy over the Proxy connection policy for the lowest latency
and highest throughput. However, you will need to meet the additional requirements for allowing network traffic
as outlined above. If the client is an Azure Virtual Machine, you can accomplish this using Network Security
Groups (NSG) with service tags. If the client is connecting from a workstation on-premises then you may need
to work with your network admin to allow network traffic through your corporate firewall.
IMPORTANT
Connections to private endpoint only support Proxy as the connection policy.
IMPORTANT
Open TCP ports 1434 and 14000-14999 to enable Connecting with DAC.
Gateway IP addresses
The table below lists the individual Gateway IP addresses and also Gateway IP address ranges per region.
Periodically, we will retire Gateways using old hardware and migrate the traffic to new Gateways as per the
process outlined at Azure SQL Database traffic migration to newer Gateways. We strongly encourage customers
to use the Gateway IP address subnets in order to not be impacted by this activity in a region.
IMPORTANT
Logins for SQL Database or Azure Synapse can land on any of the Gateways in a region . For consistent connectivity
to SQL Database or Azure Synapse, allow network traffic to and from ALL Gateway IP addresses and Gateway IP address
subnets for the region.
Next steps
For information on how to change the Azure SQL Database connection policy for a server, see conn-policy.
For information about Azure SQL Database connection behavior for clients that use ADO.NET 4.5 or a later
version, see Ports beyond 1433 for ADO.NET 4.5.
For general application development overview information, see SQL Database Application Development
Overview.
Refer to Azure IP Ranges and Service Tags – Public Cloud
What is a logical SQL server in Azure SQL Database and Azure Synapse?
Azure SQL connectivity settings
9/13/2022 • 7 minutes to read • Edit Online
Applies to: Azure SQL Database Azure Synapse Analytics (dedicated SQL pool (formerly SQL DW)
only)
This article introduces settings that control connectivity to the server for Azure SQL Database and dedicated
SQL pool (formerly SQL DW) in Azure Synapse Analytics. These settings apply to all SQL Database and
dedicated SQL pool (formerly SQL DW) databases associated with the server.
You can change these settings from the networking tab of your logical server:
IMPORTANT
This article doesn't apply to Azure SQL Managed Instance. This article also does not apply to dedicated SQL pools in
Azure Synapse Analytics workspaces. See Azure Synapse Analytics IP firewall rules for guidance on how to configure IP
firewall rules for Azure Synapse Analytics with workspaces.
When public network access is denied, any attempt to connect to the server from a public network fails with an error message similar to:
Error 47073
An instance-specific error occurred while establishing a connection to SQL Server.
The public network interface on this server is not accessible.
To connect to this server, use the Private Endpoint from inside your virtual network.
When Connectivity method is set to No access , any attempts to add, remove or edit any firewall rules will be
denied with an error message similar to:
Error 42101
Unable to create or modify firewall rules when public network interface for the server is disabled.
To manage server or database level firewall rules, please enable the public network interface.
Ensure that Connectivity method is set to Public endpoint or Private endpoint to be able to add, remove
or edit any firewall rules for Azure SQL Database and Azure Synapse Analytics.
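Once public network access is enabled, server-level IP firewall rules can also be managed with T-SQL. A minimal sketch follows; the rule name and IP range are illustrative, and the procedure is run in the master database as a server administrator:

-- Creates or updates a server-level IP firewall rule for the logical server.
EXECUTE sp_set_firewall_rule
    @name = N'AllowClientRange',
    @start_ip_address = '203.0.113.0',
    @end_ip_address = '203.0.113.255';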
To enable public network access for the logical server hosting your databases, go to the Networking page in
the Azure portal, choose the Public access tab, and then set Public network access to Selected networks .
From this page, you can add a virtual network rule, as well as configure firewall rules for your public endpoint.
Choose the Private access tab to configure a private endpoint.
NOTE
These settings take effect immediately after they're applied. Your customers might experience connection loss if they don't
meet the requirements for each setting.
IMPORTANT
The default for the minimal TLS version is to allow all versions. After you enforce a version of TLS, it's not possible to
revert to the default.
For customers with applications that rely on older versions of TLS, we recommend setting the minimal TLS
version according to the requirements of your applications. For customers that rely on applications to connect
by using an unencrypted connection, we recommend not setting any minimal TLS version.
For more information, see TLS considerations for SQL Database connectivity.
After you set the minimal TLS version, login attempts from customers who are using a TLS version lower than
the minimal TLS version of the server will fail with the following error:
Error 47072
Login failed with invalid TLS version
Portal
PowerShell
Azure CLI
In the Azure portal, go to your SQL server resource. Under the Security settings, select Networking and then
choose the Connectivity tab. Select the Minimum TLS Version desired for all databases associated with the
server, and select Save .
Portal
PowerShell
Azure CLI
It's possible to change your connection policy for your logical server by using the Azure portal.
In the Azure portal, go to your SQL server resource. Under the Security settings, select Networking and then
choose the Connectivity tab. Choose the desired connection policy, and select Save .
Next steps
For an overview of how connectivity works in Azure SQL Database, refer to Connectivity architecture.
For information on how to change the connection policy for a server, see conn-policy.
What is the local development experience for Azure
SQL Database?
9/13/2022 • 3 minutes to read • Edit Online
Overview
The Azure SQL Database local development experience is a combination of tools and procedures that empowers
application developers and database professionals to design, edit, build/validate, publish, and run database
schemas while working offline.
The Azure SQL Database local development experience consists of extensions for Visual Studio Code and Azure
Data Studio and an Azure SQL Database emulator (preview). The extensions allow users to create, build, and source
control Database Projects while working offline with the Azure SQL Database emulator, which is a containerized
database with close fidelity to the Azure SQL Database public service.
The local development experience uses the emulator as a runtime host for Database Projects that can be
published and tested locally as part of a developer's inner loop.
A common example would be to push a project to a GitHub repository that leverages GitHub Actions to
automate database creation or apply schema changes to a database in Azure SQL Database. The Azure SQL
Database emulator itself can also be used as part of Continuous Integration and Continuous Deployment
(CI/CD) processes to automate database validation and testing.
NOTE
To learn more about upcoming use cases and support for new scenarios, review the Devs' Corner blog.
Visual Studio Code and Azure Data Studio extensions
To use the Azure SQL Database local development experience, install the appropriate extension depending on
whether you are using Visual Studio Code or Azure Data Studio.
The mssql extension for Visual Studio Code: Enables you to connect and run queries and test scripts against a database. The database may be running in the Azure SQL Database emulator locally, or it may be a database in the global Azure SQL Database service. In Visual Studio Code, install the mssql extension. In Azure Data Studio, there is no need to install the mssql extension because this functionality is provided natively.
SQL Database Projects extension (Preview): Enables you to capture an existing database schema and/or design new database objects using a declarative database design model. You can commit a database schema to version control. You can also publish a database schema to a database running in the Azure SQL Database emulator, or to a database running in the global Azure SQL Database service. You may publish an entire database, or incremental changes to a database. In Visual Studio Code, the SQL Database Projects extension is bundled into the mssql extension and is installed or updated automatically when the mssql extension is installed or updated. In Azure Data Studio, install the SQL Database Projects extension.
To learn how to install the extensions, review Set up a local development environment.
Next steps
Learn more about the local development experience for Azure SQL Database:
Set up a local development environment for Azure SQL Database
Create a Database Project for a local Azure SQL Database development environment
Publish a Database Project for Azure SQL Database to the local emulator
Quickstart: Create a local development environment for Azure SQL Database
Introducing the Azure SQL Database emulator (preview)
Introducing the Azure SQL Database emulator
(preview)
9/13/2022 • 2 minutes to read • Edit Online
This local development experience is supported on Windows, macOS and Linux, and is available on x64 and
ARM64-based hardware platforms.
Once validation and testing have succeeded, developers can directly deploy their SQL Database Projects from
within Visual Studio Code to a database in Azure SQL Database and leverage additional capabilities like
Serverless.
Limitations
The current implementation of the Azure SQL Database emulator is derived from an Azure SQL Edge base
image, because it offers cross-platform hardware compatibility and a smaller image size. This means that, compared to
the Azure SQL Database public service, some specific features may not be available. For example, the Azure SQL
Database emulator does not support all features that are supported across multiple Azure SQL Database service
tiers. Limitations include:
Spatial data types
Memory-optimized tables in in-memory OLTP
HierarchyID data type
Full-text search
Azure Active Directory Integration
While lack of compatibility with some of these features can be impactful, the emulator is still a great tool for
local development and testing and supports most of the Azure SQL Database programmability surface.
In future releases, we plan to increase feature parity and provide higher fidelity with the Azure SQL Database public
service.
Refer to the Azure SQL Edge documentation for more specific details.
Next steps
Learn more about the local development experience for Azure SQL Database:
What is the local development experience for Azure SQL Database?
Set up a local development environment for Azure SQL Database
Quickstart: Create a local development environment for Azure SQL Database
Create a Database Project for a local Azure SQL Database development environment
Publish a Database Project for Azure SQL Database to the local emulator
Plan for Intel SGX enclaves and attestation in Azure
SQL Database
9/13/2022 • 2 minutes to read • Edit Online
NOTE
Intel SGX is not available in hardware other than DC-series. For example, Intel SGX is not available for Gen5 hardware, and
it is not available for databases using the DTU model.
IMPORTANT
Before you configure the DC-series hardware for your database, check the regional availability of DC-series and make sure
you understand its performance limitations. For details, see DC-series.
Next steps
Enable Intel SGX for your Azure SQL database
See also
Tutorial: Getting started with Always Encrypted with secure enclaves in Azure SQL Database
Enable Intel SGX for Always Encrypted for your
Azure SQL Database
9/13/2022 • 2 minutes to read • Edit Online
NOTE
Intel SGX is not available in hardware configurations other than DC-series. For example, Intel SGX is not available for Gen5
hardware, and it is not available for databases using the DTU model.
IMPORTANT
Before you configure the DC-series hardware for your database, check the regional availability of DC-series and make sure
you understand its performance limitations. For more information, see DC-series.
For detailed instructions for how to configure a new or existing database to use a specific hardware
configuration, see Hardware configuration.
Next steps
Configure Azure Attestation for your Azure SQL database server
See also
Tutorial: Getting started with Always Encrypted with secure enclaves in Azure SQL Database
Configure attestation for Always Encrypted using
Azure Attestation
9/13/2022 • 4 minutes to read • Edit Online
NOTE
Configuring attestation is the responsibility of the attestation administrator. See Roles and responsibilities when
configuring SGX enclaves and attestation.
IMPORTANT
An attestation provider gets created with the default policy for Intel SGX enclaves, which does not validate the code
running inside the enclave. Microsoft strongly advises that you set the recommended policy below, and not use the default
policy, for Always Encrypted with secure enclaves.
Microsoft recommends the following policy for attesting Intel SGX enclaves used for Always Encrypted in Azure
SQL Database:
version= 1.0;
authorizationrules
{
[ type=="x-ms-sgx-is-debuggable", value==false ]
&& [ type=="x-ms-sgx-product-id", value==4639 ]
&& [ type=="x-ms-sgx-svn", value>= 0 ]
&& [ type=="x-ms-sgx-mrsigner",
value=="e31c9e505f37a58de09335075fc8591254313eb20bb1a27e5443cc450b6e33e5"]
=> permit();
};
The product ID of the enclave matches the product ID assigned to Always Encrypted with secure enclaves.
Each enclave has a unique product ID that differentiates the enclave from other enclaves. The product
ID assigned to the Always Encrypted enclave is 4639.
The SVN allows Microsoft to respond to potential security bugs identified in the enclave code. If
a security issue is discovered and fixed, Microsoft will deploy a new version of the enclave with a new
(incremented) SVN. The above recommended policy will be updated to reflect the new SVN. By
updating your policy to match the recommended policy, you can ensure that if a malicious
administrator tries to load an older and insecure enclave, attestation will fail.
The library in the enclave has been signed using the Microsoft signing key (the value of the x-ms-sgx-
mrsigner claim is the hash of the signing key).
One of the main goals of attestation is to convince clients that the binary running in the enclave is the
binary that is supposed to run. Attestation policies provide two mechanisms for this purpose. One is
the mrenclave claim which is the hash of the binary that is supposed to run in an enclave. The
problem with the mrenclave is that the binary hash changes even with trivial changes to the code,
which makes it hard to rev the code running in the enclave. Hence, we recommend the use of the
mrsigner , which is a hash of a key that is used to sign the enclave binary. When Microsoft revs the
enclave, the mrsigner stays the same as long as the signing key does not change. In this way, it
becomes feasible to deploy updated binaries without breaking customers' applications.
IMPORTANT
Microsoft may need to rotate the key used to sign the Always Encrypted enclave binary, which is expected to be a rare
event. Before a new version of the enclave binary, signed with a new key, is deployed to Azure SQL Database, this article
will be updated to provide a new recommended attestation policy and instructions on how you should update the policy
in your attestation providers to ensure your applications continue to work uninterrupted.
For instructions for how to create an attestation provider and configure with an attestation policy using:
Quickstart: Set up Azure Attestation with Azure portal
IMPORTANT
When you configure your attestation policy with Azure portal, set Attestation Type to SGX-IntelSDK .
IMPORTANT
When you configure your attestation policy with Azure CLI, set the attestation-type parameter to
SGX-IntelSDK .
Next Steps
Manage keys for Always Encrypted with secure enclaves
See also
Tutorial: Getting started with Always Encrypted with secure enclaves in Azure SQL Database
Auditing for Azure SQL Database and Azure
Synapse Analytics
9/13/2022 • 15 minutes to read • Edit Online
NOTE
For information on Azure SQL Managed Instance auditing, see the following article, Get started with SQL Managed
Instance auditing.
Overview
You can use SQL Database auditing to:
Retain an audit trail of selected events. You can define categories of database actions to be audited.
Report on database activity. You can use pre-configured reports and a dashboard to get started quickly with
activity and event reporting.
Analyze reports. You can find suspicious events, unusual activity, and trends.
IMPORTANT
Auditing for Azure SQL Database, Azure Synapse and Azure SQL Managed Instance is optimized for availability and
performance of the database(s) or instance(s) that are being audited. During periods of very high activity or high network
load, the auditing feature may allow transactions to proceed without recording all of the events marked for auditing.
Auditing limitations
The user managed identity authentication type is not currently supported for enabling auditing to storage
behind a firewall.
Enabling auditing on a paused Azure Synapse is not supported. To enable auditing, resume Azure Synapse.
Auditing for Azure Synapse SQL pools supports default audit action groups only .
When you configure auditing for your logical server in Azure or for Azure SQL Database with a storage account
as the log destination, the target storage account must allow access with storage account keys. If the
storage account is configured to use Azure AD authentication only and is not configured for access key usage,
auditing cannot be configured.
Define server-level vs. database-level auditing policy
An auditing policy can be defined for a specific database or as a default server policy in Azure (which hosts SQL
Database or Azure Synapse):
A server policy applies to all existing and newly created databases on the server.
If server auditing is enabled, it always applies to the database. The database will be audited, regardless of
the database auditing settings.
When auditing policy is defined at the database-level to a Log Analytics workspace or an Event Hubs
destination, the following operations will not keep the source database-level auditing policy:
Database copy
Point-in-time restore
Geo-replication (Secondary database will not have database-level auditing)
Enabling auditing on the database, in addition to enabling it on the server, does not override or change
any of the settings of the server auditing. Both audits will exist side by side. In other words, the database
is audited twice in parallel; once by the server policy and once by the database policy.
NOTE
You should avoid enabling both server auditing and database blob auditing together, unless:
You want to use a different storage account, retention period or Log Analytics Workspace for a specific
database.
You want to audit event types or categories for a specific database that differ from the rest of the databases on
the server. For example, you might have table inserts that need to be audited only for a specific database.
Otherwise, we recommend that you enable only server-level auditing and leave the database-level auditing
disabled for all databases.
Remarks
Premium storage with BlockBlobStorage is supported. Standard storage is supported. However, for audit
to write to a storage account behind a VNet or firewall, you must have a general-purpose v2 storage
account . If you have a general-purpose v1 or blob storage account, upgrade to a general-purpose v2
storage account. For specific instructions see, Write audit to a storage account behind VNet and firewall. For
more information, see Types of storage accounts.
Hierarchical namespace is supported for all types of standard storage accounts and for premium storage accounts
with BlockBlobStorage.
Audit logs are written to Append Blobs in Azure Blob Storage in your Azure subscription
Audit logs are in .xel format and can be opened by using SQL Server Management Studio (SSMS).
To configure an immutable log store for the server or database-level audit events, follow the instructions
provided by Azure Storage. Make sure you have selected Allow additional appends when you configure
the immutable blob storage.
You can write audit logs to an Azure Storage account behind a VNet or firewall.
For details about the log format, hierarchy of the storage folder and naming conventions, see the Blob Audit
Log Format Reference.
Auditing on Read-Only Replicas is automatically enabled. For further details about the hierarchy of the
storage folders, naming conventions, and log format, see the SQL Database Audit Log Format.
When using Azure AD Authentication, failed login records will not appear in the SQL audit log. To view failed
login audit records, you need to visit the Azure Active Directory portal, which logs details of these events.
Logins are routed by the gateway to the specific instance where the database is located. In the case of Azure
AD logins, the credentials are verified before attempting to use that user to log in to the requested database.
In the case of failure, the requested database is never accessed, so no auditing occurs. In the case of SQL
logins, the credentials are verified on the requested database, so in this case they can be audited. Successful
logins, which obviously reach the database, are audited in both cases.
After you've configured your auditing settings, you can turn on the new threat detection feature and
configure emails to receive security alerts. When you use threat detection, you receive proactive alerts on
anomalous database activities that can indicate potential security threats. For more information, see Getting
started with threat detection.
After a database with auditing enabled is copied to another logical server, you may receive an email notifying
you that the audit failed. This is a known issue and auditing should work as expected on the newly copied
database.
NOTE
Enabling auditing on a paused dedicated SQL pool is not possible. To enable auditing, un-pause the dedicated SQL
pool. Learn more about dedicated SQL pool.
When auditing is configured to a Log Analytics workspace or to an Event Hubs destination via the Azure portal or
PowerShell cmdlet, a Diagnostic Setting is created with "SQLSecurityAuditEvents" category enabled.
To review the audit logs of Microsoft Support operations in your Log Analytics workspace, use the following
query:
AzureDiagnostics
| where Category == "DevOpsOperationsAudit"
You have the option of choosing a different storage destination for this auditing log, or of using the same auditing
configuration as your server.
Audit to storage destination
To configure writing audit logs to a storage account, select Storage when you get to the Auditing section.
Select the Azure storage account where you want to save your logs. You can use two storage
authentication types: managed identity and storage access keys. For managed identity, both system-assigned and
user-assigned managed identities are supported. By default, the primary user identity assigned to the server is
selected. If there is no user identity, a system-assigned identity is created and used for authentication purposes.
After you have chosen an authentication type, select a retention period by opening Advanced properties and
selecting Save . Logs older than the retention period are deleted.
NOTE
If you are deploying from the Azure portal, be sure that the storage account is in the same region as your database and
server. If you are deploying through other methods, the storage account can be in any region.
The default value for the retention period is 0 (unlimited retention). You can change this value by moving the
Retention (Days) slider in Advanced properties when configuring the storage account for auditing.
If you change the retention period from 0 (unlimited retention) to any other value, note that
retention applies only to logs written after the retention value was changed. Logs written during the
period when retention was set to unlimited are preserved, even after retention is enabled.
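If you prefer scripting over the portal, a minimal PowerShell sketch of the same server-level configuration with the Az.Sql module (all resource names and the retention value are placeholders):
# Enable server-level auditing to a storage account and keep logs for 90 days.
Set-AzSqlServerAudit -ResourceGroupName "myResourceGroup" -ServerName "myserver" `
    -BlobStorageTargetState Enabled `
    -StorageAccountResourceId "/subscriptions/<subscription-id>/resourceGroups/myResourceGroup/providers/Microsoft.Storage/storageAccounts/mystorageaccount" `
    -RetentionInDays 90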
Audit to Log Analytics destination
To configure writing audit logs to a Log Analytics workspace, select Log Analytics and open Log Analytics
details . Select the Log Analytics workspace where logs will be written and then click OK . If you have not created
a Log Analytics workspace, see Create a Log Analytics workspace in the Azure portal.
For more details about Azure Monitor Log Analytics workspace, see Designing your Azure Monitor Logs
deployment
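A matching PowerShell sketch for a database-level audit, assuming an existing workspace (resource IDs are placeholders):
# Enable database-level auditing to an existing Log Analytics workspace.
Set-AzSqlDatabaseAudit -ResourceGroupName "myResourceGroup" -ServerName "myserver" -DatabaseName "mydatabase" `
    -LogAnalyticsTargetState Enabled `
    -WorkspaceResourceId "/subscriptions/<subscription-id>/resourceGroups/myResourceGroup/providers/Microsoft.OperationalInsights/workspaces/myworkspace"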
Audit to Event Hubs destination
To configure writing audit logs to an event hub, select Event Hub . Select the event hub where logs will be
written and then click Save . Be sure that the event hub is in the same region as your database and server.
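A minimal PowerShell sketch of the same setting, assuming an existing event hub and authorization rule (resource IDs are placeholders):
# Enable server-level auditing to an event hub in the same region as the server.
Set-AzSqlServerAudit -ResourceGroupName "myResourceGroup" -ServerName "myserver" `
    -EventHubTargetState Enabled `
    -EventHubName "myeventhub" `
    -EventHubAuthorizationRuleResourceId "/subscriptions/<subscription-id>/resourceGroups/myResourceGroup/providers/Microsoft.EventHub/namespaces/myehnamespace/authorizationrules/RootManageSharedAccessKey"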
Analyze audit logs and reports
If you chose to write audit logs to Log Analytics:
Use the Azure portal. Open the relevant database. At the top of the database's Auditing page, select
View audit logs .
Alternatively, you can also access the audit logs from Log Analytics blade. Open your Log Analytics
workspace and under General section, click Logs . You can start with a simple query, such as: search
"SQLSecurityAuditEvents" to view the audit logs. From here, you can also use Azure Monitor logs to run
advanced searches on your audit log data. Azure Monitor logs gives you real-time operational insights
using integrated search and custom dashboards to readily analyze millions of records across all your
workloads and servers. For additional useful information about Azure Monitor logs search language and
commands, see Azure Monitor logs search reference.
If you chose to write audit logs to Event Hub:
To consume audit logs data from Event Hub, you will need to set up a stream to consume events and write
them to a target. For more information, see Azure Event Hubs Documentation.
Audit logs in Event Hub are captured in the body of Apache Avro events and stored using JSON formatting
with UTF-8 encoding. To read the audit logs, you can use Avro Tools or similar tools that process this format.
If you chose to write audit logs to an Azure storage account, there are several methods you can use to view the
logs:
Audit logs are aggregated in the account you chose during setup. You can explore audit logs by using a
tool such as Azure Storage Explorer. In Azure storage, auditing logs are saved as a collection of blob files
within a container named sqldbauditlogs . For further details about the hierarchy of the storage folders,
naming conventions, and log format, see the SQL Database Audit Log Format.
Use the Azure portal. Open the relevant database. At the top of the database's Auditing page, click View
audit logs .
The Audit records pane opens, from which you can view the logs.
You can view specific dates by clicking Filter at the top of the Audit records page.
You can switch between audit records that were created by the server audit policy and the
database audit policy by toggling Audit Source .
Use the system function sys.fn_get_audit_file (T-SQL) to return the audit log data in tabular format. For
more information on using this function, see sys.fn_get_audit_file.
Use Merge Audit Files in SQL Server Management Studio (starting with SSMS 17):
1. From the SSMS menu, select File > Open > Merge Audit Files .
2. The Add Audit Files dialog box opens. Select one of the Add options to choose whether to
merge audit files from a local disk or import them from Azure Storage. You are required to provide
your Azure Storage details and account key.
3. After all files to merge have been added, click OK to complete the merge operation.
4. The merged file opens in SSMS, where you can view and analyze it, as well as export it to an XEL or
CSV file, or to a table.
Use Power BI. You can view and analyze audit log data in Power BI. For more information and to access a
downloadable template, see Analyze audit log data in Power BI.
Download log files from your Azure Storage blob container via the portal or by using a tool such as Azure
Storage Explorer.
After you have downloaded a log file locally, double-click the file to open, view, and analyze the logs in
SSMS.
You can also download multiple files simultaneously via Azure Storage Explorer. To do so, right-click a
specific subfolder and select Save as to save in a local folder.
Additional methods:
After downloading several files or a subfolder that contains log files, you can merge them locally as
described earlier in the SSMS Merge Audit Files instructions.
View blob auditing logs programmatically: Query Extended Events Files by using PowerShell.
Production practices
Auditing geo -replicated databases
With geo-replicated databases, when you enable auditing on the primary database, the secondary database will
have an identical auditing policy. It is also possible to set up auditing on the secondary database by enabling
auditing on the secondary server , independently from the primary database.
Server-level (recommended ): Turn on auditing on both the primary server and the secondary
server . The primary and secondary databases will each be audited independently, based on their respective
server-level policies.
Database-level: Database-level auditing for secondary databases can only be configured from the primary
database's auditing settings.
Auditing must be enabled on the primary database itself, not the server.
After auditing is enabled on the primary database, it will also be enabled on the secondary
database.
IMPORTANT
With database-level auditing, the storage settings for the secondary database will be identical to those of
the primary database, causing cross-regional traffic. We recommend that you enable only server-level
auditing, and leave the database-level auditing disabled for all databases.
2. Go to the storage configuration page and regenerate the primary access key.
3. Go back to the auditing configuration page, switch the storage access key from secondary to primary,
and then click OK . Then click Save at the top of the auditing configuration page.
4. Go back to the storage configuration page and regenerate the secondary access key (in preparation for
the next key's refresh cycle).
NOTE
The linked samples are on an external public repository and are provided 'as is', without warranty, and are not supported
under any Microsoft support program/service.
See also
Data Exposed episode What's New in Azure SQL Auditing on Channel 9.
Auditing for SQL Managed Instance
Auditing for SQL Server
SQL Database audit log format
9/13/2022 • 4 minutes to read • Edit Online
Applies to: Azure SQL Database Azure SQL Managed Instance Azure Synapse Analytics
Azure SQL Database auditing tracks database events and writes them to an audit log in your Azure storage
account, or sends them to Event Hub or Log Analytics for downstream processing and analysis.
Naming conventions
Blob audit
Audit logs stored in Azure Blob storage are stored in a container named sqldbauditlogs in the Azure storage
account. The directory hierarchy within the container is of the form
<ServerName>/<DatabaseName>/<AuditName>/<Date>/ . The Blob file name format is
<CreationTime>_<FileNumberInSession>.xel , where CreationTime is in UTC hh_mm_ss_ms format, and
FileNumberInSession is a running index in case the session's logs span multiple blob files.
For example, for database Database1 on Server1 the following is a possible valid path:
Server1/Database1/SqlDbAuditing_ServerAudit_NoRetention/2019-02-03/12_23_30_794_0.xel
Read-only replica audit logs are stored in the same container. The directory hierarchy within the container is of
the form <ServerName>/<DatabaseName>/<AuditName>/<Date>/RO/ . The blob file name shares the same format.
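As an illustration of this layout, the following PowerShell sketch lists the audit blobs for one database (account, server, and database names are placeholders):
# List the .xel audit blobs for Database1 on Server1 in the sqldbauditlogs container.
$ctx = (Get-AzStorageAccount -ResourceGroupName "myResourceGroup" -Name "mystorageaccount").Context
Get-AzStorageBlob -Container "sqldbauditlogs" -Prefix "Server1/Database1/" -Context $ctx |
    Select-Object Name, Length, LastModified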
Event Hub
Audit events are written to the namespace and event hub that was defined during auditing configuration, and
are captured in the body of Apache Avro events and stored using JSON formatting with UTF-8 encoding. To read
the audit logs, you can use Avro Tools or similar tools that process this format.
Log Analytics
Audit events are written to the Log Analytics workspace that was defined during auditing configuration, to the
AzureDiagnostics table with the category SQLSecurityAuditEvents . For additional useful information about the Log
Analytics search language and commands, see the Log Analytics search reference.
NOTE
In Azure Synapse Analytics, the Azure SQL logical server DNS alias is only supported for dedicated SQL Pool (formerly
DW). For dedicated SQL pools in Azure Synapse workspaces, the DNS alias is not currently supported. What's the
difference?
NOTE
This article uses the Azure Az PowerShell module, which is the recommended PowerShell module for interacting with
Azure. To get started with the Az PowerShell module, see Install Azure PowerShell. To learn how to migrate to the Az
PowerShell module, see Migrate Azure PowerShell from AzureRM to Az.
IMPORTANT
The PowerShell Azure Resource Manager module is still supported, but all future development is for the Az.Sql module.
For these cmdlets, see AzureRM.Sql. The arguments for the commands in the Az module and in the AzureRm modules are
substantially identical.
Limitations
Presently, a DNS alias has the following limitations:
Delay of up to 2 minutes: It takes up to 2 minutes for a DNS alias to be updated or removed.
Regardless of any brief delay, the alias immediately stops referring client connections to the legacy
server.
DNS lookup: For now, the only authoritative way to check what server a given DNS alias refers to is by
performing a DNS lookup.
Table auditing is not supported: You cannot use a DNS alias on a server that has table auditing enabled on a
database.
Table auditing is deprecated.
We recommend that you move to Blob Auditing.
DNS alias is subject to naming restrictions.
Related resources
Overview of business continuity with Azure SQL Database, including disaster recovery.
Azure REST API reference
Server Dns Aliases API
Next steps
PowerShell for DNS Alias to Azure SQL Database
Azure SQL Database and Azure Synapse Analytics
network access controls
9/13/2022 • 6 minutes to read • Edit Online
When you create a logical SQL server from the Azure portal for Azure SQL Database and Azure Synapse
Analytics, the result is a public endpoint in the format yourservername.database.windows.net.
You can use the following network access controls to selectively allow access to a database via the public
endpoint:
Allow Azure Services: When set to ON, other resources within the Azure boundary, for example an Azure
Virtual Machine, can access SQL Database
IP firewall rules: Use this feature to explicitly allow connections from a specific IP address, for example from
on-premises machines
You can also allow private access to the database from virtual networks via:
Virtual network firewall rules: Use this feature to allow traffic from a specific virtual network within the Azure
boundary
Private Link: Use this feature to create a private endpoint for logical SQL server within a specific virtual
network
IMPORTANT
This article does not apply to SQL Managed Instance . For more information about the networking configuration, see
connecting to Azure SQL Managed Instance .
See the below video for a high-level explanation of these access controls and what they do:
When set to ON , your server allows communications from all resources inside the Azure boundary, which may or
may not be part of your subscription.
In many cases, the ON setting is more permissive than what most customers want. You may want to set this
setting to OFF and replace it with more restrictive IP firewall rules or virtual network firewall rules.
However, doing so affects the following features that run on virtual machines in Azure that aren't part of your
virtual network and hence connect to the database via an Azure IP address:
Import Export Service
Import Export Service doesn't work when Allow access to Azure services is set to OFF . However, you can
work around the problem by manually running sqlpackage.exe from an Azure VM or performing the export
directly in your code by using the DACFx API.
Data Sync
To use the Data Sync feature with Allow access to Azure services set to OFF , you need to create individual
firewall rule entries to add IP addresses from the Sql service tag for the region hosting the Hub database. Add
these server-level firewall rules to the servers hosting both Hub and Member databases (which may be in
different regions).
Use the following PowerShell script to generate IP addresses corresponding to the SQL service tag for West US
region
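A minimal sketch of such a script, assuming the Az.Network module and filtering to the Sql.WestUS tag:
# Retrieve the service tag information and keep only the SQL tag for West US.
$serviceTags = Get-AzNetworkServiceTag -Location westus
$sql = $serviceTags.Values | Where-Object { $_.Name -eq "Sql.WestUS" }
# CIDR ranges for the SQL service in West US
$sql.Properties.AddressPrefixes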
TIP
Get-AzNetworkServiceTag returns the global range for SQL Service Tag despite specifying the Location parameter. Be sure
to filter it to the region that hosts the Hub database used by your sync group
Note that the output of the PowerShell script is in Classless Inter-Domain Routing (CIDR) notation. This needs to
be converted to a format of Start and End IP address using Get-IPrangeStartEnd.ps1 like this:
You can use this additional PowerShell script to convert all the IP addresses from CIDR to Start and End IP
address format.
PS C:\>foreach( $i in $sql.Properties.AddressPrefixes) {$ip,$cidr= $i.split('/') ; Get-IPrangeStartEnd -ip
$ip -cidr $cidr;}
start end
----- ---
13.86.216.0 13.86.216.127
13.86.216.128 13.86.216.191
13.86.216.192 13.86.216.223
You can now add these as distinct firewall rules and then set Allow Azure services to access server to OFF.
IP firewall rules
The IP-based firewall is a feature of the logical SQL server in Azure that blocks all access to your server until you
explicitly add the IP addresses of the client machines.
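For example, a minimal PowerShell sketch that allows a single client IP address (names and the address are placeholders):
# Allow one on-premises client IP address through the server-level firewall.
New-AzSqlServerFirewallRule -ResourceGroupName "myResourceGroup" -ServerName "myserver" `
    -FirewallRuleName "AllowClientIp" -StartIpAddress "203.0.113.25" -EndIpAddress "203.0.113.25"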
Private Link
Private Link allows you to connect to a server via a private endpoint . A private endpoint is a private IP address
within a specific virtual network and Subnet.
Next steps
For a quickstart on creating a server-level IP firewall rule, see Create a database in SQL Database.
For a quickstart on creating a server-level virtual network firewall rule, see Virtual Network service
endpoints and rules for Azure SQL Database.
For help with connecting to a database in SQL Database from open source or third-party applications, see
Client quickstart code samples to SQL Database.
For information on additional ports that you may need to open, see the SQL Database: Outside vs
inside section of Ports beyond 1433 for ADO.NET 4.5 and SQL Database
For an overview of Azure SQL Database Connectivity, see Azure SQL Connectivity Architecture
For an overview of Azure SQL Database security, see Securing your database
Outbound firewall rules for Azure SQL Database
and Azure Synapse Analytics
9/13/2022 • 3 minutes to read • Edit Online
Applies to: Azure SQL Database Azure Synapse Analytics (dedicated SQL pool (formerly SQL DW)
only)
Outbound firewall rules limit network traffic from the Azure SQL logical server to a customer defined list of
Azure Storage accounts and Azure SQL logical servers. Any attempt to access storage accounts or databases not
in this list is denied. The following Azure SQL Database features support this feature:
Auditing
Vulnerability assessment
Import/Export service
OPENROWSET
Bulk Insert
Elastic query
IMPORTANT
This article applies to both Azure SQL Database and dedicated SQL pool (formerly SQL DW) in Azure Synapse
Analytics. These settings apply to all SQL Database and dedicated SQL pool (formerly SQL DW) databases associated
with the server. For simplicity, the term 'database' refers to both databases in Azure SQL Database and Azure Synapse
Analytics. Likewise, any reference to 'server' refers to the logical SQL server that hosts Azure SQL Database and
dedicated SQL pool (formerly SQL DW) in Azure Synapse Analytics. This article does not apply to Azure SQL Managed
Instance or dedicated SQL pools in Azure Synapse Analytics workspaces.
Outbound firewall rules are defined at the logical server. Geo-replication and Auto-failover groups require the same
set of rules to be defined on the primary and all secondaries.
3. After you're done, you should see a screen similar to the one below. Select OK to apply these settings.
The following PowerShell script shows how to change the outbound networking setting (using the
RestrictOutboundNetworkAccess property):
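A minimal sketch using the generic Az resource cmdlets (resource names are placeholders; the property is set through the ARM resource rather than a dedicated Az.Sql parameter, which may differ by module version):
# Read the logical server resource, then restrict its outbound network access.
$server = Get-AzResource -ResourceGroupName "myResourceGroup" -ResourceName "myserver" `
    -ResourceType "Microsoft.Sql/servers" -ExpandProperties
$server.Properties.RestrictOutboundNetworkAccess = "Enabled"
$server | Set-AzResource -Force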
Next steps
For an overview of Azure SQL Database security, see Securing your database.
For an overview of Azure SQL Database connectivity, see Azure SQL Connectivity Architecture.
Learn more about Azure SQL Database and Azure Synapse Analytics network access controls.
Learn about Azure Private Link for Azure SQL Database and Azure Synapse Analytics.
Azure Private Link for Azure SQL Database and
Azure Synapse Analytics
9/13/2022 • 9 minutes to read • Edit Online
Applies to: Azure SQL Database Azure Synapse Analytics (dedicated SQL pool (formerly SQL DW)
only)
Private Link allows you to connect to various PaaS services in Azure via a private endpoint . For a list of PaaS
services that support Private Link functionality, go to the Private Link Documentation page. A private endpoint is
a private IP address within a specific VNet and subnet.
IMPORTANT
This article applies to both Azure SQL Database and dedicated SQL pool (formerly SQL DW) in Azure Synapse Analytics.
These settings apply to all SQL Database and dedicated SQL pool (formerly SQL DW) databases associated with the
server. For simplicity, the term 'database' refers to both databases in Azure SQL Database and Azure Synapse Analytics.
Likewise, any reference to 'server' refers to the logical server that hosts Azure SQL Database and dedicated SQL
pool (formerly SQL DW) in Azure Synapse Analytics. This article does not apply to Azure SQL Managed Instance or
dedicated SQL pools in Azure Synapse Analytics workspaces.
3. The SQL admin can choose to approve or reject a PEC and optionally add a short text response.
4. After approval or rejection, the list will reflect the appropriate state along with the response text.
IMPORTANT
When you add a private endpoint connection, public routing to your logical server isn't blocked by default. In the
Firewall and virtual networks pane, the setting Deny public network access is not selected by default. To disable
public network access, ensure that you select Deny public network access .
When Telnet connects successfully, you'll see a blank screen in the command window.
Check connectivity using PsPing
>psping.exe mysqldbsrvr.database.windows.net:1433
...
TCP connect to 10.9.0.4:1433:
5 iterations (warmup 1) ping test:
Connecting to 10.9.0.4:1433 (warmup): from 10.6.0.4:49953: 2.83ms
Connecting to 10.9.0.4:1433: from 10.6.0.4:49954: 1.26ms
Connecting to 10.9.0.4:1433: from 10.6.0.4:49955: 1.98ms
Connecting to 10.9.0.4:1433: from 10.6.0.4:49956: 1.43ms
Connecting to 10.9.0.4:1433: from 10.6.0.4:49958: 2.28ms
The output shows that PsPing could reach the private IP address associated with the private endpoint.
Check connectivity using Nmap
Nmap (Network Mapper) is a free and open-source tool used for network discovery and security auditing. For
more information and the download link, visit https://nmap.org. You can use this tool to ensure that the private
endpoint is listening for connections on port 1433.
Run Nmap as follows by providing the address range of the subnet that hosts the private endpoint.
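For example (the subnet range below is a placeholder that matches the 10.9.0.x addresses used earlier):
# Scan the private endpoint subnet for listeners on port 1433.
nmap -Pn -p 1433 10.9.0.0/24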
NOTE
Use the Fully Qualified Domain Name (FQDN) of the server in connection strings for your clients (
<server>.database.windows.net ). Any login attempts made directly to the IP address or using the private link FQDN (
<server>.privatelink.database.windows.net ) shall fail. This behavior is by design, since private endpoint routes traffic
to the SQL Gateway in the region and the correct FQDN needs to be specified for logins to succeed.
Follow the steps here to use SSMS to connect to the SQL Database. After you connect to the SQL Database by using
SSMS, the following query should return a client_net_address that matches the private IP address of the Azure VM
you are connecting from:
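For example, you can check the address of your own session:
SELECT client_net_address FROM sys.dm_exec_connections WHERE session_id = @@SPID;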
Limitations
Connections to private endpoint only support Proxy as the connection policy
Next steps
For an overview of Azure SQL Database security, see Securing your database
For an overview of Azure SQL Database connectivity, see Azure SQL Connectivity Architecture
You may also be interested in the Web app with private connectivity to Azure SQL database architecture
scenario, which connects a web application outside of the virtual network to the private endpoint of a
database.
Use virtual network service endpoints and rules for
servers in Azure SQL Database
9/13/2022 • 14 minutes to read • Edit Online
NOTE
This article applies to both SQL Database and Azure Synapse Analytics. For simplicity, the term database refers to both
databases in SQL Database and Azure Synapse Analytics. Likewise, any references to server refer to the logical SQL server
that hosts SQL Database and Azure Synapse Analytics.
To create a virtual network rule, there must first be a virtual network service endpoint for the rule to reference.
NOTE
In some cases, the database in SQL Database and the virtual network subnet are in different subscriptions. In these cases,
you must ensure the following configurations:
Both subscriptions must be in the same Azure Active Directory (Azure AD) tenant.
The user has the required permissions to initiate operations, such as enabling service endpoints and adding a virtual
network subnet to the given server.
Both subscriptions must have the Microsoft.Sql provider registered.
Limitations
For SQL Database, the virtual network rules feature has the following limitations:
In the firewall for your database in SQL Database, each virtual network rule references a subnet. All these
referenced subnets must be hosted in the same geographic region that hosts the database.
Each server can have up to 128 ACL entries for any virtual network.
Virtual network rules apply only to Azure Resource Manager virtual networks and not to classic deployment
model networks.
Turning on virtual network service endpoints to SQL Database also enables the endpoints for Azure Database
for MySQL and Azure Database for PostgreSQL. With endpoints set to ON , attempts to connect from the
endpoints to your Azure Database for MySQL or Azure Database for PostgreSQL instances might fail.
The underlying reason is that Azure Database for MySQL and Azure Database for PostgreSQL likely
don't have a virtual network rule configured. You must configure a virtual network rule for Azure
Database for MySQL and Azure Database for PostgreSQL.
To define virtual network firewall rules on a SQL logical server that's already configured with private
endpoints, set Deny public network access to No .
On the firewall, IP address ranges do apply to the following networking items, but virtual network rules don't:
Site-to-site (S2S) virtual private network (VPN)
On-premises via Azure ExpressRoute
Considerations when you use service endpoints
When you use service endpoints for SQL Database, review the following considerations:
Outbound to Azure SQL Database public IPs is required. Network security groups (NSGs) must be
opened to SQL Database IPs to allow connectivity. You can do this by using NSG service tags for SQL
Database.
ExpressRoute
If you use ExpressRoute from your premises, for public peering or Microsoft peering, you'll need to identify the
NAT IP addresses that are used. For public peering, each ExpressRoute circuit by default uses two NAT IP
addresses applied to Azure service traffic when the traffic enters the Microsoft Azure network backbone. For
Microsoft peering, the NAT IP addresses that are used are provided by either the customer or the service
provider. To allow access to your service resources, you must allow these public IP addresses in the resource IP
firewall setting. To find your public peering ExpressRoute circuit IP addresses, open a support ticket with
ExpressRoute via the Azure portal. To learn more about NAT for ExpressRoute public and Microsoft peering, see
NAT requirements for Azure public peering.
To allow communication from your circuit to SQL Database, you must create IP network rules for the public IP
addresses of your NAT.
IMPORTANT
The PowerShell Azure Resource Manager module is still supported by Azure SQL Database, but all future development is
for the Az.Sql module. The AzureRM module will continue to receive bug fixes until at least December 2020. The
arguments for the commands in the Az module and in the AzureRm modules are substantially identical. For more about
their compatibility, see Introducing the new Azure PowerShell Az module.
Steps
1. If you have a standalone dedicated SQL pool (formerly SQL DW), register your SQL server with Azure AD
by using PowerShell:
Connect-AzAccount
Select-AzSubscription -SubscriptionId <subscriptionId>
Set-AzSqlServer -ResourceGroupName your-database-server-resourceGroup -ServerName your-SQL-servername -AssignIdentity
This step isn't required for the dedicated SQL pools within an Azure Synapse Analytics workspace. The
system assigned managed identity (SA-MI) of the workspace is a member of the Synapse Administrator
role and thus has elevated privileges on the dedicated SQL pools of the workspace.
2. Create a general-purpose v2 Storage Account by following the steps in Create a storage account.
If you have a general-purpose v1 or Blob Storage account, you must first upgrade to v2 by following
the steps in Upgrade to a general-purpose v2 storage account.
For known issues with Azure Data Lake Storage Gen2, see Known issues with Azure Data Lake Storage
Gen2.
3. On your storage account page, select Access control (IAM) .
4. Select Add > Add role assignment to open the Add role assignment page.
5. Assign the Storage Blob Data Contributor role to the managed identity of your server. For detailed steps, see Assign Azure roles using the Azure portal.
NOTE
Only members with Owner privilege on the storage account can perform this step. For various Azure built-in roles,
see Azure built-in roles.
CREATE DATABASE SCOPED CREDENTIAL msi_cred WITH IDENTITY = 'Managed Service Identity';
There's no need to specify SECRET with an Azure Storage access key because this
mechanism uses Managed Identity under the covers. This step isn't required for the
dedicated SQL pools within an Azure Synapse Analytics workspace. The system assigned
managed identity (SA-MI) of the workspace is a member of the Synapse Administrator role
and thus has elevated privileges on the dedicated SQL pools of the workspace.
The IDENTITY name must be 'Managed Service Identity' for PolyBase connectivity to
work with an Azure Storage account secured to a virtual network.
c. Create an external data source with the abfss:// scheme for connecting to your general-purpose
v2 storage account using PolyBase.
If you already have external tables associated with a general-purpose v1 or Blob Storage
account, you should first drop those external tables. Then drop the corresponding external data
source. Next, create an external data source with the abfss:// scheme that connects to a
general-purpose v2 storage account, as previously shown. Then re-create all the external tables
by using this new external data source. You could use the Generate and Publish Scripts Wizard
to generate create-scripts for all the external tables for ease.
For more information on the abfss:// scheme, see Use the Azure Data Lake Storage Gen2 URI.
For more information on the T-SQL commands, see CREATE EXTERNAL DATA SOURCE.
d. Query as normal by using external tables.
SQL Database blob auditing
Azure SQL auditing can write SQL audit logs to your own storage account. If this storage account uses the
virtual network service endpoints feature, see how to write audit to a storage account behind VNet and firewall.
NOTE
For similar instructions in Azure Synapse Analytics, see Azure Synapse Analytics IP firewall rules
Prerequisites
You must already have a subnet that's tagged with the particular virtual network service endpoint type name
relevant to SQL Database.
The relevant endpoint type name is Microsoft.Sql .
If your subnet might not be tagged with the type name, see Verify your subnet is an endpoint.
4. In the new Create/Update pane, fill in the boxes with the names of your Azure resources.
TIP
You must include the correct address prefix for your subnet. You can find the Address prefix value in the portal.
Go to All resources > All types > Virtual networks . The filter displays your virtual networks. Select your
virtual network, and then select Subnets . The ADDRESS RANGE column has the address prefix you need.
5. See the resulting virtual network rule on the Firewall pane.
6. Set Allow Azure services and resources to access this server to No .
IMPORTANT
If you leave Allow Azure services and resources to access this server checked, your server accepts
communication from any subnet inside the Azure boundary, that is, from any IP address recognized as falling
within the ranges defined for Azure datacenters. Leaving the control enabled might grant more access than you
want from a security point of view. The Microsoft Azure Virtual Network service endpoint feature, in coordination
with the virtual network rules feature of SQL Database, can reduce your security surface area.
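If you prefer scripting, a minimal PowerShell sketch that enables the service endpoint on a subnet and then creates the virtual network rule (all names and address prefixes are placeholders):
# Add the Microsoft.Sql service endpoint to the subnet.
$vnet = Get-AzVirtualNetwork -ResourceGroupName "myResourceGroup" -Name "myVNet"
$vnet = Set-AzVirtualNetworkSubnetConfig -VirtualNetwork $vnet -Name "mySubnet" `
    -AddressPrefix "10.0.1.0/24" -ServiceEndpoint "Microsoft.Sql" | Set-AzVirtualNetwork
# Create the virtual network rule on the logical server for that subnet.
$subnetId = ($vnet.Subnets | Where-Object { $_.Name -eq "mySubnet" }).Id
New-AzSqlServerVirtualNetworkRule -ResourceGroupName "myResourceGroup" -ServerName "myserver" `
    -VirtualNetworkRuleName "myVNetRule" -VirtualNetworkSubnetId $subnetId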
Related articles
Azure virtual network service endpoints
Server-level and database-level firewall rules
Next steps
Use PowerShell to create a virtual network service endpoint and then a virtual network rule for SQL
Database
Virtual network rules: Operations with REST APIs
Azure SQL Database server roles for permission
management
9/13/2022 • 8 minutes to read • Edit Online
NOTE
The fixed server-level roles in this article are in public preview for Azure SQL Database. These server-level roles are also
part of the release for SQL Server 2022.
NOTE
The roles concept in this article is similar to groups in the Windows operating system.
These special fixed server-level roles use the prefix ##MS_ and the suffix ## to distinguish from other regular
user-created principals.
Like SQL Server on-premises, server permissions are organized hierarchically. The permissions that are held by
these server-level roles can propagate to database permissions. For the permissions to be effectively useful at
the database level, a login needs to either be a member of the server-level role ##MS_DatabaseConnector## ,
which grants CONNECT to all databases, or have a user account in individual databases. This also applies to the
virtual master database.
For example, the server-level role ##MS_ServerStateReader## holds the permission VIEW SERVER STATE .
If a login that is a member of this role has a user account in the databases master and WideWorldImporters , this
user will have the permission VIEW DATABASE STATE in those two databases.
NOTE
Any permission can be denied within user databases, in effect, overriding the server-wide grant via role membership.
However, in the system database master, permissions cannot be granted or denied.
Azure SQL Database currently provides 7 fixed server roles. The permissions that are granted to the fixed server
roles can't be changed and these roles can't have other fixed roles as members. You can add server-level logins
as members to server-level roles.
IMPORTANT
Each member of a fixed server role can add other logins to that same role.
For more information on Azure SQL Database logins and users, see Authorize database access to SQL Database,
SQL Managed Instance, and Azure Synapse Analytics.
Fixed server-level roles
The following table shows the fixed server-level roles and their capabilities.
##MS_DefinitionReader##
Server-level permissions: VIEW ANY DATABASE, VIEW ANY DEFINITION, VIEW ANY SECURITY DEFINITION
Database-level permissions: VIEW DEFINITION, VIEW SECURITY DEFINITION
##MS_ServerStateReader##
Server-level permissions: VIEW SERVER STATE, VIEW SERVER PERFORMANCE STATE, VIEW SERVER SECURITY STATE
Database-level permissions: VIEW DATABASE STATE, VIEW DATABASE PERFORMANCE STATE, VIEW DATABASE SECURITY STATE
##MS_ServerStateManager##
Server-level permissions: ALTER SERVER STATE, VIEW SERVER STATE, VIEW SERVER PERFORMANCE STATE, VIEW SERVER SECURITY STATE
Database-level permissions: VIEW DATABASE STATE, VIEW DATABASE PERFORMANCE STATE, VIEW DATABASE SECURITY STATE
sys.sql_logins (Transact-SQL): Metadata; returns one row for each SQL login.
Examples
The examples in this section show how to work with server-level roles in Azure SQL Database.
A. Adding a SQL login to a server-level role
The following example adds the SQL login 'Jiao' to the server-level role ##MS_ServerStateReader##. This
statement has to be run in the virtual master database.
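ALTER SERVER ROLE ##MS_ServerStateReader##
    ADD MEMBER [Jiao];
GO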
B. Listing all principals (SQL authentication) which are members of a server-level role
The following statement returns all members of any fixed server-level role using the sys.server_role_members
and sys.sql_logins catalog views. This statement has to be run in the virtual master database.
SELECT
sql_logins.principal_id AS MemberPrincipalID
, sql_logins.name AS MemberPrincipalName
, roles.principal_id AS RolePrincipalID
, roles.name AS RolePrincipalName
FROM sys.server_role_members AS server_role_members
INNER JOIN sys.server_principals AS roles
ON server_role_members.role_principal_id = roles.principal_id
INNER JOIN sys.sql_logins AS sql_logins
ON server_role_members.member_principal_id = sql_logins.principal_id
;
GO
C. Complete example: Adding a login to a server-level role, retrieving metadata for role membership and
permissions, and running a test query
Part 1: Preparing role membership and user account
Run this command from the virtual master database.
SELECT
sql_logins.principal_id AS MemberPrincipalID
, sql_logins.name AS MemberPrincipalName
, roles.principal_id AS RolePrincipalID
, roles.name AS RolePrincipalName
FROM sys.server_role_members AS server_role_members
INNER JOIN sys.server_principals AS roles
ON server_role_members.role_principal_id = roles.principal_id
INNER JOIN sys.sql_logins AS sql_logins
ON server_role_members.member_principal_id = sql_logins.principal_id
;
GO
-- Does the currently logged in User have the `VIEW DATABASE STATE`-permission?
SELECT HAS_PERMS_BY_NAME(NULL, 'DATABASE', 'VIEW DATABASE STATE');
--> 1 = Yes
-- example query:
SELECT * FROM sys.dm_exec_query_stats
--> will return data since this user has the necessary permission
See also
Database-Level Roles
Security Catalog Views (Transact-SQL)
Security Functions (Transact-SQL)
Permissions (Database Engine)
DBCC FLUSHAUTHCACHE (Transact-SQL)
Automated backups in Azure SQL Database
9/13/2022 • 22 minutes to read • Edit Online
NOTE
This article provides steps about how to delete personal data from the device or service and can be used to support your
obligations under the GDPR. For general information about GDPR, see the GDPR section of the Microsoft Trust Center
and the GDPR section of the Service Trust portal.
Backup frequency
Azure SQL Database creates:
Full backups every week.
Differential backups every 12 or 24 hours.
Transaction log backups approximately every 10 minutes.
The exact frequency of transaction log backups is based on the compute size and the amount of database
activity. When you restore a database, the service determines which full, differential, and transaction log backups
need to be restored.
The Hyperscale architecture does not require full, differential, or log backups. To learn more, see Hyperscale
backups.
Zone-redundant storage (ZRS) : Copies your backups synchronously across three Azure availability
zones in the primary region. It's currently available in only certain regions.
Geo-redundant storage (GRS) : Copies your backups synchronously three times within a single
physical location in the primary region by using LRS. Then it copies your data asynchronously three times
to a single physical location in the paired secondary region.
The result is:
Three synchronous copies in the primary region.
Three synchronous copies in the paired region that were copied over from the primary region to the
secondary region asynchronously.
WARNING
Geo-restore is disabled as soon as a database is updated to use locally redundant or zone-redundant storage.
The storage redundancy diagrams all show regions with multiple availability zones (multi-az). However, there are some
regions which provide only a single availability zone and do not support ZRS.
Backup storage redundancy for Hyperscale databases can be set only during creation. You can't modify this setting
after the resource is provisioned. To update backup storage redundancy settings for an existing Hyperscale database
with minimum downtime, use active geo-replication. Alternatively, you can use database copy. Learn more in
Hyperscale backups and storage redundancy.
Backup usage
You can use automatically created backups in the following scenarios:
Restore an existing database to a point in time within the retention period by using the Azure portal,
Azure PowerShell, the Azure CLI, or the REST API. This operation creates a new database on the same
server as the original database, but it uses a different name to avoid overwriting the original database.
After restore finishes, you can optionally delete the original database and rename the restored database
to the original database name. Alternatively, instead of deleting the original database, you can rename it,
and then rename the restored database to the original database name.
Restore a deleted database to a point in time within the retention period, including the time of deletion.
The deleted database can be restored only on the same server where you created the original database.
Before you delete a database, the service takes a final transaction log backup to prevent any data loss.
Restore a database to another geographic region. Geo-restore allows you to recover from a regional
outage when you can't access your database or backups in the primary region. It creates a new database
on any existing server in any Azure region.
IMPORTANT
Geo-restore is available only for databases that are configured with geo-redundant backup storage. If you're not
currently using geo-replicated backups for a database, you can change this by configuring backup storage
redundancy.
Restore a database from a specific long-term backup of a single or pooled database, if the database has
been configured with an LTR policy. LTR allows you to restore an older version of the database by using
the Azure portal, the Azure CLI, or Azure PowerShell to satisfy a compliance request or to run an older
version of the application. For more information, see Long-term retention.
Restore capabilities and features
This table summarizes the capabilities and features of point-in-time restore (PITR), geo-restore, and long-term
retention.
Types of SQL backup
PITR: Full, differential, log.
Geo-restore: Replicated copies of PITR backups.
LTR restore: Only the full backups.
Recovery point objective (RPO)
PITR: 10 minutes, based on compute size and amount of database activity.
Geo-restore: Up to 1 hour, based on geo-replication.*
LTR restore: One week (or the user's policy).
Recovery time objective (RTO)
PITR, geo-restore, and LTR restore: Restore usually takes less than 12 hours but could take longer, depending on size and activity. See Recovery.
Azure Storage
PITR: Geo-redundant by default. You can optionally configure zone-redundant or locally redundant storage.
Geo-restore: Available when PITR backup storage redundancy is set to geo-redundant. Not available when PITR backup storage is zone-redundant or locally redundant.
LTR restore: Geo-redundant by default. You can configure zone-redundant or locally redundant storage.
Restoring a new database in another region
PITR: Not supported.
Geo-restore: Supported in any Azure region.
LTR restore: Supported in any Azure region.
* For business-critical applications that require large databases and must ensure business continuity, use auto-
failover groups.
** All PITR backups are stored on geo-redundant storage by default, so geo-restore is enabled by default.
*** The workaround is to restore to a new server and use Resource Move to move the server to another
subscription, or use a cross-subscription database copy.
Restore a database from backup
To perform a restore, see Restore a database from backups. You can explore backup configuration and restore
operations by using the following examples.
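For instance, a minimal PowerShell sketch that restores a database to an earlier point in time on the same server (names and the timestamp are placeholders):
# Restore "mydatabase" as a new database named "mydatabase-restored" as of a UTC point in time.
$db = Get-AzSqlDatabase -ResourceGroupName "myResourceGroup" -ServerName "myserver" -DatabaseName "mydatabase"
Restore-AzSqlDatabase -FromPointInTimeBackup -PointInTime "2022-09-01T13:00:00Z" `
    -ResourceGroupName $db.ResourceGroupName -ServerName $db.ServerName `
    -TargetDatabaseName "mydatabase-restored" -ResourceId $db.ResourceID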
Backup scheduling
The first full backup is scheduled immediately after a new database is created or restored. This backup usually
finishes within 30 minutes, but it can take longer when the database is large. For example, the initial backup can
take longer on a restored database or a database copy, which would typically be larger than a new database.
After the first full backup, all further backups are scheduled and managed automatically. The exact timing of all
database backups is determined by the SQL Database service as it balances the overall system workload. You
can't change the schedule of backup jobs or disable them.
IMPORTANT
For a new, restored, or copied database, the point-in-time restore capability becomes available when the initial
transaction log backup that follows the initial full backup is created.
Hyperscale databases are protected immediately after creation, unlike other databases where the initial backup takes
time. The protection is immediate even if the Hyperscale database was created with a large amount of data via copy or
restore. To learn more, review Hyperscale automated backups.
IMPORTANT
Backups of a database are retained to provide PITR even if the database has been deleted. Although deleting and re-
creating a database might save storage and compute costs, it might increase backup storage costs. The reason is that the
service retains backups for each deleted database, every time it's deleted.
Monitor consumption
For vCore databases in Azure SQL Database, the storage that each type of backup (full, differential, and log)
consumes is reported on the database monitoring pane as a separate metric. The following screenshot shows
how to monitor the backup storage consumption for a single database.
For instructions on how to monitor consumption in Hyperscale, see Monitor Hyperscale backup consumption.
Fine -tune backup storage consumption
Backup storage consumption up to the maximum data size for a database is not charged. Excess backup storage
consumption will depend on the workload and maximum size of the individual databases. Consider some of the
following tuning techniques to reduce your backup storage consumption:
Reduce the backup retention period to the minimum for your needs.
Avoid doing large write operations, like index rebuilds, more often than you need to.
For large data load operations, consider using clustered columnstore indexes and following related best
practices. Also consider reducing the number of non-clustered indexes.
In the General Purpose service tier, the provisioned data storage is less expensive than the price of the
backup storage. If you have continually high excess backup storage costs, you might consider increasing data
storage to save on the backup storage.
Use TempDB instead of permanent tables in your application logic for storing temporary results or transient
data.
Use locally redundant backup storage whenever possible (for example, dev/test environments).
Backup retention
Azure SQL Database provides both short-term and long-term retention of backups. Short-term retention allows
PITR within the retention period for the database. Long-term retention provides backups for various compliance
requirements.
Short-term retention
For all new, restored, and copied databases, Azure SQL Database retains sufficient backups to allow PITR within
the last 7 days by default. The service takes regular full, differential, and log backups to ensure that databases
are restorable to any point in time within the retention period that's defined for the database.
Short-term backup retention of 1 to 35 days for Hyperscale databases is now in preview. To learn more, review
Managing backup retention in Hyperscale.
Differential backups can be configured to occur either once in 12 hours or once in 24 hours. A 24-hour
differential backup frequency might increase the time required to restore the database, compared to the 12-
hour frequency. In the vCore model, the default frequency for differential backups is once in 12 hours. In the
DTU model, the default frequency is once in 24 hours.
You can specify your backup storage redundancy option for STR when you create your database, and then
change it at a later time. If you change your backup redundancy option after your database is created, new
backups will use the new redundancy option. Backup copies made with the previous STR redundancy option are
not moved or copied. They're left in the original storage account until the retention period expires, which can be
1 to 35 days.
Except for Basic databases, you can change the backup retention period for each active database in the range of
1 to 35 days. As described in Backup storage consumption, backups stored to enable PITR might be older than
the retention period. If you need to keep backups for longer than the maximum short-term retention period of
35 days, you can enable long-term retention.
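A minimal PowerShell sketch that sets both the retention period and the differential backup frequency (names and values are placeholders):
# Keep PITR backups for 28 days and take differential backups every 24 hours.
Set-AzSqlDatabaseBackupShortTermRetentionPolicy -ResourceGroupName "myResourceGroup" `
    -ServerName "myserver" -DatabaseName "mydatabase" `
    -RetentionDays 28 -DiffBackupIntervalInHours 24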
If you delete a database, the system keeps backups in the same way that it would for an online database with its
specific retention period. You can't change the backup retention period for a deleted database.
IMPORTANT
If you delete a server, all databases on that server are also deleted and can't be recovered. You can't restore a deleted
server. But if you've configured long-term retention for a database, LTR backups are not deleted. You can then use those
backups to restore databases on a different server in the same subscription, to a point in time when an LTR backup was
taken. To learn more, review Restore long-term backup.
Long-term retention
For SQL Database, you can configure full LTR backups for up to 10 years in Azure Blob Storage. After the LTR
policy is configured, full backups are automatically copied to a different storage container weekly.
To meet various compliance requirements, you can select different retention periods for weekly, monthly, and/or
yearly full backups. The frequency depends on the policy. For example, setting W=0, M=1 would create an LTR
copy monthly. For more information about LTR, see Long-term retention. Databases in the Hyperscale service
tier don't currently support long-term retention.
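For example, a minimal PowerShell sketch of an LTR policy (names and the ISO 8601 durations are placeholders):
# Keep weekly backups for 12 weeks, monthly backups for 12 months,
# and the week-16 backup of each year for 5 years.
Set-AzSqlDatabaseBackupLongTermRetentionPolicy -ResourceGroupName "myResourceGroup" `
    -ServerName "myserver" -DatabaseName "mydatabase" `
    -WeeklyRetention P12W -MonthlyRetention P12M -YearlyRetention P5Y -WeekOfYear 16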
Updating the backup storage redundancy for an existing database applies the change only to subsequent
backups taken in the future and not for existing backups. All existing LTR backups for the database will continue
to reside in the existing storage blob. New backups will be replicated based on the configured backup storage
redundancy.
Storage consumption depends on the selected frequency and retention periods of LTR backups. You can use the
LTR pricing calculator to estimate the cost of LTR storage.
NOTE
An Azure invoice shows only the excess backup storage consumption, not the entire backup storage consumption. For
example, in a hypothetical scenario, if you have provisioned 4 TB of data storage, you'll get 4 TB of free backup storage
space. If you use a total of 5.8 TB of backup storage space, the Azure invoice will show only 1.8 TB, because you're
charged only for excess backup storage that you've used.
DTU model
In the DTU model, there's no additional charge for backup storage for databases and elastic pools. The price of
backup storage is a part of the database or pool price.
vCore model
Azure SQL Database computes your total billable backup storage as a cumulative value across all backup files.
Every hour, this value is reported to the Azure billing pipeline. The pipeline aggregates this hourly usage to get
your backup storage consumption at the end of each month.
If a database is deleted, backup storage consumption will gradually decrease as older backups age out and are
deleted. Because differential backups and log backups require an earlier full backup to be restorable, all three
backup types are purged together in weekly sets. After all backups are deleted, billing stops.
Backup storage cost is calculated differently for Hyperscale databases. For more information, see Hyperscale
backup storage costs.
For single databases, a backup storage amount equal to 100 percent of the maximum data storage size for the
database is provided at no extra charge. The following equation is used to calculate the total billable backup
storage usage:
Total billable backup storage size = (size of full backups + size of differential backups + size of log
backups) – maximum data storage
For elastic pools, a backup storage amount equal to 100 percent of the maximum data storage for the pool
storage size is provided at no extra charge. For pooled databases, the total size of billable backup storage is
aggregated at the pool level and is calculated as follows:
Total billable backup storage size = (total size of all full backups + total size of all differential backups
+ total size of all log backups) - maximum pool data storage
Total billable backup storage, if any, is charged in gigabytes per month according to the rate of the backup
storage redundancy that you've used. This backup storage consumption depends on the workload and size of
individual databases, elastic pools, and managed instances. Heavily modified databases have larger differential
and log backups, because the size of these backups is proportional to the amount of changed data. Therefore,
such databases will have higher backup charges.
As a simplified example, assume that a database has accumulated 744 GB of backup storage and that this
amount stays constant throughout an entire month because the database is completely idle. To convert this
cumulative storage consumption to hourly usage, divide it by 744.0 (31 days per month times 24 hours per
day). SQL Database will report to the Azure billing pipeline that the database consumed 1 GB of PITR backup
each hour, at a constant rate. Azure billing will aggregate this consumption and show a usage of 744 GB for the
entire month. The cost will be based on the rate for gigabytes per month in your region.
Here's another example. Suppose the same idle database has its retention increased from 7 days to 14 days in
the middle of the month. This increase results in the total backup storage doubling to 1,488 GB. SQL Database
would report 1 GB of usage for hours 1 through 372 (the first half of the month). It would report the usage as 2
GB for hours 373 through 744 (the second half of the month). This usage would be aggregated to a final bill of
1,116 GB per month.
Actual backup billing scenarios are more complex. Because the rate of changes in the database depends on the
workload and is variable over time, the size of each differential and log backup will also vary. The hourly
consumption of backup storage will fluctuate accordingly.
Each differential backup also contains all changes made in the database since the last full backup. So, the total
size of all differential backups gradually increases over the course of a week. Then it drops sharply after an older
set of full, differential, and log backups ages out.
For example, assume that a heavy write activity, such as an index rebuild, runs just after a full backup is
completed. The modifications that the index rebuild makes will then be included:
In the transaction log backups taken over the duration of the rebuild.
In the next differential backup.
In every differential backup taken until the next full backup occurs.
For the last scenario in larger databases, an optimization in the service creates a full backup instead of a
differential backup if a differential backup would be excessively large otherwise. This reduces the size of all
differential backups until the following full backup.
You can monitor total backup storage consumption for each backup type (full, differential, transaction log) over
time, as described in Monitor consumption.
Monitor costs
To understand backup storage costs, go to Cost Management + Billing in the Azure portal. Select Cost
Management , and then select Cost analysis . Select the desired subscription for Scope , and then filter for the
time period and service that you're interested in as follows:
1. Add a filter for Service name .
2. In the dropdown list, select sql database for a single database or an elastic database pool.
3. Add another filter for Meter subcategory .
4. To monitor PITR backup costs, in the dropdown list, select single/elastic pool pitr backup storage for
a single database or an elastic database pool. Meters will show up only if backup storage consumption
exists.
To monitor LTR backup costs, in the dropdown list, select ltr backup storage for a single database or an
elastic database pool. Meters will show up only if backup storage consumption exists.
The Storage and compute subcategories might also interest you, but they're not associated with backup
storage costs.
IMPORTANT
Meters are visible only for counters that are currently in use. If a counter is not available, it's likely that the category is not
currently being used. For example, storage counters won't be visible for resources that are not consuming storage. If there
is no PITR or LTR backup storage consumption, these meters won't be visible.
Encrypted backups
If your database is encrypted with TDE, backups are automatically encrypted at rest, including LTR backups. All
new databases in Azure SQL are configured with TDE enabled by default. For more information on TDE, see
Transparent data encryption with SQL Database.
Backup integrity
On an ongoing basis, the Azure SQL engineering team automatically tests the restore of automated database
backups. Upon point-in-time restore, databases also receive DBCC CHECKDB integrity checks.
Any issues found during an integrity check will result in an alert to the engineering team. For more information,
see Data integrity in SQL Database.
All database backups are taken with the CHECKSUM option to provide additional backup integrity.
Compliance
When you migrate your database from a DTU-based service tier to a vCore-based service tier, the PITR retention
is preserved to ensure that your application's data recovery policy isn't compromised. If the default retention
doesn't meet your compliance requirements, you can change the PITR retention period. For more information,
see Change the PITR backup retention period.
NOTE
This article provides steps about how to delete personal data from the device or service and can be used to support your
obligations under the GDPR. For general information about GDPR, see the GDPR section of the Microsoft Trust Center
and the GDPR section of the Service Trust portal.
IMPORTANT
Azure policies are not enforced when you're creating a database via T-SQL. To specify data residency when you're creating
a database by using T-SQL, use LOCAL or ZONE as input to the BACKUP_STORAGE_REDUNDANCY parameter in the
CREATE DATABASE statement.
Next steps
To learn about other SQL Database business continuity solutions, see Business continuity overview.
To change backup settings, see Change settings.
To restore a backup, see Recover by using backups or Restore a database to a point in time by using
PowerShell.
For information about how to configure, manage, and restore from long-term retention of automated
backups in Azure Blob Storage, see Manage long-term backup retention.
For Azure SQL Managed Instance, see Automated backups for SQL Managed Instance.
Active geo-replication
9/13/2022 • 19 minutes to read • Edit Online
NOTE
Active geo-replication is not supported by Azure SQL Managed Instance. For geographic failover of instances of SQL
Managed Instance, use Auto-failover groups.
NOTE
To migrate SQL databases from Azure Germany using active geo-replication, see Migrate SQL Database using active geo-
replication.
If your application requires a stable connection endpoint and automatic geo-failover support in addition to geo-
replication, use Auto-failover groups.
The following diagram illustrates a typical configuration of a geo-redundant cloud application using Active geo-
replication.
If for any reason your primary database fails, you can initiate a geo-failover to any of your secondary databases.
When a secondary is promoted to the primary role, all other secondaries are automatically linked to the new
primary.
You can manage geo-replication and initiate a geo-failover using the following:
The Azure portal
PowerShell: Single database
PowerShell: Elastic pool
Transact-SQL: Single database or elastic pool
REST API: Single database
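For example, with T-SQL you can create a geo-secondary by connecting to the master database on the primary server. A minimal sketch (database and server names are placeholders):

-- Run in the master database on the primary server.
-- Creates a readable geo-secondary of [mydb] on the partner server.
ALTER DATABASE [mydb] ADD SECONDARY ON SERVER [secondaryserver] WITH (ALLOW_CONNECTIONS = ALL);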
Active geo-replication leverages the Always On availability group technology to asynchronously replicate
transaction log generated on the primary replica to all geo-replicas. While at any given point, a secondary
database might be slightly behind the primary database, the data on a secondary is guaranteed to be
transactionally consistent. In other words, changes made by uncommitted transactions are not visible.
NOTE
Active geo-replication replicates changes by streaming database transaction log from the primary replica to secondary
replicas. It is unrelated to transactional replication, which replicates changes by executing DML (INSERT, UPDATE, DELETE)
commands on subscribers.
Regional redundancy provided by geo-replication enables applications to quickly recover from a permanent loss
of an entire Azure region, or parts of a region, caused by natural disasters, catastrophic human errors, or
malicious acts. Geo-replication RPO can be found in Overview of Business Continuity.
The following figure shows an example of active geo-replication configured with a primary in the North Central
US region and a geo-secondary in the South Central US region.
In addition to disaster recovery, active geo-replication can be used in the following scenarios:
Database migration : You can use active geo-replication to migrate a database from one server to another
with minimum downtime.
Application upgrades : You can create an extra secondary as a fail back copy during application upgrades.
To achieve full business continuity, adding database regional redundancy is only a part of the solution.
Recovering an application (service) end-to-end after a catastrophic failure requires recovery of all components
that constitute the service and any dependent services. Examples of these components include the client
software (for example, a browser with a custom JavaScript), web front ends, storage, and DNS. It is critical that
all components are resilient to the same failures and become available within the recovery time objective (RTO)
of your application. Therefore, you need to identify all dependent services and understand the guarantees and
capabilities they provide. Then, you must take adequate steps to ensure that your service functions during the
failover of the services on which it depends. For more information about designing solutions for disaster
recovery, see Designing Cloud Solutions for Disaster Recovery Using active geo-replication.
IMPORTANT
You can use geo-replication to create secondary replicas in the same region as the primary. You can use these
secondaries to satisfy read scale-out scenarios in the same region. However, a secondary replica in the same
region does not provide additional resilience to catastrophic failures or large scale outages, and therefore is not a
suitable failover target for disaster recovery purposes. It also does not guarantee availability zone isolation. Use
Business Critical or Premium service tiers zone redundant configuration or General Purpose service tier zone
redundant configuration to achieve availability zone isolation.
Planned geo-failover
Planned geo-failover switches the roles of primary and geo-secondary databases after completing full
data synchronization. A planned failover does not result in data loss. The duration of a planned geo-failover
depends on the size of the transaction log on the primary that needs to be synchronized to the geo-
secondary. Planned geo-failover is designed for the following scenarios:
Perform DR drills in production when data loss is not acceptable.
Relocate the database to a different region.
Return the database to the primary region after the outage has been mitigated (known as failback).
Unplanned geo-failover
Unplanned, or forced, geo-failover immediately switches the geo-secondary to the primary role without
any synchronization with the primary. Any transactions committed on the primary but not yet replicated
to the secondary are lost. This operation is designed as a recovery method during outages when the
primary is not accessible, but database availability must be quickly restored. When the original primary is
back online, it will be automatically re-connected, reseeded using the current primary data, and become a
new geo-secondary.
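With T-SQL, both kinds of geo-failover are initiated from the master database on the server that hosts the geo-secondary. A minimal sketch (the database name is a placeholder):

-- Planned failover (no data loss): roles switch after full synchronization.
ALTER DATABASE [mydb] FAILOVER;

-- Forced (unplanned) failover: the geo-secondary becomes primary immediately;
-- transactions not yet replicated may be lost.
ALTER DATABASE [mydb] FORCE_FAILOVER_ALLOW_DATA_LOSS;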
IMPORTANT
After either planned or unplanned geo-failover, the connection endpoint for the new primary changes because the
new primary is now located on a different logical server.
IMPORTANT
If your database is a member of a failover group, you cannot initiate its failover using the geo-replication failover
command. Use the failover command for the group. If you need to fail over an individual database, you must remove it
from the failover group first. See Auto-failover groups for details.
Configure geo-secondary
Both primary and geo-secondary are required to have the same service tier. It is also strongly recommended
that the geo-secondary is configured with the same backup storage redundancy and compute size (DTUs or
vCores) as the primary. If the primary is experiencing a heavy write workload, a geo-secondary with a lower
compute size may not be able to keep up. That will cause replication lag on the geo-secondary, and may
eventually cause unavailability of the geo-secondary. To mitigate these risks, active geo-replication will reduce
(throttle) the primary's transaction log rate if necessary to allow its secondaries to catch up.
Another consequence of an imbalanced geo-secondary configuration is that after failover, application
performance may suffer due to insufficient compute capacity of the new primary. In that case, it will be
necessary to scale up the database to have sufficient resources, which may take significant time, and will require
a high availability failover at the end of the scale up process, which may interrupt application workloads.
If you decide to create the geo-secondary with a lower compute size, you should monitor log IO rate on the
primary over time. This lets you estimate the minimal compute size of the geo-secondary required to sustain
the replication load. For example, if your primary database is P6 (1000 DTU) and its log IO is sustained at 50%,
the geo-secondary needs to be at least P4 (500 DTU). To retrieve historical log IO data, use the sys.resource_stats
view. To retrieve recent log IO data with higher granularity that better reflects short-term spikes, use the
sys.dm_db_resource_stats view.
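For example, a query along these lines shows recent log write utilization for the current database (a minimal sketch):

-- Recent log write utilization for the current database (roughly the last hour, 15-second granularity).
SELECT end_time, avg_log_write_percent
FROM sys.dm_db_resource_stats
ORDER BY end_time DESC;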
TIP
Transaction log IO throttling on the primary due to a lower compute size on a geo-secondary is reported using the
HADR_THROTTLE_LOG_RATE_MISMATCHED_SLO wait type, visible in the sys.dm_exec_requests and sys.dm_os_wait_stats
dynamic management views.
Transaction log IO on the primary may be throttled for reasons unrelated to lower compute size on a geo-secondary. This
kind of throttling may occur even if the geo-secondary has the same or higher compute size than the primary. For details,
including wait types for different kinds of log IO throttling, see Transaction log rate governance.
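A sketch of checking for this wait type (cumulative counts since the last reset):

-- Cumulative waits caused by a geo-secondary with a lower compute size.
SELECT wait_type, waiting_tasks_count, wait_time_ms
FROM sys.dm_os_wait_stats
WHERE wait_type = N'HADR_THROTTLE_LOG_RATE_MISMATCHED_SLO';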
By default, backup storage redundancy of the geo-secondary is same as for the primary database. You can
choose to configure a geo-secondary with a different backup storage redundancy. Backups are always taken on
the primary database. If the secondary is configured with a different backup storage redundancy, then after a
geo-failover, when the geo-secondary is promoted to the primary, new backups will be stored and billed
according to the type of storage (RA-GRS, ZRS, LRS) selected on the new primary (previous secondary).
Cross-subscription geo-replication
To create a geo-secondary in a subscription different from the subscription of the primary (whether under the
same Azure Active Directory tenant or not), follow the steps in this section.
1. Add the IP address of the client machine executing the T-SQL commands below to the server firewalls of
both the primary and secondary servers. You can confirm that IP address by executing the following
query while connected to the primary server from the same client machine.
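A minimal sketch of one way to confirm the client IP address:

-- Returns the client IP address of the current connection, as seen by the server.
SELECT client_net_address FROM sys.dm_exec_connections WHERE session_id = @@SPID;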
3. In the same database, create a user for the login, and add it to the dbmanager role:
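A minimal sketch, assuming the dedicated login is named geodrsetup (the name used in step 9):

-- In the master database on the primary server.
CREATE USER geodrsetup FOR LOGIN geodrsetup;
ALTER ROLE dbmanager ADD MEMBER geodrsetup;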
4. Take note of the SID value of the new login. Obtain the SID value using the following query.
select sid from sys.sql_logins where name = 'geodrsetup';
5. Connect to the primary database (not the master database), and create a user for the same login.
7. In the master database on the secondary server, create the same login as on the primary server, using
the same name, password, and SID. Replace the hexadecimal SID value in the sample command below
with the one obtained in Step 4.
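A hedged sketch of this command; the password and the hexadecimal SID below are placeholders, and the SID must be replaced with the value returned in step 4:

-- In the master database on the secondary server.
-- Use the same password as on the primary server and the SID obtained in step 4.
CREATE LOGIN geodrsetup WITH PASSWORD = '<strong password>', SID = 0x01060000000000640000000000000000;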
8. In the same database, create a user for the login, and add it to the dbmanager role.
9. Connect to the master database on the primary server using the new geodrsetup login, and initiate geo-
secondary creation on the secondary server. Adjust database name and secondary server name as
needed. Once the command is executed, you can monitor geo-secondary creation by querying the
sys.dm_geo_replication_link_status view in the primary database, and the sys.dm_operation_status view
in the master database on the primary server. The time needed to create a geo-secondary depends on
the primary database size.
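A sketch of step 9, with placeholder names for the database and the secondary server, followed by the monitoring queries mentioned above:

-- Run in the master database on the primary server, connected as geodrsetup.
ALTER DATABASE [mydb] ADD SECONDARY ON SERVER [secondaryserver];

-- Monitor seeding progress from the primary database:
SELECT * FROM sys.dm_geo_replication_link_status;

-- Or from the master database on the primary server:
SELECT * FROM sys.dm_operation_status;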
10. After the geo-secondary is successfully created, the users, logins, and firewall rules created by this
procedure can be removed.
NOTE
Cross-subscription geo-replication operations including setup and geo-failover are only supported using REST API & T-
SQL commands.
Adding a geo-secondary using T-SQL is not supported when connecting to the primary server over a private endpoint. If
a private endpoint is configured but public network access is allowed, adding a geo-secondary is supported when
connected to the primary server from a public IP address. Once a geo-secondary is added, public access can be denied.
Creating a geo-secondary on a logical server in a different Azure tenant is not supported when Azure Active Directory
only authentication for Azure SQL is active (enabled) on either primary or secondary logical server.
NOTE
If you created a geo-secondary as part of failover group configuration, it is not recommended to scale it down. This is to
ensure your data tier has sufficient capacity to process your regular workload after a geo-failover.
IMPORTANT
The primary database in a failover group can't scale to a higher service tier (edition) unless the secondary database is first
scaled to the higher tier. For example, if you want to scale up the primary from General Purpose to Business Critical, you
have to first scale the geo-secondary to Business Critical. If you try to scale the primary or geo-secondary in a way that
violates this rule, you will receive the following error:
The source database 'Primaryserver.DBName' cannot have higher edition than the target database
'Secondaryserver.DBName'. Upgrade the edition on the target before upgrading the source.
NOTE
sp_wait_for_database_copy_sync prevents data loss after geo-failover for specific transactions, but does not guarantee
full synchronization for read access. The delay caused by a sp_wait_for_database_copy_sync procedure call can be
significant and depends on the size of the not yet transmitted transaction log on the primary at the time of the call.
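A hedged sketch of calling the procedure immediately after committing a critical transaction; the server and database names are illustrative, and the parameter names shown here are assumptions to verify against the procedure's documentation:

-- Run on the primary database right after the transaction commits;
-- blocks until committed transactions have been transmitted to the geo-secondary.
EXEC sys.sp_wait_for_database_copy_sync @target_server = N'secondaryserver', @target_database = N'mydb';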
TIP
If replication_lag_sec on the primary is NULL, it means that the primary does not currently know how far behind a geo-
secondary is. This typically happens after process restarts and should be a transient condition. Consider sending an alert if
replication_lag_sec returns NULL for an extended period of time. It may indicate that the geo-secondary cannot
communicate with the primary due to a connectivity failure.
There are also conditions that could cause the difference between last_commit time on the geo-secondary and on the
primary to become large. For example, if a commit is made on the primary after a long period of no changes, the
difference will jump up to a large value before quickly returning to zero. Consider sending an alert if the difference
between these two values remains large for a long time.
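For example, a sketch of a monitoring query you could alert on:

-- Run on the primary database. A NULL replication_lag_sec for an extended period may indicate
-- that the geo-secondary cannot communicate with the primary.
SELECT partner_server, partner_database, replication_state_desc, replication_lag_sec, last_replication
FROM sys.dm_geo_replication_link_status;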
IMPORTANT
These T-SQL commands only apply to active geo-replication and do not apply to failover groups. As such, they also do
not apply to SQL Managed Instance, which only supports failover groups.
sys.dm_geo_replication_link_status: Gets the last replication time, last replication lag, and other
information about the replication link for a given database.
NOTE
This article uses the Azure Az PowerShell module, which is the recommended PowerShell module for interacting with
Azure. To get started with the Az PowerShell module, see Install Azure PowerShell. To learn how to migrate to the Az
PowerShell module, see Migrate Azure PowerShell from AzureRM to Az.
IMPORTANT
The PowerShell Azure Resource Manager module is still supported by Azure SQL Database, but all future development is
for the Az.Sql module. For these cmdlets, see AzureRM.Sql. The arguments for the commands in the Az module and in the
AzureRm modules are substantially identical.
TIP
For sample scripts, see Configure and failover a single database using active geo-replication and Configure and failover a
pooled database using active geo-replication.
API | DESCRIPTION
Get Create or Update Database Status: Returns the status during a create operation.
Set Secondary Database as Primary (Planned Failover): Sets which secondary database is primary by failing over from the current primary database. This option is not supported for SQL Managed Instance.
Set Secondary Database as Primary (Unplanned Failover): Sets which secondary database is primary by failing over from the current primary database. This operation might result in data loss. This option is not supported for SQL Managed Instance.
Get Replication Link: Gets a specific replication link for a given database in a geo-replication partnership. It retrieves the information visible in the sys.geo_replication_links catalog view. This option is not supported for SQL Managed Instance.
Replication Links - List By Database: Gets all replication links for a given database in a geo-replication partnership. It retrieves the information visible in the sys.geo_replication_links catalog view.
Delete Replication Link: Deletes a database replication link. Cannot be done during failover.
Next steps
For sample scripts, see:
Configure and failover a single database using active geo-replication.
Configure and failover a pooled database using active geo-replication.
SQL Database also supports auto-failover groups. For more information, see using auto-failover groups.
For a business continuity overview and scenarios, see Business continuity overview.
To learn about Azure SQL Database automated backups, see SQL Database automated backups.
To learn about using automated backups for recovery, see Restore a database from the service-initiated
backups.
To learn about authentication requirements for a new primary server and database, see SQL Database
security after disaster recovery.
Auto-failover groups overview & best practices
(Azure SQL Database)
9/13/2022 • 21 minutes to read
NOTE
This article covers auto-failover groups for Azure SQL Database. For Azure SQL Managed Instance, see Auto-failover
groups in Azure SQL Managed Instance.
Auto-failover groups support geo-replication of all databases in the group to only one secondary server in a different
region. If you need to create multiple Azure SQL Database geo-secondary replicas (in the same or different regions) for
the same primary replica, use active geo-replication.
Overview
The auto-failover groups feature allows you to manage the replication and failover of a group of databases on a
server or all user databases in a managed instance to another Azure region. It is a declarative abstraction on top
of the active geo-replication feature, designed to simplify deployment and management of geo-replicated
databases at scale.
Automatic failover
You can initiate a geo-failover manually or you can delegate it to the Azure service based on a user-defined
policy. The latter option allows you to automatically recover multiple related databases in a secondary region
after a catastrophic failure or other unplanned event that results in full or partial loss of the SQL Database or
SQL Managed Instance availability in the primary region. Typically, these are outages that cannot be
automatically mitigated by the built-in high availability infrastructure. Examples of geo-failover triggers include
natural disasters, or incidents caused by a tenant or control ring being down due to an OS kernel memory leak
on compute nodes. For more information, see Azure SQL high availability.
Offload read-only workloads
To reduce traffic to your primary databases, you can also use the secondary databases in a failover group to
offload read-only workloads. Use the read-only listener to direct read-only traffic to a readable secondary
database.
Endpoint redirection
Auto-failover groups provide read-write and read-only listener endpoints that remain unchanged during geo-
failovers. This means you do not have to change the connection string for your application after a geo-failover,
because connections are automatically routed to the current primary. Whether you use manual or automatic
failover activation, a geo-failover switches all secondary databases in the group to the primary role. After the
geo-failover is completed, the DNS record is automatically updated to redirect the endpoints to the new region.
For geo-failover RPO and RTO, see Overview of Business Continuity.
Recovering an application
To achieve full business continuity, adding regional database redundancy is only part of the solution. Recovering
an application (service) end-to-end after a catastrophic failure requires recovery of all components that
constitute the service and any dependent services. Examples of these components include the client software
(for example, a browser with a custom JavaScript), web front ends, storage, and DNS. It is critical that all
components are resilient to the same failures and become available within the recovery time objective (RTO) of
your application. Therefore, you need to identify all dependent services and understand the guarantees and
capabilities they provide. Then, you must take adequate steps to ensure that your service functions during the
failover of the services on which it depends.
IMPORTANT
The name of the failover group must be globally unique within the .database.windows.net domain.
Servers
Some or all of the user databases on a logical server can be placed in a failover group. A single server
can host multiple failover groups.
Primary
The server that hosts the primary databases in the failover group.
Secondary
The server that hosts the secondary databases in the failover group. The secondary cannot be in the
same Azure region as the primary.
Adding single databases to a failover group
You can put several single databases on the same server into the same failover group. If you add a single
database to the failover group, it automatically creates a secondary database using the same edition and
compute size on the secondary server. You specify that server when the failover group is created. If you
add a database that already has a secondary database in the secondary server, that geo-replication link is
inherited by the group. When you add a database that already has a secondary database in a server that
is not part of the failover group, a new secondary is created in the secondary server.
IMPORTANT
Make sure that the secondary server doesn't have a database with the same name unless it is an existing
secondary database.
NOTE
Because verification of the scale of the outage and how quickly it can be mitigated involves human actions, the
grace period cannot be set below one hour. This limitation applies to all databases in the failover group regardless
of their data synchronization state.
Planned failover
Planned failover performs full data synchronization between primary and secondary databases before
the secondary switches to the primary role. This guarantees no data loss. Planned failover is used in the
following scenarios:
Perform disaster recovery (DR) drills in production when data loss is not acceptable
Relocate the databases to a different region
Return the databases to the primary region after the outage has been mitigated (failback)
NOTE
If a database contains in-memory OLTP objects, the primary databases and the target secondary geo-replica
databases should have matching service tiers, as in-memory OLTP objects are always hydrated in memory. A
lower service tier on the target geo-replica database may result in out-of-memory issues. If this happens, the
affected geo-secondary database replica may be put into a limited read-only mode called in-memory OLTP
checkpoint-only mode. Read-only table queries are allowed, but read-only in-memory OLTP table queries are
disallowed on the affected geo-secondary database replica. Planned failover is blocked if all replicas in the geo-
secondary database are in checkpoint-only mode. Unplanned failover may fail due to out-of-memory issues. To
avoid this, upgrade the service tier of the secondary database to match the primary database during the planned
failover, or drill. Service tier upgrades can be size-of-data operations, and may take a while to finish.
Unplanned failover
Unplanned or forced failover immediately switches the secondary to the primary role without waiting for
recent changes to propagate from the primary. This operation may result in data loss. Unplanned failover
is used as a recovery method during outages when the primary is not accessible. When the outage is
mitigated, the old primary will automatically reconnect and become a new secondary. A planned failover
may be executed to fail back, returning the replicas to their original primary and secondary roles.
Manual failover
You can initiate a geo-failover manually at any time regardless of the automatic failover configuration.
During an outage that impacts the primary, if automatic failover policy is not configured, a manual
failover is required to promote the secondary to the primary role. You can initiate a forced (unplanned) or
friendly (planned) failover. A friendly failover is only possible when the old primary is accessible, and can
be used to relocate the primary to the secondary region without data loss. When a failover is completed,
the DNS records are automatically updated to ensure connectivity to the new primary.
Grace period with data loss
Because the data is replicated to the secondary database using asynchronous replication, an automatic
geo-failover may result in data loss. You can customize the automatic failover policy to reflect your
application’s tolerance to data loss. By configuring GracePeriodWithDataLossHours , you can control how
long the system waits before initiating a forced failover, which may result in data loss.
When designing a service with business continuity in mind, follow the general guidelines and best practices
outlined in this article. When configuring a failover group, ensure that authentication and network access on the
secondary is set up to function correctly after geo-failover, when the geo-secondary becomes the new primary.
For details, see SQL Database security after disaster recovery. For more information about designing solutions
for disaster recovery, see Designing Cloud Solutions for Disaster Recovery Using active geo-replication.
For information about using point-in-time restore with failover groups, see Point in Time Recovery (PITR).
Initial seeding
When adding databases or elastic pools to a failover group, there is an initial seeding phase before data
replication starts. The initial seeding phase is the longest and most expensive operation. Once initial seeding
completes, data is synchronized, and then only subsequent data changes are replicated. The time it takes for the
initial seeding to complete depends on the size of your data, number of replicated databases, the load on
primary databases, and the speed of the link between the primary and secondary. Under normal circumstances,
possible seeding speed is up to 500 GB an hour for SQL Database. Seeding is performed for all databases in
parallel.
IMPORTANT
Elastic pools with 800 or fewer DTUs or 8 or fewer vCores, and more than 250 databases may encounter issues including
longer planned geo-failovers and degraded performance. These issues are more likely to occur for write intensive
workloads, when geo-replicas are widely separated by geography, or when multiple secondary geo-replicas are used for
each database. A symptom of these issues is an increase in geo-replication lag over time, potentially leading to a more
extensive data loss in an outage. This lag can be monitored using sys.dm_geo_replication_link_status. If these issues occur,
then mitigation includes scaling up the pool to have more DTUs or vCores, or reducing the number of geo-replicated
databases in the pool.
NOTE
If you are using the read-only listener to load-balance a read-only workload, make sure that this workload is executed
in a VM or other resource in the secondary region so it can connect to the secondary database.
IMPORTANT
To guarantee business continuity during regional outages you must ensure geographic redundancy for both front-end
components and databases.
NOTE
If you created a geo-secondary as part of the failover group configuration it is not recommended to scale down the geo-
secondary. This is to ensure your data tier has sufficient capacity to process your regular workload after a geo-failover.
NOTE
sp_wait_for_database_copy_sync prevents data loss after geo-failover for specific transactions, but does not guarantee
full synchronization for read access. The delay caused by a sp_wait_for_database_copy_sync procedure call can be
significant and depends on the size of the not yet transmitted transaction log on the primary at the time of the call.
Permissions
Permissions for a failover group are managed via Azure role-based access control (Azure RBAC).
Azure RBAC write access is necessary to create and manage failover groups. The SQL Server Contributor role
has all the necessary permissions to manage failover groups.
For specific permission scopes, review how to configure auto-failover groups in Azure SQL Database.
Limitations
Be aware of the following limitations:
Failover groups cannot be created between two servers in the same Azure region.
Failover groups cannot be renamed. You will need to delete the group and re-create it with a different name.
Database rename is not supported for databases in a failover group. You will need to temporarily delete the
failover group to be able to rename a database, or remove the database from the failover group.
PowerShell
Azure CLI
REST API
Next steps
For detailed tutorials, see
Add SQL Database to a failover group
Add an elastic pool to a failover group
For sample scripts, see:
Use PowerShell to configure active geo-replication for Azure SQL Database
Use PowerShell to configure active geo-replication for a pooled database in Azure SQL Database
Use PowerShell to add an Azure SQL Database to a failover group
For a business continuity overview and scenarios, see Business continuity overview
To learn about Azure SQL Database automated backups, see SQL Database automated backups.
To learn about using automated backups for recovery, see Restore a database from the service-initiated
backups.
To learn about authentication requirements for a new primary server and database, see SQL Database
security after disaster recovery.
Restore your Azure SQL Database or failover to a
secondary
9/13/2022 • 5 minutes to read
NOTE
If you are using zone-redundant Premium or Business Critical databases or pools, the recovery process is automated and
the rest of this material does not apply.
Both primary and secondary databases are required to have the same service tier. It is also strongly recommended that
the secondary database is created with the same compute size (DTUs or vCores) as the primary. For more information,
see Upgrading or downgrading as primary database.
Use one or several failover groups to manage failover of multiple databases. If you add an existing geo-replication
relationship to the failover group, make sure the geo-secondary is configured with the same service tier and compute size
as the primary. For more information, see Use auto-failover groups to enable transparent and coordinated failover of
multiple databases.
NOTE
If you are using failover groups and chose automatic failover, the recovery process is automated and transparent to the
application.
Depending on your application's tolerance to downtime and possible business liability, you can consider the
following recovery options.
Use the Get Recoverable Database (LastAvailableBackupDate) to get the latest Geo-replicated restore point.
NOTE
You should configure and test your server firewall rules and logins (and their permissions) during a disaster recovery drill.
These server-level objects and their configuration may not be available during the outage.
NOTE
If you plan to use Geo-restore as disaster-recovery solution, it is recommended to conduct periodic drills to verify
application tolerance to any loss of recent data modifications, as well as all operational aspects of the recovery procedure.
Next steps
To learn about Azure SQL Database automated backups, see SQL Database automated backups
To learn about business continuity design and recovery scenarios, see Continuity scenarios
To learn about using automated backups for recovery, see restore a database from the service-initiated
backups
Performing disaster recovery drills
9/13/2022 • 2 minutes to read
Geo-restore
To prevent the potential data loss when conducting a disaster recovery drill, perform the drill using a test
environment by creating a copy of the production environment and using it to verify the application’s failover
workflow.
Outage simulation
To simulate the outage, you can rename the source database. This name change causes application connectivity
failures.
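A minimal sketch of such a rename (run in the master database; the database names are placeholders):

-- Simulate an outage by renaming the source database so the application can no longer connect.
ALTER DATABASE [mydb] MODIFY NAME = [mydb_outage_drill];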
Recovery
Perform the geo-restore of the database into a different server as described here.
Change the application configuration to connect to the recovered database and follow the Configure a
database after recovery guide to complete the recovery.
Validation
Complete the drill by verifying the application integrity post-recovery (including connection strings, logins, basic
functionality testing, or other validations that are part of standard application signoff procedures).
Failover groups
For a database that is protected using failover groups, the drill exercise involves planned failover to the
secondary server. The planned failover ensures that the primary and the secondary databases in the failover
group remain in sync when the roles are switched. Unlike the unplanned failover, this operation does not result
in data loss, so the drill can be performed in the production environment.
Outage simulation
To simulate the outage, you can disable the web application or virtual machine connected to the database. This
outage simulation results in connectivity failures for the web clients.
Recovery
Make sure the application configuration in the DR region points to the former secondary, which becomes the
fully accessible new primary.
Initiate planned failover of the failover group from the secondary server.
Follow the Configure a database after recovery guide to complete the recovery.
Validation
Complete the drill by verifying the application integrity post recovery (including connectivity, basic functionality
testing, or other validations required for the drill signoffs).
Next steps
To learn about business continuity scenarios, see Continuity scenarios.
To learn about Azure SQL Database automated backups, see SQL Database automated backups
To learn about using automated backups for recovery, see restore a database from the service-initiated
backups.
To learn about faster recovery options, see Active geo-replication and Auto-failover groups.
What is SQL Data Sync for Azure?
9/13/2022 • 14 minutes to read
IMPORTANT
Azure SQL Data Sync does not support Azure SQL Managed Instance or Azure Synapse Analytics at this time.
Overview
Data Sync is based around the concept of a sync group. A sync group is a group of databases that you want to
synchronize.
Data Sync uses a hub and spoke topology to synchronize data. You define one of the databases in the sync
group as the hub database. The rest of the databases are member databases. Sync occurs only between the hub
and individual members.
The Hub Database must be an Azure SQL Database.
The member databases can be either databases in Azure SQL Database or in instances of SQL Server.
The Sync Metadata Database contains the metadata and log for Data Sync. The Sync Metadata Database
has to be an Azure SQL Database located in the same region as the Hub Database. The Sync Metadata
Database is customer created and customer owned. You can only have one Sync Metadata Database per
region and subscription. Sync Metadata Database cannot be deleted or renamed while sync groups or sync
agents exist. Microsoft recommends creating a new, empty database for use as the Sync Metadata Database.
Data Sync creates tables in this database and runs a frequent workload.
NOTE
If you're using an on-premises database as a member database, you have to install and configure a local sync agent.
When to use
Data Sync is useful in cases where data needs to be kept updated across several databases in Azure SQL
Database or SQL Server. Here are the main use cases for Data Sync:
Hybrid Data Synchronization: With Data Sync, you can keep data synchronized between your databases
in SQL Server and Azure SQL Database to enable hybrid applications. This capability may appeal to
customers who are considering moving to the cloud and would like to put some of their application in Azure.
Distributed Applications: In many cases, it's beneficial to separate different workloads across different
databases. For example, if you have a large production database, but you also need to run a reporting or
analytics workload on this data, it's helpful to have a second database for this additional workload. This
approach minimizes the performance impact on your production workload. You can use Data Sync to keep
these two databases synchronized.
Globally Distributed Applications: Many businesses span several regions and even several
countries/regions. To minimize network latency, it's best to have your data in a region close to you. With Data
Sync, you can easily keep databases in regions around the world synchronized.
Data Sync isn't the preferred solution for the following scenarios:
ETL (OLTP to OLAP): use Azure Data Factory or SQL Server Integration Services instead.
Migration from SQL Server to Azure SQL Database: use Azure Database Migration Service instead. However,
SQL Data Sync can be used after the migration is completed, to ensure that the source and target are kept in
sync.
How it works
Tracking data changes: Data Sync tracks changes using insert, update, and delete triggers. The changes are
recorded in a side table in the user database. Note that BULK INSERT doesn't fire triggers by default. If
FIRE_TRIGGERS isn't specified, no insert triggers execute. Add the FIRE_TRIGGERS option so Data Sync can
track those inserts (see the sketch after this list).
Synchronizing data: Data Sync is designed in a hub and spoke model. The hub syncs with each member
individually. Changes from the hub are downloaded to the member and then changes from the member are
uploaded to the hub.
Resolving conflicts: Data Sync provides two options for conflict resolution, Hub wins or Member wins.
If you select Hub wins, the changes in the hub always overwrite changes in the member.
If you select Member wins, the changes in the member overwrite changes in the hub. If there's more
than one member, the final value depends on which member syncs first.
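As mentioned for change tracking above, bulk loads must fire triggers for Data Sync to see the rows. A minimal sketch (the table name and file path are placeholders):

-- Without FIRE_TRIGGERS, rows loaded by BULK INSERT are not recorded by Data Sync's insert triggers.
BULK INSERT dbo.Customers
FROM 'C:\data\customers.csv'
WITH (FIELDTERMINATOR = ',', ROWTERMINATOR = '\n', FIRE_TRIGGERS);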
Compare with Transactional Replication
DATA SYNC | TRANSACTIONAL REPLICATION
The new private link feature allows you to choose a service managed private endpoint to establish a secure
connection between the sync service and your member/hub databases during the data synchronization process.
A service managed private endpoint is a private IP address within a specific virtual network and subnet. Within
Data Sync, the service managed private endpoint is created by Microsoft and is exclusively used by the Data
Sync service for a given sync operation. Before setting up the private link, read the general requirements for the
feature.
NOTE
You must manually approve the service managed private endpoint in the Private endpoint connections page of the
Azure portal during the sync group deployment or by using PowerShell.
Get started
Set up Data Sync in the Azure portal
Set up Azure SQL Data Sync
Data Sync Agent - Data Sync Agent for Azure SQL Data Sync
Set up Data Sync with PowerShell
Use PowerShell to sync between multiple databases in Azure SQL Database
Use PowerShell to sync between a database in Azure SQL Database and a database in a SQL Server instance
Set up Data Sync with REST API
Use REST API to sync between multiple databases in Azure SQL Database
Review the best practices for Data Sync
Best practices for Azure SQL Data Sync
Did something go wrong?
Troubleshoot issues with Azure SQL Data Sync
IMPORTANT
Changing the value of an existing primary key will result in the following faulty behavior:
Data between hub and member can be lost even though sync does not report any issue.
Sync can fail because the tracking table contains a row that no longer exists in the source due to the primary key change.
Snapshot isolation must be enabled for both Sync members and hub (see the sketch after this list). For more
info, see Snapshot Isolation in SQL Server.
In order to use Data Sync private link, both the member and hub databases must be hosted in Azure
(same or different regions), in the same cloud type (for example, both in public cloud or both in
government cloud). Additionally, to use private link, Microsoft.Network resource providers must be
Registered for the subscriptions that host the hub and member servers. Lastly, you must manually
approve the private link for Data Sync during the sync configuration, within the "Private endpoint
connections" section in the Azure portal or through PowerShell. For more information on how to approve
the private link, see Set up SQL Data Sync. Once you approve the service managed private endpoint, all
communication between the sync service and the member/hub databases will happen over the private
link. Existing sync groups can be updated to have this feature enabled.
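A minimal sketch of enabling snapshot isolation on a database, as referenced in the requirement above (the database name is a placeholder):

-- Required on the hub and on every member database that participates in sync.
ALTER DATABASE [mydb] SET ALLOW_SNAPSHOT_ISOLATION ON;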
General limitations
A table can't have an identity column that isn't the primary key.
A primary key can't have the following data types: sql_variant, binary, varbinary, image, xml.
Be cautious when you use the following data types as a primary key, because the supported precision is only
to the second: time, datetime, datetime2, datetimeoffset.
The names of objects (databases, tables, and columns) can't contain the printable characters period ( . ), left
square bracket ( [ ), or right square bracket ( ] ).
A table name can't contain printable characters: ! " # $ % ' ( ) * + - or space.
Azure Active Directory authentication isn't supported.
If there are tables with the same name but different schema (for example, dbo.customers and
sales.customers ) only one of the tables can be added into sync.
Columns with user-defined data types aren't supported.
Moving servers between different subscriptions isn't supported.
If two primary keys are only different in case (for example, Foo and foo ), Data Sync won't support this
scenario.
Truncating tables is not an operation supported by Data Sync (changes won't be tracked).
Using an Azure SQL Hyperscale database as a Hub or Sync Metadata database is not supported. However, a
Hyperscale database can be a member database in a Data Sync topology.
Memory-optimized tables are not supported.
Schema changes aren't automatically replicated. A custom solution can be created to automate the
replication of schema changes.
Unsupported data types
FileStream
SQL/CLR UDT
XMLSchemaCollection (XML supported)
Cursor, RowVersion, Timestamp, Hierarchyid
Unsupported column types
Data Sync can't sync read-only or system-generated columns. For example:
Computed columns.
System-generated columns for temporal tables.
Limitations on service and database dimensions
DIMENSIONS | LIMIT | WORKAROUND
NOTE
There may be up to 30 endpoints in a single sync group if there is only one sync group. If there is more than one sync
group, the total number of endpoints across all sync groups cannot exceed 30. If a database belongs to multiple sync
groups, it is counted as multiple endpoints, not one.
Network requirements
NOTE
If you use Sync private link, these network requirements do not apply.
When the sync group is established, the Data Sync service needs to connect to the hub database. When
establishing the sync group, the Azure SQL server must have the Allow Azure services and resources to access
this server option enabled in its Firewalls and virtual networks settings.
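A hedged sketch of one way to create such a rule with T-SQL instead of the portal (run in the master database of the logical server; the rule name is a placeholder, and the 0.0.0.0 start/end range corresponds to allowing Azure services):

-- Allows Azure services (including the Data Sync service) to reach the server.
EXECUTE sp_set_firewall_rule @name = N'AllowAzureServices', @start_ip_address = '0.0.0.0', @end_ip_address = '0.0.0.0';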
NOTE
If you change the sync group's schema settings, you will need to allow the Data Sync service to access the server again so
that the hub database can be re-provisioned.
Next steps
Update the schema of a synced database
Do you have to update the schema of a database in a sync group? Schema changes aren't automatically
replicated. For some solutions, see the following articles:
Automate the replication of schema changes with SQL Data Sync in Azure
Use PowerShell to update the sync schema in an existing sync group
Monitor and troubleshoot
Is SQL Data Sync doing as expected? To monitor activity and troubleshoot issues, see the following articles:
Monitor SQL Data Sync with Azure Monitor logs
Troubleshoot issues with Azure SQL Data Sync
Learn more about Azure SQL Database
For more info about Azure SQL Database, see the following articles:
SQL Database Overview
Database Lifecycle Management
Data Sync Agent for SQL Data Sync
9/13/2022 • 10 minutes to read
IMPORTANT
SQL Data Sync does not support Azure SQL Managed Instance or Azure Synapse Analytics at this time.
If you provide LocalSystem as the value of SERVICEACCOUNT , use SQL Server authentication when
you configure the agent to connect to SQL Server.
If you provide a domain user account or a local user account as the value of SERVICEACCOUNT , you
also have to provide the password with the SERVICEPASSWORD argument. For example,
SERVICEACCOUNT="<domain>\<user>" SERVICEPASSWORD="<password>" .
You can also turn on logging for all installations that are performed by Windows Installer. The
Microsoft Knowledge Base article How to enable Windows Installer logging provides a one-click
solution to turn on logging for Windows Installer. It also provides the location of the logs.
The client agent doesn't work after I cancel the uninstall
The client agent doesn't work, even after you cancel its uninstallation.
Cause. This occurs because the SQL Data Sync client agent doesn't store credentials.
Resolution. You can try these two solutions:
Use services.msc to reenter the credentials for the client agent.
Uninstall this client agent and then install a new one. Download and install the latest client agent from
Download Center.
My database isn't listed in the agent list
When you attempt to add an existing SQL Server database to a sync group, the database doesn't appear in the
list of agents.
These scenarios might cause this issue:
Cause. The client agent and sync group are in different datacenters.
Resolution. The client agent and the sync group must be in the same datacenter. To set this up, you have
two options:
Create a new agent in the datacenter where the sync group is located. Then, register the database with
that agent.
Delete the current sync group. Then, re-create the sync group in the datacenter where the agent is
located.
Cause. The client agent's list of databases isn't current.
Resolution. Stop and then restart the client agent service.
The local agent downloads the list of associated databases only on the first submission of the agent key. It
doesn't download the list of associated databases on subsequent agent key submissions. Databases that
are registered during an agent move don't show up in the original agent instance.
Client agent doesn't start (Error 1069)
You discover that the agent isn't running on a computer that hosts SQL Server. When you attempt to manually
start the agent, you see a dialog box that displays the message, "Error 1069: The service did not start due to a
logon failure."
Cause. A likely cause of this error is that the password on the local server has changed since you created
the agent and agent password.
Resolution. Update the agent's password to your current server password:
1. Locate the SQL Data Sync client agent service.
a. Select Start.
b. In the search box, enter services.msc.
c. In the search results, select Services.
d. In the Services window, scroll to the entry for SQL Data Sync Agent.
2. Right-click SQL Data Sync Agent, and then select Stop.
3. Right-click SQL Data Sync Agent, and then select Properties.
4. On SQL Data Sync Agent Properties, select the Log On tab.
5. In the Password box, enter your password.
6. In the Confirm Password box, reenter your password.
7. Select Apply, and then select OK.
8. In the Services window, right-click the SQL Data Sync Agent service, and then select Start.
9. Close the Services window.
I can't submit the agent key
After you create or re-create a key for an agent, you try to submit the key through the SqlAzureDataSyncAgent
application. The submission fails to complete.
NOTE
If sync metadata tables remain after a "force delete", use deprovisioningutil.exe to clean them up.
Local Sync Agent app can't connect to the local sync service
Resolution . Try the following steps:
1. Exit the app.
2. Open the Component Services Panel.
a. In the search box on the taskbar, enter services.msc.
b. In the search results, double-click Services.
3. Stop the SQL Data Sync service.
4. Restart the SQL Data Sync service.
5. Reopen the app.
Example
Example
Example
SqlDataSyncAgentCommand.exe -action submitagentkey -agentkey [agent key generated from portal, PowerShell,
or API] -username [user name for the sync metadata database] -password [password for the sync metadata database]
Register a database
Usage
Examples
Unregister a database
When you use this command to unregister a database, it deprovisions the database completely. If the database
participates in other sync groups, this operation breaks the other sync groups.
Usage
Example
Update credentials
Usage
Examples
Next steps
For more info about SQL Data Sync, see the following articles:
Overview - Sync data across multiple cloud and on-premises databases with SQL Data Sync in Azure
Set up Data Sync
In the portal - Tutorial: Set up SQL Data Sync to sync data between Azure SQL Database and SQL
Server
With PowerShell
Use PowerShell to sync between multiple databases in Azure SQL Database
Use PowerShell to sync between a database in Azure SQL Database and a database in a SQL
Server instance
Best practices - Best practices for Azure SQL Data Sync
Monitor - Monitor SQL Data Sync with Azure Monitor logs
Troubleshoot - Troubleshoot issues with Azure SQL Data Sync
Update the sync schema
With Transact-SQL - Automate replication of schema changes with SQL Data Sync in Azure
With PowerShell - Use PowerShell to update the sync schema in an existing sync group
Best practices for Azure SQL Data Sync
9/13/2022 • 10 minutes to read
IMPORTANT
Azure SQL Data Sync does not support Azure SQL Managed Instance or Azure Synapse Analytics at this time.
Setup
Database considerations and constraints
Database size
When you create a new database, set the maximum size so that it's always larger than the database you deploy.
If you don't set the maximum size to larger than the deployed database, sync fails. Although SQL Data Sync
doesn't offer automatic growth, you can run the ALTER DATABASE command to increase the size of the database
after it has been created. Ensure that you stay within the database size limits.
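A minimal sketch of increasing the maximum size after creation (the database name and size are placeholders):

-- Raise the maximum size so the deployed data plus Data Sync metadata fit comfortably.
ALTER DATABASE [mydb] MODIFY (MAXSIZE = 250 GB);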
IMPORTANT
SQL Data Sync stores additional metadata with each database. Ensure that you account for this metadata when you
calculate space needed. The amount of added overhead is related to the width of the tables (for example, narrow tables
require more overhead) and the amount of traffic.
Sync
Avoid slow and costly initial sync
In this section, we discuss the initial sync of a sync group. Learn how to help prevent an initial sync from taking
longer and being more costly than necessary.
How initial sync works
When you create a sync group, start with data in only one database. If you have data in multiple databases, SQL
Data Sync treats each row as a conflict that needs to be resolved. This conflict resolution causes the initial sync
to go slowly. If you have data in multiple databases, initial sync might take between several days and several
months, depending on the database size.
If the databases are in different datacenters, each row must travel between the different datacenters. This
increases the cost of an initial sync.
Recommendation
If possible, start with data in only one of the sync group's databases.
Design to avoid sync loops
A sync loop occurs when there are circular references within a sync group. In that scenario, each change in one
database is endlessly and circularly replicated through the databases in the sync group.
Ensure that you avoid sync loops, because they cause performance degradation and might significantly increase
costs.
Changes that fail to propagate
Reasons that changes fail to propagate
Changes might fail to propagate for one of the following reasons:
Schema/datatype incompatibility.
Inserting null in non-nullable columns.
Violating foreign key constraints.
What happens when changes fail to propagate?
Sync group shows that it's in a Warning state.
Details are listed in the portal UI log viewer.
If the issue is not resolved for 45 days, the database becomes out of date.
NOTE
These changes never propagate. The only way to recover in this scenario is to re-create the sync group.
Recommendation
Monitor the sync group and database health regularly through the portal and log interface.
Maintenance
Avoid out-of-date databases and sync groups
A sync group or a database in a sync group can become out of date. When a sync group's status is Out-of-date ,
it stops functioning. When a database's status is Out-of-date , data might be lost. It's best to avoid this scenario
instead of trying to recover from it.
Avoid out-of-date databases
A database's status is set to Out-of-date when it has been offline for 45 days or more. To avoid an Out-of-date
status on a database, ensure that none of the databases are offline for 45 days or more.
Avoid out-of-date sync groups
A sync group's status is set to Out-of-date when any change in the sync group fails to propagate to the rest of
the sync group for 45 days or more. To avoid an Out-of-date status on a sync group, regularly check the sync
group's history log. Ensure that all conflicts are resolved, and that changes are successfully propagated
throughout the sync group databases.
A sync group might fail to apply a change for one of these reasons:
Schema incompatibility between tables.
Data incompatibility between tables.
Inserting a row with a null value in a column that doesn't allow null values.
Updating a row with a value that violates a foreign key constraint.
To prevent out-of-date sync groups:
Update the schema to allow the values that are contained in the failed rows.
Update the foreign key values to include the values that are contained in the failed rows.
Update the data values in the failed row so they are compatible with the schema or foreign keys in the target
database.
Avoid deprovisioning issues
In some circumstances, unregistering a database with a client agent might cause sync to fail.
Scenario
1. Sync group A was created by using a SQL Database instance and a SQL Server database, which is associated
with local agent 1.
2. The same on-premises database is registered with local agent 2 (this agent is not associated with any sync
group).
3. Unregistering the on-premises database from local agent 2 removes the tracking and meta tables for sync
group A for the on-premises database.
4. Sync group A operations fail, with this error: "The current operation could not be completed because the
database is not provisioned for sync or you do not have permissions to the sync configuration tables."
Solution
To avoid this scenario, don't register a database with more than one agent.
To recover from this scenario:
1. Remove the database from each sync group that it belongs to.
2. Add the database back into each sync group that you removed it from.
3. Deploy each affected sync group (this action provisions the database).
Modifying a sync group
Don't attempt to remove a database from a sync group and then edit the sync group without first deploying one
of the changes.
Instead, first remove a database from a sync group. Then, deploy the change and wait for deprovisioning to
finish. When deprovisioning is finished, you can edit the sync group and deploy the changes.
If you attempt to remove a database and then edit a sync group without first deploying one of the changes, one
or the other operation fails. The portal interface might become inconsistent. If this happens, refresh the page to
restore the correct state.
Avoid schema refresh timeout
If you have a complex schema to sync, you may encounter an "operation timeout" during a schema refresh if the
sync metadata database has a lower SKU (example: basic).
Solution
To mitigate this issue, consider scaling up your sync metadata database resources.
Next steps
For more information about SQL Data Sync, see:
Overview - Sync data across multiple cloud and on-premises databases with Azure SQL Data Sync
Set up SQL Data Sync
In the portal - Tutorial: Set up SQL Data Sync to sync data between Azure SQL Database and SQL
Server
With PowerShell
Use PowerShell to sync between multiple databases in Azure SQL Database
Use PowerShell to sync between a database in SQL Database and a database in a SQL Server
instance
Data Sync Agent - Data Sync Agent for Azure SQL Data Sync
Monitor - Monitor SQL Data Sync with Azure Monitor logs
Troubleshoot - Troubleshoot issues with Azure SQL Data Sync
Update the sync schema
With Transact-SQL - Automate the replication of schema changes in Azure SQL Data Sync
With PowerShell - Use PowerShell to update the sync schema in an existing sync group
For more information about SQL Database, see:
SQL Database overview
Database lifecycle management
Troubleshoot issues with SQL Data Sync
9/13/2022 • 10 minutes to read
IMPORTANT
SQL Data Sync does not support Azure SQL Managed Instance or Azure Synapse Analytics at this time.
Sync issues
Sync fails in the portal UI for on-premises databases that are associated with the client agent
My sync group is stuck in the processing state
I see erroneous data in my tables
I see inconsistent primary key data after a successful sync
I see a significant degradation in performance
I see this message: "Cannot insert the value NULL into the column <column>. Column does not allow
nulls." What does this mean, and how can I fix it?
How does Data Sync handle circular references? That is, when the same data is synced in multiple sync
groups, and keeps changing as a result?
Error message "Sync0022 Customer does not have authorization to perform action
'syncGroupOperationResults/read'"
Sync fails in the portal UI for on-premises databases that are associated with the client agent
Sync fails in the SQL Data Sync portal UI for on-premises databases that are associated with the client agent. On
the local computer that's running the agent, you see System.IO.IOException errors in the Event Log. The errors
say that the disk has insufficient space.
Cause. The drive has insufficient space.
Resolution. Create more space on the drive on which the %TEMP% directory is located.
My sync group is stuck in the processing state
A sync group in SQL Data Sync has been in the processing state for a long time. It doesn't respond to the stop
command, and the logs show no new entries.
Any of the following conditions might result in a sync group being stuck in the processing state:
Cause . The client agent is offline
Resolution . Be sure that the client agent is online and then try again.
Cause . The client agent is uninstalled or missing.
Resolution . If the client agent is uninstalled or otherwise missing:
1. Remove the agent XML file from the SQL Data Sync installation folder, if the file exists.
2. Install the agent on an on-premises computer (it can be the same or a different computer). Then,
submit the agent key that's generated in the portal for the agent that's showing as offline.
Cause . The SQL Data Sync service is stopped.
Resolution . Restart the SQL Data Sync service.
1. In the Start menu, search for Services.
2. In the search results, select Services.
3. Find the SQL Data Sync service.
4. If the service status is Stopped, right-click the service name, and then select Start.
NOTE
If the preceding information doesn't move your sync group out of the processing state, Microsoft Support can reset the
status of your sync group. To have your sync group status reset, in the Microsoft Q&A question page for Azure SQL
Database, create a post. In the post, include your subscription ID and the sync group ID for the group that needs to be
reset. A Microsoft Support engineer will respond to your post, and will let you know when the status has been reset.
IMPORTANT
Don't delete any files while sync is in progress.
Next steps
For more information about SQL Data Sync, see:
Overview - Sync data across multiple cloud and on-premises databases with SQL Data Sync in Azure
Set up Data Sync
In the portal - Tutorial: Set up SQL Data Sync to sync data between Azure SQL Database and SQL
Server
With PowerShell
Use PowerShell to sync between multiple databases in Azure SQL Database
Use PowerShell to sync between a database in Azure SQL Database and a database in a SQL
Server instance
Data Sync Agent - Data Sync Agent for SQL Data Sync in Azure
Best practices - Best practices for SQL Data Sync in Azure
Monitor - Monitor SQL Data Sync with Azure Monitor logs
Update the sync schema
With Transact-SQL - Automate the replication of schema changes in SQL Data Sync in Azure
With PowerShell - Use PowerShell to update the sync schema in an existing sync group
For more information about SQL Database, see:
SQL Database Overview
Database Lifecycle Management
Scaling out with Azure SQL Database
Sharding
Sharding is a technique to distribute large amounts of identically structured data across a number of
independent databases. It is especially popular with cloud developers creating Software as a Service (SaaS)
offerings for end customers or businesses. These end customers are often referred to as "tenants". Sharding
may be required for any number of reasons:
The total amount of data is too large to fit within the constraints of an individual database
The transaction throughput of the overall workload exceeds the capabilities of an individual database
Tenants may require physical isolation from each other, so separate databases are needed for each tenant
Different sections of a database may need to reside in different geographies for compliance, performance, or
geopolitical reasons.
In other scenarios, such as ingestion of data from distributed devices, sharding can be used to fill a set of
databases that are organized temporally. For example, a separate database can be dedicated to each day or
week. In that case, the sharding key can be an integer representing the date (present in all rows of the sharded
tables) and queries retrieving information for a date range must be routed by the application to the subset of
databases covering the range in question.
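As a rough illustration of this temporal pattern (not taken from the tools themselves), the application could derive the integer key as yyyymmdd from each row's timestamp and route a date-range query only to the databases that cover that range; the database catalog used here is a hypothetical stand-in for whatever lookup the application maintains:

using System;
using System.Collections.Generic;
using System.Linq;

static class TemporalShardingSketch
{
    // Derive an integer sharding key of the form yyyymmdd from a timestamp.
    public static int ToShardingKey(DateTime timestamp) =>
        timestamp.Year * 10000 + timestamp.Month * 100 + timestamp.Day;

    // Return the subset of databases whose day key falls inside the requested range.
    public static IEnumerable<string> DatabasesForRange(
        IDictionary<int, string> databaseByDayKey, DateTime from, DateTime to)
    {
        int fromKey = ToShardingKey(from);
        int toKey = ToShardingKey(to);
        return databaseByDayKey
            .Where(kvp => kvp.Key >= fromKey && kvp.Key <= toKey)
            .Select(kvp => kvp.Value)
            .Distinct();
    }
}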
Sharding works best when every transaction in an application can be restricted to a single value of a sharding
key. That ensures that all transactions are local to a specific database.
Multi-tenant and single-tenant
Some applications use the simplest approach of creating a separate database for each tenant. This approach is
the single tenant sharding pattern that provides isolation, backup/restore ability, and resource scaling at the
granularity of the tenant. With single tenant sharding, each database is associated with a specific tenant ID value
(or customer key value), but that key need not always be present in the data itself. It is the application's
responsibility to route each request to the appropriate database - and the client library can simplify this.
Other scenarios pack multiple tenants together into databases, rather than isolating them into separate
databases. This pattern is a typical multi-tenant sharding pattern - and it may be driven by the fact that an
application manages large numbers of small tenants. In multi-tenant sharding, the rows in the database tables
are all designed to carry a key identifying the tenant ID or sharding key. Again, the application tier is responsible
for routing a tenant's request to the appropriate database, and this can be supported by the elastic database
client library. In addition, row-level security can be used to filter which rows each tenant can access - for details,
see Multi-tenant applications with elastic database tools and row-level security. Redistributing data among
databases may be needed with the multi-tenant sharding pattern, and is facilitated by the elastic database split-
merge tool. To learn more about design patterns for SaaS applications using elastic pools, see Design Patterns
for Multi-tenant SaaS Applications with Azure SQL Database.
Move data from multiple to single-tenancy databases
When creating a SaaS application, it is typical to offer prospective customers a trial version of the software. In
this case, it is cost-effective to use a multi-tenant database for the data. However, when a prospect becomes a
customer, a single-tenant database is preferable because it provides better performance. If the customer had created
data during the trial period, use the split-merge tool to move the data from the multi-tenant to the new single-
tenant database.
Next steps
For a sample app that demonstrates the client library, see Get started with Elastic Database tools.
To convert existing databases to use the tools, see Migrate existing databases to scale out.
To see the specifics of the elastic pool, see Price and performance considerations for an elastic pool, or create a
new pool with elastic pools.
Additional resources
Not using elastic database tools yet? Check out our Getting Started Guide. For questions, contact us on the
Microsoft Q&A question page for SQL Database and for feature requests, add new ideas or vote for existing
ideas in the SQL Database feedback forum.
Distributed transactions across cloud databases
Common scenarios
Elastic database transactions enable applications to make atomic changes to data stored in several different
databases. Both SQL Database and SQL Managed Instance support client-side development experiences in C#
and .NET. A server-side experience (code written in stored procedures or server-side scripts) using Transact-SQL
is available for SQL Managed Instance only.
IMPORTANT
Running elastic database transactions between Azure SQL Database and Azure SQL Managed Instance is not supported.
An elastic database transaction can only span a set of databases in SQL Database or a set of databases across managed
instances.
<LocalResources>
...
<LocalStorage name="TEMP" sizeInMB="5000" cleanOnRoleRecycle="false" />
<LocalStorage name="TMP" sizeInMB="5000" cleanOnRoleRecycle="false" />
</LocalResources>
<Startup>
<Task commandLine="install.cmd" executionContext="elevated" taskType="simple">
<Environment>
...
<Variable name="TEMP">
<RoleInstanceValue
xpath="/RoleEnvironment/CurrentInstance/LocalResources/LocalResource[@name='TEMP']/@path" />
</Variable>
<Variable name="TMP">
<RoleInstanceValue
xpath="/RoleEnvironment/CurrentInstance/LocalResources/LocalResource[@name='TMP']/@path" />
</Variable>
</Environment>
</Task>
</Startup>
USE AdventureWorks2012;
GO
SET XACT_ABORT ON;
GO
BEGIN DISTRIBUTED TRANSACTION;
-- Delete candidate from local instance.
DELETE AdventureWorks2012.HumanResources.JobCandidate
WHERE JobCandidateID = 13;
-- Delete candidate from remote instance.
DELETE RemoteServer.AdventureWorks2012.HumanResources.JobCandidate
WHERE JobCandidateID = 13;
COMMIT TRANSACTION;
GO
The following example shows a transaction that is implicitly promoted to a distributed transaction once the second
SqlConnection is opened within the TransactionScope.
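A minimal C# sketch of that pattern is shown below; the table names, placeholder connection strings, and the surrounding class are illustrative assumptions rather than part of the service API:

using System;
using System.Data.SqlClient;
using System.Transactions;

class ImplicitPromotionSketch
{
    static void Run(string connStringDb1, string connStringDb2)
    {
        using (var s = new TransactionScope())
        {
            using (var conn1 = new SqlConnection(connStringDb1))
            {
                conn1.Open();                      // enlists in a local transaction
                var cmd1 = conn1.CreateCommand();
                cmd1.CommandText = "INSERT INTO T1 VALUES (1)";
                cmd1.ExecuteNonQuery();
            }

            using (var conn2 = new SqlConnection(connStringDb2))
            {
                // Opening a second connection inside the same TransactionScope
                // implicitly promotes the transaction to a distributed
                // elastic database transaction.
                conn2.Open();
                var cmd2 = conn2.CreateCommand();
                cmd2.CommandText = "INSERT INTO T2 VALUES (2)";
                cmd2.ExecuteNonQuery();
            }

            // Commit happens when the scope completes and is disposed.
            s.Complete();
        }
    }
}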
The following diagram shows a Server Trust Group with managed instances that can execute distributed
transactions with .NET or Transact-SQL:
Monitoring transaction status
Use Dynamic Management Views (DMVs) to monitor status and progress of your ongoing elastic database
transactions. All DMVs related to transactions are relevant for distributed transactions in SQL Database and SQL
Managed Instance. You can find the corresponding list of DMVs here: Transaction Related Dynamic Management
Views and Functions (Transact-SQL).
These DMVs are particularly useful:
sys.dm_tran_active_transactions : Lists currently active transactions and their status. The UOW (Unit Of
Work) column can identify the different child transactions that belong to the same distributed transaction. All
transactions within the same distributed transaction carry the same UOW value. For more information, see
the DMV documentation.
sys.dm_tran_database_transactions : Provides additional information about transactions, such as
placement of the transaction in the log. For more information, see the DMV documentation.
sys.dm_tran_locks : Provides information about the locks that are currently held by ongoing transactions.
For more information, see the DMV documentation.
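As a hedged illustration, a client application can read these DMVs over an ordinary connection to one of the participating databases; the connection string and output format below are assumptions:

using System;
using System.Data.SqlClient;

class TransactionMonitorSketch
{
    static void ListActiveTransactions(string connectionString)
    {
        using (var conn = new SqlConnection(connectionString))
        {
            conn.Open();
            var cmd = conn.CreateCommand();
            // transaction_uow groups the child transactions of one distributed transaction.
            cmd.CommandText =
                "SELECT transaction_id, name, transaction_begin_time, transaction_uow " +
                "FROM sys.dm_tran_active_transactions";
            using (var reader = cmd.ExecuteReader())
            {
                while (reader.Read())
                {
                    Console.WriteLine("{0}\t{1}\t{2}\t{3}",
                        reader["transaction_id"], reader["name"],
                        reader["transaction_begin_time"], reader["transaction_uow"]);
                }
            }
        }
    }
}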
Limitations
The following limitations currently apply to elastic database transactions in SQL Database:
Only transactions across databases in SQL Database are supported. Other X/Open XA resource providers and
databases outside of SQL Database can't participate in elastic database transactions. That means that elastic
database transactions can't stretch across on-premises SQL Server and Azure SQL Database. For distributed
transactions on premises, continue to use MSDTC.
Only client-coordinated transactions from a .NET application are supported. Server-side support for T-SQL
such as BEGIN DISTRIBUTED TRANSACTION is planned, but not yet available.
Transactions across WCF services aren't supported. For example, you have a WCF service method that
executes a transaction. Enclosing the call within a transaction scope will fail with a
System.ServiceModel.ProtocolException.
The following limitations currently apply to distributed transactions in SQL Managed Instance:
Only transactions across databases in managed instances are supported. Other X/Open XA resource
providers and databases outside of Azure SQL Managed Instance can't participate in distributed transactions.
That means that distributed transactions can't stretch across on-premises SQL Server and Azure SQL
Managed Instance. For distributed transactions on premises, continue to use MSDTC.
Transactions across WCF services aren't supported. For example, you have a WCF service method that
executes a transaction. Enclosing the call within a transaction scope will fail with a
System.ServiceModel.ProtocolException.
Azure SQL Managed Instance must be part of a Server trust group in order to participate in distributed
transactions.
Limitations of Server trust groups affect distributed transactions.
Managed Instances that participate in distributed transactions need to have connectivity over private
endpoints (using a private IP address from the virtual network where they are deployed) and need to be
mutually referenced using private FQDNs. Client applications can use distributed transactions on private
endpoints. Additionally, when Transact-SQL leverages linked servers referencing private endpoints,
client applications can use distributed transactions on public endpoints as well. This limitation is explained in
the following diagram.
Next steps
For questions, reach out to us on the Microsoft Q&A question page for SQL Database.
For feature requests, add them to the SQL Database feedback forum or SQL Managed Instance forum.
Azure SQL Database elastic query overview
(preview)
NOTE
Elastic query works best for reporting scenarios where most of the processing (filtering, aggregation) can be performed
on the external source side. It is not suitable for ETL operations where large amounts of data are transferred from
remote database(s). For heavy reporting workloads or data warehousing scenarios with more complex queries, also
consider using Azure Synapse Analytics.
IMPORTANT
You must possess ALTER ANY EXTERNAL DATA SOURCE permission. This permission is included with the ALTER DATABASE
permission. ALTER ANY EXTERNAL DATA SOURCE permissions are needed to refer to the underlying data source.
Reference data : The topology is used for reference data management. In the figure below, two tables (T1 and
T2) with reference data are kept on a dedicated database. Using an elastic query, you can now access tables T1
and T2 remotely from other databases, as shown in the figure. Use topology 1 if reference tables are small or
remote queries into the reference table have selective predicates.
Figure 2 Vertical partitioning - Using elastic query to query reference data
Cross-database querying : Elastic queries enable use cases that require querying across several databases in
SQL Database. Figure 3 shows four different databases: CRM, Inventory, HR, and Products. Queries performed in
one of the databases also need access to one or all of the other databases. Using an elastic query, you can
configure your database for this case by running a few simple DDL statements on each of the four databases.
After this one-time configuration, access to a remote table is as simple as referring to a local table from your T-
SQL queries or from your BI tools. This approach is recommended if the remote queries do not return large
results.
Figure 3 Vertical partitioning - Using elastic query to query across various databases
The following steps configure elastic database queries for vertical partitioning scenarios that require access to a
table located on remote databases in SQL Database with the same schema:
CREATE MASTER KEY mymasterkey
CREATE DATABASE SCOPED CREDENTIAL mycredential
CREATE/DROP EXTERNAL DATA SOURCE mydatasource of type RDBMS
CREATE/DROP EXTERNAL TABLE mytable
After running the DDL statements, you can access the remote table "mytable" as though it were a local table.
Azure SQL Database automatically opens a connection to the remote database, processes your request on the
remote database, and returns the results.
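For illustration, once the DDL above has been run, an application can query the external table over a normal connection to the elastic query database; the connection string and the query used here are assumptions:

using System;
using System.Data.SqlClient;

class ElasticQuerySketch
{
    static void QueryRemoteTable(string elasticQueryDbConnectionString)
    {
        using (var conn = new SqlConnection(elasticQueryDbConnectionString))
        {
            conn.Open();
            var cmd = conn.CreateCommand();
            // "mytable" is the external table defined by the DDL steps above;
            // the aggregation is largely evaluated on the remote database.
            cmd.CommandText = "SELECT COUNT(*) FROM mytable";
            Console.WriteLine("Remote row count: {0}", cmd.ExecuteScalar());
        }
    }
}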
The following steps configure elastic database queries for horizontal partitioning scenarios that require access
to a set of tables located on (typically) several remote databases in SQL Database:
CREATE MASTER KEY mymasterkey
CREATE DATABASE SCOPED CREDENTIAL mycredential
Create a shard map representing your data tier using the elastic database client library.
CREATE/DROP EXTERNAL DATA SOURCE mydatasource of type SHARD_MAP_MANAGER
CREATE/DROP EXTERNAL TABLE mytable
Once you have performed these steps, you can access the horizontally partitioned table "mytable" as though it
were a local table. Azure SQL Database automatically opens multiple parallel connections to the remote
databases where the tables are physically stored, processes the requests on the remote databases, and returns
the results. More information on the steps required for the horizontal partitioning scenario can be found in
elastic query for horizontal partitioning.
To begin coding, see Getting started with elastic query for horizontal partitioning (sharding).
IMPORTANT
Successful execution of elastic query over a large set of databases relies heavily on the availability of each of the
databases during query execution. If one of the databases is not available, the entire query fails. If you plan to query
hundreds or thousands of databases at once, make sure your client application has retry logic embedded, or consider
leveraging Elastic Database Jobs (preview) and querying smaller subsets of databases, consolidating the results of each
query into a single destination.
T-SQL querying
Once you have defined your external data sources and your external tables, you can use regular SQL Server
connection strings to connect to the databases where you defined your external tables. You can then run T-SQL
statements over your external tables on that connection with the limitations outlined below. You can find more
information and examples of T-SQL queries in the documentation topics for horizontal partitioning and vertical
partitioning.
Connectivity for tools
You can use regular SQL Server connection strings to connect your applications and BI or data integration tools
to databases that have external tables. Make sure that SQL Server is supported as a data source for your tool.
Once connected, refer to the elastic query database and the external tables in that database just like you would
do with any other SQL Server database that you connect to with your tool.
IMPORTANT
Elastic queries are only supported when connecting with SQL Server Authentication.
Cost
Elastic query is included in the cost of Azure SQL Database. Note that topologies where your remote databases
are in a different data center than the elastic query endpoint are supported, but data egress from remote
databases is charged at regular Azure rates.
Preview limitations
Running your first elastic query can take up to a few minutes on smaller resources in the Standard and General
Purpose service tiers. This time is necessary to load the elastic query functionality; loading performance
improves with higher service tiers and compute sizes.
Scripting of external data sources or external tables from SSMS or SSDT is not yet supported.
Import/Export for SQL Database does not yet support external data sources and external tables. If you need
to use Import/Export, drop these objects before exporting and then re-create them after importing.
Elastic query currently only supports read-only access to external tables. You can, however, use full Transact-
SQL functionality on the database where the external table is defined. This can be useful, for example, to persist
temporary results using SELECT <column_list> INTO <local_table>, or to define stored
procedures on the elastic query database that refer to external tables.
Except for nvarchar(max), LOB types (including spatial types) are not supported in external table definitions.
As a workaround, you can create a view on the remote database that casts the LOB type into nvarchar(max),
define your external table over the view instead of the base table and then cast it back into the original LOB
type in your queries.
Columns of the nvarchar(max) data type in the result set disable the advanced batching techniques used in the
elastic query implementation, and can degrade query performance by an order of magnitude, or even two orders
of magnitude in non-canonical use cases where a large amount of non-aggregated data is transferred as the
result of the query.
Column statistics over external tables are currently not supported. Table statistics are supported, but need to
be created manually.
Cursors are not supported for external tables in Azure SQL Database.
Elastic query works with Azure SQL Database only. You cannot use it for querying a SQL Server instance.
Private links are currently not supported with elastic query for those databases that are targets of external
data sources.
Next steps
For a vertical partitioning tutorial, see Getting started with cross-database query (vertical partitioning).
For syntax and sample queries for vertically partitioned data, see Querying vertically partitioned data
For a horizontal partitioning (sharding) tutorial, see Getting started with elastic query for horizontal
partitioning (sharding).
For syntax and sample queries for horizontally partitioned data, see Querying horizontally partitioned data
See sp_execute_remote for a stored procedure that executes a Transact-SQL statement on a single remote
Azure SQL Database or set of databases serving as shards in a horizontal partitioning scheme.
Building scalable cloud databases
Documentation
1. Get started with Elastic Database tools
2. Elastic Database features
3. Shard map management
4. Migrate existing databases to scale out
5. Data dependent routing
6. Multi-shard queries
7. Adding a shard using Elastic Database tools
8. Multi-tenant applications with Elastic Database tools and row-level security
9. Upgrade client library apps
10. Elastic queries overview
11. Elastic Database tools glossary
12. Elastic Database client library with Entity Framework
13. Elastic Database client library with Dapper
14. Split-merge tool
15. Performance counters for shard map manager
16. FAQ for Elastic Database tools
Client capabilities
Scaling out applications using sharding presents challenges for both the developer and the administrator.
The client library simplifies the management tasks by providing tools that let both developers and
administrators manage scaled-out databases. In a typical example, there are many databases, known as "shards,"
to manage. Customers might be co-located in the same database (a multi-tenant scheme), or each customer might
have its own database (a single-tenant scheme). The client library includes these features:
Shard map management : A special database called the "shard map manager" is created. Shard map
management is the ability for an application to manage metadata about its shards. Developers can use
this functionality to register databases as shards, describe mappings of individual sharding keys or key
ranges to those databases, and maintain this metadata as the number and composition of databases
evolves to reflect capacity changes. Without the Elastic Database client library, you would need to spend a
lot of time writing the management code when implementing sharding. For details, see Shard map
management.
Data dependent routing : Imagine a request coming into the application. Based on the sharding key
value of the request, the application needs to determine the correct database. It then opens a connection
to the database to process the request. Data dependent routing provides the
ability to open connections with a single easy call into the shard map of the application. Data dependent
routing was another area of infrastructure code that is now covered by functionality in the Elastic
Database client library. For details, see Data dependent routing.
Multi-shard queries (MSQ) : Multi-shard querying works when a request involves several (or all)
shards. A multi-shard query executes the same T-SQL code on all shards or a set of shards. The results
from the participating shards are merged into an overall result set using UNION ALL semantics. The
functionality as exposed through the client library handles many tasks, including: connection
management, thread management, fault handling, and intermediate results processing. MSQ can query
up to hundreds of shards. For details, see Multi-shard querying.
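A minimal sketch of a multi-shard query using the client library's Microsoft.Azure.SqlDatabase.ElasticScale.Query types follows; the table and column names are placeholders, and PartialResults is chosen only to show that the execution policy is configurable:

using System;
using Microsoft.Azure.SqlDatabase.ElasticScale.Query;
using Microsoft.Azure.SqlDatabase.ElasticScale.ShardManagement;

class MultiShardQuerySketch
{
    static void QueryAllShards(ShardMap shardMap, string shardUserConnectionString)
    {
        // Fan the same T-SQL out to every shard in the map; results are merged
        // with UNION ALL semantics.
        using (var conn = new MultiShardConnection(shardMap.GetShards(), shardUserConnectionString))
        using (MultiShardCommand cmd = conn.CreateCommand())
        {
            cmd.CommandText = "SELECT CustomerID, Name FROM dbo.Customers"; // example table
            cmd.ExecutionPolicy = MultiShardExecutionPolicy.PartialResults; // tolerate unavailable shards
            using (MultiShardDataReader reader = cmd.ExecuteReader())
            {
                while (reader.Read())
                {
                    Console.WriteLine("{0}\t{1}", reader["CustomerID"], reader["Name"]);
                }
            }
        }
    }
}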
In general, customers using Elastic Database tools can expect to get full T-SQL functionality when submitting
shard-local operations as opposed to cross-shard operations that have their own semantics.
Next steps
Elastic Database client library (Java, .NET) - to download the library.
Get started with Elastic Database tools - to try the sample app that demonstrates client functions.
GitHub (Java, .NET) - to make contributions to the code.
Azure SQL Database elastic query overview - to use elastic queries.
Moving data between scaled-out cloud databases - for instructions on using the split-merge tool .
Additional resources
Not using elastic database tools yet? Check out our Getting Started Guide. For questions, contact us on the
Microsoft Q&A question page for SQL Database and for feature requests, add new ideas or vote for existing
ideas in the SQL Database feedback forum.
Scale out databases with the shard map manager
Understanding how these maps are constructed is essential to shard map management. This is done using the
ShardMapManager class (Java, .NET) in the Elastic Database client library.
Or you can implement a multi-tenant database model using a list mapping to assign multiple tenants to an
individual database. For example, DB1 is used to store information about tenant ID 1 and 5, and DB2 stores data
for tenant 7 and tenant 10.
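A minimal sketch of that list-mapping layout with the client library follows; the shard map name and server names are placeholders:

using Microsoft.Azure.SqlDatabase.ElasticScale.ShardManagement;

class ListShardMapSketch
{
    static void MapTenants(ShardMapManager shardMapManager)
    {
        // Get or create a list shard map keyed on an integer tenant ID.
        ListShardMap<int> tenantShardMap;
        if (!shardMapManager.TryGetListShardMap<int>("TenantShardMap", out tenantShardMap))
        {
            tenantShardMap = shardMapManager.CreateListShardMap<int>("TenantShardMap");
        }

        Shard db1 = tenantShardMap.CreateShard(new ShardLocation("<server>.database.windows.net", "DB1"));
        Shard db2 = tenantShardMap.CreateShard(new ShardLocation("<server>.database.windows.net", "DB2"));

        // Multiple tenants can point at the same database.
        tenantShardMap.CreatePointMapping(1, db1);
        tenantShardMap.CreatePointMapping(5, db1);
        tenantShardMap.CreatePointMapping(7, db2);
        tenantShardMap.CreatePointMapping(10, db2);
    }
}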
.NET             JAVA
integer          integer
long             long
guid             uuid
byte[]           byte[]
datetime         timestamp
timespan         duration
datetimeoffset   offsetdatetime
KEY              SHARD LOCATION
1                Database_A
3                Database_B
4                Database_C
6                Database_B
...              ...

KEY              SHARD LOCATION
[1,50)           Database_A
[50,100)         Database_B
[100,200)        Database_C
[400,600)        Database_C
...              ...
Each of the tables shown above is a conceptual example of a ShardMap object. Each row is a simplified
example of an individual PointMapping (for the list shard map) or RangeMapping (for the range shard map)
object.
Constructing a ShardMapManager
A ShardMapManager object is constructed using a factory (Java, .NET) pattern. The
ShardMapManagerFactory.GetSqlShardMapManager (Java, .NET) method takes credentials (including the
server name and database name holding the GSM) in the form of a ConnectionString and returns an instance
of a ShardMapManager .
Please Note: The ShardMapManager should be instantiated only once per app domain, within the
initialization code for an application. Creation of additional instances of ShardMapManager in the same app
domain results in increased memory and CPU utilization of the application. A ShardMapManager can contain
any number of shard maps. While a single shard map may be sufficient for many applications, there are times
when different sets of databases are used for different schema or for unique purposes; in those cases multiple
shard maps may be preferable.
In this code, an application tries to open an existing ShardMapManager with the TryGetSqlShardMapManager
(Java, .NET) method. If objects representing a Global ShardMapManager (GSM) do not yet exist inside the
database, the client library creates them using the CreateSqlShardMapManager (Java, .NET) method.
// Try to get a reference to the Shard Map Manager in the shardMapManager database.
// If it doesn't already exist, then create it.
ShardMapManager shardMapManager = null;
boolean shardMapManagerExists =
    ShardMapManagerFactory.tryGetSqlShardMapManager(shardMapManagerConnectionString,
        ShardMapManagerLoadPolicy.Lazy, refShardMapManager);
shardMapManager = refShardMapManager.argValue;
if (shardMapManagerExists) {
ConsoleUtils.writeInfo("Shard Map %s already exists", shardMapManager);
}
else {
// The Shard Map Manager does not exist, so create it
shardMapManager = ShardMapManagerFactory.createSqlShardMapManager(shardMapManagerConnectionString);
ConsoleUtils.writeInfo("Created Shard Map %s", shardMapManager);
}
// Try to get a reference to the Shard Map Manager via the Shard Map Manager database.
// If it doesn't already exist, then create it.
ShardMapManager shardMapManager;
bool shardMapManagerExists = ShardMapManagerFactory.TryGetSqlShardMapManager(
connectionString,
ShardMapManagerLoadPolicy.Lazy,
out shardMapManager);
if (shardMapManagerExists)
{
Console.WriteLine("Shard Map Manager already exists");
}
else
{
// Create the Shard Map Manager.
ShardMapManagerFactory.CreateSqlShardMapManager(connectionString);
Console.WriteLine("Created SqlShardMapManager");
shardMapManager = ShardMapManagerFactory.GetSqlShardMapManager(
connectionString,
ShardMapManagerLoadPolicy.Lazy);
// The connectionString contains server name, database name, and admin credentials
// for privileges on both the GSM and the shards themselves.
}
For the .NET version, you can use PowerShell to create a new Shard Map Manager. An example is available here.
// Creates a new Range Shard Map with the specified name, or gets the Range Shard Map if it already exists.
public static RangeShardMap<T> CreateOrGetRangeShardMap<T>(ShardMapManager shardMapManager, string shardMapName)
{
// Try to get a reference to the Shard Map.
RangeShardMap<T> shardMap;
bool shardMapExists = shardMapManager.TryGetRangeShardMap(shardMapName, out shardMap);
if (shardMapExists)
{
ConsoleUtils.WriteInfo("Shard Map {0} already exists", shardMap.Name);
}
else
{
// The Shard Map does not exist, so create it
shardMap = shardMapManager.CreateRangeShardMap<T>(shardMapName);
ConsoleUtils.WriteInfo("Created Shard Map {0}", shardMap.Name);
}
return shardMap;
}
// Take the mapping for key 25 offline and then delete it from the shard map.
sm.DeleteMapping(sm.MarkMappingOffline(sm.GetMappingForKey(25)));
Adding a shard
Applications often need to add new shards to handle data that is expected from new keys or key ranges, for a
shard map that already exists. For example, an application sharded by Tenant ID may need to provision a new
shard for a new tenant, or data sharded monthly may need a new shard provisioned before the start of each
new month.
If the new range of key values is not already part of an existing mapping and no data movement is necessary, it
is simple to add the new shard and associate the new key or range to that shard. For details on adding new
shards, see Adding a new shard.
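A minimal sketch of that simple case follows; the shard database name and the key range [300, 400) are placeholders, and the new database is assumed to have been provisioned already:

using Microsoft.Azure.SqlDatabase.ElasticScale.ShardManagement;

class AddShardSketch
{
    static void AddShardForNewRange(RangeShardMap<int> shardMap)
    {
        // Register the new, already-provisioned database as a shard.
        Shard newShard = shardMap.CreateShard(
            new ShardLocation("<server>.database.windows.net", "ShardDb_NewRange"));

        // Associate the new key range [300, 400) with the shard. No data movement
        // is needed because no existing mapping covers this range yet.
        shardMap.CreateRangeMapping(new Range<int>(300, 400), newShard);
    }
}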
For scenarios that require data movement, however, the split-merge tool is needed to orchestrate the data
movement between shards in combination with the necessary shard map updates. For details on using the split-
merge tool, see Overview of split-merge.
Additional resources
Not using elastic database tools yet? Check out our Getting Started Guide. For questions, contact us on the
Microsoft Q&A question page for SQL Database and for feature requests, add new ideas or vote for existing
ideas in the SQL Database feedback forum.
Use data-dependent routing to route a query to an
appropriate database
Use lowest privilege credentials possible for getting the shard map
If an application is not manipulating the shard map itself, the credentials used in the factory method should have
read-only permissions on the Global Shard Map database. These credentials are typically different from
credentials used to open connections to the shard map manager. See also Credentials used to access the Elastic
Database client library.
Call the OpenConnectionForKey method
The ShardMap.OpenConnectionForKey method (Java, .NET) returns a connection ready for issuing
commands to the appropriate database based on the value of the key parameter. Shard information is cached in
the application by the ShardMapManager , so these requests do not typically involve a database lookup
against the Global Shard Map database.
// Syntax:
public Connection openConnectionForKey(Object key, String connectionString, ConnectionOptions options)
// Syntax:
public SqlConnection OpenConnectionForKey<TKey>(TKey key, string connectionString, ConnectionOptions
options)
The key parameter is used as a lookup key into the shard map to determine the appropriate database for the
request.
The connectionString is used to pass only the user credentials for the desired connection. No database
name or server name is included in this connectionString since the method determines the database and
server using the ShardMap .
The connectionOptions (Java, .NET) should be set to ConnectionOptions.Validate in an environment
where shard maps may change and rows may move to other databases as a result of split or merge
operations. This validation involves a brief query to the local shard map on the target database (not to the
global shard map) before the connection is delivered to the application.
If the validation against the local shard map fails (indicating that the cache is incorrect), the Shard Map Manager
queries the global shard map to obtain the new correct value for the lookup, updates the cache, and obtains and
returns the appropriate database connection.
Use ConnectionOptions.None only when shard mapping changes are not expected while an application is
online. In that case, the cached values can be assumed to always be correct, and the extra round-trip validation
call to the target database can be safely skipped. That reduces database traffic. The connectionOptions may
also be set via a value in a configuration file to indicate whether sharding changes are expected or not during a
period of time.
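As a hedged sketch, such a configuration value might be read like this; the app setting name is an assumption:

using System.Configuration;
using Microsoft.Azure.SqlDatabase.ElasticScale.ShardManagement;

class ConnectionOptionsSketch
{
    // Read a hypothetical app setting to decide whether shard-map validation is needed.
    static ConnectionOptions GetConnectionOptions()
    {
        bool shardMapChangesExpected =
            bool.Parse(ConfigurationManager.AppSettings["ShardMapChangesExpected"] ?? "true");

        return shardMapChangesExpected
            ? ConnectionOptions.Validate  // re-check the local shard map before handing out the connection
            : ConnectionOptions.None;     // skip validation when mappings are known to be static
    }
}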
This example uses the value of an integer key CustomerID , using a ShardMap object named
customerShardMap .
ps.setInt(1, productId);
ps.setInt(2, customerId);
ps.executeUpdate();
} catch (SQLException e) {
e.printStackTrace();
}
int customerId = 12345;
int newPersonId = 4321;
// Connect to the shard for that customer ID. No need to call a SqlConnection
// constructor followed by the Open method.
using (SqlConnection conn = customerShardMap.OpenConnectionForKey(customerId,
Configuration.GetCredentialsConnectionString(), ConnectionOptions.Validate))
{
// Execute a simple command.
SqlCommand cmd = conn.CreateCommand();
cmd.CommandText = @"UPDATE Sales.Customer
SET PersonID = @newPersonID WHERE CustomerID = @customerID";
cmd.Parameters.AddWithValue("@customerID", customerId);cmd.Parameters.AddWithValue("@newPersonID",
newPersonId);
cmd.ExecuteNonQuery();
}
The OpenConnectionForKey method returns a new already-open connection to the correct database.
Connections utilized in this way still take full advantage of connection pooling.
The OpenConnectionForKeyAsync method (Java, .NET) is also available if your application makes use of
asynchronous programming.
ps.setInt(1, productId);
ps.setInt(2, customerId);
ps.executeUpdate();
} catch (SQLException e) {
e.printStackTrace();
}
});
} catch (Exception e) {
throw new StoreException(e.getMessage(), e);
}
int customerId = 12345;
int newPersonId = 4321;

Configuration.SqlRetryPolicy.ExecuteAction(() =>
{
    // Connect to the shard for that customer ID, as in the earlier example.
    using (SqlConnection conn = customerShardMap.OpenConnectionForKey(customerId,
        Configuration.GetCredentialsConnectionString(), ConnectionOptions.Validate))
    {
        SqlCommand cmd = conn.CreateCommand();
        cmd.CommandText = @"UPDATE Sales.Customer
            SET PersonID = @newPersonID WHERE CustomerID = @customerID";
        cmd.Parameters.AddWithValue("@customerID", customerId);
        cmd.Parameters.AddWithValue("@newPersonID", newPersonId);
        cmd.ExecuteNonQuery();
        Console.WriteLine("Update completed");
    }
});
Packages necessary to implement transient fault handling are downloaded automatically when you build the
elastic database sample application.
Transactional consistency
Transactional properties are guaranteed for all operations local to a shard. For example, transactions submitted
through data-dependent routing execute within the scope of the target shard for the connection. At this time,
there are no capabilities provided for enlisting multiple connections into a transaction, and therefore there are
no transactional guarantees for operations performed across shards.
Next steps
To detach a shard, or to reattach a shard, see Using the RecoveryManager class to fix shard map problems.
Additional resources
Not using elastic database tools yet? Check out our Getting Started Guide. For questions, contact us on the
Microsoft Q&A question page for SQL Database and for feature requests, add new ideas or vote for existing
ideas in the SQL Database feedback forum.
Credentials used to access the Elastic Database
client library
The variable smmAdminConnectionString is a connection string that contains the management credentials.
The user ID and password provide read/write access to both shard map database and individual shards. The
management connection string also includes the server name and database name to identify the global shard
map database. Here is a typical connection string for that purpose:
"Server=<yourserver>.database.windows.net;Database=<yourdatabase>;User ID=<yourmgmtusername>;Password=
<yourmgmtpassword>;Trusted_Connection=False;Encrypt=True;Connection Timeout=30;”
Do not use values in the form of "username@server"—instead just use the "username" value. This is because
credentials must work against both the shard map manager database and individual shards, which may be on
different servers.
Access credentials
When creating a shard map manager in an application that does not administer shard maps, use credentials that
have read-only permissions on the global shard map. The information retrieved from the global shard map
under these credentials is used for data-dependent routing and to populate the shard map cache on the client.
The credentials are provided through the same call pattern to GetSqlShardMapManager :
// Obtain shard map manager.
ShardMapManager shardMapManager = ShardMapManagerFactory.GetSqlShardMapManager(smmReadOnlyConnectionString,
ShardMapManagerLoadPolicy.Lazy);
Note the use of the smmReadOnlyConnectionString to reflect the use of different credentials for this access
on behalf of non-admin users: these credentials should not provide write permissions on the global shard map.
Connection credentials
Additional credentials are needed when using the OpenConnectionForKey (Java, .NET) method to access a
shard associated with a sharding key. These credentials need to provide permissions for read-only access to the
local shard map tables residing on the shard. This is needed to perform connection validation for data-
dependent routing on the shard. This code snippet allows data access in the context of data-dependent routing:
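A minimal sketch of that snippet, reusing the customerShardMap object and key value from the earlier data-dependent routing example (the table and column names are placeholders):

int customerId = 12345;
using (SqlConnection conn = customerShardMap.OpenConnectionForKey(
    customerId, smmUserConnectionString, ConnectionOptions.Validate))
{
    // The connection is already open and routed to the shard holding this customer.
    SqlCommand cmd = conn.CreateCommand();
    cmd.CommandText = "SELECT Name FROM Sales.Customer WHERE CustomerID = @customerID";
    cmd.Parameters.AddWithValue("@customerID", customerId);
    var name = cmd.ExecuteScalar();
}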
In this example, smmUserConnectionString holds the connection string for the user credentials. For Azure
SQL Database, here is a typical connection string for user credentials:
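A typical form, assuming SQL authentication and mirroring the management string shown earlier but without a server or database name, is:

"User ID=<yourusername>;Password=<yourpassword>;Trusted_Connection=False;Encrypt=True;Connection Timeout=30;"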
As with the admin credentials, do not use values in the form of "username@server". Instead, just use
"username". Also note that the connection string does not contain a server name and database name. That is
because the OpenConnectionForKey call automatically directs the connection to the correct shard based on
the key. Hence, the database name and server name are not provided.
See also
Managing databases and logins in Azure SQL Database
Securing your SQL Database
Elastic Database jobs
Additional resources
Not using elastic database tools yet? Check out our Getting Started Guide. For questions, contact us on the
Microsoft Q&A question page for SQL Database and for feature requests, add new ideas or vote for existing
ideas in the SQL Database feedback forum.
Moving data between scaled-out cloud databases
Download
Microsoft.Azure.SqlDatabase.ElasticScale.Service.SplitMerge
Documentation
1. Elastic database split-merge tool tutorial
2. Split-merge security configuration
3. Split-merge security considerations
4. Shard map management
5. Migrate existing databases to scale-out
6. Elastic database tools
7. Elastic database tools glossary
// reference tables
schemaInfo.Add(new ReferenceTableInfo("dbo", "region"));
schemaInfo.Add(new ReferenceTableInfo("dbo", "nation"));
// sharded tables
schemaInfo.Add(new ShardedTableInfo("dbo", "customer", "C_CUSTKEY"));
schemaInfo.Add(new ShardedTableInfo("dbo", "orders", "O_CUSTKEY"));
// publish
smm.GetSchemaInfoCollection().Add(Configuration.ShardMapName, schemaInfo);
The tables ‘region’ and ‘nation’ are defined as reference tables and will be copied with
split/merge/move operations. ‘customer’ and ‘orders’ in turn are defined as sharded tables.
C_CUSTKEY and O_CUSTKEY serve as the sharding key.
Referential integrity
The split-merge service analyzes dependencies between tables and uses foreign key-primary key
relationships to stage the operations for moving reference tables and shardlets. In general, reference
tables are copied first in dependency order, then shardlets are copied in order of their dependencies
within each batch. This is necessary so that FK-PK constraints on the target shard are honored as the new
data arrives.
Shard map consistency and eventual completion
In the presence of failures, the split-merge service resumes operations after any outage and aims to
complete any in progress requests. However, there may be unrecoverable situations, e.g., when the target
shard is lost or compromised beyond repair. Under those circumstances, some shardlets that were
supposed to be moved may continue to reside on the source shard. The service ensures that shardlet
mappings are only updated after the necessary data has been successfully copied to the target. Shardlets
are only deleted on the source once all their data has been copied to the target and the corresponding
mappings have been updated successfully. The deletion operation happens in the background while the
range is already online on the target shard. The split-merge service always ensures correctness of the
mappings stored in the shard map.
Billing
The split-merge service runs as a cloud service in your Microsoft Azure subscription. Therefore charges for
cloud services apply to your instance of the service. Unless you frequently perform split/merge/move
operations, we recommend you delete your split-merge cloud service. That saves costs for running or deployed
cloud service instances. You can re-deploy and start your readily runnable configuration whenever you need to
perform split or merge operations.
Monitoring
Status tables
The split-merge Service provides the RequestStatus table in the metadata store database for monitoring of
completed and ongoing requests. The table lists a row for each split-merge request that has been submitted to
this instance of the split-merge service. It gives the following information for each request:
Timestamp
The time and date when the request was started.
OperationId
A GUID that uniquely identifies the request. This GUID can also be used to cancel the operation while it
is still ongoing.
Status
The current state of the request. For ongoing requests, it also lists the current phase in which the request
is.
CancelRequest
A flag that indicates whether the request has been canceled.
Progress
A percentage estimate of completion for the operation. A value of 50 indicates that the operation is
approximately 50% complete.
Details
An XML value that provides a more detailed progress report. The progress report is periodically updated
as sets of rows are copied from source to target. In case of failures or exceptions, this column also
includes more detailed information about the failure.
Azure Diagnostics
The split-merge service uses Azure Diagnostics based on Azure SDK 2.5 for monitoring and diagnostics. You
control the diagnostics configuration as explained here: Enabling Diagnostics in Azure Cloud Services and
Virtual Machines. The download package includes two diagnostics configurations - one for the web role and one
for the worker role. It includes the definitions to log Performance Counters, IIS logs, Windows Event Logs, and
split-merge application event logs.
Deploy Diagnostics
NOTE
This article uses the Azure Az PowerShell module, which is the recommended PowerShell module for interacting with
Azure. To get started with the Az PowerShell module, see Install Azure PowerShell. To learn how to migrate to the Az
PowerShell module, see Migrate Azure PowerShell from AzureRM to Az.
IMPORTANT
The PowerShell Azure Resource Manager module is still supported, but all future development is for the Az.Sql module.
For these cmdlets, see AzureRM.Sql. The arguments for the commands in the Az module and in the AzureRm modules are
substantially identical.
To enable monitoring and diagnostics using the diagnostic configuration for the web and worker roles provided
by the NuGet package, run the following commands using Azure PowerShell:
$storageName = "<azureStorageAccount>"
$key = "<azureStorageAccountKey"
$storageContext = New-AzStorageContext -StorageAccountName $storageName -StorageAccountKey $key
$configPath = "<filePath>\SplitMergeWebContent.diagnostics.xml"
$serviceName = "<cloudServiceName>"
You can find more information on how to configure and deploy diagnostics settings here: Enabling Diagnostics
in Azure Cloud Services and Virtual Machines.
Retrieve diagnostics
You can easily access your diagnostics from the Visual Studio Server Explorer in the Azure part of the Server
Explorer tree. Open a Visual Studio instance, and in the menu bar click View, and Server Explorer. Click the Azure
icon to connect to your Azure subscription. Then navigate to Azure -> Storage -> <your storage account> ->
Tables -> WADLogsTable. For more information, see Server Explorer.
The WADLogsTable highlighted in the figure above contains the detailed events from the split-merge service’s
application log. Note that the default configuration of the downloaded package is geared towards a production
deployment. Therefore the interval at which logs and counters are pulled from the service instances is large (5
minutes). For test and development, lower the interval by adjusting the diagnostics settings of the web or the
worker role to your needs. Right-click on the role in the Visual Studio Server Explorer (see above) and then
adjust the Transfer Period in the dialog for the Diagnostics configuration settings:
Performance
In general, better performance is to be expected from higher, more performant service tiers. Higher IO, CPU and
memory allocations for the higher service tiers benefit the bulk copy and delete operations that the split-merge
service uses. For that reason, increase the service tier just for those databases for a defined, limited period of
time.
The service also performs validation queries as part of its normal operations. These validation queries check for
unexpected presence of data in the target range and ensure that any split/merge/move operation starts from a
consistent state. These queries all work over sharding key ranges defined by the scope of the operation and the
batch size provided as part of the request definition. These queries perform best when an index is present that
has the sharding key as the leading column.
In addition, a uniqueness property with the sharding key as the leading column will allow the service to use an
optimized approach that limits resource consumption in terms of log space and memory. This uniqueness
property is required to move large data sizes (typically above 1GB).
How to upgrade
1. Follow the steps in Deploy a split-merge service.
2. Change your cloud service configuration file for your split-merge deployment to reflect the new
configuration parameters. A new required parameter is the information about the certificate used for
encryption. An easy way to do this is to compare the new configuration template file from the download
against your existing configuration. Make sure you add the settings for
“DataEncryptionPrimaryCertificateThumbprint” and “DataEncryptionPrimary” for both the web and the
worker role.
3. Before deploying the update to Azure, ensure that all currently running split-merge operations have finished.
You can easily do this by querying the RequestStatus and PendingWorkflows tables in the split-merge
metadata database for ongoing requests.
4. Update your existing cloud service deployment for split-merge in your Azure subscription with the new
package and your updated service configuration file.
You do not need to provision a new metadata database for split-merge to upgrade. The new version will
automatically upgrade your existing metadata database to the new version.
Additional resources
Not using elastic database tools yet? Check out our Getting Started Guide. For questions, contact us on the
Microsoft Q&A question page for SQL Database and for feature requests, add new ideas or vote for existing
ideas in the SQL Database feedback forum.
Elastic Database tools glossary
Range shard map : A shard map in which the shard distribution strategy is based on multiple ranges of
contiguous values.
Reference tables : Tables that are not sharded but are replicated across shards. For example, zip codes can be
stored in a reference table.
Shard : A database in Azure SQL Database that stores data from a sharded data set.
Shard elasticity : The ability to perform both horizontal scaling and vertical scaling.
Sharded tables : Tables that are sharded, i.e., whose data is distributed across shards based on their sharding
key values.
Sharding key : A column value that determines how data is distributed across shards. The value type can be
one of the following: int , bigint , varbinary , or uniqueidentifier .
Shard set : The collection of shards that are attributed to the same shard map in the shard map manager.
Shardlet : All of the data associated with a single value of a sharding key on a shard. A shardlet is the smallest
unit of data movement possible when redistributing sharded tables.
Shard map : The set of mappings between sharding keys and their respective shards.
Shard map manager : A management object and data store that contains the shard map(s), shard locations,
and mappings for one or more shard sets.
Verbs
Horizontal scaling : The act of scaling out (or in) a collection of shards by adding or removing shards to a
shard map, as shown below.
Merge : The act of moving shardlets from two shards to one shard and updating the shard map accordingly.
Shardlet move : The act of moving a single shardlet to a different shard.
Shard : The act of horizontally partitioning identically structured data across multiple databases based on a
sharding key.
Split : The act of moving several shardlets from one shard to another (typically new) shard. A sharding key is
provided by the user as the split point.
Vertical Scaling : The act of scaling up (or down) the compute size of an individual shard. For example,
changing a shard from Standard to Premium (which results in more computing resources).
Additional resources
Not using elastic database tools yet? Check out our Getting Started Guide. For questions, contact us on the
Microsoft Q&A question page for SQL Database and for feature requests, add new ideas or vote for existing
ideas in the SQL Database feedback forum.
Resource management in Azure SQL Database
TIP
For Azure Synapse Analytics dedicated SQL pool limits, see capacity limits and memory and concurrency limits.
Max elastic pools per logical server : Limited by number of DTUs or vCores. For example, if each
pool is 1000 DTUs, then a server can support 54 pools.
IMPORTANT
As the number of databases approaches the limit per logical server, the following can occur:
Increasing latency in running queries against the master database. This includes views of resource utilization statistics
such as sys.resource_stats .
Increasing latency in management operations and rendering portal viewpoints that involve enumerating databases in
the server.
NOTE
To obtain more DTU/eDTU quota, vCore quota, or more logical servers than the default number, submit a new support
request in the Azure portal. For more information, see Request quota increases for Azure SQL Database.
What happens when resource limits are reached
Compute CPU
When database compute CPU utilization becomes high, query latency increases, and queries can even time out.
Under these conditions, queries may be queued by the service and are provided resources for execution as
resources become free. When encountering high compute utilization, mitigation options include:
Increasing the compute size of the database or elastic pool to provide the database with more compute
resources. See Scale single database resources and Scale elastic pool resources.
Optimizing queries to reduce CPU resource utilization of each query. For more information, see Query
Tuning/Hinting.
Storage
When data space used reaches the maximum data size limit, either at the database level or at the elastic pool
level, inserts and updates that increase data size fail and clients receive an error message. SELECT and DELETE
statements remain unaffected.
In Premium and Business Critical service tiers, clients also receive an error message if combined storage
consumption by data, transaction log, and tempdb for a single database or an elastic pool exceeds maximum
local storage size. For more information, see Storage space governance.
When encountering high space utilization, mitigation options include:
Increase maximum data size of the database or elastic pool, or scale up to a service objective with a higher
maximum data size limit. See Scale single database resources and Scale elastic pool resources.
If the database is in an elastic pool, then alternatively the database can be moved outside of the pool, so that
its storage space isn't shared with other databases.
Shrink a database to reclaim unused space. In elastic pools, shrinking a database provides more storage for
other databases in the pool. For more information, see Manage file space in Azure SQL Database.
Check if high space utilization is due to a spike in the size of Persistent Version Store (PVS). PVS is a part of
each database, and is used to implement Accelerated Database Recovery. To determine current PVS size, see
PVS troubleshooting. A common reason for large PVS size is a transaction that is open for a long time
(hours), preventing cleanup of older row versions in PVS.
For databases and elastic pools in Premium and Business Critical service tiers that consume large amounts of
storage, you may receive an out-of-space error even though used space in the database or elastic pool is
below its maximum data size limit. This may happen if tempdb or transaction log files consume a large
amount of storage toward the maximum local storage limit. Fail over the database or elastic pool to reset
tempdb to its initial smaller size, or shrink transaction log to reduce local storage consumption.
NOTE
The initial offering of Azure SQL Database supported only single threaded queries. At that time, the number of requests
was always equivalent to the number of workers. Error message 10928 in Azure SQL Database contains the wording "The
request limit for the database is N and has been reached" for backwards compatibility purposes. The limit reached is
actually the number of workers. If your max degree of parallelism (MAXDOP) setting is equal to zero or is greater than
one, the number of workers may be much higher than the number of requests, and the limit may be reached much
sooner than when MAXDOP is equal to one. Learn more about error 10928 in Resource governance errors.
Reduce the size of memory grants : For more information about memory grants, see the
Understanding SQL Server memory grant blog post. A common solution for avoiding excessively large
memory grants is keeping statistics up to date. This results in more accurate estimates of memory
consumption by the query engine, avoiding unnecessarily large memory grants.
Reduce the size of query plan cache : The database engine caches query plans in memory, to avoid
compiling a query plan for every query execution. To avoid query plan cache bloat caused by caching
plans that are only used once, make sure to use parameterized queries, and consider enabling the
OPTIMIZE_FOR_AD_HOC_WORKLOADS database-scoped configuration.
Reduce the size of lock memory : The database engine uses memory for locks. When possible, avoid
large transactions that may acquire a large number of locks and cause high lock memory consumption.
Resource governance
To enforce resource limits, Azure SQL Database uses a resource governance implementation that is based on
SQL Server Resource Governor, modified and extended to run in the cloud. In SQL Database, multiple resource
pools and workload groups, with resource limits set at both pool and group levels, provide a balanced
Database-as-a-Service. User workload and internal workloads are classified into separate resource pools and
workload groups. User workload on the primary and readable secondary replicas, including geo-replicas, is
classified into the SloSharedPool1 resource pool and UserPrimaryGroup.DBId[N] workload groups, where [N]
stands for the database ID value. In addition, there are multiple resource pools and workload groups for various
internal workloads.
In addition to using Resource Governor to govern resources within the database engine, Azure SQL Database
also uses Windows Job Objects for process level resource governance, and Windows File Server Resource
Manager (FSRM) for storage quota management.
Azure SQL Database resource governance is hierarchical in nature. From top to bottom, limits are enforced at
the OS level and at the storage volume level using operating system resource governance mechanisms and
Resource Governor, then at the resource pool level using Resource Governor, and then at the workload group
level using Resource Governor. Resource governance limits in effect for the current database or elastic pool are
reported in the sys.dm_user_db_resource_governance view.
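For example, the following query returns the resource governance limits in effect for the current database or elastic pool:

-- View resource governance limits for the current database or elastic pool.
SELECT *
FROM sys.dm_user_db_resource_governance;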
Data I/O governance
Data I/O governance is a process in Azure SQL Database used to limit both read and write physical I/O against
data files of a database. IOPS limits are set for each service level to minimize the "noisy neighbor" effect, to
provide resource allocation fairness in a multi-tenant service, and to stay within the capabilities of the
underlying hardware and storage.
For single databases, workload group limits are applied to all storage I/O against the database. For elastic pools,
workload group limits apply to each database in the pool. Additionally, the resource pool limit applies to the
cumulative I/O of the elastic pool. In tempdb, I/O is subject to workload group limits, with the exception of the
Basic, Standard, and General Purpose service tiers, where higher tempdb I/O limits apply. In general,
resource pool limits may not be achievable by the workload against a database (either single or pooled),
because workload group limits are lower than resource pool limits and limit IOPS/throughput sooner. However,
pool limits may be reached by the combined workload against multiple databases in the same pool.
For example, if a query generates 1000 IOPS without any I/O resource governance, but the workload group
maximum IOPS limit is set to 900 IOPS, the query won't be able to generate more than 900 IOPS. However, if
the resource pool maximum IOPS limit is set to 1500 IOPS, and the total I/O from all workload groups
associated with the resource pool exceeds 1500 IOPS, then the I/O of the same query may be reduced below the
workload group limit of 900 IOPS.
The IOPS and throughput max values returned by the sys.dm_user_db_resource_governance view act as
limits/caps, not as guarantees. Further, resource governance doesn't guarantee any specific storage latency. The
best achievable latency, IOPS, and throughput for a given user workload depend not only on I/O resource
governance limits, but also on the mix of I/O sizes used, and on the capabilities of the underlying storage. SQL
Database uses I/Os that vary in size between 512 KB and 4 MB. For the purposes of enforcing IOPS limits, every
I/O is accounted regardless of its size, with the exception of databases with data files in Azure Storage. In that
case, IOs larger than 256 KB are accounted as multiple 256-KB I/Os, to align with Azure Storage I/O accounting.
For Basic, Standard, and General Purpose databases, which use data files in Azure Storage, the
primary_group_max_io value may not be achievable if a database doesn't have enough data files to cumulatively
provide this number of IOPS, or if data isn't distributed evenly across files, or if the performance tier of
underlying blobs limits IOPS/throughput below the resource governance limits. Similarly, with small log IOs
generated by frequent transaction commits, the primary_max_log_rate value may not be achievable by a
workload due to the IOPS limit on the underlying Azure Storage blob. For databases using Azure Premium
Storage, Azure SQL Database uses sufficiently large storage blobs to obtain needed IOPS/throughput, regardless
of database size. For larger databases, multiple data files are created to increase total IOPS/throughput capacity.
Resource utilization values such as avg_data_io_percent and avg_log_write_percent , reported in the
sys.dm_db_resource_stats, sys.resource_stats, and sys.elastic_pool_resource_stats views, are calculated as
percentages of maximum resource governance limits. Therefore, when factors other than resource governance
limit IOPS/throughput, it's possible to see IOPS/throughput flattening out and latencies increasing as the
workload increases, even though reported resource utilization remains below 100%.
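To track these utilization percentages over recent history, you can query sys.dm_db_resource_stats, which reports one row approximately every 15 seconds; a minimal sketch:

-- Recent resource utilization, reported as a percentage of the resource governance limits.
SELECT end_time,
       avg_cpu_percent,
       avg_data_io_percent,
       avg_log_write_percent,
       avg_memory_usage_percent
FROM sys.dm_db_resource_stats
ORDER BY end_time DESC;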
To determine read and write IOPS, throughput, and latency per database file, use the
sys.dm_io_virtual_file_stats() function. This function surfaces all I/O against the database, including background
I/O that isn't accounted towards avg_data_io_percent , but uses IOPS and throughput of the underlying storage,
and can impact observed storage latency. The function reports additional latency that may be introduced by I/O
resource governance for reads and writes, in the io_stall_queued_read_ms and io_stall_queued_write_ms
columns respectively.
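A minimal sketch of such a per-file query, joining to sys.database_files for the file type:

-- Per-file I/O counts, bytes, and latency, including delays added by I/O resource governance.
SELECT f.file_id,
       f.type_desc,
       vfs.num_of_reads,
       vfs.num_of_writes,
       vfs.num_of_bytes_read,
       vfs.num_of_bytes_written,
       vfs.io_stall_read_ms,
       vfs.io_stall_write_ms,
       vfs.io_stall_queued_read_ms,   -- added read latency from I/O resource governance
       vfs.io_stall_queued_write_ms   -- added write latency from I/O resource governance
FROM sys.dm_io_virtual_file_stats(DB_ID(), NULL) AS vfs
JOIN sys.database_files AS f
    ON vfs.file_id = f.file_id;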
Transaction log rate governance
Transaction log rate governance is a process in Azure SQL Database used to limit high ingestion rates for
workloads such as bulk insert, SELECT INTO, and index builds. These limits are tracked and enforced at the
subsecond level to the rate of log record generation, limiting throughput regardless of how many IOs may be
issued against data files. Transaction log generation rates currently scale linearly up to a point that is hardware-
dependent and service tier-dependent.
Log rates are set such that they can be achieved and sustained in a variety of scenarios, while the overall system
can maintain its functionality with minimized impact to the user load. Log rate governance ensures that
transaction log backups stay within published recoverability SLAs. This governance also prevents an excessive
backlog on secondary replicas, that could otherwise lead to longer than expected downtime during failovers.
The actual physical IOs to transaction log files are not governed or limited. As log records are generated, each
operation is evaluated and assessed for whether it should be delayed in order to maintain a maximum desired
log rate (MB per second). The delays aren't added when the log records are flushed to storage; rather, log rate
governance is applied during log record generation itself.
The actual log generation rates imposed at run time may also be influenced by feedback mechanisms that
temporarily reduce the allowable log rates so the system can stabilize. Log file space management (which avoids
running into out-of-log-space conditions) and data replication mechanisms can temporarily decrease the overall
system limits.
Log rate governor traffic shaping is surfaced via the following wait types (exposed in the sys.dm_exec_requests
and sys.dm_os_wait_stats views):
WAIT TYPE    NOTES
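To check whether a workload is accumulating these waits, you can query the wait statistics views; the pattern match below is an assumption used for illustration, since the exact wait type names depend on the governance level involved:

-- Accumulated waits attributable to log rate governance.
SELECT wait_type, waiting_tasks_count, wait_time_ms
FROM sys.dm_os_wait_stats
WHERE wait_type LIKE '%LOG_RATE_GOVERNOR%'
   OR wait_type LIKE 'HADR_THROTTLE_LOG_RATE%'
ORDER BY wait_time_ms DESC;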
When encountering a log rate limit that is hampering desired scalability, consider the following options:
Scale up to a higher service level in order to get the maximum log rate of a service tier, or switch to a
different service tier. The Hyperscale service tier provides 100 MB/s log rate regardless of chosen service
level.
If data being loaded is transient, such as staging data in an ETL process, it can be loaded into tempdb (which
is minimally logged).
For analytic scenarios, load into a clustered columnstore table, or a table with indexes that use data
compression. This reduces the required log rate. This technique does increase CPU utilization and is only
applicable to data sets that benefit from clustered columnstore indexes or data compression.
Storage space governance
In Premium and Business Critical service tiers, customer data including data files, transaction log files, and
tempdb files is stored on the local SSD storage of the machine hosting the database or elastic pool. Local SSD
storage provides high IOPS and throughput, and low I/O latency. In addition to customer data, local storage is
used for the operating system, management software, monitoring data and logs, and other files necessary for
system operation.
The size of local storage is finite and depends on hardware capabilities, which determine the maximum local
storage limit, or local storage set aside for customer data. This limit is set to maximize customer data storage,
while ensuring safe and reliable system operation. To find the maximum local storage value for each service
objective, see resource limits documentation for single databases and elastic pools.
You can also find this value, and the amount of local storage currently used by a given database or elastic pool,
using the following query:
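A minimal sketch of such a query, assuming the user_data_directory_space_quota_mb and user_data_directory_space_usage_mb columns of sys.dm_user_db_resource_governance report the maximum and currently used local storage:

-- Maximum local storage and current local storage usage (MB) for this database or elastic pool.
SELECT server_name,
       database_name,
       slo_name,
       user_data_directory_space_quota_mb,   -- maximum local storage
       user_data_directory_space_usage_mb    -- currently used local storage
FROM sys.dm_user_db_resource_governance
WHERE database_id = DB_ID();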
This query should be executed in the user database, not in the master database. For elastic pools, the query can
be executed in any database in the pool. Reported values apply to the entire pool.
IMPORTANT
In Premium and Business Critical service tiers, if the workload attempts to increase combined local storage consumption
by data files, transaction log files, and tempdb files over the maximum local storage limit, an out-of-space error will
occur.
Local SSD storage is also used by databases in service tiers other than Premium and Business Critical for the
tempdb database and Hyperscale RBPEX cache. As databases are created, deleted, and increase or decrease in
size, total local storage consumption on a machine fluctuates over time. If the system detects that available local
storage on a machine is low, and a database or an elastic pool is at risk of running out of space, it will move the
database or elastic pool to a different machine with sufficient local storage available.
This move occurs in an online fashion, similarly to a database scaling operation, and has a similar impact,
including a short (seconds) failover at the end of the operation. This failover terminates open connections and
rolls back transactions, potentially impacting applications using the database at that time.
Because all data is copied to local storage volumes on different machines, moving larger databases in Premium
and Business Critical service tiers may require a substantial amount of time. During that time, if local space
consumption by a database or an elastic pool, or by the tempdb database grows rapidly, the risk of running out
of space increases. The system initiates database movement in a balanced fashion to minimize out-of-space
errors while avoiding unnecessary failovers.
Tempdb sizes
Size limits for tempdb in Azure SQL Database depend on the purchasing and deployment model.
To learn more, review tempdb size limits for:
vCore purchasing model: single databases, pooled databases
DTU purchasing model: single databases, pooled databases.
Next steps
For information about general Azure limits, see Azure subscription and service limits, quotas, and constraints.
For information about DTUs and eDTUs, see DTUs and eDTUs.
For information about tempdb size limits, see single vCore databases, pooled vCore databases, single DTU
databases, and pooled DTU databases.
Resource limits for single databases using the vCore
purchasing model
IMPORTANT
Under some circumstances, you may need to shrink a database to reclaim unused space. For more information, see
Manage file space in Azure SQL Database.
Each read-only replica of a database has its own resources, such as vCores, memory, data IOPS, tempdb,
workers, and sessions. Each read-only replica is subject to the resource limits detailed later in this article.
You can set the service tier, compute size (service objective), and storage amount for a single database using:
Transact-SQL via ALTER DATABASE
Azure portal
PowerShell
Azure CLI
REST API
IMPORTANT
For scaling guidance and considerations, see Scale a single database.
Storage type Remote SSD Remote SSD Remote SSD Remote SSD Remote SSD
Number of replicas 1 1 1 1 1
1 Service objectives with smaller max vCore configurations may have insufficient memory for creating and
using columnstore indexes. If encountering performance problems with columnstore, increase the max vCore
configuration to increase the max memory available.
2 For documented max data size values. Reducing max data size reduces max log size proportionally.
3 The maximum value for IO sizes ranging between 8 KB and 64 KB. Actual IOPS are workload-dependent. For
details, see Data IO Governance.
Gen5 hardware (part 2)
COMPUTE SIZE (SERVICE OBJECTIVE)    GP_S_GEN5_10    GP_S_GEN5_12    GP_S_GEN5_14    GP_S_GEN5_16
Storage type Remote SSD Remote SSD Remote SSD Remote SSD
Number of replicas 1 1 1 1
1 For documented max data size values. Reducing max data size reduces max log size proportionally.
2 The maximum value for IO sizes ranging between 8 KB and 64 KB. Actual IOPS are workload-dependent. For
details, see Data IO Governance.
Gen5 hardware (part 3)
COMPUTE SIZE (SERVICE OBJECTIVE)    GP_S_GEN5_18    GP_S_GEN5_20    GP_S_GEN5_24    GP_S_GEN5_32    GP_S_GEN5_40
Storage type Remote SSD Remote SSD Remote SSD Remote SSD Remote SSD
Number of replicas 1 1 1 1 1
1 For documented max data size values. Reducing max data size reduces max log size proportionally.
2 The maximum value for IO sizes ranging between 8 KB and 64 KB. Actual IOPS are workload-dependent. For
details, see Data IO Governance.
vCores 2 4 6 8 10 12 14
RBPEX Size    3X Memory    3X Memory    3X Memory    3X Memory    3X Memory    3X Memory    3X Memory
1 Besides local SSD IO, workloads will use remote page server IO. Effective IOPS will depend on workload. For
details, see Data IO Governance, and Data IO in resource utilization statistics.
2 Latency numbers are approximate and representative for typical workloads at steady state, but are not
guaranteed.
3 Hyperscale is a multi-tiered architecture with separate compute and storage components. For more
information, see Hyperscale service tier architecture.
Gen5 hardware (part 2)
COMPUTE SIZE (SERVICE OBJECTIVE)    HS_GEN5_16    HS_GEN5_18    HS_GEN5_20    HS_GEN5_24    HS_GEN5_32    HS_GEN5_40    HS_GEN5_80
vCores 16 18 20 24 32 40 80
RBPEX Size    3X Memory    3X Memory    3X Memory    3X Memory    3X Memory    3X Memory    3X Memory
1 Besides local SSD IO, workloads will use remote page server IO. Effective IOPS will depend on workload. For
details, see Data IO Governance, and Data IO in resource utilization statistics.
2 Latency numbers are representative for typical workloads at steady state, but are not guaranteed.
3 Hyperscale is a multi-tiered architecture with separate compute and storage components. For more
information, see Hyperscale service tier architecture.
vCores 2 4 6 8
Memory (GB) 9 18 27 36
1 Besides local SSD IO, workloads will use remote page server IO. Effective IOPS will depend on workload. For
details, see Data IO Governance, and Data IO in resource utilization statistics.
2 Latency numbers are representative for typical workloads at steady state, but are not guaranteed.
3 Hyperscale is a multi-tiered architecture with separate compute and storage components. For more
information, see Hyperscale service tier architecture.
COMPUTE SIZE (SERVICE OBJECTIVE)    GP_GEN5_2    GP_GEN5_4    GP_GEN5_6    GP_GEN5_8    GP_GEN5_10    GP_GEN5_12    GP_GEN5_14
vCores 2 4 6 8 10 12 14
Max log rate (MBps) 9 18 27 36 45 50 50
Number of replicas 1 1 1 1 1 1 1
1 For documented max data size values. Reducing max data size reduces max log size proportionally.
2 The maximum value for IO sizes ranging between 8 KB and 64 KB. Actual IOPS are workload-dependent. For
details, see Data IO Governance.
Gen5 hardware (part 2)
COMPUTE SIZE (SERVICE OBJECTIVE)    GP_GEN5_16    GP_GEN5_18    GP_GEN5_20    GP_GEN5_24    GP_GEN5_32    GP_GEN5_40    GP_GEN5_80
vCores 16 18 20 24 32 40 80
Max log rate (MBps) 50 50 50 50 50 50 50
Number of replicas 1 1 1 1 1 1 1
1 For documented max data size values. Reducing max data size reduces max log size proportionally.
2 The maximum value for IO sizes ranging between 8 KB and 64 KB. Actual IOPS are workload-dependent. For
details, see Data IO Governance.
vCores 8 10 12 14 16
Tempdb max data size (GB) 37 46 56 65 74
Storage type Remote SSD Remote SSD Remote SSD Remote SSD Remote SSD
Number of replicas 1 1 1 1 1
1 For documented max data size values. Reducing max data size reduces max log size proportionally.
2 The maximum value for IO sizes ranging between 8 KB and 64 KB. Actual IOPS are workload-dependent. For
details, see Data IO Governance.
Fsv2-series hardware (part 2)
COMPUTE SIZE (SERVICE OBJECTIVE)    GP_FSV2_18    GP_FSV2_20    GP_FSV2_24    GP_FSV2_32    GP_FSV2_36    GP_FSV2_72
vCores 18 20 24 32 36 72
Storage type Remote SSD Remote SSD Remote SSD Remote SSD Remote SSD Remote SSD
Number of replicas 1 1 1 1 1 1
1 For documented max data size values. Reducing max data size reduces max log size proportionally.
2 The maximum value for IO sizes ranging between 8 KB and 64 KB. Actual IOPS are workload-dependent. For
details, see Data IO Governance.
vCores 2 4 6 8
Memory (GB) 9 18 27 36
Storage type Remote SSD Remote SSD Remote SSD Remote SSD
Number of replicas 1 1 1 1
1 For documented max data size values. Reducing max data size reduces max log size proportionally.
2 The maximum value for IO sizes ranging between 8 KB and 64 KB. Actual IOPS are workload-dependent. For
details, see Data IO Governance.
vCores 2 4 6 8 10 12 14
Storage type Local SSD Local SSD Local SSD Local SSD Local SSD Local SSD Local SSD
Max log rate (MBps) 24 48 72 96 96 96 96
Number of replicas 4 4 4 4 4 4 4
1 For documented max data size values. Reducing max data size reduces max log size proportionally.
2 The maximum value for IO sizes ranging between 8 KB and 64 KB. Actual IOPS are workload-dependent. For
details, see Data IO Governance.
Gen5 hardware (part 2)
COMPUTE SIZE (SERVICE OBJECTIVE)    BC_GEN5_16    BC_GEN5_18    BC_GEN5_20    BC_GEN5_24    BC_GEN5_32    BC_GEN5_40    BC_GEN5_80
vCores 16 18 20 24 32 40 80
Storage type Local SSD Local SSD Local SSD Local SSD Local SSD Local SSD Local SSD
Max log rate (MBps) 96 96 96 96 96 96 96
Number of replicas 4 4 4 4 4 4 4
1 For documented max data size values. Reducing max data size reduces max log size proportionally.
2 The maximum value for IO sizes ranging between 8 KB and 64 KB. Actual IOPS are workload-dependent. For
details, see Data IO Governance.
vCores 8 10 12 14 16 18
Storage type Local SSD Local SSD Local SSD Local SSD Local SSD Local SSD
Number of replicas 4 4 4 4 4 4
Multi-AZ No No No No No No
1 For documented max data size values. Reducing max data size reduces max log size proportionally.
2 The maximum value for IO sizes ranging between 8 KB and 64 KB. Actual IOPS are workload-dependent. For
details, see Data IO Governance.
M -series hardware (part 2)
COMPUTE SIZE (SERVICE OBJECTIVE)    BC_M_20    BC_M_24    BC_M_32    BC_M_64    BC_M_128
vCores 20 24 32 64 128
Storage type Local SSD Local SSD Local SSD Local SSD Local SSD
Number of replicas 4 4 4 4 4
Multi-AZ No No No No No
1 For documented max data size values. Reducing max data size reduces max log size proportionally.
2 The maximum value for IO sizes ranging between 8 KB and 64 KB. Actual IOPS are workload-dependent. For
details, see Data IO Governance.
vCores 2 4 6 8
Memory (GB) 9 18 27 36
Storage type Local SSD Local SSD Local SSD Local SSD
Number of replicas 4 4 4 4
Multi-AZ No No No No
Read Scale-out No No No No
1 For documented max data size values. Reducing max data size reduces max log size proportionally.
2 The maximum value for IO sizes ranging between 8 KB and 64 KB. Actual IOPS are workload-dependent. For
details, see Data IO Governance.
IMPORTANT
Gen4 hardware is being retired and is not available for new deployments, as announced on December 18, 2019.
Customers using Gen4 for Azure SQL Databases, elastic pools, or SQL managed instances should migrate to currently
available hardware, such as standard-series (Gen5), before January 31, 2023.
For more information on Gen4 hardware retirement and migration to current hardware, see our Blog post on Gen4
retirement. Existing Gen4 databases, elastic pools, and SQL managed instances will be migrated automatically to
equivalent standard-series (Gen5) hardware.
Downtime caused by automatic migration will be minimal and similar to downtime during scaling operations within
selected service tier. To avoid unplanned interruptions to workloads, migrate proactively at the time of your choice before
January 31, 2023.
vCores 1 2 3 4 5 6
Memory (GB) 7 14 21 28 35 42
1 Besides local SSD IO, workloads will use remote page server IO. Effective IOPS will depend on workload. For
details, see Data IO Governance, and Data IO in resource utilization statistics.
2 Latency numbers are approximate and representative for typical workloads at steady state, but are not
guaranteed.
3 Hyperscale is a multi-tiered architecture with separate compute and storage components. For more
information, see Hyperscale service tier architecture.
Gen4 hardware (part 2)
COMPUTE SIZE (SERVICE OBJECTIVE)    HS_GEN4_7    HS_GEN4_8    HS_GEN4_9    HS_GEN4_10    HS_GEN4_16    HS_GEN4_24
vCores 7 8 9 10 16 24
1 Besides local SSD IO, workloads will use remote page server IO. Effective IOPS will depend on workload. For
details, see Data IO Governance, and Data IO in resource utilization statistics.
2 Latency numbers are approximate and representative for typical workloads at steady state, but are not
guaranteed.
3 Hyperscale is a multi-tiered architecture with separate compute and storage components. For more
information, see Hyperscale service tier architecture.
vCores 1 2 3 4 5 6
Memory (GB) 7 14 21 28 35 42
Storage type Remote SSD Remote SSD Remote SSD Remote SSD Remote SSD Remote SSD
Number of replicas 1 1 1 1 1 1
1 For documented max data size values. Reducing max data size reduces max log size proportionally.
2 The maximum value for IO sizes ranging between 8 KB and 64 KB. Actual IOPS are workload-dependent. For
details, see Data IO Governance.
Gen4 hardware (part 2)
COMPUTE SIZE (SERVICE OBJECTIVE)    GP_GEN4_7    GP_GEN4_8    GP_GEN4_9    GP_GEN4_10    GP_GEN4_16    GP_GEN4_24
vCores 7 8 9 10 16 24
Storage type Remote SSD Remote SSD Remote SSD Remote SSD Remote SSD Remote SSD
Number of replicas 1 1 1 1 1 1
1 For documented max data size values. Reducing max data size reduces max log size proportionally.
2 The maximum value for IO sizes ranging between 8 KB and 64 KB. Actual IOPS are workload-dependent. For
details, see Data IO Governance.
COMPUTE SIZE (SERVICE OBJECTIVE)    BC_GEN4_1    BC_GEN4_2    BC_GEN4_3    BC_GEN4_4    BC_GEN4_5    BC_GEN4_6
vCores 1 2 3 4 5 6
Memory (GB) 7 14 21 28 35 42
In-memory OLTP storage (GB) 1 2 3 4 5 6
Storage type Local SSD Local SSD Local SSD Local SSD Local SSD Local SSD
Number of replicas 4 4 4 4 4 4
1 For documented max data size values. Reducing max data size reduces max log size proportionally.
2 The maximum value for IO sizes ranging between 8 KB and 64 KB. Actual IOPS are workload-dependent. For
details, see Data IO Governance.
Gen4 hardware (part 2)
COMPUTE SIZE (SERVICE OBJECTIVE)    BC_GEN4_7    BC_GEN4_8    BC_GEN4_9    BC_GEN4_10    BC_GEN4_16    BC_GEN4_24
vCores 7 8 9 10 16 24
In-memory OLTP storage (GB) 7 8 9.5 11 20 36
Storage type Local SSD Local SSD Local SSD Local SSD Local SSD Local SSD
Number of replicas 4 4 4 4 4 4
1 For documented max data size values. Reducing max data size reduces max log size proportionally.
2 The maximum value for IO sizes ranging between 8 KB and 64 KB. Actual IOPS are workload-dependent. For
details, see Data IO Governance.
Next steps
For DTU resource limits for a single database, see resource limits for single databases using the DTU
purchasing model
For vCore resource limits for elastic pools, see resource limits for elastic pools using the vCore purchasing
model
For DTU resource limits for elastic pools, see resource limits for elastic pools using the DTU purchasing
model
For resource limits for SQL Managed Instance, see SQL Managed Instance resource limits.
For information about general Azure limits, see Azure subscription and service limits, quotas, and constraints.
For information about limits at the server and subscription levels, see the overview of resource limits on a
server.
Resource limits for single databases using the DTU
purchasing model - Azure SQL Database
IMPORTANT
For scaling guidance and considerations, see Scale a single database
Max DTUs 5
IMPORTANT
The Basic service tier provides less than one vCore (CPU). For CPU-intensive workloads, a service tier of S3 or greater is
recommended.
Regarding data storage, the Basic service tier is placed on Standard Page Blobs. Standard Page Blobs use hard disk drive
(HDD)-based storage media and are best suited for development, testing, and other infrequently accessed workloads that
are less sensitive to performance variability.
1 See SQL Database pricing options for details on additional cost incurred due to any extra storage provisioned.
IMPORTANT
The Standard S0, S1 and S2 tiers provide less than one vCore (CPU). For CPU-intensive workloads, a service tier of S3 or
greater is recommended.
Regarding data storage, the Standard S0 and S1 service tiers are placed on Standard Page Blobs. Standard Page Blobs use
hard disk drive (HDD)-based storage media and are best suited for development, testing, and other infrequently accessed
workloads that are less sensitive to performance variability.
1 See SQL Database pricing options for details on additional cost incurred due to any extra storage provisioned.
Premium service tier
COMPUTE SIZE    P1    P2    P4    P6    P11    P15
Max in-memory OLTP storage (GB) 1 2 4 8 14 32
1 See SQL Database pricing options for details on additional cost incurred due to any extra storage provisioned.
2 From 1024 GB up to 4096 GB in increments of 256 GB.
IMPORTANT
More than 1 TB of storage in the Premium tier is currently available in all regions except: China East, China North,
Germany Central, and Germany Northeast. In these regions, the storage max in the Premium tier is limited to 1 TB. For
more information, see P11-P15 current limitations.
NOTE
For additional information on storage limits in the Premium service tier, see Storage space governance.
Tempdb sizes
The following table lists tempdb sizes for single databases in Azure SQL Database:
SERVICE-LEVEL OBJECTIVE    MAXIMUM TEMPDB DATA FILE SIZE (GB)    NUMBER OF TEMPDB DATA FILES    MAXIMUM TEMPDB DATA SIZE (GB)
S0                         13.9                                  1                              13.9
S1                         13.9                                  1                              13.9
S2                         13.9                                  1                              13.9
S3                         32                                    1                              32
S4                         32                                    2                              64
S6                         32                                    3                              96
S7                         32                                    6                              192
S9                         32                                    12                             384
S12                        32                                    12                             384
P1                         13.9                                  12                             166.7
P2                         13.9                                  12                             166.7
P4                         13.9                                  12                             166.7
P6                         13.9                                  12                             166.7
Next steps
For vCore resource limits for a single database, see resource limits for single databases using the vCore
purchasing model
For vCore resource limits for elastic pools, see resource limits for elastic pools using the vCore purchasing
model
For DTU resource limits for elastic pools, see resource limits for elastic pools using the DTU purchasing
model
For resource limits for managed instances in Azure SQL Managed Instance, see SQL Managed Instance
resource limits.
For information about general Azure limits, see Azure subscription and service limits, quotas, and constraints.
For information about limits at the server and subscription levels, see the overview of resource limits on a
logical SQL server.
Resource limits for elastic pools using the vCore
purchasing model
IMPORTANT
Under some circumstances, you may need to shrink a database to reclaim unused space. For more information, see
Manage file space in Azure SQL Database.
Each read-only replica of an elastic pool has its own resources, such as vCores, memory, data IOPS, tempdb ,
workers, and sessions. Each read-only replica is subject to elastic pool resource limits detailed later in this article.
You can set the service tier, compute size (service objective), and storage amount using:
Transact-SQL via ALTER DATABASE
Azure portal
PowerShell
Azure CLI
REST API
IMPORTANT
For scaling guidance and considerations, see Scale an elastic pool.
If all vCores of an elastic pool are busy, then each database in the pool receives an equal amount of compute
resources to process queries. Azure SQL Database provides resource sharing fairness between databases by
ensuring equal slices of compute time. Elastic pool resource sharing fairness is in addition to any amount of
resource otherwise guaranteed to each database when the vCore min per database is set to a non-zero value.
vCores 2 4 6 8 10 12 14
Min/max elastic pool vCore choices per database: 0, 0.25, 0.5, 1, 2 | 0, 0.25, 0.5, 1...4 | 0, 0.25, 0.5, 1...6 | 0, 0.25, 0.5, 1...8 | 0, 0.25, 0.5, 1...10 | 0, 0.25, 0.5, 1...12 | 0, 0.25, 0.5, 1...14
Number of replicas 1 1 1 1 1 1 1
vCores 16 18 20 24 32 40 80
Min/max elastic pool vCore choices per database: 0, 0.25, 0.5, 1...16 | 0, 0.25, 0.5, 1...18 | 0, 0.25, 0.5, 1...20 | 0, 0.25, 0.5, 1...20, 24 | 0, 0.25, 0.5, 1...20, 24, 32 | 0, 0.25, 0.5, 1...16, 24, 32, 40 | 0, 0.25, 0.5, 1...16, 24, 32, 40, 80
Number of replicas 1 1 1 1 1 1 1
vCores 8 10 12 14 16
TempDB max data size (GB) 37 46 56 65 74
Storage type Remote SSD Remote SSD Remote SSD Remote SSD Remote SSD
Number of replicas 1 1 1 1 1
vCores 18 20 24 32 36 72
Storage type Remote SSD Remote SSD Remote SSD Remote SSD Remote SSD Remote SSD
Number of replicas 1 1 1 1 1 1
Hardware DC DC DC DC
vCores 2 4 6 8
Memory (GB) 9 18 27 36
Storage type Premium (Remote) Storage Premium (Remote) Storage Premium (Remote) Storage Premium (Remote) Storage
Number of replicas 1 1 1 1
COMPUTE SIZE (SERVICE OBJECTIVE)    BC_GEN5_4    BC_GEN5_6    BC_GEN5_8    BC_GEN5_10    BC_GEN5_12    BC_GEN5_14
vCores 4 6 8 10 12 14
Storage type Local SSD Local SSD Local SSD Local SSD Local SSD Local SSD
Min/max elastic pool vCore choices per database: 0, 0.25, 0.5, 1...4 | 0, 0.25, 0.5, 1...6 | 0, 0.25, 0.5, 1...8 | 0, 0.25, 0.5, 1...10 | 0, 0.25, 0.5, 1...12 | 0, 0.25, 0.5, 1...14
Number of replicas 4 4 4 4 4 4
vCores 16 18 20 24 32 40 80
Storage type Local SSD Local SSD Local SSD Local SSD Local SSD Local SSD Local SSD
Min/max elastic pool vCore choices per database: 0, 0.25, 0.5, 1...16 | 0, 0.25, 0.5, 1...18 | 0, 0.25, 0.5, 1...20 | 0, 0.25, 0.5, 1...20, 24 | 0, 0.25, 0.5, 1...20, 24, 32 | 0, 0.25, 0.5, 1...20, 24, 32, 40 | 0, 0.25, 0.5, 1...20, 24, 32, 40, 80
Number of replicas 4 4 4 4 4 4 4
vCores 8 10 12 14 16 18
Storage type Local SSD Local SSD Local SSD Local SSD Local SSD Local SSD
Number of replicas 4 4 4 4 4 4
Multi-AZ No No No No No No
vCores 20 24 32 64 128
Storage type Local SSD Local SSD Local SSD Local SSD Local SSD
Number of replicas 4 4 4 4 4
Multi-AZ No No No No No
Hardware DC DC DC DC
vCores 2 4 6 8
Memory (GB) 9 18 27 36
Storage type Local SSD Local SSD Local SSD Local SSD
Number of replicas 4 4 4 4
Multi-AZ No No No No
Max vCores per database The maximum number of vCores that any database in the
pool may use, if available based on utilization by other
databases in the pool. Max vCores per database is not a
resource guarantee for a database. If the workload in each
database does not need all available pool resources to
perform adequately, consider setting max vCores per
database to prevent a single database from monopolizing
pool resources. Some degree of over-committing is expected
since the pool generally assumes hot and cold usage
patterns for databases, where all databases are not
simultaneously peaking.
Min vCores per database The minimum number of vCores reserved for any database
in the pool. Consider setting a min vCores per database
when you want to guarantee resource availability for each
database regardless of resource consumption by other
databases in the pool. The min vCores per database may be
set to 0, and is also the default value. This property is set to
anywhere between 0 and the average vCores utilization per
database.
Max storage per database The maximum database size set by the user for a database in
a pool. Pooled databases share allocated pool storage, so the
size a database can reach is limited to the smaller of
remaining pool storage and maximum database size.
Maximum database size refers to the maximum size of the
data files and does not include the space used by the log file.
IMPORTANT
Because resources in an elastic pool are finite, setting min vCores per database to a value greater than 0 implicitly limits
resource utilization by each database. If, at a point in time, most databases in a pool are idle, resources reserved to satisfy
the min vCores guarantee are not available to databases active at that point in time.
Additionally, setting min vCores per database to a value greater than 0 implicitly limits the number of databases that can
be added to the pool. For example, if you set the min vCores to 2 in a 20 vCore pool, it means that you will not be able to
add more than 10 databases to the pool, because 2 vCores are reserved for each database.
Even though the per database properties are expressed in vCores, they also govern consumption of other
resource types, such as data IO, log IO, buffer pool memory, and worker threads. As you adjust min and max per
database vCore values, reservations and limits for all resource types are adjusted proportionally.
Min and max per database vCore values apply to resource consumption by user workloads, but not to resource
consumption by internal processes. For example, for a database with a per database max vCores set to half of
the pool vCores, user workload cannot consume more than one half of the buffer pool memory. However, this
database can still take advantage of pages in the buffer pool that were loaded by internal processes. For more
information, see Resource consumption by user workloads and internal processes.
NOTE
The resource limits of individual databases in elastic pools are generally the same as for single databases outside of pools
that have the same compute size (service objective). For example, the max concurrent workers for a GP_S_Gen5_10
database is 750 workers. So, the max concurrent workers for a database in a GP_Gen5_10 pool is also 750 workers. Note
that the total number of concurrent workers in a GP_Gen5_10 pool is 1050. For the max concurrent workers for any individual
database, see Single database resource limits.
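To observe worker consumption relative to these limits, you can sample the max_worker_percent column of sys.dm_db_resource_stats; a minimal sketch:

-- Worker thread utilization as a percentage of the max concurrent workers limit.
SELECT end_time, max_worker_percent
FROM sys.dm_db_resource_stats
ORDER BY end_time DESC;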
Previously available hardware
This section includes details on previously available hardware.
IMPORTANT
Gen4 hardware is being retired and is not available for new deployments, as announced on December 18, 2019.
Customers using Gen4 for Azure SQL Databases, elastic pools, or SQL managed instances should migrate to currently
available hardware, such as standard-series (Gen5), before January 31, 2023.
For more information on Gen4 hardware retirement and migration to current hardware, see our Blog post on Gen4
retirement. Existing Gen4 databases, elastic pools, and SQL managed instances will be migrated automatically to
equivalent standard-series (Gen5) hardware.
Downtime caused by automatic migration will be minimal and similar to downtime during scaling operations within
selected service tier. To avoid unplanned interruptions to workloads, migrate proactively at the time of your choice before
January 31, 2023.
vCores 1 2 3 4 5 6
Memory (GB) 7 14 21 28 35 42
Min/max elastic pool vCore choices per database: 0, 0.25, 0.5, 1 | 0, 0.25, 0.5, 1, 2 | 0, 0.25, 0.5, 1...3 | 0, 0.25, 0.5, 1...4 | 0, 0.25, 0.5, 1...5 | 0, 0.25, 0.5, 1...6
Number of replicas 1 1 1 1 1 1
vCores 7 8 9 10 16 24
Min/max elastic pool vCore choices per database: 0, 0.25, 0.5, 1...7 | 0, 0.25, 0.5, 1...8 | 0, 0.25, 0.5, 1...9 | 0, 0.25, 0.5, 1...10 | 0, 0.25, 0.5, 1...10, 16 | 0, 0.25, 0.5, 1...10, 16, 24
Number of replicas 1 1 1 1 1 1
vCores 2 3 4 5 6
Memory (GB) 14 21 28 35 42
In-memory OLTP storage (GB) 2 3 4 5 6
Storage type Local SSD Local SSD Local SSD Local SSD Local SSD
Min/max elastic pool vCore choices per database: 0, 0.25, 0.5, 1, 2 | 0, 0.25, 0.5, 1...3 | 0, 0.25, 0.5, 1...4 | 0, 0.25, 0.5, 1...5 | 0, 0.25, 0.5, 1...6
Number of replicas 4 4 4 4 4
COMPUTE SIZE (SERVICE OBJECTIVE)    BC_GEN4_7    BC_GEN4_8    BC_GEN4_9    BC_GEN4_10    BC_GEN4_16    BC_GEN4_24
vCores 7 8 9 10 16 24
In-memory OLTP storage (GB) 7 8 9.5 11 20 36
Storage type Local SSD Local SSD Local SSD Local SSD Local SSD Local SSD
Min/max elastic pool vCore choices per database: 0, 0.25, 0.5, 1...7 | 0, 0.25, 0.5, 1...8 | 0, 0.25, 0.5, 1...9 | 0, 0.25, 0.5, 1...10 | 0, 0.25, 0.5, 1...10, 16 | 0, 0.25, 0.5, 1...10, 16, 24
Number of replicas 4 4 4 4 4 4
Next steps
For vCore resource limits for a single database, see resource limits for single databases using the vCore
purchasing model
For DTU resource limits for a single database, see resource limits for single databases using the DTU
purchasing model
For DTU resource limits for elastic pools, see resource limits for elastic pools using the DTU purchasing
model
For resource limits for managed instances, see managed instance resource limits.
For information about general Azure limits, see Azure subscription and service limits, quotas, and constraints.
For information about limits at the server and subscription levels, see the overview of resource limits on a
logical SQL server.
Resource limits for elastic pools using the DTU
purchasing model
IMPORTANT
For scaling guidance and considerations, see Scale an elastic pool
The resource limits of individual databases in elastic pools are generally the same as for single databases
outside of pools based on DTUs and the service tier. For example, the max concurrent workers for an S2
database is 120 workers. So, the max concurrent workers for a database in a Standard pool is also 120 workers
if the max DTU per database in the pool is 50 DTUs (which is equivalent to S2).
For the same number of DTUs, resources provided to an elastic pool may exceed the resources provided to a
single database outside of an elastic pool. This means it is possible for the eDTU utilization of an elastic pool to
be less than the summation of DTU utilization across databases within the pool, depending on workload
patterns. For example, in an extreme case with only one database in an elastic pool where database DTU
utilization is 100%, it is possible for pool eDTU utilization to be 50% for certain workload patterns. This can
happen even if max DTU per database remains at the maximum supported value for the given pool size.
NOTE
The storage per pool resource limit in each of the following tables does not include tempdb and log storage.
eDTUs PER POOL                               50      100     200     300     400     800     1200    1600
Max in-memory OLTP storage per pool (GB)     N/A     N/A     N/A     N/A     N/A     N/A     N/A     N/A
Min DTU per database choices                 0, 5    0, 5    0, 5    0, 5    0, 5    0, 5    0, 5    0, 5
Max DTU per database choices                 5       5       5       5       5       5       5       5
Max storage per database (GB)                2       2       2       2       2       2       2       2
eDTUs PER POOL    50    100    200    300    400    800
Min DTU per database choices: 0, 10, 20, 50 | 0, 10, 20, 50, 100 | 0, 10, 20, 50, 100, 200 | 0, 10, 20, 50, 100, 200, 300 | 0, 10, 20, 50, 100, 200, 300, 400 | 0, 10, 20, 50, 100, 200, 300, 400, 800
Max DTU per database choices: 10, 20, 50 | 10, 20, 50, 100 | 10, 20, 50, 100, 200 | 10, 20, 50, 100, 200, 300 | 10, 20, 50, 100, 200, 300, 400 | 10, 20, 50, 100, 200, 300, 400, 800
1 See SQL Database pricing options for details on additional cost incurred due to any extra storage provisioned.
2 See Resource management in dense elastic pools for additional considerations.
3 For the max concurrent workers for any individual database, see Single database resource limits. For example,
if the elastic pool is using Gen5 and the max vCore per database is set at 2, then the max concurrent workers
value is 200. If max vCore per database is set to 0.5, then the max concurrent workers value is 50 since on Gen5
there are a max of 100 concurrent workers per vCore. For other max vCore settings per database of 1 vCore or less, the
number of max concurrent workers is similarly rescaled.
Standard elastic pool limits (continued)
eDTUs PER POOL    1200    1600    2000    2500    3000
Min DTU per database choices: 0, 10, 20, 50, 100, 200, 300, 400, 800, 1200 | 0, 10, 20, 50, 100, 200, 300, 400, 800, 1200, 1600 | 0, 10, 20, 50, 100, 200, 300, 400, 800, 1200, 1600, 2000 | 0, 10, 20, 50, 100, 200, 300, 400, 800, 1200, 1600, 2000, 2500 | 0, 10, 20, 50, 100, 200, 300, 400, 800, 1200, 1600, 2000, 2500, 3000
Max DTU per database choices: 10, 20, 50, 100, 200, 300, 400, 800, 1200 | 10, 20, 50, 100, 200, 300, 400, 800, 1200, 1600 | 10, 20, 50, 100, 200, 300, 400, 800, 1200, 1600, 2000 | 10, 20, 50, 100, 200, 300, 400, 800, 1200, 1600, 2000, 2500 | 10, 20, 50, 100, 200, 300, 400, 800, 1200, 1600, 2000, 2500, 3000
1 See SQL Database pricing options for details on additional cost incurred due to any extra storage provisioned.
2 See Resource management in dense elastic pools for additional considerations.
3 For the max concurrent workers for any individual database, see Single database resource limits. For example,
if the elastic pool is using Gen5 and the max vCore per database is set at 2, then the max concurrent workers
value is 200. If max vCore per database is set to 0.5, then the max concurrent workers value is 50 since on Gen5
there are a max of 100 concurrent workers per vCore. For other max vCore settings per database of 1 vCore or less, the
number of max concurrent workers is similarly rescaled.
Premium elastic pool limits
eDTUs PER POOL    125    250    500    1000    1500
Max in-memory OLTP storage per pool (GB)    1    2    4    10    12
Min eDTUs per database: 0, 25, 50, 75, 125 | 0, 25, 50, 75, 125, 250 | 0, 25, 50, 75, 125, 250, 500 | 0, 25, 50, 75, 125, 250, 500, 1000 | 0, 25, 50, 75, 125, 250, 500, 1000
Max eDTUs per database: 25, 50, 75, 125 | 25, 50, 75, 125, 250 | 25, 50, 75, 125, 250, 500 | 25, 50, 75, 125, 250, 500, 1000 | 25, 50, 75, 125, 250, 500, 1000
1 See SQL Database pricing options for details on additional cost incurred due to any extra storage provisioned.
2 See Resource management in dense elastic pools for additional considerations.
3 For the max concurrent workers for any individual database, see Single database resource limits. For example,
if the elastic pool is using Gen5 and the max vCore per database is set at 2, then the max concurrent workers
value is 200. If max vCore per database is set to 0.5, then the max concurrent workers value is 50 since on Gen5
there are a max of 100 concurrent workers per vCore. For other max vCore settings per database of 1 vCore or less, the
number of max concurrent workers is similarly rescaled.
Premium elastic pool limits (continued)
eDTUs PER POOL    2000    2500    3000    3500    4000
Max in-memory OLTP storage per pool (GB)    16    20    24    28    32
Min DTU per database choices: 0, 25, 50, 75, 125, 250, 500, 1000, 1750 | 0, 25, 50, 75, 125, 250, 500, 1000, 1750 | 0, 25, 50, 75, 125, 250, 500, 1000, 1750 | 0, 25, 50, 75, 125, 250, 500, 1000, 1750 | 0, 25, 50, 75, 125, 250, 500, 1000, 1750, 4000
Max DTU per database choices: 25, 50, 75, 125, 250, 500, 1000, 1750 | 25, 50, 75, 125, 250, 500, 1000, 1750 | 25, 50, 75, 125, 250, 500, 1000, 1750 | 25, 50, 75, 125, 250, 500, 1000, 1750 | 25, 50, 75, 125, 250, 500, 1000, 1750, 4000
1 See SQL Database pricing options for details on additional cost incurred due to any extra storage provisioned.
2 See Resource management in dense elastic pools for additional considerations.
3 For the max concurrent workers for any individual database, see Single database resource limits. For example,
if the elastic pool is using Gen5 and the max vCore per database is set at 2, then the max concurrent workers
value is 200. If max vCore per database is set to 0.5, then the max concurrent workers value is 50 since on Gen5
there are a max of 100 concurrent workers per vCore. For other max vCore settings per database of 1 vCore or less, the
number of max concurrent workers is similarly rescaled.
IMPORTANT
More than 1 TB of storage in the Premium tier is currently available in all regions except: China East, China North,
Germany Central, and Germany Northeast. In these regions, the storage max in the Premium tier is limited to 1 TB. For
more information, see P11-P15 current limitations.
If all DTUs of an elastic pool are used, then each database in the pool receives an equal amount of resources to
process queries. The SQL Database service provides resource sharing fairness between databases by ensuring
equal slices of compute time. Elastic pool resource sharing fairness is in addition to any amount of resource
otherwise guaranteed to each database when the DTU min per database is set to a non-zero value.
NOTE
For additional information on storage limits in the Premium service tier, see Storage space governance.
Max DTUs per database The maximum number of DTUs that any database in the
pool may use, if available based on utilization by other
databases in the pool. Max DTUs per database is not a
resource guarantee for a database. If the workload in each
database does not need all available pool resources to
perform adequately, consider setting max DTUs per database
to prevent a single database from monopolizing pool
resources. Some degree of over-committing is expected since
the pool generally assumes hot and cold usage patterns for
databases, where all databases are not simultaneously
peaking.
Min DTUs per database The minimum number of DTUs reserved for any database in
the pool. Consider setting a min DTUs per database when
you want to guarantee resource availability for each
database regardless of resource consumption by other
databases in the pool. The min DTUs per database may be
set to 0, and is also the default value. This property is set to
anywhere between 0 and the average DTUs utilization per
database.
Max storage per database The maximum database size set by the user for a database in
a pool. Pooled databases share allocated pool storage, so the
size a database can reach is limited to the smaller of
remaining pool storage and maximum database size.
Maximum database size refers to the maximum size of the
data files and does not include the space used by the log file.
IMPORTANT
Because resources in an elastic pool are finite, setting min DTUs per database to a value greater than 0 implicitly limits
resource utilization by each database. If, at a point in time, most databases in a pool are idle, resources reserved to satisfy
the min DTUs guarantee are not available to databases active at that point in time.
Additionally, setting min DTUs per database to a value greater than 0 implicitly limits the number of databases that can be
added to the pool. For example, if you set the min DTUs to 100 in a 400 DTU pool, it means that you will not be able to
add more than 4 databases to the pool, because 100 DTUs are reserved for each database.
While the per database properties are expressed in DTUs, they also govern consumption of other resource types,
such as data IO, log IO, buffer pool memory, and worker threads. As you adjust min and max per database DTUs
values, reservations and limits for all resource types are adjusted proportionally.
Min and max per database DTU values apply to resource consumption by user workloads, but not to resource
consumption by internal processes. For example, for a database with a per database max DTU set to half of the
pool eDTU, user workload cannot consume more than one half of the buffer pool memory. However, this
database can still take advantage of pages in the buffer pool that were loaded by internal processes. For more
information, see Resource consumption by user workloads and internal processes.
Tempdb sizes
The following table lists tempdb sizes for single databases in Azure SQL Database:
Next steps
For vCore resource limits for a single database, see resource limits for single databases using the vCore
purchasing model
For DTU resource limits for a single database, see resource limits for single databases using the DTU
purchasing model
For vCore resource limits for elastic pools, see resource limits for elastic pools using the vCore purchasing
model
For resource limits for managed instances in Azure SQL Managed Instance, see SQL Managed Instance
resource limits.
For information about general Azure limits, see Azure subscription and service limits, quotas, and constraints.
For information about limits at the server and subscription levels, see the overview of resource limits on a
logical SQL server.
Migration guide: Access to Azure SQL Database
In this guide, you learn how to migrate your Microsoft Access database to an Azure SQL database by using SQL
Server Migration Assistant for Access (SSMA for Access).
For other migration guides, see Azure Database Migration Guide.
Prerequisites
Before you begin migrating your Access database to a SQL database, do the following:
Verify that your source environment is supported.
Download and install SQL Server Migration Assistant for Access.
Ensure that you have connectivity and sufficient permissions to access both source and target.
Pre-migration
After you've met the prerequisites, you're ready to discover the topology of your environment and assess the
feasibility of your Azure cloud migration.
Assess
Use SSMA for Access to review database objects and data, and assess databases for migration.
To create an assessment, do the following:
1. Open SSMA for Access.
2. Select File , and then select New Project .
3. Provide a project name and a location for your project and then, in the drop-down list, select Azure SQL
Database as the migration target.
4. Select OK .
5. Select Add Databases , and then select the databases to be added to your new project.
6. On the Access Metadata Explorer pane, right-click a database, and then select Create Report.
Alternatively, you can select the Create Report tab at the upper right.
7. Review the HTML report to understand the conversion statistics and any errors or warnings. You can also
open the report in Excel to get an inventory of Access objects and understand the effort required to
perform schema conversions. The default location for the report is in the report folder within
SSMAProjects. For example:
drive:\<username>\Documents\SSMAProjects\MyAccessMigration\report\report_<date>
Validate the data types
Validate the default data type mappings, and change them based on your requirements, if necessary. To do so:
1. In SSMA for Access, select Tools , and then select Project Settings .
2. Select the Type Mapping tab.
3. You can change the type mapping for each table by selecting the table name on the Access Metadata
Explorer pane.
Convert the schema
To convert database objects, do the following:
1. Select the Connect to Azure SQL Database tab, and then do the following:
a. Enter the details for connecting to your SQL database.
b. In the drop-down list, select your target SQL database. Or you can enter a new name, in which case a
database will be created on the target server.
c. Provide authentication details.
d. Select Connect .
2. On the Access Metadata Explorer pane, right-click the database, and then select Convert Schema.
Alternatively, you can select your database and then select the Convert Schema tab.
3. After the conversion is completed, compare the converted objects to the original objects to identify
potential problems, and address the problems based on the recommendations.
Compare the converted Transact-SQL text to the original code, and review the recommendations.
4. (Optional) To convert an individual object, right-click the object, and then select Convert Schema.
Converted objects appear in bold text in Access Metadata Explorer :
5. On the Output pane, select the Review results icon, and review the errors on the Error list pane.
6. Save the project locally for an offline schema remediation exercise. To do so, select File > Save Project .
This gives you an opportunity to evaluate the source and target schemas offline and perform remediation
before you publish them to your SQL database.
Post-migration
After you've successfully completed the migration stage, you need to complete a series of post-migration tasks
to ensure that everything is functioning as smoothly and efficiently as possible.
Remediate applications
After the data is migrated to the target environment, all the applications that formerly consumed the source
need to start consuming the target. Accomplishing this will in some cases require changes to the applications.
Perform tests
The test approach to database migration consists of the following activities:
1. Develop validation tests : To test the database migration, you need to use SQL queries. You must create
the validation queries to run against both the source and target databases. Your validation queries should
cover the scope you've defined.
2. Set up a test environment : The test environment should contain a copy of the source database and the
target database. Be sure to isolate the test environment.
3. Run validation tests : Run validation tests against the source and the target, and then analyze the
results.
4. Run performance tests : Run performance tests against the source and the target, and then analyze and
compare the results.
Optimize
The post-migration phase is crucial for reconciling any data accuracy issues, verifying completeness, and
addressing performance issues with the workload.
For more information about these issues and the steps to mitigate them, see the Post-migration validation and
optimization guide.
Migration assets
For more assistance with completing this migration scenario, see the following resource. It was developed in
support of a real-world migration project engagement.
TITLE    DESCRIPTION
Data workload assessment model and tool Provides suggested “best fit” target platforms, cloud
readiness, and application/database remediation levels for
specified workloads. It offers simple, one-click calculation and
report generation that helps to accelerate large estate
assessments by providing an automated, uniform target-
platform decision process.
The Data SQL Engineering team developed these resources. This team's core charter is to unblock and accelerate
complex modernization for data platform migration projects to Microsoft's Azure data platform.
Next steps
For a matrix of Microsoft and third-party services and tools that are available to assist you with various
database and data migration scenarios and specialty tasks, see Service and tools for data migration.
To learn more about Azure SQL Database see:
An overview of SQL Database
Azure total cost of ownership calculator
To learn more about the framework and adoption cycle for cloud migrations, see:
Cloud Adoption Framework for Azure
Best practices for costing and sizing workloads for migration to Azure
Cloud Migration Resources
To assess the application access layer, see Data Access Migration Toolkit (preview).
For information about how to perform Data Access Layer A/B testing, see Overview of Database
Experimentation Assistant.
Migration guide: IBM Db2 to Azure SQL Database
Prerequisites
To migrate your Db2 database to SQL Database, you need:
To verify that your source environment is supported.
To download SQL Server Migration Assistant (SSMA) for Db2.
A target database in Azure SQL Database.
Connectivity and sufficient permissions to access both source and target.
Pre-migration
After you have met the prerequisites, you're ready to discover the topology of your environment and assess the
feasibility of your Azure cloud migration.
Assess and convert
Use SSMA for DB2 to review database objects and data, and assess databases for migration.
To create an assessment, follow these steps:
1. Open SSMA for Db2.
2. Select File > New Project .
3. Provide a project name and a location to save your project. Then select Azure SQL Database as the
migration target from the drop-down list, and select OK .
4. On Connect to Db2 , enter values for the Db2 connection details.
5. Right-click the Db2 schema you want to migrate, and then choose Create report. This will generate an
HTML report. Alternatively, you can choose Create report from the navigation bar after selecting the
schema.
6. Review the HTML report to understand conversion statistics and any errors or warnings. You can also
open the report in Excel to get an inventory of Db2 objects and the effort required to perform schema
conversions. The default location for the report is in the report folder within SSMAProjects.
For example: drive:\<username>\Documents\SSMAProjects\MyDb2Migration\report\report_<date> .
Validate data types
Validate the default data type mappings, and change them based on requirements if necessary. To do so, follow
these steps:
1. Select Tools from the menu.
2. Select Project Settings .
3. Select the Type mappings tab.
4. You can change the type mapping for each table by selecting the table in the Db2 Metadata Explorer .
Convert schema
To convert the schema, follow these steps:
1. (Optional) Add dynamic or ad-hoc queries to statements. Right-click the node, and then choose Add
statements .
2. Select Connect to Azure SQL Database .
a. Enter connection details to connect your database in Azure SQL Database.
b. Choose your target SQL Database from the drop-down list, or provide a new name, in which case a
database will be created on the target server.
c. Provide authentication details.
d. Select Connect .
3. Right-click the schema, and then choose Convert Schema. Alternatively, you can choose Convert Schema from the top navigation bar after selecting your schema.
4. After the conversion completes, compare and review the structure of the schema to identify potential
problems. Address the problems based on the recommendations.
5. In the Output pane, select Review results . In the Error list pane, review errors.
6. Save the project locally for an offline schema remediation exercise. From the File menu, select Save Project. This gives you an opportunity to evaluate the source and target schemas offline, and perform remediation before you publish the schema to SQL Database.
Migrate
After you have completed assessing your databases and addressing any discrepancies, the next step is to
execute the migration process.
To publish your schema and migrate your data, follow these steps:
1. Publish the schema. In Azure SQL Database Metadata Explorer , from the Databases node, right-
click the database. Then select Synchronize with Database .
2. Migrate the data. Right-click the database or object you want to migrate in Db2 Metadata Explorer , and
choose Migrate data . Alternatively, you can select Migrate Data from the navigation bar. To migrate
data for an entire database, select the check box next to the database name. To migrate data from
individual tables, expand the database, expand Tables , and then select the check box next to the table. To
omit data from individual tables, clear the check box.
3. Provide connection details for both Db2 and Azure SQL Database.
4. After migration completes, view the Data Migration Report.
5. Connect to your database in Azure SQL Database by using SQL Server Management Studio. Validate the
migration by reviewing the data and schema.
Post-migration
After the migration is complete, you need to go through a series of post-migration tasks to ensure that
everything is functioning as smoothly and efficiently as possible.
Remediate applications
After the data is migrated to the target environment, all the applications that formerly consumed the source
need to start consuming the target. Accomplishing this will in some cases require changes to the applications.
Perform tests
Testing consists of the following activities:
1. Develop validation tests : To test database migration, you need to use SQL queries. You must create the
validation queries to run against both the source and the target databases. Your validation queries should
cover the scope you have defined.
2. Set up the test environment : The test environment should contain a copy of the source database and the
target database. Be sure to isolate the test environment.
3. Run validation tests : Run the validation tests against the source and the target, and then analyze the
results.
4. Run performance tests : Run performance tests against the source and the target, and then analyze and
compare the results.
Advanced features
Be sure to take advantage of the advanced cloud-based features offered by SQL Database, such as built-in high
availability, threat detection, and monitoring and tuning your workload.
Some SQL Server features are only available when the database compatibility level is changed to the latest
compatibility level.
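For example, the compatibility level can be raised with a single statement after migration (the database name and level shown here are placeholders; choose the level appropriate for your validation):

-- Raise the database compatibility level to unlock newer engine features.
ALTER DATABASE [MyMigratedDb] SET COMPATIBILITY_LEVEL = 150;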
Migration assets
For additional assistance, see the following resources, which were developed in support of a real-world
migration project engagement:
Data workload assessment model and tool: Provides suggested "best fit" target platforms, cloud readiness, and application/database remediation level for a given workload. It offers simple, one-click calculation and report generation that helps to accelerate large estate assessments by providing an automated and uniform target platform decision process.
Db2 zOS data assets discovery and assessment package: After running the SQL script on a database, you can export the results to a file on the file system. Several file formats are supported, including *.csv, so that you can capture the results in external tools such as spreadsheets. This method can be useful if you want to easily share results with teams that do not have the workbench installed.
IBM Db2 LUW inventory scripts and artifacts: This asset includes a SQL query that hits IBM Db2 LUW version 11.1 system tables and provides a count of objects by schema and object type, a rough estimate of "raw data" in each schema, and the sizing of tables in each schema, with results stored in a CSV format.
IBM Db2 to SQL DB - Database Compare utility: The Database Compare utility is a Windows console application that you can use to verify that the data is identical both on source and target platforms. You can use the tool to efficiently compare data down to the row or column level in all or selected tables, rows, and columns.
The Data SQL Engineering team developed these resources. This team's core charter is to unblock and accelerate
complex modernization for data platform migration projects to Microsoft's Azure data platform.
Next steps
For Microsoft and third-party services and tools to assist you with various database and data migration
scenarios, see Service and tools for data migration.
To learn more about Azure SQL Database, see:
An overview of SQL Database
Azure total cost of ownership calculator
To learn more about the framework and adoption cycle for cloud migrations, see:
Cloud Adoption Framework for Azure
Best practices for costing and sizing workloads migrated to Azure
Cloud Migration Resources
To assess the application access layer, see Data Access Migration Toolkit.
For details on how to perform data access layer A/B testing, see Database Experimentation Assistant.
Migration guide: Oracle to Azure SQL Database
IMPORTANT
Try the new Database Migration Assessment for Oracle extension in Azure Data Studio for Oracle to SQL pre-assessment and workload categorization. If you are in the early phase of an Oracle to SQL migration and need a high-level workload assessment, want to size the Azure SQL target for the Oracle workload, or want to understand feature migration parity, try the new extension. For detailed code assessment and conversion, continue with SSMA for Oracle.
Prerequisites
Before you begin migrating your Oracle schema to SQL Database:
Verify that your source environment is supported.
Download SSMA for Oracle.
Have a target SQL Database instance.
Obtain the necessary permissions for SSMA for Oracle and provider.
Pre-migration
After you've met the prerequisites, you're ready to discover the topology of your environment and assess the
feasibility of your Azure cloud migration. This part of the process involves conducting an inventory of the
databases that you need to migrate, assessing those databases for potential migration issues or blockers, and
then resolving any items you might have uncovered.
Assess
By using SSMA for Oracle, you can review database objects and data, assess databases for migration, migrate
database objects to SQL Database, and then finally migrate data to the database.
To create an assessment:
1. Open SSMA for Oracle.
2. Select File , and then select New Project .
3. Enter a project name and a location to save your project. Then select Azure SQL Database as the
migration target from the drop-down list and select OK .
4. Select Connect to Oracle . Enter values for Oracle connection details in the Connect to Oracle dialog
box.
5. Select the Oracle schemas you want to migrate.
6. In Oracle Metadata Explorer, right-click the Oracle schema you want to migrate and then select Create Report to generate an HTML report. Alternatively, you can select a database and then select the Create Report tab.
7. Review the HTML report to understand conversion statistics and any errors or warnings. You can also
open the report in Excel to get an inventory of Oracle objects and the effort required to perform schema
conversions. The default location for the report is in the report folder within SSMAProjects.
For example, see
drive:\<username>\Documents\SSMAProjects\MyOracleMigration\report\report_2020_11_12T02_47_55\ .
Convert
To convert the schema, follow these steps:
1. (Optional) Add dynamic or ad hoc queries to statements. Right-click the node, and then choose Add statements.
2. Select Connect to Azure SQL Database, and enter the connection details for your database in Azure SQL Database.
3. In Oracle Metadata Explorer, right-click the Oracle schema and then select Convert Schema. Or, you can select your schema and then select the Convert Schema tab.
4. After the conversion finishes, compare and review the converted objects to the original objects to identify
potential problems and address them based on the recommendations.
5. Compare the converted Transact-SQL text to the original stored procedures, and review the
recommendations.
6. In the output pane, select Review results and review the errors in the Error List pane.
7. Save the project locally for an offline schema remediation exercise. On the File menu, select Save
Project . This step gives you an opportunity to evaluate the source and target schemas offline and
perform remediation before you publish the schema to SQL Database.
Migrate
After you've assessed your databases and addressed any discrepancies, the next step is to run the migration
process. Migration involves two steps: publishing the schema and migrating the data.
To publish your schema and migrate your data:
1. Publish the schema by right-clicking the database from the Databases node in Azure SQL Database
Metadata Explorer and selecting Synchronize with Database .
2. Review the mapping between your source project and your target.
3. Migrate the data by right-clicking the database or object you want to migrate in Oracle Metadata
Explorer and selecting Migrate Data . Or, you can select the Migrate Data tab. To migrate data for an
entire database, select the check box next to the database name. To migrate data from individual tables,
expand the database, expand Tables , and then select the checkboxes next to the tables. To omit data from
individual tables, clear the checkboxes.
Alternatively, you can use SQL Server Integration Services to perform the migration. To learn more, see:
Getting started with SQL Server Integration Services
SQL Server Integration Services for Azure and Hybrid Data Movement
Post-migration
After you've successfully completed the migration stage, you need to complete a series of post-migration tasks
to ensure that everything is functioning as smoothly and efficiently as possible.
Remediate applications
After the data is migrated to the target environment, all the applications that formerly consumed the source
need to start consuming the target. Accomplishing this task will require changes to the applications in some
cases.
The Data Access Migration Toolkit is an extension for Visual Studio Code that allows you to analyze your Java
source code and detect data access API calls and queries. The toolkit provides you with a single-pane view of
what needs to be addressed to support the new database back end. To learn more, see the Migrate your Java
applications from Oracle blog post.
Perform tests
The test approach to database migration consists of the following activities:
1. Develop validation tests : To test the database migration, you need to use SQL queries. You must create the
validation queries to run against both the source and the target databases. Your validation queries should
cover the scope you've defined.
2. Set up a test environment : The test environment should contain a copy of the source database and the
target database. Be sure to isolate the test environment.
3. Run validation tests : Run validation tests against the source and the target, and then analyze the results.
4. Run performance tests : Run performance tests against the source and the target, and then analyze and
compare the results.
Validate migrated objects
Microsoft SQL Server Migration Assistant for Oracle Tester (SSMA Tester) allows you to test migrated database objects. The SSMA Tester is used to verify that converted objects behave the same way on the target as they do on the source.
Create test case
1. Open SSMA for Oracle, and select Tester followed by New Test Case.
2. Provide the Oracle source credentials, and then select Connect.
3. Provide the target SQL Server credentials, and then select Connect.
4. Finalize the test case by reviewing the information provided in the previous steps, and configure the test execution options based on the test scenario. For more information on test case settings, see Finishing test case preparation.
5. Select Finish to create the test case.
Optimize
The post-migration phase is crucial for reconciling any data accuracy issues, verifying completeness, and
addressing performance issues with the workload.
NOTE
For more information about these issues and the steps to mitigate them, see the Post-migration validation and
optimization guide.
Migration assets
For more assistance with completing this migration scenario, see the following resources. They were developed
in support of a real-world migration project engagement.
Data Workload Assessment Model and Tool: This tool provides suggested "best fit" target platforms, cloud readiness, and application or database remediation level for a given workload. It offers simple, one-click calculation and report generation that helps to accelerate large estate assessments by providing an automated and uniform target platform decision process.
Oracle Inventory Script Artifacts: This asset includes a PL/SQL query that hits Oracle system tables and provides a count of objects by schema type, object type, and status. It also provides a rough estimate of raw data in each schema and the sizing of tables in each schema, with results stored in a CSV format.
Automate SSMA Oracle Assessment Collection & Consolidation: This set of resources uses a .csv file as entry (sources.csv in the project folders) to produce the XML files that are needed to run an SSMA assessment in console mode. The source.csv is provided by the customer based on an inventory of existing Oracle instances. The output files are AssessmentReportGeneration_source_1.xml, ServersConnectionFile.xml, and VariableValueFile.xml.
Oracle to SQL DB - Database Compare utility: SSMA for Oracle Tester is the recommended tool to automatically validate the database object conversion and data migration, and it's a superset of Database Compare functionality.
The Data SQL Engineering team developed these resources. This team's core charter is to unblock and accelerate
complex modernization for data platform migration projects to Microsoft's Azure data platform.
Next steps
For a matrix of Microsoft and third-party services and tools that are available to assist you with various
database and data migration scenarios and specialty tasks, see Services and tools for data migration.
To learn more about SQL Database, see:
An overview of Azure SQL Database
Azure Total Cost of Ownership (TCO) Calculator
To learn more about the framework and adoption cycle for cloud migrations, see:
Cloud Adoption Framework for Azure
Best practices for costing and sizing workloads for migration to Azure
Cloud Migration Resources
For video content, see:
Overview of the migration journey and the tools and services recommended for performing
assessment and migration
Migration guide: MySQL to Azure SQL Database
Prerequisites
Before you begin migrating your MySQL database to a SQL database, do the following:
Verify that your source environment is supported. Currently, MySQL 5.6 and 5.7 are supported.
Download and install SQL Server Migration Assistant for MySQL.
Ensure that you have connectivity and sufficient permissions to access both the source and the target.
Pre-migration
After you've met the prerequisites, you're ready to discover the topology of your environment and assess the
feasibility of your Azure cloud migration.
Assess
Use SQL Server Migration Assistant (SSMA) for MySQL to review database objects and data, and assess
databases for migration.
To create an assessment, do the following:
1. Open SSMA for MySQL.
2. Select File , and then select New Project .
3. In the New Project pane, enter a name and location for your project and then, in the Migrate To drop-
down list, select Azure SQL Database .
4. Select OK .
5. Select the Connect to MySQL tab, and then provide details for connecting your MySQL server.
6. On the MySQL Metadata Explorer pane, right-click the MySQL schema, and then select Create Report. Alternatively, you can select the Create Report tab at the upper right.
7. Review the HTML report to understand the conversion statistics, errors, and warnings. Analyze it to
understand the conversion issues and resolutions. You can also open the report in Excel to get an
inventory of MySQL objects and understand the effort that's required to perform schema conversions.
The default location for the report is in the report folder within SSMAProjects. For example:
drive:\Users\<username>\Documents\SSMAProjects\MySQLMigration\report\report_2016_11_12T02_47_55\
Convert
To convert the schema, follow these steps:
1. (Optional) Add dynamic or specialized queries to statements. Right-click the node, and then select Add statements.
2. Select the Connect to Azure SQL Database tab, and then enter the details for connecting to your database in Azure SQL Database.
3. Right-click the schema you're working with, and then select Convert Schema. Alternatively, you can select the Convert schema tab at the upper right.
4. After the conversion is completed, review and compare the converted objects to the original objects to
identify potential problems and address them based on the recommendations.
Compare the converted Transact-SQL text to the original code, and review the recommendations.
5. On the Output pane, select Review results , and then review any errors on the Error list pane.
6. Save the project locally for an offline schema remediation exercise. To do so, select File > Save Project .
This gives you an opportunity to evaluate the source and target schemas offline and perform remediation
before you publish the schema to your SQL database.
Compare the converted procedures to the original procedures, and review the recommendations.
Post-migration
After you've successfully completed the migration stage, you need to complete a series of post-migration tasks
to ensure that everything is functioning as smoothly and efficiently as possible.
Remediate applications
After the data is migrated to the target environment, all the applications that formerly consumed the source
need to start consuming the target. Accomplishing this will in some cases require changes to the applications.
Perform tests
The test approach to database migration consists of the following activities:
1. Develop validation tests : To test the database migration, you need to use SQL queries. You must create
the validation queries to run against both the source and target databases. Your validation queries should
cover the scope you've defined.
2. Set up a test environment : The test environment should contain a copy of the source database and the
target database. Be sure to isolate the test environment.
3. Run validation tests : Run validation tests against the source and the target, and then analyze the
results.
4. Run performance tests : Run performance tests against the source and the target, and then analyze and
compare the results.
Optimize
The post-migration phase is crucial for reconciling any data accuracy issues, verifying completeness, and
addressing performance issues with the workload.
For more information about these issues and the steps to mitigate them, see the Post-migration validation and
optimization guide.
Migration assets
For more assistance with completing this migration scenario, see the following resource. It was developed in
support of a real-world migration project engagement.
Data workload assessment model and tool: Provides suggested “best fit” target platforms, cloud readiness, and application/database remediation levels for specified workloads. It offers simple, one-click calculation and report generation that helps to accelerate large estate assessments by providing an automated, uniform target-platform decision process.
MySQL to SQL DB - Database Compare utility: The Database Compare utility is a Windows console application that you can use to verify that the data is identical both on source and target platforms. You can use the tool to efficiently compare data down to the row or column level in all or selected tables, rows, and columns.
The Data SQL Engineering team developed these resources. This team's core charter is to unblock and accelerate
complex modernization for data platform migration projects to Microsoft's Azure data platform.
Next steps
To help estimate the cost savings you can realize by migrating your workloads to Azure, see the Azure
total cost of ownership calculator.
For a matrix of Microsoft and third-party services and tools that are available to assist you with various
database and data migration scenarios and specialty tasks, see Service and tools for data migration.
For other migration guides, see Azure Database Migration Guide.
For migration videos, see Overview of the migration journey and recommended migration and
assessment tools and services.
For more cloud migration resources, see cloud migration solutions.
Migration guide: SAP ASE to Azure SQL Database
Prerequisites
Before you begin migrating your SAP ASE database to your SQL database, do the following:
Verify that your source environment is supported.
Download and install SQL Server Migration Assistant for SAP Adaptive Server Enterprise (formerly SAP
Sybase ASE).
Ensure that you have connectivity and sufficient permissions to access both source and target.
Pre-migration
After you've met the prerequisites, you're ready to discover the topology of your environment and assess the
feasibility of your Azure cloud migration.
Assess
By using SQL Server Migration Assistant (SSMA) for SAP Adaptive Server Enterprise (formerly SAP Sybase ASE),
you can review database objects and data, assess databases for migration, migrate Sybase database objects to
your SQL database, and then migrate data to the SQL database. To learn more, see SQL Server Migration
Assistant for Sybase (SybaseToSQL).
To create an assessment, do the following:
1. Open SSMA for Sybase.
2. Select File , and then select New Project .
3. In the New Project pane, enter a name and location for your project and then, in the Migrate To drop-
down list, select Azure SQL Database .
4. Select OK .
5. On the Connect to Sybase pane, enter the SAP connection details.
6. Right-click the SAP database you want to migrate, and then select Create report. This generates an HTML report. Alternatively, you can select the Create report tab at the upper right.
7. Review the HTML report to understand the conversion statistics and any errors or warnings. You can also
open the report in Excel to get an inventory of SAP ASE objects and the effort that's required to perform
schema conversions. The default location for the report is in the report folder within SSMAProjects. For
example:
drive:\<username>\Documents\SSMAProjects\MySAPMigration\report\report_<date>
Post-migration
After you've successfully completed the migration stage, you need to complete a series of post-migration tasks
to ensure that everything is functioning as smoothly and efficiently as possible.
Remediate applications
After the data is migrated to the target environment, all the applications that formerly consumed the source
need to start consuming the target. Accomplishing this will in some cases require changes to the applications.
Perform tests
The test approach to database migration consists of the following activities:
1. Develop validation tests : To test the database migration, you need to use SQL queries. You must create
the validation queries to run against both the source and target databases. Your validation queries should
cover the scope you've defined.
2. Set up a test environment : The test environment should contain a copy of the source database and the
target database. Be sure to isolate the test environment.
3. Run validation tests : Run validation tests against the source and the target, and then analyze the
results.
4. Run performance tests : Run performance tests against the source and the target, and then analyze and
compare the results.
Optimize
The post-migration phase is crucial for reconciling any data accuracy issues, verifying completeness, and
addressing performance issues with the workload.
For more information about these issues and the steps to mitigate them, see the Post-migration validation and
optimization guide.
Next steps
For a matrix of Microsoft and third-party services and tools that are available to assist you with various
database and data migration scenarios and specialty tasks, see Service and tools for data migration.
To learn more about Azure SQL Database, see:
An overview of SQL Database
Azure total cost of ownership calculator
To learn more about the framework and adoption cycle for cloud migrations, see:
Cloud Adoption Framework for Azure
Best practices for costing and sizing workloads for migration to Azure
Cloud Migration Resources
To assess the application access layer, see Data Access Migration Toolkit (preview).
For details on how to perform Data Access Layer A/B testing see Database Experimentation Assistant.
Migration overview: SQL Server to Azure SQL
Database
Overview
Azure SQL Database is a recommended target option for SQL Server workloads that require a fully managed
platform as a service (PaaS). SQL Database handles most database management functions. It also has built-in
high availability, intelligent query processing, scalability, and performance capabilities to suit many application
types.
SQL Database provides flexibility with multiple deployment models and service tiers that cater to different types
of applications or workloads.
One of the key benefits of migrating to SQL Database is that you can modernize your application by using the
PaaS capabilities. You can then eliminate any dependency on technical components that are scoped at the
instance level, such as SQL Agent jobs.
You can also save costs by using the Azure Hybrid Benefit for SQL Server to migrate your SQL Server on-
premises licenses to Azure SQL Database. This option is available if you choose the vCore-based purchasing
model.
Be sure to review the SQL Server database engine features available in Azure SQL Database to validate the
supportability of your migration target.
Considerations
The key factors to consider when you're evaluating migration options are:
Number of servers and databases
Size of databases
Acceptable business downtime during the migration process
The migration options listed in this guide take these factors into account. For logical data migration to Azure
SQL Database, the time to migrate can depend on both the number of objects in a database and the size of the
database.
Tools are available for various workloads and user preferences. Some tools can be used to perform a quick
migration of a single database through a UI-based tool. Other tools can automate the migration of multiple
databases to handle migrations at scale.
IMPORTANT
Transaction log rate is governed in Azure SQL Database to limit high ingestion rates. As such, during migration, you might
have to scale target database resources (vCores or DTUs) to ease pressure on CPU or throughput. Choose the
appropriately sized target database, but plan to scale resources up for the migration if necessary.
Migration tools
We recommend the following migration tools:
Azure Migrate: This Azure service helps you discover and assess your SQL data estate at scale on VMware. It provides Azure SQL deployment recommendations, target sizing, and monthly estimates.
Data Migration Assistant: This desktop tool from Microsoft provides seamless assessments of SQL Server and single-database migrations to Azure SQL Database (both schema and data).
Azure Database Migration Service: This Azure service can migrate SQL Server databases to Azure SQL Database through the Azure portal or automatically through PowerShell. Database Migration Service requires you to select a preferred Azure virtual network during provisioning to ensure connectivity to your source SQL Server databases. You can migrate single databases or at scale.
Transactional replication: Replicate data from source SQL Server database tables to Azure SQL Database by providing a publisher-subscriber type migration option while maintaining transactional consistency. Incremental data changes are propagated to subscribers as they occur on the publishers.
Import Export Service/BACPAC: BACPAC is a Windows file with a .bacpac extension that encapsulates a database's schema and data. You can use BACPAC to both export data from a SQL Server source and import the data into Azure SQL Database. A BACPAC file can be imported to a new SQL database through the Azure portal.
Bulk copy: The bulk copy program (bcp) tool copies data from an instance of SQL Server into a data file. Use the tool to export the data from your source and import the data file into the target SQL database.
Azure Data Factory: The Copy activity in Azure Data Factory migrates data from source SQL Server databases to Azure SQL Database by using built-in connectors and an integration runtime.
SQL Data Sync: SQL Data Sync is a service built on Azure SQL Database that lets you synchronize selected data bidirectionally across multiple databases, both on-premises and in the cloud. Data Sync is useful in cases where data needs to be kept updated across several databases in Azure SQL Database or SQL Server.
The following comparison shows when to use each option and the related considerations:

Data Migration Assistant
When to use: Migrate single databases (both schema and data). Can accommodate downtime during the data migration process. Supported sources: SQL Server (2005 to 2019) on-premises or Azure VM, AWS EC2, AWS RDS, GCP Compute SQL Server VM.
Considerations: Migration activity performs data movement between database objects (from source to target), so we recommend that you run it during off-peak times. Data Migration Assistant reports the status of migration per database object, including the number of rows migrated. For large migrations (number of databases or size of database), use Azure Database Migration Service.

Azure Database Migration Service
When to use: Migrate single databases or at scale. Can run in both online (minimal downtime) and offline modes. Supported sources: SQL Server (2005 to 2019) on-premises or Azure VM, AWS EC2, AWS RDS, GCP Compute SQL Server VM.
Considerations: Migrations at scale can be automated via PowerShell. Time to complete migration depends on database size and the number of objects in the database. Requires the source database to be set as read-only.

Import Export Service/BACPAC
When to use: Migrate individual line-of-business application databases. Suited for smaller databases. Does not require a separate migration service or tool. Supported sources: SQL Server (2005 to 2019) on-premises or Azure VM, AWS EC2, AWS RDS, GCP Compute SQL Server VM.
Considerations: Requires downtime because data needs to be exported at the source and imported at the destination. The file formats and data types used in the export or import need to be consistent with table schemas to avoid truncation or data-type mismatch errors. Time taken to export a database with a large number of objects can be significantly higher.

Bulk copy
When to use: Do full or partial data migrations. Can accommodate downtime. Supported sources: SQL Server (2005 to 2019) on-premises or Azure VM, AWS EC2, AWS RDS, GCP Compute SQL Server VM.
Considerations: Requires downtime for exporting data from the source and importing into the target. The file formats and data types used in the export or import need to be consistent with table schemas.

Azure Data Factory
When to use: Migrate and/or transform data from source SQL Server databases. Merging data from multiple sources into Azure SQL Database is typically for business intelligence (BI) workloads.
Considerations: Requires creating data movement pipelines in Data Factory to move data from source to destination. Cost is an important consideration and is based on factors like pipeline triggers, activity runs, and duration of data movement.

SQL Data Sync
When to use: Synchronize data between source and target databases. Suitable to run continuous sync between Azure SQL Database and on-premises SQL Server in a bidirectional flow.
Considerations: Azure SQL Database must be the hub database for sync, with an on-premises SQL Server database as a member database. Compared to transactional replication, SQL Data Sync supports bidirectional data sync between on-premises and Azure SQL Database. Can have a higher performance impact, depending on the workload.
Feature interoperability
There are more considerations when you're migrating workloads that rely on other SQL Server features.
SQL Server Integration Services
Migrate SQL Server Integration Services (SSIS) packages to Azure by redeploying the packages to the Azure-
SSIS runtime in Azure Data Factory. Azure Data Factory supports migration of SSIS packages by providing a
runtime built to run SSIS packages in Azure. Alternatively, you can rewrite the SSIS ETL (extract, transform, load)
logic natively in Azure Data Factory by using data flows.
SQL Server Reporting Services
Migrate SQL Server Reporting Services (SSRS) reports to paginated reports in Power BI. Use the RDL Migration
Tool to help prepare and migrate your reports. Microsoft developed this tool to help customers migrate Report
Definition Language (RDL) reports from their SSRS servers to Power BI. It's available on GitHub, and it
documents an end-to-end walkthrough of the migration scenario.
High availability
Manual setup of SQL Server high-availability features like Always On failover cluster instances and Always On
availability groups becomes obsolete on the target SQL database. High-availability architecture is already built
into both General Purpose (standard availability model) and Business Critical (premium availability model)
service tiers for Azure SQL Database. The Business Critical/premium service tier also provides read scale-out
that allows connecting into one of the secondary nodes for read-only purposes.
Beyond the high-availability architecture that's included in Azure SQL Database, the auto-failover groups feature allows you to manage the replication and failover of databases on a logical server to another region.
Logins and groups
Windows logins are not supported in Azure SQL Database; create an Azure Active Directory login instead. Manually re-create any SQL logins.
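As a minimal sketch (the names and password are placeholders), an Azure AD identity can be re-created as a contained database user, and a SQL login can be re-created in the master database and mapped to a database user:

-- Contained user for an Azure Active Directory identity
-- (run in the user database while connected as the Azure AD admin):
CREATE USER [appuser@contoso.com] FROM EXTERNAL PROVIDER;

-- SQL authentication: re-create the login in the master database of the logical server ...
CREATE LOGIN app_login WITH PASSWORD = '<strong-password>';
-- ... then map it to a user in the target database:
CREATE USER app_login FOR LOGIN app_login;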
SQL Agent jobs
SQL Agent jobs are not directly supported in Azure SQL Database and need to be deployed to elastic database
jobs (preview).
System databases
For Azure SQL Database, the only applicable system databases are master and tempdb. To learn more, see
Tempdb in Azure SQL Database.
Advanced features
Be sure to take advantage of the advanced cloud-based features in SQL Database. For example, you don't need
to worry about managing backups because the service does it for you. You can restore to any point in time
within the retention period.
To strengthen security, consider using Azure AD authentication, auditing, threat detection, row-level security, and dynamic data masking.
In addition to advanced management and security features, SQL Database provides tools that can help you
monitor and tune your workload. Azure SQL Analytics (Preview) is an advanced solution for monitoring the
performance of all of your databases in Azure SQL Database at scale and across multiple subscriptions in a
single view. Azure SQL Analytics collects and visualizes key performance metrics with built-in intelligence for
performance troubleshooting.
Automatic tuning continuously monitors performance of your SQL execution plans and automatically fixes identified performance issues.
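For example, automatic plan correction can be enabled on the database itself; this is a sketch, and you should confirm the tuning options you want before applying them:

-- Enable automatic plan correction for the current database.
ALTER DATABASE CURRENT
SET AUTOMATIC_TUNING (FORCE_LAST_GOOD_PLAN = ON);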
Migration assets
For more assistance, see the following resources that were developed for real-world migration projects.
Data workload assessment model and tool: This tool provides suggested "best fit" target platforms, cloud readiness, and an application/database remediation level for a workload. It offers simple, one-click calculation and report generation that helps to accelerate large estate assessments by providing an automated and uniform decision process for target platforms.
Bulk database creation with PowerShell: You can use a set of three PowerShell scripts that create a resource group (create_rg.ps1), the logical server in Azure (create_sqlserver.ps1), and a SQL database (create_sqldb.ps1). The scripts include loop capabilities so you can iterate and create as many servers and databases as necessary.
Bulk schema deployment with MSSQL-Scripter and PowerShell: This asset creates a resource group, creates one or multiple logical servers in Azure to host Azure SQL Database, exports every schema from an on-premises SQL Server instance (or multiple SQL Server 2005+ instances), and imports the schemas to Azure SQL Database.
Convert SQL Server Agent jobs into elastic database jobs: This script migrates your source SQL Server Agent jobs to elastic database jobs.
Utility to move on-premises SQL Server logins to Azure SQL Database: A PowerShell script can create a T-SQL command script to re-create logins and select database users from on-premises SQL Server to Azure SQL Database. The tool allows automatic mapping of Windows Server Active Directory accounts to Azure AD accounts, along with optionally migrating SQL Server native logins.
Perfmon data collection automation by using Logman: You can use the Logman tool to collect Perfmon data (to help you understand baseline performance) and get migration target recommendations. This tool uses logman.exe to create the command that will create, start, stop, and delete performance counters set on a remote SQL Server instance.
The Data SQL Engineering team developed these resources. This team's core charter is to unblock and accelerate
complex modernization for data platform migration projects to Microsoft's Azure data platform.
Next steps
To start migrating your SQL Server databases to Azure SQL Database, see the SQL Server to Azure SQL
Database migration guide.
For a matrix of services and tools that can help you with database and data migration scenarios as well as
specialty tasks, see Services and tools for data migration.
To learn more about SQL Database, see:
Overview of Azure SQL Database
Azure Total Cost of Ownership Calculator
To learn more about the framework and adoption cycle for cloud migrations, see:
Cloud Adoption Framework for Azure
Best practices for costing and sizing workloads migrated to Azure
To assess the application access layer, see Data Access Migration Toolkit (Preview).
For details on how to perform A/B testing for the data access layer, see Database Experimentation
Assistant.
Migration guide: SQL Server to Azure SQL
Database
Prerequisites
For your SQL Server migration to Azure SQL Database, make sure you have:
Chosen migration method and corresponding tools .
Installed Data Migration Assistant (DMA) on a machine that can connect to your source SQL Server.
Created a target Azure SQL Database.
Configured connectivity and proper permissions to access both source and target.
Reviewed the database engine features available in Azure SQL Database.
Pre-migration
After you've verified that your source environment is supported, start with the pre-migration stage. Discover all
of the existing data sources, assess migration feasibility, and identify any blocking issues that might prevent
your Azure cloud migration.
Discover
In the Discover phase, scan the network to identify all SQL Server instances and features used by your
organization.
Use Azure Migrate to assess migration suitability of on-premises servers, perform performance-based sizing,
and provide cost estimations for running them in Azure.
Alternatively, use the Microsoft Assessment and Planning Toolkit (the "MAP Toolkit") to assess your current IT
infrastructure. The toolkit provides a powerful inventory, assessment, and reporting tool to simplify the
migration planning process.
For more information about tools available to use for the Discover phase, see Services and tools available for
data migration scenarios.
Assess
NOTE
If you are assessing the entire SQL Server data estate at scale on VMWare, use Azure Migrate to get Azure SQL
deployment recommendations, target sizing, and monthly estimates.
After data sources have been discovered, assess any on-premises SQL Server database(s) that can be migrated
to Azure SQL Database to identify migration blockers or compatibility issues.
You can use the Data Migration Assistant (version 4.1 and later) to assess databases to get:
Azure target recommendations
Azure SKU recommendations
To assess your environment by using the Data Migration Assistant, follow these steps:
1. Open the Data Migration Assistant (DMA).
2. Select File and then choose New assessment .
3. Specify a project name, select SQL Server as the source server type, and then select Azure SQL Database as the target server type.
4. Select the type(s) of assessment reports that you want to generate. For example, database compatibility and
feature parity. Based on the type of assessment, the permissions required on the source SQL Server can be
different. DMA will highlight the permissions required for the chosen advisor before running the assessment.
The feature parity category provides a comprehensive set of recommendations, alternatives
available in Azure, and mitigating steps to help you plan your migration project. (sysadmin
permissions required)
The compatibility issues category identifies partially supported or unsupported feature
compatibility issues that might block migration as well as recommendations to address them (
CONNECT SQL , VIEW SERVER STATE , and VIEW ANY DEFINITION permissions required).
5. Specify the source connection details for your SQL Server and connect to the source database.
6. Select Start assessment.
7. After the process completes, select and review the assessment reports for migration blocking and feature
parity issues. The assessment report can also be exported to a file that can be shared with other teams or
personnel in your organization.
8. Determine the database compatibility level that minimizes post-migration efforts.
9. Identify the best Azure SQL Database SKU for your on-premises workload.
To learn more, see Perform a SQL Server migration assessment with Data Migration Assistant.
If the assessment encounters multiple blockers that confirm your database is not ready for an Azure SQL Database migration, then alternatively consider:
Azure SQL Managed Instance if there are multiple instance-scoped dependencies
SQL Server on Azure Virtual Machines if both SQL Database and SQL Managed Instance fail to be suitable
targets.
Scaled Assessments and Analysis
Data Migration Assistant supports performing scaled assessments and consolidation of the assessment reports
for analysis.
If you have multiple servers and databases that need to be assessed and analyzed at scale to provide a wider
view of the data estate, see the following links to learn more:
Performing scaled assessments using PowerShell
Analyzing assessment reports using Power BI
IMPORTANT
Running assessments at scale for multiple databases, especially large ones, can also be automated using the DMA
Command Line Utility and uploaded to Azure Migrate for further analysis and target readiness.
Migrate
After you have completed tasks associated with the Pre-migration stage, you are ready to perform the schema
and data migration.
Migrate your data using your chosen migration method.
This guide describes the two most popular options - Data Migration Assistant and Azure Database Migration
Service.
Data Migration Assistant (DMA)
To migrate a database from SQL Server to Azure SQL Database using DMA, follow these steps:
1. Download and install the Data Migration Assistant.
2. Create a new project and select Migration as the project type.
3. Set the source server type to SQL Server and the target server type to Azure SQL Database, select the migration scope as Schema and data, and select Create.
4. In the migration project, specify the source server details such as the server name, credentials to connect to
the server and the source database to migrate.
5. In the target server details, specify the Azure SQL Database server name, credentials to connect to the server
and the target database to migrate to.
6. Select the schema objects and deploy them to the target Azure SQL Database.
7. Finally, select Start data migration and monitor the progress of the migration.
For a detailed tutorial, see Migrate on-premises SQL Server or SQL Server on Azure VMs to Azure SQL
Database using the Data Migration Assistant.
NOTE
Scale your database to a higher service tier and compute size during the import process to maximize import speed by
providing more resources. You can then scale down after the import is successful.
The compatibility level of the imported database is based on the compatibility level of your source database.
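As a sketch, the service objective can be changed with T-SQL before and after the import (the database name and objectives are placeholders; run the statements while connected to the master database of the logical server):

-- Scale up before the import to speed up data loading:
ALTER DATABASE [MyTargetDb] MODIFY (EDITION = 'Premium', SERVICE_OBJECTIVE = 'P6');

-- Scale back down after the import completes successfully:
ALTER DATABASE [MyTargetDb] MODIFY (EDITION = 'Standard', SERVICE_OBJECTIVE = 'S3');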
IMPORTANT
For details on the specific steps associated with performing a cutover as part of migrations using DMS, see Performing
migration cutover.
Migration recommendations
To speed up migration to Azure SQL Database, you should consider the following recommendations:
Source (typically on-premises)
Contention: The primary bottleneck during migration at the source is DATA I/O and latency on the DATA file, which needs to be monitored carefully.
Recommendation: Based on DATA I/O and DATA file latency, and depending on whether it's a virtual machine or a physical server, you will have to engage the storage admin and explore options to mitigate the bottleneck.

Target (Azure SQL Database)
Contention: The biggest limiting factor is the log generation rate and latency on the log file. With Azure SQL Database, you can get a maximum of 96-MB/s log generation rate.
Recommendation: To speed up migration, scale up the target SQL database to Business Critical Gen5 8 vCore to get the maximum log generation rate of 96 MB/s and also achieve low latency for the log file. The Hyperscale service tier provides a 100-MB/s log rate regardless of the chosen service level.

Virtual machine used for Data Migration Assistant (DMA)
Contention: CPU is the primary bottleneck for the virtual machine running DMA.
Recommendation: To speed up data migration, consider Azure compute-intensive VMs, use at least an F8s_v2 (8 vCore) VM for running DMA, and ensure the VM is running in the same Azure region as the target.

Azure Database Migration Service (DMS)
Contention: Compute resource contention and database objects consideration for DMS.
Recommendation: Use Premium 4 vCore. DMS automatically takes care of database objects like foreign keys, triggers, constraints, and non-clustered indexes and doesn't need manual intervention.
Post-migration
After you have successfully completed themigrationstage, go through a series of post-migration tasks to ensure
that everything is functioning smoothly and efficiently.
The post-migration phase is crucial for reconciling any data accuracy issues and verifying completeness, as well
as addressing performance issues with the workload.
Remediate applications
After the data is migrated to the target environment, all the applications that formerly consumed the source
need to start consuming the target. Accomplishing this will, in some cases, require changes to the applications.
Perform tests
The test approach for database migration consists of the following activities:
1. Develop validation tests : To test database migration, you need to use SQL queries. You must create the validation queries to run against both the source and the target databases. Your validation queries should cover the scope you have defined. (A minimal example query follows this list.)
2. Set up test environment : The test environment should contain a copy of the source database and the
target database. Be sure to isolate the test environment.
3. Run validation tests : Run the validation tests against the source and the target, and then analyze the
results.
4. Run performance tests : Run performance test against the source and the target, and then analyze and
compare the results.
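As a minimal sketch of a validation query, comparing row counts per table is a common starting point; run the same query against the source and the target and compare the output:

-- Row counts per table; run against both source and target and compare.
SELECT s.name AS schema_name, t.name AS table_name, SUM(p.rows) AS row_count
FROM sys.tables AS t
JOIN sys.schemas AS s ON t.schema_id = s.schema_id
JOIN sys.partitions AS p ON p.object_id = t.object_id AND p.index_id IN (0, 1)
GROUP BY s.name, t.name
ORDER BY s.name, t.name;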
Rules Summary
Bulk insert
Title: BULK INSERT with non-Azure blob data source isn't supported in Azure SQL Database.
Category: Issue
Description
Azure SQL Database cannot access file shares or Windows folders. See the "Impacted Objects" section for the
specific uses of BULK INSERT statements that do not reference an Azure blob. Objects with 'BULK INSERT' where
the source isn't Azure blob storage will not work after migrating to Azure SQL Database.
Recommendation
You will need to convert BULK INSERT statements that use local files or file shares to use files from Azure blob
storage instead, when migrating to Azure SQL Database. Alternatively, migrate to SQL Server on Azure Virtual
Machine.
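As a minimal sketch of the rewrite (the storage account, container, file, and table names are placeholders; a database scoped credential is also required if the container isn't publicly readable):

-- External data source that points at an Azure Blob Storage container:
CREATE EXTERNAL DATA SOURCE MyAzureBlobStorage
WITH ( TYPE = BLOB_STORAGE,
       LOCATION = 'https://mystorageaccount.blob.core.windows.net/mycontainer' );

-- BULK INSERT now reads the file from blob storage instead of a local path or file share:
BULK INSERT dbo.SalesStaging
FROM 'data/sales.csv'
WITH ( DATA_SOURCE = 'MyAzureBlobStorage', FORMAT = 'CSV', FIRSTROW = 2 );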
Compute clause
Title: COMPUTE clause is no longer supported and has been removed.
Category: Warning
Description
The COMPUTE clause generates totals that appear as additional summary columns at the end of the result set.
However, this clause is no longer supported in Azure SQL Database.
Recommendation
The T-SQL module needs to be rewritten using the ROLLUP operator instead. The code below demonstrates
how COMPUTE can be replaced with ROLLUP:
USE AdventureWorks;
GO
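As a minimal sketch of such a rewrite (the Sales.SalesOrderHeader table and TotalDue column are assumed from the AdventureWorks sample database):

-- Legacy syntax, no longer supported:
-- SELECT SalesPersonID, TotalDue
-- FROM Sales.SalesOrderHeader
-- ORDER BY SalesPersonID
-- COMPUTE SUM(TotalDue) BY SalesPersonID;

-- Equivalent rewrite: ROLLUP returns the per-group totals as extra rows in the same result set.
SELECT SalesPersonID, SUM(TotalDue) AS TotalDue
FROM Sales.SalesOrderHeader
GROUP BY ROLLUP (SalesPersonID);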
CLR assemblies
Title: SQL CLR assemblies aren't supported in Azure SQL Database
Category: Issue
Description
Azure SQL Database does not support SQL CLR assemblies.
Recommendation
Currently, there is no way to achieve this in Azure SQL Database. The recommended alternative solutions will
require application code and database changes to use only assemblies supported by Azure SQL Database.
Alternatively migrate to Azure SQL Managed Instance or SQL Server on Azure Virtual Machine
More information: Unsupported Transact-SQL differences in SQL Database
Cryptographic provider
Title: A use of CREATE CRYPTOGRAPHIC PROVIDER or ALTER CRYPTOGRAPHIC PROVIDER was found, which isn't supported in Azure SQL Database
Category: Issue
Description
Azure SQL Database does not support CRYPTOGRAPHIC PROVIDER statements because it cannot access files.
See the Impacted Objects section for the specific uses of CRYPTOGRAPHIC PROVIDER statements. Objects with
CREATE CRYPTOGRAPHIC PROVIDER or ALTER CRYPTOGRAPHIC PROVIDER will not work correctly after migrating to Azure
SQL Database.
Recommendation
Review objects with CREATE CRYPTOGRAPHIC PROVIDER or ALTER CRYPTOGRAPHIC PROVIDER . In any such objects that
are required, remove the uses of these features. Alternatively, migrate to SQL Server on Azure Virtual Machine
Database compatibility
Title: Azure SQL Database doesn't support compatibility levels below 100.
Category: Warning
Description
Database compatibility level is a valuable tool to assist in database modernization: it allows the SQL Server Database Engine to be upgraded while keeping connecting applications functional by maintaining the same pre-upgrade database compatibility level. Azure SQL Database doesn't support compatibility levels below 100.
Recommendation
Evaluate if the application functionality is intact when the database compatibility level is upgraded to 100 on
Azure SQL Managed Instance. Alternatively, migrate to SQL Server on Azure Virtual Machine
Database mail
Title: Database Mail isn't supported in Azure SQL Database.
Category: Warning
Description
This server uses the Database Mail feature, which isn't supported in Azure SQL Database.
Recommendation
Consider migrating to Azure SQL Managed Instance, which supports Database Mail. Alternatively, consider using Azure Functions and SendGrid to accomplish mail functionality on Azure SQL Database.
FASTFIRSTROW hint
Title: FASTFIRSTROW query hint is no longer supported and has been removed.
Category: Warning
Description
FASTFIRSTROW query hint is no longer supported and has been removed in Azure SQL Database.
Recommendation
Instead of the FASTFIRSTROW query hint, use OPTION (FAST n).
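A minimal sketch of the rewrite (the table and column names are placeholders):

-- Legacy table hint, removed:
-- SELECT OrderID, OrderDate FROM dbo.Orders (FASTFIRSTROW);

-- Supported equivalent: optimize the plan for returning the first row quickly.
SELECT OrderID, OrderDate
FROM dbo.Orders
OPTION (FAST 1);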
More information: Discontinued Database Engine functionality in SQL Server
FileStream
Title: Filestream isn't supported in Azure SQL Database
Category: Issue
Description
The Filestream feature, which allows you to store unstructured data such as text documents, images, and videos
in NTFS file system, isn't supported in Azure SQL Database.
Recommendation
Upload the unstructured files to Azure Blob storage and store metadata related to these files (name, type, URL
location, storage key etc.) in Azure SQL Database. You may have to re-engineer your application to enable
streaming blobs to and from Azure SQL Database. Alternatively, migrate to SQL Server on Azure Virtual
Machine.
More information: Streaming blobs to and from Azure SQL blog
Linked server
Title: Linked server functionality isn't supported in Azure SQL Database
Category: Issue
Description
Linked servers enable the SQL Server Database Engine to execute commands against OLE DB data sources
outside of the instance of SQL Server.
Recommendation
Azure SQL Database does not support linked server functionality. The following actions are recommended to
eliminate the need for linked servers:
Identify the dependent datasets from remote SQL servers and consider moving these into the database
being migrated.
Migrate the dependent database(s) to Azure and use Elastic Database Query (preview) functionality to query
across databases in Azure SQL Database.
More information: Check Azure SQL Database elastic query (Preview)
MS DTC
Title: BEGIN DISTRIBUTED TRANSACTION isn't supported in Azure SQL Database.
Category: Issue
Description
Distributed transactions started by the Transact-SQL BEGIN DISTRIBUTED TRANSACTION statement and managed by Microsoft Distributed Transaction Coordinator (MS DTC) aren't supported in Azure SQL Database.
Recommendation
Review the impacted objects section in Azure Migrate to see all objects that use BEGIN DISTRIBUTED TRANSACTION. Consider migrating the participant databases to Azure SQL Managed Instance, where distributed transactions across multiple instances are supported (currently in preview). Alternatively, migrate to SQL Server on Azure Virtual Machine.
More information: Transactions across multiple servers for Azure SQL Managed Instance
OPENROWSET (bulk)
Title: OpenRowSet used in bulk operation with non-Azure blob storage data source isn't supported in Azure SQL Database.
Category: Issue
Description
OPENROWSET supports bulk operations through a built-in BULK provider that enables data from a file to be read and returned as a rowset. OPENROWSET with non-Azure blob storage data source isn't supported in Azure SQL Database.
Recommendation
Azure SQL Database cannot access file shares and Windows folders, so the files must be imported from Azure
blob storage. Therefore, only blob type DATASOURCE is supported in OPENROWSET function. Alternatively,
migrate to SQL Server on Azure Virtual Machine
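As a sketch (the file path is a placeholder, and the external data source is assumed to be defined as in the BULK INSERT example earlier):

-- Read a file from Azure Blob Storage as a single value through the blob-backed data source:
SELECT BulkColumn
FROM OPENROWSET(
    BULK 'data/invoice-2020-01.txt',
    DATA_SOURCE = 'MyAzureBlobStorage',
    SINGLE_CLOB
) AS doc;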
More information: Resolving Transact-SQL differences during migration to SQL Database
OPENROWSET (provider)
Title: OpenRowSet with SQL or non-SQL provider isn't supported in Azure SQL Database.
Category: Issue
Description
OpenRowSet with SQL or non-SQL provider is an alternative to accessing tables in a linked server and is a one-
time, ad hoc method of connecting and accessing remote data by using OLE DB. OpenRowSet with SQL or non-
SQL provider isn't supported in Azure SQL Database.
Recommendation
Azure SQL Database supports OPENROWSET only to import from Azure blob storage. Alternatively, migrate to
SQL Server on Azure Virtual Machine
More information: Resolving Transact-SQL differences during migration to SQL Database
Next column
Title: Tables and columns named NEXT will lead to an error in Azure SQL Database.
Category: Issue
Description
Tables or columns named NEXT were detected. Sequences, introduced in Microsoft SQL Server, use the ANSI
standard NEXT VALUE FOR function. If a table or a column is named NEXT and the column is aliased as VALUE,
and if the ANSI standard AS is omitted, the resulting statement can cause an error.
Recommendation
Rewrite statements to include the ANSI standard AS keyword when aliasing a table or column. For example,
when a column is named NEXT and that column is aliased as VALUE, the query SELECT NEXT VALUE FROM TABLE
will cause an error and should be rewritten as SELECT NEXT AS VALUE FROM TABLE. Similarly, when a table is
named NEXT and that table is aliased as VALUE, the query SELECT Col1 FROM NEXT VALUE will cause an error and
should be rewritten as SELECT Col1 FROM NEXT AS VALUE.
RAISERROR
Title: Legacy style RAISERROR calls should be replaced with modern equivalents.
Category: Warning
Description
RAISERROR calls like the following example are termed legacy-style because they do not include commas and
parentheses: RAISERROR 50001 'this is a test'. This method of calling RAISERROR is no longer supported and has
been removed in Azure SQL Database.
Recommendation
Rewrite the statement using the current RAISERROR syntax, or evaluate if the modern approach of
BEGIN TRY { } END TRY BEGIN CATCH { THROW; } END CATCH is feasible.
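As an illustration using only the message text from the example above, the legacy call could be rewritten in either of these forms:
-- Legacy syntax (removed in Azure SQL Database):
-- RAISERROR 50001 'this is a test'

-- Option 1: current RAISERROR syntax with message, severity, and state.
RAISERROR ('this is a test', 16, 1);

-- Option 2: structured error handling with THROW (user error numbers must be 50000 or higher).
BEGIN TRY
    THROW 50001, 'this is a test', 1;
END TRY
BEGIN CATCH
    THROW;  -- rethrow the original error to the caller
END CATCH;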
Server audits
Title: Use Azure SQL Database audit features to replace Server Audits
Category: Warning
Description
The Server Audit feature isn't supported in Azure SQL Database.
Recommendation
Consider Azure SQL Database audit features to replace Server Audits. Azure SQL Database supports auditing, and
its features are richer than SQL Server audit. Azure SQL Database can audit various database actions and events,
including: access to data, schema changes (DDL), data changes (DML), accounts, roles, and permissions (DCL),
and security exceptions. Azure SQL Database Auditing increases an organization's ability to gain deep insight into
events and changes that occur within their database, including updates and queries against the data.
Alternatively, migrate to Azure SQL Managed Instance or SQL Server on Azure Virtual Machine.
More information: Auditing for Azure SQL Database
Server credentials
Title: Server scoped credential isn't supported in Azure SQL Database
Category: Warning
Description
A credential is a record that contains the authentication information (credentials) required to connect to a
resource outside SQL Server. Azure SQL Database supports database credentials, but not the ones created at the
SQL Server scope.
Recommendation
Azure SQL Database supports database scoped credentials. Convert server scoped credentials to database
scoped credentials. Alternatively, migrate to Azure SQL Managed Instance or SQL Server on Azure Virtual
Machine.
More information: Creating database scoped credential
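A minimal sketch of the conversion is shown below; the credential name and secret are hypothetical, and a database master key must exist before the credential can be created.
-- Create a database master key if the database does not already have one.
CREATE MASTER KEY ENCRYPTION BY PASSWORD = '<strong password>';

-- Replaces a server scoped credential that was created with CREATE CREDENTIAL on SQL Server.
CREATE DATABASE SCOPED CREDENTIAL AppStorageCredential
    WITH IDENTITY = 'SHARED ACCESS SIGNATURE',
         SECRET = '<SAS token without the leading question mark>';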
Service Broker
Title: Service Broker feature isn't supported in Azure SQL Database
Category: Issue
Description
SQL Server Service Broker provides native support for messaging and queuing applications in the SQL Server
Database Engine. Service Broker feature isn't supported in Azure SQL Database.
Recommendation
Consider migrating to Azure SQL Managed Instance, which supports Service Broker within the same instance.
Alternatively, migrate to SQL Server on Azure Virtual Machine.
Server-scoped triggers
Title: Server-scoped trigger isn't supported in Azure SQL Database
Category: Warning
Description
A trigger is a special kind of stored procedure that executes in response to certain actions on a table, such as
inserting, deleting, or updating data. Server-scoped triggers aren't supported in Azure SQL Database. Azure
SQL Database does not support the following options for triggers: FOR LOGON, ENCRYPTION, WITH APPEND,
NOT FOR REPLICATION, the EXTERNAL NAME option (there is no external method support), the ALL SERVER
option (DDL triggers), and triggers on a LOGON event (logon triggers). Azure SQL Database also does not
support CLR triggers.
Recommendation
Use database-level triggers instead. Alternatively, migrate to Azure SQL Managed Instance or SQL Server on
Azure Virtual Machine.
More information: Resolving Transact-SQL differences during migration to SQL Database
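For example, an ALL SERVER DDL trigger that audits table changes could be replaced with a database-scoped equivalent along these lines; the trigger name and the dbo.SchemaChangeLog table are hypothetical.
-- Assumes a logging table such as dbo.SchemaChangeLog (EventData XML, EventTime DATETIME2) exists.
CREATE TRIGGER trg_AuditTableChanges
ON DATABASE                                   -- database scope instead of ALL SERVER
FOR CREATE_TABLE, ALTER_TABLE, DROP_TABLE
AS
BEGIN
    INSERT INTO dbo.SchemaChangeLog (EventData, EventTime)
    VALUES (EVENTDATA(), SYSUTCDATETIME());
END;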
SQL Mail
Title: SQL Mail has been discontinued.
Category: Warning
Description
SQL Mail has been discontinued and removed in Azure SQL Database.
Recommendation
Consider migrating to Azure SQL Managed Instance or SQL Server on Azure Virtual Machines and use Database
Mail.
More information: Discontinued Database Engine functionality in SQL Server
SystemProcedures110
Title: Detected statements that reference removed system stored procedures that aren't available
in Azure SQL Database.
Category: Warning
Description
The following unsupported system and extended stored procedures cannot be used in Azure SQL Database:
sp_dboption, sp_addserver, sp_dropalias, sp_activedirectory_obj, sp_activedirectory_scp, and
sp_activedirectory_start.
Recommendation
Remove references to unsupported system procedures that have been removed in Azure SQL Database.
More information: Discontinued Database Engine functionality in SQL Server
Trace flags
Title: Azure SQL Database does not support trace flags
Category: Warning
Description
Trace flags are used to temporarily set specific server characteristics or to switch off a particular behavior. Trace
flags are frequently used to diagnose performance issues or to debug stored procedures or complex computer
systems. Azure SQL Database does not support trace flags.
Recommendation
Review the impacted objects section in Azure Migrate to see all trace flags that aren't supported in Azure SQL
Database and evaluate whether they can be removed. Alternatively, migrate to Azure SQL Managed Instance,
which supports a limited number of global trace flags, or to SQL Server on Azure Virtual Machine.
More information: Resolving Transact-SQL differences during migration to SQL Database
Windows authentication
Title: Database users mapped with Windows authentication (integrated security) aren't supported
in Azure SQL Database.
Category: Warning
Description
Azure SQL Database supports two types of authentication:
SQL Authentication: uses a username and password
Azure Active Directory Authentication: uses identities managed by Azure Active Directory and is supported
for managed and integrated domains.
Database users mapped with Windows authentication (integrated security) aren't supported in Azure SQL
Database.
Recommendation
Federate the local Active Directory with Azure Active Directory. The Windows identity can then be replaced with
the equivalent Azure Active Directory identities. Alternatively, migrate to SQL Server on Azure Virtual Machine.
More information: SQL Database security capabilities
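After federation, replacing a Windows-mapped database user typically means creating a contained user from the Azure AD identity; in this sketch the account name and role are placeholders.
-- Run in the user database while connected as the Azure AD admin for the logical server.
CREATE USER [jane.doe@contoso.com] FROM EXTERNAL PROVIDER;
ALTER ROLE db_datareader ADD MEMBER [jane.doe@contoso.com];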
XP_cmdshell
Title: xp_cmdshell isn't supported in Azure SQL Database.
Category: Issue
Description
xp_cmdshell, which spawns a Windows command shell and passes in a string for execution, isn't supported in
Azure SQL Database.
Recommendation
Review the impacted objects section in Azure Migrate to see all objects using xp_cmdshell and evaluate whether
the reference to xp_cmdshell or the impacted object can be removed. Also consider exploring Azure Automation,
which delivers a cloud-based automation and configuration service. Alternatively, migrate to SQL Server on Azure
Virtual Machine.
Next steps
To start migrating your SQL Server to Azure SQL Database, see the SQL Server to SQL Database migration
guide.
For a matrix of available Microsoft and third-party services and tools to assist you with various database
and data migration scenarios as well as specialty tasks, see Service and tools for data migration.
To learn more about SQL Database, see:
Overview of Azure SQL Database
Azure Total Cost of Ownership Calculator
To learn more about the framework and adoption cycle for cloud migrations, see:
Cloud Adoption Framework for Azure
Best practices for costing and sizing workloads migrated to Azure
To assess the Application access layer, see Data Access Migration Toolkit (Preview)
For details on how to perform Data Access Layer A/B testing, see Database Experimentation Assistant.
Configure and manage content reference - Azure
SQL Database
Load data
Migrate to SQL Database
Learn how to manage SQL Database after migration.
Copy a database
Import a DB from a BACPAC
Export a DB to BACPAC
Load data with BCP
Load data with ADF
Configure features
Configure Azure Active Directory (Azure AD) auth
Configure Conditional Access
Multi-factor Azure AD auth
Configure Multi-Factor Authentication
Configure backup retention for a database to keep your backups on Azure Blob Storage.
Configure geo-replication to keep a replica of your database in another region.
Configure auto-failover group to automatically failover a group of single or pooled databases to a secondary
server in another region in the event of a disaster.
Configure temporal retention policy
Configure TDE with BYOK
Rotate TDE BYOK keys
Remove TDE protector
Configure In-Memory OLTP
Configure Azure Automation
Configure transactional replication to replicate your data between databases.
Configure threat detection to let Azure SQL Database identify suspicious activities such as SQL Injection or
access from suspicious locations.
Configure dynamic data masking to protect your sensitive data.
Configure security for geo-replicas.
Database sharding
Upgrade elastic database client library.
Create sharded app.
Query horizontally sharded data.
Run Multi-shard queries.
Move sharded data.
Configure security in database shards.
Add a shard to the current set of database shards.
Fix shard map problems.
Migrate sharded DB.
Create counters.
Use entity framework to query sharded data.
Use Dapper framework to query sharded data.
Develop applications
Connectivity
Use Spark Connector
Authenticate app
Use batching for better performance
Connectivity guidance
DNS aliases
Setup DNS alias PowerShell
Ports - ADO.NET
C and C++
Excel
Design applications
Design for disaster recovery
Design for elastic pools
Design for app upgrades
Design Multi-tenant software as a service (SaaS) applications
SaaS design patterns
SaaS video indexer
SaaS app security
Next steps
Learn more about How-to guides for Azure SQL Managed Instance
T-SQL differences between SQL Server and Azure
SQL Database
When migrating your database from SQL Server to Azure SQL Database, you may discover that your SQL
Server databases require some re-engineering before they can be migrated. This article provides guidance to
assist you in both performing this re-engineering and understanding the underlying reasons why the re-
engineering is necessary. To detect incompatibilities and migrate databases to Azure SQL Database, use Data
Migration Assistant (DMA).
Overview
Most T-SQL features that applications use are fully supported in both Microsoft SQL Server and Azure SQL
Database. For example, the core SQL components such as data types, operators, string, arithmetic, logical, and
cursor functions work identically in SQL Server and SQL Database. There are, however, a few T-SQL differences
in DDL (data definition language) and DML (data manipulation language) elements resulting in T-SQL
statements and queries that are only partially supported (which we discuss later in this article).
In addition, there are some features and syntax that aren't supported at all because Azure SQL Database is
designed to isolate features from dependencies on the system databases and the operating system. As such,
most instance-level features are not supported in SQL Database. T-SQL statements and options aren't available
if they configure instance-level options or operating system components, or if they specify file system configuration.
When such capabilities are required, an appropriate alternative is often available in some other way from SQL
Database or from another Azure feature or service.
For example, high availability is built into Azure SQL Database. T-SQL statements related to availability groups
are not supported by SQL Database, and the dynamic management views related to Always On Availability
Groups are also not supported.
For a list of the features that are supported and unsupported by SQL Database, see Azure SQL Database feature
comparison. This page supplements that article, and focuses on T-SQL statements.
Next steps
For a list of the features that are supported and unsupported by SQL Database, see Azure SQL Database feature
comparison.
To detect compatibility issues in your SQL Server databases before migrating to Azure SQL Database, and to
migrate your databases, use Data Migration Assistant (DMA).
Plan and manage costs for Azure SQL Database
This article describes how you plan for and manage costs for Azure SQL Database.
First, you use the Azure pricing calculator to add Azure resources, and review the estimated costs. After you've
started using Azure SQL Database resources, use Cost Management features to set budgets and monitor costs.
You can also review forecasted costs and analyze spending trends to identify areas where you might want to act.
Costs for Azure SQL Database are only a portion of the monthly costs in your Azure bill. Although this article
explains how to plan for and manage costs for Azure SQL Database, you're billed for all Azure services and
resources used in your Azure subscription, including any third-party services.
Prerequisites
Cost analysis supports most Azure account types, but not all of them. To view the full list of supported account
types, see Understand Cost Management data. To view cost data, you need at least read access for an Azure
account.
For information about assigning access to Azure Cost Management data, see Assign access to data.
* In the DTU purchasing model, an initial set of storage for data and backups is provided at no additional cost.
The size of the storage depends on the service tier selected. Extra data storage can be purchased in the standard
and premium tiers. For more information, see Azure SQL Database pricing.
The following table shows the most common billing meters and their possible SKUs for elastic pools:
* In the DTU purchasing model, an initial set of storage for data and backups is provided at no additional cost.
The size of the storage depends on the service tier selected. Extra data storage can be purchased in the standard
and premium tiers. For more information, see Azure SQL Database pricing.
Using Monetary Credit with Azure SQL Database
You can pay for Azure SQL Database charges with your Azure Prepayment (previously called monetary
commitment) credit. However, you can't use Azure Prepayment credit to pay for charges for third-party products
and services including those from the Azure Marketplace.
Monitor costs
As you start using Azure SQL Database, you can see the estimated costs in the portal. Use the following steps to
review the cost estimate:
1. Sign in to the Azure portal and navigate to the resource group for your Azure SQL database. You can
locate the resource group by navigating to your database and selecting Resource group in the Overview
section.
2. In the menu, select Cost analysis .
3. View Accumulated costs and set the chart at the bottom to Service name. This chart shows an
estimate of your current SQL Database costs. To narrow costs for the entire page to Azure SQL Database,
select Add filter and then select Azure SQL Database. The information and pricing in the following
image are for example purposes only:
From here, you can explore costs on your own. For more information about the different cost analysis
settings, see Start analyzing costs.
Create budgets
You can create budgets to manage costs and create alerts that automatically notify stakeholders of spending
anomalies and overspending risks. Alerts are based on spending compared to budget and cost thresholds.
Budgets and alerts are created for Azure subscriptions and resource groups, so they're useful as part of an
overall cost monitoring strategy.
Budgets can be created with filters for specific resources or services in Azure if you want more granularity
present in your monitoring. Filters help ensure that you don't accidentally create new resources. For more about
the filter options when you create a budget, see Group and filter options.
Other ways to manage and reduce costs for Azure SQL Database
Azure SQL Database also enables you to scale resources up or down to control costs based on your application
needs. For details, see Dynamically scale database resources.
Save money by committing to a reservation for compute resources for one to three years. For details, see Save
costs for resources with reserved capacity.
Next steps
Learn how to optimize your cloud investment with Azure Cost Management.
Learn more about managing costs with cost analysis.
Learn about how to prevent unexpected costs.
Take the Cost Management guided learning course.
Azure SQL Database and Azure SQL Managed
Instance connect and query articles
Quickstarts
QUICKSTART | DESCRIPTION
SQL Server Management Studio | This quickstart demonstrates how to use SSMS to connect to a database, and then use Transact-SQL statements to query, insert, update, and delete data in the database.
Azure Data Studio | This quickstart demonstrates how to use Azure Data Studio to connect to a database, and then use Transact-SQL (T-SQL) statements to create the TutorialDB used in Azure Data Studio tutorials.
Azure portal | This quickstart demonstrates how to use the Query editor to connect to a database (Azure SQL Database only), and then use Transact-SQL statements to query, insert, update, and delete data in the database.
Visual Studio Code | This quickstart demonstrates how to use Visual Studio Code to connect to a database, and then use Transact-SQL statements to query, insert, update, and delete data in the database.
.NET with Visual Studio | This quickstart demonstrates how to use the .NET framework to create a C# program with Visual Studio to connect to a database and use Transact-SQL statements to query data.
NOTE
For connection information for SQL Server on Azure VM, see Connect to a SQL Server instance.
Drivers
The following minimal versions of the tools and drivers are recommended if you want to connect to Azure SQL
Database:
DRIVER/TOOL | VERSION
Libraries
You can use various libraries and frameworks to connect to Azure SQL Database or Azure SQL Managed
Instance. Check out our Get started tutorials to quickly get started with programming languages such as C#,
Java, Node.js, PHP, and Python. Then build an app by using SQL Server on Linux or Windows or Docker on
macOS.
The following table lists connectivity libraries or drivers that client applications can use from a variety of
languages to connect to and use SQL Server running on-premises or in the cloud. You can use them on Linux,
Windows, or Docker and use them to connect to Azure SQL Database, Azure SQL Managed Instance, and Azure
Synapse Analytics.
LANGUAGE | PLATFORM | ADDITIONAL RESOURCES | DOWNLOAD | GET STARTED
PHP | Windows, Linux, macOS | PHP SQL driver for SQL Server | Download | Get started
Python | Windows, Linux, macOS | Python SQL driver | Install choices: pymssql, pyodbc | Get started
Ruby | Windows, Linux, macOS | Ruby driver for SQL Server | Install | Get started
Data-access frameworks
The following table lists examples of object-relational mapping (ORM) frameworks and web frameworks that
client applications can use with SQL Server, Azure SQL Database, Azure SQL Managed Instance, or Azure
Synapse Analytics. You can use the frameworks on Linux, Windows, or Docker.
LANGUAGE | PLATFORM | ORM(S)
Next steps
For connectivity architecture information, see Azure SQL Database Connectivity Architecture.
Find SQL Server drivers that are used to connect from client applications.
Connect to Azure SQL Database or Azure SQL Managed Instance:
Connect and query using .NET (C#)
Connect and query using PHP
Connect and query using Node.js
Connect and query using Java
Connect and query using Python
Connect and query using Ruby
Install sqlcmd and bcp, the SQL Server command-line tools, on Linux - For Linux users, try connecting
to Azure SQL Database or Azure SQL Managed Instance using sqlcmd.
Retry logic code examples:
Connect resiliently with ADO.NET
Connect resiliently with PHP
Quickstart: Use SSMS to connect to and query
Azure SQL Database or Azure SQL Managed
Instance
Prerequisites
Completing this quickstart requires the following items:
SQL Server Management Studio (SSMS).
A database in Azure SQL Database. You can use one of these quickstarts to create and then configure a
database in Azure SQL Database:
ACTION | SQL DATABASE | SQL MANAGED INSTANCE | SQL SERVER ON AZURE VM
Create | CLI | CLI
Load data | Adventure Works loaded per quickstart | Restore Wide World Importers | Restore Wide World Importers
IMPORTANT
The scripts in this article are written to use the Adventure Works database. With a managed instance, you must
either import the Adventure Works database into an instance database or modify the scripts in this article to use
the Wide World Importers database.
If you simply want to run some ad-hoc queries without installing SSMS, see Quickstart: Use the Azure portal's
query editor to query a database in Azure SQL Database.
NOTE
For connection information for SQL Server on Azure VM, see Connect to SQL Server
IMPORTANT
A server listens on port 1433. To connect to a server from behind a corporate firewall, the firewall must have this port
open.
1. Open SSMS.
2. The Connect to Server dialog box appears. Enter the following information:
SETTING | SUGGESTED VALUE | DESCRIPTION
Server name | The fully qualified server name | Something like: servername.database.windows.net.
Login | Server admin account user ID | The user ID from the server admin account used to create the server.
Password | Server admin account password | The password from the server admin account used to create the server.
NOTE
This tutorial utilizes SQL Server Authentication.
3. Select Options in the Connect to Server dialog box. In the Connect to database drop-down menu,
select mySampleDatabase . Completing the quickstart in the Prerequisites section creates an
AdventureWorksLT database named mySampleDatabase. If your working copy of the AdventureWorks
database has a different name than mySampleDatabase, then select it instead.
3. On the toolbar, select Execute to run the query and retrieve data from the Product and ProductCategory
tables.
Insert data
Run this INSERT Transact-SQL code to create a new product in the SalesLT.Product table.
1. Replace the previous query with this one.
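A minimal sketch of such an INSERT is shown here; the column list assumes the AdventureWorksLT SalesLT.Product schema, and the product number, cost, and price values are placeholders (only the name myNewProduct is relied on by the later UPDATE and DELETE steps).
-- Sketch only: values other than the product name are arbitrary placeholders.
INSERT INTO [SalesLT].[Product] ([Name], [ProductNumber], [StandardCost], [ListPrice], [SellStartDate])
VALUES ('myNewProduct', 'BK-2000-99', 75, 100, GETDATE());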
2. Select Execute to insert a new row in the Product table. The Messages pane displays (1 row
affected) .
View the result
1. Replace the previous query with this one.
UPDATE [SalesLT].[Product]
SET [ListPrice] = 125
WHERE Name = 'myNewProduct';
2. Select Execute to update the specified row in the Product table. The Messages pane displays (1 row
affected) .
Delete data
Run this DELETE Transact-SQL code to remove your new product.
1. Replace the previous query with this one.
2. Select Execute to delete the specified row in the Product table. The Messages pane displays (1 row
affected) .
Next steps
For information about SSMS, see SQL Server Management Studio.
To connect and query using the Azure portal, see Connect and query with the Azure portal SQL Query editor.
To connect and query using Visual Studio Code, see Connect and query with Visual Studio Code.
To connect and query using .NET, see Connect and query with .NET.
To connect and query using PHP, see Connect and query with PHP.
To connect and query using Node.js, see Connect and query with Node.js.
To connect and query using Java, see Connect and query with Java.
To connect and query using Python, see Connect and query with Python.
To connect and query using Ruby, see Connect and query with Ruby.
Quickstart: Use the Azure portal query editor
(preview) to query Azure SQL Database
Prerequisites
The AdventureWorksLT sample Azure SQL database. If you don't have it, you can create a database in
Azure SQL Database that has the AdventureWorks sample data.
A user account with permissions to connect to the database and query editor. You can either:
Have or set up a user that can connect to the database with SQL authentication.
Set up an Azure Active Directory (Azure AD) administrator for the database's SQL server.
An Azure AD server administrator can use a single identity to sign in to the Azure portal and the
SQL server and databases. To set up an Azure AD server admin:
1. In the Azure portal, on your Azure SQL database Overview page, select Server name
under Essentials to navigate to the server for your database.
2. On the server page, select Azure Active Directory in the Settings section of the left
menu.
3. On the Azure Active Directory page toolbar, select Set admin.
4. On the Azure Active Directory form, search for and select the user or group you want to
be the admin, and then select Select.
5. On the Azure Active Directory main page, select Save.
NOTE
Email addresses like outlook.com or gmail.com aren't supported as Azure AD admins. The user must
either be created natively in the Azure AD or federated into the Azure AD.
Azure AD admin sign-in works with accounts that have two-factor authentication enabled, but the
query editor doesn't support two-factor authentication.
2. On the sign-in screen, provide credentials to connect to the database. You can connect using SQL
authentication or Azure AD.
To connect with SQL authentication, under SQL server authentication, enter a Login and
Password for a user that has access to the database, and then select OK . You can always use the
login and password for the server admin.
To connect using Azure AD, if you're the Azure AD server admin, select Continue as <your user
or group ID> . If sign-in is unsuccessful, try refreshing the page.
2. Select Run , and then review the output in the Results pane.
3. Optionally, you can select Save query to save the query as an .sql file, or select Export data as to
export the results as a .json, .csv, or .xml file.
Run an INSERT query
To add a new product to the SalesLT.Product table, run the following INSERT T-SQL statement.
1. In the query editor, replace the previous query with the following query:
2. Select Run to add the new product. After the query runs, the Messages pane displays Query
succeeded: Affected rows: 1.
Run an UPDATE query
Run the following UPDATE T-SQL statement to update the price of your new product.
1. In the query editor, replace the previous query with the following query:
UPDATE [SalesLT].[Product]
SET [ListPrice] = 125
WHERE Name = 'myNewProduct';
2. Select Run to update the specified row in the Product table. The Messages pane displays Query
succeeded: Affected rows: 1.
Run a DELETE query
Run the following DELETE T-SQL statement to remove your new product.
1. In the query editor, replace the previous query with the following query:
2. Select Run to delete the specified row in the Product table. The Messages pane displays Query
succeeded: Affected rows: 1.
Prerequisites
A database in Azure SQL Database or Azure SQL Managed Instance. You can use one of these quickstarts
to create and then configure a database in Azure SQL Database:
ACTION | SQL DATABASE | SQL MANAGED INSTANCE
Create | CLI, PowerShell | CLI, PowerShell
Load data | Adventure Works loaded per quickstart | Restore Wide World Importers
IMPORTANT
The scripts in this article are written to use the Adventure Works database. With a SQL Managed Instance, you
must either import the Adventure Works database into an instance database or modify the scripts in this article
to use the Wide World Importers database.
Linux (Ubuntu)
No special configuration needed.
Windows
No special configuration needed.
IMPORTANT
Before continuing, make sure that you have your server and sign in information ready. Once you begin entering the
connection profile information, if you change your focus from Visual Studio Code, you have to restart creating the profile.
1. In Visual Studio Code, press Ctrl+Shift+P (or F1) to open the Command Palette.
2. Select MS SQL:Connect and choose Enter.
3. Select Create Connection Profile .
4. Follow the prompts to specify the new profile's connection properties. After specifying each value, choose
Enter to continue.
SETTING | SUGGESTED VALUE | DESCRIPTION
Server name | The fully qualified server name | Something like: mynewserver20170313.database.windows.net.
User name | User name | The user name of the server admin account used to create the server.
Enter a name for this profile | A profile name, such as mySampleProfile | A saved profile speeds your connection on subsequent logins.
Query data
Run the following SELECT Transact-SQL statement to query for the top 20 products by category.
1. In the editor window, paste the following SQL query.
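A sketch of such a query, assuming the AdventureWorksLT sample schema:
-- Top 20 products with their category names (AdventureWorksLT sample schema assumed).
SELECT TOP 20 pc.Name AS CategoryName, p.Name AS ProductName
FROM SalesLT.ProductCategory AS pc
JOIN SalesLT.Product AS p
    ON pc.ProductCategoryID = p.ProductCategoryID;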
2. Press Ctrl+Shift+E to run the query and display results from the Product and ProductCategory tables.
Insert data
Run the following INSERT Transact-SQL statement to add a new product into the SalesLT.Product table.
1. Replace the previous query with this one.
Update data
Run the following UPDATE Transact-SQL statement to update the added product.
1. Replace the previous query with this one:
UPDATE [SalesLT].[Product]
SET [ListPrice] = 125
WHERE Name = 'myNewProduct';
2. Press Ctrl+Shift+E to update the specified row in the Product table.
Delete data
Run the following DELETE Transact-SQL statement to remove the new product.
1. Replace the previous query with this one:
2. Press Ctrl+Shift+E to delete the specified row in the Product table.
Next steps
To connect and query using SQL Server Management Studio, see Quickstart: Use SQL Server Management
Studio to connect to a database in Azure SQL Database and query data.
To connect and query using the Azure portal, see Quickstart: Use the SQL Query editor in the Azure portal to
connect and query data.
For an MSDN magazine article on using Visual Studio Code, see Create a database IDE with MSSQL
extension blog post.
Connect to Azure SQL Database with Azure AD
Multi-Factor Authentication
TIP
You can search .NET Framework APIs with the .NET API Browser tool page.
You can also search directly with the optional ?term=<search value> parameter.
Prerequisite
Before you begin, you should have a logical SQL server created and available.
Set an Azure AD admin for your server
For the C# example to run, a logical SQL server admin needs to assign an Azure AD admin for your server.
On the SQL server page, select Active Directory admin > Set admin.
For more information about Azure AD admins and users for Azure SQL Database, see the screenshots in
Configure and manage Azure Active Directory authentication with SQL Database.
Microsoft.Data.SqlClient
The C# example relies on the Microsoft.Data.SqlClient namespace. For more information, see Using Azure Active
Directory authentication with SqlClient.
NOTE
System.Data.SqlClient uses the Azure Active Directory Authentication Library (ADAL), which will be deprecated. If you're
using the System.Data.SqlClient namespace for Azure Active Directory authentication, migrate applications to
Microsoft.Data.SqlClient and the Microsoft Authentication Library (MSAL). For more information about using Azure AD
authentication with SqlClient, see Using Azure Active Directory authentication with SqlClient.
NOTE
If you are a guest user in the database, you also need to provide the Azure AD domain name for the database: Select
Options > AD domain name or tenant ID. If you are running SSMS 18.x or later, the AD domain name or tenant ID
is no longer needed for guest users because 18.x or later automatically recognizes it.
To find the domain name in the Azure portal, select Azure Active Directory > Custom domain names. In the C#
example program, providing a domain name is not necessary.
C# code example
NOTE
If you are using .NET Core, you will want to use the Microsoft.Data.SqlClient namespace. For more information, see the
following blog.
{
conn.Open();
Console.WriteLine("ConnectionString2 succeeded.");
using (var cmd = new SqlCommand("SELECT @@Version", conn))
{
Console.WriteLine("select @@version");
var result = cmd.ExecuteScalar();
Console.WriteLine(result.ToString());
}
}
Console.ReadKey();
}
}
ConnectionString2 succeeded.
select @@version
Microsoft SQL Azure (RTM) - 12.0.2000.8
...
Next steps
Azure Active Directory server principals
Azure AD-only authentication with Azure SQL
Using multi-factor Azure Active Directory authentication
Use Java and JDBC with Azure SQL Database
This topic demonstrates creating a sample application that uses Java and JDBC to store and retrieve information
in Azure SQL Database.
JDBC is the standard Java API to connect to traditional relational databases.
Prerequisites
An Azure account. If you don't have one, get a free trial.
Azure Cloud Shell or Azure CLI. We recommend Azure Cloud Shell so you'll be logged in automatically and
have access to all the tools you'll need.
A supported Java Development Kit, version 8 (included in Azure Cloud Shell).
The Apache Maven build tool.
AZ_RESOURCE_GROUP=database-workshop
AZ_DATABASE_NAME=<YOUR_DATABASE_NAME>
AZ_LOCATION=<YOUR_AZURE_REGION>
AZ_SQL_SERVER_USERNAME=demo
AZ_SQL_SERVER_PASSWORD=<YOUR_AZURE_SQL_PASSWORD>
AZ_LOCAL_IP_ADDRESS=<YOUR_LOCAL_IP_ADDRESS>
Replace the placeholders with the following values, which are used throughout this article:
<YOUR_DATABASE_NAME>: The name of your Azure SQL Database server. It should be unique across Azure.
<YOUR_AZURE_REGION>: The Azure region you'll use. You can use eastus by default, but we recommend that
you configure a region closer to where you live. You can get the full list of available regions by entering
az account list-locations.
<AZ_SQL_SERVER_PASSWORD>: The password of your Azure SQL Database server. That password should have a
minimum of eight characters. The characters should be from three of the following categories: English
uppercase letters, English lowercase letters, numbers (0-9), and non-alphanumeric characters (!, $, #, %, and
so on).
<YOUR_LOCAL_IP_ADDRESS>: The IP address of your local computer, from which you'll run your Java application.
One convenient way to find it is to point your browser to whatismyip.akamai.com.
Next, create a resource group using the following command:
az group create \
--name $AZ_RESOURCE_GROUP \
--location $AZ_LOCATION \
| jq
NOTE
We use the jq utility to display JSON data and make it more readable. This utility is installed by default on Azure Cloud
Shell. If you don't like that utility, you can safely remove the | jq part of all the commands we'll use.
NOTE
You can read more detailed information about creating Azure SQL Database servers in Quickstart: Create an Azure SQL
Database single database.
az sql db create \
--resource-group $AZ_RESOURCE_GROUP \
--name demo \
--server $AZ_DATABASE_NAME \
| jq
<properties>
<java.version>1.8</java.version>
<maven.compiler.source>1.8</maven.compiler.source>
<maven.compiler.target>1.8</maven.compiler.target>
</properties>
<dependencies>
<dependency>
<groupId>com.microsoft.sqlserver</groupId>
<artifactId>mssql-jdbc</artifactId>
<version>7.4.1.jre8</version>
</dependency>
</dependencies>
</project>
url=jdbc:sqlserver://$AZ_DATABASE_NAME.database.windows.net:1433;database=demo;encrypt=true;trustServerCertificate=false;hostNameInCertificate=*.database.windows.net;loginTimeout=30;
user=demo@$AZ_DATABASE_NAME
password=$AZ_SQL_SERVER_PASSWORD
Replace the two $AZ_DATABASE_NAME variables with the value that you configured at the beginning of this
article.
Replace the $AZ_SQL_SERVER_PASSWORD variable with the value that you configured at the beginning of this
article.
Create an SQL file to generate the database schema
We will use a src/main/resources/schema.sql file in order to create a database schema. Create that file with the
following content:
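A minimal sketch of schema.sql is shown below; the column names and types are inferred from the Todo domain class used later in this article, so adjust them if your model differs.
-- schema.sql: recreate the todo table used by the sample (types inferred from the Todo class).
DROP TABLE IF EXISTS todo;
CREATE TABLE todo (
    id BIGINT PRIMARY KEY,
    description NVARCHAR(255),
    details NVARCHAR(4096),
    done BIT
);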
import java.sql.*;
import java.util.*;
import java.util.logging.Logger;
static {
System.setProperty("java.util.logging.SimpleFormatter.format", "[%4$-7s] %5$s %n");
log =Logger.getLogger(DemoApplication.class.getName());
}
properties.load(DemoApplication.class.getClassLoader().getResourceAsStream("application.properties"));
/*
Todo todo = new Todo(1L, "configuration", "congratulations, you have set up JDBC correctly!", true);
insertData(todo, connection);
todo = readData(connection);
todo.setDetails("congratulations, you have updated data!");
updateData(todo, connection);
deleteData(todo, connection);
*/
This Java code will use the application.properties and the schema.sql files that we created earlier, in order to
connect to the SQL Server database and create a schema that will store our data.
In this file, you can see that we commented out the method calls that insert, read, update, and delete data: we will
code those methods in the rest of this article, and you will be able to uncomment them one after another.
NOTE
The database credentials are stored in the user and password properties of the application.properties file. Those
credentials are used when executing DriverManager.getConnection(properties.getProperty("url"), properties); ,
as the properties file is passed as an argument.
You can now execute this main class with your favorite tool:
Using your IDE, you should be able to right-click on the DemoApplication class and execute it.
Using Maven, you can run the application by executing:
mvn exec:java -Dexec.mainClass="com.example.demo.DemoApplication" .
The application should connect to the Azure SQL Database, create a database schema, and then close the
connection, as you should see in the console logs:
public Todo() {
}
@Override
public String toString() {
return "Todo{" +
"id=" + id +
", description='" + description + '\'' +
", details='" + details + '\'' +
", done=" + done +
'}';
}
}
This class is a domain model mapped to the todo table that you created when executing the schema.sql script.
Insert data into Azure SQL database
In the src/main/java/DemoApplication.java file, after the main method, add the following method to insert data
into the database:
insertStatement.setLong(1, todo.getId());
insertStatement.setString(2, todo.getDescription());
insertStatement.setString(3, todo.getDetails());
insertStatement.setBoolean(4, todo.isDone());
insertStatement.executeUpdate();
}
You can now uncomment the two following lines in the main method:
Todo todo = new Todo(1L, "configuration", "congratulations, you have set up JDBC correctly!", true);
insertData(todo, connection);
Executing the main class should now produce the following output:
You can now uncomment the following line in the main method:
todo = readData(connection);
Executing the main class should now produce the following output:
[INFO ] Loading application properties
[INFO ] Connecting to the database
[INFO ] Database connection test: demo
[INFO ] Create database schema
[INFO ] Insert data
[INFO ] Read data
[INFO ] Data read from the database: Todo{id=1, description='configuration', details='congratulations, you
have set up JDBC correctly!', done=true}
[INFO ] Closing database connection
updateStatement.setString(1, todo.getDescription());
updateStatement.setString(2, todo.getDetails());
updateStatement.setBoolean(3, todo.isDone());
updateStatement.setLong(4, todo.getId());
updateStatement.executeUpdate();
readData(connection);
}
You can now uncomment the two following lines in the main method:
Executing the main class should now produce the following output:
You can now uncomment the following line in the main method:
deleteData(todo, connection);
Executing the main class should now produce the following output:
az group delete \
--name $AZ_RESOURCE_GROUP \
--yes
Next steps
Design your first database in Azure SQL Database
Microsoft JDBC Driver for SQL Server
Report issues/ask questions
Set up a local development environment for Azure
SQL Database
Prerequisites
Before you configure the local development environment for Azure SQL Database, make sure you have met the
following hardware and software requirements:
Software requirements:
Currently supported on Windows 10 or later release, macOS Mojave or later release, and Linux
(preferably Ubuntu 18.04 or later release)
Azure Data Studio or VSCode
Minimum hardware requirements:
8 GB RAM
10 GB available disk space
Install extension
There are different extensions to install depending on your preferred development tool.
EXTENSION | VISUAL STUDIO CODE | AZURE DATA STUDIO
The mssql extension for Visual Studio Code | Install the mssql extension. | Installation is not necessary as the functionality is natively available.
SQL Database Projects extension (Preview) | Installation is not necessary as the SQL Database Projects extension is bundled with the mssql extension and is automatically installed and updated when the mssql extension is installed or updated. | Install the SQL Database Projects extension.
If you are using VSCode, install the mssql extension for Visual Studio Code.
The mssql extension enables you to connect and run queries and test scripts against a database. The database
may be running in the Azure SQL Database emulator locally, or it may be a database in the global Azure SQL
Database service.
To install the extension:
1. In VSCode, select View > Command Palette, or press Ctrl+Shift+P, or press F1 to open the
Command Palette.
2. In the Command Palette , select Extensions: Install Extensions from the dropdown.
3. In the Extensions pane, type mssql .
4. Select the SQL Server (mssql) extension, and then select Install.
5. After the installation completes, select Reload to enable the extension.
Next steps
Learn more about the local development experience for Azure SQL Database:
What is the local development experience for Azure SQL Database?
Create a database project for a local Azure SQL Database development environment
Publish a database project for Azure SQL Database to the local emulator
Quickstart: Create a local development environment for Azure SQL Database
Introducing the Azure SQL Database emulator
Create a project for a local Azure SQL Database
development environment
Prerequisites
Before creating or opening a SQL Database project, follow the steps in Set up a local development environment
for Azure SQL Database to configure your environment.
Next steps
Learn more about the local development experience for Azure SQL Database:
What is the local development experience for Azure SQL Database?
Set up a local development environment for Azure SQL Database
Quickstart: Create a local development environment for Azure SQL Database
Publish a database project for Azure SQL Database to the local emulator
Introducing the Azure SQL Database emulator
Publish a Database Project for Azure SQL Database
to the local emulator
Overview
The Azure SQL Database local development experience allows users to source control Database Projects and
work offline when needed. The local development experience uses the Azure SQL Database emulator, a
containerized database with close fidelity to the Azure SQL Database public service, as the runtime host for
Database Projects that can be published and tested locally as part of the developer's inner loop. This article
describes how to publish a Database Project to the local emulator.
Prerequisites
Before you can publish a Database Project to the local emulator, you must:
Follow the steps in Set up a local development environment for Azure SQL Database to configure your
environment.
Create a Database Project by following the steps in Create a SQL Database Project for a local Azure SQL
Database development environment.
Next steps
Learn more about the local development experience for Azure SQL Database:
What is the local development experience for Azure SQL Database?
Set up a local development environment for Azure SQL Database
Create a Database Project for a local Azure SQL Database development environment
Quickstart: Create a local development environment for Azure SQL Database
Introducing the Azure SQL Database emulator
Detectable types of query performance bottlenecks
in Azure SQL Database
SELECT *
FROM t1 JOIN t2 ON t1.c1 = t2.c1
WHERE t1.c1 = @p1 AND t2.c2 = '961C3970-0E54-4E8E-82B6-5545BE897F8F';
In this example, t1.c1 takes @p1, but t2.c2 continues to take the GUID as a literal. In this case, if you change the
value for c2, the query is treated as a different query, and a new compilation will happen. To reduce
compilations in this example, you would also parameterize the GUID.
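One way to parameterize both values, shown here only as a sketch (the column types are assumed), is to submit the statement through sp_executesql:
-- Both the join key and the former GUID literal become parameters, so the query text
-- (and therefore the query hash) stays stable across executions.
EXEC sp_executesql
    N'SELECT * FROM t1 JOIN t2 ON t1.c1 = t2.c1 WHERE t1.c1 = @p1 AND t2.c2 = @p2;',
    N'@p1 int, @p2 uniqueidentifier',
    @p1 = 1,
    @p2 = '961C3970-0E54-4E8E-82B6-5545BE897F8F';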
The following query shows the count of queries by query hash to determine whether a query is properly
parameterized:
SELECT TOP 10
q.query_hash
, count (distinct p.query_id ) AS number_of_distinct_query_ids
, min(qt.query_sql_text) AS sampled_query_text
FROM sys.query_store_query_text AS qt
JOIN sys.query_store_query AS q
ON qt.query_text_id = q.query_text_id
JOIN sys.query_store_plan AS p
ON q.query_id = p.query_id
JOIN sys.query_store_runtime_stats AS rs
ON rs.plan_id = p.plan_id
JOIN sys.query_store_runtime_stats_interval AS rsi
ON rsi.runtime_stats_interval_id = rs.runtime_stats_interval_id
WHERE
rsi.start_time >= DATEADD(hour, -2, GETUTCDATE())
AND query_parameterization_type_desc IN ('User', 'None')
GROUP BY q.query_hash
ORDER BY count (distinct p.query_id) DESC;
Factors that affect query plan changes
A query execution plan recompilation might result in a generated query plan that differs from the original
cached plan. An existing original plan might be automatically recompiled for various reasons:
Changes to the schema referenced by the query
Data changes to the tables referenced by the query
Changes to query context options
A compiled plan might be ejected from the cache for various reasons, such as:
Instance restarts
Database-scoped configuration changes
Memory pressure
Explicit requests to clear the cache
If you use a RECOMPILE hint, a plan won't be cached.
A recompilation (or fresh compilation after cache eviction) can still result in the generation of a query execution
plan that's identical to the original. When the plan changes from the prior or original plan, these explanations
are likely:
Changed physical design : For example, newly created indexes more effectively cover the requirements
of a query. The new indexes might be used on a new compilation if the query optimizer decides that
using that new index is more optimal than using the data structure that was originally selected for the
first version of the query execution. Any physical changes to the referenced objects might result in a new
plan choice at compile time.
Ser ver resource differences : When a plan in one system differs from the plan in another system,
resource availability, such as the number of available processors, can influence which plan gets
generated. For example, if one system has more processors, a parallel plan might be chosen. For more
information on parallelism in Azure SQL Database, see Configure the max degree of parallelism
(MAXDOP) in Azure SQL Database.
Different statistics : The statistics associated with the referenced objects might have changed or might
be materially different from the original system's statistics. If the statistics change and a recompilation
happens, the query optimizer uses the statistics starting from when they changed. The revised statistics'
data distributions and frequencies might differ from those of the original compilation. These changes are
used to create cardinality estimates. (Cardinality estimates are the number of rows that are expected to
flow through the logical query tree.) Changes to cardinality estimates might lead you to choose different
physical operators and associated orders of operations. Even minor changes to statistics can result in a
changed query execution plan.
Changed database compatibility level or cardinality estimator version : Changes to the database
compatibility level can enable new strategies and features that might result in a different query execution
plan. Beyond the database compatibility level, a disabled or enabled trace flag 4199 or a changed state of
the database-scoped configuration QUERY_OPTIMIZER_HOTFIXES can also influence query execution
plan choices at compile time. Trace flags 9481 (force legacy CE) and 2312 (force default CE) also affect the
plan.
Waiting-related problems
Once you have ruled out a suboptimal plan and execution-related problems, the remaining performance
problems are generally caused by queries waiting on some resource. Waiting-related problems might be caused
by:
Blocking :
One query might hold the lock on objects in the database while others try to access the same objects. You
can identify blocking queries by using DMVs or Intelligent Insights. For more information, see Understand
and resolve Azure SQL Database blocking problems.
IO problems
Queries might be waiting for the pages to be written to the data or log files. In this case, check the
INSTANCE_LOG_RATE_GOVERNOR , WRITE_LOG , or PAGEIOLATCH_* wait statistics in the DMV. See using DMVs to
identify IO performance issues.
Tempdb problems
If the workload uses temporary tables or there are tempdb spills in the plans, the queries might have a
problem with tempdb throughput. To investigate further, review identify tempdb issues.
Memor y-related problems
If the workload doesn't have enough memory, the page life expectancy might drop, or the queries might
get less memory than they need. In some cases, built-in intelligence in Query Optimizer will fix memory-
related problems. See using DMVs to identify memory grant issues. For more information and sample
queries, see Troubleshoot out of memory errors with Azure SQL Database. If you encounter out of
memory errors, review sys.dm_os_out_of_memory_events.
Methods to show top wait categories
These methods are commonly used to show the top categories of wait types:
Use Intelligent Insights to identify queries with performance degradation due to increased waits
Use Query Store to find wait statistics for each query over time. In Query Store, wait types are combined into
wait categories. You can find the mapping of wait categories to wait types in sys.query_store_wait_stats.
Use sys.dm_db_wait_stats to return information about all the waits encountered by threads that executed
during a query operation. You can use this aggregated view to diagnose performance problems with Azure
SQL Database and also with specific queries and batches. Queries can be waiting on resources, queue waits,
or external waits.
Use sys.dm_os_waiting_tasks to return information about the queue of tasks that are waiting on some
resource.
In high-CPU scenarios, Query Store and wait statistics might not reflect CPU usage if:
High-CPU-consuming queries are still executing.
The high-CPU-consuming queries were running when a failover happened.
DMVs that track Query Store and wait statistics show results for only successfully completed and timed-out
queries. They don't show data for currently executing statements until the statements finish. Use the dynamic
management view sys.dm_exec_requests to track currently executing queries and the associated worker time.
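For example, a lightweight check of currently executing statements and their accumulated worker (CPU) time might look like this sketch:
-- Currently executing requests, excluding this session, ordered by CPU time consumed so far.
SELECT r.session_id, r.status, r.cpu_time AS worker_time_ms,
       r.total_elapsed_time, r.wait_type, t.text AS statement_text
FROM sys.dm_exec_requests AS r
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
WHERE r.session_id <> @@SPID
ORDER BY r.cpu_time DESC;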
TIP
Additional tools:
TigerToolbox waits and latches
TigerToolbox usp_whatsup
Next steps
Configure the max degree of parallelism (MAXDOP) in Azure SQL Database
Understand and resolve Azure SQL Database blocking problems in Azure SQL Database
Diagnose and troubleshoot high CPU on Azure SQL Database
SQL Database monitoring and tuning overview
Monitoring Microsoft Azure SQL Database performance using dynamic management views
Tune nonclustered indexes with missing index suggestions
Resource management in Azure SQL Database
Resource limits for single databases using the vCore purchasing model
Resource limits for elastic pools using the vCore purchasing model
Resource limits for single databases using the DTU purchasing model
Resource limits for elastic pools using the DTU purchasing model
Monitoring Microsoft Azure SQL Database
performance using dynamic management views
Permissions
In Azure SQL Database, depending on the compute size and deployment option, querying a DMV may require
either VIEW DATABASE STATE or VIEW SERVER STATE permission. The latter permission may be granted via
membership in the ##MS_ServerStateReader## server role.
To grant the VIEW DATABASE STATE permission to a specific database user, run the following query as an
example:
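A minimal form, where database_user is a placeholder for the database user name:
GRANT VIEW DATABASE STATE TO database_user;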
To grant membership in the ##MS_ServerStateReader## server role to a login for the logical server in Azure,
connect to the master database and then run the following query as an example:
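A minimal form, where login_name is a placeholder for a login that exists in the master database:
ALTER SERVER ROLE ##MS_ServerStateReader## ADD MEMBER login_name;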
In an instance of SQL Server and in Azure SQL Managed Instance, dynamic management views return server
state information. In Azure SQL Database, they return information regarding your current logical database only.
Once you identify the problematic queries, it's time to tune those queries to reduce CPU utilization. If you don't
have time to tune the queries, you may also choose to upgrade the service level objective (SLO) of the database
to work around the issue.
For more information about handling CPU performance problems in Azure SQL Database, see Diagnose and
troubleshoot high CPU on Azure SQL Database.
For data file IO issues (including PAGEIOLATCH_SH, PAGEIOLATCH_EX, PAGEIOLATCH_UP): if the wait type name
has IO in it, it points to an IO issue. If there is no IO in the page latch wait name, it points to a different
type of problem (for example, tempdb contention).
WRITE_LOG
SELECT TOP 10
CONVERT(VARCHAR(30), GETDATE(), 121) AS runtime,
r.session_id,
r.blocking_session_id,
r.cpu_time,
r.total_elapsed_time,
r.reads,
r.writes,
r.logical_reads,
r.row_count,
wait_time,
wait_type,
r.command,
OBJECT_NAME(txt.objectid, txt.dbid) 'Object_Name',
TRIM(REPLACE(
REPLACE(
REPLACE(
SUBSTRING(
SUBSTRING(
text,
(r.statement_start_offset / 2) + 1,
((CASE r.statement_end_offset
WHEN -1 THEN
DATALENGTH(text)
ELSE
r.statement_end_offset
END - r.statement_start_offset
) / 2
) + 1
),
1,
1000
),
CHAR(10),
' '
),
CHAR(13),
' '
)
) stmt_text,
mg.dop, --Degree of parallelism
mg.request_time, --Date and time when this query requested the
memory grant.
mg.grant_time, --NULL means memory has not been granted
mg.requested_memory_kb / 1024.0 requested_memory_mb, --Total requested amount of memory in megabytes
mg.granted_memory_kb / 1024.0 AS granted_memory_mb, --Total amount of memory actually granted in
megabytes. NULL if not granted
mg.required_memory_kb / 1024.0 AS required_memory_mb, --Minimum memory required to run this query in
megabytes.
max_used_memory_kb / 1024.0 AS max_used_memory_mb,
mg.query_cost, --Estimated query cost.
mg.timeout_sec, --Time-out in seconds before this query gives
up the memory grant request.
mg.resource_semaphore_id, --Non-unique ID of the resource semaphore on
which this query is waiting.
mg.wait_time_ms, --Wait time in milliseconds. NULL if the memory
is already granted.
CASE mg.is_next_candidate --Is this process the next candidate for a memory grant
WHEN 1 THEN
'Yes'
WHEN 0 THEN
'No'
ELSE
'Memory has been granted'
END AS 'Next Candidate for Memory Grant',
qp.query_plan
FROM sys.dm_exec_requests AS r
JOIN sys.dm_exec_query_memory_grants AS mg
ON r.session_id = mg.session_id
AND r.request_id = mg.request_id
CROSS APPLY sys.dm_exec_sql_text(mg.sql_handle) AS txt
CROSS APPLY sys.dm_exec_query_plan(r.plan_handle) AS qp
ORDER BY mg.granted_memory_kb DESC;
The following query returns the size of individual objects (in megabytes) in your database:
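One way to write such a query, sketched here with sys.dm_db_partition_stats (reserved 8-KB pages converted to megabytes):
-- Approximate size per object, based on reserved pages.
SELECT o.name AS object_name,
       SUM(ps.reserved_page_count) * 8.0 / 1024 AS size_mb
FROM sys.dm_db_partition_stats AS ps
JOIN sys.objects AS o
    ON ps.object_id = o.object_id
GROUP BY o.name
ORDER BY size_mb DESC;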
Monitoring connections
You can use the sys.dm_exec_connections view to retrieve information about the connections established to a
specific database or elastic pool and the details of each connection. In addition, the sys.dm_exec_sessions view is
helpful when retrieving information about all active user connections and internal tasks.
The following query retrieves information on the current connection:
SELECT
c.session_id, c.net_transport, c.encrypt_option,
c.auth_scheme, s.host_name, s.program_name,
s.client_interface_name, s.login_name, s.nt_domain,
s.nt_user_name, s.original_login_name, c.connect_time,
s.login_time
FROM sys.dm_exec_connections AS c
JOIN sys.dm_exec_sessions AS s
ON c.session_id = s.session_id
WHERE c.session_id = @@SPID;
NOTE
When executing the sys.dm_exec_requests and sys.dm_exec_sessions views, if you have VIEW DATABASE STATE
permission on the database, you see all executing sessions on the database; otherwise, you see only the current session.
SELECT
AVG(avg_cpu_percent) AS 'Average CPU use in percent',
MAX(avg_cpu_percent) AS 'Maximum CPU use in percent',
AVG(avg_data_io_percent) AS 'Average data IO in percent',
MAX(avg_data_io_percent) AS 'Maximum data IO in percent',
AVG(avg_log_write_percent) AS 'Average log write use in percent',
MAX(avg_log_write_percent) AS 'Maximum log write use in percent',
AVG(avg_memory_usage_percent) AS 'Average memory use in percent',
MAX(avg_memory_usage_percent) AS 'Maximum memory use in percent'
FROM sys.dm_db_resource_stats;
From the data, this database currently has a peak CPU load of just over 50 percent CPU use relative to the P2
compute size (midday on Tuesday). If CPU is the dominant factor in the application's resource profile, then you
might decide that P2 is the right compute size to guarantee that the workload always fits. If you expect an
application to grow over time, it's a good idea to have an extra resource buffer so that the application doesn't
ever reach the performance-level limit. If you increase the compute size, you can help avoid customer-visible
errors that might occur when a database doesn't have enough power to process requests effectively, especially
in latency-sensitive environments. An example is a database that supports an application that paints webpages
based on the results of database calls.
Other application types might interpret the same graph differently. For example, if an application tries to process
payroll data each day and has the same chart, this kind of "batch job" model might do fine at a P1 compute size.
The P1 compute size has 100 DTUs compared to 200 DTUs at the P2 compute size. The P1 compute size
provides half the performance of the P2 compute size. So, 50 percent of CPU use in P2 equals 100 percent CPU
use in P1. If the application does not have timeouts, it might not matter if a job takes 2 hours or 2.5 hours to
finish, if it gets done today. An application in this category probably can use a P1 compute size. You can take
advantage of the fact that there are periods of time during the day when resource use is lower, so that any "big
peak" might spill over into one of the troughs later in the day. The P1 compute size might be good for that kind
of application (and save money), as long as the jobs can finish on time each day.
The database engine exposes consumed resource information for each active database in the
sys.resource_stats view of the master database in each server. The data in the table is aggregated at 5-minute intervals. With the Basic, Standard, and Premium service tiers, the data can take more than 5 minutes to appear in the table, so this data is more useful for historical analysis than for near-real-time analysis. Query
the sys.resource_stats view to see the recent history of a database and to validate whether the reservation you
chose delivered the performance you want when needed.
NOTE
On Azure SQL Database, you must be connected to the master database to query sys.resource_stats in the
following examples.
This example shows you how the data in this view is exposed:
SELECT TOP 10 *
FROM sys.resource_stats
WHERE database_name = 'resource1'
ORDER BY start_time DESC;
The next example shows you different ways that you can use the sys.resource_stats catalog view to get
information about how your database uses resources:
1. To look at the past week's resource use for the database userdb1, you can run this query:
SELECT *
FROM sys.resource_stats
WHERE database_name = 'userdb1' AND
start_time > DATEADD(day, -7, GETDATE())
ORDER BY start_time DESC;
2. To evaluate how well your workload fits the compute size, you need to drill down into each aspect of the
resource metrics: CPU, reads, writes, number of workers, and number of sessions. Here's a revised query
using sys.resource_stats to report the average and maximum values of these resource metrics:
SELECT
avg(avg_cpu_percent) AS 'Average CPU use in percent',
max(avg_cpu_percent) AS 'Maximum CPU use in percent',
avg(avg_data_io_percent) AS 'Average physical data IO use in percent',
max(avg_data_io_percent) AS 'Maximum physical data IO use in percent',
avg(avg_log_write_percent) AS 'Average log write use in percent',
max(avg_log_write_percent) AS 'Maximum log write use in percent',
avg(max_session_percent) AS 'Average % of sessions',
max(max_session_percent) AS 'Maximum % of sessions',
avg(max_worker_percent) AS 'Average % of workers',
max(max_worker_percent) AS 'Maximum % of workers'
FROM sys.resource_stats
WHERE database_name = 'userdb1' AND start_time > DATEADD(day, -7, GETDATE());
3. With this information about the average and maximum values of each resource metric, you can assess
how well your workload fits into the compute size you chose. Usually, average values from
sys.resource_stats give you a good baseline to compare against the target compute size, and should be your primary
yardstick. For example, you might be using the Standard service tier with the S2 compute size.
The average use percentages for CPU and IO reads and writes are below 40 percent, the average number
of workers is below 50, and the average number of sessions is below 200. Your workload might fit into
the S1 compute size. It's easy to see whether your database fits in the worker and session limits. To see
whether a database fits into a lower compute size with regard to CPU, reads, and writes, divide the DTU
number of the lower compute size by the DTU number of your current compute size, and then multiply
the result by 100:
S1 DTU / S2 DTU * 100 = 20 / 50 * 100 = 40
The result is the relative performance difference between the two compute sizes in percentage. If your
resource use doesn't exceed this amount, your workload might fit into the lower compute size. However,
you need to look at all ranges of resource use values, and determine, by percentage, how often your
database workload would fit into the lower compute size. The following query outputs the fit percentage
per resource dimension, based on the threshold of 40 percent that we calculated in this example:
SELECT
100*((COUNT(database_name) - SUM(CASE WHEN avg_cpu_percent >= 40 THEN 1 ELSE 0 END) * 1.0) /
COUNT(database_name)) AS 'CPU Fit Percent',
100*((COUNT(database_name) - SUM(CASE WHEN avg_log_write_percent >= 40 THEN 1 ELSE 0 END) * 1.0)
/ COUNT(database_name)) AS 'Log Write Fit Percent',
100*((COUNT(database_name) - SUM(CASE WHEN avg_data_io_percent >= 40 THEN 1 ELSE 0 END) * 1.0) /
COUNT(database_name)) AS 'Physical Data IO Fit Percent'
FROM sys.resource_stats
WHERE database_name = 'sample' AND start_time > DATEADD(day, -7, GETDATE());
Based on your database service tier, you can decide whether your workload fits into the lower compute
size. If your database workload objective is 99.9 percent and the preceding query returns values greater
than 99.9 percent for all three resource dimensions, your workload likely fits into the lower compute size.
Looking at the fit percentage also gives you insight into whether you should move to the next higher
compute size to meet your objective. For example, the CPU usage for a sample database over the past
week:
AVERAGE CPU PERCENT    MAXIMUM CPU PERCENT
24.5    100.00
The average CPU is about a quarter of the limit of the compute size, which would fit well into the
compute size of the database. But, the maximum value shows that the database reaches the limit of the
compute size. Do you need to move to the next higher compute size? Look at how many times your
workload reaches 100 percent, and then compare it to your database workload objective.
SELECT
100*((COUNT(database_name) - SUM(CASE WHEN avg_cpu_percent >= 100 THEN 1 ELSE 0 END) * 1.0) /
COUNT(database_name)) AS 'CPU Fit Percent',
100*((COUNT(database_name) - SUM(CASE WHEN avg_log_write_percent >= 100 THEN 1 ELSE 0 END) *
1.0) / COUNT(database_name)) AS 'Log Write Fit Percent',
100*((COUNT(database_name) - SUM(CASE WHEN avg_data_io_percent >= 100 THEN 1 ELSE 0 END) * 1.0)
/ COUNT(database_name)) AS 'Physical Data IO Fit Percent'
FROM sys.resource_stats
WHERE database_name = 'sample' AND start_time > DATEADD(day, -7, GETDATE());
If this query returns a value less than 99.9 percent for any of the three resource dimensions, consider
either moving to the next higher compute size or using application-tuning techniques to reduce the load on
the database.
4. In this exercise, also consider your projected future workload growth.
For elastic pools, you can monitor individual databases in the pool with the techniques described in this section.
But you can also monitor the pool as a whole. For information, see Monitor and manage an elastic pool.
Maximum concurrent requests
To see the current number of concurrent requests, run this Transact-SQL query on your database:
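A minimal form of such a query, for example, simply counts the rows returned by the sys.dm_exec_requests view:
SELECT COUNT(*) AS [Concurrent_Requests]
FROM sys.dm_exec_requests;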
To analyze the workload of a database, modify this query to filter on the specific database you want to analyze.
For example, if you have a database named MyDatabase , this Transact-SQL query returns the count of concurrent
requests in that database:
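A sketch of that filtered form, joining to sys.databases on the database name used in this example, might look like this:
SELECT COUNT(*) AS [Concurrent_Requests]
FROM sys.dm_exec_requests AS r
INNER JOIN sys.databases AS d
    ON r.database_id = d.database_id
WHERE d.name = 'MyDatabase';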
This is just a snapshot at a single point in time. To get a better understanding of your workload and concurrent
request requirements, you'll need to collect many samples over time.
Maximum concurrent logins
You can analyze your user and application patterns to get an idea of the frequency of logins. You also can run
real-world loads in a test environment to make sure that you're not hitting this or other limits we discuss in this
article. There isn't a single query or dynamic management view (DMV) that can show you concurrent login
counts or history.
If multiple clients use the same connection string, the service authenticates each login. If 10 users
simultaneously connect to a database by using the same username and password, there would be 10 concurrent
logins. This limit applies only to the duration of the login and authentication. If the same 10 users connect to the
database sequentially, the number of concurrent logins would never be greater than 1.
NOTE
Currently, this limit does not apply to databases in elastic pools.
Maximum sessions
To see the number of current active sessions, run this Transact-SQL query on your database:
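For example, a simple count over the sys.dm_exec_connections view (one possible form of this query) returns the number of sessions with an established connection:
SELECT COUNT(*) AS [Sessions]
FROM sys.dm_exec_connections;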
If you're analyzing a SQL Server workload, modify the query to focus on a specific database. This query helps
you determine possible session needs for the database if you are considering moving it to Azure.
See also
Dynamic Management Views and Functions (Transact-SQL)
System Dynamic Management Views
Next steps
Introduction to Azure SQL Database and Azure SQL Managed Instance
Diagnose and troubleshoot high CPU on Azure SQL Database
Tune applications and databases for performance in Azure SQL Database and Azure SQL Managed Instance
Understand and resolve Azure SQL Database blocking problems
Analyze and prevent deadlocks in Azure SQL Database
Monitor Azure SQL Database with Azure Monitor
NOTE
Azure SQL Analytics (preview) is an integration with Azure Monitor, where many monitoring solutions are no longer in
active development. For more monitoring options, see Monitoring and performance tuning in Azure SQL Database and
Azure SQL Managed Instance.
Monitoring data
Azure SQL Database collects the same kinds of monitoring data as other Azure resources that are described in
Monitoring data from Azure resources.
See Monitoring Azure SQL Database with Azure Monitor reference for detailed information on the metrics and
logs created by Azure SQL Database.
Analyzing metrics
You can analyze metrics for Azure SQL Database alongside metrics from other Azure services using the metrics
explorer by opening Metrics from the Monitor menu in the Azure portal. See Getting started with Azure
Metrics Explorer for details on using this tool.
For a list of the platform metrics collected for Azure SQL Database, see Monitoring Azure SQL Database data
reference metrics.
For reference, you can see a list of all resource metrics supported in Azure Monitor.
Analyzing logs
Data in Azure Monitor Logs is stored in tables where each table has its own set of unique properties. This data is
optionally collected via Diagnostic settings.
All resource logs in Azure Monitor have the same fields followed by service-specific fields. The common schema
is outlined in Azure Monitor resource log schema.
The Activity log is a type of platform log in Azure that provides insight into subscription-level events. You can
view it independently or route it to Azure Monitor Logs, where you can do much more complex queries using
Log Analytics.
For a list of the types of resource logs collected for Azure SQL Database, see Resource logs for Azure SQL
Database.
For a list of the tables used by Azure Monitor Logs and queryable by Log Analytics, see Azure Monitor Logs
tables for Azure SQL Database.
Sample Kusto queries
IMPORTANT
Selecting Logs from the Monitoring menu of a database opens Log Analytics with the query scope set to the current
database. This means that log queries will only include data from that resource. If you want to run a query that includes
data from other databases or data from other Azure services, select Logs from the Azure Monitor menu. See Log
query scope and time range in Azure Monitor Log Analytics for details.
NOTE
Occasionally, it might take up to 15 minutes between when an event is emitted and when it appears in a Log Analytics
workspace.
Use the following queries to monitor your database. You may see different options available depending on your
purchase model.
Example A: Log_write_percent from the past hour
AzureMetrics
| where ResourceProvider == "MICROSOFT.SQL"
| where TimeGenerated >= ago(60min)
| where MetricName in ('log_write_percent')
| parse _ResourceId with * "/microsoft.sql/servers/" Resource
| summarize Log_Maximum_last60mins = max(Maximum), Log_Minimum_last60mins = min(Minimum),
Log_Average_last60mins = avg(Average) by Resource, MetricName
Example B: SQL Server wait types from the past 15 minutes
AzureDiagnostics
| where ResourceProvider == "MICROSOFT.SQL"
| where TimeGenerated >= ago(15min)
| parse _ResourceId with * "/microsoft.sql/servers/" LogicalServerName "/databases/" DatabaseName
| summarize Total_count_15mins = sum(delta_waiting_tasks_count_d) by LogicalServerName, DatabaseName,
wait_type_s
Example C: Deadlocks from the past hour
AzureMetrics
| where ResourceProvider == "MICROSOFT.SQL"
| where TimeGenerated >= ago(60min)
| where MetricName in ('deadlock')
| parse _ResourceId with * "/microsoft.sql/servers/" Resource
| summarize Deadlock_max_60Mins = max(Maximum) by Resource, MetricName
Example D: CPU percent from the past hour
AzureMetrics
| where ResourceProvider == "MICROSOFT.SQL"
| where TimeGenerated >= ago(60min)
| where MetricName in ('cpu_percent')
| parse _ResourceId with * "/microsoft.sql/servers/" Resource
| summarize CPU_Maximum_last60mins = max(Maximum), CPU_Minimum_last60mins = min(Minimum),
CPU_Average_last60mins = avg(Average) by Resource, MetricName
Alerts
Azure Monitor alerts proactively notify you when important conditions are found in your monitoring data.
These metrics in Azure Monitor are always collected. They allow you to identify and address issues in your
databases or elastic pools before your customers notice them. You can set alerts on metrics, logs, and the
activity log.
If you are creating or running an application in Azure, Azure Monitor Application Insights may offer additional
types of alerts.
You can also configure alerts with the Azure CLI or PowerShell. For example, see Use PowerShell to monitor and
scale a single database in Azure SQL Database.
The following table lists common and recommended alert rules for Azure SQL Database. You may see different
options available depending on your purchase model.
* Alerting on deadlocks may be unnecessary and noisy in some applications where deadlocks are expected and
properly handled.
Next steps
See Monitoring Azure SQL Database data reference for a reference of the metrics, logs, and other important
values created by Azure SQL Database.
See Monitoring Azure resources with Azure Monitor for details on monitoring Azure resources.
Monitor Azure SQL Managed Instance with Azure Monitor
Monitoring Azure SQL Database data reference
Metrics
For more on using Azure Monitor SQL Insights (preview) for all products in the Azure SQL family, see Monitor
your SQL deployments with SQL Insights (preview).
For data specific to Azure SQL Database, see Data for Azure SQL Database.
For a complete list of metrics, see:
Microsoft.Sql/servers/databases
Microsoft.Sql/servers/elasticPools
Resource logs
This section lists the types of resource logs you can collect for Azure SQL Database.
For reference, see a list of all resource logs category types supported in Azure Monitor.
For a reference of resource log types collected for Azure SQL Database, see Streaming export of Azure SQL
Database Diagnostic telemetry for export
RESOURCE TYPE    NOTES
AzureActivity    Entries from the Azure Activity log that provides insight into any subscription-level or management group level events that have occurred in Azure.
Next steps
See Monitoring Azure SQL Database with Azure Monitor for a description of monitoring Azure SQL
Database.
See Monitoring Azure resources with Azure Monitor for details on monitoring Azure resources.
Create and manage servers and single databases in
Azure SQL Database
You can create and manage servers and single databases in Azure SQL Database using the Azure portal,
PowerShell, the Azure CLI, REST API, and Transact-SQL.
TIP
For an Azure portal quickstart, see Create a database in SQL Database in the Azure portal.
PowerShell
NOTE
This article uses the Azure Az PowerShell module, which is the recommended PowerShell module for interacting with
Azure. To get started with the Az PowerShell module, see Install Azure PowerShell. To learn how to migrate to the Az
PowerShell module, see Migrate Azure PowerShell from AzureRM to Az.
IMPORTANT
The PowerShell Azure Resource Manager module is still supported by Azure SQL Database, but all future development is
for the Az.Sql module. For these cmdlets, see AzureRM.Sql. The arguments for the commands in the Az module and in the
AzureRm modules are substantially identical.
To create and manage servers, single and pooled databases, and server-level firewalls with Azure PowerShell,
use the following PowerShell cmdlets. If you need to install or upgrade PowerShell, see Install Azure PowerShell
module.
TIP
For PowerShell example scripts, see Use PowerShell to create a database in SQL Database and configure a server-level
firewall rule and Monitor and scale a database in SQL Database using PowerShell.
CMDLET    DESCRIPTION
Azure CLI
To create and manage the servers, databases, and firewalls with Azure CLI, use the following Azure CLI
commands. Use the Cloud Shell to run Azure CLI in your browser, or install it on macOS, Linux, or Windows. For
creating and managing elastic pools, see Elastic pools.
TIP
For an Azure CLI quickstart, see Create a single Azure SQL Database using Azure CLI. For Azure CLI example scripts, see
Use CLI to create a database in Azure SQL Database and configure a SQL Database firewall rule and Use CLI to monitor
and scale a database in Azure SQL Database.
CMDLET    DESCRIPTION
az sql db list    Lists all databases and data warehouses in a server, or all databases in an elastic pool
Transact-SQL (T-SQL)
To create and manage the servers, databases, and firewalls with Transact-SQL, use the following T-SQL
commands. You can issue these commands using the Azure portal, SQL Server Management Studio, Visual
Studio Code, or any other program that can connect to a server in SQL Database and pass Transact-SQL
commands. For managing elastic pools, see Elastic pools.
TIP
For a quickstart using SQL Server Management Studio on Microsoft Windows, see Azure SQL Database: Use SQL Server
Management Studio to connect and query data. For a quickstart using Visual Studio Code on the macOS, Linux, or
Windows, see Azure SQL Database: Use Visual Studio Code to connect and query data.
IMPORTANT
You cannot create or delete a server using Transact-SQL.
COMMAND    DESCRIPTION
sys.resource_stats    Returns CPU usage and storage data for a database in Azure SQL Database. The data is collected and aggregated within five-minute intervals.
REST API
To create and manage the servers, databases, and firewalls, use these REST API requests.
Next steps
To learn about migrating a SQL Server database to Azure, see Migrate to Azure SQL Database.
For information about supported features, see Features.
PowerShell for DNS Alias to Azure SQL Database
NOTE
This article has been updated to use either the Azure PowerShell Az module or Azure CLI. You can still use the AzureRM
module, which will continue to receive bug fixes until at least December 2020.
To learn more about the Az module and AzureRM compatibility, see Introducing the Azure PowerShell Az module. For
installation instructions, see Install Azure PowerShell or Install Azure CLI.
Prerequisites
If you want to run the demo PowerShell script given in this article, the following prerequisites apply:
An Azure subscription and account. For a free trial, see Azure trials
Two servers
Example
The following code example starts by assigning literal values to several variables.
To run the code, edit the placeholder values to match real values in your system.
PowerShell
Azure CLI
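The sign-in step below assumes that $subscriptionName (and the other variables used later in the script) has already been assigned; an illustrative placeholder assignment might look like this:
$subscriptionName = '<subscriptionName>';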
# login to Azure
Connect-AzAccount -SubscriptionName $subscriptionName;
$subscriptionId = Get-AzSubscription -SubscriptionName $subscriptionName;
Next steps
For a full explanation of the DNS alias feature for SQL Database, see DNS alias for Azure SQL Database.
Manage file space for databases in Azure SQL
Database
NOTE
This article does not apply to Azure SQL Managed Instance.
Overview
With Azure SQL Database, there are workload patterns where the allocation of underlying data files for
databases can become larger than the amount of used data pages. This condition can occur when space used
increases and data is subsequently deleted. The reason is because file space allocated is not automatically
reclaimed when data is deleted.
Monitoring file space usage and shrinking data files may be necessary in the following scenarios:
Allow data growth in an elastic pool when the file space allocated for its databases reaches the pool max size.
Allow decreasing the max size of a single database or elastic pool.
Allow changing a single database or elastic pool to a different service tier or performance tier with a lower
max size.
NOTE
Shrink operations should not be considered a regular maintenance operation. Data and log files that grow due to regular,
recurring business operations do not require shrink operations.
DATABASE QUANTITY    DEFINITION    COMMENTS
Data space used    The amount of space used to store database data.    Generally, space used increases (decreases) on inserts (deletes). In some cases, the space used does not change on inserts or deletes depending on the amount and pattern of data involved in the operation and any fragmentation. For example, deleting one row from every data page does not necessarily decrease the space used.
Data space allocated    The amount of formatted file space made available for storing database data.    The amount of space allocated grows automatically, but never decreases after deletes. This behavior ensures that future inserts are faster since space does not need to be reformatted.
Data space allocated but unused    The difference between the amount of data space allocated and data space used.    This quantity represents the maximum amount of free space that can be reclaimed by shrinking database data files.
Data max size    The maximum amount of space that can be used for storing database data.    The amount of data space allocated cannot grow beyond the data max size.
The following diagram illustrates the relationship between the different types of storage space for a database.
-- Connect to database
-- Database data space allocated in MB and database data space allocated unused in MB
SELECT SUM(size/128.0) AS DatabaseDataSpaceAllocatedInMB,
SUM(size/128.0 - CAST(FILEPROPERTY(name, 'SpaceUsed') AS int)/128.0) AS DatabaseDataSpaceAllocatedUnusedInMB
FROM sys.database_files
GROUP BY type_desc
HAVING type_desc = 'ROWS';
-- Connect to database
-- Database data max size in bytes
SELECT DATABASEPROPERTYEX('db1', 'MaxSizeInBytes') AS DatabaseDataMaxSizeInBytes;
ELASTIC POOL QUANTITY    DEFINITION    COMMENTS
Data space allocated but unused    The difference between the amount of data space allocated and data space used by all databases in the elastic pool.    This quantity represents the maximum amount of space allocated for the elastic pool that can be reclaimed by shrinking database data files.
Data max size    The maximum amount of data space that can be used by the elastic pool for all of its databases.    The space allocated for the elastic pool should not exceed the elastic pool max size. If this condition occurs, then space allocated that is unused can be reclaimed by shrinking database data files.
NOTE
The error message "The elastic pool has reached its storage limit" indicates that the database objects have been allocated
enough space to meet the elastic pool storage limit, but there may be unused space in the data space allocation. Consider
increasing the elastic pool's storage limit, or as a short-term solution, freeing up data space using the Reclaim unused
allocated space section below. You should also be aware of the potential negative performance impact of shrinking
database files; see the Index maintenance after shrink section below.
-- Connect to master
-- Elastic pool data space used in MB
SELECT TOP 1 avg_storage_percent / 100.0 * elastic_pool_storage_limit_mb AS ElasticPoolDataSpaceUsedInMB
FROM sys.elastic_pool_resource_stats
WHERE elastic_pool_name = 'ep1'
ORDER BY end_time DESC;
IMPORTANT
The PowerShell Azure Resource Manager module is still supported by Azure SQL Database, but all future development is
for the Az.Sql module. The AzureRM module will continue to receive bug fixes until at least December 2020. The
arguments for the commands in the Az module and in the AzureRm modules are substantially identical. For more about
their compatibility, see Introducing the new Azure PowerShell Az module.
The PowerShell script requires SQL Server PowerShell module – see Download PowerShell module to install.
$resourceGroupName = "<resourceGroupName>"
$serverName = "<serverName>"
$poolName = "<poolName>"
$userName = "<userName>"
$password = "<password>"
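# Assumed setup for the loop below: retrieve the databases in the elastic pool
# (requires the Az.Sql module) and initialize the results collection.
$databasesInPool = Get-AzSqlElasticPoolDatabase -ResourceGroupName $resourceGroupName `
    -ServerName $serverName -ElasticPoolName $poolName
$databaseStorageMetrics = @()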
# for each database in the elastic pool, get space allocated in MB and space allocated unused in MB
foreach ($database in $databasesInPool) {
$sqlCommand = "SELECT DB_NAME() as DatabaseName, `
SUM(size/128.0) AS DatabaseDataSpaceAllocatedInMB, `
SUM(size/128.0 - CAST(FILEPROPERTY(name, 'SpaceUsed') AS int)/128.0) AS
DatabaseDataSpaceAllocatedUnusedInMB `
FROM sys.database_files `
GROUP BY type_desc `
HAVING type_desc = 'ROWS'"
$serverFqdn = "tcp:" + $serverName + ".database.windows.net,1433"
$databaseStorageMetrics = $databaseStorageMetrics +
(Invoke-Sqlcmd -ServerInstance $serverFqdn -Database $database.DatabaseName `
-Username $userName -Password $password -Query $sqlCommand)
}
-- Connect to master
-- Elastic pools max size in MB
SELECT TOP 1 elastic_pool_storage_limit_mb AS ElasticPoolMaxSizeInMB
FROM sys.elastic_pool_resource_stats
WHERE elastic_pool_name = 'ep1'
ORDER BY end_time DESC;
TIP
It is not recommended to shrink data files if regular application workload will cause the files to grow to the same allocated
size again.
In Azure SQL Database, to shrink files you can use either DBCC SHRINKDATABASE or DBCC SHRINKFILE commands:
DBCC SHRINKDATABASE shrinks all data and log files in a database using a single command. The command
shrinks one data file at a time, which can take a long time for larger databases. It also shrinks the log file,
which is usually unnecessary because Azure SQL Database shrinks log files automatically as needed.
DBCC SHRINKFILE command supports more advanced scenarios:
It can target individual files as needed, rather than shrinking all files in the database.
Each DBCC SHRINKFILE command can run in parallel with other DBCC SHRINKFILE commands to shrink
multiple files at the same time and reduce the total time of shrink, at the expense of higher resource
usage and a higher chance of blocking user queries, if they are executing during shrink.
If the tail of the file does not contain data, it can reduce allocated file size much faster by specifying the
TRUNCATEONLY argument. This does not require data movement within the file.
For more information about these shrink commands, see DBCC SHRINKDATABASE and DBCC SHRINKFILE.
The following examples must be executed while connected to the target user database, not the master
database.
To use DBCC SHRINKDATABASE to shrink all data and log files in a given database:
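In its simplest form, and with the database name as a placeholder, that command might look like this:
DBCC SHRINKDATABASE (N'<database_name>');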
In Azure SQL Database, a database may have one or more data files, created automatically as data grows. To
determine file layout of your database, including the used and allocated size of each file, query the
sys.database_files catalog view using the following sample script:
-- Review file properties, including file_id and name values to reference in shrink commands
SELECT file_id,
name,
CAST(FILEPROPERTY(name, 'SpaceUsed') AS bigint) * 8 / 1024. AS space_used_mb,
CAST(size AS bigint) * 8 / 1024. AS space_allocated_mb,
CAST(max_size AS bigint) * 8 / 1024. AS max_file_size_mb
FROM sys.database_files
WHERE type_desc IN ('ROWS','LOG');
You can execute a shrink against one file only via the DBCC SHRINKFILE command, for example:
-- Shrink the database data file named 'data_0' by removing all unused space at the end of the file, if any.
DBCC SHRINKFILE ('data_0', TRUNCATEONLY);
GO
Be aware of the potential negative performance impact of shrinking database files; see the Index maintenance
after shrink section below.
Shrinking transaction log file
Unlike data files, Azure SQL Database automatically shrinks the transaction log file to avoid excessive space usage
that can lead to out-of-space errors. It is usually not necessary for customers to shrink the transaction log file.
In Premium and Business Critical service tiers, if the transaction log becomes large, it may significantly
contribute to local storage consumption toward the maximum local storage limit. If local storage consumption is
close to the limit, customers may choose to shrink transaction log using the DBCC SHRINKFILE command as
shown in the following example. This releases local storage as soon as the command completes, without waiting
for the periodic automatic shrink operation.
The following example should be executed while connected to the target user database, not the master database.
-- Shrink the database log file (always file_id 2) by removing all unused space at the end of the file, if any.
DBCC SHRINKFILE (2, TRUNCATEONLY);
Auto -shrink
As an alternative to shrinking data files manually, auto-shrink can be enabled for a database. However, auto
shrink can be less effective in reclaiming file space than DBCC SHRINKDATABASE and DBCC SHRINKFILE .
By default, auto-shrink is disabled, which is recommended for most databases. If it becomes necessary to enable
auto-shrink, it is recommended to disable it once space management goals have been achieved, instead of
keeping it enabled permanently. For more information, see Considerations for AUTO_SHRINK.
For example, auto-shrink can be helpful in the specific scenario where an elastic pool contains many databases
that experience significant growth and reduction in data file space used, causing the pool to approach its
maximum size limit. This is not a common scenario.
To enable auto-shrink, execute the following command while connected to your database (not the master
database).
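For example (the database name below is a placeholder; substitute your own):
ALTER DATABASE [<database_name>] SET AUTO_SHRINK ON;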
For more information about this command, see DATABASE SET options.
Index maintenance after shrink
After a shrink operation is completed against data files, indexes may become fragmented. This reduces their
performance optimization effectiveness for certain workloads, such as queries using large scans. If performance
degradation occurs after the shrink operation is complete, consider index maintenance to rebuild indexes. Keep
in mind that index rebuilds require free space in the database, and hence may cause the allocated space to
increase, counteracting the effect of shrink.
For more information about index maintenance, see Optimize index maintenance to improve query
performance and reduce resource consumption.
Once shrink has completed, you can execute this query again and compare the result to the initial baseline.
Truncate data files
It is recommended to first execute shrink for each data file with the TRUNCATEONLY parameter. This way, if there is
any allocated but unused space at the end of the file, it will be removed quickly and without any data movement.
The following sample command truncates data file with file_id 4:
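In its minimal form, that command might look like this:
DBCC SHRINKFILE (4, TRUNCATEONLY);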
Once this command is executed for every data file, you can rerun the space usage query to see the reduction in
allocated space, if any. You can also view allocated space for the database in Azure portal.
Evaluate index page density
If truncating data files did not result in a sufficient reduction in allocated space, you will need to shrink data files.
However, as an optional but recommended step, you should first determine average page density for indexes in
the database. For the same amount of data, shrink will complete faster if page density is high, because it will
have to move fewer pages. If page density is low for some indexes, consider performing maintenance on these
indexes to increase page density before shrinking data files. This will also let shrink achieve a deeper reduction
in allocated storage space.
To determine page density for all indexes in the database, use the following query. Page density is reported in
the avg_page_space_used_in_percent column.
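A query along the following lines can be used (a sketch based on the sys.dm_db_index_physical_stats function in SAMPLED mode, joined to sys.indexes for index names):
SELECT OBJECT_SCHEMA_NAME(ips.object_id) AS schema_name,
       OBJECT_NAME(ips.object_id) AS object_name,
       i.name AS index_name,
       i.type_desc AS index_type,
       ips.avg_page_space_used_in_percent,
       ips.avg_fragmentation_in_percent,
       ips.page_count,
       ips.alloc_unit_type_desc
FROM sys.dm_db_index_physical_stats(DB_ID(), default, default, default, 'SAMPLED') AS ips
INNER JOIN sys.indexes AS i
    ON ips.object_id = i.object_id
   AND ips.index_id = i.index_id
ORDER BY ips.page_count DESC;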
If there are indexes with high page count that have page density lower than 60-70%, consider rebuilding or
reorganizing these indexes before shrinking data files.
NOTE
For larger databases, the query to determine page density may take a long time (hours) to complete. Additionally,
rebuilding or reorganizing large indexes also requires substantial time and resource usage. There is a tradeoff between
spending extra time on increasing page density on one hand, and reducing shrink duration and achieving higher space
savings on another.
Following is a sample command to rebuild an index and increase its page density:
ALTER INDEX [index_name] ON [schema_name].[table_name] REBUILD WITH (FILLFACTOR = 100, MAXDOP = 8, ONLINE =
ON (WAIT_AT_LOW_PRIORITY (MAX_DURATION = 5 MINUTES, ABORT_AFTER_WAIT = NONE)), RESUMABLE = ON);
This command initiates an online and resumable index rebuild. This lets concurrent workloads continue using
the table while the rebuild is in progress, and lets you resume the rebuild if it gets interrupted for any reason.
However, this type of rebuild is slower than an offline rebuild, which blocks access to the table. If no other
workloads need to access the table during rebuild, set the ONLINE and RESUMABLE options to OFF and remove
the WAIT_AT_LOW_PRIORITY clause.
If there are multiple indexes with low page density, you may be able to rebuild them in parallel on multiple
database sessions to speed up the process. However, make sure that you are not approaching database resource
limits by doing so, and leave sufficient resource headroom for application workloads that may be running.
Monitor resource consumption (CPU, Data IO, Log IO) in Azure portal or using the sys.dm_db_resource_stats
view, and start additional parallel rebuilds only if resource utilization on each of these dimensions remains
substantially lower than 100%. If CPU, Data IO, or Log IO utilization is at 100%, you can scale up the database to
have more CPU cores and increase IO throughput. This may enable additional parallel rebuilds to complete the
process faster.
To learn more about index maintenance, see Optimize index maintenance to improve query performance and
reduce resource consumption.
Shrink multiple data files
As noted earlier, shrink with data movement is a long-running process. If the database has multiple data files,
you can speed up the process by shrinking multiple data files in parallel. You do this by opening multiple
database sessions, and using DBCC SHRINKFILE on each session with a different file_id value. Similar to
rebuilding indexes earlier, make sure you have sufficient resource headroom (CPU, Data IO, Log IO) before
starting each new parallel shrink command.
The following sample command shrinks data file with file_id 4, attempting to reduce its allocated size to 52000
MB by moving pages within the file:
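Such a command might look like this:
DBCC SHRINKFILE (4, 52000);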
If you want to reduce allocated space for the file to the minimum possible, execute the statement without
specifying the target size:
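Omitting the target size, the same command becomes:
DBCC SHRINKFILE (4);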
If a workload is running concurrently with shrink, it may start using the storage space freed by shrink before
shrink completes and truncates the file. In this case, shrink will not be able to reduce allocated space to the
specified target.
You can mitigate this by shrinking each file in smaller steps. This means that in the DBCC SHRINKFILE command,
you set the target that is slightly smaller than the current allocated space for the file, as seen in the results of
baseline space usage query. For example, if allocated space for file with file_id 4 is 200,000 MB, and you want to
shrink it to 100,000 MB, you can first set the target to 170,000 MB:
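For that first step, the command might look like this:
DBCC SHRINKFILE (4, 170000);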
Once this command completes, it will have truncated the file and reduced its allocated size to 170,000 MB. You
can then repeat this command, setting target first to 140,000 MB, then to 110,000 MB, etc., until the file is
shrunk to the desired size. If the command completes but the file is not truncated, use smaller steps, for example
15,000 MB rather than 30,000 MB.
To monitor shrink progress for all concurrently running shrink sessions, you can use the following query:
SELECT command,
percent_complete,
status,
wait_resource,
session_id,
wait_type,
blocking_session_id,
cpu_time,
reads,
CAST(((DATEDIFF(s,start_time, GETDATE()))/3600) AS varchar) + ' hour(s), '
+ CAST((DATEDIFF(s,start_time, GETDATE())%3600)/60 AS varchar) + 'min, '
+ CAST((DATEDIFF(s,start_time, GETDATE())%60) AS varchar) + ' sec' AS running_time
FROM sys.dm_exec_requests AS r
LEFT JOIN sys.databases AS d
ON r.database_id = d.database_id
WHERE r.command IN ('DbccSpaceReclaim','DbccFilesCompact','DbccLOBCompact','DBCC');
NOTE
Shrink progress may be non-linear, and the value in the percent_complete column may remain virtually unchanged for
long periods of time, even though shrink is still in progress.
Once shrink has completed for all data files, rerun the space usage query (or check in Azure portal) to determine
the resulting reduction in allocated storage size. If it is insufficient and there is still a large difference between used space and allocated space, you can rebuild indexes as described earlier. This may temporarily increase allocated space further; however, shrinking data files again after rebuilding indexes should result in a deeper
reduction in allocated space.
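Shrink commands can fail with deadlock or lock-timeout errors, so the sample loop below retries the shrink command several times before giving up. The declarations here are assumptions added to make the loop self-contained:
DECLARE @RetryCount int = 3; -- Number of retries before giving up
DECLARE @Delay varchar(12); -- Delay between retries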
-- Retry loop
WHILE @RetryCount >= 0
BEGIN
BEGIN TRY
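        -- Place the shrink command to be retried here, for example (illustrative only):
        -- DBCC SHRINKFILE (4);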
END TRY
BEGIN CATCH
-- Retry for the declared number of times without raising an error if deadlocked or timed out waiting for a lock
IF ERROR_NUMBER() IN (1205, 49516) AND @RetryCount > 0
BEGIN
SELECT @RetryCount -= 1;
-- Wait for a random period of time between 1 and 10 seconds before retrying
SELECT @Delay = '00:00:0' + CAST(CAST(1 + RAND() * 8.999 AS decimal(5,3)) AS varchar(5));
WAITFOR DELAY @Delay;
END
ELSE -- Raise error and exit loop
BEGIN
SELECT @RetryCount = -1;
THROW;
END
END CATCH
END;
In addition to timeouts and deadlocks, shrink may encounter errors due to certain known issues.
The errors returned and mitigation steps are as follows:
Error number : 49503 , error message: %.*ls: Page %d:%d could not be moved because it is an off-row
persistent version store page. Page holdup reason: %ls. Page holdup timestamp: %I64d.
This error occurs when there are long running active transactions that have generated row versions in persistent
version store (PVS). The pages containing these row versions cannot be moved by shrink, hence it cannot make
progress and fails with this error.
To mitigate, you have to wait until these long running transactions have completed. Alternatively, you can
identify and terminate these long running transactions, but this can impact your application if it does not handle
transaction failures gracefully. One way to find long running transactions is by running the following query in
the database where you ran the shrink command:
-- Transactions sorted by duration
SELECT st.session_id,
dt.database_transaction_begin_time,
DATEDIFF(second, dt.database_transaction_begin_time, CURRENT_TIMESTAMP) AS
transaction_duration_seconds,
dt.database_transaction_log_bytes_used,
dt.database_transaction_log_bytes_reserved,
st.is_user_transaction,
st.open_transaction_count,
ib.event_type,
ib.parameters,
ib.event_info
FROM sys.dm_tran_database_transactions AS dt
INNER JOIN sys.dm_tran_session_transactions AS st
ON dt.transaction_id = st.transaction_id
OUTER APPLY sys.dm_exec_input_buffer(st.session_id, default) AS ib
WHERE dt.database_id = DB_ID()
ORDER BY transaction_duration_seconds DESC;
You can terminate a transaction by using the KILL command and specifying the associated session_id value
from query result:
KILL 4242; -- replace 4242 with the session_id value from query results
Caution
Once PVS size reported in the persistent_version_store_size_gb column is substantially reduced compared to
its original size, rerunning shrink should succeed.
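To check the current PVS size, a query along these lines can be used (a sketch based on the sys.dm_tran_persistent_version_store_stats view, which reports the size in kilobytes; the gigabyte value referenced above is computed from it):
SELECT pvss.persistent_version_store_size_kb / 1024. / 1024 AS persistent_version_store_size_gb
FROM sys.dm_tran_persistent_version_store_stats AS pvss
WHERE pvss.database_id = DB_ID();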
Error number : 5223 , error message: %.*ls: Empty page %d:%d could not be deallocated.
This error may occur if there are ongoing index maintenance operations such as ALTER INDEX . Retry the shrink
command after these operations are complete.
If this error persists, the associated index might have to be rebuilt. To find the index to rebuild, execute the
following query in the same database where you ran the shrink command:
SELECT OBJECT_SCHEMA_NAME(pg.object_id) AS schema_name,
OBJECT_NAME(pg.object_id) AS object_name,
i.name AS index_name,
p.partition_number
FROM sys.dm_db_page_info(DB_ID(), <file_id>, <page_id>, default) AS pg
INNER JOIN sys.indexes AS i
ON pg.object_id = i.object_id
AND
pg.index_id = i.index_id
INNER JOIN sys.partitions AS p
ON pg.partition_id = p.partition_id;
Before executing this query, replace the <file_id> and <page_id> placeholders with the actual values from the
error message you received. For example, if the message is Empty page 1:62669 could not be deallocated, then
<file_id> is 1 and <page_id> is 62669 .
Rebuild the index identified by the query, and retry the shrink command.
Error number : 5201 , error message: DBCC SHRINKDATABASE: File ID %d of database ID %d was skipped
because the file does not have enough free space to reclaim.
This error means that the data file cannot be shrunk further. You can move on to the next data file.
Next steps
For information about database max sizes, see:
Azure SQL Database vCore-based purchasing model limits for a single database
Resource limits for single databases using the DTU-based purchasing model
Azure SQL Database vCore-based purchasing model limits for elastic pools
Resources limits for elastic pools using the DTU-based purchasing model
Use Resource Health to troubleshoot connectivity
for Azure SQL Database and Azure SQL Managed
Instance
Health checks
Resource Health determines the health of your SQL resource by examining the success and failure of logins to
the resource. Currently, Resource Health for your SQL Database resource only examines login failures due to
system error and not user error. The Resource Health status is updated every 1 to 2 minutes.
Health states
Available
A status of Available means that Resource Health has not detected login failures due to system errors on your
SQL resource.
Degraded
A status of Degraded means that Resource Health has detected a majority of successful logins, but some
failures as well. These are most likely transient login errors. To reduce the impact of connection issues caused by
transient login errors, implement retry logic in your code.
Unavailable
A status of Unavailable means that Resource Health has detected consistent login failures to your SQL
resource. If your resource remains in this state for an extended period of time, contact support.
Unknown
The health status of Unknown indicates that Resource Health hasn't received information about this resource
for more than 10 minutes. Although this status isn't a definitive indication of the state of the resource, it is an
important data point in the troubleshooting process. If the resource is running as expected, the status of the
resource will change to Available after a few minutes. If you're experiencing problems with the resource, the
Unknown health status might suggest that an event in the platform is affecting the resource.
Historical information
You can access up to 14 days of health history in the Health History section of Resource Health. The section will
also contain the downtime reason (when available) for the downtimes reported by Resource Health. Currently,
Azure shows the downtime for your database resource at a two-minute granularity. The actual downtime is
likely less than a minute. The average is 8 seconds.
Downtime reasons
When your database experiences downtime, analysis is performed to determine a reason. When available, the
downtime reason is reported in the Health History section of Resource Health. Downtime reasons are typically
published within 45 minutes after an event.
Planned maintenance
The Azure infrastructure periodically performs planned maintenance – the upgrade of hardware or software
components in the datacenter. While the database undergoes maintenance, Azure SQL may terminate some
existing connections and refuse new ones. The login failures experienced during planned maintenance are
typically transient, and retry logic helps reduce the impact. If you continue to experience login errors, contact
support.
Reconfiguration
Reconfigurations are considered transient conditions and are expected from time to time. These events can be
triggered by load balancing or software/hardware failures. Any client production application that connects to a
cloud database should implement a robust connection retry logic, as it would help mitigate these situations and
should generally make the errors transparent to the end user.
Next steps
Learn more about retry logic for transient errors.
Troubleshoot, diagnose, and prevent SQL connection errors.
Learn more about configuring Resource Health alerts.
Get an overview of Resource Health.
Review Resource Health FAQ.
Migrate Azure SQL Database from the DTU-based
model to the vCore-based model
Migrate a database
Migrating a database from the DTU-based purchasing model to the vCore-based purchasing model is similar to
scaling between service objectives in the Basic, Standard, and Premium service tiers, with similar duration and a
minimal downtime at the end of the migration process. A database migrated to the vCore-based purchasing
model can be migrated back to the DTU-based purchasing model at any time in the same fashion, with the
exception of databases migrated to the Hyperscale service tier.
TIP
This rule is approximate because it does not consider the specific type of hardware used for the DTU database or elastic
pool.
In the DTU model, the system may select any available hardware configuration for your database or elastic pool.
Further, in the DTU model you have only indirect control over the number of vCores (logical CPUs) by choosing
higher or lower DTU or eDTU values.
In the vCore model, customers must make an explicit choice of both the hardware configuration and the number
of vCores (logical CPUs). While DTU model does not offer these choices, hardware type and the number of
logical CPUs used for every database and elastic pool are exposed via dynamic management views. This makes
it possible to determine the matching vCore service objective more precisely.
The following approach uses this information to determine a vCore service objective with a similar allocation of
resources, to obtain a similar level of performance after migration to the vCore model.
DTU to vCore mapping
A T-SQL query below, when executed in the context of a DTU database to be migrated, returns a matching
(possibly fractional) number of vCores in each hardware configuration in the vCore model. By rounding this
number to the closest number of vCores available for databases and elastic pools in each hardware
configuration in the vCore model, customers can choose the vCore service objective that is the closest match for
their DTU database or elastic pool.
Sample migration scenarios using this approach are described in the Examples section.
Execute this query in the context of the database to be migrated, rather than in the master database. When
migrating an elastic pool, execute the query in the context of any database in the pool.
WITH dtu_vcore_map AS
(
SELECT rg.slo_name,
CAST(DATABASEPROPERTYEX(DB_NAME(), 'Edition') AS nvarchar(40)) COLLATE DATABASE_DEFAULT AS
dtu_service_tier,
CASE WHEN slo.slo_name LIKE '%SQLG4%' THEN 'Gen4'
WHEN slo.slo_name LIKE '%SQLGZ%' THEN 'Gen4'
WHEN slo.slo_name LIKE '%SQLG5%' THEN 'Gen5'
WHEN slo.slo_name LIKE '%SQLG6%' THEN 'Gen5'
WHEN slo.slo_name LIKE '%SQLG7%' THEN 'Gen5'
WHEN slo.slo_name LIKE '%GPGEN8%' THEN 'Gen5'
END COLLATE DATABASE_DEFAULT AS dtu_hardware_gen,
s.scheduler_count * CAST(rg.instance_cap_cpu/100. AS decimal(3,2)) AS dtu_logical_cpus,
CAST((jo.process_memory_limit_mb / s.scheduler_count) / 1024. AS decimal(4,2)) AS
dtu_memory_per_core_gb
FROM sys.dm_user_db_resource_governance AS rg
CROSS JOIN (SELECT COUNT(1) AS scheduler_count FROM sys.dm_os_schedulers WHERE status COLLATE
DATABASE_DEFAULT = 'VISIBLE ONLINE') AS s
CROSS JOIN sys.dm_os_job_object AS jo
CROSS APPLY (
SELECT UPPER(rg.slo_name) COLLATE DATABASE_DEFAULT AS slo_name
) slo
WHERE rg.dtu_limit > 0
AND
DB_NAME() COLLATE DATABASE_DEFAULT <> 'master'
AND
rg.database_id = DB_ID()
)
SELECT dtu_logical_cpus,
dtu_hardware_gen,
dtu_memory_per_core_gb,
dtu_service_tier,
CASE WHEN dtu_service_tier = 'Basic' THEN 'General Purpose'
WHEN dtu_service_tier = 'Standard' THEN 'General Purpose or Hyperscale'
WHEN dtu_service_tier = 'Premium' THEN 'Business Critical or Hyperscale'
END AS vcore_service_tier,
CASE WHEN dtu_hardware_gen = 'Gen4' THEN dtu_logical_cpus
WHEN dtu_hardware_gen = 'Gen5' THEN dtu_logical_cpus * 0.7
END AS Gen4_vcores,
7 AS Gen4_memory_per_core_gb,
CASE WHEN dtu_hardware_gen = 'Gen4' THEN dtu_logical_cpus * 1.7
WHEN dtu_hardware_gen = 'Gen5' THEN dtu_logical_cpus
END AS Gen5_vcores,
5.05 AS Gen5_memory_per_core_gb,
CASE WHEN dtu_hardware_gen = 'Gen4' THEN dtu_logical_cpus
WHEN dtu_hardware_gen = 'Gen5' THEN dtu_logical_cpus * 0.8
END AS Fsv2_vcores,
1.89 AS Fsv2_memory_per_core_gb,
CASE WHEN dtu_hardware_gen = 'Gen5' THEN dtu_logical_cpus * 0.7
WHEN dtu_hardware_gen = 'Gen4' THEN dtu_logical_cpus
END AS DC_vcores,
4.5 AS DC_memory_per_core_gb
FROM dtu_vcore_map;
Additional factors
Besides the number of vCores (logical CPUs) and the type of hardware, several other factors may influence the
choice of vCore service objective:
The mapping Transact-SQL query matches DTU and vCore service objectives in terms of their CPU capacity,
therefore the results will be more accurate for CPU-bound workloads.
For the same hardware type and the same number of vCores, IOPS and transaction log throughput resource
limits for vCore databases are often higher than for DTU databases. For IO-bound workloads, it may be
possible to lower the number of vCores in the vCore model to achieve the same level of performance. Actual
resource limits for DTU and vCore databases are exposed in the sys.dm_user_db_resource_governance view.
Comparing these values between the DTU database or pool to be migrated, and a vCore database or pool
with an approximately matching service objective will help you select the vCore service objective more
precisely.
The mapping query also returns the amount of memory per core for the DTU database or elastic pool to be
migrated, and for each hardware configuration in the vCore model. Ensuring similar or higher total memory
after migration to vCore is important for workloads that require a large memory data cache to achieve
sufficient performance, or workloads that require large memory grants for query processing. For such
workloads, depending on actual performance, it may be necessary to increase the number of vCores to get
sufficient total memory.
The historical resource utilization of the DTU database should be considered when choosing the vCore
service objective. DTU databases with consistently under-utilized CPU resources may need fewer vCores than
the number returned by the mapping query. Conversely, DTU databases where consistently high CPU
utilization causes inadequate workload performance may require more vCores than returned by the query.
If migrating databases with intermittent or unpredictable usage patterns, consider the use of Serverless
compute tier. Note that the max number of concurrent workers in serverless is 75% of the limit in
provisioned compute for the same number of max vCores configured. Also, the max memory available in
serverless is 3 GB times the maximum number of vCores configured, which is less than the per-core memory
for provisioned compute. For example, on Gen5 max memory is 120 GB when 40 max vCores are configured
in serverless, vs. 204 GB for a 40 vCore provisioned compute.
In the vCore model, the supported maximum database size may differ depending on hardware. For large
databases, check supported maximum sizes in the vCore model for single databases and elastic pools.
For elastic pools, the DTU and vCore models have differences in the maximum supported number of
databases per pool. This should be considered when migrating elastic pools with many databases.
Some hardware configurations may not be available in every region. Check availability under Hardware
configuration for SQL Database.
IMPORTANT
The DTU to vCore sizing guidelines above are provided to help in the initial estimation of the target database service
objective.
The optimal configuration of the target database is workload-dependent. Thus, to achieve the optimal price/performance
ratio after migration, you may need to leverage the flexibility of the vCore model to adjust the number of vCores,
hardware configuration, and service and compute tiers. You may also need to adjust database configuration parameters,
such as maximum degree of parallelism, and/or change the database compatibility level to enable recent improvements in
the database engine.
NOTE
The values in the examples below are for illustration purposes only. Actual values returned in described scenarios may be
different.
We see that the DTU database has 24 logical CPUs (vCores), with 5.4 GB of memory per vCore, and is using
Gen5 hardware. The direct match to that is a General Purpose 24 vCore database on Gen5 hardware, i.e. the
GP_Gen5_24 vCore service objective.
Migrating a Standard S0 database
The mapping query returns the following result (some columns not shown for brevity):
We see that the DTU database has the equivalent of 0.25 logical CPUs (vCores), with 0.42 GB of memory per
vCore, and is using Gen4 hardware. The smallest vCore service objectives in the Gen4 and Gen5 hardware
configurations, GP_Gen4_1 and GP_Gen5_2 , provide more compute resources than the Standard S0 database,
so a direct match is not possible. Since Gen4 hardware is being decommissioned, the GP_Gen5_2 option is
preferred. Additionally, if the workload is well-suited for the Serverless compute tier, then GP_S_Gen5_1 would
be a closer match.
Migrating a Premium P15 database
The mapping query returns the following result (some columns not shown for brevity):
We see that the DTU database has 42 logical CPUs (vCores), with 4.86 GB of memory per vCore, and is using
Gen5 hardware. While there is not a vCore service objective with 42 cores, the BC_Gen5_40 service objective is
very close both in terms of CPU and memory capacity, and is a good match.
Migrating a Basic 200 eDTU elastic pool
The mapping query returns the following result (some columns not shown for brevity):
We see that the DTU elastic pool has 4 logical CPUs (vCores), with 5.4 GB of memory per vCore, and is using
Gen5 hardware. The direct match in the vCore model is a GP_Gen5_4 elastic pool. However, this service
objective supports a maximum of 200 databases per pool, while the Basic 200 eDTU elastic pool supports up to
500 databases. If the elastic pool to be migrated has more than 200 databases, the matching vCore service
objective would have to be GP_Gen5_6 , which supports up to 500 databases.
Migrate geo-replicated databases
Migrating from the DTU-based model to the vCore-based purchasing model is similar to upgrading or
downgrading the geo-replication relationships between databases in the standard and premium service tiers.
During migration, you don't have to stop geo-replication, but you must follow these sequencing rules:
When upgrading, you must upgrade the secondary database first, and then upgrade the primary.
When downgrading, reverse the order: you must downgrade the primary database first, and then
downgrade the secondary.
When you're using geo-replication between two elastic pools, we recommend that you designate one pool as
the primary and the other as the secondary. In that case, when you're migrating elastic pools you should use the
same sequencing guidance. However, if you have elastic pools that contain both primary and secondary
databases, treat the pool with the higher utilization as the primary and follow the sequencing rules accordingly.
The following table provides guidance for specific migration scenarios:
Next steps
For the specific compute sizes and storage size choices available for single databases, see SQL Database
vCore-based resource limits for single databases.
For the specific compute sizes and storage size choices available for elastic pools, see SQL Database vCore-
based resource limits for elastic pools.
Scale single database resources in Azure SQL
Database
9/13/2022 • 11 minutes to read • Edit Online
This article describes how to scale the compute and storage resources available for an Azure SQL Database in
the provisioned compute tier. Alternatively, the serverless compute tier provides compute autoscaling and bills
per second for compute used.
After initially picking the number of vCores or DTUs, you can scale a single database up or down dynamically
based on actual experience using:
Transact-SQL
Azure portal
PowerShell
Azure CLI
REST API
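For example, with Transact-SQL the change is a single ALTER DATABASE statement. The following is a minimal sketch; the database name and target service objective are hypothetical, and the statement is typically run while connected to the master database of the logical server:

-- Hypothetical example: scale database MyDb to the General Purpose
-- 2-vCore (Gen5) provisioned compute service objective.
ALTER DATABASE [MyDb]
    MODIFY (EDITION = 'GeneralPurpose', SERVICE_OBJECTIVE = 'GP_Gen5_2');

The operation runs asynchronously; the database stays online while the new compute is prepared.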
IMPORTANT
Under some circumstances, you may need to shrink a database to reclaim unused space. For more information, see
Manage file space in Azure SQL Database.
Impact
Changing the service tier or compute size of a database mainly involves the service performing the following steps:
1. Create a new compute instance for the database.
A new compute instance is created with the requested service tier and compute size. For some
combinations of service tier and compute size changes, a replica of the database must be created in the
new compute instance, which involves copying data and can strongly influence the overall latency.
Regardless, the database remains online during this step, and connections continue to be directed to the
database in the original compute instance.
2. Switch routing of connections to a new compute instance.
Existing connections to the database in the original compute instance are dropped. Any new connections
are established to the database in the new compute instance. For some combinations of service tier and
compute size changes, database files are detached and reattached during the switch. Regardless, the
switch can result in a brief service interruption during which the database is unavailable, generally for
less than 30 seconds and often for only a few seconds. If long-running transactions are active when
connections are dropped, this step may take longer because aborted transactions must be recovered.
Accelerated Database Recovery can reduce the impact of aborting long-running transactions.
IMPORTANT
No data is lost during any step in the workflow. Make sure that you have implemented some retry logic in the
applications and components that are using Azure SQL Database while the service tier is changed.
Latency
The estimated latency to change the service tier, scale the compute size of a single database or elastic pool,
move a database in/out of an elastic pool, or move a database between elastic pools is parameterized as follows:
The latency table groups service objectives into these columns: Basic single database or Standard (S0-S1); Basic elastic pool, Standard (S2-S12), or General Purpose single database or elastic pool; Premium or Business Critical single database or elastic pool; and Hyperscale.
NOTE
Additionally, for Standard (S2-S12) and General Purpose databases, latency for moving a database in/out of an elastic pool
or between elastic pools will be proportional to database size if the database is using Premium File Share (PFS) storage.
To determine if a database is using PFS storage, execute the following query in the context of the database. If the value in
the AccountType column is PremiumFileStorage or PremiumFileStorage-ZRS , the database is using PFS storage.
SELECT s.file_id,
s.type_desc,
s.name,
FILEPROPERTYEX(s.name, 'AccountType') AS AccountType
FROM sys.database_files AS s
WHERE s.type_desc IN ('ROWS', 'LOG');
NOTE
The zone redundant property will remain the same by default when scaling from the Business Critical to the General
Purpose tier. Latency for this downgrade when zone redundancy is enabled as well as latency for switching to zone
redundancy for the General Purpose tier will be proportional to database size.
TIP
To monitor in-progress operations, see: Manage operations using the SQL REST API, Manage operations using CLI,
Monitor operations using T-SQL and these two PowerShell commands: Get-AzSqlDatabaseActivity and Stop-
AzSqlDatabaseActivity.
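As an illustrative sketch of the T-SQL option, in-progress management operations can be observed from the master database of the logical server by querying sys.dm_operation_status; the column selection below is only an example:

-- Run in the master database of the logical server.
-- Shows recent management operations (including service tier changes)
-- and their progress.
SELECT operation, major_resource_id AS database_name, state_desc,
       percent_complete, start_time, last_modify_time
FROM sys.dm_operation_status
ORDER BY start_time DESC;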
Cancelling changes
A service tier change or compute rescaling operation can be canceled.
The Azure portal
In the database overview blade, navigate to Notifications and click on the tile indicating there's an ongoing
operation:
PowerShell
From a PowerShell command prompt, set $resourceGroupName, $serverName, and $databaseName, and then run the following command:

$operationName = (az sql db op list --resource-group $resourceGroupName --server $serverName --database $databaseName --query "[?state=='InProgress'].name" --out tsv)
if (-not [string]::IsNullOrEmpty($operationName)) {
    (az sql db op cancel --resource-group $resourceGroupName --server $serverName --database $databaseName --name $operationName)
    "Operation " + $operationName + " has been canceled"
}
else {
    "No service tier change or compute rescaling operation found"
}
Permissions
To scale databases via T-SQL, ALTER DATABASE permissions are needed. To scale a database, a login must be
either the server admin login (created when the Azure SQL Database logical server was provisioned), the Azure
AD admin of the server, a member of the dbmanager database role in master, a member of the db_owner
database role in the current database, or the dbo of the database. For more information, see ALTER DATABASE.
To scale databases via the Azure portal, PowerShell, Azure CLI, or REST API, Azure RBAC permissions are needed,
specifically membership in the Contributor, SQL DB Contributor, or SQL Server Contributor Azure RBAC roles. For
more information, see Azure RBAC built-in roles.
Additional considerations
If you're upgrading to a higher service tier or compute size, the database max size doesn't increase unless
you explicitly specify a larger size (maxsize).
To downgrade a database, the database used space must be smaller than the maximum allowed size of the
target service tier and compute size.
When downgrading from Premium to the Standard tier, an extra storage cost applies if both (1) the max
size of the database is supported in the target compute size, and (2) the max size exceeds the included
storage amount of the target compute size. For example, if a P1 database with a max size of 500 GB is
downsized to S3, then an extra storage cost applies since S3 supports a max size of 1 TB and its included
storage amount is only 250 GB. So, the extra storage amount is 500 GB – 250 GB = 250 GB. For pricing of
extra storage, see Azure SQL Database pricing. If the actual amount of space used is less than the included
storage amount, then this extra cost can be avoided by reducing the database max size to the included
amount.
When upgrading a database with geo-replication enabled, upgrade its secondary databases to the desired
service tier and compute size before upgrading the primary database (general guidance for best
performance). When upgrading to a different edition, it's a requirement that the secondary database is
upgraded first.
When downgrading a database with geo-replication enabled, downgrade its primary databases to the
desired service tier and compute size before downgrading the secondary database (general guidance for
best performance). When downgrading to a different edition, it's a requirement that the primary database is
downgraded first.
The restore service offerings are different for the various service tiers. If you're downgrading to the Basic tier,
there's a lower backup retention period. See Azure SQL Database Backups.
The new properties for the database aren't applied until the changes are complete.
When data copying is required to scale a database (see Latency) when changing the service tier, high
resource utilization concurrent to the scaling operation may cause longer scaling times. With Accelerated
Database Recovery (ADR), rollback of long running transactions is not a significant source of delay, but high
concurrent resource usage may leave less compute, storage, and network bandwidth resources for scaling,
particularly for smaller compute sizes.
Billing
You're billed for each hour a database exists using the highest service tier + compute size that applied during
that hour, regardless of usage or whether the database was active for less than an hour. For example, if you
create a single database and delete it five minutes later your bill reflects a charge for one database hour.
IMPORTANT
Under some circumstances, you may need to shrink a database to reclaim unused space. For more information, see
Manage file space in Azure SQL Database.
Next steps
For overall resource limits, see Azure SQL Database vCore-based resource limits - single databases and Azure
SQL Database DTU-based resource limits - single databases.
Scale elastic pool resources in Azure SQL Database
9/13/2022 • 7 minutes to read • Edit Online
IMPORTANT
No data is lost during any step in the workflow.
NOTE
In the case of changing the service tier or rescaling compute for an elastic pool, the summation of space used across
all databases in the pool should be used to calculate the estimate.
In the case of moving a database to/from an elastic pool, only the space used by the database impacts the latency, not
the space used by the elastic pool.
For Standard and General Purpose elastic pools, latency of moving a database in/out of an elastic pool or between
elastic pools will be proportional to database size if the elastic pool is using Premium File Share (PFS) storage. To
determine if a pool is using PFS storage, execute the following query in the context of any database in the pool. If the
value in the AccountType column is PremiumFileStorage or PremiumFileStorage-ZRS , the pool is using PFS
storage.
SELECT s.file_id,
s.type_desc,
s.name,
FILEPROPERTYEX(s.name, 'AccountType') AS AccountType
FROM sys.database_files AS s
WHERE s.type_desc IN ('ROWS', 'LOG');
TIP
To monitor in-progress operations, see: Manage operations using the SQL REST API, Manage operations using CLI,
Monitor operations using T-SQL and these two PowerShell commands: Get-AzSqlDatabaseActivity and Stop-
AzSqlDatabaseActivity.
IMPORTANT
Under some circumstances, you may need to shrink a database to reclaim unused space. For more information, see
Manage file space in Azure SQL Database.
Next steps
For overall resource limits, see SQL Database vCore-based resource limits - elastic pools and SQL Database
DTU-based resource limits - elastic pools.
Manage elastic pools in Azure SQL Database
9/13/2022 • 5 minutes to read • Edit Online
Azure portal
All pool settings can be found in one place: the Configure pool blade. To get here, find an elastic pool in the
Azure portal and click Configure pool either from the top of the blade or from the resource menu on the left.
From here you can make any combination of the following changes and save them all in one batch:
1. Change the service tier of the pool
2. Scale the performance (DTU or vCores) and storage up or down
3. Add or remove databases to/from the pool
4. Set a min (guaranteed) and max performance limit for the databases in the pools
5. Review the cost summary to view any changes to your bill as a result of your new selections
PowerShell
NOTE
This article uses the Azure Az PowerShell module, which is the recommended PowerShell module for interacting with
Azure. To get started with the Az PowerShell module, see Install Azure PowerShell. To learn how to migrate to the Az
PowerShell module, see Migrate Azure PowerShell from AzureRM to Az.
IMPORTANT
The PowerShell Azure Resource Manager module is still supported by Azure SQL Database, but all future development is
for the Az.Sql module. For these cmdlets, see AzureRM.Sql. The arguments for the commands in the Az module and in the
AzureRm modules are substantially identical.
To create and manage SQL Database elastic pools and pooled databases with Azure PowerShell, use the
following PowerShell cmdlets. If you need to install or upgrade PowerShell, see Install Azure PowerShell module.
To create and manage the servers for an elastic pool, see Create and manage servers. To create and manage
firewall rules, see Create and manage firewall rules using PowerShell.
TIP
For PowerShell example scripts, see Create elastic pools and move databases between pools and out of a pool using
PowerShell and Use PowerShell to monitor and scale a SQL elastic pool in Azure SQL Database.
TIP
Creation of many databases in an elastic pool can take time when done using the portal or PowerShell cmdlets that create
only a single database at a time. To automate creation into an elastic pool, see CreateOrUpdateElasticPoolAndPopulate.
Azure CLI
To create and manage SQL Database elastic pools with Azure CLI, use the following Azure CLI SQL Database
commands. Use the Cloud Shell to run Azure CLI in your browser, or install it on macOS, Linux, or Windows.
TIP
For Azure CLI example scripts, see Use CLI to move a database in SQL Database in a SQL elastic pool and Use Azure CLI
to scale a SQL elastic pool in Azure SQL Database.
az sql elastic-pool list-editions: Also includes available pool DTU settings, storage limits, and per database settings. In order to reduce verbosity, additional storage limits and per database settings are hidden by default.
Transact-SQL (T-SQL)
To create and move databases within existing elastic pools or to return information about an SQL Database
elastic pool with Transact-SQL, use the following T-SQL commands. You can issue these commands using the
Azure portal, SQL Server Management Studio, Visual Studio Code, or any other program that can connect to a
server and pass Transact-SQL commands. To create and manage firewall rules using T-SQL, see Manage firewall
rules using Transact-SQL.
IMPORTANT
You cannot create, update, or delete an Azure SQL Database elastic pool using Transact-SQL. You can add or remove
databases from an elastic pool, and you can use DMVs to return information about existing elastic pools.
CREATE DATABASE (Azure SQL Database): Creates a new database in an existing pool or as a single database. You must be connected to the master database to create a new database.
ALTER DATABASE (Azure SQL Database): Move a database into, out of, or between elastic pools.
sys.elastic_pool_resource_stats (Azure SQL Database): Returns resource usage statistics for all the elastic pools on a server. For each elastic pool, there is one row for each 15 second reporting window (four rows per minute). This includes CPU, IO, Log, storage consumption and concurrent request/session utilization by all databases in the pool.
sys.database_service_objectives (Azure SQL Database): Returns the edition (service tier), service objective (pricing tier), and elastic pool name, if any, for a database in SQL Database or Azure Synapse Analytics. If logged on to the master database in a server, returns information on all databases. For Azure Synapse Analytics, you must be connected to the master database.
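As a sketch of how ALTER DATABASE is used with pools, where the database, pool, and service objective names below are hypothetical:

-- Move database MyDb into elastic pool MyPool (typically run from the master database).
ALTER DATABASE [MyDb] MODIFY (SERVICE_OBJECTIVE = ELASTIC_POOL (name = [MyPool]));

-- Move the database back out of the pool by assigning a standalone service objective.
ALTER DATABASE [MyDb] MODIFY (SERVICE_OBJECTIVE = 'GP_Gen5_2');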
REST API
To create and manage SQL Database elastic pools and pooled databases, use these REST API requests.
Elastic pools - Create or update: Creates a new elastic pool or updates an existing elastic pool.
Elastic pool database activities: Returns activity on databases inside of an elastic pool.
Next steps
To learn more about design patterns for SaaS applications using elastic pools, see Design Patterns for Multi-
tenant SaaS Applications with Azure SQL Database.
For a SaaS tutorial using elastic pools, see Introduction to the Wingtip SaaS application.
Resource management in dense elastic pools
9/13/2022 • 15 minutes to read • Edit Online
Resource governance
Resource sharing requires the system to carefully control resource usage to minimize the "noisy neighbor"
effect, where a database with high resource consumption affects other databases in the same elastic pool. Azure
SQL Database achieves these goals by implementing resource governance. At the same time, the system must
provide sufficient resources for features such as high availability and disaster recovery (HADR), backup and
restore, monitoring, Query Store, Automatic tuning, etc. to function reliably.
The primary design goal of elastic pools is to be cost-effective. For this reason, the system intentionally allows
customers to create dense pools, that is pools with the number of databases approaching or at the maximum
allowed, but with a moderate allocation of compute resources. For the same reason, the system doesn't reserve
all potentially needed resources for its internal processes, but allows resource sharing between internal
processes and user workloads.
This approach allows customers to use dense elastic pools to achieve adequate performance and major cost
savings. However, if the workload against many databases in a dense pool is sufficiently intense, resource
contention becomes significant. Resource contention reduces user workload performance, and can negatively
impact internal processes.
IMPORTANT
In dense pools with many active databases, it may not be feasible to increase the number of databases in the pool up to
the maximums documented for DTU and vCore elastic pools.
The number of databases that can be placed in dense pools without causing resource contention and performance
problems depends on the number of concurrently active databases, and on resource consumption by user workloads in
each database. This number can change over time as user workloads change.
Additionally, if the min vCores per database, or min DTUs per database setting is set to a value greater than 0, the
maximum number of databases in the pool will be implicitly limited. For more information, see Database properties for
pooled vCore databases and Database properties for pooled DTU databases.
When resource contention occurs in a densely packed pool, customers can choose one or more of the following
actions to mitigate it:
Tune query workload to reduce resource consumption, or spread resource consumption across multiple
databases over time.
Reduce pool density by moving some databases to another pool, or by making them standalone databases.
Scale up the pool to get more resources.
For suggestions on how to implement the last two actions, see Operational recommendations later in this
article. Reducing resource contention benefits both user workloads and internal processes, and lets the system
reliably maintain expected level of service.
avg_instance_cpu_percent
Description: CPU utilization of the SQL process associated with an elastic pool, as measured by the underlying operating system. Available in the sys.dm_db_resource_stats view in every database, and in the sys.elastic_pool_resource_stats view in the master database. This metric is also emitted to Azure Monitor, where it is named sqlserver_process_core_percent, and can be viewed in Azure portal. This value is the same for every database in the same elastic pool.
Recommended average value: Below 70%. Occasional short spikes up to 90% may be acceptable.

max_worker_percent
Description: Worker thread utilization. Provided for each database in the pool, as well as for the pool itself. There are different limits on the number of worker threads at the database level, and at the pool level, therefore monitoring this metric at both levels is recommended. Available in the sys.dm_db_resource_stats view in every database, and in the sys.elastic_pool_resource_stats view in the master database. This metric is also emitted to Azure Monitor, where it is named workers_percent, and can be viewed in Azure portal.
Recommended average value: Below 80%. Spikes up to 100% will cause connection attempts and queries to fail.

avg_data_io_percent
Description: IOPS utilization for read and write physical IO. Provided for each database in the pool, as well as for the pool itself. There are different limits on the number of IOPS at the database level, and at the pool level, therefore monitoring this metric at both levels is recommended. Available in the sys.dm_db_resource_stats view in every database, and in the sys.elastic_pool_resource_stats view in the master database. This metric is also emitted to Azure Monitor, where it is named physical_data_read_percent, and can be viewed in Azure portal.
Recommended average value: Below 80%. Occasional short spikes up to 100% may be acceptable.

avg_log_write_percent
Description: Throughput utilization for transaction log write IO. Provided for each database in the pool, as well as for the pool itself. There are different limits on the log throughput at the database level, and at the pool level, therefore monitoring this metric at both levels is recommended. Available in the sys.dm_db_resource_stats view in every database, and in the sys.elastic_pool_resource_stats view in the master database. This metric is also emitted to Azure Monitor, where it is named log_write_percent, and can be viewed in Azure portal. When this metric is close to 100%, all database modifications (INSERT, UPDATE, DELETE, MERGE statements, SELECT … INTO, BULK INSERT, etc.) will be slower.
Recommended average value: Below 90%. Occasional short spikes up to 100% may be acceptable.

avg_storage_percent
Description: Total storage space used by data in all databases within an elastic pool. Does not include empty space in database files. Available in the sys.elastic_pool_resource_stats view in the master database. This metric is also emitted to Azure Monitor, where it is named storage_percent, and can be viewed in Azure portal.
Recommended average value: Below 80%. Can approach 100% for pools with no data growth.

avg_allocated_storage_percent
Description: Total storage space used by database files in storage in all databases within an elastic pool. Includes empty space in database files. Available in the sys.elastic_pool_resource_stats view in the master database. This metric is also emitted to Azure Monitor, where it is named allocated_data_storage_percent, and can be viewed in Azure portal.
Recommended average value: Below 90%. Can approach 100% for pools with no data growth.

tempdb_log_used_percent
Description: Transaction log space utilization in the tempdb database. Even though temporary objects created in one database are not visible in other databases in the same elastic pool, tempdb is a shared resource for all databases in the same pool. A long running or orphaned transaction in tempdb started from one database in the pool can consume a large portion of transaction log, and cause failures for queries in other databases in the same pool. Derived from the sys.dm_db_log_space_usage and sys.database_files views. This metric is also emitted to Azure Monitor, and can be viewed in Azure portal. See Examples for a sample query to return the current value of this metric.
Recommended average value: Below 50%. Occasional spikes up to 80% are acceptable.
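As a sketch, recent pool-level samples of several of these metrics can be read from sys.elastic_pool_resource_stats in the master database; the pool name below is hypothetical:

-- Run in the master database of the logical server.
SELECT TOP (30) end_time, avg_cpu_percent, avg_data_io_percent,
       avg_log_write_percent, max_worker_percent, avg_storage_percent
FROM sys.elastic_pool_resource_stats
WHERE elastic_pool_name = 'MyPool'
ORDER BY end_time DESC;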
In addition to these metrics, Azure SQL Database provides a view that returns actual resource governance limits,
as well as additional views that return resource utilization statistics at the resource pool level, and at the
workload group level.
TIP
To query these and other dynamic management views using a principal other than server administrator, add this principal
to the ##MS_ServerStateReader## server role.
These views can be used to monitor resource utilization and troubleshoot resource contention in near real-time.
User workload on the primary and readable secondary replicas, including geo-replicas, is classified into the
SloSharedPool1 resource pool and UserPrimaryGroup.DBId[N] workload group, where N stands for the
database ID value.
In addition to monitoring current resource utilization, customers using dense pools can maintain historical
resource utilization data in a separate data store. This data can be used in predictive analysis to proactively
manage resource utilization based on historical and seasonal trends.
Operational recommendations
Leave sufficient resource headroom . If resource contention and performance degradation occur, mitigation
may involve moving some databases out of the affected elastic pool, or scaling up the pool, as noted earlier.
However, these actions require additional compute resources to complete. In particular, for Premium and
Business Critical pools, these actions require transferring all data for the databases being moved, or for all
databases in the elastic pool if the pool is scaled up. Data transfer is a long running and resource-intensive
operation. If the pool is already under high resource pressure, the mitigating operation itself will degrade
performance even further. In extreme cases, it may not be possible to solve resource contention via database
move or pool scale-up because the required resources are not available. In this case, temporarily reducing query
workload on the affected elastic pool may be the only solution.
Customers using dense pools should closely monitor resource utilization trends as described earlier, and take
mitigating action while metrics remain within the recommended ranges and there are still sufficient resources in
the elastic pool.
Resource utilization depends on multiple factors that change over time for each database and each elastic pool.
Achieving an optimal price/performance ratio in dense pools requires continuous monitoring and rebalancing, that
is, moving databases from more utilized pools to less utilized pools, and creating new pools as necessary to
accommodate increased workload.
NOTE
For DTU elastic pools, the eDTU metric at the pool level is not a MAX or a SUM of individual database utilization. It is
derived from the utilization of various pool level metrics. Pool level resource limits may be higher than individual database
level limits, so it is possible that an individual database can reach a specific resource limit (CPU, data IO, log IO, etc.), even
when the eDTU reporting for the pool indicates that no limit has been reached.
Do not move "hot" databases . If resource contention at the pool level is primarily caused by a small number
of highly utilized databases, it may be tempting to move these databases to a less utilized pool, or make them
standalone databases. However, doing this while a database remains highly utilized is not recommended,
because the move operation will further degrade performance, both for the database being moved, and for the
entire pool. Instead, either wait until high utilization subsides, or move less utilized databases instead to relieve
resource pressure at the pool level. But moving databases with very low utilization does not provide any benefit
in this case, because it does not materially reduce resource utilization at the pool level.
Create new databases in a "quarantine" pool . In scenarios where new databases are created frequently,
such as applications using the tenant-per-database model, there is risk that a new database placed into an
existing elastic pool will unexpectedly consume significant resources and affect other databases and internal
processes in the pool. To mitigate this risk, create a separate "quarantine" pool with ample allocation of
resources. Use this pool for new databases with yet unknown resource consumption patterns. Once a database
has stayed in this pool for a business cycle, such as a week or a month, and its resource consumption is known,
it can be moved to a pool with sufficient capacity to accommodate this additional resource usage.
Monitor both used and allocated space . When allocated pool space (total size of all database files in
storage for all databases in a pool) reaches maximum pool size, out-of-space errors may occur. If allocated space
trends high and is on track to reach maximum pool size, mitigation options include:
Move some databases out of the pool to reduce total allocated space
Shrink database files to reduce empty allocated space in files
Scale up the pool to a service objective with a larger maximum pool size
If used pool space (total size of data in all databases in a pool, not including empty space in files) trends high
and is on track to reach maximum pool size, mitigation options include:
Move some databases out of the pool to reduce total used space
Move (archive) data outside of the database, or delete no longer needed data
Implement data compression
Scale up the pool to a service objective with a larger maximum pool size
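To compare allocated and used space for an individual database, a query along the following lines can be run in that database; it is a sketch built on standard catalog functions rather than a query taken from this article:

-- Allocated vs. used space for the data files of the current database.
SELECT SUM(CAST(size AS bigint)) * 8 / 1024. AS allocated_space_mb,
       SUM(CAST(FILEPROPERTY(name, 'SpaceUsed') AS bigint)) * 8 / 1024. AS used_space_mb
FROM sys.database_files
WHERE type_desc = 'ROWS';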
Avoid overly dense servers. Azure SQL Database supports up to 5000 databases per server. Customers
using elastic pools with thousands of databases may consider placing multiple elastic pools on a single server,
with the total number of databases up to the supported limit. However, servers with many thousands of
databases create operational challenges. Operations that require enumerating all databases on a server, for
example viewing databases in the portal, will be slower. Operational errors, such as incorrect modification of
server level logins or firewall rules, will affect a larger number of databases. Accidental deletion of the server
will require assistance from Microsoft Support to recover databases on the deleted server, and will cause a
prolonged outage for all affected databases.
Limit the number of databases per server to a lower number than the maximum supported. In many scenarios,
using up to 1000-2000 databases per server is optimal. To reduce the likelihood of accidental server deletion,
place a delete lock on the server or its resource group.
Examples
View individual database capacity settings
Use the sys.dm_user_db_resource_governance dynamic management view to view the actual configuration and
capacity settings used by resource governance in the current database or elastic pool. For more information, see
sys.dm_user_db_resource_governance.
Run this query in any database in an elastic pool. All databases in the pool have the same resource governance
settings.
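The query itself can be as simple as the following sketch:

-- Run in any database in the elastic pool; returns the resource governance
-- configuration and capacity settings that apply to it.
SELECT * FROM sys.dm_user_db_resource_governance;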
For longer retention time with less frequency, consider the following query on sys.resource_stats , run in the
master database of the Azure SQL logical server. For more information, see sys.resource_stats (Azure SQL
Database). One row exists every five minutes, and historical data is maintained for two weeks.
SELECT pool_id,
name AS resource_pool_name,
IIF(name LIKE 'SloSharedPool%' OR name LIKE 'UserPool%', 'user', 'system') AS resource_pool_type,
SUM(CAST(delta_out_of_memory_count AS decimal))/(SUM(duration_ms)/1000.) AS oom_per_second
FROM sys.dm_resource_governor_resource_pools_history_ex
GROUP BY pool_id, name
ORDER BY pool_id;
Monitoring tempdb log space utilization
This query returns the current value of the tempdb_log_used_percent metric, showing the relative utilization of
the tempdb transaction log relative to its maximum allowed size. This query can be run in any database in an
elastic pool.
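A minimal sketch of such a query, derived from the metric description above (used tempdb log space relative to the maximum allowed tempdb log size):

-- Approximate relative utilization of the tempdb transaction log.
SELECT (lsu.used_log_space_in_bytes * 100.) / df.log_max_size_bytes AS tempdb_log_used_percent
FROM tempdb.sys.dm_db_log_space_usage AS lsu
CROSS JOIN (
    SELECT SUM(CAST(max_size AS bigint)) * 8 * 1024. AS log_max_size_bytes
    FROM tempdb.sys.database_files
    WHERE type_desc = N'LOG'
) AS df;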
Next steps
For an introduction to elastic pools, see Elastic pools help you manage and scale multiple databases in Azure
SQL Database.
For information on tuning query workloads to reduce resource utilization, see Monitoring and tuning, and
Monitoring and performance tuning.
How to manage a Hyperscale database
9/13/2022 • 15 minutes to read • Edit Online
The Azure portal enables you to migrate to the Hyperscale service tier by modifying the pricing tier for your
database.
1. Navigate to the database you wish to migrate in the Azure portal.
2. In the left navigation bar, select Compute + storage .
3. Select the Service tier drop-down to expand the options for service tiers.
4. Select Hyperscale (On-demand scalable storage) from the dropdown menu.
5. Review the Hardware Configuration listed. If desired, select Change configuration to select the
appropriate hardware configuration for your workload.
6. Review the option to Save money . Select it if you qualify for Azure Hybrid Benefit and wish to use it for this
database.
7. Select the vCores slider if you wish to change the number of vCores available for your database under the
Hyperscale service tier.
8. Select the High-Availability Secondary Replicas slider if you wish to change the number of replicas under
the Hyperscale service tier.
9. Select Apply .
You can monitor operations for a Hyperscale database while the operation is ongoing.
The Azure portal enables you to reverse migrate to the General Purpose service tier by modifying the pricing
tier for your database.
Portal
Azure CLI
PowerShell
Transact-SQL
The Azure portal shows a notification for a database in Azure SQL Database when an operation such as a
migration, reverse migration, or restore is in progress.
The Azure portal shows a list of all databases on a logical server. The Pricing tier column includes the service
tier for each database.
1. Navigate to your logical server in the Azure portal.
2. In the left navigation bar, select Overview .
3. Scroll to the list of resources at the bottom of the pane. The window will display SQL elastic pools and
databases on the logical server.
4. Review the Pricing tier column to identify databases in the Hyperscale service tier.
Next steps
Learn more about Hyperscale databases in the following articles:
Quickstart: Create a Hyperscale database in Azure SQL Database
Hyperscale service tier
Azure SQL Database Hyperscale FAQ
Hyperscale secondary replicas
Azure SQL Database Hyperscale named replicas FAQ
SQL Hyperscale performance troubleshooting
diagnostics
9/13/2022 • 8 minutes to read • Edit Online
NOTE
To view these attributes in the query plan properties window, SSMS 18.3 or later is required.
A ratio of reads done on RBPEX to aggregated reads done on all other data files provides RBPEX cache hit ratio.
The counter RBPEX cache hit ratio is also exposed in the performance counters DMV
sys.dm_os_performance_counters .
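For a rough approximation of this ratio, reads against file_id 0 can be compared with reads against the other data files in sys.dm_io_virtual_file_stats; this is a sketch, not an official formula:

-- Reads served from the local RBPEX cache (file_id 0) relative to reads
-- against all other data files on this compute replica (file_id 2 is the log).
SELECT SUM(IIF(file_id = 0, num_of_reads, 0)) * 1.0
       / NULLIF(SUM(IIF(file_id NOT IN (0, 2), num_of_reads, 0)), 0) AS rbpex_to_other_data_reads_ratio
FROM sys.dm_io_virtual_file_stats(DB_ID(), NULL);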
Data reads
When reads are issued by the SQL Server database engine on a compute replica, they may be served either
by the local RBPEX cache, or by remote page servers, or by a combination of the two if reading multiple
pages.
When the compute replica reads some pages from a specific file, for example file_id 1, if this data resides
solely on the local RBPEX cache, all IO for this read is accounted against file_id 0 (RBPEX). If some part of that
data is in the local RBPEX cache, and some part is on a remote page server, then IO is accounted towards
file_id 0 for the part served from RBPEX, and the part served from the remote page server is accounted
towards file_id 1.
When a compute replica requests a page at a particular LSN from a page server, if the page server has not
caught up to the LSN requested, the read on the compute replica will wait until the page server catches up
before the page is returned to the compute replica. For any read from a page server on the compute replica,
you will see the PAGEIOLATCH_* wait type if it is waiting on that IO. In Hyperscale, this wait time includes
both the time to catch up the requested page on the page server to the LSN required, and the time needed to
transfer the page from the page server to the compute replica.
Large reads such as read-ahead are often done using "Scatter-Gather" Reads. This allows reads of up to 4 MB
of pages at a time, considered a single read in the SQL Server database engine. However, when data being
read is in RBPEX, these reads are accounted as multiple individual 8-KB reads, since the buffer pool and
RBPEX always use 8-KB pages. As the result, the number of read IOs seen against RBPEX may be larger than
the actual number of IOs performed by the engine.
Data writes
The primary compute replica does not write directly to page servers. Instead, log records from the log
service are replayed on corresponding page servers.
Writes that happen on the compute replica are predominantly writes to the local RBPEX (file_id 0). For writes
on logical files that are larger than 8 KB, in other words those done using Gather-write, each write operation
is translated into multiple 8-KB individual writes to RBPEX since the buffer pool and RBPEX always use 8-KB
pages. As the result, the number of write IOs seen against RBPEX may be larger than the actual number of
IOs performed by the engine.
Non-RBPEX files, or data files other than file_id 0 that correspond to page servers, also show writes. In the
Hyperscale service tier, these writes are simulated, because the compute replicas never write directly to page
servers. Write IOPS and throughput are accounted as they occur on the compute replica, but latency for data
files other than file_id 0 does not reflect the actual latency of page server writes.
Log writes
On the primary compute, a log write is accounted for in file_id 2 of sys.dm_io_virtual_file_stats. A log write
on primary compute is a write to the log Landing Zone.
Log records are not hardened on the secondary replica on a commit. In Hyperscale, log is applied by the log
service to the secondary replicas asynchronously. Because log writes don't actually occur on secondary
replicas, any accounting of log IOs on the secondary replicas is for tracking purposes only.
Additional resources
For vCore resource limits for a Hyperscale single database see Hyperscale service tier vCore Limits
For monitoring Azure SQL Databases, enable Azure Monitor SQL Insights (preview)
For Azure SQL Database performance tuning, see Query performance in Azure SQL Database
For performance tuning using Query Store, see Performance monitoring using Query store
For DMV monitoring scripts, see Monitoring performance Azure SQL Database using dynamic management
views
What is Block T-SQL CRUD feature?
9/13/2022 • 2 minutes to read • Edit Online
Overview
To block creation or modification of resources through T-SQL and enforce resource management through an
Azure Resource Manager template (ARM template) for a given subscription, the subscription level preview
features in Azure portal can be used. This is particularly useful when you are using Azure Policies to enforce
organizational standards through ARM templates. Since T-SQL does not adhere to the Azure Policies, a block on
T-SQL create or modify operations can be applied. The syntax blocked includes CRUD (create, update, delete)
statements for databases in Azure SQL, specifically CREATE DATABASE , ALTER DATABASE , and DROP DATABASE
statements.
T-SQL CRUD operations can be blocked via Azure portal, PowerShell, or Azure CLI.
Permissions
In order to register or remove this feature, the Azure user must be a member of the Owner or Contributor role
of the subscription.
Examples
The following section describes how you can register or unregister a preview feature with Microsoft.Sql
resource provider in Azure portal:
Register Block T-SQL CRUD
1. Go to your subscription on Azure portal.
2. Select the Preview Features tab.
3. Select Block T-SQL CRUD .
4. After you select Block T-SQL CRUD , a new window will open, select Register , to register this block with
Microsoft.Sql resource provider.
Re-register Microsoft.Sql resource provider
After you register the block of T-SQL CRUD with Microsoft.Sql resource provider, you must re-register the
Microsoft.Sql resource provider for the changes to take effect. To re-register the Microsoft.Sql resource provider:
1. Go to your subscription on Azure portal.
2. Select the Resource Providers tab.
3. Search and select Microsoft.Sql resource provider.
4. Select Re-register .
NOTE
The re-registration step is mandatory for the T-SQL block to be applied to your subscription.
Removing Block T-SQL CRUD
To remove the block on T-SQL create or modify operations from your subscription, first unregister the
previously registered T-SQL block. Then, re-register the Microsoft.Sql resource provider as shown above for the
removal of T-SQL block to take effect.
Next steps
An overview of Azure SQL Database security capabilities
Azure SQL Database security best practices
Manage databases in Azure SQL Database by using
Azure Automation
9/13/2022 • 2 minutes to read • Edit Online
NOTE
The Automation runbook may run from a range of IP addresses at any datacenter in an Azure region. To learn more, see
Automation region DNS records.
Next steps
Now that you've learned the basics of Azure Automation and how it can be used to manage Azure SQL
Database, follow these links to learn more about Azure Automation.
Azure Automation Overview
My first runbook
Automate management tasks using elastic jobs
(preview)
9/13/2022 • 10 minutes to read • Edit Online
Scope
Elastic Jobs: Any number of databases in Azure SQL Database and/or data warehouses in the same Azure cloud as the job agent. Targets can be in different servers, subscriptions, and/or regions. Target groups can be composed of individual databases or data warehouses, or all databases in a server, pool, or shard map (dynamically enumerated at job runtime).
SQL Agent: Any individual database in the same instance as the SQL agent. The Multi Server Administration feature of SQL Server Agent allows for master/target instances to coordinate job execution, though this feature is not available in SQL managed instance.

Supported APIs and Tools
Elastic Jobs: Portal, PowerShell, T-SQL, Azure Resource Manager
SQL Agent: T-SQL, SQL Server Management Studio (SSMS)
Elastic Job agent: The Azure resource you create to run and manage jobs. (Additional details for each component are below this list.)
Job database: A database in Azure SQL Database that the job agent uses to store job related data, job definitions, etc.
Target group: The set of servers, pools, databases, and shard maps to run a job against.
During job agent creation, a schema, tables, and a role called jobs_reader are created in the Job database. The
role is designed to give administrators finer-grained access control for job monitoring.
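For example, a user created for monitoring can be added to this role with T-SQL; the user name and password below are hypothetical:

-- Run in the job database.
CREATE USER [job_monitor_user] WITH PASSWORD = '<StrongPasswordHere>1!';
ALTER ROLE jobs_reader ADD MEMBER [job_monitor_user];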
IMPORTANT
Consider the security implications before granting access to the Job database as a database administrator. A malicious
user with permissions to create or edit jobs could create or edit a job that uses a stored credential to connect to a
database under the malicious user's control, which could allow the malicious user to determine the credential's password.
Target group
A target group defines the set of databases a job step will execute on. A target group can contain any number
and combination of the following:
Logical SQL server - if a server is specified, all databases that exist in the server at the time of the job
execution are part of the group. The master database credential must be provided so that the group can be
enumerated and updated prior to job execution. For more information on logical servers, see What is a
server in Azure SQL Database and Azure Synapse Analytics?.
Elastic pool - if an elastic pool is specified, all databases that are in the elastic pool at the time of the job
execution are part of the group. As for a server, the master database credential must be provided so that the
group can be updated prior to the job execution.
Single database - specify one or more individual databases to be part of the group.
Shard map - databases of a shard map.
TIP
At the moment of job execution, dynamic enumeration re-evaluates the set of databases in target groups that include
servers or pools. Dynamic enumeration ensures that jobs run across all databases that exist in the server or
pool at the time of job execution. Re-evaluating the list of databases at runtime is specifically useful for scenarios
where pool or server membership changes frequently.
Pools and single databases can be specified as included or excluded from the group. This enables creating a
target group with any combination of databases. For example, you can add a server to a target group, but
exclude specific databases in an elastic pool (or exclude an entire pool).
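As a sketch, such a target group could be defined in the job database with the jobs stored procedures; the server, database, and credential names below are hypothetical:

-- Run in the job database. Create a group covering a whole server,
-- then exclude one database from it.
EXEC jobs.sp_add_target_group @target_group_name = N'ServerGroup1';

EXEC jobs.sp_add_target_group_member
    @target_group_name = N'ServerGroup1',
    @target_type = N'SqlServer',
    @refresh_credential_name = N'RefreshCredential',
    @server_name = N'server1.database.windows.net';

EXEC jobs.sp_add_target_group_member
    @target_group_name = N'ServerGroup1',
    @target_type = N'SqlDatabase',
    @membership_type = N'Exclude',
    @server_name = N'server1.database.windows.net',
    @database_name = N'ExcludedDb';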
A target group can include databases in multiple subscriptions, and across multiple regions. Note that cross-
region executions have higher latency than executions within the same region.
The following examples show how different target group definitions are dynamically enumerated at the
moment of job execution to determine which databases the job will run:
Example 1 shows a target group that consists of a list of individual databases. When a job step is executed
using this target group, the job step's action will be executed in each of those databases.
Example 2 shows a target group that contains a server as a target. When a job step is executed using this target
group, the server is dynamically enumerated to determine the list of databases that are currently in the server.
The job step's action will be executed in each of those databases.
Example 3 shows a similar target group as Example 2, but an individual database is specifically excluded. The
job step's action will not be executed in the excluded database.
Example 4 shows a target group that contains an elastic pool as a target. Similar to Example 2, the pool will be
dynamically enumerated at job run time to determine the list of databases in the pool.
Example 5 and Example 6 show advanced scenarios where servers, elastic pools, and databases can be
combined using include and exclude rules.
Example 7 shows that the shards in a shard map can also be evaluated at job run time.
NOTE
The Job database itself can be the target of a job. In this scenario, the Job database is treated just like any other target
database. The job user must be created and granted sufficient permissions in the Job database, and the database scoped
credential for the job user must also exist in the Job database, just like it does for any other target database.
Job status
You can monitor Elastic Job executions in the Job database by querying the table jobs.job_executions.
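For example, recent executions can be listed with a query like the following sketch:

-- Run in the job database; newest executions first.
SELECT job_name, job_execution_id, lifecycle, start_time, end_time, last_message
FROM jobs.job_executions
ORDER BY start_time DESC;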
Agent performance, capacity, and limitations
Elastic Jobs use minimal compute resources while waiting for long-running jobs to complete.
Depending on the size of the target group of databases and the desired execution time for a job (number of
concurrent workers), the agent requires different amounts of compute and performance of the Job database
(the more targets and the higher number of jobs, the higher the amount of compute required).
Currently, the limit is 100 concurrent jobs.
Prevent jobs from reducing target database performance
To ensure resources aren't overburdened when running jobs against databases in a SQL elastic pool, jobs can be
configured to limit the number of databases a job can run against at the same time.
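A sketch of that configuration, assuming the hypothetical job and step names below, uses the max_parallelism setting of a job step:

-- Run in the job database: allow the step to run against at most
-- 2 databases of an elastic pool at the same time.
EXEC jobs.sp_update_jobstep
    @job_name = N'MyJob',
    @step_name = N'Step1',
    @max_parallelism = 2;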
Next steps
How to create and manage elastic jobs
Create and manage Elastic Jobs using PowerShell
Create and manage Elastic Jobs using Transact-SQL (T-SQL)
Create, configure, and manage elastic jobs (preview)
9/13/2022 • 5 minutes to read • Edit Online
Known limitations
These are the current limitations to the Elastic Jobs service. We're actively working to remove as many of these
limitations as possible.
The Elastic Job agent needs to be recreated and started in the new region after a failover/move to a new Azure region.
Details: The Elastic Jobs service stores all its job agent and job metadata in the jobs database. Any failover or move of Azure resources to a new Azure region will also move the jobs database, job agent and jobs metadata to the new Azure region. However, the Elastic Job agent is a compute-only resource and needs to be explicitly re-created and started in the new region before jobs will start executing again in the new region. Once started, the Elastic Job agent will resume executing jobs in the new region as per the previously defined job schedule.

Concurrent jobs limit.
Details: Currently, the preview is limited to 100 concurrent jobs.

Excessive audit logs from the Jobs database.
Details: The Elastic Job agent operates by constantly polling the Job database to check for the arrival of new jobs and other CRUD operations. If auditing is enabled on the server that houses a Jobs database, a large amount of audit logs may be generated by the Jobs database. This can be mitigated by filtering out these audit logs using the Set-AzSqlServerAudit command with a predicate expression.
For example:
Set-AzSqlServerAudit -ResourceGroupName "ResourceGroup01" -ServerName "Server01" -BlobStorageTargetState Enabled -StorageAccountResourceId "/subscriptions/7fe3301d-31d3-46211a890ba6e3/resourceGroups/resourcegroup01/providers/Microsoft.Storage/storageAccounts/m" -PredicateExpression "database_principal_name <> '##MS_JobAccount##'"
This command will only filter out Job Agent to Jobs database audit logs, not Job Agent to any target databases audit logs.

Private endpoints are not supported.
Details: Databases and elastic pools targeted by Elastic Jobs should have the "Allow Azure Services and resources to access this server" setting enabled at their server level in the current preview. If this setting is not enabled, the Elastic Job agent will not be able to execute jobs at those targets.
Similarly, a script must be able to execute successfully by logically testing for and countering any conditions it
finds.
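A simple illustration of that pattern in T-SQL (the table name below is hypothetical) checks for a condition before acting, so the script can be rerun safely:

-- Idempotent step: create the table only if it does not already exist.
IF OBJECT_ID(N'dbo.JobResults', N'U') IS NULL
BEGIN
    CREATE TABLE dbo.JobResults
    (
        internal_execution_id uniqueidentifier NOT NULL,
        collected_at datetime2 NOT NULL DEFAULT SYSUTCDATETIME()
    );
END;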
Next steps
Create and manage Elastic Jobs using PowerShell
Create and manage Elastic Jobs using Transact-SQL (T-SQL)
Create an Elastic Job agent using PowerShell
(preview)
9/13/2022 • 8 minutes to read • Edit Online
Prerequisites
The upgraded version of Elastic Database jobs has a new set of PowerShell cmdlets for use during migration.
These new cmdlets transfer all of your existing job credentials, targets (including databases, servers, custom
collections), job triggers, job schedules, job contents, and jobs over to a new Elastic Job agent.
Install the latest Elastic Jobs cmdlets
If you don't already have an Azure subscription, create a free account before you begin.
Install the Az.Sql module to get the latest Elastic Job cmdlets. Then run the following command in PowerShell with
administrative access to verify the module is available.
Get-Module Az.Sql
In addition to the Az.Sql module, this tutorial also requires the SqlServer PowerShell module. For details, see
Install SQL Server PowerShell module.
# create a server
Write-Output "Creating a server..."
$agentServerName = Read-Host "Please enter an agent server name"
$agentServerName = $agentServerName + "-" + [guid]::NewGuid()
$adminLogin = Read-Host "Please enter the server admin name"
$adminPassword = Read-Host "Please enter the server admin password"
$adminPasswordSecure = ConvertTo-SecureString -String $AdminPassword -AsPlainText -Force
$adminCred = New-Object -TypeName "System.Management.Automation.PSCredential" -ArgumentList $adminLogin,
$adminPasswordSecure
$agentServer = New-AzSqlServer -ResourceGroupName $resourceGroupName -Location $location `
-ServerName $agentServerName -ServerVersion "12.0" -SqlAdministratorCredentials ($adminCred)
# create a target server and sample databases - uses the same credentials
Write-Output "Creating target server..."
$targetServerName = Read-Host "Please enter a target server name"
$targetServerName = $targetServerName + "-" + [guid]::NewGuid()
$targetServer = New-AzSqlServer -ResourceGroupName $resourceGroupName -Location $location `
-ServerName $targetServerName -ServerVersion "12.0" -SqlAdministratorCredentials ($adminCred)
In addition to creating the credentials, note the GRANT commands in the following script. These permissions are
required for the script chosen for this example job. Because the example creates a new table in the targeted
databases, each target database needs the proper permissions to run successfully.
To create the required job credentials (in the job database), run the following script:
# in the master database (target server)
# create the master user login, master user, and job user login
$params = @{
'database' = 'master'
'serverInstance' = $targetServer.ServerName + '.database.windows.net'
'username' = $adminLogin
'password' = $adminPassword
'outputSqlErrors' = $true
'query' = 'CREATE LOGIN masteruser WITH PASSWORD=''password!123'''
}
Invoke-SqlCmd @params
$params.query = "CREATE USER masteruser FROM LOGIN masteruser"
Invoke-SqlCmd @params
$params.query = 'CREATE LOGIN jobuser WITH PASSWORD=''password!123'''
Invoke-SqlCmd @params
$targetDatabases | % {
$params.database = $_
$params.query = $createJobUserScript
Invoke-SqlCmd @params
$params.query = $grantAlterSchemaScript
Invoke-SqlCmd @params
$params.query = $grantCreateScript
Invoke-SqlCmd @params
}
After successful completion, you should see two new tables in TargetDb1, and only one new table in TargetDb2.
You can also schedule the job to run later. To schedule a job to run at a specific time, run the following command:
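One way to do this, sketched with the sp_update_job arguments documented later in this article (the job name and time below are hypothetical):

-- Run in the job database: enable the job and schedule a single run
-- at a specific UTC time.
EXEC jobs.sp_update_job
    @job_name = 'ResultsJob',
    @enabled = 1,
    @schedule_interval_type = 'Once',
    @schedule_start_time = '2022-10-01 12:00:00';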
Created: The job execution was just created and is not yet in progress.
WaitingForRetry: The job execution wasn't able to complete its action and is waiting to retry.
SucceededWithSkipped: The job execution has completed successfully, but some of its children were skipped.
Failed: The job execution has failed and exhausted its retries.
Clean up resources
Delete the Azure resources created in this tutorial by deleting the resource group.
TIP
If you plan to continue to work with these jobs, do not clean up the resources created in this article.
Next steps
In this tutorial, you ran a Transact-SQL script against a set of databases. You learned how to do the following
tasks:
Create an Elastic Job agent
Create job credentials so that jobs can execute scripts on its targets
Define the targets (servers, elastic pools, databases, shard maps) you want to run the job against
Create database scoped credentials in the target databases so the agent can connect and execute jobs
Create a job
Add a job step to the job
Start an execution of the job
Monitor the job
Manage Elastic Jobs using Transact-SQL
Use Transact-SQL (T-SQL) to create and manage
Elastic Database Jobs (preview)
9/13/2022 • 41 minutes to read • Edit Online
--Connect to the new job database specified when creating the Elastic Job agent
-- Create a database master key if one does not already exist, using your own password.
CREATE MASTER KEY ENCRYPTION BY PASSWORD='<EnterStrongPasswordHere>';
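The master key is typically followed by the database scoped credentials that the job agent uses to connect to targets and to enumerate databases in servers and pools; the following is a sketch with hypothetical identities and placeholder secrets:

-- Credential used by job steps to connect to target databases.
CREATE DATABASE SCOPED CREDENTIAL job_credential
    WITH IDENTITY = 'jobuser', SECRET = '<EnterStrongPasswordHere>';
-- Credential used to enumerate databases in a server or pool.
CREATE DATABASE SCOPED CREDENTIAL refresh_credential
    WITH IDENTITY = 'masteruser', SECRET = '<EnterStrongPasswordHere>';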
--View the recently created target group and target group members
SELECT * FROM jobs.target_groups WHERE target_group_name='ServerGroup1';
SELECT * FROM jobs.target_group_members WHERE target_group_name='ServerGroup1';
--Connect to the job database specified when creating the job agent
--View the recently created target group and target group members
SELECT * FROM [jobs].target_groups WHERE target_group_name = N'ServerGroup';
SELECT * FROM [jobs].target_group_members WHERE target_group_name = N'ServerGroup';
-- View the recently created target group and target group members
SELECT * FROM jobs.target_groups WHERE target_group_name = N'PoolGroup';
SELECT * FROM jobs.target_group_members WHERE target_group_name = N'PoolGroup';
--Connect to the job database specified when creating the job agent
EXEC jobs.sp_update_job
@job_name = 'ResultsJob',
@enabled=1,
@schedule_interval_type = 'Minutes',
@schedule_interval_count = 15;
Cancel a job
The following example shows how to cancel a job.
Connect to the job database and run the following command:
--Connect to the job database specified when creating the job agent
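-- The cancel statement itself appears to be missing here; as an illustrative
-- sketch (the execution id below is a hypothetical placeholder taken from
-- jobs.job_executions), an in-progress execution can be stopped with:
EXEC jobs.sp_stop_job '01234567-89ab-cdef-0123-456789abcdef';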
--Connect to the job database specified when creating the job agent
-- Delete history of a specific job's executions older than the specified date
EXEC jobs.sp_purge_jobhistory @job_name='ResultPoolsJob', @oldest_date='2016-07-01 00:00:00';
sp_add_job
Adds a new job.
Syntax
Arguments
[ @job_name = ] 'job_name'
The name of the job. The name must be unique and cannot contain the percent (%) character. job_name is
nvarchar(128), with no default.
[ @description = ] 'description'
The description of the job. description is nvarchar(512), with a default of NULL. If description is omitted, an
empty string is used.
[ @enabled = ] enabled
Whether the job's schedule is enabled. Enabled is bit, with a default of 0 (disabled). If 0, the job is not enabled
and does not run according to its schedule; however, it can be run manually. If 1, the job will run according to its
schedule, and can also be run manually.
[ @schedule_interval_type = ] schedule_interval_type
Value indicates when the job is to be executed. schedule_interval_type is nvarchar(50), with a default of Once,
and can be one of the following values:
'Once',
'Minutes',
'Hours',
'Days',
'Weeks',
'Months'
[ @schedule_interval_count = ] schedule_interval_count
Number of schedule_interval_count periods to occur between each execution of the job.
schedule_interval_count is int, with a default of 1. The value must be greater than or equal to 1.
[ @schedule_start_time = ] schedule_start_time
Date on which job execution can begin. schedule_start_time is DATETIME2, with the default of 0001-01-01
00:00:00.0000000.
[ @schedule_end_time = ] schedule_end_time
Date on which job execution can stop. schedule_end_time is DATETIME2, with the default of 9999-12-31
11:59:59.0000000.
[ @job_id = ] job_id OUTPUT
The job identification number assigned to the job if created successfully. job_id is an output variable of type
uniqueidentifier.
Return Code Values
0 (success) or 1 (failure)
Remarks
sp_add_job must be run from the job agent database specified when creating the job agent. After sp_add_job
has been executed to add a job, sp_add_jobstep can be used to add steps that perform the activities for the job.
The job's initial version number is 0, which will be incremented to 1 when the first step is added.
Permissions
By default, members of the sysadmin fixed server role can execute this stored procedure. To restrict a user to only being able to monitor jobs, you can grant the user membership in the following database role in the job agent database specified when creating the job agent:
jobs_reader
For details about the permissions of these roles, see the Permission section in this document. Only members of
sysadmin can use this stored procedure to edit the attributes of jobs that are owned by other users.
sp_update_job
Updates an existing job.
Syntax
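The syntax block isn't preserved in this copy; a sketch reconstructed from the argument list below:
[jobs].sp_update_job [ @job_name = ] 'job_name'
[ , [ @new_name = ] 'new_name' ]
[ , [ @description = ] 'description' ]
[ , [ @enabled = ] enabled ]
[ , [ @schedule_interval_type = ] schedule_interval_type ]
[ , [ @schedule_interval_count = ] schedule_interval_count ]
[ , [ @schedule_start_time = ] schedule_start_time ]
[ , [ @schedule_end_time = ] schedule_end_time ]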
Arguments
[ @job_name = ] 'job_name'
The name of the job to be updated. job_name is nvarchar(128).
[ @new_name = ] 'new_name'
The new name of the job. new_name is nvarchar(128).
[ @description = ] 'description'
The description of the job. description is nvarchar(512).
[ @enabled = ] enabled
Specifies whether the job's schedule is enabled (1) or not enabled (0). Enabled is bit.
[ @schedule_interval_type = ] schedule_interval_type
Value indicates when the job is to be executed. schedule_interval_type is nvarchar(50) and can be one of the
following values:
'Once',
'Minutes',
'Hours',
'Days',
'Weeks',
'Months'
[ @schedule_interval_count = ] schedule_interval_count
Number of schedule_interval_count periods to occur between each execution of the job.
schedule_interval_count is int, with a default of 1. The value must be greater than or equal to 1.
[ @schedule_start_time = ] schedule_start_time
Date on which job execution can begin. schedule_start_time is DATETIME2, with the default of 0001-01-01
00:00:00.0000000.
[ @schedule_end_time = ] schedule_end_time
Date on which job execution can stop. schedule_end_time is DATETIME2, with the default of 9999-12-31
11:59:59.0000000.
Return Code Values
0 (success) or 1 (failure)
Remarks
After sp_add_job has been executed to add a job, sp_add_jobstep can be used to add steps that perform the
activities for the job. The job's initial version number is 0, which will be incremented to 1 when the first step is
added.
Permissions
By default, members of the sysadmin fixed server role can execute this stored procedure. To restrict a user to only being able to monitor jobs, you can grant the user membership in the following database role in the job agent database specified when creating the job agent:
jobs_reader
For details about the permissions of these roles, see the Permission section in this document. Only members of
sysadmin can use this stored procedure to edit the attributes of jobs that are owned by other users.
sp_delete_job
Deletes an existing job.
Syntax
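The syntax block isn't preserved in this copy; a sketch reconstructed from the argument list below:
[jobs].sp_delete_job [ @job_name = ] 'job_name'
[ , [ @force = ] force ]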
Arguments
[ @job_name = ] 'job_name'
The name of the job to be deleted. job_name is nvarchar(128).
[ @force = ] force
Specifies whether to delete if the job has any executions in progress and cancel all in-progress executions (1) or
fail if any job executions are in progress (0). force is bit.
Return Code Values
0 (success) or 1 (failure)
Remarks
Job history is automatically deleted when a job is deleted.
Permissions
By default, members of the sysadmin fixed server role can execute this stored procedure. To restrict a user to only being able to monitor jobs, you can grant the user membership in the following database role in the job agent database specified when creating the job agent:
jobs_reader
For details about the permissions of these roles, see the Permission section in this document. Only members of
sysadmin can use this stored procedure to edit the attributes of jobs that are owned by other users.
sp_add_jobstep
Adds a step to a job.
Syntax
[jobs].sp_add_jobstep [ @job_name = ] 'job_name'
[ , [ @step_id = ] step_id ]
[ , [ @step_name = ] step_name ]
[ , [ @command_type = ] 'command_type' ]
[ , [ @command_source = ] 'command_source' ]
, [ @command = ] 'command'
, [ @credential_name = ] 'credential_name'
, [ @target_group_name = ] 'target_group_name'
[ , [ @initial_retry_interval_seconds = ] initial_retry_interval_seconds ]
[ , [ @maximum_retry_interval_seconds = ] maximum_retry_interval_seconds ]
[ , [ @retry_interval_backoff_multiplier = ] retry_interval_backoff_multiplier ]
[ , [ @retry_attempts = ] retry_attempts ]
[ , [ @step_timeout_seconds = ] step_timeout_seconds ]
[ , [ @output_type = ] 'output_type' ]
[ , [ @output_credential_name = ] 'output_credential_name' ]
[ , [ @output_subscription_id = ] 'output_subscription_id' ]
[ , [ @output_resource_group_name = ] 'output_resource_group_name' ]
[ , [ @output_server_name = ] 'output_server_name' ]
[ , [ @output_database_name = ] 'output_database_name' ]
[ , [ @output_schema_name = ] 'output_schema_name' ]
[ , [ @output_table_name = ] 'output_table_name' ]
[ , [ @job_version = ] job_version OUTPUT ]
[ , [ @max_parallelism = ] max_parallelism ]
Arguments
[ @job_name = ] 'job_name'
The name of the job to which to add the step. job_name is nvarchar(128).
[ @step_id = ] step_id
The sequence identification number for the job step. Step identification numbers start at 1 and increment
without gaps. If an existing step already has this ID, then that step and all following steps will have their IDs
incremented so that this new step can be inserted into the sequence. If not specified, the step_id will be
automatically assigned to the last in the sequence of steps. step_id is an int.
[ @step_name = ] step_name
The name of the step. Must be specified, except for the first step of a job that (for convenience) has a default
name of 'JobStep'. step_name is nvarchar(128).
[ @command_type = ] 'command_type'
The type of command that is executed by this jobstep. command_type is nvarchar(50), with a default value of
TSql, meaning that the value of the @command parameter is a T-SQL script.
If specified, the value must be TSql.
[ @command_source = ] 'command_source'
The type of location where the command is stored. command_source is nvarchar(50), with a default value of
Inline, meaning that the value of the @command parameter is the literal text of the command.
If specified, the value must be Inline.
[ @command = ] 'command'
The command must be a valid T-SQL script, which is then executed by this job step. command is nvarchar(max), with
a default of NULL.
[ @credential_name = ] 'credential_name'
The name of the database scoped credential stored in this job control database that is used to connect to each of
the target databases within the target group when this step is executed. credential_name is nvarchar(128).
[ @target_group_name = ] 'target_group_name'
The name of the target group that contains the target databases that the job step will be executed on.
target_group_name is nvarchar(128).
[ @initial_retry_interval_seconds = ] initial_retry_interval_seconds
The delay before the first retry attempt, if the job step fails on the initial execution attempt.
initial_retry_interval_seconds is int, with default value of 1.
[ @maximum_retry_interval_seconds = ] maximum_retry_interval_seconds
The maximum delay between retry attempts. If the delay between retries would grow larger than this value, it is
capped to this value instead. maximum_retry_interval_seconds is int, with default value of 120.
[ @retry_interval_backoff_multiplier = ] retry_interval_backoff_multiplier
The multiplier to apply to the retry delay if multiple job step execution attempts fail. For example, if the first retry
had a delay of 5 seconds and the backoff multiplier is 2.0, then the second retry will have a delay of 10 seconds
and the third retry will have a delay of 20 seconds. retry_interval_backoff_multiplier is real, with default value of
2.0.
[ @retry_attempts = ] retry_attempts
The number of times to retry execution if the initial attempt fails. For example, if the retry_attempts value is 10,
then there will be 1 initial attempt and 10 retry attempts, giving a total of 11 attempts. If the final retry attempt
fails, then the job execution will terminate with a lifecycle of Failed. retry_attempts is int, with default value of 10.
[ @step_timeout_seconds = ] step_timeout_seconds
The maximum amount of time allowed for the step to execute. If this time is exceeded, then the job execution
will terminate with a lifecycle of TimedOut. step_timeout_seconds is int, with default value of 43,200 seconds (12
hours).
[ @output_type = ] 'output_type'
If not null, the type of destination that the command's first result set is written to. output_type is nvarchar(50),
with a default of NULL.
If specified, the value must be SqlDatabase.
[ @output_credential_name = ] 'output_credential_name'
If not null, the name of the database scoped credential that is used to connect to the output destination
database. Must be specified if output_type equals SqlDatabase. output_credential_name is nvarchar(128), with a
default value of NULL.
[ @output_subscription_id = ] 'output_subscription_id'
Needs description.
[ @output_resource_group_name = ] 'output_resource_group_name'
Needs description.
[ @output_server_name = ] 'output_server_name'
If not null, the fully qualified DNS name of the server that contains the output destination database. Must be
specified if output_type equals SqlDatabase. output_server_name is nvarchar(256), with a default of NULL.
[ @output_database_name = ] 'output_database_name'
If not null, the name of the database that contains the output destination table. Must be specified if output_type
equals SqlDatabase. output_database_name is nvarchar(128), with a default of NULL.
[ @output_schema_name = ] 'output_schema_name'
If not null, the name of the SQL schema that contains the output destination table. If output_type equals
SqlDatabase, the default value is dbo. output_schema_name is nvarchar(128).
[ @output_table_name = ] 'output_table_name'
If not null, the name of the table that the command's first result set will be written to. If the table doesn't already
exist, it will be created based on the schema of the returning result-set. Must be specified if output_type equals
SqlDatabase. output_table_name is nvarchar(128), with a default value of NULL.
[ @job_version = ] job_version OUTPUT
Output parameter that will be assigned the new job version number. job_version is int.
[ @max_parallelism = ] max_parallelism
The maximum level of parallelism per elastic pool. If set, then the job step will be restricted to only run on a
maximum of that many databases per elastic pool. This applies to each elastic pool that is either directly
included in the target group or is inside a server that is included in the target group. max_parallelism is int.
Return Code Values
0 (success) or 1 (failure)
Remarks
When sp_add_jobstep succeeds, the job's current version number is incremented. The next time the job is
executed, the new version will be used. If the job is currently executing, that execution will not contain the new
step.
Permissions
By default, members of the sysadmin fixed server role can execute this stored procedure. To restrict a user to only being able to monitor jobs, you can grant the user membership in the following database role in the job agent database specified when creating the job agent:
jobs_reader
For details about the permissions of these roles, see the Permission section in this document. Only members of
sysadmin can use this stored procedure to edit the attributes of jobs that are owned by other users.
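As a usage illustration (a minimal sketch; the job and target group names are the ones used elsewhere in this article, while the credential name and command are placeholders):
--Connect to the job database specified when creating the job agent
EXEC jobs.sp_add_jobstep
@job_name = 'ResultsJob',
@command = N'SELECT DB_NAME() AS database_name;',
@credential_name = 'myjobcred',
@target_group_name = 'ServerGroup1';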
sp_update_jobstep
Updates a job step.
Syntax
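The syntax block isn't preserved in this copy; a sketch reconstructed from the argument list below (bracketing of optional parameters is approximate):
[jobs].sp_update_jobstep [ @job_name = ] 'job_name'
[ , [ @step_id = ] step_id ]
[ , [ @step_name = ] 'step_name' ]
[ , [ @new_id = ] new_id ]
[ , [ @new_name = ] 'new_name' ]
[ , [ @command_type = ] 'command_type' ]
[ , [ @command_source = ] 'command_source' ]
[ , [ @command = ] 'command' ]
[ , [ @credential_name = ] 'credential_name' ]
[ , [ @target_group_name = ] 'target_group_name' ]
[ , [ @initial_retry_interval_seconds = ] initial_retry_interval_seconds ]
[ , [ @maximum_retry_interval_seconds = ] maximum_retry_interval_seconds ]
[ , [ @retry_interval_backoff_multiplier = ] retry_interval_backoff_multiplier ]
[ , [ @retry_attempts = ] retry_attempts ]
[ , [ @step_timeout_seconds = ] step_timeout_seconds ]
[ , [ @output_type = ] 'output_type' ]
[ , [ @output_credential_name = ] 'output_credential_name' ]
[ , [ @output_server_name = ] 'output_server_name' ]
[ , [ @output_database_name = ] 'output_database_name' ]
[ , [ @output_schema_name = ] 'output_schema_name' ]
[ , [ @output_table_name = ] 'output_table_name' ]
[ , [ @job_version = ] job_version OUTPUT ]
[ , [ @max_parallelism = ] max_parallelism ]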
Arguments
[ @job_name = ] 'job_name'
The name of the job to which the step belongs. job_name is nvarchar(128).
[ @step_id = ] step_id
The identification number for the job step to be modified. Either step_id or step_name must be specified. step_id
is an int.
[ @step_name = ] 'step_name'
The name of the step to be modified. Either step_id or step_name must be specified. step_name is nvarchar(128).
[ @new_id = ] new_id
The new sequence identification number for the job step. Step identification numbers start at 1 and increment
without gaps. If a step is reordered, then other steps will be automatically renumbered.
[ @new_name = ] 'new_name'
The new name of the step. new_name is nvarchar(128).
[ @command_type = ] 'command_type'
The type of command that is executed by this jobstep. command_type is nvarchar(50), with a default value of
TSql, meaning that the value of the @command parameter is a T-SQL script.
If specified, the value must be TSql.
[ @command_source = ] 'command_source'
The type of location where the command is stored. command_source is nvarchar(50), with a default value of
Inline, meaning that the value of the @command parameter is the literal text of the command.
If specified, the value must be Inline.
[ @command = ] 'command'
The command must be a valid T-SQL script, which is then executed by this job step. command is nvarchar(max),
with a default of NULL.
[ @credential_name = ] 'credential_name'
The name of the database scoped credential stored in this job control database that is used to connect to each of
the target databases within the target group when this step is executed. credential_name is nvarchar(128).
[ @target_group_name = ] 'target_group_name'
The name of the target group that contains the target databases that the job step will be executed on.
target_group_name is nvarchar(128).
[ @initial_retry_interval_seconds = ] initial_retry_interval_seconds
The delay before the first retry attempt, if the job step fails on the initial execution attempt.
initial_retry_interval_seconds is int, with default value of 1.
[ @maximum_retry_interval_seconds = ] maximum_retry_interval_seconds
The maximum delay between retry attempts. If the delay between retries would grow larger than this value, it is
capped to this value instead. maximum_retry_interval_seconds is int, with default value of 120.
[ @retry_interval_backoff_multiplier = ] retry_interval_backoff_multiplier
The multiplier to apply to the retry delay if multiple job step execution attempts fail. For example, if the first retry
had a delay of 5 seconds and the backoff multiplier is 2.0, then the second retry will have a delay of 10 seconds
and the third retry will have a delay of 20 seconds. retry_interval_backoff_multiplier is real, with default value of
2.0.
[ @retry_attempts = ] retry_attempts
The number of times to retry execution if the initial attempt fails. For example, if the retry_attempts value is 10,
then there will be 1 initial attempt and 10 retry attempts, giving a total of 11 attempts. If the final retry attempt
fails, then the job execution will terminate with a lifecycle of Failed. retry_attempts is int, with default value of 10.
[ @step_timeout_seconds = ] step_timeout_seconds
The maximum amount of time allowed for the step to execute. If this time is exceeded, then the job execution
will terminate with a lifecycle of TimedOut. step_timeout_seconds is int, with default value of 43,200 seconds (12
hours).
[ @output_type = ] 'output_type'
If not null, the type of destination that the command's first result set is written to. To reset the value of
output_type back to NULL, set this parameter's value to '' (empty string). output_type is nvarchar(50), with a
default of NULL.
If specified, the value must be SqlDatabase.
[ @output_credential_name = ] 'output_credential_name'
If not null, the name of the database scoped credential that is used to connect to the output destination
database. Must be specified if output_type equals SqlDatabase. To reset the value of output_credential_name
back to NULL, set this parameter's value to '' (empty string). output_credential_name is nvarchar(128), with a
default value of NULL.
[ @output_server_name = ] 'output_server_name'
If not null, the fully qualified DNS name of the server that contains the output destination database. Must be
specified if output_type equals SqlDatabase. To reset the value of output_server_name back to NULL, set this
parameter's value to '' (empty string). output_server_name is nvarchar(256), with a default of NULL.
[ @output_database_name = ] 'output_database_name'
If not null, the name of the database that contains the output destination table. Must be specified if output_type
equals SqlDatabase. To reset the value of output_database_name back to NULL, set this parameter's value to ''
(empty string). output_database_name is nvarchar(128), with a default of NULL.
[ @output_schema_name = ] 'output_schema_name'
If not null, the name of the SQL schema that contains the output destination table. If output_type equals
SqlDatabase, the default value is dbo. To reset the value of output_schema_name back to NULL, set this
parameter's value to '' (empty string). output_schema_name is nvarchar(128).
[ @output_table_name = ] 'output_table_name'
If not null, the name of the table that the command's first result set will be written to. If the table doesn't already
exist, it will be created based on the schema of the returning result-set. Must be specified if output_type equals
SqlDatabase. To reset the value of output_table_name back to NULL, set this parameter's value to '' (empty
string). output_table_name is nvarchar(128), with a default value of NULL.
[ @job_version = ] job_version OUTPUT
Output parameter that will be assigned the new job version number. job_version is int.
[ @max_parallelism = ] max_parallelism
The maximum level of parallelism per elastic pool. If set, then the job step will be restricted to only run on a
maximum of that many databases per elastic pool. This applies to each elastic pool that is either directly
included in the target group or is inside a server that is included in the target group. To reset the value of
max_parallelism back to null, set this parameter's value to -1. max_parallelism is int.
Return Code Values
0 (success) or 1 (failure)
Remarks
Any in-progress executions of the job will not be affected. When sp_update_jobstep succeeds, the job's version
number is incremented. The next time the job is executed, the new version will be used.
Permissions
By default, members of the sysadmin fixed server role can execute this stored procedure. To restrict a user to only being able to monitor jobs, you can grant the user membership in the following database role in the job agent database specified when creating the job agent:
jobs_reader
For details about the permissions of these roles, see the Permission section in this document. Only members of sysadmin can use this stored procedure to edit the attributes of jobs that are owned by other users.
sp_delete_jobstep
Removes a job step from a job.
Syntax
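The syntax block isn't preserved in this copy; a sketch reconstructed from the argument list below:
[jobs].sp_delete_jobstep [ @job_name = ] 'job_name'
[ , [ @step_id = ] step_id ]
[ , [ @step_name = ] 'step_name' ]
[ , [ @job_version = ] job_version OUTPUT ]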
Arguments
[ @job_name = ] 'job_name'
The name of the job from which the step will be removed. job_name is nvarchar(128), with no default.
[ @step_id = ] step_id
The identification number for the job step to be deleted. Either step_id or step_name must be specified. step_id is
an int.
[ @step_name = ] 'step_name'
The name of the step to be deleted. Either step_id or step_name must be specified. step_name is nvarchar(128).
[ @job_version = ] job_version OUTPUT
Output parameter that will be assigned the new job version number. job_version is int.
Return Code Values
0 (success) or 1 (failure)
Remarks
Any in-progress executions of the job will not be affected. When sp_delete_jobstep succeeds, the job's version
number is incremented. The next time the job is executed, the new version will be used.
The other job steps will be automatically renumbered to fill the gap left by the deleted job step.
Permissions
By default, members of the sysadmin fixed server role can execute this stored procedure. To restrict a user to only being able to monitor jobs, you can grant the user membership in the following database role in the job agent database specified when creating the job agent:
jobs_reader
For details about the permissions of these roles, see the Permission section in this document. Only members of
sysadmin can use this stored procedure to edit the attributes of jobs that are owned by other users.
sp_start_job
Starts executing a job.
Syntax
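The syntax block isn't preserved in this copy; a sketch reconstructed from the argument list below:
[jobs].sp_start_job [ @job_name = ] 'job_name'
[ , [ @job_execution_id = ] job_execution_id OUTPUT ]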
Arguments
[ @job_name = ] 'job_name'
The name of the job to start. job_name is nvarchar(128), with no default.
[ @job_execution_id = ] job_execution_id OUTPUT
Output parameter that will be assigned the job execution's ID. job_execution_id is uniqueidentifier.
Return Code Values
0 (success) or 1 (failure)
Remarks
None.
Permissions
By default, members of the sysadmin fixed server role can execute this stored procedure. To restrict a user to only being able to monitor jobs, you can grant the user membership in the following database role in the job agent database specified when creating the job agent:
jobs_reader
For details about the permissions of these roles, see the Permission section in this document. Only members of
sysadmin can use this stored procedure to edit the attributes of jobs that are owned by other users.
sp_stop_job
Stops a job execution.
Syntax
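The syntax block isn't preserved in this copy; a sketch reconstructed from the argument list below:
[jobs].sp_stop_job [ @job_execution_id = ] job_execution_id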
Arguments
[ @job_execution_id = ] job_execution_id
The identification number of the job execution to stop. job_execution_id is uniqueidentifier, with default of NULL.
Return Code Values
0 (success) or 1 (failure)
Remarks
None.
Permissions
By default, members of the sysadmin fixed server role can execute this stored procedure. To restrict a user to only being able to monitor jobs, you can grant the user membership in the following database role in the job agent database specified when creating the job agent:
jobs_reader
For details about the permissions of these roles, see the Permission section in this document. Only members of
sysadmin can use this stored procedure to edit the attributes of jobs that are owned by other users.
sp_add_target_group
Adds a target group.
Syntax
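The syntax block isn't preserved in this copy; a sketch reconstructed from the argument list below:
[jobs].sp_add_target_group [ @target_group_name = ] 'target_group_name'
[ , [ @target_group_id = ] target_group_id OUTPUT ]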
Arguments
[ @target_group_name = ] 'target_group_name'
The name of the target group to create. target_group_name is nvarchar(128), with no default.
[ @target_group_id = ] target_group_id OUTPUT
The target group identification number assigned to the target group if created successfully. target_group_id is an output variable of type uniqueidentifier, with a default of NULL.
Return Code Values
0 (success) or 1 (failure)
Remarks
Target groups provide an easy way to target a job at a collection of databases.
Permissions
By default, members of the sysadmin fixed server role can execute this stored procedure. To restrict a user to only being able to monitor jobs, you can grant the user membership in the following database role in the job agent database specified when creating the job agent:
jobs_reader
For details about the permissions of these roles, see the Permission section in this document. Only members of
sysadmin can use this stored procedure to edit the attributes of jobs that are owned by other users.
sp_delete_target_group
Deletes a target group.
Syntax
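The syntax block isn't preserved in this copy; a sketch reconstructed from the argument list below:
[jobs].sp_delete_target_group [ @target_group_name = ] 'target_group_name'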
Arguments
[ @target_group_name = ] 'target_group_name'
The name of the target group to delete. target_group_name is nvarchar(128), with no default.
Return Code Values
0 (success) or 1 (failure)
Remarks
None.
Permissions
By default, members of the sysadmin fixed server role can execute this stored procedure. To restrict a user to only being able to monitor jobs, you can grant the user membership in the following database role in the job agent database specified when creating the job agent:
jobs_reader
For details about the permissions of these roles, see the Permission section in this document. Only members of
sysadmin can use this stored procedure to edit the attributes of jobs that are owned by other users.
sp_add_target_group_member
Adds a database or group of databases to a target group.
Syntax
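The syntax block isn't preserved in this copy; a sketch reconstructed from the argument list below (bracketing of optional parameters is approximate):
[jobs].sp_add_target_group_member [ @target_group_name = ] 'target_group_name'
[ , [ @membership_type = ] 'membership_type' ]
, [ @target_type = ] 'target_type'
[ , [ @refresh_credential_name = ] 'refresh_credential_name' ]
[ , [ @server_name = ] 'server_name' ]
[ , [ @database_name = ] 'database_name' ]
[ , [ @elastic_pool_name = ] 'elastic_pool_name' ]
[ , [ @shard_map_name = ] 'shard_map_name' ]
[ , [ @target_id = ] target_id OUTPUT ]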
Arguments
[ @target_group_name = ] 'target_group_name'
The name of the target group to which the member will be added. target_group_name is nvarchar(128), with no
default.
[ @membership_type = ] 'membership_type'
Specifies if the target group member will be included or excluded. membership_type is nvarchar(128), with
default of 'Include'. Valid values for membership_type are 'Include' or 'Exclude'.
[ @target_type = ] 'target_type'
The type of target database or collection of databases including all databases in a server, all databases in an
Elastic pool, all databases in a shard map, or an individual database. target_type is nvarchar(128), with no
default. Valid values for target_type are 'SqlServer', 'SqlElasticPool', 'SqlDatabase', or 'SqlShardMap'.
[ @refresh_credential_name = ] 'refresh_credential_name'
The name of the database scoped credential. refresh_credential_name is nvarchar(128), with no default.
[ @server_name = ] 'server_name'
The name of the server that should be added to the specified target group. server_name should be specified
when target_type is 'SqlServer'. server_name is nvarchar(128), with no default.
[ @database_name = ] 'database_name'
The name of the database that should be added to the specified target group. database_name should be
specified when target_type is 'SqlDatabase'. database_name is nvarchar(128), with no default.
[ @elastic_pool_name = ] 'elastic_pool_name'
The name of the Elastic pool that should be added to the specified target group. elastic_pool_name should be
specified when target_type is 'SqlElasticPool'. elastic_pool_name is nvarchar(128), with no default.
[ @shard_map_name = ] 'shard_map_name'
The name of the shard map that should be added to the specified target group. shard_map_name should be specified when target_type is 'SqlShardMap'. shard_map_name is nvarchar(128), with no default.
[ @target_id = ] target_id OUTPUT
The target identification number assigned to the target group member if it is successfully added to the target group. target_id is an output variable of type uniqueidentifier, with a default of NULL.
Return Code Values
0 (success) or 1 (failure)
Remarks
When a server or elastic pool is included in the target group, the job executes on all single databases in that server or elastic pool at the time of execution.
Permissions
By default, members of the sysadmin fixed server role can execute this stored procedure. To restrict a user to only being able to monitor jobs, you can grant the user membership in the following database role in the job agent database specified when creating the job agent:
jobs_reader
For details about the permissions of these roles, see the Permission section in this document. Only members of
sysadmin can use this stored procedure to edit the attributes of jobs that are owned by other users.
Examples
The following example adds all the databases in the London and NewYork servers to the group Servers
Maintaining Customer Information. You must connect to the jobs database specified when creating the job
agent, in this case ElasticJobs.
--Connect to the jobs database specified when creating the job agent
USE ElasticJobs;
GO
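The EXEC statements themselves aren't preserved in this copy. A minimal sketch based on the arguments above (the refresh credential name is a placeholder):
EXEC jobs.sp_add_target_group_member
@target_group_name = N'Servers Maintaining Customer Information',
@target_type = N'SqlServer',
@refresh_credential_name = N'myrefreshcred',
@server_name = N'London.database.windows.net';
EXEC jobs.sp_add_target_group_member
@target_group_name = N'Servers Maintaining Customer Information',
@target_type = N'SqlServer',
@refresh_credential_name = N'myrefreshcred',
@server_name = N'NewYork.database.windows.net';
GO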
sp_delete_target_group_member
Removes a target group member from a target group.
Syntax
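The syntax block isn't preserved in this copy; a sketch reconstructed from the argument list below:
[jobs].sp_delete_target_group_member [ @target_group_name = ] 'target_group_name'
, [ @target_id = ] target_id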
Arguments
[ @target_group_name = ] 'target_group_name'
The name of the target group from which to remove the target group member. target_group_name is
nvarchar(128), with no default.
[ @target_id = ] target_id
The target identification number assigned to the target group member to be removed. target_id is a
uniqueidentifier, with a default of NULL.
Return Code Values
0 (success) or 1 (failure)
Remarks
Target groups provide an easy way to target a job at a collection of databases.
Permissions
By default, members of the sysadmin fixed server role can execute this stored procedure. To restrict a user to only being able to monitor jobs, you can grant the user membership in the following database role in the job agent database specified when creating the job agent:
jobs_reader
For details about the permissions of these roles, see the Permission section in this document. Only members of
sysadmin can use this stored procedure to edit the attributes of jobs that are owned by other users.
Examples
The following example removes the London server from the group Servers Maintaining Customer Information.
You must connect to the jobs database specified when creating the job agent, in this case ElasticJobs.
--Connect to the jobs database specified when creating the job agent
USE ElasticJobs ;
GO
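The EXEC statement itself isn't preserved in this copy. A minimal sketch based on the arguments above, which first looks up the member's target_id from the target_group_members view (the server_name column on that view is assumed here):
DECLARE @tid uniqueidentifier;
SELECT @tid = target_id FROM jobs.target_group_members
WHERE target_group_name = N'Servers Maintaining Customer Information'
AND server_name = N'London.database.windows.net';
EXEC jobs.sp_delete_target_group_member
@target_group_name = N'Servers Maintaining Customer Information',
@target_id = @tid;
GO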
sp_purge_jobhistory
Removes the history records for a job.
Syntax
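The syntax block isn't preserved in this copy; a sketch reconstructed from the argument list below:
[jobs].sp_purge_jobhistory [ @job_name = ] 'job_name'
[ , [ @job_id = ] job_id ]
[ , [ @oldest_date = ] oldest_date ]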
Arguments
[ @job_name = ] 'job_name'
The name of the job for which to delete the history records. job_name is nvarchar(128), with a default of NULL.
Either job_id or job_name must be specified, but both cannot be specified.
[ @job_id = ] job_id
The job identification number of the job for the records to be deleted. job_id is uniqueidentifier, with a default of
NULL. Either job_id or job_name must be specified, but both cannot be specified.
[ @oldest_date = ] oldest_date
The oldest record to retain in the history. oldest_date is DATETIME2, with a default of NULL. When oldest_date is
specified, sp_purge_jobhistory only removes records that are older than the value specified.
Return Code Values
0 (success) or 1 (failure)
Remarks
Target groups provide an easy way to target a job at a collection of databases.
Permissions
By default, members of the sysadmin fixed server role can execute this stored procedure. To restrict a user to only being able to monitor jobs, you can grant the user membership in the following database role in the job agent database specified when creating the job agent:
jobs_reader
For details about the permissions of these roles, see the Permission section in this document. Only members of
sysadmin can use this stored procedure to edit the attributes of jobs that are owned by other users.
Examples
The following example removes the execution history for the job ResultPoolsJob that is older than a specified date. You must connect to the jobs database specified when creating the job agent, in this case ElasticJobs.
--Connect to the jobs database specified when creating the job agent
EXEC jobs.sp_purge_jobhistory
@job_name = N'ResultPoolsJob',
@oldest_date = '2016-07-01 00:00:00';
GO
Job views
The following views are available in the jobs database.
job_executions view
[jobs].[job_executions]
Shows job execution history.
schedule_start_time datetime2(7) Date and time the job was last started for execution.
job_versions view
[jobs].[job_versions]
Shows all job versions.
jobsteps view
[jobs].[jobsteps]
Shows all steps in the current version of each job.
step_name nvarchar(128) Unique (for this job) name for the step.
initial_retry_interval_seconds int The delay before the first retry attempt. Default value is 1.
jobstep_versions view
[jobs].[jobstep_versions]
Shows all steps in all versions of each job. The schema is identical to jobsteps.
target_groups view
[jobs].[target_groups]
Lists all target groups.
target_group_members view
[jobs].[target_group_members]
Shows all members of all target groups.
Resources
Transact-SQL Syntax Conventions
Next steps
Create and manage Elastic Jobs using PowerShell
Authorization and Permissions
Migrate to the new Elastic Database jobs (preview)
9/13/2022 • 12 minutes to read
Prerequisites
The upgraded version of Elastic Database jobs has a new set of PowerShell cmdlets for use during migration.
These new cmdlets transfer all of your existing job credentials, targets (including databases, servers, custom
collections), job triggers, job schedules, job contents, and jobs over to a new Elastic Job agent.
Install the latest Elastic Jobs cmdlets
If you don't already have an Azure subscription, create a free account before you begin.
Install the Az.Sql 1.1.1-preview module to get the latest Elastic Job cmdlets. Run the following commands in
PowerShell with administrative access.
# Installs the latest PackageManagement powershell package which PowerShellGet v1.6.5 is dependent on
Find-Package PackageManagement -RequiredVersion 1.1.7.2 | Install-Package -Force
# Installs the latest PowerShellGet module which adds the -AllowPrerelease flag to Install-Module
Find-Package PowerShellGet -RequiredVersion 1.6.5 | Install-Package -Force
# Places Az.Sql preview cmdlets side by side with existing Az.Sql version
Install-Module -Name Az.Sql -RequiredVersion 1.1.1-preview -AllowPrerelease
# Confirm if module successfully imported - if the imported version is 1.1.1, then continue
Get-Module Az.Sql
# Register your subscription for the Elastic Jobs public preview feature
Register-AzProviderFeature -FeatureName sqldb-JobAccounts -ProviderNamespace Microsoft.Sql
# Get an existing database to use as the job database - or create a new one if necessary
$db = Get-AzSqlDatabase -ResourceGroupName <resourceGroupName> -ServerName <serverName> -DatabaseName
<databaseName>
# Create a new elastic job agent
$agent = $db | New-AzSqlElasticJobAgent -Name <agentName>
Migration
Now that both the old and new Elastic Jobs cmdlets are initialized, migrate your job credentials, targets, and jobs
to the new job database.
Setup
$ErrorActionPreference = "Stop";
Migrate credentials
$oldCreds = Get-AzureSqlJobCredential
$oldCreds | % {
$oldCredName = $_.CredentialName
$oldUserName = $_.UserName
Write-Output ("Credential " + $oldCredName)
$oldCredential = Get-Credential -UserName $oldUserName `
-Message ("Please enter in the password that was used for your credential " +
$oldCredName)
try
{
$cred = New-AzSqlElasticJobCredential -ParentObject $agent -Name $oldCredName -Credential
$oldCredential
}
catch [System.Management.Automation.PSArgumentException]
{
$cred = Get-AzSqlElasticJobCredential -ParentObject $agent -Name $oldCredName
$cred = Set-AzSqlElasticJobCredential -InputObject $cred -Credential $oldCredential
}
To migrate your credentials, execute the following command by passing in the $agent PowerShell object from
earlier.
Migrate-Credentials $agent
Sample output
Migrate targets
# Flatten list
for ($i=$targetGroups.Count - 1; $i -ge 0; $i--)
{
# Fetch target group's initial list of targets unexpanded
$targets = $targetGroups[$i]
$expandedTargets = $targetGroups[$target.TargetDescription.CustomCollectionName]
# Migrate server target from old jobs to new job's target group
function Add-ServerTarget ($target, $tg) {
$jobTarget = Get-AzureSqlJobTarget -TargetId $target.TargetId
$serverName = $jobTarget.ServerName
$credName = $jobTarget.MasterDatabaseCredentialName
$t = Add-AzSqlElasticJobTarget -ParentObject $tg -ServerName $serverName -RefreshCredentialName $credName
}
# Migrate database target from old jobs to new job's target group
function Add-DatabaseTarget ($target, $tg) {
$jobTarget = Get-AzureSqlJobTarget -TargetId $target.TargetId
$serverName = $jobTarget.ServerName
$databaseName = $jobTarget.DatabaseName
$exclude = $target.Membership
return $tgName
}
return $tgName
}
To migrate your targets (servers, databases, and custom collections) to your new job database, execute the
Migrate-TargetGroups cmdlet to perform the following:
Root level targets that are servers and databases will be migrated to a new target group named "
(<serverName>, <databaseName>)" containing only the root level target.
A custom collection will migrate to a new target group containing all child targets.
Migrate-TargetGroups $agent
Sample output:
Migrate jobs
$oldJobs = Get-AzureSqlJob
$newJobs = [System.Collections.ArrayList] @()
# Schedule
$oldJobTriggers = Get-AzureSqlJobTrigger -JobName $oldJob.JobName
if ($oldJobTriggers.Count -ge 1)
{
foreach ($trigger in $oldJobTriggers)
{
# Migrates jobs
function Setup-Job ($job, $agent) {
$jobName = $newJob.JobName
$jobDescription = $newJob.Description
try {
$job = New-AzSqlElasticJob -ParentObject $agent -Name $jobName `
-Description $jobDescription -IntervalType $intervalType -IntervalCount $intervalCount `
-StartTime $startTime -EndTime $endTime
return $job
}
catch [System.Management.Automation.PSArgumentException] {
$job = Get-AzSqlElasticJob -ParentObject $agent -Name $jobName
$job = $job | Set-AzSqlElasticJob -Description $jobDescription -IntervalType $intervalType -
IntervalCount $intervalCount `
-StartTime $startTime -EndTime $endTime
return $job
}
}
# Create or update a job that runs once
else {
try {
$job = New-AzSqlElasticJob -ParentObject $agent -Name $jobName `
-Description $jobDescription -RunOnce
return $job
}
catch [System.Management.Automation.PSArgumentException] {
$job = Get-AzSqlElasticJob -ParentObject $agent -Name $jobName
$job = $job | Set-AzSqlElasticJob -Description $jobDescription -RunOnce
return $job
}
}
}
# Migrates job steps
function Setup-JobStep ($newJob, $job) {
$defaultJobStepName = 'JobStep'
$contentName = $newJob.Description
$commandText = (Get-AzureSqlJobContentDefinition -ContentName $contentName).CommandText
$targetGroupName = $newJob.TargetGroupName
$credentialName = $newJob.CredentialName
$output = $newJob.Output
try {
$jobStep = $job | Add-AzSqlElasticJobStep -Name $defaultJobStepName `
-TargetGroupName $targetGroupName -CredentialName $credentialName -CommandText $commandText `
-OutputDatabaseObject $outputDatabase `
-OutputSchemaName $outputSchemaName -OutputTableName $outputTableName `
-OutputCredentialName $outputCredentialName
}
catch [System.Management.Automation.PSArgumentException] {
$jobStep = $job | Get-AzSqlElasticJobStep -Name $defaultJobStepName
$jobStep = $jobStep | Set-AzSqlElasticJobStep -TargetGroupName $targetGroupName `
-CredentialName $credentialName -CommandText $commandText `
-OutputDatabaseObject $outputDatabase `
-OutputSchemaName $outputSchemaName -OutputTableName $outputTableName `
-OutputCredentialName $outputCredentialName
}
}
else {
try {
$jobStep = $job | Add-AzSqlElasticJobStep -Name $defaultJobStepName -TargetGroupName $targetGroupName
-CredentialName $credentialName -CommandText $commandText
}
catch [System.Management.Automation.PSArgumentException] {
$jobStep = $job | Get-AzSqlElasticJobStep -Name $defaultJobStepName
$jobStep = $jobStep | Set-AzSqlElasticJobStep -TargetGroupName $targetGroupName -CredentialName
$credentialName -CommandText $commandText
}
}
Log-ChildOutput ("Added step " + $jobStep.StepName + " using target group " + $jobStep.TargetGroupName + "
using credential " + $jobStep.CredentialName)
Log-ChildOutput("Command text script taken from content name " + $contentName)
To migrate your jobs, job content, job triggers, and job schedules over to your new Elastic Job agent's database,
execute the Migrate-Jobs cmdlet passing in your agent.
Jobs with multiple triggers with different schedules are separated into multiple jobs with naming scheme: "
<jobName> (<scheduleName>)".
Job contents are migrated to a job by adding a default job step named JobStep with associated command
text.
Jobs are disabled by default so that you can validate them before enabling them.
Migrate-Jobs $agent
Sample output:
Migration Complete
The job database should now have all of the job credentials, targets, job triggers, job schedules, job contents,
and jobs migrated over.
To confirm that everything migrated correctly, use the following scripts:
$jobs | Start-AzSqlElasticJob
For any jobs that were running on a schedule, remember to enable them so that they can run in the background:
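The enabling script isn't preserved in this copy. A minimal PowerShell sketch, assuming the Az.Sql preview module installed above and its Get-AzSqlElasticJob and Set-AzSqlElasticJob cmdlets (the -Enable switch is assumed here to turn the migrated jobs back on):
$jobs = Get-AzSqlElasticJob -ParentObject $agent
$jobs | Set-AzSqlElasticJob -Enable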
Next steps
Create and manage Elastic Jobs using PowerShell
Create and manage Elastic Jobs using Transact-SQL (T-SQL)
Write audit to a storage account behind VNet and
firewall
9/13/2022 • 4 minutes to read
Background
Azure Virtual Network (VNet) is the fundamental building block for your private network in Azure. VNet enables
many types of Azure resources, such as Azure Virtual Machines (VM), to securely communicate with each other,
the internet, and on-premises networks. VNet is similar to a traditional network in your own data center, but
brings with it additional benefits of Azure infrastructure such as scale, availability, and isolation.
To learn more about VNet concepts and best practices, see What is Azure Virtual Network.
To learn more about how to create a virtual network, see Quickstart: Create a virtual network using the Azure
portal.
Prerequisites
For audit to write to a storage account behind a VNet or firewall, the following prerequisites are required:
A general-purpose v2 storage account. If you have a general-purpose v1 or blob storage account, upgrade to
a general-purpose v2 storage account. For more information, see Types of storage accounts.
Premium storage with BlockBlobStorage is supported.
The storage account must be in the same tenant and in the same location as the logical SQL server (they can be in different subscriptions).
The Azure Storage account requires Allow trusted Microsoft services to access this storage account . Set
this on the Storage Account Firewalls and Virtual networks .
You must have Microsoft.Authorization/roleAssignments/write permission on the selected storage account.
For more information, see Azure built-in roles.
User-assigned managed identity authentication for enabling auditing to storage behind a firewall is not currently supported.
NOTE
When auditing to a storage account is already enabled on a server or database, and the target storage account is then moved behind a firewall, write access to the storage account is lost and audit logs stop being written to it. To make auditing work again, resave the audit settings from the portal.
NOTE
If the selected Storage account is behind VNet, you will see the following message:
You have selected a storage account that is behind a firewall or in a virtual network. Using this
storage requires to enable 'Allow trusted Microsoft services to access this storage account' on the
storage account and creates a server managed identity with 'storage blob data contributor' RBAC.
If you do not see this message, then the storage account is not behind a VNet.
4. Select the number of days for the retention period. Then click OK . Logs older than the retention period
are deleted.
5. Select Save on your auditing settings.
You have successfully configured audit to write to a storage account behind a VNet or firewall.
To configure SQL Audit to write events to a storage account behind a VNet or Firewall:
1. Register your server with Azure Active Directory (Azure AD). Use either PowerShell or REST API.
PowerShell
Connect-AzAccount
Select-AzSubscription -SubscriptionId <subscriptionId>
Set-AzSqlServer -ResourceGroupName <your resource group> -ServerName <azure server name> -
AssignIdentity
REST API :
Sample request
PUT https://management.azure.com/subscriptions/<subscription ID>/resourceGroups/<resource
group>/providers/Microsoft.Sql/servers/<azure server name>?api-version=2015-05-01-preview
Request body
{
"identity": {
"type": "SystemAssigned",
},
"properties": {
"fullyQualifiedDomainName": "<azure server name>.database.windows.net",
"administratorLogin": "<administrator login>",
"administratorLoginPassword": "<complex password>",
"version": "12.0",
"state": "Ready"
}
}
2. Assign the Storage Blob Data Contributor role to the server hosting the database that you registered with
Azure Active Directory (Azure AD) in the previous step.
For detailed steps, see Assign Azure roles using the Azure portal.
NOTE
Only members with Owner privilege can perform this step. For various Azure built-in roles, refer to Azure built-in
roles.
Request body
{
"properties": {
"state": "Enabled",
"storageEndpoint": "https://<storage account>.blob.core.windows.net"
}
}
Deploy an Azure SQL Server with Auditing enabled to write audit logs to a blob storage
NOTE
The linked sample is on an external public repository and is provided 'as is', without warranty, and is not supported
under any Microsoft support program/service.
Next steps
Use PowerShell to create a virtual network service endpoint, and then a virtual network rule for Azure SQL
Database.
Virtual Network Rules: Operations with REST APIs
Use virtual network service endpoints and rules for servers
Configure Advanced Threat Protection for Azure
SQL Database
9/13/2022 • 2 minutes to read
c. Under ADVANCED THREAT PROTECTION SETTINGS , select Add your contact details to
the subscription's email settings in Defender for Cloud .
d. Provide the list of emails to receive notifications upon detection of anomalous database activities
in the Additional email addresses (separated by commas) text box.
e. Optionally customize the severity of alerts that will trigger notifications to be sent under
Notification types .
f. Select Save .
NOTE
This feature cannot be set using portal for SQL Managed Instance (use PowerShell or REST API). For more information, see
Dynamic Data Masking.
4. In the Dynamic Data Masking configuration page, you may see some database columns that the
recommendations engine has flagged for masking. In order to accept the recommendations, just click
Add Mask for one or more columns and a mask is created based on the default type for this column. You
can change the masking function by clicking on the masking rule and editing the masking field format to
a different format of your choice. Be sure to click Save to save your settings.
5. To add a mask for any column in your database, at the top of the Dynamic Data Masking configuration
page, click Add Mask to open the Add Masking Rule configuration page.
6. Select the Schema , Table and Column to define the designated field for masking.
7. Select how to mask from the list of sensitive data masking categories.
8. Click Add in the data masking rule page to update the set of masking rules in the dynamic data masking
policy.
9. Type the SQL users or Azure Active Directory (Azure AD) identities that should be excluded from masking,
and have access to the unmasked sensitive data. This should be a semicolon-separated list of users. Users
with administrator privileges always have access to the original unmasked data.
TIP
To make it so the application layer can display sensitive data for application privileged users, add the SQL user or
Azure AD identity the application uses to query the database. It is highly recommended that this list contain a
minimal number of privileged users to minimize exposure of the sensitive data.
10. Click Save in the data masking configuration page to save the new or updated masking policy.
Next steps
For an overview of dynamic data masking, see dynamic data masking.
You can also implement dynamic data masking using Azure SQL Database cmdlets or the REST API.
Create server configured with user-assigned
managed identity and customer-managed TDE
9/13/2022 • 6 minutes to read
NOTE
Assigning a user-assigned managed identity for Azure SQL logical servers and Managed Instances is in public preview .
Prerequisites
This how-to guide assumes that you've already created an Azure Key Vault and imported a key into it to use
as the TDE protector for Azure SQL Database. For more information, see transparent data encryption with
BYOK support.
Soft-delete and Purge protection must be enabled on the key vault
You must have created a user-assigned managed identity and provided it the required TDE permissions (Get,
Wrap Key, Unwrap Key) on the above key vault. For creating a user-assigned managed identity, see Create a
user-assigned managed identity.
You must have Azure PowerShell installed and running.
[Recommended but optional] Create the key material for the TDE protector in a hardware security module
(HSM) or local key store first, and import the key material to Azure Key Vault. Follow the instructions for
using a hardware security module (HSM) and Key Vault to learn more.
1. Browse to the Select SQL deployment option page in the Azure portal.
2. If you aren't already signed in to Azure portal, sign in when prompted.
3. Under SQL databases , leave Resource type set to Single database , and select Create .
4. On the Basics tab of the Create SQL Database form, under Project details , select the desired Azure
Subscription .
5. For Resource group , select Create new , enter a name for your resource group, and select OK .
6. For Database name enter ContosoHR .
7. For Server , select Create new , and fill out the New server form with the following values:
Server name : Enter a unique server name. Server names must be globally unique for all servers
in Azure, not just unique within a subscription. Enter something like mysqlserver135 , and the
Azure portal will let you know if it's available or not.
Server admin login : Enter an admin login name, for example: azureuser .
Password : Enter a password that meets the password requirements, and enter it again in the
Confirm password field.
Location : Select a location from the dropdown list
13. On the Identity (preview) blade, select User assigned managed identity and then select Add . Select
the desired Subscription and then under User assigned managed identities select the desired user-
assigned managed identity from the selected subscription. Then select the Select button.
14. Under Primary identity , select the same user-assigned managed identity selected in the previous step.
Next steps
Get started with Azure Key Vault integration and Bring Your Own Key support for TDE: Turn on TDE using
your own key from Key Vault.
Azure SQL Database and Azure Synapse IP firewall
rules
9/13/2022 • 12 minutes to read
IMPORTANT
This article does not apply to Azure SQL Managed Instance. For information about network configuration, see Connect
your application to Azure SQL Managed Instance.
Azure Synapse only supports server-level IP firewall rules. It doesn't support database-level IP firewall rules.
To use the portal or PowerShell, you must be the subscription owner or a subscription contributor.
To use Transact-SQL, you must connect to the master database as the server-level principal login or as the
Azure Active Directory administrator. (A server-level IP firewall rule must first be created by a user who has
Azure-level permissions.)
NOTE
By default, during creation of a new logical SQL server from the Azure portal, the Allow Azure Services and resources to access this server setting is set to No .
NOTE
For information about portable databases in the context of business continuity, see Authentication requirements for
disaster recovery.
NOTE
To access Azure SQL Database from your local computer, ensure that the firewall on your network and local computer
allow outgoing communication on TCP port 1433.
Permissions
To be able to create and manage IP firewall rules for the Azure SQL Server, you will need to either be:
in the SQL Server Contributor role
in the SQL Security Manager role
the owner of the resource that contains the Azure SQL Server
IMPORTANT
Database-level IP firewall rules can only be created and managed by using Transact-SQL.
To improve performance, server-level IP firewall rules are temporarily cached at the database level. To refresh
the cache, see DBCC FLUSHAUTHCACHE.
TIP
You can use Database Auditing to audit server-level and database-level firewall changes.
TIP
For a tutorial, see Create a database using the Azure portal.
The following example reviews the existing rules, enables a range of IP addresses on the server Contoso, and
deletes an IP firewall rule:
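The example code isn't preserved in this copy. A minimal T-SQL sketch, assuming the documented sys.firewall_rules view and sp_set_firewall_rule system stored procedure in the master database (the rule name and IP range are placeholders):
-- View the existing server-level firewall rules
SELECT * FROM sys.firewall_rules ORDER BY name;
-- Create or update a server-level firewall rule for a range of IP addresses
EXECUTE sp_set_firewall_rule @name = N'ContosoFirewallRule',
@start_ip_address = '192.168.1.1', @end_ip_address = '192.168.1.10';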
To delete a server-level IP firewall rule, execute the sp_delete_firewall_rule stored procedure. The following
example deletes the rule ContosoFirewallRule:
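The example code isn't preserved in this copy; a minimal sketch of that call, assuming the documented sp_delete_firewall_rule system stored procedure in the master database:
-- Delete the server-level firewall rule named ContosoFirewallRule
EXECUTE sp_delete_firewall_rule @name = N'ContosoFirewallRule';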
NOTE
This article uses the Azure Az PowerShell module, which is the recommended PowerShell module for interacting with
Azure. To get started with the Az PowerShell module, see Install Azure PowerShell. To learn how to migrate to the Az
PowerShell module, see Migrate Azure PowerShell from AzureRM to Az.
IMPORTANT
The PowerShell Azure Resource Manager module is still supported by Azure SQL Database, but all development is now for
the Az.Sql module. For these cmdlets, see AzureRM.Sql. The arguments for the commands in the Az and AzureRm
modules are substantially identical.
TIP
For $servername specify the server name and not the fully qualified DNS name, e.g. specify mysqldbserver instead of mysqldbserver.database.windows.net
For PowerShell examples in the context of a quickstart, see Create DB - PowerShell and Create a single database and
configure a server-level IP firewall rule using PowerShell.
az sql server firewall-rule list Server Lists the IP firewall rules on a server
az sql server firewall-rule show Server Shows the detail of an IP firewall rule
TIP
For $servername specify the server name and not the fully qualified DNS name, e.g. specify mysqldbserver instead of mysqldbserver.database.windows.net
For a CLI example in the context of a quickstart, see Create DB - Azure CLI and Create a single database and configure a
server-level IP firewall rule using the Azure CLI.
Use a REST API to manage server-level IP firewall rules
IMPORTANT
This article applies to Azure SQL Database, including Azure Synapse (formerly SQL DW). For simplicity, the term Azure SQL
Database in this article applies to databases belonging to either Azure SQL Database or Azure Synapse. This article does
not apply to Azure SQL Managed Instance because it does not have a service endpoint associated with it.
This article demonstrates a PowerShell script that takes the following actions:
1. Creates a Microsoft Azure Virtual Service endpoint on your subnet.
2. Adds the endpoint to the firewall of your server, to create a virtual network rule.
For more background, see Virtual Service endpoints for Azure SQL Database.
TIP
If all you need is to assess or add the Virtual Service endpoint type name for Azure SQL Database to your subnet, you
can skip ahead to our more direct PowerShell script.
NOTE
This article uses the Azure Az PowerShell module, which is the recommended PowerShell module for interacting with
Azure. To get started with the Az PowerShell module, see Install Azure PowerShell. To learn how to migrate to the Az
PowerShell module, see Migrate Azure PowerShell from AzureRM to Az.
IMPORTANT
The PowerShell Azure Resource Manager module is still supported by Azure SQL Database, but all future development is
for the Az.Sql Cmdlets. For the older module, see AzureRM.Sql. The arguments for the commands in the Az module
and in the AzureRm modules are substantially identical.
Major cmdlets
This article emphasizes the New-AzSqlServerVirtualNetworkRule cmdlet that adds the subnet endpoint to
the access control list (ACL) of your server, thereby creating a rule.
The following list shows the sequence of other major cmdlets that you must run to prepare for your call to
New-AzSqlServerVirtualNetworkRule . In this article, these calls occur in script 3 "Virtual network rule":
1. New-AzVirtualNetworkSubnetConfig: Creates a subnet object.
2. New-AzVirtualNetwork: Creates your virtual network, giving it the subnet.
3. Set-AzVirtualNetworkSubnetConfig: Assigns a Virtual Service endpoint to your subnet.
4. Set-AzVirtualNetwork: Persists updates made to your virtual network.
5. New-AzSqlServerVirtualNetworkRule: After your subnet is an endpoint, adds your subnet as a virtual
network rule, into the ACL of your server.
This cmdlet offers the parameter -IgnoreMissingVNetServiceEndpoint , starting in Azure RM PowerShell Module version 5.1.1.
NOTE
Please ensure that service endpoints are turned on for the VNet/subnet that you want to add to your server; otherwise, creation of the VNet firewall rule will fail.
Script 1: Variables
This first PowerShell script assigns values to variables. The subsequent scripts depend on these variables.
IMPORTANT
Before you run this script, you can edit the values if you like. For example, if you already have a resource group, you might want to use its name as the assigned value.
Be sure to edit your subscription name into the script.
$yesno = Read-Host 'Do you need to log into Azure (only one time per powershell.exe session)? [yes/no]'
if ('yes' -eq $yesno) { Connect-AzAccount }
###########################################################
## Assignments to variables used by the later scripts. ##
###########################################################
$ResourceGroupName = 'RG-YourNameHere'
$Region = 'westcentralus'
$VNetName = 'myVNet'
$SubnetName = 'mySubnet'
$VNetAddressPrefix = '10.1.0.0/16'
$SubnetAddressPrefix = '10.1.1.0/24'
$VNetRuleName = 'myFirstVNetRule-ForAcl'
$SqlDbServerName = 'mysqldbserver-forvnet'
$SqlDbAdminLoginName = 'ServerAdmin'
$SqlDbAdminLoginPassword = 'ChangeYourAdminPassword1'
# Endpoint type name for Azure SQL Database; the later scripts reference this variable.
$ServiceEndpointTypeName_SqlDb = 'Microsoft.Sql'
Script 2: Prerequisites
This script prepares for the next script, which contains the endpoint action. It creates the following items for you, but only if they don't already exist. You can skip script 2 if you're sure these items already exist:
Azure resource group
Logical SQL server
PowerShell script 2 source code
######### Script 2 ########################################
## Ensure your Resource Group already exists. ##
###########################################################
$gottenResourceGroup = $null
$gottenResourceGroup = Get-AzResourceGroup -Name $ResourceGroupName -ErrorAction SilentlyContinue
# Create the resource group only if it does not already exist.
if ($null -eq $gottenResourceGroup) {
    Write-Host "Creating your missing resource group - $ResourceGroupName."
    New-AzResourceGroup -Name $ResourceGroupName -Location $Region
} else {
    Write-Host "Good, your resource group already exists - $ResourceGroupName."
}
$gottenResourceGroup = $null
###########################################################
## Ensure your server already exists. ##
###########################################################
$sqlDbServer = $null
$azSqlParams = @{
ResourceGroupName = $ResourceGroupName
ServerName = $SqlDbServerName
ErrorAction = 'SilentlyContinue'
}
$sqlDbServer = Get-AzSqlServer @azSqlParams
# Create the server only if it does not already exist.
if ($null -eq $sqlDbServer) {
    Write-Host "Creating the missing server - $SqlDbServerName."
    # Build the server admin credential from the login variables assigned in script 1.
    $securePassword = ConvertTo-SecureString -String $SqlDbAdminLoginPassword -AsPlainText -Force
    $sqlAdministratorCredentials = New-Object -TypeName System.Management.Automation.PSCredential -ArgumentList $SqlDbAdminLoginName, $securePassword
    $sqlSrvParams = @{
        ResourceGroupName = $ResourceGroupName
        ServerName = $SqlDbServerName
        Location = $Region
        SqlAdministratorCredentials = $sqlAdministratorCredentials
    }
    New-AzSqlServer @sqlSrvParams
} else {
Write-Host "Good, your server already exists - $SqlDbServerName."
}
$sqlAdministratorCredentials = $null
$sqlDbServer = $null
Script 3: Virtual network rule
This script creates your virtual network and subnet, assigns the Microsoft.Sql endpoint type to the subnet, and then adds the subnet as a virtual network rule into the ACL of your server.
PowerShell script 3 source code
######### Script 3 ########################################
## Create the subnet, the virtual network, and the rule. ##
###########################################################
Write-Host "Define a subnet '$SubnetName', to be given soon to a virtual network."
$subnetParams = @{
Name = $SubnetName
AddressPrefix = $SubnetAddressPrefix
ServiceEndpoint = $ServiceEndpointTypeName_SqlDb
}
$subnet = New-AzVirtualNetworkSubnetConfig @subnetParams
Write-Host "Create a virtual network '$VNetName'.`nGive the subnet to the virtual network that we created."
$vnetParams = @{
Name = $VNetName
AddressPrefix = $VNetAddressPrefix
Subnet = $subnet
ResourceGroupName = $ResourceGroupName
Location = $Region
}
$vnet = New-AzVirtualNetwork @vnetParams
###########################################################
## Create a Virtual Service endpoint on the subnet. ##
###########################################################
$vnetSubParams = @{
Name = $SubnetName
AddressPrefix = $SubnetAddressPrefix
VirtualNetwork = $vnet
ServiceEndpoint = $ServiceEndpointTypeName_SqlDb
}
$vnet = Set-AzVirtualNetworkSubnetConfig @vnetSubParams
Write-Host "Persist the updates made to the virtual network > subnet."
###########################################################
## Add the Virtual Service endpoint Id as a rule, ##
## into SQL Database ACLs. ##
###########################################################
Write-Host "Add the subnet .Id as a rule, into the ACLs for your server."
$ruleParams = @{
ResourceGroupName = $ResourceGroupName
ServerName = $SqlDbServerName
VirtualNetworkRuleName = $VNetRuleName
VirtualNetworkSubnetId = $subnet.Id
}
New-AzSqlServerVirtualNetworkRule @ruleParams
Write-Host "Verify that the rule is in the SQL Database ACL."
$rule2Params = @{
ResourceGroupName = $ResourceGroupName
ServerName = $SqlDbServerName
VirtualNetworkRuleName = $VNetRuleName
}
Get-AzSqlServerVirtualNetworkRule @rule2Params
Script 4: Clean-up
This final script deletes the resources that the previous scripts created for the demonstration. However, the script
asks for confirmation before it deletes the following:
Logical SQL server
Azure Resource Group
You can run script 4 any time after script 1 completes.
PowerShell script 4 source code
######### Script 4 ########################################
## Clean-up phase A: Unconditional deletes. ##
## ##
## 1. The test rule is deleted from SQL Database ACL. ##
## 2. The test endpoint is deleted from the subnet. ##
## 3. The test virtual network is deleted. ##
###########################################################
$removeParams = @{
ResourceGroupName = $ResourceGroupName
ServerName = $SqlDbServerName
VirtualNetworkRuleName = $VNetRuleName
ErrorAction = 'SilentlyContinue'
}
Remove-AzSqlServerVirtualNetworkRule @removeParams
Write-Host "Delete the virtual network (thus also deletes the subnet)."
$removeParams = @{
Name = $VNetName
ResourceGroupName = $ResourceGroupName
ErrorAction = 'SilentlyContinue'
}
Remove-AzVirtualNetwork @removeParams
###########################################################
## Clean-up phase B: Conditional deletes. ##
## ##
## These might have already existed, so user might ##
## want to keep. ##
## ##
## 1. Logical SQL server ##
## 2. Azure resource group ##
###########################################################
$yesno = Read-Host 'CAUTION !: Do you want to DELETE your server AND your resource group? [yes/no]'
if ('yes' -eq $yesno) {
Write-Host "Remove the server."
$removeParams = @{
ServerName = $SqlDbServerName
ResourceGroupName = $ResourceGroupName
ErrorAction = 'SilentlyContinue'
}
Remove-AzSqlServer @removeParams
Write-Host "Remove the Azure resource group."
Remove-AzResourceGroup -Name $ResourceGroupName -ErrorAction SilentlyContinue
} else {
Write-Host "Skipped the deletion of the server and the resource group."
}
IMPORTANT
Before you run this script, you must edit the values assigned to the $-variables, near the top of the script.
### 1. LOG into to your Azure account, needed only once per PS session. Assign variables.
$yesno = Read-Host 'Do you need to log into Azure (only one time per powershell.exe session)? [yes/no]'
if ('yes' -eq $yesno) { Connect-AzAccount }
$SubscriptionName = 'yourSubscriptionName'
Select-AzSubscription -SubscriptionName "$SubscriptionName"
$ResourceGroupName = 'yourRGName'
$VNetName = 'yourVNetName'
$SubnetName = 'yourSubnetName'
$SubnetAddressPrefix = 'Obtain this value from the Azure portal.' # Looks roughly like: '10.0.0.0/24'
# Endpoint type name for Azure SQL Database; referenced in step 4 below.
$ServiceEndpointTypeName_SqlDb = 'Microsoft.Sql'
### 2. Search for your virtual network, and then for your subnet.
# Search for the virtual network.
$vnet = $null
$vnet = Get-AzVirtualNetwork -ResourceGroupName $ResourceGroupName -Name $VNetName
$subnet = $null
for ($nn = 0; $nn -lt $vnet.Subnets.Count; $nn++) {
$subnet = $vnet.Subnets[$nn]
if ($subnet.Name -eq $SubnetName) { break }
$subnet = $null
}
if ($null -eq $subnet) {
Write-Host "Caution: No subnet found by the name '$SubnetName'"
Return
}
### 4. Add a Virtual Service endpoint of type name 'Microsoft.Sql', on your subnet.
$setParams = @{
Name = $SubnetName
AddressPrefix = $SubnetAddressPrefix
VirtualNetwork = $vnet
ServiceEndpoint = $ServiceEndpointTypeName_SqlDb
}
$vnet = Set-AzVirtualNetworkSubnetConfig @setParams
WARNING
If you reduce the current retention period, you lose the ability to restore to points in time older than the new retention
period. Backups that are no longer needed to provide PITR within the new retention period are deleted.
If you increase the current retention period, you don't immediately gain the ability to restore to older points in time within
the new retention period. You gain that ability over time, as the system starts to retain backups for longer periods.
NOTE
These APIs will affect only the PITR retention period. If you configured long-term retention (LTR) for your database, it
won't be affected. For information about how to change long-term retention periods, see Long-term retention.
Hyperscale databases don't support configuring the differential backup frequency because differential backups don't
apply to Hyperscale databases.
Azure portal
Azure CLI
PowerShell
REST API
To change the PITR backup retention period or the differential backup frequency for active databases by using
the Azure portal:
1. Go to the logical server in Azure with the databases whose retention period you want to change.
2. Select Backups on the left pane, and then select the Retention policies tab.
3. Select the databases for which you want to change the PITR backup retention.
4. Select Configure policies from the action bar, and then set the desired retention values.
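If you prefer scripting, a rough PowerShell equivalent uses Set-AzSqlDatabaseBackupShortTermRetentionPolicy. The resource group, server, and database names below are placeholders, and the -DiffBackupIntervalInHours parameter assumes a recent Az.Sql module version:
# Placeholders for an existing server and database.
$params = @{
    ResourceGroupName = 'myResourceGroup'
    ServerName        = 'mysqldbserver'
    DatabaseName      = 'mySampleDatabase'
}
# Set PITR retention to 14 days and differential backups to every 24 hours.
Set-AzSqlDatabaseBackupShortTermRetentionPolicy @params -RetentionDays 14 -DiffBackupIntervalInHours 24
# Verify the current short-term retention policy.
Get-AzSqlDatabaseBackupShortTermRetentionPolicy @params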
Configure backup storage redundancy
You can configure backup storage redundancy for databases in Azure SQL Database when you create your
database. You can also change the storage redundancy after the database is already created.
Backup storage redundancy changes made to existing databases apply to future backups only. The default value
is geo-redundant storage. For differences in pricing between locally redundant, zone-redundant, and geo-
redundant backup storage, see the SQL Database pricing page.
Storage redundancy for Hyperscale databases is unique. To learn more, review Hyperscale backup storage
redundancy.
Azure portal
Azure CLI
PowerShell
REST API
In the Azure portal, you can choose a backup storage redundancy option when you create your database. You
can later update the backup storage redundancy from the Compute & storage page of your database settings.
When you're creating your database, choose the backup storage redundancy option on the Basics tab.
For existing databases, go to your database in the Azure portal. Select Compute & storage under Settings ,
and then choose your desired option for backup storage redundancy.
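A minimal PowerShell sketch for the same change follows. It assumes a recent Az.Sql module version that supports the -BackupStorageRedundancy parameter, and the names are placeholders:
# Placeholders for an existing server and database.
$params = @{
    ResourceGroupName = 'myResourceGroup'
    ServerName        = 'mysqldbserver'
    DatabaseName      = 'mySampleDatabase'
}
# Change the backup storage redundancy of an existing database (applies to future backups only).
Set-AzSqlDatabase @params -BackupStorageRedundancy 'Zone'
# Or choose the redundancy when creating a new database.
New-AzSqlDatabase -ResourceGroupName 'myResourceGroup' -ServerName 'mysqldbserver' `
    -DatabaseName 'myNewDatabase' -BackupStorageRedundancy 'Local'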
Next steps
Database backups are an essential part of any business continuity and disaster recovery strategy because
they help protect your data from accidental corruption or deletion. To learn about the other business
continuity solutions for SQL Database, see Business continuity overview.
For information about how to configure, manage, and restore from long-term retention of automated
backups in Azure Blob Storage by using the Azure portal, see Manage long-term backup retention by using
the Azure portal.
For information about how to configure, manage, and restore from long-term retention of automated
backups in Azure Blob Storage by using PowerShell, see Manage long-term backup retention by using
PowerShell.
Get more information about how to restore a database to a point in time by using the Azure portal.
Get more information about how to restore a database to a point in time by using PowerShell.
Manage Azure SQL Database long-term backup
retention
Prerequisites
Portal
Azure CLI
PowerShell
You can configure SQL Database to retain automated backups for a period longer than the retention period for
your service tier.
1. In the Azure portal, navigate to your server and then select Backups . Select the Retention policies tab
to modify your backup retention settings.
2. On the Retention policies tab, select the database(s) on which you want to set or modify long-term
backup retention policies. Unselected databases will not be affected.
3. In the Configure policies pane, specify your desired retention period for weekly, monthly, or yearly
backups. Choose a retention period of '0' to indicate that no long-term backup retention should be set.
4. Select Apply to apply the chosen retention settings to all selected databases.
IMPORTANT
When you enable a long-term backup retention policy, it may take up to 7 days for the first backup to become visible and
available to restore. For details of the LTR backup cadence, see long-term backup retention.
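As a rough PowerShell sketch of the same configuration (placeholders for the resource group, server, and database; retention values are illustrative):
# Placeholders for an existing server and database.
$params = @{
    ResourceGroupName = 'myResourceGroup'
    ServerName        = 'mysqldbserver'
    DatabaseName      = 'mySampleDatabase'
}
# Keep weekly backups for 12 weeks, monthly backups for 12 months, and the week-1 backup of each year for 5 years.
Set-AzSqlDatabaseBackupLongTermRetentionPolicy @params `
    -WeeklyRetention 'P12W' -MonthlyRetention 'P12M' -YearlyRetention 'P5Y' -WeekOfYear 1
# Review the policy that is now in effect.
Get-AzSqlDatabaseBackupLongTermRetentionPolicy @params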
View backups and restore from a backup
View the backups that are retained for a specific database with an LTR policy, and restore from those backups.
Portal
Azure CLI
PowerShell
1. In the Azure portal, navigate to your server and then select Backups . To view the available LTR backups
for a specific database, select Manage under the Available LTR backups column. A pane will appear with
a list of the available LTR backups for the selected database.
2. In the Available LTR backups pane that appears, review the available backups. You may select a backup
to restore from or to delete.
3. To restore from an available LTR backup, select the backup from which you want to restore, and then
select Restore .
4. Choose a name for your new database, then select Review + Create to review the details of your
Restore. Select Create to restore your database from the chosen backup.
5. On the toolbar, select the notification icon to view the status of the restore job.
6. When the restore job is completed, open the SQL databases page to view the newly restored database.
NOTE
From here, you can connect to the restored database using SQL Server Management Studio to perform needed tasks,
such as to extract a bit of data from the restored database to copy into the existing database or to delete the existing
database and rename the restored database to the existing database name.
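If you prefer PowerShell, a minimal sketch for viewing LTR backups and restoring one looks roughly like this. The region, server, database, and target names are placeholders; adjust them for your environment:
# The location is the region of the server that owns the LTR backups.
$ltrBackups = Get-AzSqlDatabaseLongTermRetentionBackup -Location 'westus2' `
    -ServerName 'mysqldbserver' -DatabaseName 'mySampleDatabase'
# Pick one backup to restore from (any element of the list works).
$backup = $ltrBackups | Select-Object -First 1
# Restore the LTR backup to a new database on an existing server.
Restore-AzSqlDatabase -FromLongTermRetentionBackup -ResourceId $backup.ResourceId `
    -ResourceGroupName 'myResourceGroup' -ServerName 'mysqldbserver' `
    -TargetDatabaseName 'mySampleDatabase_ltr_copy'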
Limitations
When restoring from an LTR backup, the read scale property is disabled. To enable read scale on the restored database, update the database after it has been created.
If the LTR backup was created while the database was in an elastic pool, you need to specify the target service level objective when you restore from that backup.
Next steps
To learn about service-generated automatic backups, see automatic backups
To learn about long-term backup retention, see long-term backup retention
Restore a database from a backup in Azure SQL
Database
IMPORTANT
You can't overwrite an existing database during restore.
Database restore operations don't restore the tags of the original database.
When you're using the Standard or Premium service tier in the DTU purchasing model, your database restore
might incur an extra storage cost. The extra cost happens when the maximum size of the restored database is
greater than the amount of storage included with the target database's service tier and service objective.
For pricing details of extra storage, see the SQL Database pricing page. If the actual amount of used space is less
than the amount of storage included, you can avoid this extra cost by setting the maximum database size to the
included amount.
Recovery time
Several factors affect the recovery time to restore a database through automated database backups:
The size of the database
The compute size of the database
The number of transaction logs involved
The amount of activity that needs to be replayed to recover to the restore point
The network bandwidth if the restore is to a different region
The number of concurrent restore requests that are processed in the target region
For a large or very active database, the restore might take several hours. A prolonged outage in a region might
cause a high number of geo-restore requests for disaster recovery. When there are many requests, the recovery
time for individual databases can increase. Most database restores finish in less than 12 hours.
For a single subscription, you have the following limitations on the number of concurrent restore requests.
These limitations apply to any combination of point-in-time restores, geo-restores, and restores from long-term
retention backup.
Permissions
To recover by using automated backups, you must be either:
A member of the Contributor role or the SQL Server Contributor role in the subscription or resource group
that contains the logical server
The subscription or resource group owner
For more information, see Azure RBAC: Built-in roles.
You can recover by using the Azure portal, PowerShell, or the REST API. You can't use Transact-SQL.
Point-in-time restore
You can restore any database to an earlier point in time within its retention period. The restore request can
specify any service tier or compute size for the restored database. When you're restoring a database into an
elastic pool, ensure that you have sufficient resources in the pool to accommodate the database.
When the restore is complete, it creates a new database on the same server as the original database. The
restored database is charged at normal rates, based on its service tier and compute size. You don't incur charges
until the database restore is complete.
You generally restore a database to an earlier point for recovery purposes. You can treat the restored database
as a replacement for the original database or use it as a data source to update the original database.
IMPORTANT
You can run a restore only on the same server. Point-in-time restore doesn't support cross-server restoration.
You can't perform a point-in-time restore on a geo-secondary database. You can do so only on a primary database.
The BackupFrequency parameter isn't supported for Hyperscale databases.
Database replacement
If you want the restored database to be a replacement for the original database, you should specify the
original database's compute size and service tier. You can then rename the original database and give the
restored database the original name by using the ALTER DATABASE command in T-SQL.
Data recovery
If you plan to retrieve data from the restored database to recover from a user or application error, you
need to write and run a data recovery script that extracts data from the restored database and applies it to
the original database. Although the restore operation might take a long time to complete, the restoring
database is visible in the database list throughout the restore process.
If you delete the database during the restore, the restore operation will be canceled. You won't be charged
for the database that did not complete the restore.
Azure portal
Azure CLI
PowerShell
Rest API
To recover a database to a point in time by using the Azure portal, open the database overview page and select
Restore on the toolbar. Choose the backup source, and then select the point-in-time backup point from which a
new database will be created.
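A minimal PowerShell sketch for a point-in-time restore follows. The names are placeholders, and the restore point here is arbitrarily set to one hour ago; choose any point within your retention period:
# Placeholders for the existing (source) database.
$db = Get-AzSqlDatabase -ResourceGroupName 'myResourceGroup' -ServerName 'mysqldbserver' -DatabaseName 'mySampleDatabase'
# Restore the database to a point in time, as a new database on the same server.
Restore-AzSqlDatabase -FromPointInTimeBackup -PointInTime (Get-Date).AddHours(-1).ToUniversalTime() `
    -ResourceGroupName $db.ResourceGroupName -ServerName $db.ServerName `
    -TargetDatabaseName 'mySampleDatabase_restored' -ResourceId $db.ResourceId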
To recover a long-term backup by using the Azure portal, go to your logical server. Select Backups under
Settings , and then select Manage under Available LTR backups for the database you're trying to restore.
Deleted database restore
You can restore a deleted database to the deletion time, or an earlier point in time, on the same server by using
the Azure portal, the Azure CLI, Azure PowerShell, and the Rest API.
IMPORTANT
If you delete a server, all its databases are also deleted and can't be recovered. You can't restore a deleted server.
Azure portal
Azure CLI
PowerShell
Rest API
To recover a deleted database to the deletion time by using the Azure portal, open the server's overview page
and select Deleted databases . Select a deleted database that you want to restore, and then enter the name for
the new database that will be created with data restored from the backup.
TIP
It might take several minutes for recently deleted databases to appear on the Deleted databases page in the Azure
portal, or when you want to display deleted databases programmatically.
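To do the same from PowerShell, a rough sketch (placeholder names; the deleted database must still be within its retention period) is:
# List the deleted databases that can still be restored on the server, and pick one.
$deleted = Get-AzSqlDeletedDatabaseBackup -ResourceGroupName 'myResourceGroup' `
    -ServerName 'mysqldbserver' -DatabaseName 'mySampleDatabase' | Select-Object -First 1
# Restore the deleted database to its deletion time, under a new name.
Restore-AzSqlDatabase -FromDeletedDatabaseBackup -DeletionDate $deleted.DeletionDate `
    -ResourceGroupName 'myResourceGroup' -ServerName 'mysqldbserver' `
    -TargetDatabaseName 'mySampleDatabase_undeleted' -ResourceId $deleted.ResourceId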
Geo-restore
You can use geo-restore to restore a deleted database by using the Azure portal, the Azure CLI, Azure
PowerShell, and the Rest API.
IMPORTANT
Geo-restore is available only for databases configured with geo-redundant backup storage. If you're not currently
using geo-replicated backups for a database, you can change this by configuring backup storage redundancy.
You can perform geo-restore only on databases that reside in the same subscription.
Geo-restore uses geo-replicated backups as the source. You can restore a database on any logical server in any
Azure region from the most recent geo-replicated backups. You can request a geo-restore even if an outage has
made the database or the entire region inaccessible.
Geo-restore is the default recovery option when your database is unavailable because of an incident in the
hosting region. You can restore the database to a server in any other region.
There's a delay between when a backup is taken and when it's geo-replicated to an Azure blob in a different
region. As a result, the restored database can be up to one hour behind the original database. The following
illustration shows a database restore from the last available backup in another region.
Azure portal
Azure CLI
PowerShell
Rest API
From the Azure portal, you create a new single database and select an available geo-restore backup. The newly
created database contains the geo-restored backup data.
To geo-restore a single database from the Azure portal in the region and server of your choice, follow these
steps:
1. From Dashboard , select Add > Create SQL Database . On the Basics tab, enter the required information.
2. Select Additional settings .
3. For Use existing data , select Backup .
4. Select a backup from the list of available geo-restore backups.
Complete the process of creating a database from the backup. When you create a database in Azure SQL
Database, it contains the restored geo-restore backup.
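If you prefer PowerShell, a minimal geo-restore sketch looks roughly like this. The source and target names are placeholders; the target server must already exist and can be in any region:
# Get the geo-replicated backup of the source database.
$geoBackup = Get-AzSqlDatabaseGeoBackup -ResourceGroupName 'myResourceGroup' `
    -ServerName 'mysqldbserver' -DatabaseName 'mySampleDatabase'
# Geo-restore to a server in another region.
Restore-AzSqlDatabase -FromGeoBackup -ResourceId $geoBackup.ResourceId `
    -ResourceGroupName 'myTargetResourceGroup' -ServerName 'mytargetserver' `
    -TargetDatabaseName 'mySampleDatabase_geo_restored'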
Geo -restore considerations
For detailed information about using geo-restore to recover from an outage, see Recover from an outage.
Geo-restore is the most basic disaster-recovery solution available in SQL Database. It relies on automatically
created geo-replicated backups with a recovery point objective (RPO) of up to 1 hour and an estimated recovery
time objective (RTO) of up to 12 hours. It doesn't guarantee that the target region will have the capacity to
restore your databases after a regional outage, because a sharp increase of demand is likely. If your application
uses relatively small databases and is not critical to the business, geo-restore is an appropriate disaster-recovery
solution.
For business-critical applications that require large databases and must ensure business continuity, use auto-
failover groups. That feature offers a much lower RPO and RTO, and the capacity is always guaranteed.
For more information about business continuity choices, see Overview of business continuity.
NOTE
If you plan to use geo-restore as disaster-recovery solution, we recommend that you conduct periodic drills to verify
application tolerance to any loss of recent data modifications, along with all operational aspects of the recovery
procedure.
Next steps
SQL Database automated backups
Long-term retention
To learn about faster recovery options, see Active geo-replication or Auto-failover groups.
Configure an auto-failover group for Azure SQL
Database
NOTE
This article covers auto-failover groups for Azure SQL Database. For Azure SQL Managed Instance, see Configure auto-
failover groups in Azure SQL Managed Instance.
Prerequisites
Consider the following prerequisites for creating your failover group for a single database:
The server login and firewall settings for the secondary server must match those of your primary server.
Create your failover group and add your single database to it using the Azure portal.
1. Select Azure SQL in the left-hand menu of the Azure portal. If Azure SQL is not in the list, select All services, then type Azure SQL in the search box. (Optional) Select the star next to Azure SQL to favorite it and add it as an item in the left-hand navigation.
2. Select the database you want to add to the failover group.
3. Select the name of the server under Server name to open the settings for the server.
4. Select Failover groups under the Settings pane, and then select Add group to create a new failover
group.
5. On the Failover Group page, enter or select the required values, and then select Create .
Databases within the group : Choose the database you want to add to your failover group. Adding
the database to the failover group will automatically start the geo-replication process.
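If you prefer PowerShell, a rough equivalent of these steps is sketched below. The server, partner server, failover group, and database names are placeholders, and both servers must already exist:
# Create the failover group between the primary and the secondary (partner) server.
$fogParams = @{
    ResourceGroupName = 'myResourceGroup'
    ServerName        = 'mysqldbserver'
    PartnerServerName = 'mysqldbserver-secondary'
    FailoverGroupName = 'myfailovergroup'
}
New-AzSqlDatabaseFailoverGroup @fogParams -FailoverPolicy Automatic -GracePeriodWithDataLossHours 1
# Add the single database to the failover group; this starts the geo-replication process.
$db = Get-AzSqlDatabase -ResourceGroupName 'myResourceGroup' -ServerName 'mysqldbserver' -DatabaseName 'mySampleDatabase'
Add-AzSqlDatabaseToFailoverGroup -ResourceGroupName 'myResourceGroup' -ServerName 'mysqldbserver' `
    -FailoverGroupName 'myfailovergroup' -Database $db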
Test failover
Test failover of your failover group using the Azure portal or PowerShell.
Portal
PowerShell
3. Select Failover groups under the Settings pane and then choose the failover group you just created.
4. Review which server is primary and which server is secondary.
5. Select Failover from the task pane to fail over your failover group containing your database.
6. Select Yes on the warning that notifies you that TDS sessions will be disconnected.
7. Review which server is now primary and which server is secondary. If failover succeeded, the two servers
should have swapped roles.
8. Select Failover again to fail the servers back to their original roles.
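A minimal PowerShell sketch of the same test failover (placeholder names) runs the switch against the secondary server so that it becomes the new primary:
# Fail the group over to the secondary server.
Switch-AzSqlDatabaseFailoverGroup -ResourceGroupName 'myResourceGroup' `
    -ServerName 'mysqldbserver-secondary' -FailoverGroupName 'myfailovergroup'
# Check which role the server now holds for the failover group.
(Get-AzSqlDatabaseFailoverGroup -ResourceGroupName 'myResourceGroup' `
    -ServerName 'mysqldbserver-secondary' -FailoverGroupName 'myfailovergroup').ReplicationRole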
IMPORTANT
If you need to delete the secondary database, remove it from the failover group before deleting it. Deleting a secondary
database before it is removed from the failover group can cause unpredictable behavior.
Prerequisites
Consider the following prerequisites for creating your failover group for a pooled database:
The server login and firewall settings for the secondary server must match those of your primary server.
Portal
PowerShell
Create your failover group and add your elastic pool to it using the Azure portal.
1. Select Azure SQL in the left-hand menu of the Azure portal. If Azure SQL is not in the list, select All services, then type "Azure SQL" in the search box. (Optional) Select the star next to Azure SQL to favorite it and add it as an item in the left-hand navigation.
2. Select the elastic pool you want to add to the failover group.
3. On the Overview pane, select the name of the server under Server name to open the settings for the server.
4. Select Failover groups under the Settings pane, and then select Add group to create a new failover
group.
5. On the Failover Group page, enter or select the required values, and then select Create . Either create a
new secondary server, or select an existing secondary server.
6. Select Databases within the group then choose the elastic pool you want to add to the failover group.
If an elastic pool does not already exist on the secondary server, a warning appears prompting you to
create an elastic pool on the secondary server. Select the warning, and then select OK to create the elastic
pool on the secondary server.
7. Select Select to apply your elastic pool settings to the failover group, and then select Create to create
your failover group. Adding the elastic pool to the failover group will automatically start the geo-
replication process.
Test failover
Test failover of your elastic pool using the Azure portal or PowerShell.
Portal
PowerShell
Fail your failover group over to the secondary server, and then fail back using the Azure portal.
1. Select Azure SQL in the left-hand menu of the Azure portal. If Azure SQL is not in the list, select All services, then type "Azure SQL" in the search box. (Optional) Select the star next to Azure SQL to favorite it and add it as an item in the left-hand navigation.
2. Select the elastic pool you want to add to the failover group.
3. On the Overview pane, select the name of the server under Server name to open the settings for the server.
4. Select Failover groups under the Settings pane and then choose the failover group you created in
section 2.
5. Review which server is primary, and which server is secondary.
6. Select Failover from the task pane to fail over your failover group containing your elastic pool.
7. Select Yes on the warning that notifies you that TDS sessions will be disconnected.
8. Review which server is primary, which server is secondary. If failover succeeded, the two servers should
have swapped roles.
9. Select Failover again to fail the failover group back to the original settings.
IMPORTANT
If you need to delete the secondary database, remove it from the failover group before deleting it. Deleting a secondary
database before it is removed from the failover group can cause unpredictable behavior.
IMPORTANT
When the failover group is deleted, the DNS records for the listener endpoints are also deleted. At that point, there is a
non-zero probability of somebody else creating a failover group or a server DNS alias with the same name. Because
failover group names and DNS aliases must be globally unique, this will prevent you from using the same name again. To
minimize this risk, don't use generic failover group names.
Permissions
Permissions for a failover group are managed via Azure role-based access control (Azure RBAC).
Azure RBAC write access is necessary to create and manage failover groups. The SQL Server Contributor role
has all the necessary permissions to manage failover groups.
The following table lists specific permission scopes for Azure SQL Database:
ACTION | PERMISSION | SCOPE
Fail over failover group | Azure RBAC write access | Failover group on new server
Remarks
Removing a failover group for a single or pooled database does not stop replication, and it does not delete
the replicated database. You will need to manually stop geo-replication and delete the database from the
secondary server if you want to add a single or pooled database back to a failover group after it's been
removed. Failing to do either may result in an error similar to
The operation cannot be performed due to multiple errors when attempting to add the database to the
failover group.
Auto-failover group name is subject to naming restrictions.
Next steps
For detailed steps configuring a failover group, see the following tutorials:
Add a single database to a failover group
Add an elastic pool to a failover group
Add a managed instance to a failover group
For an overview of Azure SQL Database high availability options, see geo-replication and auto-failover groups.
Tutorial: Configure active geo-replication and
failover (Azure SQL Database)
Prerequisites
Portal
Azure CLI
To configure active geo-replication by using the Azure portal, you need the following resource:
A database in Azure SQL Database: The primary database that you want to replicate to a different
geographical region.
NOTE
When using Azure portal, you can only create a secondary database within the same subscription as the primary. If a
secondary database is required to be in a different subscription, use Create Database REST API or ALTER DATABASE
Transact-SQL API.
NOTE
If the partner database already exists (for example, as a result of terminating a previous geo-replication relationship), the command fails.
Portal
Azure CLI
1. In the Azure portal, browse to the database that you want to set up for geo-replication.
2. On the SQL Database page, select your database, scroll to Data management , select Replicas , and then
select Create replica .
3. Select or create the server for the secondary database, and configure the Compute + storage options if
necessary. You can select any region for your secondary server, but we recommend the paired region.
Optionally, you can add a secondary database to an elastic pool. To create the secondary database in a
pool, select Yes next to Want to use SQL elastic pool? and select a pool on the target server. A pool
must already exist on the target server. This workflow doesn't create a pool.
4. Click Review + create , review the information, and then click Create .
5. The secondary database is created and the deployment process begins.
6. When the deployment is complete, the secondary database displays its status.
7. Return to the primary database page, and then select Replicas . Your secondary database is listed under
Geo replicas .
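If you prefer PowerShell, a minimal sketch for creating the secondary uses New-AzSqlDatabaseSecondary. The names are placeholders, and the secondary server must already exist, ideally in the paired region:
# Create a readable geo-secondary of the database on the partner server.
$params = @{
    ResourceGroupName        = 'myResourceGroup'
    ServerName               = 'mysqldbserver'
    DatabaseName             = 'mySampleDatabase'
    PartnerResourceGroupName = 'mySecondaryResourceGroup'
    PartnerServerName        = 'mysqldbserver-secondary'
    AllowConnections         = 'All'   # Make the secondary readable.
}
New-AzSqlDatabaseSecondary @params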
Initiate a failover
The secondary database can be switched to become the primary.
Portal
Azure CLI
1. In the Azure portal, browse to the primary database in the geo-replication partnership.
2. Scroll to Data management , and then select Replicas .
3. In the Geo replicas list, select the database you want to become the new primary, select the ellipsis, and
then select Forced failover .
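A rough PowerShell equivalent of initiating the failover (placeholder names) runs against the secondary database to promote it to primary:
# Run this against the secondary database to make it the new primary.
$params = @{
    ResourceGroupName        = 'mySecondaryResourceGroup'
    ServerName               = 'mysqldbserver-secondary'
    DatabaseName             = 'mySampleDatabase'
    PartnerResourceGroupName = 'myResourceGroup'
}
# Planned failover; add -AllowDataLoss for a forced failover during an outage.
Set-AzSqlDatabaseSecondary @params -Failover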
Portal
Azure CLI
1. In the Azure portal, browse to the primary database in the geo-replication partnership.
2. Select Replicas .
3. In the Geo replicas list, select the database you want to remove from the geo-replication partnership,
select the ellipsis, and then select Stop replication .
4. A confirmation window opens. Click Yes to remove the database from the geo-replication partnership.
(Set it to a read-write database not part of any replication.)
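To remove the geo-replication partnership from PowerShell instead, a minimal sketch (placeholder names) is:
# Removes the geo-replication link; the former secondary becomes a standalone read-write database.
Remove-AzSqlDatabaseSecondary -ResourceGroupName 'myResourceGroup' -ServerName 'mysqldbserver' `
    -DatabaseName 'mySampleDatabase' -PartnerResourceGroupName 'mySecondaryResourceGroup' `
    -PartnerServerName 'mysqldbserver-secondary'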
Next steps
To learn more about active geo-replication, see active geo-replication.
To learn about auto-failover groups, see Auto-failover groups
For a business continuity overview and scenarios, see Business continuity overview.
Configure and manage Azure SQL Database
security for geo-restore or failover
NOTE
It is also possible to use Azure Active Directory (AAD) logins to manage your databases. For more information, see Azure
SQL logins and users.
Setting up logins on the target server involves three steps outlined below:
1. Determine logins with access to the primary database
The first step of the process is to determine which logins must be duplicated on the target server. This is
accomplished with a pair of SELECT statements, one in the logical master database on the source server and one
in the primary database itself.
Only the server admin or a member of the LoginManager server role can determine the logins on the source
server with the following SELECT statement.
Only a member of the db_owner database role, the dbo user, or server admin, can determine all of the database
user principals in the primary database.
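As an illustrative sketch only (these are not the exact statements from this article), queries along the following lines return the logins and database users. The example assumes the SqlServer PowerShell module and uses placeholder names:
# Requires the SqlServer module (Install-Module SqlServer). Server, database, and queries are illustrative.
$server = 'mysqldbserver.database.windows.net'
$cred   = Get-Credential   # Server admin or another suitably privileged login.
# Logins defined on the source server (run against the logical master database).
Invoke-Sqlcmd -ServerInstance $server -Database 'master' -Credential $cred `
    -Query "SELECT name, sid FROM sys.sql_logins WHERE type_desc = 'SQL_LOGIN';"
# Database user principals in the primary database.
Invoke-Sqlcmd -ServerInstance $server -Database 'mySampleDatabase' -Credential $cred `
    -Query "SELECT name, type_desc, sid FROM sys.database_principals WHERE type_desc = 'SQL_USER';"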
NOTE
The INFORMATION_SCHEMA and sys users have NULL SIDs, and the guest SID is 0x00 . The dbo SID may start with
0x01060000000001648000000000048454, if the database creator was the server admin instead of a member of
DbManager .
DISABLE doesn’t change the password, so you can always enable it if needed.
Next steps
For more information on managing database access and logins, see SQL Database security: Manage
database access and login security.
For more information on contained database users, see Contained Database Users - Making Your Database
Portable.
To learn about active geo-replication, see Active geo-replication.
To learn about auto-failover groups, see Auto-failover groups.
For information about using geo-restore, see geo-restore
Query Performance Insight for Azure SQL Database
Prerequisites
Query Performance Insight requires that Query Store is active on your database. It's automatically enabled for
all databases in Azure SQL Database by default. If Query Store is not running, the Azure portal will prompt you
to enable it.
NOTE
If the "Query Store is not properly configured on this database" message appears in the portal, see Optimizing the Query
Store configuration.
Permissions
You need the following Azure role-based access control (Azure RBAC) permissions to use Query Performance
Insight:
Reader, Owner, Contributor, SQL DB Contributor, or SQL Server Contributor permissions are required to view the top resource-consuming queries and charts.
Owner, Contributor, SQL DB Contributor, or SQL Server Contributor permissions are required to view query text.
For database performance recommendations, select Recommendations on the Query Performance Insight
navigation blade.
The bottom grid shows aggregated information for the visible queries:
Query ID, which is a unique identifier for the query in the database.
CPU per query during an observable interval, which depends on the aggregation function.
Duration per query, which also depends on the aggregation function.
Total number of executions for a specific query.
2. If your data becomes stale, select the Refresh button.
3. Use sliders and zoom buttons to change the observation interval and investigate consumption spikes:
4. Optionally, you can select the Custom tab to customize the view for:
Metric (CPU, duration, execution count).
Time interval (last 24 hours, past week, or past month).
Number of queries.
Aggregation function.
A detailed view opens. It shows the CPU consumption, duration, and execution count over time.
2. Select the chart features for details.
The top chart shows a line with the overall database DTU percentage. The bars are the CPU percentage
that the selected query consumed.
The second chart shows the total duration of the selected query.
The bottom chart shows the total number of executions by the selected query.
3. Optionally, use sliders, use zoom buttons, or select Settings to customize how query data is displayed, or
to pick a different time range.
IMPORTANT
Query Performance Insight does not capture any DDL queries. In some cases, it might not capture all ad hoc
queries.
If your database is scope-locked with a read-only lock, the query details blade won't be able to load.
IMPORTANT
Adjusting the query view does not update the DTU line. The DTU line always shows the maximum consumption
value for the interval.
To understand database DTU consumption with more detail (up to one minute), consider creating a custom chart
in the Azure portal:
1. Select Azure SQL Database > Monitoring .
2. Select Metrics .
3. Select +Add chart.
4. Select the DTU percentage on the chart.
5. In addition, select Last 24 hours on the upper-left menu and change it to one minute.
We recommend that you use the custom DTU chart to compare with the query performance chart.
In some cases, due to the zoom level, it's possible that annotations close to each other are collapsed into a single
annotation. Query Performance Insight represents this as a group annotation icon. Selecting the group
annotation icon opens a new blade that lists the annotations.
Correlating queries and performance-tuning actions might help you to better understand your workload.
Increase the size of Query Store by connecting to a database through SSMS or the Azure portal and running the
following query. (Replace YourDB with the database name.)
ALTER DATABASE [YourDB]
SET QUERY_STORE (MAX_STORAGE_SIZE_MB = 1024);
Applying these settings will eventually make Query Store collect telemetry for new queries. If you need Query
Store to be operational right away, you can optionally choose to clear Query Store by running the following
query through SSMS or the Azure portal. (Replace YourDB with the database name.)
NOTE
Running the following query will delete all previously collected monitored telemetry in Query Store.
ALTER DATABASE [YourDB]
SET QUERY_STORE CLEAR;
Next steps
Consider using Azure SQL Analytics for advanced performance monitoring of a large fleet of single and pooled
databases, elastic pools, managed instances and instance databases.
Enable automatic tuning in the Azure portal to
monitor queries and improve workload
performance
NOTE
For Azure SQL Managed Instance, the supported option FORCE_LAST_GOOD_PLAN can only be configured through T-
SQL. The Azure portal based configuration and automatic index tuning options described in this article do not apply to
Azure SQL Managed Instance.
NOTE
Configuring automatic tuning options through the ARM (Azure Resource Manager) template is not supported at this
time.
TIP
The general recommendation is to manage the automatic tuning configuration at server level so the same configuration
settings can be applied on every database automatically. Configure automatic tuning on an individual database only if you
need that database to have different settings than others inheriting settings from the same server.
Azure portal
To enable automatic tuning on a single database , navigate to the database in the Azure portal and select
Automatic tuning .
Individual automatic tuning settings can be separately configured for each database. You can manually configure
an individual automatic tuning option, or specify that an option inherits its settings from the server.
Once you have selected your desired configuration, click Apply .
REST API
To find out more about using a REST API to enable automatic tuning on a single database, see Azure SQL
Database automatic tuning UPDATE and GET HTTP methods.
T -SQL
To enable automatic tuning on a single database via T-SQL, connect to the database and execute the following
query:
ALTER DATABASE current SET AUTOMATIC_TUNING = AUTO | INHERIT | CUSTOM
Setting automatic tuning to AUTO applies the Azure defaults. Setting it to INHERIT causes the automatic tuning configuration to be inherited from the parent server. If you choose CUSTOM, you need to configure automatic tuning manually.
To configure individual automatic tuning options via T-SQL, connect to the database and execute the query such
as this one:
ALTER DATABASE current SET AUTOMATIC_TUNING (FORCE_LAST_GOOD_PLAN = ON, CREATE_INDEX = ON, DROP_INDEX = OFF)
Setting an individual tuning option to ON overrides any setting that the database inherited and enables the tuning option. Setting it to OFF also overrides any setting that the database inherited and disables the tuning option. An automatic tuning option for which DEFAULT is specified inherits the automatic tuning configuration from the server-level settings.
IMPORTANT
In the case of active geo-replication, automatic tuning needs to be configured on the primary database only. Automatically applied tuning actions, such as index creation or deletion, are automatically replicated to geo-secondaries. Attempting to enable automatic tuning via T-SQL on the read-only secondary fails, because having a different tuning configuration on the read-only secondary is not supported.
To find out more about T-SQL options to configure automatic tuning, see ALTER DATABASE SET Options (Transact-
SQL).
Troubleshooting
Automated recommendation management is disabled
If you see error messages stating that automated recommendation management has been disabled, or was disabled by the system, the most common causes are:
Query Store is not enabled, or
Query Store is in read-only mode for a specified database, or
Query Store stopped running because it ran out of allocated storage space.
The following steps can be considered to rectify this issue:
Clean up the Query Store, or modify the data retention period to "auto" by using T-SQL, or increase Query
Store maximum size. See how to configure recommended retention and capture policy for Query Store.
Use SQL Server Management Studio (SSMS) and follow these steps:
Connect to the Azure SQL Database
Right click on the database
Go to Properties and click on Query Store
Change the Operation Mode to Read-Write
Change the Store Capture Mode to Auto
Change the Size Based Cleanup Mode to Auto
Permissions
For Azure SQL Database, managing automatic tuning in the Azure portal, or by using PowerShell or the REST API, requires membership in Azure built-in RBAC roles.
To manage automatic tuning, the minimum required permission to grant to the user is membership in the SQL
Database contributor role. You can also consider using higher privilege roles such as SQL Server Contributor,
Contributor, and Owner.
For permissions required to manage Automatic tuning with T-SQL, see Permissions for ALTER DATABASE .
Next steps
Read the Automatic tuning article to learn more about automatic tuning and how it can help you improve
your performance.
See Performance recommendations for an overview of Azure SQL Database performance recommendations.
See Query Performance Insights to learn about viewing the performance impact of your top queries.
Email notifications for automatic tuning
NOTE
This article uses the Azure Az PowerShell module, which is the recommended PowerShell module for interacting with
Azure. To get started with the Az PowerShell module, see Install Azure PowerShell. To learn how to migrate to the Az
PowerShell module, see Migrate Azure PowerShell from AzureRM to Az.
IMPORTANT
The PowerShell Azure Resource Manager module is still supported by Azure SQL Database, but all future development is
for the Az.Sql module. For these cmdlets, see AzureRM.Sql. The arguments for the commands in the Az module and in the
AzureRm modules are substantially identical.
TIP
Record your Azure Automation account name, subscription ID, and resources exactly as entered while creating the Automation app (for example, copy and paste them into a notepad). You need this information later.
If you have several Azure subscriptions for which you would like to build the same automation, you need to
repeat this process for your other subscriptions.
# Get credentials
$Conn = Get-AutomationConnection -Name AzureRunAsConnection
Connect-AzAccount -ServicePrincipal -Tenant $Conn.TenantID -ApplicationId $Conn.ApplicationID -
CertificateThumbprint $Conn.CertificateThumbprint
# Skip if master
if ($DatabaseName -eq "master") {
continue
}
Click the "Save " button in the upper right corner to save the script. When you are satisfied with the script, click
the "Publish " button to publish this runbook.
At the main runbook pane, you can choose to click on the "Star t " button to test the script. Click on the
"Output " to view results of the script executed. This output is going to be the content of your email. The sample
output from the script can be seen in the following screenshot.
Be sure to adjust the content by customizing the PowerShell script to your needs.
With the above steps, the PowerShell script to retrieve automatic tuning recommendations is loaded in Azure
Automation. The next step is to automate and schedule the email delivery job.
TIP
To send automated emails to different recipients, create separate flows. In these additional flows, change the recipient
email address in the "To" field, and the email subject line in the "Subject" field. Creating new runbooks in Azure
Automation with customized PowerShell scripts (for example, with a different Azure subscription ID) enables further customization of automated scenarios, such as emailing separate recipients about automatic tuning recommendations for separate subscriptions.
This concludes the steps required to configure the email delivery workflow. The entire flow, consisting of the three actions you built, is shown in the following image.
To test the flow, click on "Run Now " in the upper right corner inside the flow pane.
Statistics of running the automated jobs, showing success of email notifications sent out, can be seen from the
Flow analytics pane.
The Flow analytics pane is helpful for monitoring the success of job executions, and if required for
troubleshooting. In the case of troubleshooting, you also might want to examine the PowerShell script execution
log accessible through the Azure Automation app.
The final output of the automated email looks similar to the following email received after building and running
this solution:
By adjusting the PowerShell script, you can adjust the output and formatting of the automated email to your
needs.
You might further customize the solution to build email notifications based on a specific tuning event, for multiple recipients, or for multiple subscriptions or databases, depending on your scenarios.
Next steps
To learn more about how automatic tuning can help you improve database performance, see Automatic tuning in Azure SQL Database.
To enable automatic tuning in Azure SQL Database to manage your workload, see Enable automatic tuning.
To manually review and apply automatic tuning recommendations, see Find and apply performance
recommendations.
Find and apply performance recommendations
Viewing recommendations
To view and apply performance recommendations, you need the correct Azure role-based access control (Azure
RBAC) permissions in Azure. Reader, SQL DB Contributor permissions are required to view recommendations, and Owner, SQL DB Contributor permissions are required to execute any actions: create or drop indexes and cancel index creation.
Use the following steps to find performance recommendations on the Azure portal:
1. Sign in to the Azure portal.
2. Go to All ser vices > SQL databases , and select your database.
3. Navigate to Performance recommendation to view available recommendations for the selected database.
Performance recommendations are shown in the table similar to the one shown on the following figure:
Recommendations are sorted by their potential impact on performance into the following categories:
IMPACT | DESCRIPTION
You can also view the status of the historical operations. Select a recommendation or status to see more
information.
Here is an example of the "Create index" recommendation in the Azure portal.
Applying recommendations
Azure SQL Database gives you full control over how recommendations are enabled using any of the following
three options:
Apply individual recommendations one at a time.
Enable the Automatic tuning to automatically apply recommendations.
To implement a recommendation manually, run the recommended T-SQL script against your database.
Select any recommendation to view its details and then click View script to review the exact details of how the
recommendation is created.
The database remains online while the recommendation is applied; using performance recommendations or automatic tuning never takes a database offline.
Apply an individual recommendation
You can review and accept recommendations one at a time.
1. On the Recommendations page, select a recommendation.
2. On the Details page, click the Apply button.
NOTE
If SQL Database automatic tuning is enabled and you have manually discarded a recommendation from the list, that recommendation will never be applied automatically. Discarding a recommendation is a handy way to keep automatic tuning enabled while ensuring that a specific recommendation is never applied. You can revert this behavior by adding discarded recommendations back to the Recommendations list by selecting the Undo Discard option.
NOTE
The DROP_INDEX option is currently not compatible with applications that use partition switching and index hints.
Monitoring operations
Applying a recommendation might not happen instantaneously. The portal provides details regarding the status
of recommendation. The following are possible states that an index can be in:
STATUS | DESCRIPTION
Reverting a recommendation
If you used the performance recommendations to apply the recommendation (meaning you did not manually
run the T-SQL script), it automatically reverts the change if it finds the performance impact to be negative. If for
any reason you simply want to revert a recommendation, you can do the following:
1. Select a successfully applied recommendation in the Tuning history area.
2. Click Revert on the recommendation details page.
Next steps
Monitor your recommendations and continue to apply them to refine performance. Database workloads are
dynamic and change continuously. Azure SQL Database continues to monitor and provide recommendations
that can potentially improve your database's performance.
See Automatic tuning to learn more about the automatic tuning in Azure SQL Database.
See Performance recommendations for an overview of Azure SQL Database performance recommendations.
See Query Performance Insights to learn about viewing the performance impact of your top queries.
Additional resources
Query Store
CREATE INDEX
Azure role-based access control (Azure RBAC)
Create alerts for Azure SQL Database and Azure
Synapse Analytics using the Azure portal
Overview
This article shows you how to set up alerts for databases in Azure SQL Database and Azure Synapse Analytics
using the Azure portal. Alerts can send you an email or call a webhook when some metric (for example, database size or CPU usage) reaches the threshold.
NOTE
For Azure SQL Managed Instance specific instructions, see Create alerts for Azure SQL Managed Instance.
You can receive an alert based on monitoring metrics for, or events on, your Azure services.
Metric values - The alert triggers when the value of a specified metric crosses a threshold you assign in
either direction. That is, it triggers both when the condition is first met and then afterwards when that
condition is no longer being met.
Activity log events - An alert can trigger on every event, or, only when a certain number of events occur.
You can configure an alert to do the following when it triggers:
Send email notifications to the service administrator and co-administrators
Send email to additional email addresses that you specify.
Call a webhook
You can configure and get information about alert rules using
The Azure portal
PowerShell
A command-line interface (CLI)
Azure Monitor REST API
Next steps
Learn more about configuring webhooks in alerts.
Database Advisor performance recommendations
for Azure SQL Database
Performance overview
Performance overview provides a summary of your database performance, and helps you with performance
tuning and troubleshooting.
The Recommendations tile provides a breakdown of tuning recommendations for your database (top three
recommendations are shown if there are more). Clicking this tile takes you to Performance
recommendation options .
The Tuning activity tile provides a summary of the ongoing and completed tuning actions for your
database, giving you a quick view into the history of tuning activity. Clicking this tile takes you to the full
tuning history view for your database.
The Auto-tuning tile shows the auto-tuning configuration for your database (tuning options that are
automatically applied to your database). Clicking this tile opens the automation configuration dialog.
The Database queries tile shows the summary of the query performance for your database (overall DTU
usage and top resource-consuming queries). Clicking this tile takes you to Query Performance Insight.
Fix schema issues recommendations appear when Azure SQL Database notices an anomaly in the number of
schema-related SQL errors that are happening on your database. This recommendation typically appears when
your database encounters multiple schema-related errors (invalid column name, invalid object name, and so on)
within an hour.
"Schema issues" are a class of syntax errors. They occur when the definition of the SQL query and the definition
of the database schema aren't aligned. For example, one of the columns that's expected by the query might be
missing in the target table or vice-versa.
The "Fix schema issue" recommendation appears when Azure SQL Database notices an anomaly in the number
of schema-related SQL errors that are happening on your database. The following table shows the errors that
are related to schema issues:
SQL ERROR CODE | MESSAGE
201 | Procedure or function '' expects parameter '', which was not supplied.
Custom applications
Developers might consider developing custom applications using performance recommendations for Azure SQL
Database. All recommendations listed in the portal for a database can be accessed through Get-
AzSqlDatabaseRecommendedAction API.
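A rough PowerShell sketch of reading and applying recommendations follows. The names are placeholders, the advisor name CreateIndex is one of the built-in advisors, and applying a recommendation is optional:
# Placeholders for an existing database.
$params = @{
    ResourceGroupName = 'myResourceGroup'
    ServerName        = 'mysqldbserver'
    DatabaseName      = 'mySampleDatabase'
}
# List the advisors, and then the recommended actions of the CreateIndex advisor.
Get-AzSqlDatabaseAdvisor @params
$actions = Get-AzSqlDatabaseRecommendedAction @params -AdvisorName 'CreateIndex'
# Optionally apply the first recommendation by marking it for execution.
$actions | Select-Object -First 1 | ForEach-Object {
    Set-AzSqlDatabaseRecommendedActionState @params -AdvisorName 'CreateIndex' `
        -RecommendedActionName $_.RecommendedActionName -State 'PendingExecution'
}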
Next steps
For more information about automatic tuning of database indexes and query execution plans, see Azure SQL
Database automatic tuning.
For more information about automatically monitoring database performance with automated diagnostics
and root cause analysis of performance issues, see Azure SQL Intelligent Insights.
See Query Performance Insights to learn about and view the performance impact of your top queries.
Stream data into Azure SQL Database using Azure
Stream Analytics integration (preview)
Users can now ingest, process, view, and analyze real-time streaming data in a table, directly from a database in Azure SQL Database. They do so in the Azure portal by using Azure Stream Analytics. This experience enables a
wide variety of scenarios such as connected car, remote monitoring, fraud detection, and many more. In the
Azure portal, you can select an events source (Event Hub/IoT Hub), view incoming real-time events, and select a
table to store events. You can also write Azure Stream Analytics Query Language queries in the portal to
transform incoming events and store them in the selected table. This new entry point is in addition to the
creation and configuration experiences that already exist in Stream Analytics. This experience starts from the
context of your database, enabling you to quickly set up a Stream Analytics job and navigate seamlessly
between the database in Azure SQL Database and Stream Analytics experiences.
Key benefits
Minimum context switching: You can start from a database in Azure SQL Database in the portal and start
ingesting real-time data into a table without switching to any other service.
Reduced number of steps: The context of your database and table is used to pre-configure a Stream Analytics
job.
Additional ease of use with preview data: Preview incoming data from the events source (Event Hub/IoT Hub) in the context of the selected table.
IMPORTANT
An Azure Stream Analytics job can output to Azure SQL Database, Azure SQL Managed Instance, or Azure Synapse
Analytics. For more information, see Outputs.
Prerequisites
To complete the steps in this article, you need the following resources:
An Azure subscription. If you don't have an Azure subscription, create a free account.
A database in Azure SQL Database. For details, see Create a single database in Azure SQL Database.
A firewall rule allowing your computer to connect to the server. For details, see Create a server-level firewall
rule.
3. To start ingesting your streaming data into this database, select Create and give a name to your
streaming job, and then select Next: Input .
4. Enter your events source details, and then select Next: Output .
Input type : Event Hub/IoT Hub
Input alias : Enter a name to identify your events source
Subscription : Same as Azure SQL Database subscription
Event Hub namespace : Name for namespace
Event Hub name : Name of event hub within selected namespace
Event Hub policy name (Default to create new): Give a policy name
Event Hub consumer group (Default to create new): Give a consumer group name
We recommend that you create a consumer group and a policy for each new Azure Stream
Analytics job that you create from here. Consumer groups allow only five concurrent readers, so
providing a dedicated consumer group for each job will avoid any errors that might arise from
exceeding that limit. A dedicated policy allows you to rotate your key or revoke permissions
without impacting other resources.
5. Select which table you want to ingest your streaming data into. Once done, select Create .
Username , Password : Enter your credentials for SQL Server authentication. Select Validate .
Table : Select Create new or Use existing . In this flow, let's select Create new . This will create a new
table when you start the Stream Analytics job.
Test results : Select Test query to see the results of your streaming query.
Test results schema : Shows the schema of the results of your streaming query after testing.
Make sure the test results schema matches with your output schema.
Output schema : This contains schema of the table you selected in step 5 (new or existing).
Create new: If you selected this option in step 5, you won't see the schema until you start
the streaming job. When creating a new table, select the appropriate table index. For more
information about table indexing, see Clustered and Nonclustered Indexes Described.
Use existing: If you selected this option in step 5, you'll see the schema of selected table.
7. After you're done authoring & testing the query, select Save query . Select Start Stream Analytics job
to start ingesting transformed data into the SQL table. Once you finalize the following fields, start the
job.
Output start time : This defines the time of the first output of the job.
Now: The job will start now and process new incoming data.
Custom: The job will start now but will process data from a specific point in time (that can be in
the past or the future). For more information, see How to start an Azure Stream Analytics job.
Streaming units : Azure Stream Analytics is priced by the number of streaming units required to
process the data into the service. For more information, see Azure Stream Analytics pricing.
Output data error handling :
Retry: When an error occurs, Azure Stream Analytics retries writing the event indefinitely until
the write succeeds. There's no timeout for retries. Eventually all subsequent events are blocked
from processing by the event that is retrying. This option is the default output error handling
policy.
Drop: Azure Stream Analytics will drop any output event that results in a data conversion error.
The dropped events can't be recovered for reprocessing later. All transient errors (for example,
network errors) are retried regardless of the output error handling policy configuration.
SQL Database output settings : An option for inheriting the partitioning scheme of your
previous query step, to enable fully parallel topology with multiple writers to the table. For more
information, see Azure Stream Analytics output to Azure SQL Database.
Max batch count : The recommended upper limit on the number of records sent with every bulk
insert transaction.
For more information about output error handling, see Output error policies in Azure Stream
Analytics.
8. Once you start the job, you'll see the Running job in the list, and you can take following actions:
Start/stop the job : If the job is running, you can stop the job. If the job is stopped, you can start
the job.
Edit job : You can edit the query. If you want to make further changes to the job, for example adding
more inputs or outputs, open the job in Stream Analytics. The Edit option is disabled when the job is
running.
Preview output table : You can preview the table in the SQL query editor.
Open in Stream Analytics : Open the job in Stream Analytics to view monitoring, debugging
details of the job.
Next steps
Azure Stream Analytics documentation
Azure Stream Analytics solution patterns
Diagnose and troubleshoot high CPU on Azure SQL
Database
SELECT
COUNT(*) as vCores
FROM sys.dm_os_schedulers
WHERE status = N'VISIBLE ONLINE';
GO
NOTE
For databases using Gen4 hardware, the number of visible online schedulers in sys.dm_os_schedulers may be double
the number of vCores specified at database creation and shown in Azure portal.
Identify the causes of high CPU
You can measure and analyze CPU utilization using the Azure portal, Query Store interactive tools in SSMS, and
Transact-SQL queries in SSMS and Azure Data Studio.
The Azure portal and Query Store show execution statistics, such as CPU metrics, for completed queries. If you
are experiencing a current high CPU incident that may be caused by one or more ongoing long-running queries,
identify currently running queries with Transact-SQL.
Common causes of new and unusual high CPU utilization are:
New queries in the workload that use a large amount of CPU.
An increase in the frequency of regularly running queries.
Query plan regression, including regression due to parameter sensitive plan (PSP) problems, resulting in one
or more queries consuming more CPU.
A significant increase in compilation or recompilation of query plans.
Databases where queries use excessive parallelism.
To understand what is causing your high CPU incident, identify when high CPU utilization is occurring against
your database and the top queries using CPU at that time.
Examine:
Are new queries using significant CPU appearing in the workload, or are you seeing an increase in frequency
of regularly running queries? Use any of the following methods to investigate. Look for queries with limited
history (new queries), and at the frequency of execution for queries with longer history.
Review CPU metrics and related top queries in the Azure portal
Query the top recent 15 queries by CPU usage with Transact-SQL.
Use interactive Query Store tools in SSMS to identify top queries by CPU time
Are some queries in the workload using more CPU per execution than they did in the past? If so, has the
query execution plan changed? These queries may have parameter sensitive plan (PSP) problems. Use either
of the following techniques to investigate. Look for queries with multiple query execution plans with
significant variation in CPU usage:
Query the top recent 15 queries by CPU usage with Transact-SQL.
Use interactive Query Store tools in SSMS to identify top queries by CPU time
Is there evidence of a large amount of compilation or recompilation occurring? Query the most frequently
compiled queries by query hash and review how frequently they compile.
Are queries using excessive parallelism? Query your MAXDOP database scoped configuration and review
your vCore count. Excessive parallelism often occurs in databases where MAXDOP is set to 0 with a core
count higher than eight.
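As a quick check, the following is a minimal sketch that reads the database scoped MAXDOP setting; compare the value with the vCore count returned by the sys.dm_os_schedulers query shown earlier.
SELECT [name], [value], value_for_secondary
FROM sys.database_scoped_configurations
WHERE [name] = N'MAXDOP';
GO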
NOTE
Azure SQL Database requires compute resources to implement core service features such as high availability and disaster
recovery, database backup and restore, monitoring, Query Store, automatic tuning, etc. Use of these compute resources
may be particularly noticeable on databases with low vCore counts or databases in dense elastic pools. Learn more in
Resource management in Azure SQL Database.
Review CPU usage metrics and related top queries in the Azure portal
Use the Azure portal to track various CPU metrics, including the percentage of available CPU used by your
database over time. The Azure portal combines CPU metrics with information from your database's Query Store,
which allows you to identify which queries consumed CPU in your database at a given time.
Follow these steps to find CPU percentage metrics.
1. Navigate to the database in the Azure portal.
2. Under Intelligent Performance in the left menu, select Query Performance Insight .
The default view of Query Performance Insight shows 24 hours of data. CPU usage is shown as a percentage of
total available CPU used for the database.
The top five queries running in that period are displayed in vertical bars above the CPU usage graph. Select a
band of time on the chart or use the Customize menu to explore specific time periods. You may also increase
the number of queries shown.
Select each query ID exhibiting high CPU to open details for the query. Details include query text along with
performance history for the query. Examine if CPU has increased for the query recently.
Take note of the query ID to further investigate the query plan using Query Store in the following section.
Review query plans for top queries identified in the Azure portal
Follow these steps to use a query ID in SSMS's interactive Query Store tools to examine the query's execution
plan over time.
1. Open SSMS.
2. Connect to your Azure SQL Database in Object Explorer.
3. Expand the database node in Object Explorer
4. Expand the Query Store folder.
5. Open the Tracked Queries pane.
6. Enter the query ID in the Tracking query box at the top left of the screen and press Enter.
7. If necessary, select Configure to adjust the time interval to match the time when high CPU utilization was
occurring.
The page will show the execution plan(s) and related metrics for the query over the most recent 24 hours.
Identify currently running queries with Transact-SQL
Transact-SQL allows you to identify currently running queries with CPU time they have used so far. You can also
use Transact-SQL to query recent CPU usage in your database, top queries by CPU, and queries that compiled
the most often.
You can query CPU metrics with SQL Server Management Studio (SSMS), Azure Data Studio, or the Azure
portal's query editor (preview). When using SSMS or Azure Data Studio, open a new query window and connect
it to your database (not the master database).
Find currently running queries with CPU usage and execution plans by executing the following query. CPU time
is returned in milliseconds.
SELECT
req.session_id,
req.status,
req.start_time,
req.cpu_time AS 'cpu_time_ms',
req.logical_reads,
req.dop,
s.login_name,
s.host_name,
s.program_name,
object_name(st.objectid,st.dbid) 'ObjectName',
REPLACE (REPLACE (SUBSTRING (st.text,(req.statement_start_offset/2) + 1,
((CASE req.statement_end_offset WHEN -1 THEN DATALENGTH(st.text)
ELSE req.statement_end_offset END - req.statement_start_offset)/2) + 1),
CHAR(10), ' '), CHAR(13), ' ') AS statement_text,
qp.query_plan,
qsx.query_plan as query_plan_with_in_flight_statistics
FROM sys.dm_exec_requests as req
JOIN sys.dm_exec_sessions as s on req.session_id=s.session_id
CROSS APPLY sys.dm_exec_sql_text(req.sql_handle) as st
OUTER APPLY sys.dm_exec_query_plan(req.plan_handle) as qp
OUTER APPLY sys.dm_exec_query_statistics_xml(req.session_id) as qsx
ORDER BY req.cpu_time desc;
GO
This query returns two copies of the execution plan. The column query_plan contains the execution plan from
sys.dm_exec_query_plan(). This version of the query plan contains only estimates of row counts and does not
contain any execution statistics.
If the column query_plan_with_in_flight_statistics returns an execution plan, this plan provides more
information. The query_plan_with_in_flight_statistics column returns data from
sys.dm_exec_query_statistics_xml(), which includes "in flight" execution statistics such as the actual number of
rows returned so far by a currently running query.
Review CPU usage metrics for the last hour
The following query against sys.dm_db_resource_stats returns the average CPU usage over 15-second intervals
for approximately the last hour.
SELECT
end_time,
avg_cpu_percent,
avg_instance_cpu_percent
FROM sys.dm_db_resource_stats
ORDER BY end_time DESC;
GO
It is important to not focus only on the avg_cpu_percent column. The avg_instance_cpu_percent column
includes CPU used by both user and internal workloads. If avg_instance_cpu_percent is close to 100%, CPU
resources are saturated. In this case, you should troubleshoot high CPU if app throughput is insufficient or
query latency is high.
Learn more in Resource management in Azure SQL Database.
Review the examples in sys.dm_db_resource_stats for more queries.
Query the top recent 15 queries by CPU usage
Query Store tracks execution statistics, including CPU usage, for queries. The following query returns the top 15
queries that have run in the last 2 hours, sorted by CPU usage. CPU time is returned in milliseconds.
WITH AggregatedCPU AS
(SELECT
q.query_hash,
SUM(count_executions * avg_cpu_time / 1000.0) AS total_cpu_ms,
SUM(count_executions * avg_cpu_time / 1000.0)/ SUM(count_executions) AS avg_cpu_ms,
MAX(rs.max_cpu_time / 1000.00) AS max_cpu_ms,
MAX(max_logical_io_reads) max_logical_reads,
COUNT(DISTINCT p.plan_id) AS number_of_distinct_plans,
COUNT(DISTINCT p.query_id) AS number_of_distinct_query_ids,
SUM(CASE WHEN rs.execution_type_desc='Aborted' THEN count_executions ELSE 0 END) AS
aborted_execution_count,
SUM(CASE WHEN rs.execution_type_desc='Regular' THEN count_executions ELSE 0 END) AS
regular_execution_count,
SUM(CASE WHEN rs.execution_type_desc='Exception' THEN count_executions ELSE 0 END) AS
exception_execution_count,
SUM(count_executions) AS total_executions,
MIN(qt.query_sql_text) AS sampled_query_text
FROM sys.query_store_query_text AS qt
JOIN sys.query_store_query AS q ON qt.query_text_id=q.query_text_id
JOIN sys.query_store_plan AS p ON q.query_id=p.query_id
JOIN sys.query_store_runtime_stats AS rs ON rs.plan_id=p.plan_id
JOIN sys.query_store_runtime_stats_interval AS rsi ON
rsi.runtime_stats_interval_id=rs.runtime_stats_interval_id
WHERE
rs.execution_type_desc IN ('Regular', 'Aborted', 'Exception') AND
rsi.start_time>=DATEADD(HOUR, -2, GETUTCDATE())
GROUP BY q.query_hash),
OrderedCPU AS
(SELECT *,
ROW_NUMBER() OVER (ORDER BY total_cpu_ms DESC, query_hash ASC) AS RN
FROM AggregatedCPU)
SELECT *
FROM OrderedCPU AS OD
WHERE OD.RN<=15
ORDER BY total_cpu_ms DESC;
GO
This query groups by a hashed value of the query. If you find a high value in the number_of_distinct_query_ids
column, investigate if a frequently run query isn't properly parameterized. Non-parameterized queries may be
compiled on each execution, which consumes significant CPU and affects the performance of Query Store.
To learn more about an individual query, note the query hash and use it to Identify the CPU usage and query
plan for a given query hash.
Query the most frequently compiled queries by query hash
Compiling a query plan is a CPU-intensive process. Azure SQL Database caches plans in memory for reuse. Some
queries may be frequently compiled if they are not parameterized or if RECOMPILE hints force recompilation.
Query Store tracks the number of times queries are compiled. Run the following query to identify the top 20
queries in Query Store by compilation count, along with the average number of compilations per minute:
SELECT TOP (20)
query_hash,
MIN(initial_compile_start_time) as initial_compile_start_time,
MAX(last_compile_start_time) as last_compile_start_time,
CASE WHEN DATEDIFF(mi,MIN(initial_compile_start_time), MAX(last_compile_start_time)) > 0
THEN 1.* SUM(count_compiles) / DATEDIFF(mi,MIN(initial_compile_start_time),
MAX(last_compile_start_time))
ELSE 0
END as avg_compiles_minute,
SUM(count_compiles) as count_compiles
FROM sys.query_store_query AS q
GROUP BY query_hash
ORDER BY count_compiles DESC;
GO
To learn more about an individual query, note the query hash and use it to Identify the CPU usage and query
plan for a given query hash.
Identify the CPU usage and query plan for a given query hash
Run the following query to find the individual query ID, query text, and query execution plans for a given
query_hash . CPU time is returned in milliseconds.
Replace the value for the @query_hash variable with a valid query_hash for your workload.
DECLARE @query_hash binary(8) = 0x0000000000000000; -- placeholder: replace with a query_hash from your workload
with query_ids as (
SELECT
q.query_hash,
q.query_id,
p.query_plan_hash,
SUM(qrs.count_executions) * AVG(qrs.avg_cpu_time)/1000. as total_cpu_time_ms,
SUM(qrs.count_executions) AS sum_executions,
AVG(qrs.avg_cpu_time)/1000. AS avg_cpu_time_ms
FROM sys.query_store_query q
JOIN sys.query_store_plan p on q.query_id=p.query_id
JOIN sys.query_store_runtime_stats qrs on p.plan_id = qrs.plan_id
WHERE q.query_hash = @query_hash
GROUP BY q.query_id, q.query_hash, p.query_plan_hash)
SELECT qid.*,
qt.query_sql_text,
p.count_compiles,
TRY_CAST(p.query_plan as XML) as query_plan
FROM query_ids as qid
JOIN sys.query_store_query AS q ON qid.query_id=q.query_id
JOIN sys.query_store_query_text AS qt on q.query_text_id = qt.query_text_id
JOIN sys.query_store_plan AS p ON qid.query_id=p.query_id and qid.query_plan_hash=p.query_plan_hash
ORDER BY total_cpu_time_ms DESC;
GO
This query returns one row for each variation of an execution plan for the query_hash across the entire history
of your Query Store. The results are sorted by total CPU time.
Use interactive Query Store tools to track historic CPU utilization
If you prefer to use graphic tools, follow these steps to use the interactive Query Store tools in SSMS.
1. Open SSMS and connect to your database in Object Explorer.
2. Expand the database node in Object Explorer
3. Expand the Query Store folder.
4. Open the Overall Resource Consumption pane.
Total CPU time for your database over the last month in milliseconds is shown in the bottom-left portion of the
pane. In the default view, CPU time is aggregated by day.
Select Configure in the top right of the pane to select a different time period. You can also change the unit of
aggregation. For example, you can choose to see data for a specific date range and aggregate the data by hour.
Use interactive Query Store tools to identify top queries by CPU time
Select a bar in the chart to drill in and see queries running in a specific time period. The Top Resource
Consuming Queries pane will open. Alternately, you can open Top Resource Consuming Queries from the
Query Store node under your database in Object Explorer directly.
In the default view, the Top Resource Consuming Queries pane shows queries by Duration (ms) . Duration
may sometimes be lower than CPU time: queries using parallelism may use much more CPU time than their
overall duration. Duration may also be higher than CPU time if waits were significant. To see queries by CPU
time, select the Metric drop-down at the top left of the pane and select CPU Time(ms) .
Each bar in the top-left quadrant represents a query. Select a bar to see details for that query. The top-right
quadrant of the screen shows how many execution plans are in Query Store for that query and maps them
according to when they were executed and how much of your selected metric was used. Select each Plan ID to
control which query execution plan is displayed in the bottom half of the screen.
NOTE
For a guide to interpreting Query Store views and the shapes which appear in the Top Resource Consumers view, see Best
practices with Query Store
Consider experimenting with small changes in the MAXDOP configuration at the database level, or modifying
individual problematic queries to use a non-default MAXDOP using a query hint. For more information, see the
examples in configure max degree of parallelism.
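As an illustration of the query-hint approach, the following is a minimal sketch; dbo.SalesOrderDetail is a hypothetical table and 2 is an arbitrary cap.
SELECT ProductID, SUM(OrderQty) AS TotalQty
FROM dbo.SalesOrderDetail
GROUP BY ProductID
OPTION (MAXDOP 2); -- cap parallelism for this query only
GO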
Next steps
Learn more about monitoring and performance tuning Azure SQL Database in the following articles:
Monitoring Azure SQL Database and Azure SQL Managed Instance performance using dynamic
management views
SQL Server index architecture and design guide
Enable automatic tuning to monitor queries and improve workload performance
Query processing architecture guide
Best practices with Query Store
Detectable types of query performance bottlenecks in Azure SQL Database
Analyze and prevent deadlocks in Azure SQL Database
Understand and resolve Azure SQL Database
blocking problems
Objective
The article describes blocking in Azure SQL databases and demonstrates how to troubleshoot and resolve
blocking.
In this article, the term connection refers to a single logged-on session of the database. Each connection appears
as a session ID (SPID) or session_id in many DMVs. Each of these SPIDs is often referred to as a process,
although it is not a separate process context in the usual sense. Rather, each SPID consists of the server
resources and data structures necessary to service the requests of a single connection from a given client. A
single client application may have one or more connections. From the perspective of Azure SQL Database, there
is no difference between multiple connections from a single client application on a single client computer and
multiple connections from multiple client applications or multiple client computers; they are atomic. One
connection can block another connection, regardless of the source client.
For information on troubleshooting deadlocks, see Analyze and prevent deadlocks in Azure SQL Database.
NOTE
This content is focused on Azure SQL Database. Azure SQL Database is based on the latest stable version of the
Microsoft SQL Server database engine, so much of the content is similar though troubleshooting options and tools may
differ. For more on blocking in SQL Server, see Understand and resolve SQL Server blocking problems.
Understand blocking
Blocking is an unavoidable and by-design characteristic of any relational database management system
(RDBMS) with lock-based concurrency. Blocking in a database in Azure SQL Database occurs when one session
holds a lock on a specific resource and a second SPID attempts to acquire a conflicting lock type on the same
resource. Typically, the time frame for which the first SPID locks the resource is small. When the owning session
releases the lock, the second connection is then free to acquire its own lock on the resource and continue
processing. This is normal behavior and may happen many times throughout the course of a day with no
noticeable effect on system performance.
Each new database in Azure SQL Database has the read committed snapshot (RCSI) database setting enabled by
default. Blocking between sessions reading data and sessions writing data is minimized under RCSI, which uses
row versioning to increase concurrency. However, blocking and deadlocks may still occur in databases in Azure
SQL Database because:
Queries that modify data may block one another.
Queries may run under isolation levels that increase blocking. Isolation levels may be specified in application
connection strings, query hints, or SET statements in Transact-SQL.
RCSI may be disabled, causing the database to use shared (S) locks to protect SELECT statements run under
the read committed isolation level. This may increase blocking and deadlocks.
Snapshot isolation level is also enabled by default for new databases in Azure SQL Database. Snapshot isolation
is an additional row-based isolation level that provides transaction-level consistency for data and which uses
row versions to select rows to update. To use snapshot isolation, queries or connections must explicitly set their
transaction isolation level to SNAPSHOT . This may only be done when snapshot isolation is enabled for the
database.
You can identify if RCSI and/or snapshot isolation are enabled with Transact-SQL. Connect to your database in
Azure SQL Database and run the following query:
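A minimal sketch of such a check against sys.databases for the current database:
SELECT [name],
    is_read_committed_snapshot_on,
    snapshot_isolation_state_desc
FROM sys.databases
WHERE [name] = DB_NAME();
GO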
If RCSI is enabled, the is_read_committed_snapshot_on column will return the value 1 . If snapshot isolation is
enabled, the snapshot_isolation_state_desc column will return the value ON .
The duration and transaction context of a query determine how long its locks are held and, thereby, their effect
on other queries. SELECT statements run under RCSI do not acquire shared (S) locks on the data being read, and
therefore do not block transactions that are modifying data. For INSERT, UPDATE, and DELETE statements, the
locks are held during the query, both for data consistency and to allow the query to be rolled back if necessary.
For queries executed within an explicit transaction, the type of locks and duration for which the locks are held
are determined by the type of query, the transaction isolation level, and whether lock hints are used in the query.
For a description of locking, lock hints, and transaction isolation levels, see the following articles:
Locking in the Database Engine
Customizing Locking and Row Versioning
Lock Modes
Lock Compatibility
Transactions
When locking and blocking persists to the point where there is a detrimental effect on system performance, it is
due to one of the following reasons:
A SPID holds locks on a set of resources for an extended period of time before releasing them. This type
of blocking resolves itself over time but can cause performance degradation.
A SPID holds locks on a set of resources and never releases them. This type of blocking does not resolve
itself and prevents access to the affected resources indefinitely.
In the first scenario, the situation can be very fluid as different SPIDs cause blocking on different resources over
time, creating a moving target. These situations are difficult to troubleshoot using SQL Server Management
Studio to narrow down the issue to individual queries. In contrast, the second situation results in a consistent
state that can be easier to diagnose.
NOTE
For more application development guidance, see Troubleshooting connectivity issues and other errors with Azure SQL
Database and Azure SQL Managed Instance and Transient Fault Handling.
Troubleshoot blocking
Regardless of which blocking situation you are in, the methodology for troubleshooting locking is the same, and
it shapes the rest of this article. The concept is to find the head blocker and identify what that query is doing
and why it is blocking. Once the problematic query is identified (that is, what is holding locks for the prolonged
period), the next step is to analyze and determine why the blocking is happening. After you understand the why,
you can make changes by redesigning the query and the transaction.
Steps in troubleshooting:
1. Identify the main blocking session (head blocker)
2. Find the query and transaction that is causing the blocking (what is holding locks for a prolonged period)
3. Analyze/understand why the prolonged blocking occurs
4. Resolve blocking issue by redesigning query and transaction
Now let's dive in to discuss how to pinpoint the main blocking session with an appropriate data capture.
If you already have a particular session identified, you can use DBCC INPUTBUFFER(<session_id>) to find the
last statement that was submitted by a session. Similar results can be returned with the
sys.dm_exec_input_buffer dynamic management function (DMF), in a result set that is easier to query
and filter, providing the session_id and the request_id. For example, to return the most recent query
submitted by session_id 66 and request_id 0:
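A minimal form of that call:
SELECT * FROM sys.dm_exec_input_buffer(66, 0);
GO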
Run this sample query to find the actively executing queries and their current SQL batch text or input
buffer text, using the sys.dm_exec_sql_text or sys.dm_exec_input_buffer DMVs. If the data returned by the
text field of sys.dm_exec_sql_text is NULL, the query is not currently executing. In that case, the
event_info field of sys.dm_exec_input_buffer will contain the last command string passed to the SQL
engine. This query can also be used to identify sessions blocking other sessions, including a list of
session_ids blocked per session_id.
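A minimal sketch along these lines combines both text sources and exposes the blocking_session_id for each request:
SELECT
    r.session_id,
    r.blocking_session_id,
    r.status,
    r.wait_type,
    r.wait_time,
    COALESCE(t.[text], ib.event_info) AS batch_or_input_buffer_text
FROM sys.dm_exec_requests AS r
INNER JOIN sys.dm_exec_sessions AS s ON s.session_id = r.session_id
OUTER APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
OUTER APPLY sys.dm_exec_input_buffer(r.session_id, r.request_id) AS ib
WHERE s.is_user_process = 1
ORDER BY r.session_id;
GO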
Run this more elaborate sample query, provided by Microsoft Support, to identify the head of a multiple
session blocking chain, including the query text of the sessions involved in a blocking chain.
WITH cteHead ( session_id,request_id,wait_type,wait_resource,last_wait_type,is_user_process,request_cpu_time
,request_logical_reads,request_reads,request_writes,wait_time,blocking_session_id,memory_usage
,session_cpu_time,session_reads,session_writes,session_logical_reads
,percent_complete,est_completion_time,request_start_time,request_status,command
,plan_handle,sql_handle,statement_start_offset,statement_end_offset,most_recent_sql_handle
,session_status,group_id,query_hash,query_plan_hash)
AS ( SELECT sess.session_id, req.request_id, LEFT (ISNULL (req.wait_type, ''), 50) AS 'wait_type'
, LEFT (ISNULL (req.wait_resource, ''), 40) AS 'wait_resource', LEFT (req.last_wait_type, 50) AS
'last_wait_type'
, sess.is_user_process, req.cpu_time AS 'request_cpu_time', req.logical_reads AS 'request_logical_reads'
, req.reads AS 'request_reads', req.writes AS 'request_writes', req.wait_time,
req.blocking_session_id,sess.memory_usage
, sess.cpu_time AS 'session_cpu_time', sess.reads AS 'session_reads', sess.writes AS 'session_writes',
sess.logical_reads AS 'session_logical_reads'
, CONVERT (decimal(5,2), req.percent_complete) AS 'percent_complete', req.estimated_completion_time AS
'est_completion_time'
, req.start_time AS 'request_start_time', LEFT (req.status, 15) AS 'request_status', req.command
, req.plan_handle, req.[sql_handle], req.statement_start_offset, req.statement_end_offset,
conn.most_recent_sql_handle
, LEFT (sess.status, 15) AS 'session_status', sess.group_id, req.query_hash, req.query_plan_hash
FROM sys.dm_exec_sessions AS sess
LEFT OUTER JOIN sys.dm_exec_requests AS req ON sess.session_id = req.session_id
LEFT OUTER JOIN sys.dm_exec_connections AS conn on conn.session_id = sess.session_id
)
, cteBlockingHierarchy (head_blocker_session_id, session_id, blocking_session_id, wait_type,
wait_duration_ms,
wait_resource, statement_start_offset, statement_end_offset, plan_handle, sql_handle,
most_recent_sql_handle, [Level])
AS ( SELECT head.session_id AS head_blocker_session_id, head.session_id AS session_id,
head.blocking_session_id
, head.wait_type, head.wait_time, head.wait_resource, head.statement_start_offset,
head.statement_end_offset
, head.plan_handle, head.sql_handle, head.most_recent_sql_handle, 0 AS [Level]
FROM cteHead AS head
WHERE (head.blocking_session_id IS NULL OR head.blocking_session_id = 0)
AND head.session_id IN (SELECT DISTINCT blocking_session_id FROM cteHead WHERE blocking_session_id != 0)
UNION ALL
SELECT h.head_blocker_session_id, blocked.session_id, blocked.blocking_session_id, blocked.wait_type,
blocked.wait_time, blocked.wait_resource, h.statement_start_offset, h.statement_end_offset,
h.plan_handle, h.sql_handle, h.most_recent_sql_handle, [Level] + 1
FROM cteHead AS blocked
INNER JOIN cteBlockingHierarchy AS h ON h.session_id = blocked.blocking_session_id and
h.session_id!=blocked.session_id --avoid infinite recursion for latch type of blocking
WHERE h.wait_type COLLATE Latin1_General_BIN NOT IN ('EXCHANGE', 'CXPACKET') or h.wait_type is null
)
SELECT bh.*, txt.text AS blocker_query_or_most_recent_query
FROM cteBlockingHierarchy AS bh
OUTER APPLY sys.dm_exec_sql_text (ISNULL ([sql_handle], most_recent_sql_handle)) AS txt;
To catch long-running or uncommitted transactions, use another set of DMVs for viewing current open
transactions, including sys.dm_tran_database_transactions, sys.dm_tran_session_transactions,
sys.dm_exec_connections, and sys.dm_exec_sql_text. There are several DMVs associated with tracking
transactions, see more DMVs on transactions here.
SELECT [s_tst].[session_id],
[database_name] = DB_NAME (s_tdt.database_id),
[s_tdt].[database_transaction_begin_time],
[sql_text] = [s_est].[text]
FROM sys.dm_tran_database_transactions [s_tdt]
INNER JOIN sys.dm_tran_session_transactions [s_tst] ON [s_tst].[transaction_id] = [s_tdt].[transaction_id]
INNER JOIN sys.dm_exec_connections [s_ec] ON [s_ec].[session_id] = [s_tst].[session_id]
CROSS APPLY sys.dm_exec_sql_text ([s_ec].[most_recent_sql_handle]) AS [s_est];
Query sys.dm_os_waiting_tasks , which operates at the thread/task layer of the database engine. It returns
information about the SQL wait type the request is currently experiencing. Like sys.dm_exec_requests , only
active requests are returned by sys.dm_os_waiting_tasks .
NOTE
For much more on wait types including aggregated wait stats over time, see the DMV sys.dm_db_wait_stats. This DMV
returns aggregate wait stats for the current database only.
Use the sys.dm_tran_locks DMV for more granular information on what locks have been placed by queries.
This DMV can return large amounts of data on a production database, and is useful for diagnosing what
locks are currently held.
Due to the INNER JOIN on sys.dm_os_waiting_tasks , the following query restricts the output from
sys.dm_tran_locks only to currently blocked requests, their wait status, and their locks:
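A minimal sketch of that join, correlating lock_owner_address with resource_address:
SELECT
    tl.request_session_id,
    wt.blocking_session_id,
    tl.resource_type,
    tl.resource_database_id,
    tl.resource_associated_entity_id,
    tl.request_mode,
    tl.request_status,
    wt.wait_type,
    wt.wait_duration_ms,
    wt.resource_description
FROM sys.dm_tran_locks AS tl
INNER JOIN sys.dm_os_waiting_tasks AS wt
    ON tl.lock_owner_address = wt.resource_address
ORDER BY wt.wait_duration_ms DESC;
GO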
With DMVs, storing the query results over time will provide data points that will allow you to review blocking
over a specified time interval to identify persisted blocking or trends.
NOTE
For detailed information on deadlocks, see Analyze and prevent deadlocks in Azure SQL Database.
sys.dm_exec_sessions.open_transaction_count
This field tells you the number of open transactions in this session. If this value is greater than 0,
the SPID is within an open transaction and may be holding locks acquired by any statement within
the transaction.
sys.dm_exec_requests.open_transaction_count
Similarly, this field tells you the number of open transactions in this request. If this value is greater
than 0, the SPID is within an open transaction and may be holding locks acquired by any statement
within the transaction.
sys.dm_exec_requests.wait_type , wait_time , and last_wait_type
If the sys.dm_exec_requests.wait_type is NULL, the request is not currently waiting for anything and
the last_wait_type value indicates the last wait_type that the request encountered. For more
information about sys.dm_os_wait_stats and a description of the most common wait types, see
sys.dm_os_wait_stats. The wait_time value can be used to determine if the request is making
progress. When a query against the sys.dm_exec_requests table returns a value in the wait_time
column that is less than the wait_time value from a previous query of sys.dm_exec_requests , this
indicates that the prior lock was acquired and released and is now waiting on a new lock
(assuming non-zero wait_time ). This can be verified by comparing the wait_resource between
sys.dm_exec_requests output, which displays the resource for which the request is waiting.
Other columns
The remaining columns in sys.dm_exec_sessions and sys.dm_exec_requests can provide insight into
the root of a problem as well. Their usefulness varies depending on the circumstances of the
problem. For example, you can determine if the problem happens only from certain clients
(hostname), on certain network libraries (net_library), when the last batch submitted by a SPID
was last_request_start_time in sys.dm_exec_sessions , how long a request had been running
using start_time in sys.dm_exec_requests , and so on.
The output of the second query indicates that the transaction nesting level is one. All the locks acquired in
the transaction are still held until the transaction is committed or rolled back. If applications
explicitly open and commit transactions, a communication or other error could leave the session and its
transaction in an open state.
Use the script earlier in this article based on sys.dm_tran_active_transactions to identify currently
uncommitted transactions across the instance.
Resolutions :
Additionally, this class of blocking problem may also be a performance problem that requires you
to pursue it as such. If the query execution time can be diminished, the query time-out or cancel
would not occur. It is important that the application is able to handle the time-out or cancel
scenarios should they arise, but you may also benefit from examining the performance of the
query.
Applications must properly manage transaction nesting levels, or they may cause a blocking
problem following the cancellation of the query in this manner. Consider the following:
In the error handler of the client application, execute IF @@TRANCOUNT > 0 ROLLBACK TRAN
following any error, even if the client application does not believe a transaction is open.
Checking for open transactions is required, because a stored procedure called during the batch
could have started a transaction without the client application's knowledge. Certain conditions,
such as canceling the query, prevent the procedure from executing past the current statement,
so even if the procedure has logic to check IF @@ERROR <> 0 and abort the transaction, this
rollback code will not be executed in such cases.
If connection pooling is being used in an application that opens the connection and runs a
small number of queries before releasing the connection back to the pool, such as a Web-based
application, temporarily disabling connection pooling may help alleviate the problem until the
client application is modified to handle the errors appropriately. By disabling connection
pooling, releasing the connection will cause a physical disconnect of the Azure SQL Database
connection, resulting in the server rolling back any open transactions.
Use SET XACT_ABORT ON for the connection, or in any stored procedures that begin transactions
and are not cleaning up following an error. In the event of a run-time error, this setting will
abort any open transactions and return control to the client. For more information, review SET
XACT_ABORT (Transact-SQL).
NOTE
The connection is not reset until it is reused from the connection pool, so it is possible that a user could open a
transaction and then release the connection to the connection pool, but it might not be reused for several
seconds, during which time the transaction would remain open. If the connection is not reused, the transaction
will be aborted when the connection times out and is removed from the connection pool. Thus, it is optimal for
the client application to abort transactions in their error handler or use SET XACT_ABORT ON to avoid this
potential delay.
Cau t i on
Following SET XACT_ABORT ON , T-SQL statements following a statement that causes an error will not be
executed. This could affect the intended flow of existing code.
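A minimal sketch that combines these recommendations; dbo.usp_DoWork is a hypothetical procedure that may open its own transaction.
SET XACT_ABORT ON; -- abort and roll back open transactions on most run-time errors
BEGIN TRY
    EXEC dbo.usp_DoWork;
END TRY
BEGIN CATCH
    -- Roll back any transaction left open, even if this session did not knowingly start one
    IF @@TRANCOUNT > 0
        ROLLBACK TRANSACTION;
    THROW; -- surface the original error to the caller
END CATCH;
GO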
3. Blocking caused by a SPID whose corresponding client application did not fetch all result rows to
completion
After sending a query to the server, all applications must immediately fetch all result rows to completion.
If an application does not fetch all result rows, locks can be left on the tables, blocking other users. If you
are using an application that transparently submits SQL statements to the server, the application must
fetch all result rows. If it does not (and if it cannot be configured to do so), you may be unable to resolve
the blocking problem. To avoid the problem, you can restrict poorly behaved applications to a reporting
or a decision-support database, separate from the main OLTP database.
The impact of this scenario is reduced when read committed snapshot is enabled on the database, which
is the default configuration in Azure SQL Database. Learn more in the Understand blocking section of this
article.
NOTE
See guidance for retry logic for applications connecting to Azure SQL Database.
Resolution : The application must be rewritten to fetch all rows of the result to completion. This does not
rule out the use of OFFSET and FETCH in the ORDER BY clause of a query to perform server-side paging.
4. Blocking caused by a session in a rollback state
A data modification query that is KILLed, or canceled outside of a user-defined transaction, will be rolled
back. This can also occur as a side effect of the client network session disconnecting, or when a request is
selected as the deadlock victim. This can often be identified by observing the output of
sys.dm_exec_requests , which may indicate the ROLLBACK command, and the percent_complete column
may show progress.
Thanks to the accelerated database recovery feature, which is enabled in Azure SQL Database, lengthy rollbacks
should be rare.
Resolution : Wait for the SPID to finish rolling back the changes that were made.
To avoid this situation, do not perform large batch write operations or index creation or maintenance
operations during busy hours on OLTP systems. If possible, perform such operations during periods of
low activity.
5. Blocking caused by an orphaned connection
If the client application traps errors or the client workstation is restarted, the network session to the
server may not be immediately canceled under some conditions. From the Azure SQL Database
perspective, the client still appears to be present, and any locks acquired may still be retained. For more
information, see How to troubleshoot orphaned connections in SQL Server.
Resolution : If the client application has disconnected without appropriately cleaning up its resources,
you can terminate the SPID by using the KILL command. The KILL command takes the SPID value as
input. For example, to kill SPID 99, issue the following command:
KILL 99
See also
Analyze and prevent deadlocks in Azure SQL Database
Monitoring and performance tuning in Azure SQL Database and Azure SQL Managed Instance
Monitoring performance by using the Query Store
Transaction Locking and Row Versioning Guide
SET TRANSACTION ISOLATION LEVEL
Quickstart: Extended events in SQL Server
Intelligent Insights using AI to monitor and troubleshoot database performance
Next steps
Azure SQL Database: Improving Performance Tuning with Automatic Tuning
Deliver consistent performance with Azure SQL
Troubleshooting connectivity issues and other errors with Azure SQL Database and Azure SQL Managed
Instance
Transient Fault Handling
Configure the max degree of parallelism (MAXDOP) in Azure SQL Database
Diagnose and troubleshoot high CPU on Azure SQL Database
Analyze and prevent deadlocks in Azure SQL
Database
NOTE
Learn more about the criteria for choosing a deadlock victim in the Deadlock process list section of this article.
The application with the transaction chosen as the deadlock victim should retry the transaction, which usually
completes after the other transaction or transactions involved in the deadlock have finished.
It is a best practice to introduce a short, randomized delay before retry to avoid encountering the same deadlock
again. Learn more about how to design retry logic for transient errors.
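A minimal sketch of such retry logic in Transact-SQL; dbo.usp_UpdateInventory is a hypothetical procedure, and the fixed delay would ideally be randomized.
DECLARE @retries int = 3;
WHILE @retries > 0
BEGIN
    BEGIN TRY
        BEGIN TRANSACTION;
            EXEC dbo.usp_UpdateInventory; -- hypothetical work that may deadlock
        COMMIT TRANSACTION;
        SET @retries = 0; -- success: exit the loop
    END TRY
    BEGIN CATCH
        IF @@TRANCOUNT > 0 ROLLBACK TRANSACTION;
        IF ERROR_NUMBER() = 1205 AND @retries > 1
        BEGIN
            SET @retries -= 1;
            WAITFOR DELAY '00:00:00.200'; -- short pause before retrying
        END
        ELSE
        BEGIN
            SET @retries = 0;
            THROW; -- not a deadlock, or retries exhausted
        END
    END CATCH;
END;
GO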
Default isolation level in Azure SQL Database
New databases in Azure SQL Database enable read committed snapshot (RCSI) by default. RCSI changes the
behavior of the read committed isolation level to use row-versioning to provide statement-level consistency
without the use of shared (S) locks for SELECT statements.
With RCSI enabled:
Statements reading data do not block statements modifying data.
Statements modifying data do not block statements reading data.
Snapshot isolation level is also enabled by default for new databases in Azure SQL Database. Snapshot isolation
is an additional row-based isolation level that provides transaction-level consistency for data and which uses
row versions to select rows to update. To use snapshot isolation, queries or connections must explicitly set their
transaction isolation level to SNAPSHOT . This may only be done when snapshot isolation is enabled for the
database.
You can identify if RCSI and/or snapshot isolation are enabled with Transact-SQL. Connect to your database in
Azure SQL Database and run the following query:
If RCSI is enabled, the is_read_committed_snapshot_on column will return the value 1 . If snapshot isolation is
enabled, the snapshot_isolation_state_desc column will return the value ON .
If RCSI has been disabled for a database in Azure SQL Database, investigate why RCSI was disabled before re-
enabling it. Application code may have been written expecting that queries reading data will be blocked by
queries writing data, resulting in incorrect results from race conditions when RCSI is enabled.
Interpreting deadlock events
A deadlock event is emitted after the deadlock manager in Azure SQL Database detects a deadlock and selects a
transaction as the victim. In other words, if you set up alerts for deadlocks, the notification fires after an
individual deadlock has been resolved. There is no user action that needs to be taken for that deadlock.
Applications should be written to include retry logic so that they automatically continue after receiving error
1205, "Transaction (Process ID N) was deadlocked on lock resources with another process and has been chosen
as the deadlock victim. Rerun the transaction."
It's useful to set up alerts, however, as deadlocks may reoccur. Deadlock alerts enable you to investigate if a
pattern of repeat deadlocks is happening in your database, in which case you may choose to take action to
prevent deadlocks from reoccurring. Learn more about alerting in the Monitor and alert on deadlocks section of
this article.
Top methods to prevent deadlocks
The lowest risk approach to preventing deadlocks from reoccurring is generally to tune nonclustered indexes to
optimize queries involved in the deadlock.
Risk is low for this approach because tuning nonclustered indexes doesn't require changes to the query code
itself, reducing the risk of a user error when rewriting Transact-SQL that causes incorrect data to be returned
to the user.
Effective nonclustered index tuning helps queries find the data to read and modify more efficiently. By
reducing the amount of data that a query needs to access, the likelihood of blocking is reduced and
deadlocks can often be prevented.
In some cases, creating or tuning a clustered index can reduce blocking and deadlocks. Because the clustered
index is included in all nonclustered index definitions, creating or modifying a clustered index can be an IO
intensive and time consuming operation on larger tables with existing nonclustered indexes. Learn more about
Clustered index design guidelines.
When index tuning isn't successful at preventing deadlocks, other methods are available:
If the deadlock occurs only when a particular plan is chosen for one of the queries involved in the deadlock,
forcing a query plan with Query Store may prevent deadlocks from reoccurring.
Rewriting Transact-SQL for one or more transactions involved in the deadlock can also help prevent
deadlocks. Breaking apart explicit transactions into smaller transactions requires careful coding and testing
to ensure data validity when concurrent modifications occur.
Learn more about each of these approaches in the Prevent a deadlock from reoccurring section of this article.
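As a sketch of the plan-forcing option mentioned above, the query_id and plan_id values below are hypothetical placeholders that you would read from Query Store views or reports.
EXEC sp_query_store_force_plan @query_id = 42, @plan_id = 17;
GO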
You can collect deadlock graphs with XEvents using either the ring buffer target or an event file target.
Considerations for selecting the appropriate target type are summarized in the following table:
Ring buffer target
Benefits: Simple setup with Transact-SQL only.
Considerations: Event data is cleared when the XEvents session is stopped for any reason, such as taking the
database offline or a database failover. Database resources are used to maintain data in the ring buffer and to
query session data.
Recommended use: Collect sample trace data for testing and learning. Create for short-term needs if you cannot
set up a session using an event file target immediately. Use as a "landing pad" for trace data, when you have set
up an automated process to persist trace data into a table.
Event file target
Benefits: Persists event data to a blob in Azure Storage so data is available even after the session is stopped.
Event files may be downloaded from the Azure portal or Azure Storage Explorer and analyzed locally, which does
not require using database resources to query session data.
Considerations: Setup is more complex and requires configuration of an Azure Storage container and database
scoped credential.
Recommended use: General use when you want event data to persist even after the event session stops. You
want to run a trace that generates larger amounts of event data than you would like to persist in memory.
The ring buffer target is convenient and easy to set up, but has a limited capacity, which can cause older events
to be lost. The ring buffer does not persist events to storage and the ring buffer target is cleared when the
XEvents session is stopped. This means that any XEvents collected will not be available when the database
engine restarts for any reason, such as a failover. The ring buffer target is best suited to learning and short-term
needs if you do not have the ability to set up an XEvents session to an event file target immediately.
This sample code creates an XEvents session that captures deadlock graphs in memory using the ring buffer
target. The maximum memory allowed for the ring buffer target is 4 MB, and the session will automatically run
when the database comes online, such as after a failover.
To create and then start an XEvents session for the sqlserver.database_xml_deadlock_report event that writes to
the ring buffer target, connect to your database and run the following Transact-SQL:
CREATE EVENT SESSION [deadlocks] ON DATABASE
ADD EVENT sqlserver.database_xml_deadlock_report
ADD TARGET package0.ring_buffer
WITH (STARTUP_STATE=ON, MAX_MEMORY=4 MB)
GO

-- STARTUP_STATE=ON only starts the session automatically when the database comes online; start it now:
ALTER EVENT SESSION [deadlocks] ON DATABASE
STATE = START;
GO
To cause a deadlock, you will need to connect two sessions to the AdventureWorksLT database. We'll refer to
these sessions as Session A and Session B .
In Session A , run the following Transact-SQL. This code begins an explicit transaction and runs a single
statement that updates the SalesLT.Product table. To do this, the transaction acquires an update (U) lock on one
row on table SalesLT.Product which is converted to an exclusive (X) lock. We leave the transaction open.
BEGIN TRAN
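-- Hedged sketch: the exact UPDATE isn't shown here; any statement that updates
-- SalesLT.Product rows with Color = 'Red' acquires the lock described above.
UPDATE SalesLT.Product
SET SellEndDate = SellEndDate
WHERE Color = 'Red';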
Now, in Session B , run the following Transact-SQL. This code doesn't explicitly begin a transaction. Instead, it
operates in autocommit transaction mode. This statement updates the SalesLT.ProductDescription table. The
update will take out an update (U) lock on 72 rows on the SalesLT.ProductDescription table. The query joins to
other tables, including the SalesLT.Product table.
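The statement below matches the input buffer captured for this session in the deadlock graph later in this article:
UPDATE SalesLT.ProductDescription SET Description = Description
FROM SalesLT.ProductDescription as pd
JOIN SalesLT.ProductModelProductDescription as pmpd on
    pd.ProductDescriptionID = pmpd.ProductDescriptionID
JOIN SalesLT.ProductModel as pm on
    pmpd.ProductModelID = pm.ProductModelID
JOIN SalesLT.Product as p on
    pm.ProductModelID = p.ProductModelID
WHERE p.Color = 'Silver';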
To complete this update, Session B needs a shared (S) lock on rows on the table SalesLT.Product , including the
row that is locked by Session A . Session B will be blocked on SalesLT.Product .
Return to Session A . Run the following Transact-SQL statement. This runs a second UPDATE statement as part
of the open transaction.
UPDATE SalesLT.ProductDescription SET Description = Description
FROM SalesLT.ProductDescription as pd
JOIN SalesLT.ProductModelProductDescription as pmpd on
pd.ProductDescriptionID = pmpd.ProductDescriptionID
JOIN SalesLT.ProductModel as pm on
pmpd.ProductModelID = pm.ProductModelID
JOIN SalesLT.Product as p on
pm.ProductModelID=p.ProductModelID
WHERE p.Color = 'Red';
The second update statement in Session A will be blocked by Session B on the SalesLT.ProductDescription table.
Session A and Session B are now mutually blocking one another. Neither transaction can proceed, as they
each need a resource that is locked by the other.
After a few seconds, the deadlock monitor will identify that the transactions in Session A and Session B are
mutually blocking one another, and that neither can make progress. You should see a deadlock occur, with
Session A chosen as the deadlock victim. An error message will appear in Session A with text similar to the
following:
Msg 1205, Level 13, State 51, Line 7 Transaction (Process ID 91) was deadlocked on lock resources with
another process and has been chosen as the deadlock victim. Rerun the transaction.
If you set up an XEvents session writing to the ring buffer, you can query deadlock information with the
following Transact-SQL. Before running the query, replace the value of @tracename with the name of your
xEvents session.
DECLARE @tracename sysname = N'deadlocks';
WITH ring_buffer AS (
SELECT CAST(target_data AS XML) as rb
FROM sys.dm_xe_database_sessions AS s
JOIN sys.dm_xe_database_session_targets AS t
ON CAST(t.event_session_address AS BINARY(8)) = CAST(s.address AS BINARY(8))
WHERE s.name = @tracename and
t.target_name = N'ring_buffer'
), dx AS (
SELECT
dxdr.evtdata.query('.') as deadlock_xml_deadlock_report
FROM ring_buffer
CROSS APPLY rb.nodes('/RingBufferTarget/event[@name=''database_xml_deadlock_report'']') AS dxdr(evtdata)
)
SELECT
d.query('/event/data[@name=''deadlock_cycle_id'']/value').value('(/value)[1]', 'int') AS
[deadlock_cycle_id],
d.value('(/event/@timestamp)[1]', 'DateTime2') AS [deadlock_timestamp],
d.query('/event/data[@name=''database_name'']/value').value('(/value)[1]', 'nvarchar(256)') AS
[database_name],
d.query('/event/data[@name=''xml_report'']/value/deadlock') AS deadlock_xml,
LTRIM(RTRIM(REPLACE(REPLACE(d.value('.', 'nvarchar(2000)'),CHAR(10),' '),CHAR(13),' '))) as query_text
FROM dx
CROSS APPLY deadlock_xml_deadlock_report.nodes('(/event/data/value/deadlock/process-list/process/inputbuf)')
AS ib(d)
ORDER BY [deadlock_timestamp] DESC;
GO
<deadlock>
<victim-list>
<victimProcess id="process24756e75088" />
</victim-list>
<process-list>
<process id="process24756e75088" taskpriority="0" logused="6528" waitresource="KEY: 8:72057594045202432
(98ec012aa510)" waittime="192" ownerId="1011123" transactionname="user_transaction" lasttranstarted="2022-
03-08T15:44:43.490" XDES="0x2475c980428" lockMode="U" schedulerid="3" kpid="30192" status="suspended"
spid="89" sbid="0" ecid="0" priority="0" trancount="2" lastbatchstarted="2022-03-08T15:44:49.250"
lastbatchcompleted="2022-03-08T15:44:49.210" lastattention="1900-01-01T00:00:00.210" clientapp="Microsoft
SQL Server Management Studio - Query" hostname="LAPTOP-CHRISQ" hostpid="16716" loginname="chrisqpublic"
isolationlevel="read committed (2)" xactid="1011123" currentdb="8" currentdbname="AdventureWorksLT"
lockTimeout="4294967295" clientoption1="671096864" clientoption2="128056">
<executionStack>
<frame procname="unknown" queryhash="0xef52b103e8b9b8ca" queryplanhash="0x02b0f58d7730f798" line="1"
stmtstart="2" stmtend="792"
sqlhandle="0x02000000c58b8f1e24e8f104a930776e21254b1771f92a520000000000000000000000000000000000000000">
unknown </frame>
</executionStack>
<inputbuf>
UPDATE SalesLT.ProductDescription SET Description = Description
FROM SalesLT.ProductDescription as pd
JOIN SalesLT.ProductModelProductDescription as pmpd on
pd.ProductDescriptionID = pmpd.ProductDescriptionID
JOIN SalesLT.ProductModel as pm on
pmpd.ProductModelID = pm.ProductModelID
JOIN SalesLT.Product as p on
pm.ProductModelID=p.ProductModelID
WHERE p.Color = 'Red' </inputbuf>
</process>
<process id="process2476d07d088" taskpriority="0" logused="11360" waitresource="KEY: 8:72057594045267968
(39e18040972e)" waittime="2641" ownerId="1013536" transactionname="UPDATE" lasttranstarted="2022-03-
08T15:44:46.807" XDES="0x2475ca80428" lockMode="S" schedulerid="2" kpid="94040" status="suspended" spid="95"
sbid="0" ecid="0" priority="0" trancount="2" lastbatchstarted="2022-03-08T15:44:46.807"
lastbatchcompleted="2022-03-08T15:44:46.760" lastattention="1900-01-01T00:00:00.760" clientapp="Microsoft
SQL Server Management Studio - Query" hostname="LAPTOP-CHRISQ" hostpid="16716" loginname="chrisqpublic"
isolationlevel="read committed (2)" xactid="1013536" currentdb="8" currentdbname="AdventureWorksLT"
lockTimeout="4294967295" clientoption1="671088672" clientoption2="128056">
<executionStack>
<frame procname="unknown" queryhash="0xef52b103e8b9b8ca" queryplanhash="0x02b0f58d7730f798" line="1"
stmtstart="2" stmtend="798"
sqlhandle="0x020000002c85bb06327c0852c0be840fc1e30efce2b7c8090000000000000000000000000000000000000000">
unknown </frame>
</executionStack>
<inputbuf>
UPDATE SalesLT.ProductDescription SET Description = Description
FROM SalesLT.ProductDescription as pd
JOIN SalesLT.ProductModelProductDescription as pmpd on
pd.ProductDescriptionID = pmpd.ProductDescriptionID
JOIN SalesLT.ProductModel as pm on
pmpd.ProductModelID = pm.ProductModelID
JOIN SalesLT.Product as p on
pm.ProductModelID=p.ProductModelID
WHERE p.Color = 'Silver'; </inputbuf>
</process>
</process-list>
<resource-list>
<keylock hobtid="72057594045202432" dbid="8" objectname="9e011567-2446-4213-9617-
bad2624ccc30.SalesLT.ProductDescription" indexname="PK_ProductDescription_ProductDescriptionID"
id="lock2474df12080" mode="U" associatedObjectId="72057594045202432">
<owner-list>
<owner id="process2476d07d088" mode="U" />
</owner-list>
<waiter-list>
<waiter id="process24756e75088" mode="U" requestType="wait" />
</waiter-list>
</keylock>
<keylock hobtid="72057594045267968" dbid="8" objectname="9e011567-2446-4213-9617-
bad2624ccc30.SalesLT.Product" indexname="PK_Product_ProductID" id="lock2474b588580" mode="X"
associatedObjectId="72057594045267968">
<owner-list>
<owner id="process24756e75088" mode="X" />
</owner-list>
<waiter-list>
<waiter id="process2476d07d088" mode="S" requestType="wait" />
</waiter-list>
</keylock>
</resource-list>
</deadlock>
6. Close the file by selecting the X on the tab at the top of the window, or by selecting File , then Close .
7. Reopen the file in SSMS by selecting File , then Open , then File . Select the file you saved with the .xdl
extension.
The deadlock graph will now display in SSMS with a visual representation of the processes and resources
involved in the deadlock.
In the XML view of a deadlock graph, the victim-list node gives an ID for the process that was the victim of
the deadlock.
In our example deadlock, the victim process ID is process24756e75088 . We can use this ID when examining
the process-list and resource-list nodes to learn more about the victim process and the resources it was locking
or requesting to lock.
Deadlock process list
The deadlock process list is a rich source of information about the transactions involved in the deadlock.
The graphic representation of the deadlock graph shows only a subset of information contained in the deadlock
graph XML. The ovals in the deadlock graph represent the process, and show information including the:
Server process ID, also known as the session ID or SPID.
Deadlock priority of the session. If two sessions have different deadlock priorities, the session with the
lower priority is chosen as the deadlock victim. In this example, both sessions have the same deadlock
priority.
The amount of transaction log used by the session in bytes. If both sessions have the same deadlock
priority, the deadlock monitor chooses the session that is less expensive to roll back as the deadlock
victim. The cost is determined by comparing the number of log bytes written to that point in each
transaction.
In our example deadlock, session_id 89 had used a lower amount of transaction log, and was selected as
the deadlock victim.
Additionally, you can view the input buffer for the last statement run in each session prior to the deadlock by
hovering the mouse over each process. The input buffer will appear in a tooltip.
Additional information is available for processes in the XML view of the deadlock graph, including:
Identifying information for the session, such as the client name, host name, and login name.
The query plan hash for the last statement run by each session prior to the deadlock. The query plan hash is
useful for retrieving more information about the query from Query Store.
In our example deadlock:
We can see that both sessions were run using the SSMS client under the chrisqpublic login.
The query plan hash of the last statement run prior to the deadlock by our deadlock victim is
0x02b0f58d7730f798 . We can see the text of this statement in the input buffer.
The query plan hash of the last statement run by the other session in our deadlock is also
0x02b0f58d7730f798 . We can see the text of this statement in the input buffer. In this case, both queries
have the same query plan hash because the queries are identical, except for a literal value used as an equality
predicate.
We'll use these values later in this article to find additional information in Query Store.
Limitations of the input buffer in the deadlock process list
There are some limitations to be aware of regarding input buffer information in the deadlock process list.
Query text may be truncated in the input buffer. The input buffer is limited to the first 4,000 characters of the
statement being executed.
Additionally, some statements involved in the deadlock may not be included in the deadlock graph. In our
example, Session A ran two update statements within a single transaction. Only the second update statement,
the update that caused the deadlock, is included in the deadlock graph. The first update statement run by
Session A played a part in the deadlock by blocking Session B . The input buffer, query_hash , and related
information for the first statement run by Session A are not included in the deadlock graph.
To identify the full Transact-SQL run in a multi-statement transaction involved in a deadlock, you will need to
either find the relevant information in the stored procedure or application code that ran the query, or run a trace
using Extended Events to capture full statements run by sessions involved in a deadlock while it occurs. If a
statement involved in the deadlock has been truncated and only partial Transact-SQL appears in the input buffer,
you can find the Transact-SQL for the statement in Query Store with the Execution Plan.
Deadlock resource list
The deadlock resource list shows which lock resources are owned and waited on by the processes in the
deadlock.
Resources are represented by rectangles in the visual representation of the deadlock:
NOTE
You may notice that database names are represented as unique identifiers in deadlock graphs for databases in Azure SQL
Database. This is the physical_database_name for the database listed in the sys.databases and
sys.dm_user_db_resource_governance dynamic management views.
In this example deadlock:
The deadlock victim, which we have referred to as Session A :
Owns an exclusive (X) lock on a key on the PK_Product_ProductID index on the SalesLT.Product table.
Requests an update (U) lock on a key on the PK_ProductDescription_ProductDescriptionID index on the
SalesLT.ProductDescription table.
The other process, which we have referred to as Session B :
Owns an update (U) lock on a key on the PK_ProductDescription_ProductDescriptionID index on the
SalesLT.ProductDescription table.
Requests a shared (S) lock on a key on the PK_Product_ProductID index on the SalesLT.Product table.
We can see the same information in the XML of the deadlock graph in the resource-list node.
Find query execution plans in Query Store
It is often useful to examine the query execution plans for statements involved in the deadlock. These execution
plans can often be found in Query Store using the query plan hash from the XML view of the deadlock graph's
process list.
This Transact-SQL query looks for query plans matching the query plan hash we found for our example
deadlock. Connect to the user database in Azure SQL Database to run the query.
DECLARE @query_plan_hash binary(8) = 0x02b0f58d7730f798; -- the query plan hash from the deadlock graph's process list
SELECT
qrsi.end_time as interval_end_time,
qs.query_id,
qp.plan_id,
qt.query_sql_text,
TRY_CAST(qp.query_plan as XML) as query_plan,
qrs.count_executions
FROM sys.query_store_query as qs
JOIN sys.query_store_query_text as qt on qs.query_text_id=qt.query_text_id
JOIN sys.query_store_plan as qp on qs.query_id=qp.query_id
JOIN sys.query_store_runtime_stats qrs on qp.plan_id = qrs.plan_id
JOIN sys.query_store_runtime_stats_interval qrsi on
qrs.runtime_stats_interval_id=qrsi.runtime_stats_interval_id
WHERE query_plan_hash = @query_plan_hash
ORDER BY interval_end_time, query_id;
GO
You may not be able to obtain a query execution plan from Query Store, depending on your Query Store
CLEANUP_POLICY or QUERY_CAPTURE_MODE settings. In this case, you can often get needed information by
displaying the estimated execution plan for the query.
Look for patterns that increase blocking
When examining query execution plans involved in deadlocks, look out for patterns that may contribute to
blocking and deadlocks.
Table or index scans . When queries modifying data are run under RCSI, the selection of rows to update
is done using a blocking scan where an update (U) lock is taken on the data row as data values are read. If
the data row does not meet the update criteria, the update lock is released and the next row is locked and
scanned.
Tuning indexes to help modification queries find rows more efficiently reduces the number of update
locks issued. This reduces the chances of blocking and deadlocks.
Indexed views referencing more than one table . When you modify a table that is referenced in an
indexed view, the database engine must also maintain the indexed view. This requires taking out more
locks and can lead to increased blocking and deadlocks. Indexed views may also cause update operations
to internally execute under the read committed isolation level.
Modifications to columns referenced in foreign key constraints . When you modify columns in a
table that are referenced in a FOREIGN KEY constraint, the database engine must look for related rows in
the referencing table. Row versions cannot be used for these reads. In cases where cascading updates or
deletes are enabled, the isolation level may be escalated to serializable for the duration of the statement
to protect against phantom inserts.
Lock hints . Look for table hints that specify isolation levels requiring more locks. These hints include
HOLDLOCK (which is equivalent to serializable), SERIALIZABLE , READCOMMITTEDLOCK (which disables RCSI),
and REPEATABLEREAD . Additionally, hints such as PAGLOCK , TABLOCK , UPDLOCK , and XLOCK can increase the
risks of blocking and deadlocks.
If these hints are in place, research why the hints were implemented. These hints may prevent race
conditions and ensure data validity. It may be possible to leave these hints in place and prevent future
deadlocks using an alternate method in the Prevent a deadlock from reoccurring section of this article if
necessary.
NOTE
Learn more about behavior when modifying data using row versioning in the Transaction locking and row
versioning guide.
When examining the full code for a transaction, either in an execution plan or in application query code, look for
additional problematic patterns:
User interaction in transactions . User interaction inside an explicit multi-statement transaction
significantly increases the duration of transactions. This makes it more likely for these transactions to
overlap and for blocking and deadlocks to occur.
Similarly, holding an open transaction and querying an unrelated database or system mid-transaction
significantly increases the chances of blocking and deadlocks.
Transactions accessing objects in different orders . Deadlocks are less likely to occur when
concurrent explicit multi-statement transactions follow the same patterns and access objects in the same
order.
This index scan is being performed because our update query needs to modify an indexed view named
vProductAndDescription . As mentioned in the Look for patterns that increase blocking section of this
article, indexed views referencing multiple tables may increase blocking and the likelihood of deadlocks.
If we create the following nonclustered index in the AdventureWorksLT database that "covers" the columns
from SalesLT.Product referenced by the indexed view, this helps the query find rows much more
efficiently:
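-- Illustrative sketch: the index name and column list are assumptions, chosen to cover the
-- SalesLT.Product columns used by the update's join and WHERE clause and referenced by the
-- indexed view. Adjust them to match the columns your view and query actually reference.
CREATE NONCLUSTERED INDEX IX_Product_ProductModelID_Color
ON SalesLT.Product (ProductModelID, Color)
INCLUDE (Name);
GO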
NOTE
In some cases, you may wish to adjust the deadlock priority of one or more sessions involved in a deadlock if it is
important for one of the sessions to complete successfully without retrying, or when one of the queries involved in the
deadlock is not critical and should be always chosen as the victim. While this does not prevent the deadlock from
reoccurring, it may reduce the impact of future deadlocks.
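For example, a session running a non-critical query can lower its own deadlock priority before starting work, so that it is preferred as the deadlock victim over sessions left at the default priority. A minimal sketch:
SET DEADLOCK_PRIORITY LOW;
-- Run the non-critical statements in this session; if a deadlock occurs, this session
-- is chosen as the victim ahead of sessions with NORMAL or HIGH deadlock priority.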
Next steps
Learn more about performance in Azure SQL Database:
Understand and resolve Azure SQL Database blocking problems
Transaction Locking and Row Versioning Guide
SET TRANSACTION ISOLATION LEVEL
Azure SQL Database: Improving Performance Tuning with Automatic Tuning
Deliver consistent performance with Azure SQL
Retry logic for transient errors.
Configure the max degree of parallelism (MAXDOP)
in Azure SQL Database
9/13/2022 • 8 minutes to read
NOTE
This content is focused on Azure SQL Database. Azure SQL Database is based on the latest stable version of the
Microsoft SQL Server database engine, so much of the content is similar though troubleshooting and configuration
options differ. For more on MAXDOP in SQL Server, see Configure the max degree of parallelism Server Configuration
Option.
Overview
MAXDOP controls intra-query parallelism in the database engine. Higher MAXDOP values generally result in
more parallel threads per query, and faster query execution.
In Azure SQL Database, the default MAXDOP setting for each new single database and elastic pool database is 8.
This default prevents unnecessary resource utilization, while still allowing the database engine to execute
queries faster using parallel threads. It is not typically necessary to further configure MAXDOP in Azure SQL
Database workloads, though it may provide benefits as an advanced performance tuning exercise.
NOTE
In September 2020, based on years of telemetry in the Azure SQL Database service, MAXDOP 8 was made the default for
new databases, as the optimal value for the widest variety of customer workloads. This default helped prevent
performance problems due to excessive parallelism. Prior to that, the default setting for new databases was MAXDOP 0.
MAXDOP was not automatically changed for existing databases created prior to September 2020.
In general, if the database engine chooses to execute a query using parallelism, execution time is faster. However,
excess parallelism can consume additional processor resources without improving query performance. At scale,
excess parallelism can negatively affect query performance for all queries executing on the same database
engine instance. Traditionally, setting an upper bound for parallelism has been a common performance tuning
exercise in SQL Server workloads.
The following table describes database engine behavior when executing queries with different MAXDOP values:
MAXDOP    BEHAVIOR
NOTE
Each query executes with at least one scheduler, and one worker thread on that scheduler.
A query executing with parallelism uses additional schedulers, and additional parallel threads. Because multiple parallel
threads may execute on the same scheduler, the total number of threads used to execute a query may be higher than the
specified MAXDOP value or the total number of logical processors. For more information, see Scheduling parallel tasks.
Considerations
In Azure SQL Database, you can change the default MAXDOP value:
At the query level, using the MAXDOP query hint.
At the database level, using the MAXDOP database scoped configuration.
Long-standing SQL Server MAXDOP considerations and recommendations are applicable to Azure SQL
Database.
Index operations that create or rebuild an index, or that drop a clustered index, can be resource intensive.
You can override the database MAXDOP value for index operations by specifying the MAXDOP index
option in the CREATE INDEX or ALTER INDEX statement. The MAXDOP value is applied to the statement at
execution time and is not stored in the index metadata. For more information, see Configure Parallel
Index Operations.
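For example, a rebuild along these lines overrides the database MAXDOP for a single index operation (the table name is illustrative):
ALTER INDEX ALL ON SalesLT.Product
REBUILD WITH (MAXDOP = 4);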
In addition to queries and index operations, the database scoped configuration option for MAXDOP also
controls parallelism of other statements that may use parallel execution, such as DBCC CHECKTABLE,
DBCC CHECKDB, and DBCC CHECKFILEGROUP.
Recommendations
Changing MAXDOP for the database can have major impact on query performance and resource utilization,
both positive and negative. However, there is no single MAXDOP value that is optimal for all workloads. The
recommendations for setting MAXDOP are nuanced, and depend on many factors.
Some peak concurrent workloads may operate better with a different MAXDOP than others. A properly
configured MAXDOP should reduce the risk of performance and availability incidents, and in some cases may
reduce costs by being able to avoid unnecessary resource utilization, and thus scale down to a lower service
objective.
Excessive parallelism
A higher MAXDOP often reduces duration for CPU-intensive queries. However, excessive parallelism can worsen
other concurrent workload performance by starving other queries of CPU and worker thread resources. In
extreme cases, excessive parallelism can consume all database or elastic pool resources, causing query timeouts,
errors, and application outages.
TIP
We recommend that customers avoid setting MAXDOP to 0 even if it does not appear to cause problems currently.
Excessive parallelism becomes most problematic when there are more concurrent requests than can be
supported by the CPU and worker thread resources provided by the service objective. Avoid MAXDOP 0 to
reduce the risk of potential future problems due to excessive parallelism if a database is scaled up, or if future
hardware configurations in Azure SQL Database provide more cores for the same database service objective.
Modifying MAXDOP
If you determine that a MAXDOP setting different from the default is optimal for your Azure SQL Database
workload, you can use the ALTER DATABASE SCOPED CONFIGURATION T-SQL statement. For examples, see the
Examples using Transact-SQL section below. To change MAXDOP to a non-default value for each new database
you create, add this step to your database deployment process.
If non-default MAXDOP benefits only a small subset of queries in the workload, you can override MAXDOP at
the query level by adding the OPTION (MAXDOP) hint. For examples, see the Examples using Transact-SQL
section below.
Thoroughly test your MAXDOP configuration changes with load testing involving realistic concurrent query
loads.
MAXDOP for the primary and secondary replicas can be configured independently if different MAXDOP settings
are optimal for your read-write and read-only workloads. This applies to Azure SQL Database read scale-out,
geo-replication, and Hyperscale secondary replicas. By default, all secondary replicas inherit the MAXDOP
configuration of the primary replica.
Security
Permissions
The ALTER DATABASE SCOPED CONFIGURATION statement must be executed as the server admin, as a member of the
database role db_owner, or by a user that has been granted the ALTER ANY DATABASE SCOPED CONFIGURATION
permission.
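For example, the permission can be granted to a user with a statement along these lines (the user name is illustrative):
GRANT ALTER ANY DATABASE SCOPED CONFIGURATION TO [TuningUser];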
Examples
These examples use the latest AdventureWorksLT sample database when the SAMPLE option is chosen for a
new single database of Azure SQL Database.
PowerShell
MAXDOP database scoped configuration
This example shows how to use the ALTER DATABASE SCOPED CONFIGURATION statement to set the MAXDOP
configuration to the value specified by the $desiredMAXDOP variable. The setting takes effect immediately for new
queries. The PowerShell cmdlet Invoke-SqlCmd executes the T-SQL queries to set and then return the MAXDOP database scoped configuration.
$dbName = "sample"
$serverName = "<server name here>"
$serveradminLogin = "<login here>"
$serveradminPassword = "<password here>"
$desiredMAXDOP = 8
$params = @{
'database' = $dbName
'serverInstance' = $serverName
'username' = $serveradminLogin
'password' = $serveradminPassword
'outputSqlErrors' = $true
'query' = 'ALTER DATABASE SCOPED CONFIGURATION SET MAXDOP = ' + $desiredMAXDOP + ';
SELECT [value] FROM sys.database_scoped_configurations WHERE [name] = ''MAXDOP'';'
}
Invoke-SqlCmd @params
This example is for use with Azure SQL Databases with read scale-out replicas enabled, geo-replication, and
Azure SQL Database Hyperscale secondary replicas. As an example, the primary replica is set to a different
default MAXDOP than the secondary replica, anticipating that there may be differences between a read-write and a
read-only workload.
$dbName = "sample"
$serverName = "<server name here>"
$serveradminLogin = "<login here>"
$serveradminPassword = "<password here>"
$desiredMAXDOP_primary = 8
$desiredMAXDOP_secondary_readonly = 1
$params = @{
'database' = $dbName
'serverInstance' = $serverName
'username' = $serveradminLogin
'password' = $serveradminPassword
'outputSqlErrors' = $true
'query' = 'ALTER DATABASE SCOPED CONFIGURATION SET MAXDOP = ' + $desiredMAXDOP_primary + ';
ALTER DATABASE SCOPED CONFIGURATION FOR SECONDARY SET MAXDOP = ' + $desiredMAXDOP_secondary_readonly +
';
SELECT [value], value_for_secondary FROM sys.database_scoped_configurations WHERE [name] = ''MAXDOP'';'
}
Invoke-SqlCmd @params
Transact-SQL
You can use the Azure portal query editor, SQL Server Management Studio (SSMS), or Azure Data Studio to
execute T-SQL queries against your Azure SQL Database.
1. Open a new query window.
2. Connect to the database where you want to change MAXDOP. You cannot change database scoped
configurations in the master database.
3. Copy and paste the following example into the query window and select Execute .
MAXDOP database scoped configuration
This example shows how to determine the current database MAXDOP database scoped configuration using the
sys.database_scoped_configurations system catalog view.
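A query along these lines returns the current setting; it is the same catalog query used in the PowerShell examples above:
SELECT [value] FROM sys.database_scoped_configurations WHERE [name] = 'MAXDOP';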
This example shows how to use ALTER DATABASE SCOPED CONFIGURATION statement to set the MAXDOP
configuration to 8 . The setting takes effect immediately.
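A sketch of the statement, mirroring the configuration change made in the PowerShell example above:
ALTER DATABASE SCOPED CONFIGURATION SET MAXDOP = 8;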
This example is for use with Azure SQL Databases with read scale-out replicas enabled, geo-replication, and
Hyperscale secondary replicas. As an example, the primary replica is set to a different MAXDOP than the
secondary replica, anticipating that there may be differences between the read-write and read-only workloads.
All statements are executed on the primary replica. The value_for_secondary column of the
sys.database_scoped_configurations view contains settings for the secondary replica.
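A sketch of the statements, mirroring the query string used in the PowerShell example above (the primary and secondary values of 8 and 1 are illustrative):
ALTER DATABASE SCOPED CONFIGURATION SET MAXDOP = 8;
ALTER DATABASE SCOPED CONFIGURATION FOR SECONDARY SET MAXDOP = 1;
SELECT [value], value_for_secondary FROM sys.database_scoped_configurations WHERE [name] = 'MAXDOP';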
See also
ALTER DATABASE SCOPED CONFIGURATION (Transact-SQL)
sys.database_scoped_configurations (Transact-SQL)
Configure Parallel Index Operations
Query Hints (Transact-SQL)
Set Index Options
Understand and resolve Azure SQL Database blocking problems
Next steps
Monitor and Tune for Performance
SQL Server database migration to Azure SQL
Database
9/13/2022 • 6 minutes to read
NOTE
To migrate a non-SQL Server database, including Microsoft Access, Sybase, MySQL, Oracle, and DB2 to Azure SQL
Database, see SQL Server Migration Assistant.
NOTE
Rather than using DMA, you can also use a BACPAC file. See Import a BACPAC file to a new database in Azure SQL
Database.
TIP
You can also use transactional replication to migrate a subset of your source database. The publication that you replicate
to Azure SQL Database can be limited to a subset of the tables in the database being replicated. For each table being
replicated, you can limit the data to a subset of the rows and/or a subset of the columns.
1. Set up Distribution
Using SQL Server Management Studio (SSMS)
Using Transact-SQL
2. Create Publication
Using SQL Server Management Studio (SSMS)
Using Transact-SQL
3. Create Subscription
Using SQL Server Management Studio (SSMS)
Using Transact-SQL
Some tips and differences for migrating to SQL Database
Use a local distributor
Doing so causes a performance impact on the server.
If the performance impact is unacceptable, you can use another server but it adds complexity in
management and administration.
When selecting a snapshot folder, make sure the folder you select is large enough to hold a BCP of every
table you want to replicate.
Snapshot creation locks the associated tables until it's complete, so schedule your snapshot appropriately.
Only push subscriptions are supported in Azure SQL Database. You can only add subscribers from the source
database.
IMPORTANT
Azure SQL Managed Instance enables you to migrate an existing SQL Server instance and its databases with minimal to
no compatibility issues. See What is a managed instance.
Next steps
Use the script on the Azure SQL EMEA Engineers blog to Monitor tempdb usage during migration.
Use the script on the Azure SQL EMEA Engineers blog to Monitor the transaction log space of your database
while migration is occurring.
For a SQL Server Customer Advisory Team blog about migrating using BACPAC files, see Migrating from
SQL Server to Azure SQL Database using BACPAC Files.
For information about working with UTC time after migration, see Modifying the default time zone for your
local time zone.
For information about changing the default language of a database after migration, see How to change the
default language of Azure SQL Database.
New DBA in the cloud – Managing Azure SQL
Database after migration
9/13/2022 • 28 minutes to read
SERVICE TIER    RETENTION PERIOD IN DAYS
Basic           7
Standard        35
Premium         35
In addition, the Long-Term Retention (LTR) feature allows you to hold onto your backup files for a much longer
period, specifically up to 10 years, and restore data from these backups at any point within that period.
Furthermore, the database backups are kept in geo-replicated storage to ensure resilience from regional
catastrophe. You can also restore these backups in any Azure region at any point of time within the retention
period. See Business continuity overview.
How do I ensure business continuity in the event of a datacenter-level disaster or regional catastrophe
Your database backups are stored in geo-replicated storage so that, in case of a regional disaster, you can restore
the backup to another Azure region. This is called geo-restore. The RPO (Recovery Point Objective) for this is
generally < 1 hour, and the ERT (Estimated Recovery Time) is a few minutes to hours.
For mission-critical databases, Azure SQL Database offers active geo-replication. This creates a geo-replicated
secondary copy of your original database in another region. For example, if your database is initially hosted in the
Azure West US region and you want regional disaster resilience, you'd create an active geo-replica of the database
from West US to, say, East US. If disaster strikes in West US, you can fail over to the East US region. Configuring
them in an auto-failover group is even better, because this ensures that the database automatically fails over to the
secondary in East US in case of a disaster. The RPO for this is < 5 seconds and the ERT < 30 seconds.
If an auto-failover group is not configured, then your application needs to actively monitor for a disaster and
initiate a failover to the secondary. You can create up to 4 such active geo-replicas in different Azure regions. It
gets even better. You can also access these secondary active geo-replicas for read-only access. This comes in
very handy to reduce latency for a geo-distributed application scenario.
How does my disaster recovery plan change from on-premises to SQL Database
In summary, SQL Server setup requires you to actively manage your Availability by using features such as
Failover Clustering, Database Mirroring, Transaction Replication, or Log Shipping and maintain and manage
backups to ensure Business Continuity. With SQL Database, the platform manages these for you, so you can
focus on developing and optimizing your database application and not worry about disaster management as
much. You can have backup and disaster recovery plans configured and working with just a few clicks on the
Azure portal (or a few commands using the PowerShell APIs).
To learn more about Disaster recovery, see: Azure SQL Database Disaster Recovery 101
The following guidance applies to SQL Database and Azure Synapse Analytics:
If you prefer not to use Azure Active Directory (Azure AD) in Azure: Use SQL authentication.
If you used AD on SQL Server on-premises: Federate AD with Azure AD, and use Azure AD authentication. With this, you can use single sign-on.
If you have guest accounts from Microsoft accounts (live.com, outlook.com) or other domains (gmail.com): Use Azure AD Universal authentication in SQL Database or dedicated SQL pool, which leverages Azure AD B2B Collaboration.
If you are logged in to Windows using your Azure AD credentials from a federated domain: Use Azure AD integrated authentication.
If you are logged in to Windows using credentials from a domain not federated with Azure: Use Azure AD integrated authentication.
If you have middle-tier services which need to connect to SQL Database or Azure Synapse Analytics: Use Azure AD integrated authentication.
Reserved IPs
Another option is to provision reserved IPs for your VMs, and add those specific VM IP addresses in the server
firewall settings. By assigning reserved IPs, you save the trouble of having to update the firewall rules with
changing IP addresses.
What port do I connect to SQL Database on
Port 1433. SQL Database communicates over this port. To connect from within a corporate network, you have to
add an outbound rule in the firewall settings of your organization. As a guideline, avoid exposing port 1433
outside the Azure boundary.
How can I monitor and regulate activity on my server and database in SQL Database
SQL Database Auditing
With SQL Database, you can turn ON Auditing to track database events. SQL Database Auditing records
database events and writes them into an audit log file in your Azure Storage Account. Auditing is especially
useful if you intend to gain insight into potential security and policy violations, maintain regulatory compliance
etc. It allows you to define and configure certain categories of events that you think need auditing and based on
that you can get preconfigured reports and a dashboard to get an overview of events occurring on your
database. You can apply these auditing policies either at the database level or at the server level. For a guide on how
to turn on auditing for your server/database, see Enable SQL Database Auditing.
Threat detection
With threat detection, you get the ability to act upon security or policy violations discovered by Auditing very
easily. You don't need to be a security expert to address potential threats or violations in your system. Threat
detection also has some built-in capabilities like SQL Injection detection. SQL Injection is an attempt to alter or
compromise the data and a quite common way of attacking a database application in general. Threat detection
runs multiple sets of algorithms which detect potential vulnerabilities and SQL injection attacks, as well as
anomalous database access patterns (such as access from an unusual location or by an unfamiliar principal).
Security officers or other designated administrators receive an email notification if a threat is detected on the
database. Each notification provides details of the suspicious activity and recommendations on how to further
investigate and mitigate the threat. To learn how to turn on Threat detection, see: Enable threat detection.
How do I protect my data in general on SQL Database
Encryption provides a strong mechanism to protect and secure your sensitive data from intruders. Your
encrypted data is of no use to the intruder without the decryption key. Thus, it adds an extra layer of protection
on top of the existing layers of security built in SQL Database. There are two aspects to protecting your data in
SQL Database:
Your data that is at-rest in the data and log files
Your data that is in-flight
In SQL Database, by default, your data at rest in the data and log files on the storage subsystem is completely
and always encrypted via Transparent Data Encryption [TDE]. Your backups are also encrypted. With TDE there
are no changes required on your application side that is accessing this data. The encryption and decryption
happen transparently; hence the name. For protecting your sensitive data in-flight and at rest, SQL Database
provides a feature called Always Encrypted (AE). AE is a form of client-side encryption which encrypts sensitive
columns in your database (so they are in ciphertext to database administrators and unauthorized users). The
server receives the encrypted data to begin with. The key for Always Encrypted is also stored on the client side,
so only authorized clients can decrypt the sensitive columns. The server and data administrators cannot see the
sensitive data since the encryption keys are stored on the client. AE encrypts sensitive columns in the table end
to end, from unauthorized clients to the physical disk. AE supports equality comparisons today, so DBAs can
continue to query encrypted columns as part of their SQL commands. Always Encrypted can be used with a
variety of key store options, such as Azure Key Vault, Windows certificate store, and local hardware security
modules.
Server can access sensitive data: Always Encrypted - No; TDE - Yes, since encryption is for the data at rest.
Allowed T-SQL operations: Always Encrypted - Equality comparison; TDE - All T-SQL surface area is available.
How can I optimize and secure the traffic between my organization and SQL Database
The network traffic between your organization and SQL Database would generally get routed over the public
network. However, if you choose to optimize this path and make it more secure, you can look into Azure
ExpressRoute. ExpressRoute essentially lets you extend your corporate network into the Azure platform over a
private connection. By doing so, you do not go over the public Internet. You also get higher security, reliability,
and routing optimization that translates to lower network latencies and much faster speeds than you would
normally experience going over the public internet. If you are planning on transferring a significant chunk of
data between your organization and Azure, using ExpressRoute can yield cost benefits. You can choose from
three different connectivity models for the connection from your organization to Azure:
Cloud Exchange Co-location
Any-to-any
Point-to-Point
ExpressRoute also allows you to burst up to 2x the bandwidth limit you purchase for no additional charge. It is
also possible to configure cross region connectivity using ExpressRoute. To see a list of ExpressRoute
connectivity providers, see: ExpressRoute Partners and Peering Locations. The following articles describe Express
Route in more detail:
Introduction on Express Route
Prerequisites
Workflows
Is SQL Database compliant with any regulatory requirements, and how does that help with my own
organization's compliance
SQL Database complies with a range of regulatory standards. To view the latest set of compliance certifications that
SQL Database has met, visit the Microsoft Trust Center and drill down on the certifications that are
important to your organization to see if SQL Database is included under the compliant Azure services. It is
important to note that although SQL Database may be certified as a compliant service, it aids in the compliance
of your organization's service but does not automatically guarantee it.
Intelligent database monitoring and maintenance after migration
Once you've migrated your database to SQL Database, you'll want to monitor your database (for
example, check resource utilization or run DBCC checks) and perform regular maintenance (for
example, rebuild or reorganize indexes, update statistics, and so on). Fortunately, SQL Database is intelligent in the sense that it
uses historical trends and recorded metrics and statistics to proactively help you monitor and maintain your
database, so that your application always runs optimally. In some cases, Azure SQL Database can automatically
perform maintenance tasks depending on your configuration setup. There are three facets to monitoring your
database in SQL Database:
Performance monitoring and optimization.
Security optimization.
Cost optimization.
Performance monitoring and optimization
With Query Performance Insights, you can get tailored recommendations for your database workload so that
your applications can keep running at an optimal level - always. You can also set it up so that these
recommendations get applied automatically and you do not have to bother performing maintenance tasks. With
SQL Database Advisor, you can automatically implement index recommendations based on your workload - this
is called Auto-Tuning. The recommendations evolve as your application workload changes to provide you with
the most relevant suggestions. You also get the option to manually review these recommendations and apply
them at your discretion.
Security optimization
SQL Database provides actionable security recommendations to help you secure your data and threat detection
for identifying and investigating suspicious database activities that may pose a potential threat to the database.
Vulnerability assessment is a database scanning and reporting service that allows you to monitor the security
state of your databases at scale and identify security risks and drift from a security baseline defined by you. After
every scan, a customized list of actionable steps and remediation scripts is provided, as well as an assessment
report that can be used to help meet compliance requirements.
With Microsoft Defender for Cloud, you identify the security recommendations across the board and quickly
apply them.
Cost optimization
The Azure SQL platform analyzes the utilization history across the databases in a server to evaluate and recommend
cost-optimization options for you. This analysis usually takes a fortnight to build up actionable
recommendations. Elastic pool is one such option. The recommendation appears on the portal as a banner:
You can also view this analysis under the "Advisor" section:
To make sure you're on the right compute size, you can monitor your query and database resource
consumption through one of the above-mentioned ways in "How do I monitor the performance and resource
utilization in SQL Database". If you find that your queries or databases are consistently running hot on
CPU, memory, or other resources, consider scaling up to a higher compute size. Similarly, if you notice that even during
your peak hours you don't seem to use the resources as much, consider scaling down from the current compute
size.
If you have a SaaS app pattern or a database consolidation scenario, consider using an Elastic pool for cost
optimization. Elastic pool is a great way to achieve database consolidation and cost-optimization. To read more
about managing multiple databases using elastic pool, see: Manage pools and databases.
How often do I need to run database integrity checks for my database
SQL Database uses some smart techniques that allow it to handle certain classes of data corruption
automatically and without any data loss. These techniques are built into the service and are leveraged by the
service when the need arises. On a regular basis, your database backups across the service are tested by restoring
them and running DBCC CHECKDB on them. If there are issues, SQL Database proactively addresses them.
Automatic page repair is leveraged for fixing pages that are corrupt or have data integrity issues. The database
pages are always verified with the default CHECKSUM setting that verifies the integrity of the page. SQL
Database proactively monitors and reviews the data integrity of your database and, if issues arise, addresses
them with the highest priority. In addition to these, you may choose to optionally run your own integrity checks
at your will. For more information, see Data Integrity in SQL Database
Data movement after migration
How do I export and import data as BACPAC files from SQL Database using the Azure portal
Expor t : You can export your database in Azure SQL Database as a BACPAC file from the Azure portal
Impor t : You can also import data as a BACPAC file into your database in Azure SQL Database using the
Azure portal.
Next steps
Learn about SQL Database.
Import or export an Azure SQL Database without
allowing Azure services to access the server
9/13/2022 • 5 minutes to read
NOTE
You can also use SSH to connect to your VM.
4. Close the Connect to vir tual machine form.
5. To connect to your VM, open the downloaded RDP file.
6. When prompted, select Connect . On a Mac, you need an RDP client such as this Remote Desktop Client
from the Mac App Store.
7. Enter the username and password you specified when creating the virtual machine, then choose OK .
8. You might receive a certificate warning during the sign-in process. Choose Yes or Continue to proceed
with the connection.
Install SqlPackage
Download and install the latest version of SqlPackage.
For additional information, see SqlPackage.exe.
3. Select Set ser ver firewall on the toolbar. The Firewall settings page for the server opens.
4. Choose Add client IP on the toolbar to add your virtual machine's public IP address to a new server-
level IP firewall rule. A server-level IP firewall rule can open port 1433 for a single IP address or a range
of IP addresses.
5. Select Save . A server-level IP firewall rule is created for your virtual machine's public IP address opening
port 1433 on the server.
6. Close the Firewall settings page.
IMPORTANT
To connect to Azure SQL Database from behind a corporate firewall, the firewall must have port 1433 open.
This example shows how to import a database using SqlPackage with Active Directory Universal Authentication.
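A command along these lines can be used; it mirrors the SqlPackage example in the BACPAC import quickstart later in this document, so substitute your own BACPAC file, target database name, server, and tenant ID:
sqlpackage.exe /a:Import /sf:testExport.bacpac /tdn:NewDacFX /tsn:apptestserver.database.windows.net /ua:True /tid:"apptest.onmicrosoft.com"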
Performance considerations
Export speeds vary due to many factors (for example, data shape) so it's impossible to predict what speed
should be expected. SqlPackage may take considerable time, particularly for large databases.
To get the best performance you can try the following strategies:
1. Make sure no other workload is running on the database. Creating a copy before export may be the best
way to ensure no other workloads are running.
2. Increase database service level objective (SLO) to better handle the export workload (primarily read I/O). If
the database is currently GP_Gen5_4, perhaps a Business Critical tier would help with read workload.
3. Make sure there are clustered indexes particularly for large tables.
4. Virtual machines (VMs) should be in the same region as the database to help avoid network constraints.
5. VMs should have SSD with adequate size for generating temp artifacts before uploading to blob storage.
6. VMs should have adequate core and memory configuration for the specific database.
Next steps
To learn how to connect to and query an imported SQL Database, see Quickstart: Azure SQL Database: Use
SQL Server Management Studio to connect and query data.
For a SQL Server Customer Advisory Team blog about migrating using BACPAC files, see Migrating from
SQL Server to Azure SQL Database using BACPAC Files.
For a discussion of the entire SQL Server database migration process, including performance
recommendations, see SQL Server database migration to Azure SQL Database.
To learn how to manage and share storage keys and shared access signatures securely, see Azure Storage
Security Guide.
Import or export an Azure SQL Database using
Private Link without allowing Azure services to
access the server
9/13/2022 • 4 minutes to read
To use Private Link with Import-Export, user database and Azure Storage blob container must be hosted on the
same type of Azure Cloud. For example, either both in Azure Commercial or both on Azure Gov. Hosting across
cloud types isn't supported.
This article explains how to import or export an Azure SQL Database using Private Link with Allow Azure
Services set to OFF on the Azure SQL server.
NOTE
Import Export using Private Link for Azure SQL Database is currently in preview
IMPORTANT
Import or Export of a database from Azure SQL Managed Instance or from a database in the Hyperscale service tier using
PowerShell isn't currently supported.
Approve Private Endpoint connection on Azure Storage
1. Go to the storage account that hosts the blob container that holds BACPAC file.
2. Open the ‘Private endpoint connections’ page in security section on the left.
3. Select the Import-Export private endpoints you want to approve.
4. Select Approve to approve the connection.
After the private endpoints are approved on both the Azure SQL logical server and the storage account, import or
export jobs will be kicked off. Until then, the jobs will be on hold.
You can check the status of Import or Export jobs in Import-Export History page under Data Management
section in Azure SQL Server page.
Configure Import-Export Private Link using PowerShell
Import a Database using Private link in PowerShell
Use the New-AzSqlDatabaseImport cmdlet to submit an import database request to Azure. Depending on
database size, the import may take some time to complete. The DTU based provisioning model supports select
database max size values for each tier. When importing a database use one of these supported values.
Limitations
Import using Private Link does not support specifying a backup storage redundancy while creating a new
database; the database is created with the default geo-redundant backup storage redundancy. As a workaround, first
create an empty database with the desired backup storage redundancy using the Azure portal or PowerShell, and
then import the BACPAC into this empty database.
Import and Export operations are not supported in Azure SQL DB Hyperscale tier yet.
Import using the REST API with Private Link can only be done to an existing database, since the API uses database
extensions. To work around this, create an empty database with the desired name and call the Import REST API with
Private Link.
Next steps
Import or Export Azure SQL Database without allowing Azure services to access the server
Import a database from a BACPAC file
Quickstart: Import a BACPAC file to a database in
Azure SQL Database or Azure SQL Managed
Instance
9/13/2022 • 7 minutes to read
NOTE
The imported database's compatibility level is based on the source database's compatibility level.
IMPORTANT
After importing your database, you can choose to operate the database at its current compatibility level (level 100 for the
AdventureWorks2008R2 database) or at a higher level. For more information on the implications and options for
operating a database at a specific compatibility level, see ALTER DATABASE Compatibility Level. See also ALTER DATABASE
SCOPED CONFIGURATION for information about additional database-level settings related to compatibility levels.
NOTE
Import and Export using Private Link is in preview. Import functionality on Azure SQL Hyperscale databases is now in
preview.
The Azure portal only supports creating a single database in Azure SQL Database and only from a BACPAC file
stored in Azure Blob storage.
To migrate a database into an Azure SQL Managed Instance from a BACPAC file, use SQL Server Management
Studio or SqlPackage; using the Azure portal or Azure PowerShell is not currently supported.
NOTE
Machines processing import/export requests submitted through the Azure portal or PowerShell need to store the
BACPAC file as well as temporary files generated by the Data-Tier Application Framework (DacFX). The disk space required
varies significantly among databases with the same size and can require disk space up to 3 times the size of the database.
Machines running the import/export request only have 450GB local disk space. As a result, some requests may fail with
the error There is not enough space on the disk . In this case, the workaround is to run sqlpackage.exe on a machine
with enough local disk space. We encourage using SqlPackage to import/export databases larger than 150GB to avoid
this issue.
1. To import from a BACPAC file into a new single database using the Azure portal, open the appropriate
server page and then, on the toolbar, select Impor t database .
2. Select the storage account and the container for the BACPAC file and then select the BACPAC file from
which to import.
3. Specify the new database size (usually the same as origin) and provide the destination SQL Server
credentials. For a list of possible values for a new database in Azure SQL Database, see Create Database.
4. Click OK .
5. To monitor an import's progress, open the database's server page, and, under Settings , select
Impor t/Expor t histor y . When successful, the import has a Completed status.
6. To verify the database is live on the server, select SQL databases and verify the new database is Online .
Using SqlPackage
To import a SQL Server database using the SqlPackage command-line utility, see import parameters and
properties. You can download the latest SqlPackage for Windows, macOS, or Linux.
For scale and performance, we recommend using SqlPackage in most production environments rather than
using the Azure portal. For a SQL Server Customer Advisory Team blog about migrating using BACPAC files, see
migrating from SQL Server to Azure SQL Database using BACPAC Files.
The DTU based provisioning model supports select database max size values for each tier. When importing a
database use one of these supported values.
The following SqlPackage command imports the AdventureWorks2008R2 database from local storage to a
logical SQL server named mynewser ver20170403 . It creates a new database called myMigratedDatabase
with a Premium service tier and a P6 Service Objective. Change these values as appropriate for your
environment.
IMPORTANT
To connect to Azure SQL Database from behind a corporate firewall, the firewall must have port 1433 open. To connect to
SQL Managed Instance, you must have a point-to-site connection or an express route connection.
This example shows how to import a database using SqlPackage with Active Directory Universal Authentication.
sqlpackage.exe /a:Import /sf:testExport.bacpac /tdn:NewDacFX /tsn:apptestserver.database.windows.net
/ua:True /tid:"apptest.onmicrosoft.com"
Using PowerShell
NOTE
A SQL Managed Instance does not currently support migrating a database into an instance database from a BACPAC file
using Azure PowerShell. To import into a SQL Managed Instance, use SQL Server Management Studio or SQLPackage.
NOTE
The machines processing import/export requests submitted through portal or PowerShell need to store the bacpac file as
well as temporary files generated by Data-Tier Application Framework (DacFX). The disk space required varies significantly
among DBs with the same size and can take up to 3 times the database size. Machines running the import/export request
only have 450GB local disk space. As a result, some requests may fail with "There is not enough space on the disk" error. In
this case, the workaround is to run sqlpackage.exe on a machine with enough local disk space. When importing/exporting
databases larger than 150GB, use SqlPackage to avoid this issue.
PowerShell
Azure CLI
IMPORTANT
The PowerShell Azure Resource Manager (RM) module is still supported, but all future development is for the Az.Sql
module. The AzureRM module will continue to receive bug fixes until at least December 2020. The arguments for the
commands in the Az module and in the AzureRm modules are substantially identical. For more about their compatibility,
see Introducing the new Azure PowerShell Az module.
Use the New-AzSqlDatabaseImport cmdlet to submit an import database request to Azure. Depending on
database size, the import may take some time to complete. The DTU based provisioning model supports select
database max size values for each tier. When importing a database use one of these supported values.
You can use the Get-AzSqlDatabaseImportExportStatus cmdlet to check the import's progress. Running the
cmdlet immediately after the request usually returns Status: InProgress . The import is complete when you see
Status: Succeeded .
[Console]::Write("Importing")
while ($importStatus.Status -eq "InProgress") {
$importStatus = Get-AzSqlDatabaseImportExportStatus -OperationStatusLink
$importRequest.OperationStatusLink
[Console]::Write(".")
Start-Sleep -s 10
}
[Console]::WriteLine("")
$importStatus
TIP
For another script example, see Import a database from a BACPAC file.
Limitations
Importing to a database in elastic pool isn't supported. You can import data into a single database and then
move the database to an elastic pool.
Import Export Service does not work when Allow access to Azure services is set to OFF. However you can
work around the problem by manually running sqlpackage.exe from an Azure VM or performing the export
directly in your code by using the DacFx API.
Import does not support specifying a backup storage redundancy while creating a new database; the database is
created with the default geo-redundant backup storage redundancy. To work around this, first create an empty database
with the desired backup storage redundancy using the Azure portal or PowerShell, and then import the BACPAC into
this empty database.
Storage behind a firewall is currently not supported.
Additional tools
You can also use these wizards.
Import Data-tier Application Wizard in SQL Server Management Studio.
SQL Server Import and Export Wizard.
Next steps
To learn how to connect to and query Azure SQL Database from Azure Data Studio, see Quickstart: Use Azure
Data Studio to connect and query Azure SQL Database.
To learn how to connect to and query a database in Azure SQL Database, see Quickstart: Azure SQL
Database: Use SQL Server Management Studio to connect to and query data.
For a SQL Server Customer Advisory Team blog about migrating using BACPAC files, see Migrating from
SQL Server to Azure SQL Database using BACPAC Files.
For a discussion of the entire SQL Server database migration process, including performance
recommendations, see SQL Server database migration to Azure SQL Database.
To learn how to manage and share storage keys and shared access signatures securely, see Azure Storage
Security Guide.
Copy a transactionally consistent copy of a
database in Azure SQL Database
9/13/2022 • 12 minutes to read
Overview
A database copy is a transactionally consistent snapshot of the source database as of a point in time after the
copy request is initiated. You can select the same server or a different server for the copy. Also you can choose
to keep the backup redundancy and compute size of the source database, or use a different backup storage
redundancy and/or compute size within the same service tier. After the copy is complete, it becomes a fully
functional, independent database. The logins, users, and permissions in the copied database are managed
independently from the source database. The copy is created using the geo-replication technology. Once replica
seeding is complete, the geo-replication link is automatically terminated. All the requirements for using geo-
replication apply to the database copy operation. See Active geo-replication overview for details.
IMPORTANT
The PowerShell Azure Resource Manager (RM) module is still supported by Azure SQL Database, but all future
development is for the Az.Sql module. The AzureRM module will continue to receive bug fixes until at least December
2020. The arguments for the commands in the Az module and in the AzureRm modules are substantially identical. For
more about their compatibility, see Introducing the new Azure PowerShell Az module.
The database copy is an asynchronous operation but the target database is created immediately after the
request is accepted. If you need to cancel the copy operation while it is still in progress, drop the target database
using the Remove-AzSqlDatabase cmdlet.
For a complete sample PowerShell script, see Copy a database to a new server.
NOTE
Terminating the T-SQL statement does not terminate the database copy operation. To terminate the operation, drop the
target database.
Database copy using T-SQL is not supported when connecting to the destination server over a private endpoint. If a
private endpoint is configured but public network access is allowed, database copy is supported when connected to the
destination server from a public IP address. Once the copy operation completes, public access can be denied.
IMPORTANT
Selecting backup storage redundancy when using T-SQL CREATE DATABASE ... AS COPY OF command is not supported
yet.
IMPORTANT
Both servers' firewalls must be configured to allow inbound connection from the IP of the client issuing the T-SQL CREATE
DATABASE ... AS COPY OF command. To determine the source IP address of current connection, execute
SELECT client_net_address FROM sys.dm_exec_connections WHERE session_id = @@SPID;
Similarly, the below command copies Database1 on server1 to a new database named Database2 within an
elastic pool called pool2, on server2.
-- Execute on the master database of the target server (server2) to start copying from Server1 to Server2
CREATE DATABASE Database2 AS COPY OF server1.Database1 (SERVICE_OBJECTIVE = ELASTIC_POOL( name = pool2 ) );
TIP
When copying databases in the same Azure Active Directory tenant, authorization on the source and destination servers
is simplified if you initiate the copy command using an AAD authentication login with sufficient access on both servers.
The minimum necessary level of access is membership in the dbmanager role in the master database on both servers.
For example, you can use an AAD login that is a member of an AAD group designated as the server administrator on both
servers.
--Step# 1
--Create login and user in the master database of the source server.
--(Placeholder login name and password; replace with your own values.)
CREATE LOGIN loginname WITH PASSWORD = 'xxxxxxxxx';
GO
CREATE USER [loginname] FOR LOGIN [loginname] WITH DEFAULT_SCHEMA=[dbo];
GO
--Step# 2
--Create the user in the source database and grant dbowner permission to the database.
CREATE USER [loginname] FOR LOGIN [loginname] WITH DEFAULT_SCHEMA=[dbo];
GO
ALTER ROLE db_owner ADD MEMBER loginname;
GO
--Step# 3
--Capture the SID of the user "loginname" from master database
SELECT [sid] FROM sys.sql_logins WHERE [name] = 'loginname';
--Step# 4
--Connect to Destination server.
--Create login and user in the master database, same as of the source server.
CREATE LOGIN loginname WITH PASSWORD = 'xxxxxxxxx', SID = [SID of loginname login on source server];
GO
CREATE USER [loginname] FOR LOGIN [loginname] WITH DEFAULT_SCHEMA=[dbo];
GO
ALTER ROLE dbmanager ADD MEMBER loginname;
GO
--Step# 5
--Execute the copy of database script from the destination server using the credentials created
CREATE DATABASE [Database2] AS COPY OF [server1].[Database1];
NOTE
The Azure portal, PowerShell, and the Azure CLI do not support database copy to a different subscription.
TIP
Database copy using T-SQL supports copying a database from a subscription in a different Azure tenant. This is only
supported when using a SQL authentication login to log in to the target server. Creating a database copy on a logical
server in a different Azure tenant is not supported when Azure Active Directory auth is active (enabled) on either source
or target logical server.
NOTE
If you decide to cancel the copying while it is in progress, execute the DROP DATABASE statement on the new database.
IMPORTANT
If you need to create a copy with a substantially smaller service objective than the source, the target database may not
have sufficient resources to complete the seeding process and it can cause the copy operation to fail. In this scenario use
a geo-restore request to create a copy in a different server and/or a different region. See Recover an Azure SQL Database
using database backups for more information.
Resolve logins
After the new database is online on the target server, use the ALTER USER statement to remap the users from
the new database to logins on the target server. To resolve orphaned users, see Troubleshoot Orphaned Users.
See also How to manage Azure SQL Database security after disaster recovery.
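For example, a minimal sketch of remapping a database user in the copied database to a login on the target
server (the user and login names here are illustrative):
ALTER USER [app_user] WITH LOGIN = [app_login];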
All users in the new database retain the permissions that they had in the source database. The user who initiated
the database copy becomes the database owner of the new database. After the copying succeeds and before
other users are remapped, only the database owner can log in to the new database.
To learn about managing users and logins when you copy a database to a different server, see How to manage
Azure SQL Database security after disaster recovery.
Next steps
For information about logins, see Manage logins and How to manage Azure SQL Database security after
disaster recovery.
To export a database, see Export the database to a BACPAC.
Replication to Azure SQL Database
9/13/2022 • 3 minutes to read • Edit Online
NOTE
This article describes the use of transactional replication in Azure SQL Database. It is unrelated to active geo-replication,
an Azure SQL Database feature that allows you to create complete readable replicas of individual databases.
Supported configurations
Azure SQL Database can only be the push subscriber of a SQL Server publisher and distributor.
The SQL Server instance acting as publisher and/or distributor can be an instance of SQL Server running on-
premises, an Azure SQL Managed Instance, or an instance of SQL Server running on an Azure virtual
machine in the cloud.
The distribution database and the replication agents cannot be placed on a database in Azure SQL Database.
Snapshot and one-way transactional replication are supported. Peer-to-peer transactional replication and
merge replication are not supported.
Versions
To successfully replicate to a database in Azure SQL Database, SQL Server publishers and distributors must be
using (at least) one of the following versions of SQL Server:
SQL Server 2016 and greater
SQL Server 2014 RTM CU10 (12.0.4427.24) or SP1 CU3 (12.0.2556.4)
SQL Server 2012 SP2 CU8 (11.0.5634.1) or SP3 (11.0.6020.0)
NOTE
Attempting to configure replication using an unsupported version can result in error number MSSQL_REPL20084 (The
process could not connect to Subscriber.) and MSSQL_REPL40532 (Cannot open server <name> requested by the login.
The login failed.).
To use all the features of Azure SQL Database, you must be using the latest versions of SQL Server Management
Studio and SQL Server Data Tools.
Types of replication
There are different types of replication:

REPLICATION                  AZURE SQL DATABASE    AZURE SQL MANAGED INSTANCE
Merge replication            No                    No
Peer-to-peer                 No                    No
Bidirectional                No                    Yes
Updatable subscriptions      No                    No
Remarks
Only push subscriptions to Azure SQL Database are supported.
Replication can be configured by using SQL Server Management Studio or by executing Transact-SQL
statements on the publisher. You cannot configure replication by using the Azure portal.
Replication can only use SQL Server authentication logins to connect to Azure SQL Database.
Replicated tables must have a primary key.
You must have an existing Azure subscription.
The Azure SQL Database subscriber can be in any region.
A single publication on SQL Server can support both Azure SQL Database and SQL Server (on-premises and
SQL Server in an Azure virtual machine) subscribers.
Replication management, monitoring, and troubleshooting must be performed from SQL Server rather than
Azure SQL Database.
Only @subscriber_type = 0 is supported in sp_addsubscription for SQL Database.
Azure SQL Database does not support bi-directional, immediate, updatable, or peer-to-peer replication.
Replication Architecture
Scenarios
Typical Replication Scenario
1. Create a transactional replication publication on a SQL Server database.
2. On SQL Server, use the New Subscription Wizard or Transact-SQL statements to create a push subscription to
Azure SQL Database.
3. With single and pooled databases in Azure SQL Database, the initial data set is a snapshot that is created by
the Snapshot Agent and distributed and applied by the Distribution Agent. With a SQL Managed Instance
publisher, you can also use a database backup to seed the Azure SQL Database subscriber.
Data migration scenario
1. Use transactional replication to replicate data from a SQL Server database to Azure SQL Database.
2. Redirect the client or middle-tier applications to update the database copy.
3. Stop updating the SQL Server version of the table and remove the publication.
Limitations
The following options are not supported for Azure SQL Database subscriptions:
Copy file groups association
Copy table partitioning schemes
Copy index partitioning schemes
Copy user defined statistics
Copy default bindings
Copy rule bindings
Copy fulltext indexes
Copy XML XSD
Copy XML indexes
Copy permissions
Copy spatial indexes
Copy filtered indexes
Copy data compression attribute
Copy sparse column attribute
Convert filestream to MAX data types
Convert hierarchyid to MAX data types
Convert spatial to MAX data types
Copy extended properties
Limitations to be determined
Copy collation
Execution in a serialized transaction of the SP
Examples
Create a publication and a push subscription. For more information, see:
Create a Publication
Create a Push Subscription by using the server name as the subscriber (for example
N'azuresqldbdns.database.windows.net' ) and the Azure SQL Database name as the destination
database (for example AdventureWorks ).
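As a rough sketch of that push subscription setup using the replication stored procedures on the publisher
(the publication name and the SQL authentication login are placeholders; the subscriber server and destination
database names follow the example above):
-- Run at the publisher, on the publication database.
EXEC sp_addsubscription
    @publication = N'MyPublication',
    @subscriber = N'azuresqldbdns.database.windows.net',
    @destination_db = N'AdventureWorks',
    @subscription_type = N'Push',
    @sync_type = N'automatic';

EXEC sp_addpushsubscription_agent
    @publication = N'MyPublication',
    @subscriber = N'azuresqldbdns.database.windows.net',
    @subscriber_db = N'AdventureWorks',
    @subscriber_security_mode = 0,
    @subscriber_login = N'<sqlLogin>',
    @subscriber_password = N'<password>';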
See Also
Transactional replication
Create a Publication
Create a Push Subscription
Types of Replication
Monitoring (Replication)
Initialize a Subscription
Automate the replication of schema changes in
Azure SQL Data Sync
9/13/2022 • 8 minutes to read • Edit Online
IMPORTANT
We recommend that you read this article carefully, especially the sections about Troubleshooting and Other
considerations, before you start to implement automated schema change replication in your sync environment. We also
recommend that you read Sync data across multiple cloud and on-premises databases with SQL Data Sync. Some
database operations may break the solution described in this article. Additional domain knowledge of SQL Server and
Transact-SQL may be required to troubleshoot those issues.
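Create a table to track schema changes
In the database where schema changes are made, create a table to log each schema change command so it can be
synced to the other endpoints. A minimal sketch, assuming the table and column names referenced by the triggers
later in this article (SchemaChanges and SqlStmt):
CREATE TABLE [dbo].[SchemaChanges]
(
    [ID] bigint IDENTITY(1,1) PRIMARY KEY,   -- tracks the order of schema changes
    [SqlStmt] nvarchar(max),                  -- the schema change command to replay
    [Date] datetime DEFAULT GETDATE()
);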
This table has an identity column to track the order of schema changes. You can add more fields to log more
information if needed.
Create a table to track the history of schema changes
On all endpoints, create a table to track the ID of the most recently applied schema change command.
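A minimal sketch, assuming the table and column names read by the trigger shown below (SchemaChangeHistory and
LastAppliedId), seeded with 0 so that the first logged change is applied:
CREATE TABLE [dbo].[SchemaChangeHistory]
(
    [LastAppliedId] bigint NOT NULL
);
INSERT INTO [dbo].[SchemaChangeHistory] (LastAppliedId) VALUES (0);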
Create an ALTER TABLE DDL trigger in the database where schema changes are made
Create a DDL trigger for ALTER TABLE operations. You only need to create this trigger in the database where
schema changes are made. To avoid conflicts, only allow schema changes in one database in a sync group.
-- You can add your own logic to filter ALTER TABLE commands instead of replicating all of them.
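-- A minimal sketch of such a DDL trigger; the trigger name matches the AlterTableDDLTrigger referenced
-- later in this article, and the SchemaChanges/SqlStmt names match the tracking table shown above.
CREATE TRIGGER AlterTableDDLTrigger
ON DATABASE
FOR ALTER_TABLE
AS
    -- Skip changes made under the DataSync schema (these are most likely made by the Data Sync service).
    IF NOT (EVENTDATA().value('(/EVENT_INSTANCE/SchemaName)[1]', 'nvarchar(512)') LIKE 'DataSync')
        INSERT INTO SchemaChanges (SqlStmt)
        SELECT EVENTDATA().value('(/EVENT_INSTANCE/TSQLCommand/CommandText)[1]', 'nvarchar(max)');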
The trigger inserts a record in the schema change tracking table for each ALTER TABLE command. This example
adds a filter to avoid replicating schema changes made under schema DataSync , because these are most likely
made by the Data Sync service. Add more filters if you only want to replicate certain types of schema changes.
You can also add more triggers to replicate other types of schema changes. For example, create
CREATE_PROCEDURE, ALTER_PROCEDURE and DROP_PROCEDURE triggers to replicate changes to stored
procedures.
Create a trigger on other endpoints to apply schema changes during insertion
This trigger executes the schema change command when it is synced to other endpoints. You need to create this
trigger on all the endpoints, except the one where schema changes are made (that is, in the database where the
DDL trigger AlterTableDDLTrigger is created in the previous step).
CREATE TRIGGER SchemaChangesTrigger
ON SchemaChanges
AFTER INSERT
AS
DECLARE @lastAppliedId bigint
DECLARE @id bigint
DECLARE @sqlStmt nvarchar(max)
SELECT TOP 1 @lastAppliedId=LastAppliedId FROM SchemaChangeHistory
SELECT TOP 1 @id = id, @SqlStmt = SqlStmt FROM SchemaChanges WHERE id > @lastAppliedId ORDER BY id
IF (@id = @lastAppliedId + 1)
BEGIN
EXEC sp_executesql @SqlStmt
UPDATE SchemaChangeHistory SET LastAppliedId = @id
WHILE (1 = 1)
BEGIN
SET @id = @id + 1
IF exists (SELECT id FROM SchemaChanges WHERE ID = @id)
BEGIN
SELECT @sqlStmt = SqlStmt FROM SchemaChanges WHERE ID = @id
EXEC sp_executesql @SqlStmt
UPDATE SchemaChangeHistory SET LastAppliedId = @id
END
ELSE
BREAK;
END
END
This trigger runs after the insertion and checks whether the current command should run next. The code logic
ensures that no schema change statement is skipped, and all changes are applied even if the insertion is out of
order.
Sync the schema change tracking table to all endpoints
You can sync the schema change tracking table to all endpoints using the existing sync group or a new sync
group. Make sure the changes in the tracking table can be synced to all endpoints, especially when you're using
one-direction sync.
Don't sync the schema change history table, since that table maintains different state on different endpoints.
Apply the schema changes in a sync group
Only schema changes made in the database where the DDL trigger is created are replicated. Schema changes
made in other databases are not replicated.
After the schema changes are replicated to all endpoints, you also need to take extra steps to update the sync
schema to start or stop syncing the new columns.
Add new columns
1. Make the schema change.
2. Avoid any data change where the new columns are involved until you've completed the step that creates
the trigger.
3. Wait until the schema changes are applied to all endpoints.
4. Refresh the database schema and add the new column to the sync schema.
5. Data in the new column is synced during next sync operation.
Remove columns
1. Remove the columns from the sync schema. Data Sync stops syncing data in these columns.
2. Make the schema change.
3. Refresh the database schema.
Update data types
1. Make the schema change.
2. Wait until the schema changes are applied to all endpoints.
3. Refresh the database schema.
4. If the new and old data types are not fully compatible - for example, if you change from int to bigint -
sync may fail before the steps that create the triggers are completed. Sync succeeds after a retry.
Rename columns or tables
Renaming columns or tables makes Data Sync stop working. Create a new table or column, backfill the data, and
then delete the old table or column instead of renaming.
Other types of schema changes
For other types of schema changes - for example, creating stored procedures or dropping an index - updating
the sync schema is not required.
Other Considerations
Database users who configure the hub and member databases need to have enough permission to
execute the schema change commands.
You can add more filters in the DDL trigger to only replicate schema change in selected tables or
operations.
You can only make schema changes in the database where the DDL trigger is created.
If you are making a change in a SQL Server database, make sure the schema change is supported in
Azure SQL Database.
If schema changes are made in databases other than the database where the DDL trigger is created, the
changes are not replicated. To avoid this issue, you can create DDL triggers to block changes on other
endpoints.
If you need to change the schema of the schema change tracking table, disable the DDL trigger before
you make the change, and then manually apply the change to all endpoints. Updating the schema in an
AFTER INSERT trigger on the same table does not work.
Don't reseed the identity column by using DBCC CHECKIDENT.
Don't use TRUNCATE to clean up data in the schema change tracking table.
Next steps
For more info about SQL Data Sync, see:
Overview - Sync data across multiple cloud and on-premises databases with Azure SQL Data Sync
Set up Data Sync
In the portal - Tutorial: Set up SQL Data Sync to sync data between Azure SQL Database and SQL
Server
With PowerShell
Use PowerShell to sync between multiple databases in Azure SQL Database
Use PowerShell to sync between a database in Azure SQL Database and a database in a SQL
Server instance
Data Sync Agent - Data Sync Agent for Azure SQL Data Sync
Best practices - Best practices for Azure SQL Data Sync
Monitor - Monitor SQL Data Sync with Azure Monitor logs
Troubleshoot - Troubleshoot issues with Azure SQL Data Sync
Update the sync schema
With PowerShell - Use PowerShell to update the sync schema in an existing sync group
Upgrade an app to use the latest elastic database
client library
9/13/2022 • 3 minutes to read • Edit Online
Upgrade steps
1. Upgrade your applications. In Visual Studio, download and reference the latest client library version into
all of your development projects that use the library; then rebuild and deploy.
In your Visual Studio solution, select Tools --> NuGet Package Manager --> Manage NuGet Packages
for Solution .
(Visual Studio 2013) In the left panel, select Updates , and then select the Update button on the package
Azure SQL Database Elastic Scale Client Library that appears in the window.
(Visual Studio 2015) Set the Filter box to Upgrade available . Select the package to update, and click the
Update button.
(Visual Studio 2017) At the top of the dialog, select Updates . Select the package to update, and click the
Update button.
Build and Deploy.
2. Upgrade your scripts. If you are using PowerShell scripts to manage shards, download the new library
version and copy it into the directory from which you execute scripts.
3. Upgrade your split-merge service. If you use the elastic database split-merge tool to reorganize sharded
data, download and deploy the latest version of the tool. Detailed upgrade steps for the Service can be found
here.
4. Upgrade your Shard Map Manager databases . Upgrade the metadata supporting your Shard Maps in
Azure SQL Database. There are two ways you can accomplish this, using PowerShell or C#. Both options are
shown below.
Option 1: Upgrade metadata using PowerShell
1. Download the latest command-line utility for NuGet from here and save to a folder.
2. Open a Command Prompt, navigate to the same folder, and issue the command:
nuget install Microsoft.Azure.SqlDatabase.ElasticScale.Client
3. Navigate to the subfolder containing the new client DLL version you have just downloaded, for example:
cd .\Microsoft.Azure.SqlDatabase.ElasticScale.Client.1.0.0\lib\net45
4. Download the elastic database client upgrade script from the Script Center, and save it into the same folder
containing the DLL.
5. From that folder, run “PowerShell .\upgrade.ps1” from the command prompt and follow the prompts.
Option 2: Upgrade metadata using C#
Alternatively, create a Visual Studio application that opens your ShardMapManager, iterates over all shards, and
performs the metadata upgrade by calling the methods UpgradeLocalStore and UpgradeGlobalStore as in this
example:
ShardMapManager smm = ShardMapManagerFactory.GetSqlShardMapManager(
    connStr, ShardMapManagerLoadPolicy.Lazy);
// Upgrade the global shard map (GSM) metadata, then the local (LSM) metadata on every shard.
smm.UpgradeGlobalStore();
foreach (ShardLocation location in smm.GetDistinctShardLocations())
{
    smm.UpgradeLocalStore(location);
}
These techniques for metadata upgrades can be applied multiple times without harm. For example, if an older
client version inadvertently creates a shard after you have already updated, you can run upgrade again across
all shards to ensure that the latest metadata version is present throughout your infrastructure.
Note: New versions of the client library published to-date continue to work with prior versions of the Shard
Map Manager metadata on Azure SQL Database, and vice-versa. However to take advantage of some of the new
features in the latest client, metadata needs to be upgraded. Note that metadata upgrades will not affect any
user-data or application-specific data, only objects created and used by the Shard Map Manager. And
applications continue to operate through the upgrade sequence described above.
Additional resources
Not using elastic database tools yet? Check out our Getting Started Guide. For questions, contact us on the
Microsoft Q&A question page for SQL Database and for feature requests, add new ideas or vote for existing
ideas in the SQL Database feedback forum.
Get started with Elastic Database Tools
9/13/2022 • 4 minutes to read • Edit Online
mvn install
4. To start the sample project, in the ./sample directory, run the following command:
5. To learn more about the client library capabilities, experiment with the various options. Feel free to
explore the code to learn about the sample app implementation.
Congratulations! You have successfully built and run your first sharded application by using Elastic Database
Tools on Azure SQL Database. Use Visual Studio or SQL Server Management Studio to connect to your database
and take a quick look at the shards that the sample created. You will notice new sample shard databases and a
shard map manager database that the sample has created.
To add the client library to your own Maven project, add the following dependency in your POM file:
<dependency>
<groupId>com.microsoft.azure</groupId>
<artifactId>elastic-db-tools</artifactId>
<version>1.0.0</version>
</dependency>
IMPORTANT
We recommend that you always use the latest version of Management Studio so that you stay synchronized with updates
to Azure and SQL Database. Update SQL Server Management Studio.
Next steps
For more information about Elastic Database Tools, see the following articles:
Code samples:
Elastic Database Tools (.NET, Java)
Elastic Database Tools for Azure SQL - Entity Framework Integration
Shard Elasticity on Script Center
Blog: Elastic Scale announcement
Discussion forum: Microsoft Q&A question page for Azure SQL Database
To measure performance: Performance counters for shard map manager
Report across scaled-out cloud databases (preview)
9/13/2022 • 4 minutes to read • Edit Online
Prerequisites
Download and run the Getting started with Elastic Database tools sample.
2. In the command window, type "1" and press Enter . This creates the shard map manager, and adds two
shards to the server. Then type "3" and press Enter ; repeat the action four times. This inserts sample data
rows in your shards.
3. The Azure portal should show three new databases in your server:
At this point, cross-database queries are supported through the Elastic Database client libraries. For
example, use option 4 in the command window. The results from a multi-shard query are always a
UNION ALL of the results from all shards.
In the next section, we create a sample database endpoint that supports richer querying of the data
across shards.
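In the elastic query database (called ElasticDBQuery later in this walkthrough), create a master key and a
database scoped credential that elastic query uses to connect to the shards. A minimal sketch; the credential
name, username, and password are placeholders:
CREATE MASTER KEY ENCRYPTION BY PASSWORD = '<strong_password>';
GO
CREATE DATABASE SCOPED CREDENTIAL ElasticDBQueryCred
WITH IDENTITY = '<username>',
SECRET = '<password>';
GO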
"username" and "password" should be the same as login information used in step 3 of section Download
and run the sample app in the Getting started with Elastic Database tools article.
External data sources
To create an external data source, execute the following command on the ElasticDBQuery database:
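A minimal sketch, assuming the credential created above and the shard map manager database created by the
Getting started sample (the data source name, server name, and shard map manager database name are illustrative):
CREATE EXTERNAL DATA SOURCE MyElasticDBQueryDataSrc WITH
(
    TYPE = SHARD_MAP_MANAGER,
    LOCATION = '<server_name>.database.windows.net',
    DATABASE_NAME = 'ElasticScaleStarterKit_ShardMapManagerDb',
    CREDENTIAL = ElasticDBQueryCred,
    SHARD_MAP_NAME = 'CustomerIDShardMap'
);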
"CustomerIDShardMap" is the name of the shard map, if you created the shard map and shard map manager
using the elastic database tools sample. However, if you used your custom setup for this sample, then it should
be the shard map name you chose in your application.
External tables
Create an external table that matches the Customers table on the shards by executing the following command
on ElasticDBQuery database:
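A minimal sketch, assuming the column layout of the sample's Customers table and the data source defined in the
previous step:
CREATE EXTERNAL TABLE [dbo].[Customers]
(
    [CustomerId] int NOT NULL,
    [Name] nvarchar(256) NOT NULL,
    [RegionId] int NOT NULL
)
WITH
(
    DATA_SOURCE = MyElasticDBQueryDataSrc
);

You can then run a query over the external table, for example a simple count of rows across all shards:
SELECT COUNT(CustomerId) FROM [dbo].[Customers];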
You will notice that the query aggregates results from all the shards and gives the following output:
4. In the Data Connection Wizard type the server name and login credentials. Then click Next .
5. In the dialog box Select the database that contains the data you want, select the ElasticDBQuery database.
6. Select the Customers table in the list view and click Next . Then click Finish .
7. In the Import Data form, under Select how you want to view this data in your workbook, select Table and click OK.
All the rows from Customers table, stored in different shards populate the Excel sheet.
You can now use Excel’s powerful data visualization functions. You can use the connection string with your
server name, database name and credentials to connect your BI and data integration tools to the elastic query
database. Make sure that SQL Server is supported as a data source for your tool. You can refer to the elastic
query database and external tables just like any other SQL Server database and SQL Server tables that you
would connect to with your tool.
Cost
There is no additional charge for using the Elastic Database Query feature.
For pricing information see SQL Database Pricing Details.
Next steps
For an overview of elastic query, see Elastic query overview.
For a vertical partitioning tutorial, see Getting started with cross-database query (vertical partitioning).
For syntax and sample queries for vertically partitioned data, see Querying vertically partitioned data.
For syntax and sample queries for horizontally partitioned data, see Querying horizontally partitioned data.
See sp_execute_remote for a stored procedure that executes a Transact-SQL statement on a single remote
Azure SQL Database or set of databases serving as shards in a horizontal partitioning scheme.
Multi-shard querying using elastic database tools
9/13/2022 • 2 minutes to read • Edit Online
Overview
With the Elastic Database tools, you can create sharded database solutions. Multi-shard querying is used for
tasks such as data collection/reporting that require running a query that stretches across several shards.
(Contrast this to data-dependent routing, which performs all work on a single shard.)
1. Get a RangeShardMap (Java, .NET) or ListShardMap (Java, .NET) using the TryGetRangeShardMap
(Java, .NET), the TryGetListShardMap (Java, .NET), or the GetShardMap (Java, .NET) method. See
Constructing a ShardMapManager and Get a RangeShardMap or ListShardMap.
2. Create a MultiShardConnection (Java, .NET) object.
3. Create a MultiShardStatement or MultiShardCommand (Java, .NET).
4. Set the CommandText property (Java, .NET) to a T-SQL command.
5. Execute the command by calling the ExecuteQueryAsync or ExecuteReader (Java, .NET) method.
6. View the results using the MultiShardResultSet or MultiShardDataReader (Java, .NET) class.
Example
The following code illustrates the usage of multi-shard querying using a given ShardMap named
myShardMap.
Additional resources
Not using elastic database tools yet? Check out our Getting Started Guide. For questions, contact us on the
Microsoft Q&A question page for SQL Database and for feature requests, add new ideas or vote for existing
ideas in the SQL Database feedback forum.
Deploy a split-merge service to move data between
sharded databases
9/13/2022 • 12 minutes to read • Edit Online
Prerequisites
1. Create an Azure SQL Database database that will be used as the split-merge status database. Go to the
Azure portal. Create a new SQL Database . Give the database a name and create a new administrator
and password. Be sure to record the name and password for later use.
2. Ensure that your server allows Azure Services to connect to it. In the portal, in the Firewall Settings,
ensure the Allow access to Azure Services setting is set to On. Click the save icon.
3. Create an Azure Storage account for diagnostics output.
4. Create an Azure Cloud Service for your Split-Merge service.
With Azure SQL Database, the connection string typically is of the form:
Server=<serverName>.database.windows.net; Database=<databaseName>;User ID=<userId>; Password=
<password>; Encrypt=True; Connection Timeout=30
4. Enter this connection string in the .cscfg file in both the SplitMergeWeb and SplitMergeWorker role
sections in the ElasticScaleMetadata setting.
5. For the SplitMergeWorker role, enter a valid connection string to Azure storage for the
WorkerRoleSynchronizationStorageAccountConnectionString setting.
Configure security
For detailed instructions to configure the security of the service, refer to the Split-Merge security configuration.
For the purposes of a simple test deployment for this tutorial, a minimal set of configuration steps will be
performed to get the service up and running. These steps enable only the one machine/account executing them
to communicate with the service.
Create a self-signed certificate
Create a new directory and from this directory execute the following command using a Developer Command
Prompt for Visual Studio window:
makecert ^
-n "CN=*.cloudapp.net" ^
-r -cy end -sky exchange -eku "1.3.6.1.5.5.7.3.1,1.3.6.1.5.5.7.3.2" ^
-a sha256 -len 2048 ^
-sr currentuser -ss root ^
-sv MyCert.pvk MyCert.cer
You are asked for a password to protect the private key. Enter a strong password and confirm it. You are then
prompted to enter the password once more. Click Yes at the end to import the certificate to the Trusted Root
Certification Authorities store.
Create a PFX file
Execute the following command from the same window where makecert was executed; use the same password
that you used to create the certificate:
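A minimal sketch using the pvk2pfx tool (available alongside makecert in the Windows SDK); the file names match
the makecert example above and the password is a placeholder:
pvk2pfx -pvk MyCert.pvk -spc MyCert.cer -pfx MyCert.pfx -pi <password>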
Please note that for production deployments, separate certificates should be used for the CA, for encryption, the
server certificate, and the client certificates. For detailed instructions on this, see Security Configuration.
IMPORTANT
The sample scripts run on PowerShell 5.1. They do not currently run on PowerShell 6 or later.
POWERSHELL FILE                  STEPS
ExecuteSampleSplitMerge.ps1      1. Sends a split request to the Split-Merge Service web frontend, which splits half the data from the first shard to the second shard.
                                 2. Polls the web frontend for the split request status and waits until the request completes.
                                 3. Sends a merge request to the Split-Merge Service web frontend, which moves the data from the second shard back to the first shard.
NOTE
The SetupSampleSplitMergeEnvironment.ps1 script creates all these databases on the same server by default to
keep the script simple. This is not a restriction of the Split-Merge Service itself.
A SQL authentication login with read/write access to the DBs will be needed for the Split-Merge service to
move data and update the shard map. Since the Split-Merge Service runs in the cloud, it does not
currently support Integrated Authentication.
Make sure the server is configured to allow access from the IP address of the machine running these
scripts. You can find this setting under SQL server / Firewalls and virtual networks / Client IP addresses.
3. Execute the SetupSampleSplitMergeEnvironment.ps1 script to create the sample environment.
Running this script will wipe out any existing shard map management data structures on the shard map
manager database and the shards. It may be useful to rerun the script if you wish to re-initialize the shard
map or shards.
Sample command line:
.\SetupSampleSplitMergeEnvironment.ps1
-UserName 'mysqluser' -Password 'MySqlPassw0rd' -ShardMapManagerServerName
'abcdefghij.database.windows.net'
4. Execute the Getmappings.ps1 script to view the mappings that currently exist in the sample environment.
.\GetMappings.ps1
-UserName 'mysqluser' -Password 'MySqlPassw0rd' -ShardMapManagerServerName
'abcdefghij.database.windows.net'
5. Execute the ExecuteSampleSplitMerge.ps1 script to execute a split operation (moving half the data on the
first shard to the second shard) and then a merge operation (moving the data back onto the first shard). If
you configured TLS and left the http endpoint disabled, ensure that you use the https:// endpoint instead.
Sample command line:
.\ExecuteSampleSplitMerge.ps1
-UserName 'mysqluser' -Password 'MySqlPassw0rd'
-ShardMapManagerServerName 'abcdefghij.database.windows.net'
-SplitMergeServiceEndpoint 'https://mysplitmergeservice.cloudapp.net'
-CertificateThumbprint '0123456789abcdef0123456789abcdef01234567'
If you receive the below error, it is most likely a problem with your Web endpoint's certificate. Try
connecting to the Web endpoint with your favorite Web browser and check if there is a certificate error.
Invoke-WebRequest : The underlying connection was closed: Could not establish trust relationship for
the SSL/TLSsecure channel.
6. Experiment with other data types! All of these scripts take an optional -ShardKeyType parameter that
allows you to specify the key type. The default is Int32, but you can also specify Int64, Guid, or Binary.
Create requests
The service can be used either by using the web UI or by importing and using the SplitMerge.psm1 PowerShell
module which will submit your requests through the web role.
The service can move data in both sharded tables and reference tables. A sharded table has a sharding key
column and has different row data on each shard. A reference table is not sharded so it contains the same row
data on every shard. Reference tables are useful for data that does not change often and is used to JOIN with
sharded tables in queries.
In order to perform a split-merge operation, you must declare the sharded tables and reference tables that you
want to have moved. This is accomplished with the SchemaInfo API. This API is in the
Microsoft.Azure.SqlDatabase.ElasticScale.ShardManagement.Schema namespace.
1. For each sharded table, create a ShardedTableInfo object describing the table's parent schema name
(optional, defaults to "dbo"), the table name, and the column name in that table that contains the sharding
key.
2. For each reference table, create a ReferenceTableInfo object describing the table's parent schema name
(optional, defaults to "dbo") and the table name.
3. Add the above TableInfo objects to a new SchemaInfo object.
4. Get a reference to a ShardMapManager object, and call GetSchemaInfoCollection .
5. Add the SchemaInfo to the SchemaInfoCollection , providing the shard map name.
An example of this can be seen in the SetupSampleSplitMergeEnvironment.ps1 script.
The Split-Merge service does not create the target database (or schema for any tables in the database) for you.
They must be pre-created before sending a request to the service.
Troubleshooting
You may see the below message when running the sample PowerShell scripts:
Invoke-WebRequest : The underlying connection was closed: Could not establish trust relationship for the
SSL/TLS secure channel.
This error means that your TLS/SSL certificate is not configured correctly. Please follow the instructions in
section 'Connecting with a web browser'.
If you cannot submit requests you may see this:
[Exception] System.Data.SqlClient.SqlException (0x80131904): Could not find stored procedure
'dbo.InsertRequest'.
In this case, check your configuration file, in particular the setting for
WorkerRoleSynchronizationStorageAccountConnectionString . This error typically indicates that the
worker role could not successfully initialize the metadata database on first use.
Additional resources
Not using elastic database tools yet? Check out our Getting Started Guide. For questions, contact us on the
Microsoft Q&A question page for SQL Database and for feature requests, add new ideas or vote for existing
ideas in the SQL Database feedback forum.
Split-merge security configuration
9/13/2022 • 12 minutes to read • Edit Online
Configuring certificates
Certificates are configured in two ways.
1. To Configure the TLS/SSL Certificate
2. To Configure Client Certificates
To obtain certificates
Certificates can be obtained from public Certificate Authorities (CAs) or from the Windows Certificate Service.
These are the preferred methods to obtain certificates.
If those options are not available, you can generate self-signed certificates.
The makecert tool is located in %ProgramFiles(x86)%\Windows Kits\x.y\bin\x86.
Get the WDK from Windows 8.1: Download kits and tools.
Allowed IP addresses
Access to the service endpoints can be restricted to specific ranges of IP addresses.
<EndpointAcls>
<EndpointAcl role="SplitMergeWeb" endPoint="HttpIn" accessControl="DenyAll" />
<EndpointAcl role="SplitMergeWeb" endPoint="HttpsIn" accessControl="AllowAll" />
</EndpointAcls>
The rules in an access control group are configured in a <AccessControl name=""> section of the service
configuration file.
The format is explained in Network Access Control Lists documentation. For example, to allow only IPs in the
range 100.100.0.0 to 100.100.255.255 to access the HTTPS endpoint, the rules would look like this:
<AccessControl name="Restricted">
<Rule action="permit" description="Some" order="1" remoteSubnet="100.100.0.0/16"/>
<Rule action="deny" description="None" order="2" remoteSubnet="0.0.0.0/0" />
</AccessControl>
<EndpointAcls>
<EndpointAcl role="SplitMergeWeb" endPoint="HttpsIn" accessControl="Restricted" />
</EndpointAcls>
Refer to the documentation for Dynamic IP Security in IIS for other supported values.
makecert ^
-n "CN=myservice.cloudapp.net" ^
-e MM/DD/YYYY ^
-r -cy end -sky exchange -eku "1.3.6.1.5.5.7.3.1" ^
-a sha256 -len 2048 ^
-sv MySSL.pvk MySSL.cer
To customize:
-n with the service URL. Wildcards ("CN=*.cloudapp.net") and alternative names
("CN=myservice1.cloudapp.net, CN=myservice2.cloudapp.net") are supported.
-e with the certificate expiration date
Create a strong password and specify it when prompted.
Then, copy the same thumbprint as the TLS/SSL certificate in the CA certificate setting:
makecert ^
-n "CN=MyCA" ^
-e MM/DD/YYYY ^
-r -cy authority -h 1 ^
-a sha256 -len 2048 ^
-sr localmachine -ss my ^
MyCA.cer
To customize it
-e with the certification expiration date
Update the value of the following setting with the same thumbprint:
Customizing:
-n with an ID for the client that will be authenticated with this certificate
-e with the certificate expiration date
MyID.pvk and MyID.cer with unique filenames for this client certificate
This command will prompt for a password to be created and then used once. Use a strong password.
Customizing:
MyID.pvk and MyID.cer with the filename for the client certificate
Customizing:
MyID.pvk and MyID.cer with the filename for the encryption certificate
Find certificate
Follow these steps:
1. Run mmc.exe.
2. File -> Add/Remove Snap-in…
3. Select Certificates.
4. Click Add .
5. Choose the certificate store location.
6. Click Finish .
7. Click OK .
8. Expand Certificates.
9. Expand the certificate store node.
10. Expand the Certificate child node.
11. Select a certificate in the list.
Export certificate
In the Certificate Export Wizard:
1. Click Next .
2. Select Yes, then Export the private key.
3. Click Next .
4. Select the desired output file format.
5. Check the desired options.
6. Check Password .
7. Enter a strong password and confirm it.
8. Click Next .
9. Type or browse to a filename where the certificate will be stored (use a .PFX extension).
10. Click Next .
11. Click Finish .
12. Click OK .
Import certificate
In the Certificate Import Wizard:
1. Select the store location.
Select Current User if only processes running under current user will access the service
Select Local Machine if other processes in this computer will access the service
2. Click Next .
3. If importing from a file, confirm the file path.
4. If importing a .PFX file:
a. Enter the password protecting the private key
b. Select import options
5. Select Place all certificates in the following store.
6. Click Browse .
7. Select the desired store.
8. Click Finish .
If the Trusted Root Certification Authority store was chosen, click Yes .
9. Click OK on all dialog windows.
Upload certificate
In the Azure portal
1. Select Cloud Services.
2. Select the cloud service.
3. On the top menu, click Certificates.
4. On the bottom bar, click Upload .
5. Select the certificate file.
6. If it is a .PFX file, enter the password for the private key.
7. Once completed, copy the certificate thumbprint from the new entry in the list.
Credentials stored in this database are encrypted. However, as a best practice, ensure that both web and worker
roles of your service deployments are kept up to date and secure as they both have access to the metadata
database and the certificate used for encryption and decryption of stored credentials.
Additional resources
Not using elastic database tools yet? Check out our Getting Started Guide. For questions, contact us on the
Microsoft Q&A question page for SQL Database and for feature requests, add new ideas or vote for existing
ideas in the SQL Database feedback forum.
Adding a shard using Elastic Database tools
9/13/2022 • 2 minutes to read • Edit Online
// sm is a RangeShardMap object.
// Add a new shard to hold the range being added; create the shard entry only if it does not
// already exist (shardServer and the database name here are illustrative).
Shard shard2 = null;
if (!sm.TryGetShard(new ShardLocation(shardServer, "sample_shard_2"), out shard2))
    shard2 = sm.CreateShard(new ShardLocation(shardServer, "sample_shard_2"));
For the .NET version, you can also use PowerShell as an alternative to create a new Shard Map Manager. An
example is available here.
IMPORTANT
Use this technique only if you are certain that the range for the updated mapping is empty. The preceding methods do
not check data for the range being moved, so it is best to include checks in your code. If rows exist in the range being
moved, the actual data distribution will not match the updated shard map. Use the split-merge tool to perform the
operation instead in these cases.
Additional resources
Not using elastic database tools yet? Check out our Getting Started Guide. For questions, contact us on the
Microsoft Q&A question page for SQL Database and for feature requests, add new ideas or vote for existing
ideas in the SQL Database feedback forum.
Using the RecoveryManager class to fix shard map
problems
9/13/2022 • 8 minutes to read • Edit Online
For term definitions, see Elastic Database tools glossary. To understand how the ShardMapManager is used to
manage data in a sharded solution, see Shard map management.
In this example, the RecoveryManager is initialized from the ShardMapManager. The ShardMapManager
containing a ShardMap is also already initialized.
Since this application code manipulates the shard map itself, the credentials used in the factory method (in the
preceding example, smmConnectionString) should be credentials that have read-write permissions on the GSM
database referenced by the connection string. These credentials are typically different from credentials used to
open connections for data-dependent routing. For more information, see Using credentials in the elastic
database client.
rm.DetachShard(s.Location, customerMap);
The shard map reflects the shard location in the GSM before the deletion of the shard. Because the shard was
deleted, it is assumed this was intentional, and the sharding key range is no longer in use. If not, you can execute
a point-in-time restore to recover the shard from an earlier point in time. (In that case, review the following
section to detect shard inconsistencies.) To recover, see Point in time recovery.
Since it is assumed the database deletion was intentional, the final administrative cleanup action is to delete the
entry to the shard in the shard map manager. This prevents the application from inadvertently writing
information to a range that is not expected.
rm.DetectMappingDifferences(location, shardMapName);
The RecoveryToken parameter enumerates the differences in the mappings between the GSM and the LSM
for the specific shard.
The MappingDifferenceResolution enumeration is used to indicate the method for resolving the difference
between the shard mappings.
MappingDifferenceResolution.KeepShardMapping is recommended when the LSM contains the
accurate mapping and therefore the mapping in the shard should be used. This is typically the case if there is
a failover: the shard now resides on a new server. Since the shard must first be removed from the GSM
(using the RecoveryManager.DetachShard method), a mapping no longer exists on the GSM. Therefore, the
LSM must be used to re-establish the shard mapping.
rm.AttachShard(location, shardMapName)
The location parameter is the server name and database name of the shard being attached.
The shardMapName parameter is the shard map name. It is optional and only required when multiple shard maps
are managed by the same shard map manager.
This example adds a shard to the shard map that has been recently restored from an earlier point-in time. Since
the shard (namely the mapping for the shard in the LSM) has been restored, it is potentially inconsistent with
the shard entry in the GSM. Outside of this example code, the shard was restored and renamed to the original
name of the database. Since it was restored, it is assumed the mapping in the LSM is the trusted mapping.
rm.AttachShard(s.Location, customerMap);
var gs = rm.DetectMappingDifferences(s.Location);
foreach (RecoveryToken g in gs)
{
rm.ResolveMappingDifferences(g, MappingDifferenceResolution.KeepShardMapping);
}
Best practices
Geo-failover and recovery are operations typically managed by a cloud administrator of the application
intentionally utilizing Azure SQL Database business continuity features. Business continuity planning requires
processes, procedures, and measures to ensure that business operations can continue without interruption. The
methods available as part of the RecoveryManager class should be used within this work flow to ensure the
GSM and LSM are kept up-to-date based on the recovery action taken. There are five basic steps to properly
ensuring the GSM and LSM reflect the accurate information after a failover event. The application code to
execute these steps can be integrated into existing tools and workflow.
1. Retrieve the RecoveryManager from the ShardMapManager.
2. Detach the old shard from the shard map.
3. Attach the new shard to the shard map, including the new shard location.
4. Detect inconsistencies in the mapping between the GSM and LSM.
5. Resolve differences between the GSM and the LSM, trusting the LSM.
This example performs the following steps:
1. Removes shards from the Shard Map that reflect shard locations before the failover event.
2. Attaches shards to the Shard Map reflecting the new shard locations (the parameter
"Configuration.SecondaryServer" is the new server name but the same database name).
3. Retrieves the recovery tokens by detecting mapping differences between the GSM and the LSM for each
shard.
4. Resolves the inconsistencies by trusting the mapping from the LSM of each shard.
Additional resources
Not using elastic database tools yet? Check out our Getting Started Guide. For questions, contact us on the
Microsoft Q&A question page for SQL Database and for feature requests, add new ideas or vote for existing
ideas in the SQL Database feedback forum.
Migrate existing databases to scale out
9/13/2022 • 4 minutes to read • Edit Online
Overview
To migrate an existing sharded database:
1. Prepare the shard map manager database.
2. Create the shard map.
3. Prepare the individual shards.
4. Add mappings to the shard map.
These techniques can be implemented using either the .NET Framework client library, or the PowerShell scripts
found at Azure SQL Database - Elastic Database tools scripts. The examples here use the PowerShell scripts.
For more information about the ShardMapManager, see Shard map management. For an overview of the Elastic
Database tools, see Elastic Database features overview.
The multi-tenant model assigns several tenants to an individual database (and you can distribute groups of
tenants across multiple databases). Use this model when you expect each tenant to have small data needs. In
this model, assign a range of tenants to a database using range mapping .
Or you can implement a multi-tenant database model using a list mapping to assign multiple tenants to an
individual database. For example, DB1 is used to store information about tenant ID 1 and 5, and DB2 stores data
for tenant 7 and tenant 10.
Based on your choice, choose one of these options:
Option 1: Create a shard map for a list mapping
Create a shard map using the ShardMapManager object.
Step 4 option 3: Map the data for multiple tenants on an individual database
For each tenant, run the Add-ListMapping (option 1).
Summary
Once you have completed the setup, you can begin to use the Elastic Database client library. You can also use
data-dependent routing and multi-shard query.
Next steps
Get the PowerShell scripts from Azure Elastic Database tools scripts.
The Elastic database tools client library is available on GitHub: Azure/elastic-db-tools.
Use the split-merge tool to move data to or from a multi-tenant model to a single tenant model. See Split merge
tool.
Additional resources
For information on common data architecture patterns of multi-tenant software-as-a-service (SaaS) database
applications, see Design Patterns for Multi-tenant SaaS Applications with Azure SQL Database.
Prerequisites
To create the performance category and counters, the user must be a part of the local Administrators
group on the machine hosting the application.
To create a performance counter instance and update the counters, the user must be a member of either the
Administrators or Performance Monitor Users group.
You can also use this PowerShell script to execute the method. The method creates the following performance
counters:
Cached mappings : Number of mappings cached for the shard map.
DDR operations/sec : Rate of data dependent routing operations for the shard map. This counter is updated
when a call to OpenConnectionForKey() results in a successful connection to the destination shard.
Mapping lookup cache hits/sec : Rate of successful cache lookup operations for mappings in the shard
map.
Mapping lookup cache misses/sec : Rate of failed cache lookup operations for mappings in the shard
map.
Mappings added or updated in cache/sec : Rate at which mappings are being added or updated in cache
for the shard map.
Mappings removed from cache/sec : Rate at which mappings are being removed from cache for the
shard map.
Performance counters are created for each cached shard map per process.
Notes
The following events trigger the creation of the performance counters:
Initialization of the ShardMapManager with eager loading, if the ShardMapManager contains any shard
maps. These include the GetSqlShardMapManager and the TryGetSqlShardMapManager methods.
Successful lookup of a shard map (using GetShardMap(), GetListShardMap() or GetRangeShardMap()).
Successful creation of shard map using CreateShardMap().
The performance counters will be updated by all cache operations performed on the shard map and mappings.
Successful removal of the shard map using DeleteShardMap() results in deletion of the performance counters
instance.
Best practices
Creation of the performance category and counters should be performed only once before the creation of
ShardMapManager object. Every execution of the command CreatePerformanceCategoryAndCounters()
clears the previous counters (losing data reported by all instances) and creates new ones.
Performance counter instances are created per process. Any application crash or removal of a shard map
from the cache will result in deletion of the performance counters instances.
See also
Elastic Database features overview
Additional resources
Not using elastic database tools yet? Check out our Getting Started Guide. For questions, contact us on the
Microsoft Q&A question page for SQL Database and for feature requests, add new ideas or vote for existing
ideas in the SQL Database feedback forum.
Elastic Database client library with Entity Framework
9/13/2022 • 16 minutes to read • Edit Online
Requirements
When working with both the elastic database client library and Entity Framework APIs, you want to retain the
following properties:
Scale-out : To add or remove databases from the data tier of the sharded application as necessary for the
capacity demands of the application. This means control over the creation and deletion of databases and
using the elastic database shard map manager APIs to manage databases, and mappings of shardlets.
Consistency : The application employs sharding, and uses the data-dependent routing capabilities of the
client library. To avoid corruption or wrong query results, connections are brokered through the shard map
manager. This also retains validation and consistency.
Code First : To retain the convenience of EF’s code first paradigm. In Code First, classes in the application are
mapped transparently to the underlying database structures. The application code interacts with DbSets that
mask most aspects involved in the underlying database processing.
Schema : Entity Framework handles initial database schema creation and subsequent schema evolution
through migrations. By retaining these capabilities, adapting your app is easy as the data evolves.
The following guidance instructs how to satisfy these requirements for Code First applications using elastic
database tools.
// C'tor for data-dependent routing. This call opens a validated connection
// routed to the proper shard by the shard map manager.
public ElasticScaleContext(ShardMap shardMap, T shardingKey, string connectionStr)
    : base(CreateDDRConnection(shardMap, shardingKey, connectionStr),
        true /* contextOwnsConnection */)
{
}

// Only static methods are allowed in calls into base class c'tors.
private static DbConnection CreateDDRConnection(
ShardMap shardMap,
T shardingKey,
string connectionStr)
{
// No initialization
Database.SetInitializer<ElasticScaleContext<T>>(null);
// Ask shard map to broker a validated connection for the given key
SqlConnection conn = shardMap.OpenConnectionForKey<T>
(shardingKey, connectionStr, ConnectionOptions.Validate);
return conn;
}
Main points
A new constructor replaces the default constructor in the DbContext subclass
The new constructor takes the arguments that are required for data-dependent routing through elastic
database client library:
the shard map to access the data-dependent routing interfaces,
the sharding key to identify the shardlet,
a connection string with the credentials for the data-dependent routing connection to the shard.
The call to the base class constructor takes a detour into a static method that performs all the steps
necessary for data-dependent routing.
It uses the OpenConnectionForKey call of the elastic database client interfaces on the shard map to
establish an open connection.
The shard map creates the open connection to the shard that holds the shardlet for the given sharding
key.
This open connection is passed back to the base class constructor of DbContext to indicate that this
connection is to be used by EF instead of letting EF create a new connection automatically. This way
the connection has been tagged by the elastic database client API so that it can guarantee consistency
under shard map management operations.
Use the new constructor for your DbContext subclass instead of the default constructor in your code. Here is an
example:
The new constructor opens the connection to the shard that holds the data for the shardlet identified by the
value of tenantid1 . The code in the using block stays unchanged to access the DbSet for blogs using EF on the
shard for tenantid1 . This changes semantics for the code in the using block such that all database operations
are now scoped to the one shard where tenantid1 is kept. For instance, a LINQ query over the blogs DbSet
would only return blogs stored on the current shard, but not the ones stored on other shards.
Transient fault handling
The Microsoft Patterns & Practices team published the Transient Fault Handling Application Block. The
library is used with the elastic scale client library in combination with EF. However, ensure that any transient
exception returns to a place where you can ensure that the new constructor is being used after a transient fault
so that any new connection attempt is made using the constructors you tweaked. Otherwise, a connection to the
correct shard is not guaranteed, and there are no assurances the connection is maintained as changes to the
shard map occur.
The following code sample illustrates how a SQL retry policy can be used around the new DbContext subclass
constructors:
SqlDatabaseUtils.SqlRetryPolicy.ExecuteAction(() =>
{
using (var db = new ElasticScaleContext<int>(
sharding.ShardMap,
tenantId1,
connStrBldr.ConnectionString))
{
var blog = new Blog { Name = name };
db.Blogs.Add(blog);
db.SaveChanges();
…
}
});
// Enter a new shard - i.e. an empty database - to the shard map, allocate a first tenant to it
// and kick off EF initialization of the database to deploy schema
public void RegisterNewShard(string server, string database, string connStr, int key)
{
// Add the new shard (an empty database) to the shard map and build its connection string.
Shard shard = this.ShardMap.CreateShard(new ShardLocation(server, database));
var connStrBldr = new SqlConnectionStringBuilder(connStr) { DataSource = server, InitialCatalog = database };
// Go into a DbContext to trigger migrations and schema deployment for the new shard.
// This requires an un-opened connection.
using (var db = new ElasticScaleContext<int>(connStrBldr.ConnectionString))
{
// Run a query to engage EF migrations
(from b in db.Blogs
select b).Count();
}
// Register the mapping of the tenant to the shard in the shard map.
// After this step, data-dependent routing on the shard map can be used
this.ShardMap.CreatePointMapping(key, shard);
}
This sample shows the method RegisterNewShard that registers the shard in the shard map, deploys the
schema through EF migrations, and stores a mapping of a sharding key to the shard. It relies on a constructor of
the DbContext subclass (ElasticScaleContext in the sample) that takes a SQL connection string as input. The
code of this constructor is straight-forward, as the following example shows:
// C'tor to deploy schema and migrations to a new shard
protected internal ElasticScaleContext(string connectionString)
: base(SetInitializerForConnection(connectionString))
{
}
// Only static methods are allowed in calls into base class c'tors
private static string SetInitializerForConnection(string connectionString)
{
// You want existence checks so that the schema can get deployed
Database.SetInitializer<ElasticScaleContext<T>>(
new CreateDatabaseIfNotExists<ElasticScaleContext<T>>());
return connectionString;
}
One might have used the version of the constructor inherited from the base class. But the code needs to ensure
that the default initializer for EF is used when connecting. Hence the short detour into the static method before
calling into the base class constructor with the connection string. Note that the registration of shards should run
in a different app domain or process to ensure that the initializer settings for EF do not conflict.
Limitations
The approaches outlined in this document entail a couple of limitations:
EF applications that use LocalDb first need to migrate to a regular SQL Server database before using the elastic
database client library. Scaling out an application through sharding with Elastic Scale is not possible with
LocalDb. Note that development can still use LocalDb.
Any changes to the application that imply database schema changes need to go through EF migrations on all
shards. The sample code for this document does not demonstrate how to do this. Consider using Update-
Database with a ConnectionString parameter to iterate over all shards; or extract the T-SQL script for the
pending migration using Update-Database with the -Script option and apply the T-SQL script to your shards.
Given a request, it is assumed that all of its database processing is contained within a single shard as
identified by the sharding key provided by the request. However, this assumption does not always hold true.
For example, when it is not possible to make a sharding key available. To address this, the client library
provides the MultiShardQuery class that implements a connection abstraction for querying over several
shards. Learning to use MultiShardQuery in combination with EF is beyond the scope of this document.
Conclusion
Through the steps outlined in this document, EF applications can use the elastic database client library's
capability for data-dependent routing by refactoring constructors of the DbContext subclasses used in the EF
application. This limits the changes required to those places where DbContext classes already exist. In addition,
EF applications can continue to benefit from automatic schema deployment by combining the steps that invoke
the necessary EF migrations with the registration of new shards and mappings in the shard map.
Additional resources
Not using elastic database tools yet? Check out our Getting Started Guide. For questions, contact us on the
Microsoft Q&A question page for SQL Database and for feature requests, add new ideas or vote for existing
ideas in the SQL Database feedback forum.
Using the elastic database client library with Dapper
9/13/2022 • 8 minutes to read • Edit Online
Dapper overview
Dapper is an object-relational mapper. It maps .NET objects from your application to a relational database (and
vice versa). The first part of the sample code illustrates how you can integrate the elastic database client library
with Dapper-based applications. The second part of the sample code illustrates how to integrate when using
both Dapper and DapperExtensions.
The mapper functionality in Dapper provides extension methods on database connections that simplify
submitting T-SQL statements for execution or querying the database. For instance, Dapper makes it easy to map
between your .NET objects and the parameters of SQL statements for Execute calls, or to consume the results of
your SQL queries into .NET objects using Query calls from Dapper.
When using DapperExtensions, you no longer need to provide the SQL statements. Extension methods such as
GetList or Insert over the database connection create the SQL statements behind the scenes.
Another benefit of Dapper and also DapperExtensions is that the application controls the creation of the
database connection. This helps interact with the elastic database client library which brokers database
connections based on the mapping of shardlets to databases.
To get the Dapper assemblies, see Dapper dot net. For the Dapper extensions, see DapperExtensions.
Technical guidance
Data-dependent routing with Dapper
With Dapper, the application is typically responsible for creating and opening the connections to the underlying
database. Given a type T by the application, Dapper returns query results as .NET collections of type T. Dapper
performs the mapping from the T-SQL result rows to the objects of type T. Similarly, Dapper maps .NET objects
into SQL values or parameters for data manipulation language (DML) statements. Dapper offers this
functionality via extension methods on the regular SqlConnection object from the ADO.NET SQL Client libraries.
The SQL connections returned by the Elastic Scale APIs for data-dependent routing (DDR) are also regular SqlConnection objects. This
allows us to directly use Dapper extensions over the type returned by the client library's DDR API, as it is also a
simple SQL Client connection.
These observations make it straightforward to use connections brokered by the elastic database client library
for Dapper.
This code example (from the accompanying sample) illustrates the approach where the sharding key is provided
by the application to the library to broker the connection to the right shard.
The call to the OpenConnectionForKey API replaces the default creation and opening of a SQL Client connection.
The OpenConnectionForKey call takes the arguments that are required for data-dependent routing:
The shard map to access the data-dependent routing interfaces
The sharding key to identify the shardlet
The credentials (user name and password) to connect to the shard
The shard map object creates a connection to the shard that holds the shardlet for the given sharding key. The
elastic database client APIs also tag the connection to implement its consistency guarantees. Since the call to
OpenConnectionForKey returns a regular SQL Client connection object, the subsequent call to the Execute
extension method from Dapper follows the standard Dapper practice.
Queries work very much the same way - you first open the connection using OpenConnectionForKey from the
client API. Then you use the regular Dapper extension methods to map the results of your SQL query into .NET
objects:
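The following is a minimal sketch of that pattern. The names shardingLayer.ShardMap, connStrBldr, tenantId1, and the Blog class are taken from the surrounding sample, while the exact SQL text is an assumption for illustration; the Query extension method requires the Dapper namespace.

SqlDatabaseUtils.SqlRetryPolicy.ExecuteAction(() =>
{
    // OpenConnectionForKey brokers a connection to the shard that holds tenantId1
    using (SqlConnection sqlconn = shardingLayer.ShardMap.OpenConnectionForKey(
        tenantId1, connStrBldr.ConnectionString, ConnectionOptions.Validate))
    {
        // Regular Dapper extension method: map the result rows to Blog objects
        var result = sqlconn.Query<Blog>(@"
            SELECT *
            FROM Blog
            ORDER BY Name");

        foreach (var blog in result)
        {
            Console.WriteLine("   " + blog.Name);
        }
    }
});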
Note that the using block with the DDR connection scopes all database operations within the block to the one
shard where tenantId1 is kept. The query only returns blogs stored on the current shard, but not the ones stored
on any other shards.
SqlDatabaseUtils.SqlRetryPolicy.ExecuteAction(() =>
{
    using (SqlConnection sqlconn = shardingLayer.ShardMap.OpenConnectionForKey(
        tenantId2, connStrBldr.ConnectionString, ConnectionOptions.Validate))
    {
        var blog = new Blog { Name = name2 };
        sqlconn.Insert(blog);
    }
});
Limitations
The approaches outlined in this document entail a couple of limitations:
The sample code for this document does not demonstrate how to manage schema across shards.
Given a request, we assume that all its database processing is contained within a single shard as identified by
the sharding key provided by the request. However, this assumption does not always hold, for example, when
it is not possible to make a sharding key available. To address this, the elastic database client library includes
the MultiShardQuery class. The class implements a connection abstraction for querying over several shards.
Using MultiShardQuery in combination with Dapper is beyond the scope of this document.
Conclusion
Applications using Dapper and DapperExtensions can easily benefit from elastic database tools for Azure SQL
Database. Through the steps outlined in this document, those applications can use the tool's capability for data-
dependent routing by changing the creation and opening of new SqlConnection objects to use the
OpenConnectionForKey call of the elastic database client library. This limits the application changes required to
those places where new connections are created and opened.
Additional resources
Not using elastic database tools yet? Check out our Getting Started Guide. For questions, contact us on the
Microsoft Q&A question page for SQL Database and for feature requests, add new ideas or vote for existing
ideas in the SQL Database feedback forum.
Get started with cross-database queries (vertical
partitioning) (preview)
9/13/2022 • 3 minutes to read • Edit Online
Prerequisites
ALTER ANY EXTERNAL DATA SOURCE permission is required. This permission is included with the ALTER
DATABASE permission. ALTER ANY EXTERNAL DATA SOURCE permissions are needed to refer to the underlying
data source.
Now, execute the following query on the Customers database to create the CustomerInformation table and
load the sample data.
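A minimal sketch of that table and its sample data follows; the column names and values shown here are assumptions for illustration.

CREATE TABLE [dbo].[CustomerInformation] (
    [CustomerID]   int NOT NULL PRIMARY KEY,
    [CustomerName] varchar(50) NULL,
    [Company]      varchar(50) NULL
);

INSERT INTO [dbo].[CustomerInformation] ([CustomerID], [CustomerName], [Company])
VALUES (1, 'Jack', 'ABC'), (2, 'Steve', 'XYZ'), (3, 'Lylla', 'MNO');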
The "master_key_password" is a strong password of your choosing used to encrypt the connection
credentials. The "username" and "password" should be the username and password used to log in into
the Customers database (create a new user in Customers database if one does not already exists).
Authentication using Azure Active Directory with elastic queries is not currently supported.
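For reference, a sketch of the statements these placeholders belong to, executed on the Orders database, is shown below; the credential name ElasticDBQueryCred is an assumption.

CREATE MASTER KEY ENCRYPTION BY PASSWORD = '<master_key_password>';

CREATE DATABASE SCOPED CREDENTIAL ElasticDBQueryCred
WITH IDENTITY = '<username>',
     SECRET = '<password>';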
External data sources
To create an external data source, execute the following command on the Orders database:
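A sketch of that command follows, assuming the credential from the previous step and a hypothetical data source name; substitute your own server name.

CREATE EXTERNAL DATA SOURCE MyElasticDBQueryDataSrc WITH
    (TYPE = RDBMS,
     LOCATION = '<server_name>.database.windows.net',
     DATABASE_NAME = 'Customers',
     CREDENTIAL = ElasticDBQueryCred
    );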
External tables
Create an external table on the Orders database, which matches the definition of the CustomerInformation table:
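A sketch of that definition, using the assumed columns from the CustomerInformation table above and the data source from the previous step:

CREATE EXTERNAL TABLE [dbo].[CustomerInformation] (
    [CustomerID]   int NOT NULL,
    [CustomerName] varchar(50) NULL,
    [Company]      varchar(50) NULL
)
WITH
    (DATA_SOURCE = MyElasticDBQueryDataSrc);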
Cost
Currently, the elastic database query feature is included in the cost of your Azure SQL Database.
For pricing information, see SQL Database Pricing.
Next steps
For an overview of elastic query, see Elastic query overview.
For syntax and sample queries for vertically partitioned data, see Querying vertically partitioned data.
For a horizontal partitioning (sharding) tutorial, see Getting started with elastic query for horizontal
partitioning (sharding).
For syntax and sample queries for horizontally partitioned data, see Querying horizontally partitioned data.
See sp_execute_remote for a stored procedure that executes a Transact-SQL statement on a single remote
Azure SQL Database or a set of databases serving as shards in a horizontal partitioning scheme.
Reporting across scaled-out cloud databases
(preview)
9/13/2022 • 7 minutes to read • Edit Online
Sharded databases distribute rows across a scaled out data tier. The schema is identical on all participating
databases, also known as horizontal partitioning. Using an elastic query, you can create reports that span all
databases in a sharded database.
For a quickstart, see Reporting across scaled-out cloud databases.
For non-sharded databases, see Query across cloud databases with different schemas.
Prerequisites
Create a shard map using the elastic database client library. See Shard map management. Or use the sample
app in Get started with elastic database tools.
Alternatively, see Migrate existing databases to scaled-out databases.
The user must possess ALTER ANY EXTERNAL DATA SOURCE permission. This permission is included with
the ALTER DATABASE permission.
ALTER ANY EXTERNAL DATA SOURCE permissions are needed to refer to the underlying data source.
Overview
These statements create the metadata representation of your sharded data tier in the elastic query database.
1. CREATE MASTER KEY
2. CREATE DATABASE SCOPED CREDENTIAL
3. CREATE EXTERNAL DATA SOURCE
4. CREATE EXTERNAL TABLE
NOTE
Make sure that the "<username>" does not include any "@servername" suffix.
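For steps 1 and 2, a minimal sketch might look like the following; the credential name ElasticCredential and the placeholder values are assumptions you would replace with your own.

CREATE MASTER KEY ENCRYPTION BY PASSWORD = '<master_key_password>';

CREATE DATABASE SCOPED CREDENTIAL ElasticCredential
WITH IDENTITY = '<username>',
     SECRET = '<password>';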
<External_Data_Source> ::=
    CREATE EXTERNAL DATA SOURCE <data_source_name> WITH
        (TYPE = SHARD_MAP_MANAGER,
         LOCATION = '<fully_qualified_server_name>',
         DATABASE_NAME = '<shardmap_database_name>',
         CREDENTIAL = <credential_name>,
         SHARD_MAP_NAME = '<shardmapname>'
        ) [;]
Example
The external data source references your shard map. An elastic query then uses the external data source and the
underlying shard map to enumerate the databases that participate in the data tier. The same credentials are used
to read the shard map and to access the data on the shards during the processing of an elastic query.
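For illustration, a sketch of such a data source definition follows; the name MyExtSrc matches the external table example below, while the server, shard map database, and shard map names are placeholders, and the credential name refers to the sketch above.

CREATE EXTERNAL DATA SOURCE MyExtSrc WITH
    (TYPE = SHARD_MAP_MANAGER,
     LOCATION = '<fully_qualified_server_name>',
     DATABASE_NAME = '<shardmap_database_name>',
     CREDENTIAL = ElasticCredential,
     SHARD_MAP_NAME = '<shardmapname>'
    );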
<sharded_external_table_options> ::=
    DATA_SOURCE = <External_Data_Source>,
    [ SCHEMA_NAME = N'nonescaped_schema_name', ]
    [ OBJECT_NAME = N'nonescaped_object_name', ]
    DISTRIBUTION = SHARDED(<sharding_column_name>) | REPLICATED | ROUND_ROBIN
Example
WITH
(
DATA_SOURCE = MyExtSrc,
SCHEMA_NAME = 'orders',
OBJECT_NAME = 'order_details',
DISTRIBUTION=SHARDED(ol_w_id)
);
Remarks
The DATA_SOURCE clause defines the external data source (a shard map) that is used for the external table.
The SCHEMA_NAME and OBJECT_NAME clauses map the external table definition to a table in a different
schema. If omitted, the schema of the remote object is assumed to be “dbo” and its name is assumed to be
identical to the external table name being defined. This is useful if the name of your remote table is already
taken in the database where you want to create the external table. For example, you want to define an external
table to get an aggregate view of catalog views or DMVs on your scaled out data tier. Since catalog views and
DMVs already exist locally, you cannot use their names for the external table definition. Instead, use a different
name and use the catalog view’s or the DMV’s name in the SCHEMA_NAME and/or OBJECT_NAME clauses. (See
the example below.)
The DISTRIBUTION clause specifies the data distribution used for this table. The query processor utilizes the
information provided in the DISTRIBUTION clause to build the most efficient query plans.
1. SHARDED means data is horizontally partitioned across the databases. The partitioning key for the data
distribution is the <sharding_column_name> parameter.
2. REPLICATED means that identical copies of the table are present on each database. It is your responsibility
to ensure that the replicas are identical across the databases.
3. ROUND_ROBIN means that the table is horizontally partitioned using an application-dependent
distribution method.
Data tier reference : The external table DDL refers to an external data source. The external data source specifies
a shard map that provides the external table with the information necessary to locate all the databases in your
data tier.
Security considerations
Users with access to the external table automatically gain access to the underlying remote tables under the
credential given in the external data source definition. Avoid undesired elevation of privileges through the
credential of the external data source. Use GRANT or REVOKE for an external table as though it were a regular
table.
Once you have defined your external data source and your external tables, you can now use full T-SQL over your
external tables.
select
w_id as warehouse,
o_c_id as customer,
count(*) as cnt_orderline,
max(ol_quantity) as max_quantity,
avg(ol_amount) as avg_amount,
min(ol_delivery_d) as min_deliv_date
from warehouse
join orders
on w_id = o_w_id
join order_line
on o_id = ol_o_id and o_w_id = ol_w_id
where w_id > 100 and w_id < 200
group by w_id, o_c_id
EXEC sp_execute_remote
N'MyExtSrc',
N'select count(w_id) as foo from warehouse'
Best practices
Ensure that the elastic query endpoint database has been given access to the shardmap database and all
shards through the SQL Database firewalls.
Validate or enforce the data distribution defined by the external table. If your actual data distribution is
different from the distribution specified in your table definition, your queries may yield unexpected results.
Elastic query currently does not perform shard elimination when predicates over the sharding key would
allow it to safely exclude certain shards from processing.
Elastic query works best for queries where most of the computation can be done on the shards. You typically
get the best query performance with selective filter predicates that can be evaluated on the shards or joins
over the partitioning keys that can be performed in a partition-aligned way on all shards. Other query
patterns may need to load large amounts of data from the shards to the head node and may perform poorly.
Next steps
For an overview of elastic query, see Elastic query overview.
For a vertical partitioning tutorial, see Getting started with cross-database query (vertical partitioning).
For syntax and sample queries for vertically partitioned data, see Querying vertically partitioned data.
For a horizontal partitioning (sharding) tutorial, see Getting started with elastic query for horizontal
partitioning (sharding).
See sp_execute_remote for a stored procedure that executes a Transact-SQL statement on a single remote
Azure SQL Database or a set of databases serving as shards in a horizontal partitioning scheme.
Query across cloud databases with different
schemas (preview)
9/13/2022 • 5 minutes to read • Edit Online
Vertically partitioned databases use different sets of tables on different databases. That means that the schema
is different on different databases. For instance, all tables for inventory are on one database while all
accounting-related tables are on a second database.
Prerequisites
The user must possess ALTER ANY EXTERNAL DATA SOURCE permission. This permission is included with
the ALTER DATABASE permission.
ALTER ANY EXTERNAL DATA SOURCE permissions are needed to refer to the underlying data source.
Overview
NOTE
Unlike with horizontal partitioning, these DDL statements do not depend on defining a data tier with a shard map
through the elastic database client library.
NOTE
Ensure that the <username> does not include any "@servername" suffix.
<External_Data_Source> ::=
    CREATE EXTERNAL DATA SOURCE <data_source_name> WITH
        (TYPE = RDBMS,
         LOCATION = '<fully_qualified_server_name>',
         DATABASE_NAME = '<remote_database_name>',
         CREDENTIAL = <credential_name>
        ) [;]
IMPORTANT
The TYPE parameter must be set to RDBMS.
Example
The following example illustrates the use of the CREATE statement for external data sources.
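A sketch of such a definition, including the master key and credential it depends on, might look like the following; the credential and data source names and the remote Customers database are assumptions for illustration.

CREATE MASTER KEY ENCRYPTION BY PASSWORD = '<master_key_password>';

CREATE DATABASE SCOPED CREDENTIAL RemoteDbCred
WITH IDENTITY = '<username>',
     SECRET = '<password>';

CREATE EXTERNAL DATA SOURCE RemoteCustomerDbSrc WITH
    (TYPE = RDBMS,
     LOCATION = '<server_name>.database.windows.net',
     DATABASE_NAME = 'Customers',
     CREDENTIAL = RemoteDbCred
    );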
External Tables
Syntax:
CREATE EXTERNAL TABLE [ database_name . [ schema_name ] . | schema_name . ] table_name
    ( { <column_definition> } [ ,...n ] )
    { WITH ( <rdbms_external_table_options> ) }
[;]
<rdbms_external_table_options> ::=
DATA_SOURCE = <External_Data_Source>,
[ SCHEMA_NAME = N'nonescaped_schema_name',]
[ OBJECT_NAME = N'nonescaped_object_name',]
Example
The following example shows how to retrieve the list of external tables from the current database:
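A minimal sketch of that query, using the sys.external_tables catalog view:

SELECT * FROM sys.external_tables;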
Remarks
Elastic query extends the existing external table syntax to define external tables that use external data sources of
type RDBMS. An external table definition for vertical partitioning covers the following aspects:
Schema : The external table DDL defines a schema that your queries can use. The schema provided in your
external table definition needs to match the schema of the tables in the remote database where the actual
data is stored.
Remote database reference : The external table DDL refers to an external data source. The external data
source specifies the server name and database name of the remote database where the actual table data is
stored.
Using an external data source as outlined in the previous section, external tables are created with the syntax
shown earlier.
The DATA_SOURCE clause defines the external data source (i.e. the remote database in vertical partitioning) that
is used for the external table.
The SCHEMA_NAME and OBJECT_NAME clauses allow mapping the external table definition to a table in a
different schema on the remote database, or to a table with a different name, respectively. This mapping is
useful if you want to define an external table to a catalog view or DMV on your remote database - or any other
situation where the remote table name is already taken locally.
The following DDL statement drops an existing external table definition from the local catalog. It does not impact
the remote database.
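A sketch of that statement, using a hypothetical external table name:

DROP EXTERNAL TABLE [dbo].[customer];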
Permissions for CREATE/DROP EXTERNAL TABLE : ALTER ANY EXTERNAL DATA SOURCE permissions are
needed for external table DDL, which is also needed to refer to the underlying data source.
Security considerations
Users with access to the external table automatically gain access to the underlying remote tables under the
credential given in the external data source definition. Carefully manage access to the external table, in order to
avoid undesired elevation of privileges through the credential of the external data source. Regular SQL
permissions can be used to GRANT or REVOKE access to an external table just as though it were a regular table.
SELECT
c_id as customer,
c_lastname as customer_name,
count(*) as cnt_orderline,
max(ol_quantity) as max_quantity,
avg(ol_amount) as avg_amount,
min(ol_delivery_d) as min_deliv_date
FROM customer
JOIN orders
ON c_id = o_c_id
JOIN order_line
ON o_id = ol_o_id and o_c_id = ol_c_id
WHERE c_id = 100
EXEC sp_execute_remote
N'MyExtSrc',
N'select count(w_id) as foo from warehouse'
Next steps
For an overview of elastic query, see Elastic query overview.
For limitations of elastic query, see Preview limitations.
For a vertical partitioning tutorial, see Getting started with cross-database query (vertical partitioning).
For a horizontal partitioning (sharding) tutorial, see Getting started with elastic query for horizontal
partitioning (sharding).
For syntax and sample queries for horizontally partitioned data, see Querying horizontally partitioned data.
See sp_execute_remote for a stored procedure that executes a Transact-SQL statement on a single remote
Azure SQL Database or a set of databases serving as shards in a horizontal partitioning scheme.
Get the required values for authenticating an
application to access Azure SQL Database from
code
9/13/2022 • 2 minutes to read • Edit Online
IMPORTANT
The PowerShell Azure Resource Manager (RM) module is still supported by SQL Database, but all future development is
for the Az.Sql module. The AzureRM module will continue to receive bug fixes until at least December 2020. The
arguments for the commands in the Az module and in the AzureRm modules are substantially identical. For more about
their compatibility, see Introducing the new Azure PowerShell Az module.
# sign in to Azure
Connect-AzAccount
# for multiple subscriptions, uncomment and set to the subscription you want to work with
#$subscriptionId = "{xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx}"
#Set-AzContext -SubscriptionId $subscriptionId
$appName = "{app-name}" # display name for your app, must be unique in your directory
$uri = "http://{app-name}" # does not need to be a real uri
$secret = "{app-password}"
# if you still get a PrincipalNotFound error, then rerun the following until successful.
$roleassignment = New-AzRoleAssignment -RoleDefinitionName Contributor -ServicePrincipalName $azureAdApplication.ApplicationId.Guid
See also
Create a database in Azure SQL Database with C#
Connect to Azure SQL Database by using Azure Active Directory Authentication
Designing globally available services using Azure
SQL Database
9/13/2022 • 11 minutes to read • Edit Online
NOTE
If you are using Premium or Business Critical databases and elastic pools, you can make them resilient to regional outages
by converting them to zone redundant deployment configuration. See Zone-redundant databases.
NOTE
Azure Traffic Manager is used throughout this article for illustration purposes only. You can use any load-balancing
solution that supports priority routing method.
NOTE
All transactions committed after the failover are lost during the reconnection. After the failover is completed, the
application in region B is able to reconnect and restart processing the user requests. Both the web application and the
primary database are now in region B and remain co-located.
If an outage happens in region B, the replication process between the primary and the secondary database gets
suspended but the link between the two remains intact (1). Traffic Manager detects that connectivity to Region B
is broken and marks the endpoint web app 2 as Degraded (2). The application's performance is not impacted in
this case, but the database becomes exposed and therefore at higher risk of data loss in case region A fails in
succession.
NOTE
For disaster recovery, we recommend the configuration with application deployment limited to two regions. This is
because most of the Azure geographies have only two regions. This configuration does not protect your application from
a simultaneous catastrophic failure of both regions. In an unlikely event of such a failure, you can recover your databases
in a third region using geo-restore operation.
Once the outage is mitigated, the secondary database automatically resynchronizes with the primary. During
synchronization, performance of the primary can be impacted. The specific impact depends on the amount of
data the new primary acquired since the failover.
NOTE
After the outage is mitigated, Traffic Manager will start routing the connections to the application in Region A as a higher
priority end-point. If you intend to keep the primary in Region B for a while, you should change the priority table in the
Traffic Manager profile accordingly.
NOTE
If the outage in the primary region is mitigated within the grace period, Traffic Manager detects the restoration of
connectivity in the primary region and switches user traffic back to the application instance in region A. That application
instance resumes and operates in read-write mode using the primary database in region A as illustrated by the previous
diagram.
If an outage happens in region B, Traffic Manager detects the failure of the end point web-app-2 in region B and
marks it degraded (1). In the meantime, the failover group switches the read-only listener to region A (2). This
outage does not impact the end-user experience but the primary database is exposed during the outage. The
following diagram illustrates a failure in the secondary region:
Once the outage is mitigated, the secondary database is immediately synchronized with the primary and the
read-only listener is switched back to the secondary database in region B. During synchronization performance
of the primary could be slightly impacted depending on the amount of data that needs to be synchronized.
This design pattern has several advantages :
It avoids data loss during the temporary outages.
Downtime depends only on how quickly Traffic Manager detects the connectivity failure, which is
configurable.
The tradeoff is that the application must be able to operate in read-only mode.
NOTE
The failover group configuration defines which region is used for failover. Because the new primary is in a different
geography, the failover results in longer latency for both OLTP and read-only workloads until the impacted region is back
online.
At the end of the work day in US West, for example at 4 PM local time, the active databases should be switched
to the next region, East Asia (Japan), where it is 8 AM. Then, at 4 PM in East Asia, the primary should switch to
Europe (UK) where it is 8 AM. This task can be fully automated by using Azure Logic Apps. The task involves the
following steps:
Switch primary server in the failover group to East Asia using friendly failover (1).
Remove the failover group between US West and East Asia.
Create a new failover group with the same name but between East Asia and Europe (2).
Add the primary in East Asia and secondary in Europe to this failover group (3).
The following diagram illustrates the new configuration after the planned failover:
If an outage happens in East Asia, for example, the automatic database failover is initiated by the failover group,
which effectively results in moving the application to the next region ahead of schedule (1). In that case, US West
is the only remaining secondary region until East Asia is back online. The remaining two regions serve the
customers in all three geographies by switching roles. Azure Logic Apps has to be adjusted accordingly. Because
the remaining regions get additional user traffic from East Asia, the application's performance is impacted not
only by additional latency but also by an increased number of end-user connections. Once the outage is
mitigated, the secondary database there is immediately synchronized with the current primary. The following
diagram illustrates an outage in East Asia:
NOTE
You can reduce the time when the end user’s experience in East Asia is degraded by the long latency. To do that you
should proactively deploy an application copy and create a secondary database(s) in a nearby region (e.g., the Azure
Korea Central data center) as a replacement of the offline application instance in Japan. When the latter is back online you
can decide whether to continue using Korea Central or to remove the copy of the application there and switch back to
using Japan.
Active-passive deployment for disaster recovery with co-located database access: RPO = Read-write access < 5 sec; RTO = Failure detection time + DNS TTL
Active-active deployment for application load balancing: RPO = Read-write access < 5 sec; RTO = Failure detection time + DNS TTL
Active-passive deployment for data preservation: RPO = Read-only access < 5 sec; RTO = Read-only access = 0
Next steps
For a business continuity overview and scenarios, see Business continuity overview
To learn about active geo-replication, see Active geo-replication.
To learn about auto-failover groups, see Auto-failover groups.
For information about active geo-replication with elastic pools, see Elastic pool disaster recovery strategies.
Disaster recovery strategies for applications using
Azure SQL Database elastic pools
9/13/2022 • 13 minutes to read • Edit Online
NOTE
If you are using Premium or Business Critical databases and elastic pools, you can make them resilient to regional outages
by converting them to zone redundant deployment configuration. See Zone-redundant databases.
If the outage was temporary, it is possible that the primary region is recovered by Azure before all the database
restores are complete in the DR region. In this case, orchestrate moving the application back to the primary
region. The process takes the steps illustrated on the next diagram.
Cancel all outstanding geo-restore requests.
Fail over the management databases to the primary region (5). After the region’s recovery, the old primaries
have automatically become secondaries. Now they switch roles again.
Change the application's connection string to point back to the primary region. Now all new accounts and
tenant databases are created in the primary region. Some existing customers see their data temporarily
unavailable.
Set all databases in the DR pool to read-only to ensure they cannot be modified in the DR region (6).
For each database in the DR pool that has changed since the recovery, rename or delete the corresponding
databases in the primary pool (7).
Copy the updated databases from the DR pool to the primary pool (8).
Delete the DR pool (9).
At this point your application is online in the primary region with all tenant databases available in the primary
pool.
Benefit
The key benefit of this strategy is low ongoing cost for data tier redundancy. Azure SQL Database automatically
backs up databases with no application rewrite at no additional cost. The cost is incurred only when the elastic
databases are restored.
Trade -off
The trade-off is that the complete recovery of all tenant databases takes significant time. The length of time
depends on the total number of restores you initiate in the DR region and overall size of the tenant databases.
Even if you prioritize some tenants' restores over others, you are competing with all the other restores that are
initiated in the same region as the service arbitrates and throttles to minimize the overall impact on the existing
customers' databases. In addition, the recovery of the tenant databases cannot start until the new elastic pool in
the DR region is created.
As in the first scenario, the management databases are quite active, so you use a single geo-replicated database
for them (1). This ensures predictable performance for new customer subscriptions, profile updates, and other
management operations. The region in which the primaries of the management databases reside is the primary
region and the region in which the secondaries of the management databases reside is the DR region.
The paying customers’ tenant databases have active databases in the “paid” pool provisioned in the primary
region. Provision a secondary pool with the same name in the DR region. Each tenant is geo-replicated to the
secondary pool (2). This enables quick recovery of all tenant databases using failover.
If an outage occurs in the primary region, the recovery steps to bring your application online are illustrated in
the next diagram:
Immediately fail over the management databases to the DR region (3).
Change the application’s connection string to point to the DR region. Now all new accounts and tenant
databases are created in the DR region. The existing trial customers see their data temporarily unavailable.
Fail over the paid tenant's databases to the pool in the DR region to immediately restore their availability (4).
Since the failover is a quick metadata level change, consider an optimization where the individual failovers
are triggered on demand by the end-user connections.
If your secondary pool eDTU size or vCore value was lower than the primary because the secondary
databases only required the capacity to process the change logs while they were secondaries, immediately
increase the pool capacity now to accommodate the full workload of all tenants (5).
Create the new elastic pool with the same name and the same configuration in the DR region for the trial
customers' databases (6).
Once the trial customers’ pool is created, use geo-restore to restore the individual trial tenant databases into
the new pool (7). Consider triggering the individual restores by the end-user connections or use some other
application-specific priority scheme.
At this point your application is back online in the DR region. All paying customers have access to their data
while the trial customers experience delay when accessing their data.
When the primary region is recovered by Azure after you have restored the application in the DR region you can
continue running the application in that region or you can decide to fail back to the primary region. If the
primary region is recovered before the failover process is completed, consider failing back right away. The
failback takes the steps illustrated in the next diagram:
Benefit
The key benefit of this strategy is that it provides the highest SLA for the paying customers. It also guarantees
that the new trials are unblocked as soon as the trial DR pool is created.
Trade -off
The trade-off is that this setup increases the total cost of the tenant databases by the cost of the secondary DR
pool for paid customers. In addition, if the secondary pool has a different size, the paying customers experience
lower performance after failover until the pool upgrade in the DR region is completed.
As in the previous scenarios, the management databases are quite active so configure them as single geo-
replicated databases (1). This ensures the predictable performance of the new customer subscriptions, profile
updates and other management operations. Region A is the primary region for the management databases and
the region B is used for recovery of the management databases.
The paying customers’ tenant databases are also geo-replicated but with primaries and secondaries split
between region A and region B (2). This way, the tenant primary databases impacted by the outage can fail over
to the other region and become available. The other half of the tenant databases are not impacted at all.
The next diagram illustrates the recovery steps to take if an outage occurs in region A.
NOTE
The failover operation is asynchronous. To minimize the recovery time, it is important that you execute the tenant
databases' failover command in batches of at least 20 databases.
At this point your application is back online in region B. All paying customers have access to their data while the
trial customers experience delay when accessing their data.
When region A is recovered you need to decide if you want to use region B for trial customers or failback to
using the trial customers pool in region A. One criterion could be the percentage of trial tenant databases modified since
the recovery. Regardless of that decision, you need to re-balance the paid tenants between the two pools. The next
diagram illustrates the process when the trial tenant databases fail back to region A.
Next steps
To learn about Azure SQL Database automated backups, see Azure SQL Database automated backups.
For a business continuity overview and scenarios, see Business continuity overview.
To learn about using automated backups for recovery, see restore a database from the service-initiated
backups.
To learn about faster recovery options, see Active geo-replication and Auto-failover groups.
To learn about using automated backups for archiving, see database copy.
Manage rolling upgrades of cloud applications by
using SQL Database active geo-replication
9/13/2022 • 8 minutes to read • Edit Online
NOTE
These preparation steps won't impact the production environment, which can function in full-access mode.
When the preparation steps are complete, the application is ready for the actual upgrade. The next diagram
illustrates the steps involved in the upgrade process:
1. Set the primary database to read-only mode (3). This mode guarantees that the production environment of
the web app (V1) remains read-only during the upgrade, thus preventing data divergence between the V1
and V2 database instances.
2. Disconnect the secondary database by using the planned termination mode (4). This action creates a fully
synchronized, independent copy of the primary database. This database will be upgraded.
3. Turn the secondary database to read-write mode and run the upgrade script (5).
If the upgrade finishes successfully, you're now ready to switch users to the upgraded copy of the application,
which becomes the production environment. Switching involves a few more steps, as illustrated in the next
diagram:
1. Activate a swap operation between production and staging environments of the web app (6). This operation
switches the URLs of the two environments. Now contoso.azurewebsites.net points to the V2 version of the
web site and the database (production environment).
2. If you no longer need the V1 version, which became a staging copy after the swap, you can decommission
the staging environment (7).
If the upgrade process is unsuccessful (for example, due to an error in the upgrade script), consider the staging
environment to be compromised. To roll back the application to the pre-upgrade state, revert the application in
the production environment to full access. The next diagram shows the reversion steps:
1. Set the database copy to read-write mode (8). This action restores the full V1 functionality of the production
copy.
2. Perform the root-cause analysis and decommission the staging environment (9).
At this point, the application is fully functional, and you can repeat the upgrade steps.
NOTE
The rollback doesn't require DNS changes because you did not yet perform a swap operation.
The key advantage of this option is that you can upgrade an application in a single region by following a set of
simple steps. The dollar cost of the upgrade is relatively low.
The main tradeoff is that, if a catastrophic failure occurs during the upgrade, the recovery to the pre-upgrade
state involves redeploying the application in a different region and restoring the database from backup by using
geo-restore. This process results in significant downtime.
NOTE
These preparation steps won't impact the application in the production environment. It will remain fully functional in read-
write mode.
When the preparation steps are complete, the staging environment is ready for the upgrade. The next diagram
illustrates these upgrade steps:
1. Set the primary database in the production environment to read-only mode (10). This mode guarantees that
the production database (V1) won't change during the upgrade, thus preventing the data divergence
between the V1 and V2 database instances.
2. Terminate geo-replication by disconnecting the secondary (11). This action creates an independent but fully
synchronized copy of the production database. This database will be upgraded. The following example uses
Transact-SQL but PowerShell is also available.
-- Disconnect the secondary, terminating geo-replication
ALTER DATABASE [<Prod_DB>]
REMOVE SECONDARY ON SERVER [<Partner-Server>]
If the upgrade finishes successfully, you're now ready to switch users to the V2 version of the application. The
next diagram illustrates the steps involved:
1. Activate a swap operation between production and staging environments of the web app in the primary
region (13) and in the backup region (14). V2 of the application now becomes a production environment,
with a redundant copy in the backup region.
2. If you no longer need the V1 application (15 and 16), you can decommission the staging environment.
If the upgrade process is unsuccessful (for example, due to an error in the upgrade script), consider the staging
environment to be in an inconsistent state. To roll back the application to the pre-upgrade state, revert to using
V1 of the application in the production environment. The required steps are shown on the next diagram:
1. Set the primary database copy in the production environment to read-write mode (17). This action restores
full V1 functionality in the production environment.
2. Perform the root-cause analysis and repair or remove the staging environment (18 and 19).
At this point, the application is fully functional, and you can repeat the upgrade steps.
NOTE
The rollback doesn't require DNS changes because you didn't perform a swap operation.
The key advantage of this option is that you can upgrade both the application and its geo-redundant copy in
parallel without compromising your business continuity during the upgrade.
The main tradeoff is that it requires double redundancy of each application component and therefore incurs
higher dollar cost. It also involves a more complicated workflow.
Summary
The two upgrade methods described in the article differ in complexity and dollar cost, but they both focus on
minimizing how long the user is limited to read-only operations. That time is directly defined by the duration of
the upgrade script. It doesn't depend on the database size, the service tier you chose, the website configuration,
or other factors that you can't easily control. All preparation steps are decoupled from the upgrade steps and
don't impact the production application. The efficiency of the upgrade script is a key factor that determines the
user experience during upgrades. So, the best way to improve that experience is to focus your efforts on making
the upgrade script as efficient as possible.
Next steps
For a business continuity overview and scenarios, see Business continuity overview.
To learn about Azure SQL Database active geo-replication, see Create readable secondary databases using
active geo-replication.
To learn about Azure SQL Database auto-failover groups, see Use auto-failover groups to enable transparent
and coordinated failover of multiple databases.
To learn about staging environments in Azure App Service, see Set up staging environments in Azure App
Service.
To learn about Azure Traffic Manager profiles, see Manage an Azure Traffic Manager profile.
Connect to SQL Database using C and C++
9/13/2022 • 5 minutes to read • Edit Online
At this point, you have configured your Azure SQL Database and are ready to connect from your C++ code.
Alternatively, you could create a DSN file using the wizard that is launched when no command arguments are
provided. We recommend that you try this option as well. You can use this DSN file for automation and
protecting your authentication settings:
Congratulations! You have now successfully connected to Azure SQL using C++ and ODBC on Windows. You
can continue reading to do the same for Linux platform as well.
sudo su
sh -c 'echo "deb [arch=amd64] https://apt-mo.trafficmanager.net/repos/mssql-ubuntu-test/ xenial main" >
/etc/apt/sources.list.d/mssqlpreview.list'
sudo apt-key adv --keyserver apt-mo.trafficmanager.net --recv-keys 417A0893
apt-get update
apt-get install msodbcsql
apt-get install unixodbc-dev-utf16 # this step is optional but recommended
Launch Visual Studio. Under Tools -> Options -> Cross Platform -> Connection Manager, add a connection to
your Linux box:
After connection over SSH is established, create an Empty project (Linux) template:
You can then add a new C source file and replace it with this content. Using the ODBC APIs SQLAllocHandle,
SQLSetConnectAttr, and SQLDriverConnect, you should be able to initialize and establish a connection to your
database. Like with the Windows ODBC sample, you need to replace the SQLDriverConnect call with the details
from your database connection string parameters copied from the Azure portal previously.
retcode = SQLDriverConnect(
    hdbc, NULL,
    "Driver=ODBC Driver 13 for SQL Server;"
    "Server=<yourserver>;Uid=<yourusername>;Pwd=<yourpassword>;"
    "database=<yourdatabase>",
    SQL_NTS, outstr, sizeof(outstr), &outstrlen, SQL_DRIVER_NOPROMPT);
To launch your application, bring up the Linux Console from the Debug menu:
If your connection was successful, you should now see the current database name printed in the Linux Console:
Congratulations! You have successfully completed the tutorial and can now connect to your Azure SQL Database
from C++ on Windows and Linux platforms.
Next steps
Review the SQL Database Development Overview
More information on the ODBC API Reference
Additional resources
Design Patterns for Multi-tenant SaaS Applications with Azure SQL Database
Explore all the capabilities of SQL Database
Connect Excel to a database in Azure SQL
Database or Azure SQL Managed Instance, and
create a report
9/13/2022 • 4 minutes to read • Edit Online
4. In the SQL Server database dialog box, select Database on the left side, and then enter your User
Name and Password for the server you want to connect to. Select Connect to open the Navigator.
TIP
Depending on your network environment, you may not be able to connect or you may lose the connection if the
server doesn't allow traffic from your client IP address. Go to the Azure portal, click SQL servers, click your server,
click firewall under settings and add your client IP address. See How to configure firewall settings for details.
5. In the Navigator , select the database you want to work with from the list, select the tables or views you
want to work with (we chose vGetAllCategories ), and then select Load to move the data from your
database to your Excel spreadsheet.
2. In the Data Connection Wizard , type in your server name and your SQL Database credentials. Select
Next .
a. Select the database that contains your data from the drop-down.
b. Select the table or view you're interested in. We chose vGetAllCategories.
c. Select Next .
3. Select the location of your file, the File Name , and the Friendly Name in the next screen of the Data
Connection Wizard. You can also choose to save the password in the file, though this can potentially
expose your data to unwanted access. Select Finish when ready.
4. Select how you want to import your data. We chose to do a PivotTable. You can also modify the
properties of the connection by selecting Properties. Select OK when ready. If you did not choose to save
the password with the file, then you will be prompted to enter your credentials.
5. Verify that your new connection has been saved by expanding the Data tab, and selecting Existing
Connections .
Next steps
Learn how to Connect and query with SQL Server Management Studio for advanced querying and analysis.
Learn about the benefits of elastic pools.
Learn how to create a web application that connects to Azure SQL Database on the back-end.
Ports beyond 1433 for ADO.NET 4.5
9/13/2022 • 2 minutes to read • Edit Online
IMPORTANT
For information about connectivity architecture, see Azure SQL Database connectivity architecture.
Outside vs inside
For connections to Azure SQL Database, we must first ask whether your client program runs outside or inside
the Azure cloud boundary. The subsections discuss two common scenarios.
Outside: Client runs on your desktop computer
Port 1433 is the only port that must be open on your desktop computer that hosts your SQL Database client
application.
Inside: Client runs on Azure
When your client runs inside the Azure cloud boundary, it uses what we can call a direct route to interact with
SQL Database. After a connection is established, further interactions between the client and database involve no
Azure SQL Database Gateway.
The sequence is as follows:
1. ADO.NET 4.5 (or later) initiates a brief interaction with the Azure cloud, and receives a dynamically
identified port number.
The dynamically identified port number is in the range of 11000-11999.
2. ADO.NET then connects to SQL Database directly, with no middleware in between.
3. Queries are sent directly to the database, and results are returned directly to the client.
Ensure that the port ranges of 11000-11999 on your Azure client machine are left available for ADO.NET 4.5
client interactions with SQL Database.
In particular, ports in the range must be free of any other outbound blockers.
On your Azure VM, the Windows Firewall with Advanced Security controls the port settings.
You can use the firewall's user interface to add a rule for which you specify the TCP protocol along
with a port range with the syntax like 11000-11999 .
Version clarifications
This section clarifies the monikers that refer to product versions. It also lists some pairings of versions between
products.
ADO.NET
ADO.NET 4.0 supports the TDS 7.3 protocol, but not 7.4.
ADO.NET 4.5 and later supports the TDS 7.4 protocol.
ODBC
Microsoft SQL Server ODBC 11 or above
JDBC
Microsoft SQL Server JDBC 4.2 or above (JDBC 4.0 actually supports TDS 7.4 but does not implement
“redirection”)
Related links
ADO.NET 4.6 was released on July 20, 2015. A blog announcement from the .NET team is available here.
ADO.NET 4.5 was released on August 15, 2012. A blog announcement from the .NET team is available
here.
A blog post about ADO.NET 4.5.1 is available here.
Microsoft ODBC Driver 17 for SQL Server https://aka.ms/downloadmsodbcsql
Connect to Azure SQL Database V12 via Redirection
https://techcommunity.microsoft.com/t5/DataCAT/Connect-to-Azure-SQL-Database-V12-via-
Redirection/ba-p/305362
TDS protocol version list
SQL Database Development Overview
Azure SQL Database firewall
Multi-tenant SaaS database tenancy patterns
9/13/2022 • 12 minutes to read • Edit Online
Each app instance is installed in a separate Azure resource group. The resource group can belong to a
subscription that is owned by either the software vendor or the tenant. In either case, the vendor can manage
the software for the tenant. Each application instance is configured to connect to its corresponding database.
Each tenant database is deployed as a single database. This model provides the greatest database isolation. But
the isolation requires that sufficient resources be allocated to each database to handle its peak loads. Here it
matters that elastic pools cannot be used for databases deployed in different resource groups or to different
subscriptions. This limitation makes this standalone single-tenant app model the most expensive solution from
an overall database cost perspective.
Vendor management
The vendor can access all the databases in all the standalone app instances, even if the app instances are
installed in different tenant subscriptions. The access is achieved via SQL connections. This cross-instance access
can enable the vendor to centralize schema management and cross-database query for reporting or analytics
purposes. If this kind of centralized management is desired, a catalog must be deployed that maps tenant
identifiers to database URIs. Azure SQL Database provides a sharding library that can be used to provide such a
catalog. The sharding library is formally named the Elastic Database Client Library.
Tenant isolation:
    Standalone app: Very high
    Database per tenant: High
    Sharded multi-tenant: Low, except for any single tenant that is alone in a multi-tenant database
Database cost per tenant:
    Standalone app: High; is sized for peaks
    Database per tenant: Low; pools used
    Sharded multi-tenant: Lowest, for small tenants in multi-tenant databases
Next steps
Deploy and explore a multi-tenant Wingtip application that uses the database-per-tenant SaaS model -
Azure SQL Database
Welcome to the Wingtip Tickets sample SaaS Azure SQL Database tenancy app
Video indexed and annotated for multi-tenant SaaS
app using Azure SQL Database
9/13/2022 • 4 minutes to read • Edit Online
3. Agenda, 0:04:09
4. Multi-tenant web app, 0:05:00
Next steps
First tutorial article
Multi-tenant applications with elastic database tools
and row-level security
9/13/2022 • 11 minutes to read • Edit Online
NOTE
The tenant identifier might consist of more than one column. For convenience in this discussion, we informally assume a
single-column TenantId.
// Ask shard map to broker a validated connection for the given key.
SqlConnection conn = null;
try
{
    conn = shardMap.OpenConnectionForKey(
        shardingKey,
        connectionStr,
        ConnectionOptions.Validate);

    // Set TenantId in SESSION_CONTEXT to the sharding key
    // so that row-level security can filter rows for this tenant.
    SqlCommand cmd = conn.CreateCommand();
    cmd.CommandText =
        @"exec sp_set_session_context @key=N'TenantId', @value=@shardingKey";
    cmd.Parameters.AddWithValue("@shardingKey", shardingKey);
    cmd.ExecuteNonQuery();

    return conn;
}
catch (Exception)
{
    if (conn != null)
    {
        conn.Dispose();
    }
    throw;
}
}
// ...
Now the SESSION_CONTEXT is automatically set with the specified TenantId whenever ElasticScaleContext is
invoked:
// Program.cs
SqlDatabaseUtils.SqlRetryPolicy.ExecuteAction(() =>
{
using (var db = new ElasticScaleContext<int>(
sharding.ShardMap, tenantId, connStrBldr.ConnectionString))
{
var query = from b in db.Blogs
orderby b.Name
select b;
ADO.NET SqlClient
For applications using ADO.NET SqlClient, create a wrapper function around method
ShardMap.OpenConnectionForKey. Have the wrapper automatically set TenantId in the SESSION_CONTEXT to
the current TenantId before returning a connection. To ensure that SESSION_CONTEXT is always set, you should
only open connections using this wrapper function.
// Program.cs
// Wrapper function for ShardMap.OpenConnectionForKey() that
// automatically sets SESSION_CONTEXT with the correct
// tenantId before returning a connection.
// As a best practice, you should only open connections using this method
// to ensure that SESSION_CONTEXT is always set before executing a query.
// ...
public static SqlConnection OpenConnectionForTenant(
    ShardMap shardMap, int tenantId, string connectionStr)
{
    SqlConnection conn = null;
    try
    {
        // Ask shard map to broker a validated connection for the given key.
        conn = shardMap.OpenConnectionForKey(
            tenantId, connectionStr, ConnectionOptions.Validate);

        // Set TenantId in SESSION_CONTEXT so that
        // row-level security can filter rows for this tenant.
        SqlCommand cmd = conn.CreateCommand();
        cmd.CommandText =
            @"exec sp_set_session_context @key=N'TenantId', @value=@shardingKey";
        cmd.Parameters.AddWithValue("@shardingKey", tenantId);
        cmd.ExecuteNonQuery();

        return conn;
    }
    catch (Exception)
    {
        if (conn != null)
        {
            conn.Dispose();
        }
        throw;
    }
}
// ...
Console.WriteLine(@"--
All blogs for TenantId {0} (using ADO.NET SqlClient):", tenantId4);
TIP
In a complex project you might need to add the predicate on hundreds of tables, which could be tedious. There is a helper
stored procedure that automatically generates a security policy, and adds a predicate on all tables in a schema. For more
information, see the blog post at Apply Row-Level Security to all tables - helper script (blog).
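For reference, the security policy used by the sample looks roughly like the following sketch. The schema name rls, policy name tenantAccessPolicy, and predicate function fn_tenantAccessPredicate match the names used in the ALTER SECURITY POLICY statement later in this article; the exact predicate body is an assumption based on the SESSION_CONTEXT pattern described above.

CREATE SCHEMA rls; -- separate schema to organize row-level security objects
GO

CREATE FUNCTION rls.fn_tenantAccessPredicate(@TenantId int)
    RETURNS TABLE
    WITH SCHEMABINDING
AS
    RETURN SELECT 1 AS fn_accessResult
    -- Allow only rows whose TenantId matches the value stored in SESSION_CONTEXT
    -- by the connection wrapper shown earlier
    WHERE CAST(SESSION_CONTEXT(N'TenantId') AS int) = @TenantId;
GO

CREATE SECURITY POLICY rls.tenantAccessPolicy
    ADD FILTER PREDICATE rls.fn_tenantAccessPredicate(TenantId) ON dbo.Blogs,
    ADD BLOCK PREDICATE rls.fn_tenantAccessPredicate(TenantId) ON dbo.Blogs;
GO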
Now if you run the sample application again, tenants see only rows that belong to them. In addition, the
application cannot insert rows that belong to tenants other than the one currently connected to the shard
database. Also, the app cannot update the TenantId in any rows it can see. If the app attempts to do either, a
DbUpdateException is raised.
If you add a new table later, ALTER the security policy to add FILTER and BLOCK predicates on the new table.
ALTER SECURITY POLICY rls.tenantAccessPolicy
ADD FILTER PREDICATE rls.fn_tenantAccessPredicate(TenantId) ON dbo.MyNewTable,
ADD BLOCK PREDICATE rls.fn_tenantAccessPredicate(TenantId) ON dbo.MyNewTable;
GO
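The sample also relies on a default constraint that populates TenantId from SESSION_CONTEXT on insert; a sketch of that constraint, with an assumed constraint name, is shown below.

-- Automatically populate TenantId from SESSION_CONTEXT when no value is supplied
ALTER TABLE Blogs
    ADD CONSTRAINT df_TenantId_Blogs
    DEFAULT CAST(SESSION_CONTEXT(N'TenantId') AS int) FOR TenantId;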
Now the application does not need to specify a TenantId when inserting rows:
SqlDatabaseUtils.SqlRetryPolicy.ExecuteAction(() =>
{
using (var db = new ElasticScaleContext<int>(
sharding.ShardMap, tenantId, connStrBldr.ConnectionString))
{
// The default constraint sets TenantId automatically!
var blog = new Blog { Name = name };
db.Blogs.Add(blog);
db.SaveChanges();
}
});
NOTE
If you use default constraints for an Entity Framework project, it is recommended that you NOT include the TenantId
column in your EF data model. This recommendation is because Entity Framework queries automatically supply default
values that override the default constraints created in T-SQL that use SESSION_CONTEXT. To use default constraints in the
sample project, for instance, you should remove TenantId from DataClasses.cs (and run Add-Migration in the Package
Manager Console) and use T-SQL to ensure that the field only exists in the database tables. This way, EF does not
automatically supply incorrect default values when inserting data.
Maintenance
Adding new shards: Execute the T-SQL script to enable RLS on any new shards; otherwise queries on these
shards will not be filtered.
Adding new tables: Add a FILTER and BLOCK predicate to the security policy on all shards whenever a new
table is created. Otherwise, queries on the new table will not be filtered. This addition can be automated by
using a DDL trigger, as described in Apply Row-Level Security automatically to newly created tables (blog).
Summary
Elastic database tools and row-level security can be used together to scale out an application's data tier with
support for both multi-tenant and single-tenant shards. Multi-tenant shards can be used to store data more
efficiently. This efficiency is pronounced where a large number of tenants have only a few rows of data. Single-
tenant shards can support premium tenants which have stricter performance and isolation requirements. For
more information, see Row-Level Security reference.
Additional resources
What is an Azure elastic pool?
Scaling out with Azure SQL Database
Design Patterns for Multi-tenant SaaS Applications with Azure SQL Database
Authentication in multitenant apps, using Azure AD and OpenID Connect
Tailspin Surveys application
Each sample includes the application code, plus management scripts and tutorials that explore a range of design
and management patterns. Each sample deploys in less than five minutes. All three can be deployed side-by-side
so you can compare the differences in design and management.
Next steps
Conceptual descriptions
A more detailed explanation of the application tenancy patterns is available at Multi-tenant SaaS database
tenancy patterns
Tutorials and code
Standalone app per tenant:
Tutorials for standalone app.
Code for standalone app, on GitHub.
Database per tenant:
Tutorials for database per tenant.
Code for database per tenant, on GitHub.
Sharded multi-tenant:
Tutorials for sharded multi-tenant.
Code for sharded multi-tenant, on GitHub.
General guidance for working with Wingtip Tickets
sample SaaS apps
Next steps
Deploy the Wingtip Tickets SaaS Standalone Application
Deploy the Wingtip Tickets SaaS Database per Tenant application
Deploy the Wingtip Tickets SaaS Multi-tenant Database application
Deploy and explore a standalone single-tenant
application that uses Azure SQL Database
It's best to use only lowercase letters, numbers, and hyphens in your resource names.
For Resource group , select Create new, and then provide a lowercase Name for the resource
group. wingtip-sa-<venueName>-<user> is the recommended pattern. For <venueName>,
replace the venue name with no spaces. For <user>, use the User value described below. With this
pattern, resource group names might be wingtip-sa-contosoconcerthall-af1, wingtip-sa-
dogwooddojo-af1, wingtip-sa-fabrikamjazzclub-af1.
Select a Location from the drop-down list.
For User - We recommend a short user value, such as your initials plus a digit: for example, af1.
3. Deploy the application .
Click to agree to the terms and conditions.
Click Purchase .
4. Monitor the status of all three deployments by clicking Notifications (the bell icon to the right of the
search box). Deploying the apps takes around five minutes.
Additional resources
To learn about multi-tenant SaaS applications, see Design patterns for multi-tenant SaaS applications.
Next steps
In this tutorial you learned:
How to deploy the Wingtip Tickets SaaS Standalone Application.
About the servers and databases that make up the app.
How to delete sample resources to stop related billing.
Next, try the Provision and Catalog tutorial in which you'll explore the use of a catalog of tenants that enables a
range of cross-tenant scenarios such as schema management and tenant analytics.
Provision and catalog new tenants using the
application per tenant SaaS pattern
When deploying an application for a tenant, the app and database are provisioned in a new resource group
created for the tenant. Using separate resource groups isolates each tenant's application resources and allows
them to be managed independently. Within each resource group, each application instance is configured to
access its corresponding database directly. This connection model contrasts with other patterns that use a
catalog to broker connections between the app and the database. And as there is no resource sharing, each
tenant database must be provisioned with sufficient resources to handle its peak load. This pattern tends to be
used for SaaS applications with fewer tenants, where there is a strong emphasis on tenant isolation and less
emphasis on resource costs.
Using a tenant catalog with the application per tenant pattern
While each tenant’s app and database are fully isolated, various management and analytics scenarios may
operate across tenants. For example, applying a schema change for a new release of the application requires
changes to the schema of each tenant database. Reporting and analytics scenarios may also require access to all
the tenant databases regardless of where they are deployed.
The tenant catalog holds a mapping between a tenant identifier and a tenant database, allowing an identifier to
be resolved to a server and database name. In the Wingtip SaaS app, the tenant identifier is computed as a hash
of the tenant name, although other schemes could be used. While standalone applications don't need the
catalog to manage connections, the catalog can be used to scope other actions to a set of tenant databases. For
example, Elastic Query can use the catalog to determine the set of databases across which queries are
distributed for cross-tenant reporting.
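For illustration only, the following T-SQL shows one way a deterministic integer key could be derived from a tenant name. It is a hypothetical sketch; the Wingtip provisioning scripts compute the key in PowerShell, and the exact function they use may differ.
-- Hypothetical sketch: derive a deterministic integer tenant key from a tenant name.
DECLARE @TenantName nvarchar(50) = N'contosoconcerthall';
SELECT CONVERT(int, SUBSTRING(HASHBYTES('SHA2_256', LOWER(@TenantName)), 1, 4)) AS TenantKey;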
IMPORTANT
Do not edit the data in the catalog database or the local shard map in the tenant databases directly. Direct updates are
not supported due to the high risk of data corruption. Instead, edit the mapping data by using EDCL APIs only.
Tenant provisioning
Each tenant requires a new Azure resource group, which must be created before resources can be provisioned
within it. Once the resource group exists, an Azure Resource Manager template can be used to deploy the
application components and the database, and then configure the database connection. To initialize the database
schema, the template can import a bacpac file. Alternatively, the database can be created as a copy of a
‘template’ database. The database is then further updated with initial venue data and registered in the catalog.
Tutorial
In this tutorial you learn how to:
Provision a catalog
Register the sample tenant databases that you deployed earlier in the catalog
Provision an additional tenant and register it in the catalog
An Azure Resource Manager template is used to deploy and configure the application, create the tenant
database, and then import a bacpac file to initialize it. The import request may be queued for several minutes
before it is processed.
At the end of this tutorial, you have a set of standalone tenant applications, with each database registered in the
catalog.
Prerequisites
To complete this tutorial, make sure the following prerequisites are completed:
Azure PowerShell is installed. For details, see Getting started with Azure PowerShell
The three sample tenant apps are deployed. To deploy these apps in less than five minutes, see Deploy and
explore the Wingtip Tickets SaaS Standalone Application pattern.
Next steps
In this tutorial you learned:
How to provision a catalog.
How to register the sample tenant databases in the catalog.
How to provision an additional tenant and register it in the catalog.
You can explore how the catalog is used to support various cross-tenant scenarios using the database-per-
tenant version of the Wingtip Tickets SaaS application.
Introduction to a multitenant SaaS app that uses the
database-per-tenant pattern with Azure SQL
Database
Application architecture
The Wingtip SaaS app uses the database-per-tenant model. It uses SQL elastic pools to maximize efficiency. For
provisioning and mapping tenants to their data, a catalog database is used. The core Wingtip SaaS application
uses a pool with three sample tenants, plus the catalog database. The catalog and tenant servers have been
provisioned with DNS aliases. These aliases are used to maintain a reference to the active resources used by the
Wingtip application. These aliases are updated to point to recovery resources in the disaster recovery tutorials.
Completing many of the Wingtip SaaS tutorials results in add-ons to the initial deployment. Add-ons such as
analytic databases and cross-database schema management are introduced.
As you go through the tutorials and work with the app, focus on the SaaS patterns as they relate to the data tier.
In other words, focus on the data tier, and don't overanalyze the app itself. Understanding the implementation of
these SaaS patterns is key to implementing these patterns in your applications. Also consider any necessary
modifications for your specific business requirements.
SQL Database Wingtip SaaS tutorials
After you deploy the app, explore the following tutorials that build on the initial deployment. These tutorials
explore common SaaS patterns that take advantage of built-in features of SQL Database, Azure Synapse
Analytics, and other Azure services. Tutorials include PowerShell scripts with detailed explanations. The
explanations simplify understanding and implementation of the same SaaS management patterns in your
applications.
Guidance and tips for the SQL Database multitenant SaaS app example: Download and run PowerShell scripts to prepare parts of the application.
Deploy and explore the Wingtip SaaS application: Deploy and explore the Wingtip SaaS application with your Azure subscription.
Provision and catalog tenants: Learn how the application connects to tenants by using a catalog database, and how the catalog maps tenants to their data.
Monitor and manage performance: Learn how to use monitoring features of SQL Database and set alerts when performance thresholds are exceeded.
Monitor with Azure Monitor logs: Learn how to use Azure Monitor logs to monitor large amounts of resources across multiple pools.
Restore a single tenant: Learn how to restore a tenant database to a prior point in time. Also learn how to restore to a parallel database, which leaves the existing tenant database online.
Manage tenant database schema: Learn how to update schema and update reference data across all tenant databases.
Run cross-tenant distributed queries: Create an ad hoc analytics database, and run real-time distributed queries across all tenants.
Run analytics on extracted tenant data: Extract tenant data into an analytics database or data warehouse for offline analytics queries.
Next steps
General guidance and tips when you deploy and use the Wingtip Tickets SaaS app example
Deploy the Wingtip SaaS application
Deploy and explore a multitenant SaaS app that
uses the database-per-tenant pattern with Azure
SQL Database
Prerequisites
To complete this tutorial, make sure Azure PowerShell is installed. For more information, see Get started with
Azure PowerShell.
IMPORTANT
Some authentication and server firewalls are intentionally unsecured for demonstration purposes. We recommend
that you create a new resource group. Don't use existing resource groups, servers, or pools. Don't use this
application, scripts, or any deployed resources for production. Delete this resource group when you're finished
with the application to stop related billing.
Resource group : Select Create new , and provide the unique name you chose earlier for the
resource group.
Location : Select a location from the drop-down list.
User : Use the user name value you chose earlier.
3. Deploy the application.
a. Select to agree to the terms and conditions.
b. Select Purchase .
4. To monitor deployment status, select Notifications (the bell icon to the right of the search box).
Deploying the Wingtip Tickets SaaS app takes approximately five minutes.
IMPORTANT
Executable contents (scripts and DLLs) might be blocked by Windows when .zip files are downloaded from an external
source and extracted. Follow the steps to unblock the .zip file before you extract the scripts. Unblocking makes sure the
scripts are allowed to run.
The tenant name is parsed from the URL by the events app.
The tenant name is used to create a key.
The key is used to access the catalog to obtain the location of the tenant's database.
The catalog is implemented by using shard map management.
The Events Hub uses extended metadata in the catalog to construct the list-of-events page URLs for each
tenant.
In a production environment, typically you create a CNAME DNS record to point a company internet domain to
the Traffic Manager DNS name.
NOTE
It may not be immediately obvious what the purpose of the traffic manager is in this tutorial. The goal of this series of tutorials
is to showcase patterns that can handle the scale of a complex production environment. In such a case, for example, you
would have multiple web apps distributed across the globe, co-located with databases, and you would need Traffic Manager
to route traffic between these instances. The geo-restore and geo-replication tutorials also illustrate the use of Traffic
Manager; in those tutorials, it helps switch over to a recovery instance of the SaaS app in the event of a regional outage.
Before you continue with the next section, leave the load generator running in the job-invoking state.
NOTE
Many Wingtip SaaS scripts use $PSScriptRoot to browse folders to call functions in other scripts. This variable is
evaluated only when the full script is executed by pressing F5. Highlighting and running a selection with F8 can
result in errors. To run the scripts, press F5.
Additional resources
For more information, see additional tutorials that build on the Wingtip Tickets SaaS database-per-tenant
application.
To learn about elastic pools, see What is an Azure SQL elastic pool?.
To learn about elastic jobs, see Manage scaled-out cloud databases.
To learn about multitenant SaaS applications, see Design patterns for multitenant SaaS applications.
Next steps
In this tutorial you learned:
How to deploy the Wingtip Tickets SaaS application.
About the servers, pools, and databases that make up the app.
How tenants are mapped to their data with the catalog.
How to provision new tenants.
How to view pool utilization to monitor tenant activity.
How to delete sample resources to stop related billing.
Next, try the Provision and catalog tutorial.
Learn how to provision new tenants and register
them in the catalog
IMPORTANT
The mapping data is accessible in the catalog database, but don't edit it. Edit mapping data by using Elastic Database
Client Library APIs only. Directly manipulating the mapping data risks corrupting the catalog and isn't supported.
Trace the script's execution by using the Debug menu options. Press F10 and F11 to step over or into the called
functions. For more information about debugging PowerShell scripts, see Tips on working with and debugging
PowerShell scripts.
You don't need to explicitly follow this workflow. It explains how to debug the script.
Import the CatalogAndDatabaseManagement.psm1 module. It provides a catalog and tenant-level
abstraction over the Shard Management functions. This module encapsulates much of the catalog pattern
and is worth exploring.
Import the SubscriptionManagement.psm1 module. It contains functions for signing in to Azure
and selecting the Azure subscription you want to work with.
Get configuration details. Step into Get-Configuration by using F11, and see how the app config is
specified. Resource names and other app-specific values are defined here. Don't change these values until
you are familiar with the scripts.
Get the catalog object. Step into Get-Catalog, which composes and returns a catalog object that's used
in the higher-level script. This function uses Shard Management functions that are imported from
AzureShardManagement.psm1 . The catalog object is composed of the following elements:
$catalogServerFullyQualifiedName is constructed by using the standard stem plus your user name:
catalog-<user>.database.windows.net.
$catalogDatabaseName is retrieved from the config: tenantcatalog.
$shardMapManager object is initialized from the catalog database.
$shardMap object is initialized from the tenantcatalog shard map in the catalog database. A catalog
object is composed and returned. It's used in the higher-level script.
Calculate the new tenant key. A hash function is used to create the tenant key from the tenant name.
Check if the tenant key already exists. The catalog is checked to make sure the key is available.
The tenant database is provisioned with New-TenantDatabase. Use F11 to step into how the
database is provisioned by using an Azure Resource Manager template.
The database name is constructed from the tenant name to make it clear which shard belongs to which
tenant. You also can use other database naming conventions. A Resource Manager template creates a
tenant database by copying a template database (baseTenantDB) on the catalog server. As an alternative,
you can create a database and initialize it by importing a bacpac. Or you can execute an initialization
script from a well-known location.
The Resource Manager template is in the …\Learning Modules\Common\ folder:
tenantdatabasecopytemplate.json
The tenant database is further initialized. The venue (tenant) name and the venue type are added.
You also can do other initialization here.
The tenant database is registered in the catalog. It's registered with Add-TenantDatabaseToCatalog
by using the tenant key. Use F11 to step into the details:
The catalog database is added to the shard map (the list of known databases).
The mapping that links the key value to the shard is created.
Additional metadata about the tenant (the venue's name) is added to the Tenants table in the catalog.
The Tenants table isn't part of the Shard Management schema, and it isn't installed by the EDCL. This
table illustrates how the catalog database can be extended to support additional application-specific
data.
After provisioning completes, execution returns to the original Demo-ProvisionAndCatalog script. The Events
page opens for the new tenant in the browser.
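The Tenants table mentioned in the walkthrough above is an ordinary user table in the catalog database. As a rough, hypothetical sketch of how such an application-specific extension might look (the real Wingtip schema differs):
-- Hypothetical sketch of an application-specific extension table in the catalog database.
CREATE TABLE dbo.Tenants
(
    TenantId    int          NOT NULL PRIMARY KEY,   -- same value as the shard map key
    TenantName  nvarchar(50) NOT NULL,
    VenueType   nvarchar(30) NOT NULL,
    LastUpdated datetime2(0) NOT NULL DEFAULT SYSUTCDATETIME()
);

-- After the shard and mapping are registered, the provisioning script adds the extra metadata.
DECLARE @TenantKey int = 123456789;                  -- value computed by the provisioning script
INSERT INTO dbo.Tenants (TenantId, TenantName, VenueType)
VALUES (@TenantKey, N'Red Maple Racing', N'motorsport');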
Provision a batch of tenants
This exercise provisions a batch of 17 tenants. We recommend that you provision this batch of tenants before
starting the other Wingtip Tickets SaaS database-per-tenant tutorials, so that there are more than just a few
databases to work with.
1. In the PowerShell ISE, open ...\Learning Modules\ProvisionAndCatalog\Demo-ProvisionAndCatalog.ps1.
Change the $DemoScenario parameter to 3:
$DemoScenario = 3 , Provision a batch of tenants.
2. To run the script, press F5.
The script deploys a batch of additional tenants. It uses an Azure Resource Manager template that controls the
batch and delegates provisioning of each database to a linked template. Using templates in this way allows
Azure Resource Manager to broker the provisioning process for your script. The templates provision databases
in parallel and handle retries, if needed. The script is idempotent, so if it fails or stops for any reason, run it again.
Verify the batch of tenants that successfully deployed
In the Azure portal, browse to your list of servers and open the tenants1 server. Select SQL databases ,
and verify that the batch of 17 additional databases is now in the list.
Other provisioning patterns
Other provisioning patterns not included in this tutorial:
Pre-provisioning databases : The pre-provisioning pattern exploits the fact that databases in an elastic pool
don't add extra cost. Billing is for the elastic pool, not the databases. Idle databases consume no resources. By
pre-provisioning databases in a pool and allocating them when needed, you can reduce the time to add tenants.
The number of databases pre-provisioned can be adjusted as needed to keep a buffer suitable for the
anticipated provisioning rate.
Auto-provisioning : In the auto-provisioning pattern, a provisioning service provisions servers, pools, and
databases automatically, as needed. If you want, you can include pre-provisioning databases in elastic pools. If
databases are decommissioned and deleted, gaps in elastic pools can be filled by the provisioning service. Such
a service can be simple or complex, such as handling provisioning across multiple geographies and setting up
geo-replication for disaster recovery.
With the auto-provisioning pattern, a client application or script submits a provisioning request to a queue to be
processed by the provisioning service. It then polls the service to determine completion. If pre-provisioning is
used, requests are handled quickly. The service provisions a replacement database in the background.
Next steps
In this tutorial you learned how to:
Provision a single new tenant.
Provision a batch of additional tenants.
Step into the details of provisioning tenants and registering them into the catalog.
Try the Performance monitoring tutorial.
Additional resources
Additional tutorials that build on the Wingtip Tickets SaaS database-per-tenant application
Elastic database client library
Debug scripts in the Windows PowerShell ISE
Monitor and manage performance of Azure SQL
Database in a multi-tenant SaaS app
Get the Wingtip Tickets SaaS Database Per Tenant application scripts
The Wingtip Tickets SaaS Database Per Tenant scripts and application source code are available in the
WingtipTicketsSaaS-DbPerTenant GitHub repo. Check out the general guidance for steps to download and
unblock the Wingtip Tickets SaaS scripts.
The load generator applies a synthetic CPU-only load to every tenant database. The generator starts a job for
each tenant database, which calls a stored procedure periodically that generates the load. The load levels (in
eDTUs), duration, and intervals are varied across all databases, simulating unpredictable tenant activity.
1. In the PowerShell ISE , open …\Learning Modules\Performance Monitoring and Management\Demo-
PerformanceMonitoringAndManagement.ps1. Keep this script open as you'll run several scenarios during
this tutorial.
2. Set $DemoScenario = 2 , Generate normal intensity load.
3. Press F5 to apply a load to all your tenant databases.
Wingtip Tickets SaaS Database Per Tenant is a SaaS app, and the real-world load on a SaaS app is typically
sporadic and unpredictable. To simulate this, the load generator produces a randomized load distributed across
all tenants. Several minutes are needed for the load pattern to emerge, so run the load generator for 3-5
minutes before attempting to monitor the load in the following sections.
IMPORTANT
The load generator is running as a series of jobs in your local PowerShell session. Keep the Demo-
PerformanceMonitoringAndManagement.ps1 tab open! If you close the tab, or suspend your machine, the load generator
stops. The load generator remains in a job-invoking state where it generates load on any new tenants that are
provisioned after the generator is started. Use Ctrl-C to stop invoking new jobs and exit the script. The load generator will
continue to run, but only on existing tenants.
Because there are additional databases in the pool beyond the top five, the pool utilization shows activity that is
not reflected in the top five databases chart. For additional details, click Database Resource Utilization :
Next steps
In this tutorial you learned how to:
Simulate usage on the tenant databases by running a provided load generator
Monitor the tenant databases as they respond to the increase in load
Scale up the Elastic pool in response to the increased database load
Provision a second Elastic pool to load balance the database activity
Next, try the Restore a single tenant tutorial.
Additional resources
Additional tutorials that build upon the Wingtip Tickets SaaS Database Per Tenant application deployment
SQL Elastic pools
Azure automation
Azure Monitor logs - Setting up and using Azure Monitor logs tutorial
Set up and use Azure Monitor logs with a
multitenant Azure SQL Database SaaS app
NOTE
This article was recently updated to use the term Azure Monitor logs instead of Log Analytics. Log data is still stored in a
Log Analytics workspace and is still collected and analyzed by the same Log Analytics service. We are updating the
terminology to better reflect the role of logs in Azure Monitor. See Azure Monitor terminology changes for details.
Install and configure Log Analytics workspace and the Azure SQL
Analytics solution
Azure Monitor is a separate service that must be configured. Azure Monitor logs collects log data, telemetry, and
metrics in a Log Analytics workspace. Just like other resources in Azure, a Log Analytics workspace must be
created. The workspace doesn't need to be created in the same resource group as the applications it monitors.
Doing so often makes the most sense though. For the Wingtip Tickets app, use a single resource group to make
sure the workspace is deleted with the application.
1. In the PowerShell ISE, open ..\WingtipTicketsSaaS-MultiTenantDb-master\Learning Modules\Performance
Monitoring and Management\Log Analytics\Demo-LogAnalytics.ps1.
2. To run the script, press F5.
Now you can open Azure Monitor logs in the Azure portal. It takes a few minutes to collect telemetry in the Log
Analytics workspace and to make it visible. The longer you leave the system gathering diagnostic data, the more
interesting the experience is.
IMPORTANT
It might take a couple of minutes before the solution is active.
7. Change the filter setting to modify the time range. For this tutorial, select Last 1 hour .
8. Select an individual database to explore the query usage and metrics for that database.
A page opens that shows the pools and databases on the server.
11. Select a pool. On the pool page that opens, scroll to the right to see the pool metrics.
12. Back in the Log Analytics workspace, select OMS Portal to open the workspace there.
In the Log Analytics workspace, you can explore the log and metric data further.
Monitoring and alerting in Azure Monitor logs are based on queries over the data in the workspace, unlike the
alerting defined on each resource in the Azure portal. By basing alerts on queries, you can define a single alert
that looks over all databases, rather than defining one per database. Queries are limited only by the data
available in the workspace.
For more information on how to use Azure Monitor logs to query and set alerts, see Work with alert rules in
Azure Monitor logs.
Azure Monitor logs for SQL Database charges based on the data volume in the workspace. In this tutorial, you
created a free workspace, which is limited to 500 MB per day. After that limit is reached, data is no longer added
to the workspace.
Next steps
In this tutorial you learned how to:
Install and configure Azure Monitor logs.
Use Azure Monitor logs to monitor pools and databases.
Try the Tenant analytics tutorial.
Additional resources
Additional tutorials that build on the initial Wingtip Tickets SaaS database-per-tenant application deployment
Azure Monitor logs
Restore a single tenant with a database-per-tenant
SaaS application
Restore into a parallel database: This pattern can be used for tasks such as review, auditing, and compliance
to allow a tenant to inspect their data from an earlier point. The tenant's current database remains online
and unchanged.
To complete this tutorial, make sure the following prerequisites are completed:
The Wingtip SaaS app is deployed. To deploy in less than five minutes, see Deploy and explore the Wingtip
SaaS application.
Azure PowerShell is installed. For details, see Get started with Azure PowerShell.
2. Scroll the list of events, and make a note of the last event in the list.
"Accidentally" delete the last event
1. In the PowerShell ISE, open ...\Learning Modules\Business Continuity and Disaster
Recovery\RestoreTenant\Demo-RestoreTenant.ps1, and set the following value:
$DemoScenario = 1 , Delete last event (with no ticket sales).
2. Press F5 to run the script and delete the last event. The following confirmation message appears:
3. The Contoso events page opens. Scroll down and verify that the event is gone. If the event is still in the
list, select Refresh and verify that it's gone.
Restore a tenant database in parallel with the production database
This exercise restores the Contoso Concert Hall database to a point in time before the event was deleted. This
scenario assumes that you want to review the deleted data in a parallel database.
The Restore-TenantInParallel.ps1 script creates a parallel tenant database named ContosoConcertHall_old, with a
parallel catalog entry. This pattern of restore is best suited for recovering from a minor data loss. You also can
use this pattern if you need to review data for compliance or auditing purposes. It's the recommended approach
when you use active geo-replication.
1. Complete the Simulate a tenant accidentally deleting data section.
2. In the PowerShell ISE, open ...\Learning Modules\Business Continuity and Disaster
Recovery\RestoreTenant\Demo-RestoreTenant.ps1.
3. Set $DemoScenario = 2 , Restore tenant in parallel.
4. To run the script, press F5.
The script restores the tenant database to a point in time before you deleted the event. The database is restored
to a new database named ContosoConcertHall_old. The catalog metadata that exists in this restored database is
deleted, and then the database is added to the catalog by using a key constructed from the
ContosoConcertHall_old name.
The demo script opens the events page for this new tenant database in your browser. Note from the URL
http://events.wingtip-dpt.<user>.trafficmanager.net/contosoconcerthall_old that this page shows data
from the restored database where _old is added to the name.
Scroll the events listed in the browser to confirm that the event deleted in the previous section was restored.
Exposing the restored tenant as an additional tenant, with its own Events app, is unlikely to be how you provide
a tenant access to restored data. It serves to illustrate the restore pattern. Typically, you give read-only access to
the old data and retain the restored database for a defined period. In the sample, you can delete the restored
tenant entry after you're finished by running the Remove restored tenant scenario.
1. Set $DemoScenario = 4 , Remove restored tenant.
2. To run the script, press F5.
3. The ContosoConcertHall_old entry is now deleted from the catalog. Close the events page for this tenant in
your browser.
Next steps
In this tutorial, you learned how to:
Restore a database into a parallel database (side by side).
Restore a database in place.
Try the Manage tenant database schema tutorial.
Additional resources
Additional tutorials that build on the Wingtip SaaS application
Overview of business continuity with Azure SQL Database
Learn about SQL Database backups
Manage schema in a SaaS application using the
database-per-tenant pattern with Azure SQL
Database
Get the Wingtip Tickets SaaS database per tenant application scripts
The application source code and management scripts are available in the WingtipTicketsSaaS-DbPerTenant
GitHub repo. Check out the general guidance for steps to download and unblock the Wingtip Tickets SaaS
scripts.
Next steps
In this tutorial you learned how to:
Create a job agent to run T-SQL jobs across multiple databases
Update reference data in all tenant databases
Create an index on a table in all tenant databases
Next, try the Ad hoc reporting tutorial to explore running distributed queries across tenant databases.
Additional resources
Additional tutorials that build upon the Wingtip Tickets SaaS Database Per Tenant application deployment
Managing scaled-out cloud databases
Cross-tenant reporting using distributed queries
One opportunity with SaaS applications is to use the vast amount of tenant data stored in the cloud to gain
insights into the operation and usage of your application. These insights can guide feature development,
usability improvements, and other investments in your apps and services.
Accessing this data in a single multi-tenant database is easy, but not so easy when distributed at scale across
potentially thousands of databases. One approach is to use Elastic Query, which enables querying across a
distributed set of databases with common schema. These databases can be distributed across different resource
groups and subscriptions, but need to share a common login. Elastic Query uses a single head database in
which external tables are defined that mirror tables or views in the distributed (tenant) databases. Queries
submitted to this head database are compiled to produce a distributed query plan, with portions of the query
pushed down to the tenant databases as needed. Elastic Query uses the shard map in the catalog database to
determine the location of all tenant databases. Setup and query of the head database are straightforward using
standard Transact-SQL, and support querying from tools like Power BI and Excel.
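As a hedged illustration of that head-database setup, the initialization script creates an external data source of type SHARD_MAP_MANAGER that points at the catalog's shard map, plus external tables declared over the distributed schema. The object names, credentials, and columns below are assumptions, not the exact contents of the tutorial's script.
-- Illustrative sketch only; names, credentials, and columns are assumptions.
CREATE MASTER KEY ENCRYPTION BY PASSWORD = '<strong password>';

CREATE DATABASE SCOPED CREDENTIAL TenantDbCredential
    WITH IDENTITY = 'developer', SECRET = '<password>';

-- External data source that resolves tenant databases through the catalog's shard map.
CREATE EXTERNAL DATA SOURCE WingtipTenantDBs
    WITH
    (
        TYPE = SHARD_MAP_MANAGER,
        LOCATION = 'catalog-dpt-<user>.database.windows.net',
        DATABASE_NAME = 'tenantcatalog',
        SHARD_MAP_NAME = 'tenantcatalog',
        CREDENTIAL = TenantDbCredential
    );

-- External table mirroring a view that exists in every tenant database.
CREATE EXTERNAL TABLE dbo.Venues
(
    VenueId   int          NOT NULL,
    VenueName nvarchar(50) NOT NULL
)
WITH
(
    DATA_SOURCE = WingtipTenantDBs,
    DISTRIBUTION = SHARDED(VenueId)
);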
By distributing queries across the tenant databases, Elastic Query provides immediate insight into live
production data. As Elastic Query pulls data from potentially many databases, query latency can be higher than
equivalent queries submitted to a single multi-tenant database. Design queries to minimize the data that is
returned to the head database. Elastic Query is often best suited for querying small amounts of real-time data,
as opposed to building frequently used or complex analytics queries or reports. If queries don't perform well,
look at the execution plan to see what part of the query is pushed down to the remote database and how much
data is being returned. Queries that require complex aggregation or analytical processing may be better handled
by extracting tenant data into a database or data warehouse optimized for analytics queries. This pattern is
explained in the tenant analytics tutorial.
Get the Wingtip Tickets SaaS Database Per Tenant application scripts
The Wingtip Tickets SaaS Database Per Tenant scripts and application source code are available in the
WingtipTicketsSaaS-DbPerTenant GitHub repo. Check out the general guidance for steps to download and
unblock the Wingtip Tickets SaaS scripts.
-- Notice the plural name 'Venues'. This view projects a VenueId column.
SELECT * FROM Venues
-- This view projects the VenueId retrieved from the Venues table.
SELECT * FROM VenueEvents
In these views, the VenueId is computed as a hash of the Venue name, but any approach could be used to
introduce a unique value. This approach is similar to the way the tenant key is computed for use in the catalog.
To examine the definition of the Venues view:
1. In Object Explorer , expand contosoconcerthall > Views :
2. Right-click dbo.Venues .
3. Select Script View as > CREATE To > New Query Editor Window
Script any of the other Venue views to see how they add the VenueId.
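As a rough sketch of the shape of such a view, using the same hashing idea as the tenant key shown earlier (the underlying table and column names here are assumptions, not the exact definition shipped in the sample):
-- Hypothetical sketch: a tenant-database view that projects a computed VenueId
-- so that Elastic Query can treat each single-venue database as a shard.
CREATE VIEW dbo.Venues AS
SELECT CONVERT(int, SUBSTRING(HASHBYTES('SHA2_256', LOWER(VenueName)), 1, 4)) AS VenueId,
       VenueName,
       VenueType,
       PostalCode
FROM dbo.Venue;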
With the catalog database as the external data source, queries are distributed to all databases registered
in the catalog at the time the query runs. As server names are different for each deployment, this script
gets the location of the catalog database from the current server (@@servername) where the script is
executed.
The external tables that reference the global views described in the previous section, and defined with
DISTRIBUTION = SHARDED(VenueId) . Because each VenueId maps to an individual database, this
improves performance for many scenarios as shown in the next section.
The local table VenueTypes that is created and populated. This reference data table is common in all
tenant databases, so it can be represented here as a local table and populated with the common data. For
some queries, having this table defined in the head database can reduce the amount of data that needs to
be moved to the head database.
If you include reference tables in this manner, be sure to update the table schema and data whenever you
update the tenant databases.
4. Press F5 to run the script and initialize the adhocreporting database.
Now you can run distributed queries, and gather insights across all tenants!
6. Now select the On which day were the most tickets sold? query, and press F5 .
This query does a bit more complex joining and aggregation. Most of the processing occurs remotely.
Only single rows, containing each venue's daily ticket sale count per day, are returned to the head
database.
Next steps
In this tutorial you learned how to:
Run distributed queries across all tenant databases
Deploy a reporting database and define the schema required to run distributed queries.
Now try the Tenant Analytics tutorial to explore extracting data to a separate analytics database for more
complex analytics processing.
Additional resources
Additional tutorials that build upon the Wingtip Tickets SaaS Database Per Tenant application
Elastic Query
Cross-tenant analytics using extracted data - single-
tenant app
Finally, the analytics store is queried using Power BI to highlight insights into tenant behavior and their use of
the Wingtip Tickets application. You run queries that:
Show the relative popularity of each venue
Highlight patterns in ticket sales for different events
Show the relative success of different venues in selling out their event
Understanding how each tenant is using the service is used to explore options for monetizing the service and
improving the service to help tenants be more successful. This tutorial provides basic examples of the kinds of
insights that can be gleaned from tenant data.
Setup
Prerequisites
To complete this tutorial, make sure the following prerequisites are met:
The Wingtip Tickets SaaS Database Per Tenant application is deployed. To deploy in less than five minutes, see
Deploy and explore the Wingtip SaaS application
The Wingtip Tickets SaaS Database Per Tenant scripts and application source code are downloaded from
GitHub. See download instructions. Be sure to unblock the zip file before extracting its contents. Check out
the general guidance for steps to download and unblock the Wingtip Tickets SaaS scripts.
Power BI Desktop is installed. Download Power BI Desktop
The batch of additional tenants has been provisioned, see the Provision tenants tutorial .
A job account and job account database have been created. See the appropriate steps in the Schema
management tutorial .
Create data for the demo
In this tutorial, analysis is performed on ticket sales data. In the current step, you generate ticket data for all the
tenants. Later this data is extracted for analysis. Ensure you have provisioned the batch of tenants as described
earlier, so that you have a meaningful amount of data. A sufficiently large amount of data can expose a range of
different ticket purchasing patterns.
1. In PowerShell ISE, open …\Learning Modules\Operational Analytics\Tenant Analytics\Demo-
TenantAnalytics.ps1, and set the following value:
$DemoScenario = 1 Purchase tickets for events at all venues
2. Press F5 to run the script and create ticket purchasing history for every event in each venue. The script runs
for several minutes to generate tens of thousands of tickets.
Deploy the analytics store
Often there are numerous transactional databases that together hold all tenant data. You must aggregate the
tenant data from the many transactional databases into one analytics store. The aggregation enables efficient
query of the data. In this tutorial, an Azure SQL Database is used to store the aggregated data.
In the following steps, you deploy the analytics store, which is called tenantanalytics . You also deploy
predefined tables that are populated later in the tutorial:
1. In PowerShell ISE, open …\Learning Modules\Operational Analytics\Tenant Analytics\Demo-
TenantAnalytics.ps1
2. Set the $DemoScenario variable in the script to match your choice of analytics store:
To use SQL Database without column store, set $DemoScenario = 2
To use SQL Database with column store, set $DemoScenario = 3
3. Press F5 to run the demo script (that calls the Deploy-TenantAnalytics<XX>.ps1 script) which creates the
tenant analytics store.
Now that you have deployed the application and filled it with interesting tenant data, use SQL Server
Management Studio (SSMS) to connect to the tenants1-dpt-<User> and catalog-dpt-<User> servers using Login
= developer, Password = P@ssword1. See the introductory tutorial for more guidance.
In the Object Explorer, perform the following steps:
1. Expand the tenants1-dpt-<User> server.
2. Expand the Databases node, and see the list of tenant databases.
3. Expand the catalog-dpt-<User> server.
4. Verify that you see the analytics store and the jobaccount database.
See the following database items in the SSMS Object Explorer by expanding the analytics store node:
Tables TicketsRawData and EventsRawData hold raw extracted data from the tenant databases.
The star-schema tables are fact_Tickets , dim_Customers , dim_Venues , dim_Events , and dim_Dates .
The stored procedure is used to populate the star-schema tables from the raw data tables.
Data extraction
Create target groups
Before proceeding, ensure you have deployed the job account and jobaccount database. In the next set of steps,
Elastic Jobs is used to extract data from each tenant database, and to store the data in the analytics store. Then
the second job shreds the data and stores it into tables in the star-schema. These two jobs run against two
different target groups, namely TenantGroup and AnalyticsGroup . The extract job runs against the
TenantGroup, which contains all the tenant databases. The shredding job runs against the AnalyticsGroup, which
contains just the analytics store. Create the target groups by using the following steps:
1. In SSMS, connect to the jobaccount database in catalog-dpt-<User>.
2. In SSMS, open …\Learning Modules\Operational Analytics\Tenant Analytics\ TargetGroups.sql
3. Modify the @User variable at the top of the script, replacing <User> with the user value used when you
deployed the Wingtip SaaS app.
4. Press F5 to run the script that creates the two target groups.
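The TargetGroups.sql script itself isn't reproduced here. As a rough, hedged illustration, defining the two target groups with the Elastic Jobs stored procedures can look like the following; the sample's jobaccount script may use different procedure names, credentials, and parameters.
-- Hedged sketch only; server, credential, and database names are assumptions.
-- TenantGroup: every database on the tenants server (all tenant databases).
EXEC jobs.sp_add_target_group @target_group_name = N'TenantGroup';
EXEC jobs.sp_add_target_group_member
    @target_group_name = N'TenantGroup',
    @membership_type = N'Include',
    @target_type = N'SqlServer',
    @refresh_credential_name = N'myrefreshcred',
    @server_name = N'tenants1-dpt-<User>.database.windows.net';

-- AnalyticsGroup: just the analytics store database.
EXEC jobs.sp_add_target_group @target_group_name = N'AnalyticsGroup';
EXEC jobs.sp_add_target_group_member
    @target_group_name = N'AnalyticsGroup',
    @membership_type = N'Include',
    @target_type = N'SqlDatabase',
    @server_name = N'catalog-dpt-<User>.database.windows.net',
    @database_name = N'tenantanalytics';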
Extract raw data from all tenants
Extensive data modifications might occur more frequently for ticket and customer data than for event and venue
data. Therefore, consider extracting ticket and customer data separately and more frequently than you extract
event and venue data. In this section, you define and schedule two separate jobs:
Extract ticket and customer data.
Extract event and venue data.
Each job extracts its data, and posts it into the analytics store. There a separate job shreds the extracted data into
the analytics star-schema.
1. In SSMS, connect to the jobaccount database in catalog-dpt-<User> server.
2. In SSMS, open ...\Learning Modules\Operational Analytics\Tenant Analytics\ExtractTickets.sql.
3. Modify @User at the top of the script, and replace <User> with the user name used when you deployed the
Wingtip SaaS app
4. Press F5 to run the script that creates and runs the job that extracts tickets and customers data from each
tenant database. The job saves the data into the analytics store.
5. Query the TicketsRawData table in the tenantanalytics database, to ensure that the table is populated with
tickets information from all tenants.
Repeat the preceding steps, except this time replace \ExtractTickets.sql with \ExtractVenuesEvents.sql in
step 2.
Successfully running the job populates the EventsRawData table in the analytics store with new events and
venues information from all tenants.
Data reorganization
Shred extracted data to populate star-schema tables
The next step is to shred the extracted raw data into a set of tables that are optimized for analytics queries. A
star-schema is used. A central fact table holds individual ticket sales records. Other tables are populated with
related data about venues, events, and customers. And there are time dimension tables.
In this section of the tutorial, you define and run a job that merges the extracted raw data with the data in the
star-schema tables. After the merge job is finished, the raw data is deleted, leaving the tables ready to be
populated by the next tenant data extract job.
1. In SSMS, connect to the jobaccount database in catalog-dpt-<User>.
2. In SSMS, open …\Learning Modules\Operational Analytics\Tenant Analytics\ShredRawExtractedData.sql.
3. Press F5 to run the script to define a job that calls the sp_ShredRawExtractedData stored procedure in the
analytics store.
4. Allow enough time for the job to run successfully.
Check the Lifecycle column of the jobs.jobs_execution table for the status of the job, and ensure that the job
Succeeded before proceeding.
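Conceptually, the shredding job copies newly extracted rows into the star-schema tables and then clears the raw tables. The following is a simplified, hypothetical sketch with assumed table and column names; it is not the body of the actual sp_ShredRawExtractedData procedure.
-- Hypothetical, simplified sketch of the shredding pattern.
-- 1. Copy newly extracted ticket rows into the fact table, resolving the date dimension key.
INSERT INTO dbo.fact_Tickets (VenueId, EventId, CustomerId, DateId, PurchasePrice)
SELECT r.VenueId,
       r.EventId,
       r.CustomerId,
       d.DateId,
       r.PurchasePrice
FROM dbo.TicketsRawData AS r
JOIN dbo.dim_Dates AS d
    ON d.[Date] = CAST(r.PurchaseDate AS date);

-- 2. Clear the raw table so it's ready for the next extract job.
TRUNCATE TABLE dbo.TicketsRawData;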
Data exploration
Visualize tenant data
The data in the star-schema table provides all the ticket sales data needed for your analysis. To make it easier to
see trends in large data sets, you need to visualize it graphically. In this section, you learn how to use Power BI
to manipulate and visualize the tenant data you have extracted and organized.
Use the following steps to connect to Power BI, and to import the views you created earlier:
1. Launch Power BI desktop.
2. From the Home ribbon, select Get Data , and select More… from the menu.
3. In the Get Data window, select Azure SQL Database.
4. In the database login window, enter your server name (catalog-dpt-<User>.database.windows.net). Select
Import for Data Connectivity Mode, and then click OK.
5. Select Database in the left pane, then enter user name = developer, and enter password = P@ssword1.
Click Connect .
6. In the Navigator pane, under the analytics database, select the star-schema tables: fact_Tickets,
dim_Events, dim_Venues, dim_Customers and dim_Dates. Then select Load .
Congratulations! You have successfully loaded the data into Power BI. Now you can start exploring interesting
visualizations to help gain insights into your tenants. Next you walk through how analytics can enable you to
provide data-driven recommendations to the Wingtip Tickets business team. The recommendations can help to
optimize the business model and customer experience.
You start by analyzing ticket sales data to see the variation in usage across the venues. Select the following
options in Power BI to plot a bar chart of the total number of tickets sold by each venue. Due to random
variation in the ticket generator, your results may be different.
The preceding plot confirms that the number of tickets sold by each venue varies. Venues that sell more tickets
are using your service more heavily than venues that sell fewer tickets. There may be an opportunity here to
tailor resource allocation according to different tenant needs.
You can further analyze the data to see how ticket sales vary over time. Select the following options in Power BI
to plot the total number of tickets sold each day for a period of 60 days.
The preceding chart displays that ticket sales spike for some venues. These spikes reinforce the idea that some
venues might be consuming system resources disproportionately. So far there is no obvious pattern in when the
spikes occur.
Next you want to further investigate the significance of these peak sale days. When do these peaks occur after
tickets go on sale? To plot tickets sold per day, select the following options in Power BI.
The preceding plot shows that some venues sell a lot of tickets on the first day of sale. As soon as tickets go on
sale at these venues, there seems to be a mad rush. This burst of activity by a few venues might impact the
service for other tenants.
You can drill into the data again to see if this mad rush is true for all events hosted by these venues. In previous
plots, you observed that Contoso Concert Hall sells a lot of tickets, and that Contoso also has a spike in ticket
sales on certain days. Play around with Power BI options to plot cumulative ticket sales for Contoso Concert Hall,
focusing on sale trends for each of its events. Do all events follow the same sale pattern?
The preceding plot for Contoso Concert Hall shows that the mad rush does not happen for all events. Play
around with the filter options to see sale trends for other venues.
The insights into ticket selling patterns might lead Wingtip Tickets to optimize their business model. Instead of
charging all tenants equally, perhaps Wingtip should introduce service tiers with different compute sizes. Larger
venues that need to sell more tickets per day could be offered a higher tier with a higher service level
agreement (SLA). Those venues could have their databases placed in pool with higher per-database resource
limits. Each service tier could have an hourly sales allocation, with additional fees charged for exceeding the
allocation. Larger venues that have periodic bursts of sales would benefit from the higher tiers, and Wingtip
Tickets can monetize their service more efficiently.
Meanwhile, some Wingtip Tickets customers complain that they struggle to sell enough tickets to justify the
service cost. Perhaps in these insights there is an opportunity to boost ticket sales for underperforming venues.
Higher sales would increase the perceived value of the service. Right click fact_Tickets and select New
measure . Enter the following expression for the new measure called AverageTicketsSold :
Select the following visualization options to plot the percentage tickets sold by each venue to determine their
relative success.
The preceding plot shows that even though most venues sell more than 80% of their tickets, some are
struggling to fill more than half the seats. Play around with the Values Well to select maximum or minimum
percentage of tickets sold for each venue.
Earlier you deepened your analysis to discover that ticket sales tend to follow predictable patterns. This
discovery might let Wingtip Tickets help underperforming venues boost ticket sales by recommending dynamic
pricing. This discovery could reveal an opportunity to employ machine learning techniques to predict ticket sales
for each event. Predictions could also be made for the impact on revenue of offering discounts on ticket sales.
Power BI Embedded could be integrated into an event management application. The integration could help
visualize predicted sales and the effect of different discounts. The application could help devise an optimum
discount to be applied directly from the analytics display.
You have observed trends in tenant data from the WingTip application. You can contemplate other ways the app
can inform business decisions for SaaS application vendors. Vendors can better cater to the needs of their
tenants. Hopefully this tutorial has equipped you with tools necessary to perform analytics on tenant data to
empower your businesses to make data-driven decisions.
Next steps
In this tutorial, you learned how to:
Deploy a tenant analytics database with pre-defined star-schema tables
Use elastic jobs to extract data from all the tenant databases
Merge the extracted data into tables in a star-schema designed for analytics
Query an analytics database
Use Power BI for data visualization to observe trends in tenant data
Congratulations!
Additional resources
Additional tutorials that build upon the Wingtip SaaS application.
Elastic Jobs.
Cross-tenant analytics using extracted data - multi-tenant app
Explore SaaS analytics with Azure SQL Database,
Azure Synapse Analytics, Data Factory, and Power
BI
Finally, the star-schema tables are queried. Query results are displayed visually using Power BI to highlight
insights into tenant behavior and their use of the application. With this star-schema, you run queries that expose:
Who is buying tickets and from which venue.
Patterns and trends in the sale of tickets.
The relative popularity of each venue.
This tutorial provides basic examples of insights that can be gleaned from the Wingtip Tickets data.
Understanding how each venue uses the service might cause the Wingtip Tickets vendor to think about different
service plans targeted at more or less active venues, for example.
Setup
Prerequisites
To complete this tutorial, make sure the following prerequisites are met:
The Wingtip Tickets SaaS Database Per Tenant application is deployed. To deploy in less than five minutes, see
Deploy and explore the Wingtip SaaS application.
The Wingtip Tickets SaaS Database Per Tenant scripts and application source code are downloaded from
GitHub. See download instructions. Be sure to unblock the zip file before extracting its contents.
Power BI Desktop is installed. Download Power BI Desktop.
The batch of additional tenants has been provisioned, see the Provision tenants tutorial .
Create data for the demo
This tutorial explores analytics over ticket sales data. In this step, you generate ticket data for all the tenants. In a
later step, this data is extracted for analysis. Ensure you provisioned the batch of tenants (as described earlier) so
that you have enough data to expose a range of different ticket purchasing patterns.
1. In PowerShell ISE, open …\Learning Modules\Operational Analytics\Tenant Analytics DW\Demo-
TenantAnalyticsDW.ps1, and set the following value:
$DemoScenario = 1 Purchase tickets for events at all venues
2. Press F5 to run the script and create ticket purchasing history for all the venues. With 20 tenants, the script
generates tens of thousands of tickets and may take 10 minutes or more.
Deploy Azure Synapse Analytics, Data Factory, and Blob Storage
In the Wingtip Tickets app, the tenants' transactional data is distributed over many databases. Azure Data
Factory (ADF) is used to orchestrate the Extract, Load, and Transform (ELT) of this data into the data warehouse.
To load data into Azure Synapse Analytics most efficiently, ADF extracts data into intermediate blob files and
then uses PolyBase to load the data into the data warehouse.
In this step, you deploy the additional resources used in the tutorial: a dedicated SQL pool called tenantanalytics,
an Azure Data Factory called dbtodwload-<user>, and an Azure storage account called wingtipstaging<user>.
The storage account is used to temporarily hold extracted data files as blobs before they are loaded into the data
warehouse. This step also deploys the data warehouse schema and defines the ADF pipelines that orchestrate
the ELT process.
1. In PowerShell ISE, open …\Learning Modules\Operational Analytics\Tenant Analytics DW\Demo-
TenantAnalyticsDW.ps1 and set:
$DemoScenario = 2 Deploy tenant analytics data warehouse, blob storage, and data factory
2. Press F5 to run the demo script and deploy the Azure resources.
Now review the Azure resources you deployed:
Tenant databases and analytics store
Use SQL Server Management Studio (SSMS) to connect to tenants1-dpt-<user> and catalog-dpt-<user>
servers. Replace <user> with the value used when you deployed the app. Use Login = developer and Password
= P@ssword1. See the introductory tutorial for more guidance.
Blob storage
1. In the Azure portal, navigate to the resource group that you used for deploying the application. Verify that
a storage account called wingtipstaging<user> has been added.
This section explores the data factory created. Follow the steps below to launch the data factory:
1. In the portal, click the data factory called dbtodwload-<user> .
2. Click Author & Monitor tile to launch the Data Factory designer in a separate tab.
In the overview page, switch to Author tab on the left panel and observe that there are three pipelines and
three datasets created.
Corresponding to the three linked services, there are three datasets that refer to the data you use in the pipeline
activities as inputs or outputs. Explore each of the datasets to observe connections and parameters used.
AzureBlob points to the configuration file containing source and target tables and columns, as well as the tracker
column in each source.
Data warehouse pattern overview
Azure Synapse is used as the analytics store to perform aggregation on the tenant data. In this sample, PolyBase
is used to load data into the data warehouse. Raw data is loaded into staging tables that have an identity column
to keep track of rows that have been transformed into the star-schema tables.
Slowly Changing Dimension (SCD) type 1 dimension tables are used in this example. Each dimension has a
surrogate key defined using an identity column. As a best practice, the date dimension table is pre-populated to
save time. For the other dimension tables, a CREATE TABLE AS SELECT... (CTAS) statement is used to create a
temporary table containing the existing modified and non-modified rows, along with the surrogate keys. This is
done with IDENTITY_INSERT=ON. New rows are then inserted into the table with IDENTITY_INSERT=OFF. For
easy roll-back, the existing dimension table is renamed and the temporary table is renamed to become the new
dimension table. Before each run, the old dimension table is deleted.
Dimension tables are loaded before the fact table. This sequencing ensures that for each arriving fact, all
referenced dimensions already exist. As the facts are loaded, the business key for each corresponding dimension
is matched and the corresponding surrogate keys are added to each fact.
The final step of the transform deletes the staging data, leaving the staging tables ready for the next execution of the pipeline.
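The following sketch illustrates the swap step of this loading pattern. It is not the Wingtip pipeline code: the table names (dim_Venues and dim_Venues_upsert) are placeholders, and the server and credentials are the demo values used elsewhere in this tutorial. It uses Invoke-Sqlcmd from the SqlServer PowerShell module to run the T-SQL against the dedicated SQL pool.
# Hedged sketch of the rename-based dimension swap described above.
# Object names are illustrative placeholders; adjust to your deployment.
Import-Module SqlServer

$swapSql = @"
-- Drop the previous run's old table, then swap the freshly built upsert table
-- in as the live dimension. Renaming (instead of dropping immediately) keeps
-- the outgoing table available as an easy roll-back point.
IF OBJECT_ID('dbo.dim_Venues_old') IS NOT NULL
    DROP TABLE dbo.dim_Venues_old;

RENAME OBJECT dbo.dim_Venues TO dim_Venues_old;
RENAME OBJECT dbo.dim_Venues_upsert TO dim_Venues;
"@

Invoke-Sqlcmd -ServerInstance "catalog-dpt-<user>.database.windows.net" `
    -Database "tenantanalytics" -Username "developer" -Password "P@ssword1" `
    -Query $swapSql
Because the swap is a pair of metadata-only renames, queries see either the old or the new dimension, never a partially loaded one, which is the roll-back safety the pattern is designed for.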
Trigger the pipeline run
Follow the steps below to run the complete extract, load, and transform pipeline for all the tenant databases:
1. In the Author tab of the ADF user interface, select SQLDBToDW pipeline from the left pane.
2. Click Trigger and, from the drop-down menu, click Trigger Now . This action runs the pipeline immediately.
In a production scenario, you would define a schedule for running the pipeline to keep the data refreshed.
3. Connect to the data warehouse with SSMS and query the star-schema tables to verify that data was loaded in
these tables.
Once the pipeline has completed, the fact table holds ticket sales data for all venues and the dimension tables
are populated with the corresponding venues, events, and customers.
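If you prefer to check the load from a script rather than SSMS, a quick row count over the star-schema tables is enough. This is a minimal sketch, assuming the default dbo schema and the demo credentials used throughout this tutorial; adjust the names to your deployment.
# Hedged sketch: count rows in the star-schema tables after the pipeline run.
Import-Module SqlServer

$countSql = @"
SELECT 'fact_Tickets' AS TableName, COUNT(*) AS TotalRows FROM dbo.fact_Tickets
UNION ALL SELECT 'dim_Venues',    COUNT(*) FROM dbo.dim_Venues
UNION ALL SELECT 'dim_Events',    COUNT(*) FROM dbo.dim_Events
UNION ALL SELECT 'dim_Customers', COUNT(*) FROM dbo.dim_Customers
UNION ALL SELECT 'dim_Dates',     COUNT(*) FROM dbo.dim_Dates;
"@

Invoke-Sqlcmd -ServerInstance "catalog-dpt-<user>.database.windows.net" `
    -Database "tenantanalytics" -Username "developer" -Password "P@ssword1" `
    -Query $countSql | Format-Table -AutoSize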
Data Exploration
Visualize tenant data
The data in the star-schema provides all the ticket sales data needed for your analysis. Visualizing data
graphically makes it easier to see trends in large data sets. In this section, you use Power BI to manipulate and
visualize the tenant data in the data warehouse.
Use the following steps to connect to Power BI, and to import the views you created earlier:
1. Launch Power BI desktop.
2. From the Home ribbon, select Get Data , and select More… from the menu.
3. In the Get Data window, select Azure SQL Database .
4. In the database login window, enter your server name (catalog-dpt-<User>.database.windows.net ).
Select Import for Data Connectivity Mode , and then click OK .
5. Select Database in the left pane, then enter user name = developer, and enter password = P@ssword1.
Click Connect .
6. In the Navigator pane, under the analytics database, select the star-schema tables: fact_Tickets ,
dim_Events , dim_Venues , dim_Customers and dim_Dates . Then select Load .
Congratulations! You successfully loaded the data into Power BI. Now explore interesting visualizations to gain
insights into your tenants. Let's walk through how analytics can provide some data-driven recommendations to
the Wingtip Tickets business team. The recommendations can help to optimize the business model and
customer experience.
Start by analyzing ticket sales data to see the variation in usage across the venues. Select the options shown in
Power BI to plot a bar chart of the total number of tickets sold by each venue. (Due to random variation in the
ticket generator, your results may be different.)
The preceding plot confirms that the number of tickets sold by each venue varies. Venues that sell more tickets
are using your service more heavily than venues that sell fewer tickets. There may be an opportunity here to
tailor resource allocation according to different tenant needs.
You can further analyze the data to see how ticket sales vary over time. Select the options shown in the
following image in Power BI to plot the total number of tickets sold each day for a period of 60 days.
The preceding chart shows that ticket sales spike for some venues. These spikes reinforce the idea that some
venues might be consuming system resources disproportionately. So far there is no obvious pattern in when the
spikes occur.
Next let's investigate the significance of these peak sale days. When do these peaks occur after tickets go on
sale? To plot tickets sold per day, select the options shown in the following image in Power BI.
This plot shows that some venues sell large numbers of tickets on the first day of sale. As soon as tickets go on
sale at these venues, there seems to be a mad rush. This burst of activity by a few venues might impact the
service for other tenants.
You can drill into the data again to see if this mad rush is true for all events hosted by these venues. In previous
plots, you saw that Contoso Concert Hall sells many tickets, and that Contoso also has a spike in ticket sales on
certain days. Play around with Power BI options to plot cumulative ticket sales for Contoso Concert Hall,
focusing on sale trends for each of its events. Do all events follow the same sale pattern? Try to produce a plot
like the one below.
This plot of cumulative ticket sales over time for Contoso Concert Hall for each event shows that the mad rush
does not happen for all events. Play around with the filter options to explore sale trends for other venues.
The insights into ticket selling patterns might lead Wingtip Tickets to optimize their business model. Instead of
charging all tenants equally, perhaps Wingtip should introduce service tiers with different compute sizes. Larger
venues that need to sell more tickets per day could be offered a higher tier with a higher service level
agreement (SLA). Those venues could have their databases placed in a pool with higher per-database resource
limits. Each service tier could have an hourly sales allocation, with additional fees charged for exceeding the
allocation. Larger venues that have periodic bursts of sales would benefit from the higher tiers, and Wingtip
Tickets can monetize their service more efficiently.
Meanwhile, some Wingtip Tickets customers complain that they struggle to sell enough tickets to justify the
service cost. Perhaps in these insights there is an opportunity to boost ticket sales for underperforming venues.
Higher sales would increase the perceived value of the service. Right click fact_Tickets and select New
measure . Enter the following expression for the new measure called AverageTicketsSold :
AverageTicketsSold = DIVIDE(DIVIDE(COUNTROWS(fact_Tickets),DISTINCT(dim_Venues[VenueCapacity]))*100,
COUNTROWS(dim_Events))
Select the following visualization options to plot the percentage tickets sold by each venue to determine their
relative success.
The plot above shows that even though most venues sell more than 80% of their tickets, some are struggling to
fill more than half their seats. Play around with the Values Well to select maximum or minimum percentage of
tickets sold for each venue.
Next steps
In this tutorial, you learned how to:
Create the tenant analytics store for loading.
Use Azure Data Factory (ADF) to extract data from each tenant database into the analytics data warehouse.
Optimize the extracted data (reorganize into a star-schema).
Query the analytics data warehouse.
Use Power BI for data visualization to highlight trends in tenant data and make recommendations for
improvements.
Congratulations!
Additional resources
Additional tutorials that build upon the Wingtip SaaS application.
Use geo-restore to recover a multitenant SaaS
application from database backups
9/13/2022 • 18 minutes to read
Geo-restore is the lowest-cost disaster recovery solution for Azure SQL Database. However, restoring from geo-
redundant backups can result in data loss of up to one hour. It can take considerable time, depending on the size
of each database.
NOTE
Recover applications with the lowest possible RPO and RTO by using geo-replication instead of geo-restore.
This tutorial explores both restore and repatriation workflows. You learn how to:
Sync database and elastic pool configuration info into the tenant catalog.
Set up a mirror image environment in a recovery region that includes application, servers, and pools.
Recover catalog and tenant databases by using geo-restore.
Use geo-replication to repatriate the tenant catalog and changed tenant databases after the outage is
resolved.
Update the catalog as each database is restored (or repatriated) to track the current location of the active
copy of each tenant's database.
Ensure that the application and tenant database are always co-located in the same Azure region to reduce
latency.
Before you start this tutorial, complete the following prerequisites:
Deploy the Wingtip Tickets SaaS database per tenant app. To deploy in less than five minutes, see Deploy and
explore the Wingtip Tickets SaaS database per tenant application.
Install Azure PowerShell. For details, see Getting started with Azure PowerShell.
NOTE
The application is recovered into the paired region of the region in which the application is deployed. For more
information, see Azure paired regions.
This tutorial uses features of Azure SQL Database and the Azure platform to address these challenges:
Azure Resource Manager templates, to reserve all needed capacity as quickly as possible. Azure Resource
Manager templates are used to provision a mirror image of the original servers and elastic pools in the
recovery region. A separate server and pool are also created for provisioning new tenants.
Elastic Database Client Library (EDCL), to create and maintain a tenant database catalog. The extended
catalog includes periodically refreshed pool and database configuration information.
Shard management recovery features of the EDCL, to maintain database location entries in the catalog
during recovery and repatriation.
Geo-restore, to recover the catalog and tenant databases from automatically maintained geo-redundant
backups. A sketch of the underlying restore call appears after this list.
Asynchronous restore operations, sent in tenant-priority order, are queued for each pool by the system and
processed in batches so the pool isn't overloaded. These operations can be canceled before or during
execution if necessary.
Geo-replication, to repatriate databases to the original region after the outage. There is no data loss and
minimal impact on the tenant when you use geo-replication.
SQL server DNS aliases, to allow the catalog sync process to connect to the active catalog regardless of its
location.
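As a rough illustration of the geo-restore step listed above, the following sketch shows the kind of Az PowerShell call the recovery jobs issue for each tenant database. The resource group, server, pool, and database names are placeholders; the sample scripts derive the real values from the catalog.
# Hedged sketch: restore one tenant database from its geo-redundant backup
# into a pre-created server and elastic pool in the recovery region.
$geoBackup = Get-AzSqlDatabaseGeoBackup `
    -ResourceGroupName "wingtip-dpt-<user>" `
    -ServerName "tenants1-dpt-<user>" `
    -DatabaseName "contosoconcerthall"

Restore-AzSqlDatabase -FromGeoBackup `
    -ResourceGroupName "wingtip-dpt-<user>-recovery" `
    -ServerName "tenants1-dpt-<user>-recovery" `
    -TargetDatabaseName "contosoconcerthall" `
    -ResourceId $geoBackup.ResourceID `
    -ElasticPoolName "Pool1"
After each restore completes, the catalog entry for the tenant is updated to point to the recovered database, which is what brings that tenant back online in the recovery region.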
TIP
Hover the mouse over the location to enlarge the display.
2. Select the Contoso Concert Hall tenant and open its event page.
In the footer, notice the tenant's server name. The location is the same as the catalog server's location.
3. In the Azure portal, review and open the resource group in which you deployed the app.
Notice the resources and the region in which the app service components and SQL Database are deployed.
IMPORTANT
For simplicity, the sync process and other long-running recovery and repatriation processes are implemented in these
samples as local PowerShell jobs or sessions that run under your client user login. The authentication tokens issued when
you log in expire after several hours, and the jobs will then fail. In a production scenario, long-running processes should
be implemented as reliable Azure services of some kind, running under a service principal. See Use Azure PowerShell to
create a service principal with a certificate.
1. In the PowerShell ISE, open the ...\Learning Modules\UserConfig.psm1 file. Replace <resourcegroup> and
<user> on lines 10 and 11 with the values used when you deployed the app. Save the file.
2. In the PowerShell ISE, open the ...\Learning Modules\Business Continuity and Disaster Recovery\DR-
RestoreFromBackup\Demo-RestoreFromBackup.ps1 script.
In this tutorial, you run each of the scenarios in this PowerShell script, so keep this file open.
3. Set the following:
$DemoScenario = 1: Start a background job that syncs tenant server and pool configuration info into the
catalog.
4. To run the sync script, select F5.
This information is used later to ensure that recovery creates a mirror image of the servers, pools, and
databases in the recovery region.
Leave the PowerShell window running in the background and continue with the rest of this tutorial.
NOTE
The sync process connects to the catalog via a DNS alias. The alias is modified during restore and repatriation to point to
the active catalog. The sync process keeps the catalog up to date with any database or pool configuration changes made
in the recovery region. During repatriation, these changes are applied to the equivalent resources in the original region.
NOTE
To explore the code for the recovery jobs, review the PowerShell scripts in the ...\Learning Modules\Business Continuity
and Disaster Recovery\DR-RestoreFromBackup\RecoveryJobs folder.
NOTE
Other tutorials in the sample are not designed to run with the app in the recovery state. If you want to explore other
tutorials, be sure to repatriate the application first.
The restore process creates all the recovery resources in a recovery resource group. The cleanup process deletes
this resource group and removes all references to the resources from the catalog.
1. In the PowerShell ISE, in the ...\Learning Modules\Business Continuity and Disaster Recovery\DR-
RestoreFromBackup\Demo-RestoreFromBackup.ps1 script, set:
$DemoScenario = 6: Delete obsolete resources from the recovery region.
2. To run the script, select F5.
After the cleanup script completes, the application is back where it started. At this point, you can run the script again or
try out other tutorials.
Designing the application to ensure that the app and the database
are co-located
The application is designed to always connect from an instance in the same region as the tenant's database. This
design reduces latency between the application and the database. This optimization assumes the app-to-
database interaction is chattier than the user-to-app interaction.
Tenant databases might be spread across recovery and original regions for some time during repatriation. For
each database, the app looks up the region in which the database is located by doing a DNS lookup on the
tenant server name. The server name is an alias. The aliased server name contains the region name. If the
application isn't in the same region as the database, it redirects to the instance in the same region as the server.
Redirecting to the instance in the same region as the database minimizes latency between the app and the
database.
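A minimal sketch of that lookup is shown below. It assumes the Wingtip convention that recovery servers carry a distinguishing suffix in their names; the alias and server names are placeholders, and the actual application performs the equivalent lookup with .NET DNS APIs rather than PowerShell.
# Hedged sketch: resolve the tenant server alias and infer the active region
# from the name of the server the alias currently points to.
$tenantServerAlias = "tenants1-dpt-<user>.database.windows.net"   # alias recorded in the catalog

$cname = (Resolve-DnsName -Name $tenantServerAlias -Type CNAME |
          Select-Object -First 1).NameHost

if ($cname -like "*-recovery*") {
    Write-Output "Tenant database is currently served from the recovery region."
}
else {
    Write-Output "Tenant database is currently served from the original region."
}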
Next steps
In this tutorial, you learned how to:
Use the tenant catalog to hold periodically refreshed configuration information, which allows a mirror image
recovery environment to be created in another region.
Recover databases into the recovery region by using geo-restore.
Update the tenant catalog to reflect restored tenant database locations.
Use a DNS alias to enable an application to connect to the tenant catalog throughout recovery and repatriation without
reconfiguration.
Use geo-replication to repatriate recovered databases to their original region after an outage is resolved.
Try the Disaster recovery for a multitenant SaaS application using database geo-replication tutorial to learn how
to use geo-replication to dramatically reduce the time needed to recover a large-scale multitenant application.
Additional resources
Additional tutorials that build upon the Wingtip SaaS application
Disaster recovery for a multi-tenant SaaS
application using database geo-replication
9/13/2022 • 17 minutes to read
The recovery scripts used in this tutorial and Wingtip application source code are available in the Wingtip
Tickets SaaS database per tenant GitHub repository. Check out the general guidance for steps to download and
unblock the Wingtip Tickets management scripts.
Tutorial overview
In this tutorial, you first use geo-replication to create replicas of the Wingtip Tickets application and its databases
in a different region. Then, you fail over to this region to simulate recovering from an outage. When complete,
the application is fully functional in the recovery region.
Later, in a separate repatriation step, you fail over the catalog and tenant databases in the recovery region to the
original region. The application and databases stay available throughout repatriation. When complete, the
application is fully functional in the original region.
NOTE
The application is recovered into the paired region of the region in which the application is deployed. For more
information, see Azure paired regions.
IMPORTANT
For simplicity, the sync process and other long running recovery and repatriation processes are implemented in these
tutorials as local PowerShell jobs or sessions that run under your client user login. The authentication tokens issued when
you log in expire after several hours, and the jobs will then fail. In a production scenario, long-running processes should
be implemented as reliable Azure services of some kind, running under a service principal. See Use Azure PowerShell to
create a service principal with a certificate.
1. In the PowerShell ISE, open the ...\Learning Modules\UserConfig.psm1 file. Replace <resourcegroup> and
<user> on lines 10 and 11 with the values used when you deployed the app. Save the file.
2. In the PowerShell ISE, open the ...\Learning Modules\Business Continuity and Disaster Recovery\DR-
FailoverToReplica\Demo-FailoverToReplica.ps1 script and set:
$DemoScenario = 1 , Start a background job that syncs tenant server and pool configuration info
into the catalog
3. Press F5 to run the sync script. A new PowerShell session is opened to sync the configuration of tenant
resources.
Leave the PowerShell window running in the background and continue with the rest of the tutorial.
NOTE
The sync process connects to the catalog via a DNS alias. This alias is modified during restore and repatriation to point to
the active catalog. The sync process keeps the catalog up-to-date with any database or pool configuration changes made
in the recovery region. During repatriation, these changes are applied to the equivalent resources in the original region.
NOTE
This tutorial adds geo-replication protection to the Wingtip Tickets sample application. In a production scenario for an
application that uses geo-replication, each tenant would be provisioned with a geo-replicated database from the outset.
See Designing highly available services using Azure SQL Database
1. In the PowerShell ISE, open the ...\Learning Modules\Business Continuity and Disaster Recovery\DR-
FailoverToReplica\Demo-FailoverToReplica.ps1 script and set the following values:
$DemoScenario = 2 , Create mirror image recovery environment and replicate catalog and tenant
databases
2. Press F5 to run the script. A new PowerShell session is opened to create the replicas.
In the Azure regions map, note the geo-replication link between the primary in the original region and the
secondary in the recovery region.
NOTE
In an outage scenario, the primary databases in the original region are offline. A forced failover on the secondary
breaks the connection to the primary without trying to apply any residual queued transactions. In a DR drill
scenario like this tutorial, if there is any update activity at the time of failover there could be some data loss. Later,
during repatriation, when you fail over databases in the recovery region back to the original region, a normal
failover is used to ensure there is no data loss.
8. Monitors the service to determine when databases have been failed over. Once a tenant database is failed
over, it updates the catalog to record the recovery state of the tenant database and mark the tenant as
online.
Tenant databases can be accessed by the application as soon as they're marked online in the catalog.
A sum of rowversion values in the tenant database is stored in the catalog. This value acts as a
fingerprint that allows the repatriation process to determine if the database has been updated in the
recovery region.
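For reference, the failover that the recovery jobs perform on each geo-replicated secondary corresponds roughly to the following Az PowerShell call. The names are placeholders, and the sample scripts iterate over the databases recorded in the catalog. The -AllowDataLoss switch produces the forced failover described in the note above; omitting it gives the normal, lossless failover used later during repatriation.
# Hedged sketch: force a failover of one secondary database in the recovery region.
Set-AzSqlDatabaseSecondary `
    -ResourceGroupName "wingtip-dpt-<user>-recovery" `
    -ServerName "tenants1-dpt-<user>-recovery" `
    -DatabaseName "contosoconcerthall" `
    -PartnerResourceGroupName "wingtip-dpt-<user>" `
    -Failover `
    -AllowDataLoss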
Run the script to fail over to the recovery region
Now imagine there is an outage in the region in which the application is deployed and run the recovery script:
1. In the PowerShell ISE, open the ...\Learning Modules\Business Continuity and Disaster Recovery\DR-
FailoverToReplica\Demo-FailoverToReplica.ps1 script and set the following values:
$DemoScenario = 3 , Recover the app into a recovery region by failing over to replicas
2. Press F5 to run the script.
The script opens in a new PowerShell window and then starts a series of PowerShell jobs that run in
parallel. These jobs fail over tenant databases to the recovery region.
The recovery region is the paired region associated with the Azure region in which you deployed the
application. For more information, see Azure paired regions.
3. Monitor the status of the recovery process in the PowerShell window.
NOTE
To explore the code for the recovery jobs, review the PowerShell scripts in the ...\Learning Modules\Business Continuity
and Disaster Recovery\DR-FailoverToReplica\RecoveryJobs folder.
NOTE
With only a few databases to recover, you may not be able to refresh the browser before recovery has
completed, so you may not see the tenants while they are offline.
If you open an offline tenant's Events page directly, it displays a 'tenant offline' notification. For
example, if Contoso Concert Hall is offline, try to open http://events.wingtip-dpt.<user>.trafficmanager.net/contosoconcerthall
Provision a new tenant in the recovery region
Even before all the existing tenant databases have failed over, you can provision new tenants in the recovery
region.
1. In the PowerShell ISE, open the ...\Learning Modules\Business Continuity and Disaster Recovery\DR-
FailoverToReplica\Demo-FailoverToReplica.ps1 script and set the following property:
$DemoScenario = 4 , Provision a new tenant in the recovery region
2. Press F5 to run the script and provision the new tenant.
3. The Hawthorn Hall events page opens in the browser when it completes. Note from the footer that the
Hawthorn Hall database is provisioned in the recovery region.
4. In the browser, refresh the Wingtip Tickets Events Hub page to see Hawthorn Hall included.
If you provisioned Hawthorn Hall without waiting for the other tenants to restore, other tenants may
still be offline.
Next steps
In this tutorial you learned how to:
Sync database and elastic pool configuration info into the tenant catalog
Set up a recovery environment in an alternate region, comprising application, servers, and pools
Use geo-replication to replicate the catalog and tenant databases to the recovery region
Fail over the application and catalog and tenant databases to the recovery region
Fail back the application, catalog and tenant databases to the original region after the outage is resolved
You can learn more about the technologies Azure SQL Database provides to enable business continuity in the
Business Continuity Overview documentation.
Additional resources
Additional tutorials that build upon the Wingtip SaaS application
Deploy and explore a sharded multi-tenant
application
9/13/2022 • 11 minutes to read
Prerequisites
To complete this tutorial, make sure the following prerequisites are completed:
The latest Azure PowerShell is installed. For details, see Getting started with Azure PowerShell.
IMPORTANT
For this demonstration, do not use any pre-existing resource groups, servers, or pools. Instead, choose Create a
new resource group . Delete this resource group when you are finished with the application to stop related
billing. Do not use this application, or any resources it creates, for production. Some aspects of authentication, and
the server firewall settings, are intentionally insecure in the app to facilitate the demonstration.
For Resource group - Select Create new , and then provide a Name for the resource group (case
sensitive).
Select a Location from the drop-down list.
For User - We recommend that you choose a short User value.
3. Deploy the application .
Click to agree to the terms and conditions.
Click Purchase .
4. Monitor deployment status by clicking Notifications , which is the bell icon to the right of the search box.
Deploying the Wingtip app takes approximately five minutes.
NOTE
You must run the PowerShell scripts only by pressing the F5 key, not by pressing F8 to run a selected part of the
script. The problem with F8 is that the $PSScriptRoot variable is not evaluated. This variable is needed by many
scripts to navigate folders, invoke other scripts, or import modules.
The new Red Maple Racing tenant is added to the Tenants1 database and registered in the catalog. The new
tenant's ticket-selling Events site opens in your browser:
Refresh the Events Hub , and the new tenant now appears in the list.
3. Go back to the resource group and select the tenants1-mt server that holds the tenant databases.
The tenants1 database is a multi-tenant database in which the original three tenants, plus the first
tenant you added, are stored. It is configured as a 50 DTU Standard database.
The salixsalsa database holds the Salix Salsa dance venue as its only tenant. It is configured as a
Standard edition database with 50 DTUs by default.
Monitor the performance of the database
If the load generator has been running for several minutes, enough telemetry is available to look at the database
monitoring capabilities built into the Azure portal.
1. Browse to the tenants1-mt<user> server, and click tenants1 to view resource utilization for the
database that has four tenants in it. Each tenant is subject to a sporadic heavy load from the load
generator:
The DTU utilization chart nicely illustrates how a multi-tenant database can support an unpredictable
workload across many tenants. In this case, the load generator is applying a sporadic load of roughly 30
DTUs to each tenant. This load equates to 60% utilization of a 50 DTU database. Peaks that exceed 60%
are the result of load being applied to more than one tenant at the same time.
2. Browse to the tenants1-mt<user> server, and click the salixsalsa database. You can see the resource
utilization on this database that contains only one tenant.
The load generator is applying a similar load to each tenant, regardless of which database each tenant is in. With
only one tenant in the salixsalsa database, you can see that the database could sustain a much higher load than
the database with several tenants.
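The same DTU telemetry shown in the portal charts can also be pulled from a script. This is a hedged sketch using the Az modules; the resource group and server names are placeholders for the values from your deployment.
# Hedged sketch: retrieve recent DTU consumption for the multi-tenant database.
$db = Get-AzSqlDatabase -ResourceGroupName "wingtip-mt-<user>" `
        -ServerName "tenants1-mt-<user>" -DatabaseName "tenants1"

Get-AzMetric -ResourceId $db.ResourceId `
    -MetricName "dtu_consumption_percent" `
    -StartTime (Get-Date).AddHours(-1) -EndTime (Get-Date) `
    -TimeGrain 00:05:00 |
    Select-Object -ExpandProperty Data |
    Select-Object TimeStamp, Average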
Resource allocations vary by workload
Sometimes a multi-tenant database requires more resources for good performance than does a single-tenant
database, but not always. The optimal allocation of resources depends on the particular workload characteristics
for the tenants in your system.
The workloads generated by the load generator script are for illustration purposes only.
Additional resources
To learn about multi-tenant SaaS applications, see Design patterns for multi-tenant SaaS applications.
To learn about elastic pools, see:
Elastic pools help you manage and scale multiple databases in Azure SQL Database
Scaling out with Azure SQL Database
Next steps
In this tutorial you learned:
How to deploy the Wingtip Tickets SaaS Multi-tenant Database application.
About the servers and databases that make up the app.
How tenants are mapped to their data with the catalog.
How to provision new tenants into a multi-tenant database and a single-tenant database.
How to view pool utilization to monitor tenant activity.
How to delete sample resources to stop related billing.
Now try the Provision and catalog tutorial.
Provision and catalog new tenants in a SaaS
application using a sharded multi-tenant Azure SQL
Database
9/13/2022 • 12 minutes to read
Database pattern
This section, plus a few more that follow, discuss the concepts of the multi-tenant sharded database pattern.
In this multi-tenant sharded model, the table schemas inside each database include a tenant key in the primary
key of tables that store tenant data. The tenant key enables each individual database to store 0, 1, or many
tenants. The use of sharded databases makes it easy for the application system to support a very large number
of tenants. All the data for any one tenant is stored in one database. The large number of tenants are distributed
across the many sharded databases. A catalog database stores the mapping of each tenant to its database.
Isolation versus lower cost
A tenant that has a database all to itself enjoys the benefits of isolation. The tenant can have the database
restored to an earlier date without being restricted by the impact on other tenants. Database performance can
be tuned to optimize for the one tenant, again without having to compromise with other tenants. The tradeoff is
that isolation costs more than sharing a database with other tenants.
When a new tenant is provisioned, it can share a database with other tenants, or it can be placed into its own
new database. Later you can change your mind and move the tenant to the other arrangement.
Databases with multiple tenants and single tenants are mixed in the same SaaS application, to optimize cost or
isolation for each tenant.
Tenant catalog pattern
When you have two or more databases that each contain at least one tenant, the application must have a way to
discover which database stores the tenant of current interest. A catalog database stores this mapping.
Tenant key
For each tenant, the Wingtip application can derive a unique key, which is the tenant key. The app extracts the
tenant name from the webpage URL. The app hashes the name to obtain the key. The app uses the key to access
the catalog. The catalog cross-references information about the database in which the tenant is stored. The app
uses the database info to connect. Other tenant key schemes can also be used.
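The following sketch shows one possible hashing scheme of this kind. It is illustrative only; the exact normalization and hash used by the Wingtip scripts may differ.
# Hedged sketch: derive an integer tenant key from a tenant name.
function Get-TenantKey {
    param ([string] $TenantName)

    # Normalize the name before hashing so "Contoso Concert Hall" and
    # "contosoconcerthall" produce the same key.
    $normalized = $TenantName.Replace(" ", "").ToLowerInvariant()

    $sha256 = [System.Security.Cryptography.SHA256]::Create()
    $hash   = $sha256.ComputeHash([System.Text.Encoding]::UTF8.GetBytes($normalized))

    # Use the first four bytes of the hash as a 32-bit tenant key.
    return [System.BitConverter]::ToInt32($hash, 0)
}

Get-TenantKey -TenantName "Contoso Concert Hall"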
Using a catalog allows the name or location of a tenant database to be changed after provisioning without
disrupting the application. In a multi-tenant database model, the catalog accommodates moving a tenant
between databases.
Tenant metadata beyond location
The catalog can also indicate whether a tenant is offline for maintenance or other actions. And the catalog can be
extended to store additional tenant or database metadata, such as the following items:
The service tier or edition of a database.
The version of the database schema.
The tenant name and its SLA (service level agreement).
Information to enable application management, customer support, or devops processes.
The catalog can also be used to enable cross-tenant reporting, schema management, and data extract for
analytics purposes.
Elastic Database Client Library
In Wingtip, the catalog is implemented in the tenantcatalog database. The tenantcatalog is created using the
Shard Management features of the Elastic Database Client Library (EDCL). The library enables an application to
create, manage, and use a shard map that is stored in a database. A shard map cross-references the tenant key
with its shard, meaning its sharded database.
During tenant provisioning, EDCL functions can be used from applications or PowerShell scripts to create the
entries in the shard map. Later the EDCL functions can be used to connect to the correct database. The EDCL
caches connection information to minimize the traffic on the catalog database and speed up the process of
connecting.
IMPORTANT
Do not edit the data in the catalog database through direct access! Direct updates are not supported due to the high risk
of data corruption. Instead, edit the mapping data by using EDCL APIs only.
Tutorial begins
In this tutorial, you learn how to:
Provision a tenant into a multi-tenant database
Provision a tenant into a single-tenant database
Provision a batch of tenants into both multi-tenant and single-tenant databases
Register a database and tenant mapping in a catalog
Prerequisites
To complete this tutorial, make sure the following prerequisites are completed:
Azure PowerShell is installed. For details, see Getting started with Azure PowerShell
The Wingtip Tickets SaaS Multi-tenant Database app is deployed. To deploy in less than five minutes, see
Deploy and explore the Wingtip Tickets SaaS Multi-tenant Database application
Get the Wingtip scripts and source code:
The Wingtip Tickets SaaS Multi-tenant Database scripts and application source code are available in
the WingtipTicketsSaaS-MultitenantDB GitHub repo.
See the general guidance for steps to download and unblock the Wingtip scripts.
While the Azure portal shows the tenant databases, it doesn't let you see the tenants inside the shared database.
The full list of tenants can be seen in the Events Hub webpage of Wingtip, and by browsing the catalog.
Using Wingtip Tickets events hub page
Open the Events Hub page in the browser (http://events.wingtip-mt.<USER>.trafficmanager.net)
Using catalog database
The full list of tenants and the corresponding database for each is available in the catalog. A SQL view is
provided that joins the tenant name to the database name. The view nicely demonstrates the value of extending
the metadata that is stored in the catalog.
The SQL view is available in the tenantcatalog database.
The tenant name is stored in the Tenants table.
The database name is stored in the Shard Management tables.
1. In SQL Server Management Studio (SSMS), connect to the catalog server at catalog-mt-<USER>.database.windows.net , with Login = developer , and Password = P@ssword1
2. In the SSMS Object Explorer, browse to the views in the tenantcatalog database.
3. Right click on the view TenantsExtended and choose Select Top 1000 Rows . Note the mapping between
tenant name and database for the different tenants.
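The same view can be queried from a script. This is a hedged sketch, assuming the view lives in the dbo schema of the tenantcatalog database.
# Hedged sketch: list tenant-to-database mappings from the extended catalog view.
Import-Module SqlServer

Invoke-Sqlcmd -ServerInstance "catalog-mt-<USER>.database.windows.net" `
    -Database "tenantcatalog" -Username "developer" -Password "P@ssword1" `
    -Query "SELECT TOP (20) * FROM dbo.TenantsExtended;" |
    Format-Table -AutoSize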
Additional resources
Elastic database client library
How to Debug Scripts in Windows PowerShell ISE
Next steps
In this tutorial you learned how to:
Provision a single new tenant into a shared multi-tenant database and its own database
Provision a batch of additional tenants
Step through the details of provisioning tenants, and registering them into the catalog
Try the Performance monitoring tutorial.
Monitor and manage performance of sharded
multi-tenant Azure SQL Database in a multi-tenant
SaaS app
9/13/2022 • 10 minutes to read
The load generator applies a synthetic CPU-only load to every tenant database. The generator starts a job for
each tenant database, which calls a stored procedure periodically that generates the load. The load levels (in
DTUs), duration, and intervals are varied across all databases, simulating unpredictable tenant activity.
1. In the PowerShell ISE , open …\Learning Modules\Performance Monitoring and Management\Demo-
PerformanceMonitoringAndManagement.ps1. Keep this script open as you'll run several scenarios during
this tutorial.
2. Set $DemoScenario = 2 , Generate normal intensity load
3. Press F5 to apply a load to all your tenants.
Wingtip Tickets SaaS Multi-tenant Database is a SaaS app, and the real-world load on a SaaS app is typically
sporadic and unpredictable. To simulate this, the load generator produces a randomized load distributed across
all tenants. Several minutes are needed for the load pattern to emerge, so run the load generator for 3-5
minutes before attempting to monitor the load in the following sections.
IMPORTANT
The load generator is running as a series of jobs in a new PowerShell window. If you close the session, the load generator
stops. The load generator remains in a job-invoking state where it generates load on any new tenants that are
provisioned after the generator is started. Use Ctrl-C to stop invoking new jobs and exit the script. The load generator will
continue to run, but only on existing tenants.
Next steps
In this tutorial you learned how to:
Simulate usage on a sharded multi-tenant database by running a provided load generator
Monitor the database as it responds to the increase in load
Scale up the database in response to the increased database load
Provision a tenant into a single-tenant database
Additional resources
Azure automation
Run ad hoc analytics queries across multiple
databases (Azure SQL Database)
9/13/2022 • 7 minutes to read
SaaS applications can analyze the vast amount of tenant data that is stored centrally in the cloud. The analyses
reveal insights into the operation and usage of your application. These insights can guide feature development,
usability improvements, and other investments in your apps and services.
Accessing this data in a single multi-tenant database is easy, but not so easy when distributed at scale across
potentially thousands of databases. One approach is to use Elastic Query, which enables querying across a
distributed set of databases with common schema. These databases can be distributed across different resource
groups and subscriptions. Yet one common login must have access to extract data from all the databases. Elastic
Query uses a single head database in which external tables are defined that mirror tables or views in the
distributed (tenant) databases. Queries submitted to this head database are compiled to produce a distributed
query plan, with portions of the query pushed down to the tenant databases as needed. Elastic Query uses the
shard map in the catalog database to determine the location of all tenant databases. Setup and query are
straightforward using standard Transact-SQL, and support ad hoc querying from tools like Power BI and Excel.
By distributing queries across the tenant databases, Elastic Query provides immediate insight into live
production data. However, as Elastic Query pulls data from potentially many databases, query latency can
sometimes be higher than for equivalent queries submitted to a single multi-tenant database. Be sure to design
queries to minimize the data that is returned. Elastic Query is often best suited for querying small amounts of
real-time data, as opposed to building frequently used or complex analytics queries or reports. If queries do not
perform well, look at the execution plan to see what part of the query has been pushed down to the remote
database. And assess how much data is being returned. Queries that require complex analytical processing
might be better served by saving the extracted tenant data into a database that is optimized for analytics
queries. SQL Database and Azure Synapse Analytics could host such an analytics database.
This pattern for analytics is explained in the tenant analytics tutorial.
By using the catalog database as the external data source, queries are distributed to all databases
registered in the catalog when the query is run. Because server names are different for each deployment,
this initialization script gets the location of the catalog database by retrieving the current server
(@@servername) where the script is executed.
The external tables that reference tenant tables are defined with DISTRIBUTION =
SHARDED(VenueId) . This routes a query for a particular VenueId to the appropriate database and
improves performance for many scenarios as shown in the next section.
The local table VenueTypes is created and populated. This reference data table is common in all
tenant databases, so it can be represented here as a local table and populated with the common data. For
some queries, this may reduce the amount of data moved between the tenant databases and the
adhocreporting database.
If you include reference tables in this manner, be sure to update the table schema and data whenever you
update the tenant databases.
4. Press F5 to run the script and initialize the adhocreporting database.
Now you can run distributed queries, and gather insights across all tenants!
6. Now select the On which day were the most tickets sold? query, and press F5 .
This query does a bit more complex joining and aggregation. What's important to note is that most of the
processing is done remotely, and once again, we bring back only the rows we need, returning just a single
row for each venue's aggregate ticket sale count per day.
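To make the initialization described earlier more concrete, the following sketch shows the shape of the Elastic Query objects it creates. It is not the tutorial's initialization script: the credential, shard map name, server, and table definition are placeholders, and the real script derives the catalog location from @@servername rather than hard-coding it.
# Hedged sketch: define a shard-map-based external data source and a sharded
# external table in the adhocreporting database (all names are placeholders).
Import-Module SqlServer

$initSql = @"
CREATE MASTER KEY ENCRYPTION BY PASSWORD = '<StrongPassword>';

CREATE DATABASE SCOPED CREDENTIAL TenantDbCred
    WITH IDENTITY = 'developer', SECRET = 'P@ssword1';

CREATE EXTERNAL DATA SOURCE WtpTenantDbs WITH
(
    TYPE = SHARD_MAP_MANAGER,
    LOCATION = 'catalog-mt-<USER>.database.windows.net',
    DATABASE_NAME = 'tenantcatalog',
    SHARD_MAP_NAME = 'tenantcatalog',          -- assumed shard map name
    CREDENTIAL = TenantDbCred
);

CREATE EXTERNAL TABLE dbo.Venues
(
    VenueId   int           NOT NULL,
    VenueName nvarchar(128) NOT NULL
)
WITH (DATA_SOURCE = WtpTenantDbs, DISTRIBUTION = SHARDED(VenueId));
"@

Invoke-Sqlcmd -ServerInstance "catalog-mt-<USER>.database.windows.net" `
    -Database "adhocreporting" -Username "developer" -Password "P@ssword1" `
    -Query $initSql
With an external table like this in place, a query against dbo.Venues submitted to the adhocreporting database is compiled into a distributed plan and pushed down to the tenant databases, as described above.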
Next steps
In this tutorial you learned how to:
Run distributed queries across all tenant databases
Deploy an ad hoc reporting database and add schema to it to run distributed queries.
Now try the Tenant Analytics tutorial to explore extracting data to a separate analytics database for more
complex analytics processing.
Additional resources
Elastic Query
Manage schema in a SaaS application that uses
sharded multi-tenant databases
9/13/2022 • 7 minutes to read
Prerequisites
The Wingtip Tickets multi-tenant database app must already be deployed:
For instructions, see the first tutorial, which introduces the Wingtip Tickets SaaS multi-tenant database
app:
Deploy and explore a sharded multi-tenant application that uses Azure SQL Database.
The deploy process runs for less than five minutes.
You must have the sharded multi-tenant version of Wingtip installed. The versions for Standalone and
Database per tenant do not support this tutorial.
The latest version of SQL Server Management Studio (SSMS) must be installed. Download and Install
SSMS.
Azure PowerShell must be installed. For details, see Getting started with Azure PowerShell.
NOTE
This tutorial uses features of the Azure SQL Database service that are in a limited preview (Elastic Database jobs). If you
wish to do this tutorial, provide your subscription ID to [email protected] with subject=Elastic Jobs Preview.
After you receive confirmation that your subscription has been enabled, download and install the latest pre-release jobs
cmdlets. This preview is limited, so contact [email protected] for related questions or support.
Additional resources
Managing scaled-out cloud databases
Next steps
In this tutorial you learned how to:
Create a job agent to run T-SQL jobs across multiple databases
Update reference data in all tenant databases
Create an index on a table in all tenant databases
Next, try the Ad hoc reporting tutorial to explore running distributed queries across tenant databases.
Cross-tenant analytics using extracted data - multi-
tenant app
9/13/2022 • 13 minutes to read
Finally, the star-schema tables are queried. The query results are displayed visually to highlight insights into
tenant behavior and their use of the application. With this star-schema, you can run queries that help discover
items like the following:
Who is buying tickets and from which venue.
Hidden patterns and trends in the following areas:
The sales of tickets.
The relative popularity of each venue.
Understanding how consistently each tenant is using the service provides an opportunity to create service plans
to cater to their needs. This tutorial provides basic examples of insights that can be gleaned from tenant data.
Setup
Prerequisites
To complete this tutorial, make sure the following prerequisites are met:
The Wingtip Tickets SaaS Multi-tenant Database application is deployed. To deploy in less than five minutes,
see Deploy and explore the Wingtip Tickets SaaS Multi-tenant Database application
The Wingtip SaaS scripts and application source code are downloaded from GitHub. Be sure to unblock the
zip file before extracting its contents. Check out the general guidance for steps to download and unblock the
Wingtip Tickets SaaS scripts.
Power BI Desktop is installed. Download Power BI Desktop
The batch of additional tenants has been provisioned. See the Provision tenants tutorial .
A job agent and job agent database have been created. See the appropriate steps in the Schema
management tutorial .
Create data for the demo
In this tutorial, analysis is performed on ticket sales data. In the current step, you generate ticket data for all the
tenants. Later this data is extracted for analysis. Ensure you have provisioned the batch of tenants as described
earlier, so that you have a meaningful amount of data. A sufficiently large amount of data can expose a range of
different ticket purchasing patterns.
1. In PowerShell ISE , open …\Learning Modules\Operational Analytics\Tenant Analytics\Demo-
TenantAnalytics.ps1, and set the following value:
$DemoScenario = 1 Purchase tickets for events at all venues
2. Press F5 to run the script and create ticket purchasing history for every event in each venue. The script runs
for several minutes to generate tens of thousands of tickets.
Deploy the analytics store
Often there are numerous transactional sharded databases that together hold all tenant data. You must
aggregate the tenant data from the sharded databases into one analytics store. The aggregation enables efficient
query of the data. In this tutorial, an Azure SQL Database database is used to store the aggregated data.
In the following steps, you deploy the analytics store, which is called tenantanalytics . You also deploy
predefined tables that are populated later in the tutorial:
1. In PowerShell ISE, open …\Learning Modules\Operational Analytics\Tenant Analytics\Demo-
TenantAnalytics.ps1
2. Set the $DemoScenario variable in the script to match your choice of analytics store. For learning purposes,
using the database without columnstore is recommended.
To use SQL Database without columnstore, set $DemoScenario = 2
To use SQL Database with columnstore, set $DemoScenario = 3
3. Press F5 to run the demo script (that calls the Deploy-TenantAnalytics<XX>.ps1 script) which creates the
tenant analytics store.
Now that you have deployed the application and filled it with interesting tenant data, use SQL Server
Management Studio (SSMS) to connect to the tenants1-mt-<User> and catalog-mt-<User> servers using Login
= developer, Password = P@ssword1.
In the Object Explorer, perform the following steps:
1. Expand the tenants1-mt-<User> server.
2. Expand the Databases node, and see tenants1 database containing multiple tenants.
3. Expand the catalog-mt-<User> server.
4. Verify that you see the analytics store and the jobaccount database.
See the following database items in the SSMS Object Explorer by expanding the analytics store node:
Tables TicketsRawData and EventsRawData hold raw extracted data from the tenant databases.
The star-schema tables are fact_Tickets , dim_Customers , dim_Venues , dim_Events , and dim_Dates .
The sp_ShredRawExtractedData stored procedure is used to populate the star-schema tables from the raw
data tables.
Data extraction
Create target groups
Before proceeding, ensure you have deployed the job account and jobaccount database. In the next set of steps,
Elastic Jobs is used to extract data from the sharded tenants database, and to store the data in the analytics
store. Then the second job shreds the data and stores it into tables in the star-schema. These two jobs run
against two different target groups, namely TenantGroup and AnalyticsGroup . The extract job runs against
the TenantGroup, which contains all the tenant databases. The shredding job runs against the AnalyticsGroup,
which contains just the analytics store. Create the target groups by using the following steps:
1. In SSMS, connect to the jobaccount database in catalog-mt-<User>.
2. In SSMS, open …\Learning Modules\Operational Analytics\Tenant Analytics\ TargetGroups.sql
3. Modify the @User variable at the top of the script, replacing <User> with the user value used when you
deployed the Wingtip Tickets SaaS Multi-tenant Database application.
4. Press F5 to run the script that creates the two target groups.
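The TargetGroups.sql script is not reproduced here, but the calls it makes have roughly the following shape. This is a hedged sketch using the Elastic Jobs stored procedures; the actual script's group membership (for example, targeting the server or a shard map rather than a single database) may differ.
# Hedged sketch: create the two target groups in the jobaccount database.
Import-Module SqlServer

$targetGroupSql = @"
EXEC jobs.sp_add_target_group @target_group_name = 'TenantGroup';
EXEC jobs.sp_add_target_group_member
     @target_group_name = 'TenantGroup',
     @target_type       = 'SqlDatabase',
     @server_name       = 'tenants1-mt-<User>.database.windows.net',
     @database_name     = 'tenants1';

EXEC jobs.sp_add_target_group @target_group_name = 'AnalyticsGroup';
EXEC jobs.sp_add_target_group_member
     @target_group_name = 'AnalyticsGroup',
     @target_type       = 'SqlDatabase',
     @server_name       = 'catalog-mt-<User>.database.windows.net',
     @database_name     = 'tenantanalytics';
"@

Invoke-Sqlcmd -ServerInstance "catalog-mt-<User>.database.windows.net" `
    -Database "jobaccount" -Username "developer" -Password "P@ssword1" `
    -Query $targetGroupSql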
Extract raw data from all tenants
Transactions might occur more frequently for ticket and customer data than for event and venue data. Therefore,
consider extracting ticket and customer data separately and more frequently than you extract event and venue
data. In this section, you define and schedule two separate jobs:
Extract ticket and customer data.
Extract event and venue data.
Each job extracts its data, and posts it into the analytics store. There a separate job shreds the extracted data into
the analytics star-schema.
1. In SSMS, connect to the jobaccount database in catalog-mt-<User> server.
2. In SSMS, open ...\Learning Modules\Operational Analytics\Tenant Analytics\ExtractTickets.sql.
3. Modify @User at the top of the script, and replace <User> with the user name used when you deployed the
Wingtip Tickets SaaS Multi-tenant Database application.
4. Press F5 to run the script that creates and runs the job that extracts tickets and customers data from each
tenant database. The job saves the data into the analytics store.
5. Query the TicketsRawData table in the tenantanalytics database, to ensure that the table is populated with
tickets information from all tenants.
Repeat the preceding steps, except this time replace \ExtractTickets.sql with \ExtractVenuesEvents.sql in
step 2.
Successfully running the job populates the EventsRawData table in the analytics store with new events and
venues information from all tenants.
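If you prefer to verify the extracts from a script rather than SSMS, a quick row count over the raw data tables is enough. This is a minimal sketch, assuming the analytics store is on the catalog server with the demo credentials; adjust the names to your deployment.
# Hedged sketch: confirm that the extract jobs populated the raw data tables.
Import-Module SqlServer

$rawCountSql = @"
SELECT 'TicketsRawData' AS TableName, COUNT(*) AS TotalRows FROM dbo.TicketsRawData
UNION ALL
SELECT 'EventsRawData', COUNT(*) FROM dbo.EventsRawData;
"@

Invoke-Sqlcmd -ServerInstance "catalog-mt-<User>.database.windows.net" `
    -Database "tenantanalytics" -Username "developer" -Password "P@ssword1" `
    -Query $rawCountSql | Format-Table -AutoSize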
Data reorganization
Shred extracted data to populate star-schema tables
The next step is to shred the extracted raw data into a set of tables that are optimized for analytics queries. A
star-schema is used. A central fact table holds individual ticket sales records. Dimension tables are populated
with data about venues, events, customers, and purchase dates.
In this section of the tutorial, you define and run a job that merges the extracted raw data with the data in the
star-schema tables. After the merge job is finished, the raw data is deleted, leaving the tables ready to be
populated by the next tenant data extract job.
1. In SSMS, connect to the jobaccount database in catalog-mt-<User>.
2. In SSMS, open …\Learning Modules\Operational Analytics\Tenant Analytics\ShredRawExtractedData.sql.
3. Press F5 to run the script to define a job that calls the sp_ShredRawExtractedData stored procedure in the
analytics store.
4. Allow enough time for the job to run successfully.
Check the Lifecycle column of the jobs.jobs_execution table for the status of the job. Ensure that the job
Succeeded before proceeding. A successful run displays data similar to the following chart:
Data exploration
Visualize tenant data
The data in the star-schema table provides all the ticket sales data needed for your analysis. To make it easier to
see trends in large data sets, you need to visualize it graphically. In this section, you learn how to use Power BI
to manipulate and visualize the tenant data you have extracted and organized.
Use the following steps to connect to Power BI, and to import the views you created earlier:
1. Launch Power BI desktop.
2. From the Home ribbon, select Get Data , and select More… from the menu.
3. In the Get Data window, select Azure SQL Database.
4. In the database login window, enter your server name (catalog-mt-<User>.database.windows.net). Select
Import for Data Connectivity Mode , and then click OK.
5. Select Database in the left pane, then enter user name = developer, and enter password = P@ssword1.
Click Connect .
6. In the Navigator pane, under the analytics database, select the star-schema tables: fact_Tickets,
dim_Events, dim_Venues, dim_Customers and dim_Dates. Then select Load .
Congratulations! You have successfully loaded the data into Power BI. Now you can start exploring interesting
visualizations to help gain insights into your tenants. Next you walk through how analytics can enable you to
provide data-driven recommendations to the Wingtip Tickets business team. The recommendations can help to
optimize the business model and customer experience.
You start by analyzing ticket sales data to see the variation in usage across the venues. Select the following
options in Power BI to plot a bar chart of the total number of tickets sold by each venue. Due to random
variation in the ticket generator, your results may be different.
The preceding plot confirms that the number of tickets sold by each venue varies. Venues that sell more tickets
are using your service more heavily than venues that sell fewer tickets. There may be an opportunity here to
tailor resource allocation according to different tenant needs.
You can further analyze the data to see how ticket sales vary over time. Select the following options in Power BI
to plot the total number of tickets sold each day for a period of 60 days.
The preceding chart shows that ticket sales spike for some venues. These spikes reinforce the idea that some
venues might be consuming system resources disproportionately. So far there is no obvious pattern in when the
spikes occur.
Next you want to further investigate the significance of these peak sale days. When do these peaks occur after
tickets go on sale? To plot tickets sold per day, select the following options in Power BI.
The preceding plot shows that some venues sell a lot of tickets on the first day of sale. As soon as tickets go on
sale at these venues, there seems to be a mad rush. This burst of activity by a few venues might impact the
service for other tenants.
You can drill into the data again to see if this mad rush is true for all events hosted by these venues. In previous
plots, you observed that Contoso Concert Hall sells a lot of tickets, and that Contoso also has a spike in ticket
sales on certain days. Play around with Power BI options to plot cumulative ticket sales for Contoso Concert Hall,
focusing on sale trends for each of its events. Do all events follow the same sale pattern?
The preceding plot for Contoso Concert Hall shows that the mad rush does not happen for all events. Play
around with the filter options to see sale trends for other venues.
The insights into ticket selling patterns might lead Wingtip Tickets to optimize their business model. Instead of
charging all tenants equally, perhaps Wingtip should introduce service tiers with different compute sizes. Larger
venues that need to sell more tickets per day could be offered a higher tier with a higher service level
agreement (SLA). Those venues could have their databases placed in a pool with higher per-database resource
limits. Each service tier could have an hourly sales allocation, with additional fees charged for exceeding the
allocation. Larger venues that have periodic bursts of sales would benefit from the higher tiers, and Wingtip
Tickets can monetize their service more efficiently.
Meanwhile, some Wingtip Tickets customers complain that they struggle to sell enough tickets to justify the
service cost. Perhaps in these insights there is an opportunity to boost ticket sales for underperforming venues.
Higher sales would increase the perceived value of the service. Right click fact_Tickets and select New
measure . Enter the following expression for the new measure called AverageTicketsSold :
AverageTicketsSold = DIVIDE(DIVIDE(COUNTROWS(fact_Tickets),DISTINCT(dim_Venues[VenueCapacity]))*100,
COUNTROWS(dim_Events))
Select the following visualization options to plot the percentage tickets sold by each venue to determine their
relative success.
The preceding plot shows that even though most venues sell more than 80% of their tickets, some are
struggling to fill more than half the seats. Play around with the Values Well to select maximum or minimum
percentage of tickets sold for each venue.
Earlier you deepened your analysis to discover that ticket sales tend to follow predictable patterns. This
discovery might let Wingtip Tickets help underperforming venues boost ticket sales by recommending dynamic
pricing. This discovery could reveal an opportunity to employ machine learning techniques to predict ticket
sales for each event. Predictions could also be made for the impact on revenue of offering discounts on ticket
sales. Power BI Embedded could be integrated into an event management application. The integration could help
visualize predicted sales and the effect of different discounts. The application could help devise an optimum
discount to be applied directly from the analytics display.
You have observed trends in tenant data from the Wingtip Tickets SaaS Multi-tenant Database application. You
can contemplate other ways the app can inform business decisions for SaaS application vendors. Vendors can
better cater to the needs of their tenants. Hopefully this tutorial has equipped you with tools necessary to
perform analytics on tenant data to empower your businesses to make data-driven decisions.
Next steps
In this tutorial, you learned how to:
Deploy a tenant analytics database with pre-defined star-schema tables
Use elastic jobs to extract data from all the tenant databases
Merge the extracted data into tables in a star-schema designed for analytics
Query an analytics database
Use Power BI for data visualization to observe trends in tenant data
Congratulations!
Additional resources
Additional tutorials that build upon the Wingtip SaaS application.
Elastic Jobs.
Cross-tenant analytics using extracted data - single-tenant app
Azure CLI samples for Azure SQL Database and
SQL Managed Instance
9/13/2022 • 2 minutes to read
Prerequisites
Use the Bash environment in Azure Cloud Shell. For more information, see Azure Cloud Shell Quickstart -
Bash.
If you prefer to run CLI reference commands locally, install the Azure CLI. If you are running on Windows
or macOS, consider running Azure CLI in a Docker container. For more information, see How to run the
Azure CLI in a Docker container.
If you're using a local installation, sign in to the Azure CLI by using the az login command. To finish
the authentication process, follow the steps displayed in your terminal. For additional sign-in
options, see Sign in with the Azure CLI.
When you're prompted, install Azure CLI extensions on first use. For more information about
extensions, see Use extensions with the Azure CLI.
Run az version to find the version and dependent libraries that are installed. To upgrade to the
latest version, run az upgrade.
Samples
Azure SQL Database
Azure SQL Managed Instance
The following table includes links to Azure CLI script examples to manage single and pooled databases in Azure
SQL Database.
AREA | DESCRIPTION
Create databases
Create pooled databases | Creates elastic pools, moves pooled databases, and changes compute sizes.
Scale databases
Scale pooled database | Scales a SQL elastic pool to a different compute size.
Configure geo-replication
Configure failover group | Configures a failover group for a group of databases and fails over databases to the secondary server.
Single database | Creates a database and a failover group, adds the database to the failover group, then tests failover to the secondary server.
Copy a database to a new server | Creates a copy of an existing database in SQL Database in a new server.
Import a database from a BACPAC file | Imports a database to SQL Database from a BACPAC file.
Prerequisites
Use the Bash environment in Azure Cloud Shell. For more information, see Azure Cloud Shell Quickstart -
Bash.
If you prefer to run CLI reference commands locally, install the Azure CLI. If you are running on Windows
or macOS, consider running Azure CLI in a Docker container. For more information, see How to run the
Azure CLI in a Docker container.
If you're using a local installation, sign in to the Azure CLI by using the az login command. To finish
the authentication process, follow the steps displayed in your terminal. For additional sign-in
options, see Sign in with the Azure CLI.
When you're prompted, install Azure CLI extensions on first use. For more information about
extensions, see Use extensions with the Azure CLI.
Run az version to find the version and dependent libraries that are installed. To upgrade to the
latest version, run az upgrade.
Sample script
Launch Azure Cloud Shell
The Azure Cloud Shell is a free interactive shell that you can use to run the steps in this article. It has common
Azure tools preinstalled and configured to use with your account.
To open the Cloud Shell, select Try it from the upper right corner of a code block. You can also launch Cloud
Shell in a separate browser tab by going to https://shell.azure.com.
When Cloud Shell opens, verify that Bash is selected for your environment. Subsequent sessions will use Azure
CLI in a Bash environment. Select Copy to copy the blocks of code, paste it into the Cloud Shell, and press Enter
to run it.
Sign in to Azure
Cloud Shell is automatically authenticated under the initial account you signed in with. Use the following script to
sign in using a different subscription, replacing <Subscription ID> with your Azure Subscription ID. If you don't
have an Azure subscription, create an Azure free account before you begin.
subscription="<subscriptionId>" # add subscription here
echo "Using resource group $resourceGroup with login: $login, password: $password..."
echo "Creating $resourceGroup in $location..."
az group create --name $resourceGroup --location "$location" --tags $tag
echo "Creating $server in $location..."
az sql server create --name $server --resource-group $resourceGroup --location "$location" --admin-user $login --admin-password $password
echo "Configuring firewall..."
az sql server firewall-rule create --resource-group $resourceGroup --server $server -n AllowYourIp --start-ip-address $startIp --end-ip-address $endIp
echo "Creating $database on $server..."
az sql db create --resource-group $resourceGroup --server $server --name $database --sample-name AdventureWorksLT --edition GeneralPurpose --family Gen5 --capacity 2 --zone-redundant true # zone redundancy is only supported on premium and business critical service tiers
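To confirm what was provisioned, one option is to inspect the new database afterwards. The following is a hedged sketch using az sql db show; the --query field names are illustrative and may vary by CLI version:

# Show selected properties of the new database (field names are illustrative)
az sql db show --resource-group $resourceGroup --server $server --name $database --query "{name:name, status:status, zoneRedundant:zoneRedundant}" --output table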
Clean up resources
Use the following command to remove the resource group and all resources associated with it using the az
group delete command - unless you have an ongoing need for these resources. Some of these resources may
take a while to create, as well as to delete.
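The delete command itself isn't reproduced here; based on the cleanup step shown later in these samples, it is:

az group delete --name $resourceGroup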
Sample reference
This script uses the following commands. Each command in the table links to command-specific documentation.
Prerequisites
Use the Bash environment in Azure Cloud Shell. For more information, see Azure Cloud Shell Quickstart -
Bash.
If you prefer to run CLI reference commands locally, install the Azure CLI. If you are running on Windows
or macOS, consider running Azure CLI in a Docker container. For more information, see How to run the
Azure CLI in a Docker container.
If you're using a local installation, sign in to the Azure CLI by using the az login command. To finish
the authentication process, follow the steps displayed in your terminal. For additional sign-in
options, see Sign in with the Azure CLI.
When you're prompted, install Azure CLI extensions on first use. For more information about
extensions, see Use extensions with the Azure CLI.
Run az version to find the version and dependent libraries that are installed. To upgrade to the
latest version, run az upgrade.
Sample script
Launch Azure Cloud Shell
The Azure Cloud Shell is a free interactive shell that you can use to run the steps in this article. It has common
Azure tools preinstalled and configured to use with your account.
To open the Cloud Shell, select Try it from the upper right corner of a code block. You can also launch Cloud
Shell in a separate browser tab by going to https://shell.azure.com.
When Cloud Shell opens, verify that Bash is selected for your environment. Subsequent sessions will use Azure
CLI in a Bash environment. Select Copy to copy the blocks of code, paste it into the Cloud Shell, and press Enter
to run it.
Sign in to Azure
Cloud Shell is automatically authenticated under the initial account you signed in with. Use the following script to
sign in using a different subscription, replacing <Subscription ID> with your Azure Subscription ID. If you don't
have an Azure subscription, create an Azure free account before you begin.
subscription="<subscriptionId>" # add subscription here
# Variable block
let "randomIdentifier=$RANDOM*$RANDOM"
location="East US"
resourceGroup="msdocs-azuresql-rg-$randomIdentifier"
tag="move-database-between-pools"
server="msdocs-azuresql-server-$randomIdentifier"
database="msdocsazuresqldb$randomIdentifier"
login="azureuser"
password="Pa$$w0rD-$randomIdentifier"
pool="msdocs-azuresql-pool-$randomIdentifier"
secondaryPool="msdocs-azuresql-secondary-pool-$randomIdentifier"
echo "Using resource group $resourceGroup with login: $login, password: $password..."
Clean up resources
Use the following command to remove the resource group and all resources associated with it using the az
group delete command - unless you have an ongoing need for these resources. Some of these resources may
take a while to create, as well as to delete.
Sample reference
This script uses the following commands. Each command in the table links to command-specific documentation.
Next steps
For more information on Azure CLI, see Azure CLI documentation.
Additional SQL Database CLI script samples can be found in the Azure SQL Database documentation.
Monitor and scale a single database in Azure SQL
Database using the Azure CLI
9/13/2022 • 3 minutes to read • Edit Online
Prerequisites
Use the Bash environment in Azure Cloud Shell. For more information, see Azure Cloud Shell Quickstart -
Bash.
If you prefer to run CLI reference commands locally, install the Azure CLI. If you are running on Windows
or macOS, consider running Azure CLI in a Docker container. For more information, see How to run the
Azure CLI in a Docker container.
If you're using a local installation, sign in to the Azure CLI by using the az login command. To finish
the authentication process, follow the steps displayed in your terminal. For additional sign-in
options, see Sign in with the Azure CLI.
When you're prompted, install Azure CLI extensions on first use. For more information about
extensions, see Use extensions with the Azure CLI.
Run az version to find the version and dependent libraries that are installed. To upgrade to the
latest version, run az upgrade.
Sample script
Launch Azure Cloud Shell
The Azure Cloud Shell is a free interactive shell that you can use to run the steps in this article. It has common
Azure tools preinstalled and configured to use with your account.
To open the Cloud Shell, select Try it from the upper right corner of a code block. You can also launch Cloud
Shell in a separate browser tab by going to https://shell.azure.com.
When Cloud Shell opens, verify that Bash is selected for your environment. Subsequent sessions will use Azure
CLI in a Bash environment. Select Copy to copy the blocks of code, paste it into the Cloud Shell, and press Enter
to run it.
Sign in to Azure
Cloud Shell is automatically authenticated under the initial account you signed in with. Use the following script to
sign in using a different subscription, replacing <Subscription ID> with your Azure Subscription ID. If you don't
have an Azure subscription, create an Azure free account before you begin.
subscription="<subscriptionId>" # add subscription here
# Variable block
let "randomIdentifier=$RANDOM*$RANDOM"
location="East US"
resourceGroup="msdocs-azuresql-rg-$randomIdentifier"
tag="monitor-and-scale-database"
server="msdocs-azuresql-server-$randomIdentifier"
database="msdocsazuresqldb$randomIdentifier"
login="azureuser"
password="Pa$$w0rD-$randomIdentifier"
echo "Using resource group $resourceGroup with login: $login, password: $password..."
echo "Scaling up $database..." # create command executes update if database already exists
az sql db create --resource-group $resourceGroup --server $server --name $database --edition GeneralPurpose --family Gen5 --capacity 4
TIP
Use az sql db op list to get a list of operations performed on the database, and use az sql db op cancel to cancel an
update operation on the database.
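The monitoring portion of the script isn't shown above. As a hedged sketch, you could check recent usage with az monitor metrics list before deciding to scale; the metric name and interval below are illustrative:

# Look up the database's resource ID, then list its recent CPU usage
dbResourceId=$(az sql db show --resource-group $resourceGroup --server $server --name $database --query id --output tsv)
az monitor metrics list --resource $dbResourceId --metric cpu_percent --interval PT5M --output table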
Clean up resources
Use the following command to remove the resource group and all resources associated with it using the az
group delete command - unless you have an ongoing need for these resources. Some of these resources may
take a while to create, as well as to delete.
Sample reference
This script uses the following commands. Each command in the table links to command-specific documentation.
Next steps
For more information on the Azure CLI, see Azure CLI documentation.
Additional CLI script samples can be found in Azure CLI sample scripts.
Scale an elastic pool in Azure SQL Database using
the Azure CLI
9/13/2022 • 3 minutes to read • Edit Online
Prerequisites
Use the Bash environment in Azure Cloud Shell. For more information, see Azure Cloud Shell Quickstart -
Bash.
If you prefer to run CLI reference commands locally, install the Azure CLI. If you are running on Windows
or macOS, consider running Azure CLI in a Docker container. For more information, see How to run the
Azure CLI in a Docker container.
If you're using a local installation, sign in to the Azure CLI by using the az login command. To finish
the authentication process, follow the steps displayed in your terminal. For additional sign-in
options, see Sign in with the Azure CLI.
When you're prompted, install Azure CLI extensions on first use. For more information about
extensions, see Use extensions with the Azure CLI.
Run az version to find the version and dependent libraries that are installed. To upgrade to the
latest version, run az upgrade.
Sample script
Launch Azure Cloud Shell
The Azure Cloud Shell is a free interactive shell that you can use to run the steps in this article. It has common
Azure tools preinstalled and configured to use with your account.
To open the Cloud Shell, select Try it from the upper right corner of a code block. You can also launch Cloud
Shell in a separate browser tab by going to https://shell.azure.com.
When Cloud Shell opens, verify that Bash is selected for your environment. Subsequent sessions will use Azure
CLI in a Bash environment. Select Copy to copy the blocks of code, paste it into the Cloud Shell, and press Enter
to run it.
Sign in to Azure
Cloud Shell is automatically authenticated under the initial account you signed in with. Use the following script to
sign in using a different subscription, replacing <Subscription ID> with your Azure Subscription ID. If you don't
have an Azure subscription, create an Azure free account before you begin.
subscription="<subscriptionId>" # add subscription here
# Variable block
let "randomIdentifier=$RANDOM*$RANDOM"
location="East US"
resourceGroup="msdocs-azuresql-rg-$randomIdentifier"
tag="scale-pool"
server="msdocs-azuresql-server-$randomIdentifier"
database="msdocsazuresqldb$randomIdentifier"
databaseAdditional="msdocs-azuresql-additional-db-$randomIdentifier"
login="azureuser"
password="Pa$$w0rD-$randomIdentifier"
pool="msdocs-azuresql-pool-$randomIdentifier"
echo "Using resource group $resourceGroup with login: $login, password: $password..."
Clean up resources
Use the following command to remove the resource group and all resources associated with it using the az
group delete command - unless you have an ongoing need for these resources. Some of these resources may
take a while to create, as well as to delete.
Sample reference
This script uses the following commands. Each command in the table links to command-specific documentation.
Next steps
For more information on the Azure CLI, see Azure CLI documentation.
Additional SQL Database CLI script samples can be found in the Azure SQL Database documentation.
Configure active geo-replication for a single
database in Azure SQL Database using the Azure
CLI
9/13/2022 • 3 minutes to read • Edit Online
Prerequisites
Use the Bash environment in Azure Cloud Shell. For more information, see Azure Cloud Shell Quickstart -
Bash.
If you prefer to run CLI reference commands locally, install the Azure CLI. If you are running on Windows
or macOS, consider running Azure CLI in a Docker container. For more information, see How to run the
Azure CLI in a Docker container.
If you're using a local installation, sign in to the Azure CLI by using the az login command. To finish
the authentication process, follow the steps displayed in your terminal. For additional sign-in
options, see Sign in with the Azure CLI.
When you're prompted, install Azure CLI extensions on first use. For more information about
extensions, see Use extensions with the Azure CLI.
Run az version to find the version and dependent libraries that are installed. To upgrade to the
latest version, run az upgrade.
Sample script
Launch Azure Cloud Shell
The Azure Cloud Shell is a free interactive shell that you can use to run the steps in this article. It has common
Azure tools preinstalled and configured to use with your account.
To open the Cloud Shell, select Try it from the upper right corner of a code block. You can also launch Cloud
Shell in a separate browser tab by going to https://shell.azure.com.
When Cloud Shell opens, verify that Bash is selected for your environment. Subsequent sessions will use Azure
CLI in a Bash environment. Select Copy to copy the blocks of code, paste it into the Cloud Shell, and press Enter
to run it.
Sign in to Azure
Cloud Shell is automatically authenticated under the initial account you signed in with. Use the following script to
sign in using a different subscription, replacing <Subscription ID> with your Azure Subscription ID. If you don't
have an Azure subscription, create an Azure free account before you begin.
subscription="<subscriptionId>" # add subscription here
# Variable block
let "randomIdentifier=$RANDOM*$RANDOM"
location="East US"
resourceGroup="msdocs-azuresql-rg-$randomIdentifier"
tag="setup-geodr-and-failover-single-database"
server="msdocs-azuresql-server-$randomIdentifier"
database="msdocsazuresqldb$randomIdentifier"
login="azureuser"
password="Pa$$w0rD-$randomIdentifier"
failoverResourceGroup="msdocs-azuresql-failover-rg-$randomIdentifier"
failoverLocation="Central US"
secondaryServer="msdocs-azuresql-secondary-server-$randomIdentifier"
echo "Using resource group $resourceGroup with login: $login, password: $password..."
Clean up resources
Use the following command to remove the resource group and all resources associated with it using the az
group delete command - unless you have an ongoing need for these resources. Some of these resources may
take a while to create, as well as to delete.
az group delete --name $resourceGroup
Sample reference
This script uses the following commands. Each command in the table links to command specific documentation.
Next steps
For more information on Azure CLI, see Azure CLI documentation.
Additional SQL Database CLI script samples can be found in the Azure SQL Database documentation.
Configure active geo-replication for a pooled
database in Azure SQL Database using the Azure
CLI
9/13/2022 • 3 minutes to read • Edit Online
Prerequisites
Use the Bash environment in Azure Cloud Shell. For more information, see Azure Cloud Shell Quickstart -
Bash.
If you prefer to run CLI reference commands locally, install the Azure CLI. If you are running on Windows
or macOS, consider running Azure CLI in a Docker container. For more information, see How to run the
Azure CLI in a Docker container.
If you're using a local installation, sign in to the Azure CLI by using the az login command. To finish
the authentication process, follow the steps displayed in your terminal. For additional sign-in
options, see Sign in with the Azure CLI.
When you're prompted, install Azure CLI extensions on first use. For more information about
extensions, see Use extensions with the Azure CLI.
Run az version to find the version and dependent libraries that are installed. To upgrade to the
latest version, run az upgrade.
Sample script
Launch Azure Cloud Shell
The Azure Cloud Shell is a free interactive shell that you can use to run the steps in this article. It has common
Azure tools preinstalled and configured to use with your account.
To open the Cloud Shell, select Try it from the upper right corner of a code block. You can also launch Cloud
Shell in a separate browser tab by going to https://shell.azure.com.
When Cloud Shell opens, verify that Bash is selected for your environment. Subsequent sessions will use Azure
CLI in a Bash environment. Select Copy to copy the blocks of code, paste it into the Cloud Shell, and press Enter
to run it.
Sign in to Azure
Cloud Shell is automatically authenticated under the initial account you signed in with. Use the following script to
sign in using a different subscription, replacing <Subscription ID> with your Azure Subscription ID. If you don't
have an Azure subscription, create an Azure free account before you begin.
subscription="<subscriptionId>" # add subscription here
# Variable block
let "randomIdentifier=$RANDOM*$RANDOM"
location="East US"
resourceGroup="msdocs-azuresql-rg-$randomIdentifier"
tag="setup-geodr-and-failover-elastic-pool"
server="msdocs-azuresql-server-$randomIdentifier"
database="msdocsazuresqldb$randomIdentifier"
login="azureuser"
password="Pa$$w0rD-$randomIdentifier"
pool="pool-$randomIdentifier"
failoverLocation="Central US"
failoverResourceGroup="msdocs-azuresql-failover-rg-$randomIdentifier"
secondaryServer="msdocs-azuresql-secondary-server-$randomIdentifier"
secondaryPool="msdocs-azuresql-secondary-pool-$randomIdentifier"
echo "Using resource group $resourceGroup with login: $login, password: $password..."
Clean up resources
Use the following command to remove the resource group and all resources associated with it using the az
group delete command - unless you have an ongoing need for these resources. Some of these resources may
take a while to create, as well as to delete.
az group delete --name $resourceGroup
az group delete --name $failoverResourceGroup
Sample reference
This script uses the following commands. Each command in the table links to command specific documentation.
Next steps
For more information on Azure CLI, see Azure CLI documentation.
Additional SQL Database CLI script samples can be found in the Azure SQL Database documentation.
Configure a failover group for a group of databases
in Azure SQL Database using the Azure CLI
9/13/2022 • 3 minutes to read • Edit Online
Prerequisites
Use the Bash environment in Azure Cloud Shell. For more information, see Azure Cloud Shell Quickstart -
Bash.
If you prefer to run CLI reference commands locally, install the Azure CLI. If you are running on Windows
or macOS, consider running Azure CLI in a Docker container. For more information, see How to run the
Azure CLI in a Docker container.
If you're using a local installation, sign in to the Azure CLI by using the az login command. To finish
the authentication process, follow the steps displayed in your terminal. For additional sign-in
options, see Sign in with the Azure CLI.
When you're prompted, install Azure CLI extensions on first use. For more information about
extensions, see Use extensions with the Azure CLI.
Run az version to find the version and dependent libraries that are installed. To upgrade to the
latest version, run az upgrade.
Sample script
Launch Azure Cloud Shell
The Azure Cloud Shell is a free interactive shell that you can use to run the steps in this article. It has common
Azure tools preinstalled and configured to use with your account.
To open the Cloud Shell, select Try it from the upper right corner of a code block. You can also launch Cloud
Shell in a separate browser tab by going to https://shell.azure.com.
When Cloud Shell opens, verify that Bash is selected for your environment. Subsequent sessions will use Azure
CLI in a Bash environment. Select Copy to copy the blocks of code, paste it into the Cloud Shell, and press Enter
to run it.
Sign in to Azure
Cloud Shell is automatically authenticated under the initial account you signed in with. Use the following script to
sign in using a different subscription, replacing <Subscription ID> with your Azure Subscription ID. If you don't
have an Azure subscription, create an Azure free account before you begin.
# Variable block
let "randomIdentifier=$RANDOM*$RANDOM"
location="East US"
resourceGroup="msdocs-azuresql-rg-$randomIdentifier"
tag="setup-geodr-and-failover-database-failover-group"
server="msdocs-azuresql-server-$randomIdentifier"
database="msdocsazuresqldb$randomIdentifier"
login="azureuser"
password="Pa$$w0rD-$randomIdentifier"
failoverGroup="msdocs-azuresql-failover-group-$randomIdentifier"
failoverLocation="Central US"
failoverResourceGroup="msdocs-azuresql-failover-rg-$randomIdentifier"
secondaryServer="msdocs-azuresql-secondary-server-$randomIdentifier"
echo "Using resource groups $resourceGroup and $failoverResourceGroup with login: $login, password:
$password..."
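The failover group commands aren't reproduced above. As a hedged sketch (assuming both servers and the database already exist), you could create the group, add the database, and then fail over, which matches the az sql failover-group set-primary command referenced below:

# Create the failover group between the two servers and add the database to it
az sql failover-group create --name $failoverGroup --resource-group $resourceGroup --server $server --partner-resource-group $failoverResourceGroup --partner-server $secondaryServer --add-db $database --failover-policy Automatic --grace-period 1

# Fail over by promoting the secondary server to primary
az sql failover-group set-primary --name $failoverGroup --resource-group $failoverResourceGroup --server $secondaryServer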
Clean up resources
Use the following command to remove the resource group and all resources associated with it using the az
group delete command - unless you have an ongoing need for these resources. Some of these resources may
take a while to create, as well as to delete.
Sample reference
This script uses the following commands. Each command in the table links to command specific documentation.
az sql failover-group set-primary | Sets the primary of the failover group by failing over all databases from the current primary server.
Next steps
For more information on Azure CLI, see Azure CLI documentation.
Additional SQL Database CLI script samples can be found in the Azure SQL Database documentation.
Add a database to a failover group using the Azure
CLI
9/13/2022 • 3 minutes to read • Edit Online
Prerequisites
Use the Bash environment in Azure Cloud Shell. For more information, see Azure Cloud Shell Quickstart -
Bash.
If you prefer to run CLI reference commands locally, install the Azure CLI. If you are running on Windows
or macOS, consider running Azure CLI in a Docker container. For more information, see How to run the
Azure CLI in a Docker container.
If you're using a local installation, sign in to the Azure CLI by using the az login command. To finish
the authentication process, follow the steps displayed in your terminal. For additional sign-in
options, see Sign in with the Azure CLI.
When you're prompted, install Azure CLI extensions on first use. For more information about
extensions, see Use extensions with the Azure CLI.
Run az version to find the version and dependent libraries that are installed. To upgrade to the
latest version, run az upgrade.
Sample script
Launch Azure Cloud Shell
The Azure Cloud Shell is a free interactive shell that you can use to run the steps in this article. It has common
Azure tools preinstalled and configured to use with your account.
To open the Cloud Shell, select Try it from the upper right corner of a code block. You can also launch Cloud
Shell in a separate browser tab by going to https://shell.azure.com.
When Cloud Shell opens, verify that Bash is selected for your environment. Subsequent sessions will use Azure
CLI in a Bash environment. Select Copy to copy the blocks of code, paste it into the Cloud Shell, and press Enter
to run it.
Sign in to Azure
Cloud Shell is automatically authenticated under the initial account you signed in with. Use the following script to
sign in using a different subscription, replacing <Subscription ID> with your Azure Subscription ID. If you don't
have an Azure subscription, create an Azure free account before you begin.
subscription="<subscriptionId>" # add subscription here
# VariableBlock
let "randomIdentifier=$RANDOM*$RANDOM"
location="East US"
resourceGroup="msdocs-azuresql-rg-$randomIdentifier"
tag="add-single-db-to-failover-group-az-cli"
server="msdocs-azuresql-server-$randomIdentifier"
database="msdocsazuresqldb$randomIdentifier"
login="azureuser"
password="Pa$$w0rD-$randomIdentifier"
failoverGroup="msdocs-azuresql-failover-group-$randomIdentifier"
failoverLocation="Central US"
secondaryServer="msdocs-azuresql-secondary-server-$randomIdentifier"
echo "Using resource group $resourceGroup with login: $login, password: $password..."
Clean up resources
Use the following command to remove the resource group and all resources associated with it using the az
group delete command - unless you have an ongoing need for these resources. Some of these resources may
take a while to create, as well as to delete.
Next steps
For more information on Azure CLI, see Azure CLI documentation.
Additional SQL Database CLI script samples can be found in the Azure SQL Database documentation.
Add an Azure SQL Database elastic pool to a
failover group using the Azure CLI
9/13/2022 • 3 minutes to read • Edit Online
Prerequisites
Use the Bash environment in Azure Cloud Shell. For more information, see Azure Cloud Shell Quickstart -
Bash.
If you prefer to run CLI reference commands locally, install the Azure CLI. If you are running on Windows
or macOS, consider running Azure CLI in a Docker container. For more information, see How to run the
Azure CLI in a Docker container.
If you're using a local installation, sign in to the Azure CLI by using the az login command. To finish
the authentication process, follow the steps displayed in your terminal. For additional sign-in
options, see Sign in with the Azure CLI.
When you're prompted, install Azure CLI extensions on first use. For more information about
extensions, see Use extensions with the Azure CLI.
Run az version to find the version and dependent libraries that are installed. To upgrade to the
latest version, run az upgrade.
Sample script
Launch Azure Cloud Shell
The Azure Cloud Shell is a free interactive shell that you can use to run the steps in this article. It has common
Azure tools preinstalled and configured to use with your account.
To open the Cloud Shell, select Try it from the upper right corner of a code block. You can also launch Cloud
Shell in a separate browser tab by going to https://shell.azure.com.
When Cloud Shell opens, verify that Bash is selected for your environment. Subsequent sessions will use Azure
CLI in a Bash environment. Select Copy to copy the blocks of code, paste it into the Cloud Shell, and press Enter
to run it.
Sign in to Azure
Cloud Shell is automatically authenticated under the initial account you signed in with. Use the following script to
sign in using a different subscription, replacing <Subscription ID> with your Azure Subscription ID. If you don't
have an Azure subscription, create an Azure free account before you begin.
subscription="<subscriptionId>" # add subscription here
Clean up resources
Use the following command to remove the resource group and all resources associated with it using the az
group delete command - unless you have an ongoing need for these resources. Some of these resources may
take a while to create, as well as to delete.
Sample reference
This script uses the following commands. Each command in the table links to command specific documentation.
Next steps
For more information on Azure CLI, see Azure CLI documentation.
Additional SQL Database CLI script samples can be found in the Azure SQL Database documentation.
Backup an Azure SQL single database to an Azure
storage container using the Azure CLI
9/13/2022 • 3 minutes to read • Edit Online
Prerequisites
Use the Bash environment in Azure Cloud Shell. For more information, see Azure Cloud Shell Quickstart -
Bash.
If you prefer to run CLI reference commands locally, install the Azure CLI. If you are running on Windows
or macOS, consider running Azure CLI in a Docker container. For more information, see How to run the
Azure CLI in a Docker container.
If you're using a local installation, sign in to the Azure CLI by using the az login command. To finish
the authentication process, follow the steps displayed in your terminal. For additional sign-in
options, see Sign in with the Azure CLI.
When you're prompted, install Azure CLI extensions on first use. For more information about
extensions, see Use extensions with the Azure CLI.
Run az version to find the version and dependent libraries that are installed. To upgrade to the
latest version, run az upgrade.
Sample script
Launch Azure Cloud Shell
The Azure Cloud Shell is a free interactive shell that you can use to run the steps in this article. It has common
Azure tools preinstalled and configured to use with your account.
To open the Cloud Shell, select Try it from the upper right corner of a code block. You can also launch Cloud
Shell in a separate browser tab by going to https://shell.azure.com.
When Cloud Shell opens, verify that Bash is selected for your environment. Subsequent sessions will use Azure
CLI in a Bash environment. Select Copy to copy the blocks of code, paste it into the Cloud Shell, and press Enter
to run it.
Sign in to Azure
Cloud Shell is automatically authenticated under the initial account you signed in with. Use the following script to
sign in using a different subscription, replacing <Subscription ID> with your Azure Subscription ID. If you don't
have an Azure subscription, create an Azure free account before you begin.
subscription="<subscriptionId>" # add subscription here
# Variable block
let "randomIdentifier=$RANDOM*$RANDOM"
location="East US"
resourceGroup="msdocs-azuresql-rg-$randomIdentifier"
tag="backup-database"
server="msdocs-azuresql-server-$randomIdentifier"
database="msdocsazuresqldb$randomIdentifier"
login="azureuser"
password="Pa$$w0rD-$randomIdentifier"
storage="msdocsazuresql$randomIdentifier"
container="msdocs-azuresql-container-$randomIdentifier"
bacpac="backup.bacpac"
echo "Using resource group $resourceGroup with login: $login, password: $password..."
Clean up resources
Use the following command to remove the resource group and all resources associated with it using the az
group delete command - unless you have an ongoing need for these resources. Some of these resources may
take a while to create, as well as to delete.
Next steps
For more information on Azure CLI, see Azure CLI documentation.
Additional SQL Database CLI script samples can be found in the Azure SQL Database documentation.
Restore a single database in Azure SQL Database to
an earlier point in time using the Azure CLI
9/13/2022 • 2 minutes to read • Edit Online
Prerequisites
Use the Bash environment in Azure Cloud Shell. For more information, see Azure Cloud Shell Quickstart -
Bash.
If you prefer to run CLI reference commands locally, install the Azure CLI. If you are running on Windows
or macOS, consider running Azure CLI in a Docker container. For more information, see How to run the
Azure CLI in a Docker container.
If you're using a local installation, sign in to the Azure CLI by using the az login command. To finish
the authentication process, follow the steps displayed in your terminal. For additional sign-in
options, see Sign in with the Azure CLI.
When you're prompted, install Azure CLI extensions on first use. For more information about
extensions, see Use extensions with the Azure CLI.
Run az version to find the version and dependent libraries that are installed. To upgrade to the
latest version, run az upgrade.
Sample script
For this script, use Azure CLI locally as it takes too long to run in Cloud Shell.
Sign in to Azure
Use the following script to sign in using a specific subscription.
# Use Bash locally rather than Cloud Shell, which times out after 20 minutes of inactivity.
# On Windows, run Bash in a Docker container to sync time zones between Azure and Bash.
# Variable block
let "randomIdentifier=$RANDOM*$RANDOM"
location="East US"
resourceGroup="msdocs-sql-rg-$randomIdentifier"
tag="restore-database"
server="msdocs-azuresql-server-$randomIdentifier"
database="msdocsazuresqldb$randomIdentifier"
restoreServer="restoreServer-$randomIdentifier"
login="azureuser"
password="Pa$$w0rD-$randomIdentifier"
echo "Using resource group $resourceGroup with login: $login, password: $password..."
Clean up resources
Use the following command to remove the resource group and all resources associated with it using the az
group delete command - unless you have an ongoing need for these resources. Some of these resources may
take a while to create, as well as to delete.
Sample reference
This script uses the following commands. Each command in the table links to command specific documentation.
Prerequisites
Use the Bash environment in Azure Cloud Shell. For more information, see Azure Cloud Shell Quickstart -
Bash.
If you prefer to run CLI reference commands locally, install the Azure CLI. If you are running on Windows
or macOS, consider running Azure CLI in a Docker container. For more information, see How to run the
Azure CLI in a Docker container.
If you're using a local installation, sign in to the Azure CLI by using the az login command. To finish
the authentication process, follow the steps displayed in your terminal. For additional sign-in
options, see Sign in with the Azure CLI.
When you're prompted, install Azure CLI extensions on first use. For more information about
extensions, see Use extensions with the Azure CLI.
Run az version to find the version and dependent libraries that are installed. To upgrade to the
latest version, run az upgrade.
Sample script
Launch Azure Cloud Shell
The Azure Cloud Shell is a free interactive shell that you can use to run the steps in this article. It has common
Azure tools preinstalled and configured to use with your account.
To open the Cloud Shell, select Try it from the upper right corner of a code block. You can also launch Cloud
Shell in a separate browser tab by going to https://shell.azure.com.
When Cloud Shell opens, verify that Bash is selected for your environment. Subsequent sessions will use Azure
CLI in a Bash environment. Select Copy to copy the blocks of code, paste it into the Cloud Shell, and press Enter
to run it.
Sign in to Azure
Cloud Shell is automatically authenticated under the initial account you signed in with. Use the following script to
sign in using a different subscription, replacing <Subscription ID> with your Azure Subscription ID. If you don't
have an Azure subscription, create an Azure free account before you begin.
subscription="<subscriptionId>" # add subscription here
# Variable block
let "randomIdentifier=$RANDOM*$RANDOM"
location="East US"
resourceGroup="msdocs-azuresql-rg-$randomIdentifier"
tag="copy-database-to-new-server"
server="msdocs-azuresql-server-$randomIdentifier"
database="msdocsazuresqldb$randomIdentifier"
login="azureuser"
password="Pa$$w0rD-$randomIdentifier"
targetResourceGroup="msdocs-azuresql-targetrg-$randomIdentifier"
targetLocation="Central US"
targetServer="msdocs-azuresql-targetServer-$randomIdentifier"
targetDatabase="msdocs-azuresql-targetDatabase-$randomIdentifier"
echo "Using resource group $resourceGroup with login: $login, password: $password..."
Clean up resources
Use the following command to remove the resource group and all resources associated with it using the az
group delete command - unless you have an ongoing need for these resources. Some of these resources may
take a while to create, as well as to delete.
Sample reference
This script uses the following commands. Each command in the table links to command specific documentation.
az sql db copy | Creates a copy of a database that uses the snapshot at the current time.
Next steps
For more information on Azure CLI, see Azure CLI documentation.
Additional SQL Database CLI script samples can be found in the Azure SQL Database documentation.
Import a BACPAC file into a database in SQL
Database using the Azure CLI
9/13/2022 • 2 minutes to read • Edit Online
Prerequisites
Use the Bash environment in Azure Cloud Shell. For more information, see Azure Cloud Shell Quickstart -
Bash.
If you prefer to run CLI reference commands locally, install the Azure CLI. If you are running on Windows
or macOS, consider running Azure CLI in a Docker container. For more information, see How to run the
Azure CLI in a Docker container.
If you're using a local installation, sign in to the Azure CLI by using the az login command. To finish
the authentication process, follow the steps displayed in your terminal. For additional sign-in
options, see Sign in with the Azure CLI.
When you're prompted, install Azure CLI extensions on first use. For more information about
extensions, see Use extensions with the Azure CLI.
Run az version to find the version and dependent libraries that are installed. To upgrade to the
latest version, run az upgrade.
Sample script
For this script, use Azure CLI locally as it takes too long to run in Cloud Shell.
Sign in to Azure
Use the following script to sign in using a specific subscription.
echo "Using resource group $resourceGroup with login: $login, password: $password..."
az storage container create --name $container --account-key $key --account-name $storage #--public-access container
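The import command itself isn't reproduced above. A hedged sketch follows; it assumes $key holds the storage account key and $bacpac names the uploaded BACPAC blob:

# Import the BACPAC from the storage container into the database
az sql db import --resource-group $resourceGroup --server $server --name $database --admin-user $login --admin-password $password --storage-key $key --storage-key-type StorageAccessKey --storage-uri "https://$storage.blob.core.windows.net/$container/$bacpac"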
Clean up resources
Use the following command to remove the resource group and all resources associated with it using the az
group delete command - unless you have an ongoing need for these resources. Some of these resources may
take a while to create, as well as to delete.
Sample reference
This script uses the following commands. Each command in the table links to command specific documentation.
Next steps
For more information on Azure CLI, see Azure CLI documentation.
Additional SQL Database CLI script samples can be found in the Azure SQL Database documentation.
Azure PowerShell samples for Azure SQL Database
and Azure SQL Managed Instance
9/13/2022 • 4 minutes to read • Edit Online
The following table includes links to sample Azure PowerShell scripts for Azure SQL Database.
LINK | DESCRIPTION
Create a single database and configure a server-level firewall rule | This PowerShell script creates a single database and configures a server-level IP firewall rule.
Create elastic pools and move pooled databases | This PowerShell script creates elastic pools, moves pooled databases, and changes compute sizes.
Configure and fail over a single database using active geo-replication | This PowerShell script configures active geo-replication for a single database and fails it over to the secondary replica.
Configure and fail over a pooled database using active geo-replication | This PowerShell script configures active geo-replication for a database in an elastic pool and fails it over to the secondary replica.
Configure a failover group for a single database | This PowerShell script creates a database and a failover group, adds the database to the failover group, and tests failover to the secondary server.
Configure a failover group for an elastic pool | This PowerShell script creates a database, adds it to an elastic pool, adds the elastic pool to the failover group, and tests failover to the secondary server.
Scale a single database | This PowerShell script monitors the performance metrics of a single database, scales it to a higher compute size, and creates an alert rule on one of the performance metrics.
Scale an elastic pool | This PowerShell script monitors the performance metrics of an elastic pool, scales it to a higher compute size, and creates an alert rule on one of the performance metrics.
Copy a database to a new server | This PowerShell script creates a copy of an existing database in a new server.
Import a database from a bacpac file | This PowerShell script imports a database into Azure SQL Database from a bacpac file.
Sync data between databases | This PowerShell script configures Data Sync to sync between multiple databases in Azure SQL Database.
Sync data between SQL Database and SQL Server on-premises | This PowerShell script configures Data Sync to sync between a database in Azure SQL Database and a SQL Server on-premises database.
Update the SQL Data Sync sync schema | This PowerShell script adds or removes items from the Data Sync sync schema.
Additional resources
The examples listed on this page use the PowerShell cmdlets for creating and managing Azure SQL resources.
Additional cmdlets for running queries and performing many database tasks are located in the sqlserver
module. For more information, see SQL Server PowerShell.
Azure Resource Manager templates for Azure SQL
Database & SQL Managed Instance
9/13/2022 • 3 minutes to read • Edit Online
The following table includes links to Azure Resource Manager templates for Azure SQL Database.
LINK | DESCRIPTION
Elastic pool | This template allows you to deploy an elastic pool and to assign databases to it.
Failover groups | This template creates two servers, a single database, and a failover group in Azure SQL Database.
Threat Detection | This template allows you to deploy a server and a set of databases with Threat Detection enabled, with an email address for alerts for each database. Threat Detection is part of the SQL Advanced Threat Protection (ATP) offering and provides a layer of security that responds to potential threats over servers and databases.
Auditing to Azure Blob storage | This template allows you to deploy a server with auditing enabled to write audit logs to a Blob storage. Auditing for Azure SQL Database tracks database events and writes them to an audit log that can be placed in your Azure storage account, OMS workspace, or Event Hubs.
Auditing to Azure Event Hub | This template allows you to deploy a server with auditing enabled to write audit logs to an existing event hub. In order to send audit events to Event Hubs, set auditing settings with Enabled State, and set IsAzureMonitorTargetEnabled as true. Also, configure Diagnostic Settings with the SQLSecurityAuditEvents log category on the master database (for server-level auditing). Auditing tracks database events and writes them to an audit log that can be placed in your Azure storage account, OMS workspace, or Event Hubs.
Azure Web App with SQL Database | This sample creates a free Azure web app and a database in Azure SQL Database at the "Basic" service level.
Azure Web App and Redis Cache with SQL Database | This template creates a web app, Redis Cache, and database in the same resource group and creates two connection strings in the web app for the database and Redis Cache.
Import data from Blob storage using ADF V2 | This Azure Resource Manager template creates an instance of Azure Data Factory V2 that copies data from Azure Blob storage to SQL Database.
HDInsight cluster with a database | This template allows you to create an HDInsight cluster, a logical SQL server, a database, and two tables. This template is used by the Use Sqoop with Hadoop in HDInsight article.
Azure Logic App that runs a SQL Stored Procedure on a schedule | This template allows you to create a logic app that will run a SQL stored procedure on schedule. Any arguments for the procedure can be put into the body section of the template.
Provision server with Azure AD-only authentication enabled | This template creates a SQL logical server with an Azure AD admin set for the server and Azure AD-only authentication enabled.
Azure Resource Graph sample queries for Azure
SQL Database
9/13/2022 • 2 minutes to read • Edit Online
This page is a collection of Azure Resource Graph sample queries for Azure SQL Database. For a complete list of
Azure Resource Graph samples, see Resource Graph samples by Category and Resource Graph samples by
Table.
Sample queries
List SQL Databases and their elastic pools
The following query uses a leftouter join to bring together SQL Database resources and their related elastic
pools, if any.
Resources
| where type =~ 'microsoft.sql/servers/databases'
| project databaseId = id, databaseName = name, elasticPoolId = tolower(tostring(properties.elasticPoolId))
| join kind=leftouter (
Resources
| where type =~ 'microsoft.sql/servers/elasticpools'
| project elasticPoolId = tolower(id), elasticPoolName = name, elasticPoolState = properties.state)
on elasticPoolId
| project-away elasticPoolId1
You can run this query by using the Azure CLI, Azure PowerShell, or the Azure portal.
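For example, with the Azure CLI (which offers to install the resource-graph extension on first use), the query can be run roughly as follows; the exact quoting is a sketch:

az graph query -q "Resources | where type =~ 'microsoft.sql/servers/databases' | project databaseId = id, databaseName = name, elasticPoolId = tolower(tostring(properties.elasticPoolId)) | join kind=leftouter (Resources | where type =~ 'microsoft.sql/servers/elasticpools' | project elasticPoolId = tolower(id), elasticPoolName = name, elasticPoolState = properties.state) on elasticPoolId | project-away elasticPoolId1"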
Next steps
Learn more about the query language.
Learn more about how to explore resources.
See samples of Starter language queries.
See samples of Advanced language queries.
What is Azure SQL Managed Instance?
9/13/2022 • 16 minutes to read • Edit Online
IMPORTANT
For a list of regions where SQL Managed Instance is currently available, see Supported regions.
Azure SQL Managed Instance is designed for customers looking to migrate a large number of apps from an on-
premises or IaaS, self-built, or ISV provided environment to a fully managed PaaS cloud environment, with as
low a migration effort as possible. Using the fully automated Azure Data Migration Service, customers can lift
and shift their existing SQL Server instance to SQL Managed Instance, which offers compatibility with SQL
Server and complete isolation of customer instances with native VNet support. For more information on
migration options and tools, see Migration overview: SQL Server to Azure SQL Managed Instance.
With Software Assurance, you can exchange your existing licenses for discounted rates on SQL Managed
Instance using the Azure Hybrid Benefit for SQL Server. SQL Managed Instance is the best migration destination
in the cloud for SQL Server instances that require high security and a rich programmability surface.
Key features and capabilities
SQL Managed Instance combines the best features that are available both in Azure SQL Database and the SQL
Server database engine.
IMPORTANT
SQL Managed Instance runs with all of the features of the most recent version of SQL Server, including online operations,
automatic plan corrections, and other enterprise performance enhancements. A comparison of the features available is
explained in Feature comparison: Azure SQL Managed Instance versus SQL Server.
Isolated environment (VNet integration, single tenant service, dedicated compute and storage)
Transparent data encryption (TDE)
Azure Active Directory (Azure AD) authentication, single sign-on support
Azure AD server principals (logins)
What is Windows Authentication for Azure AD principals (Preview)
Adheres to compliance standards same as Azure SQL Database
SQL auditing
Advanced Threat Protection
Azure Resource Manager API for automating service provisioning and scaling
Azure portal functionality for manual service provisioning and scaling
Data Migration Service
IMPORTANT
Azure SQL Managed Instance has been certified against a number of compliance standards. For more information, see the
Microsoft Azure Compliance Offerings, where you can find the most current list of SQL Managed Instance compliance
certifications, listed under SQL Database .
The key features of SQL Managed Instance are shown in the following table:
Built-in Integration Service (SSIS) | No - SSIS is a part of Azure Data Factory PaaS.
Built-in Reporting Service (SSRS) | No - use Power BI paginated reports instead or host SSRS on an Azure VM. While SQL Managed Instance cannot run SSRS as a service, it can host SSRS catalog databases for a reporting server installed on an Azure virtual machine, using SQL Server authentication.
Service tiers
SQL Managed Instance is available in two service tiers:
General purpose : Designed for applications with typical performance and I/O latency requirements.
Business Critical : Designed for applications with low I/O latency requirements and minimal impact of
underlying maintenance operations on the workload.
Both service tiers guarantee 99.99% availability and enable you to independently select storage size and
compute capacity. For more information on the high availability architecture of Azure SQL Managed Instance,
see High availability and Azure SQL Managed Instance.
General Purpose service tier
The following list describes key characteristics of the General Purpose service tier:
Designed for the majority of business applications with typical performance requirements
High-performance Azure Blob storage (16 TB)
Built-in high availability based on reliable Azure Blob storage and Azure Service Fabric
For more information, see Storage layer in the General Purpose tier and Storage performance best practices and
considerations for SQL Managed Instance (General Purpose).
Find more information about the difference between service tiers in SQL Managed Instance resource limits.
Business Critical service tier
The Business Critical service tier is built for applications with high I/O requirements. It offers the highest
resilience to failures using several isolated replicas.
The following list outlines the key characteristics of the Business Critical service tier:
Designed for business applications with highest performance and HA requirements
Comes with super-fast local SSD storage (up to 4 TB on Standard Series (Gen5), up to 5.5 TB on Premium
Series and up to 16 TB on Premium Series Memory-Optimized)
Built-in high availability based on Always On availability groups and Azure Service Fabric
Built-in additional read-only database replica that can be used for reporting and other read-only workloads
In-Memory OLTP that can be used for workload with high-performance requirements
Find more information about the differences between service tiers in SQL Managed Instance resource limits.
Management operations
Azure SQL Managed Instance provides management operations that you can use to automatically deploy new
managed instances, update instance properties, and delete instances when no longer needed. Detailed
explanation of management operations can be found on managed instance management operations overview
page.
IMPORTANT
Place multiple managed instances in the same subnet, wherever that is allowed by your security requirements, as that will
bring you additional benefits. Co-locating instances in the same subnet will significantly simplify networking infrastructure
maintenance and reduce instance provisioning time, because the long provisioning duration is mostly incurred when
the first managed instance is deployed in a subnet.
Security features
Azure SQL Managed Instance provides a set of advanced security features that can be used to protect your data.
SQL Managed Instance auditing tracks database events and writes them to an audit log file placed in your
Azure storage account. Auditing can help you maintain regulatory compliance, understand database activity,
and gain insight into discrepancies and anomalies that could indicate business concerns or suspected
security violations.
Data encryption in motion - SQL Managed Instance secures your data by providing encryption for data in
motion using Transport Layer Security. In addition to Transport Layer Security, SQL Managed Instance offers
protection of sensitive data in flight, at rest, and during query processing with Always Encrypted. Always
Encrypted offers data security against breaches involving the theft of critical data. For example, with Always
Encrypted, credit card numbers are stored encrypted in the database always, even during query processing,
allowing decryption at the point of use by authorized staff or applications that need to process that data.
Advanced Threat Protection complements auditing by providing an additional layer of security intelligence
built into the service that detects unusual and potentially harmful attempts to access or exploit databases.
You are alerted about suspicious activities, potential vulnerabilities, and SQL injection attacks, as well as
anomalous database access patterns. Advanced Threat Protection alerts can be viewed from Microsoft
Defender for Cloud. They provide details of suspicious activity and recommend action on how to investigate
and mitigate the threat.
Dynamic data masking limits sensitive data exposure by masking it to non-privileged users. Dynamic data
masking helps prevent unauthorized access to sensitive data by enabling you to designate how much of the
sensitive data to reveal with minimal impact on the application layer. It's a policy-based security feature that
hides the sensitive data in the result set of a query over designated database fields, while the data in the
database is not changed.
Row-level security (RLS) enables you to control access to rows in a database table based on the
characteristics of the user executing a query (such as by group membership or execution context). RLS
simplifies the design and coding of security in your application. RLS enables you to implement restrictions on
data row access. For example, ensuring that workers can access only the data rows that are pertinent to their
department, or restricting data access to only the relevant data.
Transparent data encryption (TDE) encrypts SQL Managed Instance data files, known as encrypting data at
rest. TDE performs real-time I/O encryption and decryption of the data and log files. The encryption uses a
database encryption key (DEK), which is stored in the database boot record for availability during recovery.
You can protect all your databases in a managed instance with transparent data encryption. TDE is proven
encryption-at-rest technology in SQL Server that is required by many compliance standards to protect
against theft of storage media.
Migration of an encrypted database to SQL Managed Instance is supported via Azure Database Migration
Service or native restore. If you plan to migrate an encrypted database using native restore, migration of the
existing TDE certificate from the SQL Server instance to SQL Managed Instance is a required step. For more
information about migration options, see SQL Server to Azure SQL Managed Instance Guide.
Database migration
SQL Managed Instance targets user scenarios with mass database migration from on-premises or IaaS database
implementations. SQL Managed Instance supports several database migration options that are discussed in the
migration guides. See Migration overview: SQL Server to Azure SQL Managed Instance for more information.
Backup and restore
The migration approach leverages SQL backups to Azure Blob storage. Backups stored in an Azure storage blob
can be directly restored into a managed instance using the T-SQL RESTORE command.
For a quickstart showing how to restore the Wide World Importers - Standard database backup file, see
Restore a backup file to a managed instance. This quickstart shows how to upload a backup file to
Azure Blob storage and secure it with a shared access signature (SAS).
For information about restore from URL, see Native RESTORE from URL.
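For illustration, a native restore from URL looks roughly like the following; the storage account, container, SAS secret, and backup file name are placeholders (the SAS secret is supplied without a leading question mark).

-- Credential that points at the container holding the .bak file; the secret is a placeholder SAS token.
CREATE CREDENTIAL [https://mystorageaccount.blob.core.windows.net/backups]
WITH IDENTITY = 'SHARED ACCESS SIGNATURE',
     SECRET = 'sv=2021-08-06&ss=b&srt=co&sp=rl&sig=<placeholder>';

-- Restore the database directly from the backup file in Azure Blob storage.
RESTORE DATABASE [WideWorldImporters]
FROM URL = 'https://mystorageaccount.blob.core.windows.net/backups/WideWorldImporters-Standard.bak';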
IMPORTANT
Backups from a managed instance can only be restored to another managed instance. They cannot be restored to a SQL
Server instance or to Azure SQL Database.
PROPERTY: @@VERSION
VALUE: Microsoft SQL Azure (RTM) - 12.0.2000.8 2018-03-07 Copyright (C) 2018 Microsoft Corporation.
COMMENT: This value is the same as in SQL Database. It does not indicate SQL engine version 12 (SQL Server 2014). SQL Managed Instance always runs the latest stable SQL engine version, which is equal to or higher than the latest available RTM version of SQL Server.
Next steps
To learn how to create your first managed instance, see Quickstart guide.
For a features and comparison list, see SQL common features.
For more information about VNet configuration, see SQL Managed Instance VNet configuration.
For a quickstart that creates a managed instance and restores a database from a backup file, see Create a
managed instance.
For a tutorial about using Azure Database Migration Service for migration, see SQL Managed Instance
migration using Database Migration Service.
For advanced monitoring of SQL Managed Instance database performance with built-in troubleshooting
intelligence, see Monitor Azure SQL Managed Instance using Azure SQL Analytics.
For pricing information, see SQL Database pricing.
What's new in Azure SQL Managed Instance?
Preview
The following table lists the features of Azure SQL Managed Instance that are currently in preview:
Managed Instance link: Online replication of SQL Server databases hosted anywhere to Azure SQL Managed Instance.
Maintenance window advance notifications: Advance notifications (preview) for databases configured to use a non-default maintenance window. Advance notifications are in preview for Azure SQL Managed Instance.
Memory optimized premium-series hardware: Deploy your SQL Managed Instance to the new memory optimized premium-series hardware to take advantage of the latest Intel Ice Lake CPUs. Memory optimized hardware offers a higher memory to vCore ratio.
Migrate with Log Replay Service: Migrate databases from SQL Server to SQL Managed Instance by using Log Replay Service.
SDK-style SQL projects: Use Microsoft.Build.Sql for SDK-style SQL projects in the SQL Database Projects extension in Azure Data Studio or VS Code. SDK-style SQL projects are especially advantageous for applications shipped through pipelines or built in cross-platform environments.
Service Broker cross-instance message exchange: Support for cross-instance message exchange using Service Broker on Azure SQL Managed Instance.
SQL Database Projects extension: An extension to develop databases for Azure SQL Database with Azure Data Studio and VS Code. A SQL project is a local representation of SQL objects that comprise the schema for a single database, such as tables, stored procedures, or functions.
Transactional Replication: Replicate the changes from your tables into other databases in SQL Managed Instance, SQL Database, or SQL Server. Or update your tables when some rows are changed in other instances of SQL Managed Instance or SQL Server. For information, see Configure replication in Azure SQL Managed Instance.
Data virtualization (September 2022): Join locally stored relational data with data queried from external data sources, such as Azure Data Lake Storage Gen2 or Azure Blob Storage.
Windows Auth for Azure Active Directory principals (August 2022): Kerberos authentication for Azure Active Directory (Azure AD) enables Windows Authentication access to Azure SQL Managed Instance.
Query Store hints (August 2022): Use query hints to optimize your query execution via the OPTION clause.
Linked server - managed identity Azure AD authentication (November 2021): Create a linked server with managed identity authentication for your Azure SQL Managed Instance.
Linked server - pass-through Azure AD authentication (November 2021): Create a linked server with pass-through Azure AD authentication for your Azure SQL Managed Instance.
Long-term backup retention (November 2021): Store full backups for a specific database with configured redundancy for up to 10 years in Azure Blob storage, restoring the database as a new database.
Move instance to different subnet (November 2021): Move SQL Managed Instance to a different subnet using the Azure portal, Azure PowerShell, or the Azure CLI.
Documentation changes
Learn about significant changes to the Azure SQL Managed Instance documentation.
September 2022
Data virtualization GA: The data virtualization feature allows users to join locally stored relational data with data queried from external data sources, such as Azure Data Lake Storage Gen2 or Azure Blob Storage. This feature is now generally available. Review Data virtualization to learn more.
August 2022
Windows Auth for Azure Active Directory principals: Kerberos authentication for Azure Active Directory (Azure AD) enables Windows Authentication access to Azure SQL Managed Instance. This feature is now generally available (GA). To learn more, review Windows Auth for Azure Active Directory principals.
Query Store hints: Use query hints to optimize your query execution via the OPTION clause. This feature is now generally available (GA). To learn more, review Query Store hints.
July 2022
Premium-series hardware GA: Deploy your SQL Managed Instance to the new premium-series hardware to take advantage of the latest Intel Ice Lake CPUs. The premium-series hardware is now generally available. See Premium-series hardware to learn more.
May 2022
SDK-style SQL projects: Use Microsoft.Build.Sql for SDK-style SQL projects in the SQL Database Projects extension in Azure Data Studio or VS Code. This feature is currently in preview. To learn more, see SDK-style SQL projects.
JavaScript & Python bindings: Support for JavaScript and Python SQL bindings for Azure Functions is currently in preview. See Azure SQL bindings for Azure Functions to learn more.
March 2022
Data virtualization preview: It's now possible to query data in external sources such as Azure Data Lake Storage Gen2 or Azure Blob Storage, joining it with locally stored relational data. This feature is currently in preview. To learn more, see Data virtualization.
Managed Instance link guidance: We've published a number of guides for using the Managed Instance link feature, including how to prepare your environment, configure replication by using SSMS, configure replication via scripts, fail over your database by using SSMS, fail over your database via scripts, and some best practices when using the link feature (currently in preview).
Maintenance window GA, advance notifications preview: The maintenance window feature is now generally available, allowing you to configure a maintenance schedule for your Azure SQL Managed Instance. It's also possible to receive advance notifications for planned maintenance events, which is currently in preview. Review Maintenance window advance notifications (preview) to learn more.
Windows Auth for Azure Active Directory principals preview: Windows Authentication for managed instances empowers customers to move existing services to the cloud while maintaining a seamless user experience, and provides the basis for infrastructure modernization. Learn more in Windows Authentication for Azure Active Directory principals on Azure SQL Managed Instance.
2021
16 TB support for Business Critical preview: The Business Critical service tier of SQL Managed Instance now provides increased maximum instance storage capacity of up to 16 TB with the new premium-series and memory optimized premium-series hardware, which are currently in preview. See resource limits to learn more.
16 TB support for General Purpose GA: Deploying a 16 TB instance to the General Purpose service tier is now generally available. See resource limits to learn more.
Endpoint policies preview: It's now possible to configure an endpoint policy to restrict access from a SQL Managed Instance subnet to an Azure Storage account. This grants an extra layer of protection against inadvertent or malicious data exfiltration. See Endpoint policies to learn more.
Link feature preview: Use the link feature for SQL Managed Instance to replicate data from your SQL Server hosted anywhere to Azure SQL Managed Instance, leveraging the benefits of Azure without moving your data to Azure, to offload your workloads, for disaster recovery, or to migrate to the cloud. See the Link feature for SQL Managed Instance to learn more. The link feature is currently in limited public preview.
Long-term backup retention GA: Storing full backups for a specific database with configured redundancy for up to 10 years in Azure Blob storage is now generally available. To learn more, see Long-term backup retention.
Move instance to different subnet GA: It's now possible to move your SQL Managed Instance to a different subnet. See Move instance to different subnet to learn more.
New hardware preview: There are now two new hardware configurations for SQL Managed Instance: premium-series, and a memory optimized premium-series. Both offerings take advantage of new hardware powered by the latest Intel Ice Lake CPUs, and offer a higher memory to vCore ratio to support your most resource-demanding database applications. As part of this announcement, the Gen5 hardware has been renamed to standard-series. The two new premium hardware offerings are currently in preview. See resource limits to learn more.
Split what's new: The previously combined What's new article has been split by product - What's new in SQL Database and What's new in SQL Managed Instance, making it easier to identify what features are currently in preview, generally available, and significant documentation changes. Additionally, the Known Issues in SQL Managed Instance content has moved to its own page.
16 TB support for General Purpose preview: Support has been added for allocation of up to 16 TB of space for SQL Managed Instance in the General Purpose service tier. See resource limits to learn more. This instance offer is currently in preview.
Parallel backup: It's now possible to take backups in parallel for SQL Managed Instance in the General Purpose tier, enabling faster backups. See the Parallel backup for better performance blog entry to learn more.
Azure AD-only authentication preview: It's now possible to restrict authentication to your Azure SQL Managed Instance only to Azure Active Directory users. This feature is currently in preview. To learn more, see Azure AD-only authentication.
Resource Health monitor: Use Resource Health to monitor the health status of your Azure SQL Managed Instance. See Resource health to learn more.
Granular permissions for data masking GA: Granular permissions for dynamic data masking for Azure SQL Managed Instance are now generally available (GA). To learn more, see Dynamic data masking.
User-defined routes (UDR) tables: Service-aided subnet configuration for Azure SQL Managed Instance now makes use of service tags for user-defined routes (UDR) tables. See the connectivity architecture to learn more.
Audit management operations: The ability to audit SQL Managed Instance operations is now generally available (GA).
Log Replay Service: It's now possible to migrate databases from SQL Server to Azure SQL Managed Instance using the Log Replay Service. To learn more, see Migrate with Log Replay Service. This feature is currently in preview.
Machine Learning Services GA: The Machine Learning Services for Azure SQL Managed Instance are now generally available (GA). To learn more, see Machine Learning Services for SQL Managed Instance.
Service Broker message exchange: The Service Broker component of Azure SQL Managed Instance allows you to compose your applications from independent, self-contained services, by providing native support for reliable and secure message exchange between the databases attached to the service. Currently in preview. To learn more, see Service Broker.
2020
The following changes were added to SQL Managed Instance and the documentation in 2020:
Configurable backup storage redundancy: It's now possible to configure locally redundant storage (LRS) and zone-redundant storage (ZRS) options for backup storage redundancy, providing more flexibility and choice. To learn more, see Configure backup storage redundancy.
TDE-encrypted backup performance improvements: It's now possible to set the point-in-time restore (PITR) backup retention period, and automated compression of backups encrypted with transparent data encryption (TDE) is now 30 percent more efficient in consuming backup storage space, saving costs for the end user. See Change PITR to learn more.
Azure AD authentication improvements: Automate user creation using Azure AD applications and create individual Azure AD guest users (preview). To learn more, see Directory readers in Azure AD.
Global VNet peering support: Global virtual network peering support has been added to SQL Managed Instance, improving the geo-replication experience. See geo-replication between managed instances.
Hosting SSRS catalog databases: SQL Managed Instance can now host catalog databases of SQL Server Reporting Services (SSRS) for versions 2017 and newer.
Enhanced management experience: Using the new OPERATIONS API, it's now possible to check the progress of long-running instance operations. To learn more, see Management operations.
Machine learning support: Machine Learning Services with support for the R and Python languages now include preview support on Azure SQL Managed Instance. To learn more, see Machine learning with SQL Managed Instance.
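As a quick illustration of what Machine Learning Services enables (not part of the original change entry), an external R or Python script runs through the standard sp_execute_external_script procedure once the feature is turned on:

-- Enable external scripts (one-time configuration), then run a trivial Python script in-database.
EXEC sp_configure 'external scripts enabled', 1;
RECONFIGURE;

EXEC sp_execute_external_script
    @language = N'Python',
    @script = N'
import pandas as pd
OutputDataSet = pd.DataFrame({"greeting": ["Hello from Python in SQL Managed Instance"]})';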
Known issues
The known issues content has moved to a dedicated known issues in SQL Managed Instance article.
Contribute to content
To contribute to the Azure SQL documentation, see the Docs contributor guide.
Overview of Azure SQL Managed Instance resource
limits
NOTE
For differences in supported features and T-SQL statements see Feature differences and T-SQL statement support. For
general differences between service tiers for Azure SQL Database and SQL Managed Instance review General Purpose and
Business Critical service tiers.
NOTE
The Gen5 hardware has been renamed to the standard-series (Gen5). We are introducing the memory optimized
premium-series hardware configuration in limited preview.
For information on previously available hardware, see Previously available hardware later in this article.
Hardware configurations have different characteristics, as described in the following table:
CPU
Standard-series (Gen5): Intel® E5-2673 v4 (Broadwell) 2.3 GHz, Intel® SP-8160 (Skylake), and Intel® 8272CL (Cascade Lake) 2.5 GHz processors
Premium-series: Intel® 8370C (Ice Lake) 2.8 GHz processors
Memory optimized premium-series (preview): Intel® 8370C (Ice Lake) 2.8 GHz processors

Max memory (memory/vCore ratio)
Standard-series (Gen5): 5.1 GB per vCore
Premium-series: 7 GB per vCore
Memory optimized premium-series (preview): 13.6 GB per vCore
Add more vCores to get more memory.

Max In-Memory OLTP memory
Standard-series (Gen5): Instance limit: 0.8 - 1.65 GB per vCore
Premium-series: Instance limit: 1.1 - 2.3 GB per vCore
Memory optimized premium-series (preview): Instance limit: 2.2 - 4.5 GB per vCore

Max instance reserved storage*
Standard-series (Gen5): General Purpose: up to 16 TB; Business Critical: up to 4 TB
Premium-series: General Purpose: up to 16 TB; Business Critical: up to 5.5 TB
Memory optimized premium-series (preview): General Purpose: up to 16 TB; Business Critical: up to 16 TB
NOTE
If your workload requires storage sizes greater than the available resource limits for Azure SQL Managed Instance,
consider the Azure SQL Database Hyperscale service tier.
Number of vCores*: 4, 8, 16, 24, 32, 40, 64, 80 for standard-series (Gen5) and premium-series; 4, 8, 16, 24, 32, 40, 64 for memory optimized premium-series. *The same number of vCores is dedicated for read-only queries.

Max database size
General Purpose: Up to currently available instance size (depending on the number of vCores).
Business Critical: Up to currently available instance size (depending on the number of vCores).

Max tempDB size
General Purpose: Limited to 24 GB/vCore (96 - 1,920 GB) and currently available instance storage size. Add more vCores to get more tempDB space. Log file size is limited to 120 GB.
Business Critical: Up to currently available instance storage size.

Max number of databases per instance
General Purpose: 100 user databases, unless the instance storage size limit has been reached.
Business Critical: 100 user databases, unless the instance storage size limit has been reached.

Max number of database files per instance
General Purpose: Up to 280, unless the instance storage size or Azure Premium Disk storage allocation space limit has been reached.
Business Critical: 32,767 files per database, unless the instance storage size limit has been reached.

Max data file size
General Purpose: Maximum size of each data file is 8 TB. Use at least two data files for databases larger than 8 TB.
Business Critical: Up to currently available instance size (depending on the number of vCores).

Max log file size
General Purpose: Limited to 2 TB and currently available instance storage size.
Business Critical: Limited to 2 TB and currently available instance storage size.

Data/Log IOPS (approximate)
General Purpose: 500 - 7,500 per file. Increase file size to get more IOPS.
Business Critical: 16 K - 320 K (4,000 IOPS/vCore). Add more vCores to get better IO performance.

Log write throughput limit (per instance)
General Purpose: 3 MB/s per vCore; max 120 MB/s per instance; 22 - 65 MB/s per DB (depending on log file size). Increase the file size to get better IO performance.
Business Critical: 4 MB/s per vCore; max 96 MB/s.

Data throughput (approximate)
General Purpose: 100 - 250 MB/s per file. Increase the file size to get better IO performance.
Business Critical: Not limited.

Max concurrent workers
General Purpose: 105 * number of vCores + 800
Business Critical: 105 * number of vCores + 800
IMPORTANT
In the General Purpose and Business Critical tiers, you are charged for the maximum storage size configured for a
managed instance.
To monitor total consumed instance storage size for SQL Managed Instance, use the storage_space_used_mb
metric. To monitor the current allocated and used storage size of individual data and log files in a database using
T-SQL, use the sys.database_files view and the FILEPROPERTY(... , 'SpaceUsed') function.
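For example, a query along the following lines reports allocated versus used space per file in the current database (size and SpaceUsed are returned in 8 KB pages, hence the division by 128 to convert to MB):

-- Allocated vs. used space for each data and log file in the current database.
SELECT name AS file_name,
       type_desc,
       size / 128.0 AS allocated_mb,
       FILEPROPERTY(name, 'SpaceUsed') / 128.0 AS used_mb
FROM sys.database_files;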
TIP
Under some circumstances, you may need to shrink a database to reclaim unused space. For more information, see
Manage file space in Azure SQL Database.
Throughput per file ranges from 100 MiB/s to 250 MiB/s, increasing with the size of the file.
If you notice high IO latency on some database file or you see that IOPS/throughput is reaching the limit, you
might improve performance by increasing the file size.
There is also an instance-level limit on the maximum log write throughput (see the values above, for example, 22 MB/s), so
you may not be able to reach the maximum file throughput on the log file because you are hitting the instance
throughput limit.
Supported regions
SQL Managed Instance can be created only in supported regions. To create a SQL Managed Instance in a region
that is currently not supported, you can send a support request via the Azure portal.
Supported subscription types can contain a limited number of resources per region. SQL Managed Instance has
two default limits per Azure region, which can be increased on demand by creating a special support request in
the Azure portal, depending on the subscription type:
Subnet limit : The maximum number of subnets where instances of SQL Managed Instance are deployed in
a single region.
vCore unit limit : The maximum number of vCore units that can be deployed across all instances in a single
region. One GP vCore uses one vCore unit and one BC vCore takes four vCore units. The total number of
instances is not limited as long as it is within the vCore unit limit.
NOTE
These limits are default settings and not technical limitations. The limits can be increased on-demand by creating a special
support request in the Azure portal if you need more instances in the current region. As an alternative, you can create
new instances of SQL Managed Instance in another Azure region without sending support requests.
The following table shows the default regional limits for supported subscription types (default limits can be
extended using the support request described below):
Pay-as-you-go: 6 subnets, 320 vCore units
Azure Pass: 3 subnets, 64 vCore units
BizSpark: 3 subnets, 64 vCore units
BizSpark Plus: 3 subnets, 64 vCore units
MSDN Platforms: 3 subnets, 32 vCore units
* In planning deployments, please take into consideration that Business Critical (BC) service tier requires four (4)
times more vCore capacity than the General Purpose (GP) service tier. For example: 1 GP vCore = 1 vCore unit and 1
BC vCore = 4 vCore units. To simplify your consumption analysis against the default limits, sum the vCore units
across all subnets in the region where SQL Managed Instance is deployed and compare the result with the
instance unit limits for your subscription type. The max number of vCore units limit applies to each subscription
in a region. There is no limit per individual subnet, except that the sum of all vCores deployed across multiple
subnets must be lower than or equal to the max number of vCore units.
** Larger subnet and vCore limits are available in the following regions: Australia East, East US, East US 2, North
Europe, South Central US, Southeast Asia, UK South, West Europe, West US 2.
IMPORTANT
If your vCore and subnet limits are 0, the default regional limit for your subscription type is not set. You can
also use a quota increase request to get subscription access in a specific region, following the same procedure and
providing the required vCore and subnet values.
Request a quota increase
If you need more instances in your current regions, send a support request to extend the quota using the Azure
portal. For more information, see Request quota increases for Azure SQL Database.
IMPORTANT
Gen4 hardware is being retired and is not available for new deployments, as announced on December 18, 2019.
Customers using Gen4 for Azure SQL Databases, elastic pools, or SQL managed instances should migrate to currently
available hardware, such as standard-series (Gen5), before January 31, 2023.
For more information on Gen4 hardware retirement and migration to current hardware, see our Blog post on Gen4
retirement. Existing Gen4 databases, elastic pools, and SQL managed instances will be migrated automatically to
equivalent standard-series (Gen5) hardware.
Downtime caused by automatic migration will be minimal and similar to downtime during scaling operations within
selected service tier. To avoid unplanned interruptions to workloads, migrate proactively at the time of your choice before
January 31, 2023.
Hardware characteristics
GEN 4
The amount of In-memory OLTP space in Business Critical service tier depends on the number of vCores and
hardware configuration. The following table lists limits of memory that can be used for In-memory OLTP
objects.
IN-MEMORY OLTP SPACE (GEN4)
8 vCores: 8 GB
16 vCores: 20 GB
24 vCores: 36 GB
Max database size
General Purpose: Up to currently available instance size (max 2 TB - 8 TB, depending on the number of vCores).
Business Critical: Up to currently available instance size (max 1 TB - 4 TB, depending on the number of vCores).

Max tempDB size
General Purpose: Limited to 24 GB/vCore (96 - 1,920 GB) and currently available instance storage size. Add more vCores to get more tempDB space. Log file size is limited to 120 GB.
Business Critical: Up to currently available instance storage size.

Max number of databases per instance
General Purpose: 100 user databases, unless the instance storage size limit has been reached.
Business Critical: 100 user databases, unless the instance storage size limit has been reached.

Max number of database files per instance
General Purpose: Up to 280, unless the instance storage size or Azure Premium Disk storage allocation space limit has been reached.
Business Critical: 32,767 files per database, unless the instance storage size limit has been reached.

Max data file size
General Purpose: Limited to currently available instance storage size (max 2 TB - 8 TB) and Azure Premium Disk storage allocation space. Use at least two data files for databases larger than 8 TB.
Business Critical: Limited to currently available instance storage size (up to 1 TB - 4 TB).

Max log file size
General Purpose: Limited to 2 TB and currently available instance storage size.
Business Critical: Limited to 2 TB and currently available instance storage size.

Data/Log IOPS (approximate)
General Purpose: Up to 30-40 K IOPS per instance, 500 - 7,500 per file. Increase file size to get more IOPS.
Business Critical: 16 K - 320 K (4,000 IOPS/vCore). Add more vCores to get better IO performance.

Log write throughput limit (per instance)
General Purpose: 3 MB/s per vCore; max 120 MB/s per instance; 22 - 65 MB/s per DB. Increase the file size to get better IO performance.
Business Critical: 4 MB/s per vCore; max 96 MB/s.

Data throughput (approximate)
General Purpose: 100 - 250 MB/s per file. Increase the file size to get better IO performance.
Business Critical: Not limited.

Max concurrent workers
General Purpose: 210 * number of vCores + 800
Business Critical: 210 * number of vCores + 800
Overview
A virtual core (vCore) represents a logical CPU and offers you the option to choose the physical characteristics
of the hardware (for example, the number of cores, the memory, and the storage size). The vCore-based
purchasing model gives you flexibility, control, transparency of individual resource consumption, and a
straightforward way to translate on-premises workload requirements to the cloud. This model optimizes price,
and allows you to choose compute, memory, and storage resources based on your workload needs.
In the vCore-based purchasing model, your costs depend on the choice and usage of:
Service tier
Hardware configuration
Compute resources (the number of vCores and the amount of memory)
Reserved database storage
Actual backup storage
The virtual core (vCore) purchasing model used by Azure SQL Managed Instance provides the following
benefits:
Control over hardware configuration to better match the compute and memory requirements of the
workload.
Pricing discounts for Azure Hybrid Benefit (AHB) and Reserved Instance (RI).
Greater transparency in the hardware details that power compute, helping facilitate planning for migrations
from on-premises deployments.
Higher scaling granularity with multiple compute sizes available.
Service tiers
Service tier options in the vCore purchasing model include General Purpose and Business Critical. The service
tier generally defines the storage architecture, space and I/O limits, and business continuity options related to
availability and disaster recovery.
For more details, review resource limits.
Best for
General Purpose: Most business workloads. Offers budget-oriented, balanced, and scalable compute and storage options.
Business Critical: Offers business applications the highest resilience to failures by using several isolated replicas, and provides the highest I/O performance.

Pricing/billing
General Purpose: vCore, reserved storage, and backup storage are charged. IOPS is not charged.
Business Critical: vCore, reserved storage, and backup storage are charged. IOPS is not charged.
NOTE
For more information on the Service Level Agreement (SLA), see SLA for Azure SQL Managed Instance.
Compute
SQL Managed Instance compute provides a specific amount of compute resources that are continuously
provisioned independent of workload activity, and bills for the amount of compute provisioned at a fixed price
per hour.
Hardware configurations
Hardware configuration options in the vCore model include standard-series (Gen5), premium-series, and
memory optimized premium-series. Hardware configuration generally defines the compute and memory limits
and other characteristics that impact workload performance.
For more information on the hardware configuration specifics and limitations, see Hardware configuration
characteristics.
In the sys.dm_user_db_resource_governance dynamic management view, hardware generation for instances
using Intel® SP-8160 (Skylake) processors appears as Gen6, while hardware generation for instances using
Intel® 8272CL (Cascade Lake) appears as Gen7. The Intel® 8370C (Ice Lake) CPUs used by premium-series
and memory optimized premium-series hardware generations appear as Gen8. Resource limits for all standard-
series (Gen5) instances are the same regardless of processor type (Broadwell, Skylake, or Cascade Lake).
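One way to check this is simply to query the view on the instance; which specific column surfaces the generation is not spelled out above, so the sketch below returns all columns and leaves it to you to inspect the hardware-related ones.

-- Inspect resource governance metadata for user databases; the reported hardware generation
-- (Gen6, Gen7, or Gen8) appears among the view's columns as described above.
SELECT *
FROM sys.dm_user_db_resource_governance;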
Selecting a hardware configuration
You can select hardware configuration at the time of instance creation, or you can change hardware of an
existing instance.
To select hardware configuration when creating a SQL Managed Instance
For detailed information, see Create a SQL Managed Instance.
On the Basics tab, select the Configure database link in the Compute + storage section, and then select the
desired hardware.
To change the hardware of an existing SQL Managed Instance:
From the SQL Managed Instance page, select the Pricing tier link under the Settings section.
On the Pricing tier page, you will be able to change the hardware as described in the previous steps.
When specifying the hardware parameter in templates or scripts, hardware is provided by using its name. The
following table applies:
HARDWARE: NAME
Premium-series: G8IM
Hardware availability
Gen4
IMPORTANT
Gen4 hardware is being retired and is not available for new deployments, as announced on December 18, 2019.
Customers using Gen4 for Azure SQL Databases, elastic pools, or SQL managed instances should migrate to currently
available hardware, such as standard-series (Gen5), before January 31, 2023.
For more information on Gen4 hardware retirement and migration to current hardware, see our Blog post on Gen4
retirement. Existing Gen4 databases, elastic pools, and SQL managed instances will be migrated automatically to
equivalent standard-series (Gen5) hardware.
Downtime caused by automatic migration will be minimal and similar to downtime during scaling operations within
selected service tier. To avoid unplanned interruptions to workloads, migrate proactively at the time of your choice before
January 31, 2023.
Next steps
To get started, see Creating a SQL Managed Instance using the Azure portal
For pricing details, see
Azure SQL Managed Instance single instance pricing page
Azure SQL Managed Instance pools pricing page
For details about the specific compute and storage sizes available in the General Purpose and Business
Critical service tiers, see vCore-based resource limits for Azure SQL Managed Instance.
Getting started with Azure SQL Managed Instance
Quickstart overview
The following quickstarts enable you to quickly create a SQL Managed Instance, configure a virtual machine or
point-to-site VPN connection for a client application, and restore a database to your new SQL Managed Instance
using a .bak file.
Configure environment
As a first step, create your first SQL Managed Instance along with the network environment where it will be
placed, and enable connections from the computer or virtual machine where you run queries against
SQL Managed Instance. You can use the following guides:
Create a SQL Managed Instance using the Azure portal. In the Azure portal, you configure the necessary
parameters (username/password, number of cores, and max storage amount), and automatically create
the Azure network environment without the need to know about networking details and infrastructure
requirements. You just make sure that you have a subscription type that is currently allowed to create a
SQL Managed Instance. If you have your own network that you want to use or you want to customize the
network, see configure an existing virtual network for Azure SQL Managed Instance or create a virtual
network for Azure SQL Managed Instance.
A SQL Managed Instance is created in its own VNet with no public endpoint. For client application access,
you can either create a VM in the same VNet (different subnet) or create a point-to-site VPN
connection to the VNet from your client computer using one of these quickstarts:
Enable public endpoint on your SQL Managed Instance in order to access your data directly from your
environment.
Create Azure Virtual Machine in the SQL Managed Instance VNet for client application connectivity,
including SQL Server Management Studio.
Set up a point-to-site VPN connection to your SQL Managed Instance from a client computer on
which you have SQL Server Management Studio and other client connectivity applications. This is
the second of two options for connectivity to your SQL Managed Instance and to its VNet.
NOTE
You can also use express route or site-to-site connection from your local network, but these approaches are
out of the scope of these quickstarts.
If you change the retention period from 0 (unlimited retention) to any other value, note that retention will
only apply to logs written after the retention value was changed (logs written during the period when retention
was set to unlimited are preserved, even after retention is enabled).
As an alternative to manual creation of SQL Managed Instance, you can use PowerShell, PowerShell with
Resource Manager template, or Azure CLI to script and automate this process.
Migrate your databases
After you create a SQL Managed Instance and configure access, you can start migrating your SQL Server
databases. Migration can fail if you have some unsupported features in the source database that you want to
migrate. To avoid failures and check compatibility, you can use Data Migration Assistant (DMA) to analyze your
databases on SQL Server and find any issue that could block migration to a SQL Managed Instance, such as
existence of FileStream or multiple log files. If you resolve these issues, your databases are ready to migrate to
SQL Managed Instance. Database Experimentation Assistant is another useful tool that can record your
workload on SQL Server and replay it on a SQL Managed Instance, so you can determine whether there will be
any performance issues if you migrate to a SQL Managed Instance.
Once you are sure that you can migrate your database to a SQL Managed Instance, you can use the native SQL
Server restore capabilities to restore a database into a SQL Managed Instance from a .bak file. You can use this
method to migrate databases from SQL Server database engine installed on-premises or Azure Virtual
Machines. For a quickstart, see Restore from backup to a SQL Managed Instance. In this quickstart, you restore
from a .bak file stored in Azure Blob storage using the RESTORE Transact-SQL command.
TIP
To use the BACKUP Transact-SQL command to create a backup of your database in Azure Blob storage, see SQL Server
backup to URL.
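On the source SQL Server instance, that backup is a standard BACKUP ... TO URL statement, sketched below with a placeholder storage account and container; it assumes a SAS-based credential for the container has already been created, as in the restore example earlier.

-- Back up a database directly to Azure Blob storage (placeholder URL; requires a matching credential).
BACKUP DATABASE [WideWorldImporters]
TO URL = 'https://mystorageaccount.blob.core.windows.net/backups/WideWorldImporters.bak'
WITH COPY_ONLY, COMPRESSION;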
These quickstarts enable you to quickly create and configure a SQL Managed Instance and restore a database
backup to it. In some scenarios, you need to customize or automate deployment of SQL Managed Instance
and the required networking environment; these scenarios are covered in the articles linked below.
Next steps
Find a high-level list of supported features in SQL Managed Instance here and details and known issues here.
Learn about technical characteristics of SQL Managed Instance.
Find more advanced how-to's in how to use a SQL Managed Instance.
Identify the right Azure SQL Managed Instance SKU for your on-premises database.
Quickstart: Create an Azure SQL Managed Instance
IMPORTANT
For limitations, see Supported regions and Supported subscription types.
5. Use the tabs on the Create Azure SQL Managed Instance provisioning form to add required and
optional information. The following sections describe these tabs.
Basics tab
Fill out mandatory information required on the Basics tab. This is a minimum set of information required
to provision a SQL Managed Instance.
Use the table below as a reference for information required at this tab.
Resource group: A new or existing resource group. For valid resource group names, see Naming rules and restrictions.
Managed instance name: Any valid name. For valid names, see Naming rules and restrictions.
Region: The region in which you want to create the managed instance. For information about regions, see Azure regions.
Managed instance admin login: Any valid username. For valid names, see Naming rules and restrictions. Don't use "serveradmin" because that's a reserved server-level role.
Service Tier: Select one of the options based on your scenario: General Purpose, for most production workloads and the default option, or Business Critical, designed for low-latency workloads with high resiliency to failures and fast failovers.
Azure Hybrid Benefit: Check this option if applicable, to leverage an existing license for Azure. For more information, see Azure Hybrid Benefit - Azure SQL Database & SQL Managed Instance.
Backup storage redundancy: Select Geo-redundant backup storage. This controls storage redundancy inside Azure for backup storage; note that this value cannot be changed later. Geo-redundant backup storage is the default and recommended, though Zone and Local redundancy allow for more cost flexibility and single-region data residency. For more information, see Backup storage redundancy.
To review your choices before you create a SQL Managed Instance, you can select Review + create . Or,
configure networking options by selecting Next: Networking .
Networking tab
Fill out optional information on the Networking tab. If you omit this information, the portal will apply
default settings.
Use the table below as a reference for information required at this tab.
Virtual network: Select either Create new virtual network or a valid virtual network and subnet. If a network or subnet is unavailable, it must be modified to satisfy the network requirements before you select it as a target for the new managed instance. For information about the requirements for configuring the network environment for SQL Managed Instance, see Configure a virtual network for SQL Managed Instance.
Connection type: Choose between a proxy and a redirect connection type. For more information about connection types, see Azure SQL Managed Instance connection type.
Allow access from (if Public endpoint is enabled): Select No Access. The portal experience enables configuring a security group with a public endpoint.
Select Review + create to review your choices before you create a managed instance. Or, configure
more custom settings by selecting Next: Additional settings .
Additional settings
Fill out optional information on the Additional settings tab. If you omit this information, the portal will
apply default settings.
Use the table below as a reference for information required at this tab.
Collation: Choose the collation that you want to use for your managed instance. If you migrate databases from SQL Server, check the source collation by using SELECT SERVERPROPERTY(N'Collation') and use that value. For information about collations, see Set or change the server collation.
Time zone: Select the time zone that the managed instance will observe. For more information, see Time zones.
Use as failover secondary: Select Yes to use the managed instance as a failover group secondary.
Primary SQL Managed Instance (if Use as failover secondary is set to Yes): Choose an existing primary managed instance that will be joined in the same DNS zone with the managed instance you're creating. This step enables post-creation configuration of the failover group. For more information, see Tutorial: Add a managed instance to a failover group.
Select Review + create to review your choices before you create a managed instance. Or, configure
Azure Tags by selecting Next: Tags (recommended).
Tags
Add tags to resources in your Azure Resource Manager template (ARM template). Tags help you logically
organize your resources. The tag values show up in cost reports and allow for other management
activities by tag.
Consider at least tagging your new SQL Managed Instance with the Owner tag to identify who created it,
and the Environment tag to identify whether this system is Production, Development, and so on. For more
information, see Develop your naming and tagging strategy for Azure resources.
Select Review + create to proceed.
Review + create
1. Select Review + create tab to review your choices before you create a managed instance.
IMPORTANT
Deploying a managed instance is a long-running operation. Deployment of the first instance in the subnet typically takes
much longer than deploying into a subnet with existing managed instances. For average provisioning times, see Overview
of Azure SQL Managed Instance management operations.
Monitor deployment progress
1. Select the Notifications icon to view the status of the deployment.
2. Select Deployment in progress in the notification to open the SQL Managed Instance window and
further monitor the deployment progress.
TIP
If you closed your web browser or moved away from the deployment progress screen, you can monitor the
provisioning operation via the managed instance's Over view page, or via PowerShell or the Azure CLI. For more
information, see Monitor operations.
You can cancel the provisioning process through Azure portal, or via PowerShell or the Azure CLI or other tooling
using the REST API. See Canceling Azure SQL Managed Instance management operations.
IMPORTANT
The start of SQL Managed Instance creation could be delayed when there are other impacting operations, such
as long-running restore or scaling operations on other managed instances in the same subnet. To learn more, see
Management operations cross-impact.
In order to be able to get the status of managed instance creation, you need to have read permissions over the
resource group. If you don't have this permission or revoke it while the managed instance is in creation process, this
can cause SQL Managed Instance not to be visible in the list of resource group deployments.
To change or add routes, open the Routes in the Route table settings.
3. Return to the resource group, and select the network security group (NSG) object that was created.
4. Review the inbound and outbound security rules.
To change or add rules, open the Inbound Security Rules and Outbound security rules in the
Network security group settings.
IMPORTANT
If you have configured a public endpoint for SQL Managed Instance, you need to open ports to allow network traffic
allowing connections to SQL Managed Instance from the public internet. For more information, see Configure a public
endpoint for SQL Managed Instance.
The value copied represents a fully qualified domain name (FQDN) that can be used to connect to SQL
Managed Instance. It is similar to the following address example:
your_host_name.a1b2c3d4e5f6.database.windows.net.
Next steps
To learn about how to connect to SQL Managed Instance:
For an overview of the connection options for applications, see Connect your applications to SQL Managed
Instance.
For a quickstart that shows how to connect to SQL Managed Instance from an Azure virtual machine, see
Configure an Azure virtual machine connection.
For a quickstart that shows how to connect to SQL Managed Instance from an on-premises client computer
by using a point-to-site connection, see Configure a point-to-site connection.
To restore an existing SQL Server database from on-premises to SQL Managed Instance:
Use the Azure Database Migration Service for migration to restore from a database backup file.
Use the T-SQL RESTORE command to restore from a database backup file.
For advanced monitoring of SQL Managed Instance database performance with built-in troubleshooting
intelligence, see Monitor Azure SQL Managed Instance by using Azure SQL Analytics.
Quickstart: Create a managed instance using Azure
PowerShell
In this quickstart, learn to create an instance of Azure SQL Managed Instance using Azure PowerShell.
Prerequisite
An active Azure subscription. If you don't have one, create a free account.
The latest version of Azure PowerShell.
Set variables
Creating a SQL Managed Instance requires creating several resources within Azure, so the Azure
PowerShell commands rely on variables to simplify the experience. Define the variables, and then execute the
cmdlets in each section within the same PowerShell session.
$NSnetworkModels = "Microsoft.Azure.Commands.Network.Models"
$NScollections = "System.Collections.Generic"
# The SubscriptionId in which to create these objects
$SubscriptionId = ''
# Set the resource group name and location for your managed instance
$resourceGroupName = "myResourceGroup-$(Get-Random)"
$location = "eastus2"
# Set the networking values for your managed instance
$vNetName = "myVnet-$(Get-Random)"
$vNetAddressPrefix = "10.0.0.0/16"
$miSubnetName = "myMISubnet-$(Get-Random)"
$miSubnetAddressPrefix = "10.0.0.0/24"
#Set the managed instance name for the new managed instance
$instanceName = "myMIName-$(Get-Random)"
# Set the admin login and password for your managed instance
$miAdminSqlLogin = "SqlAdmin"
$miAdminSqlPassword = "ChangeYourAdminPassword1"
# Set the managed instance service tier, compute level, and license mode
$edition = "General Purpose"
$vCores = 4
$maxStorage = 128
$computeGeneration = "Gen5"
$license = "LicenseIncluded" # Use "BasePrice" if you have a SQL Server license that can be used for the Azure Hybrid Benefit (AHB) discount; otherwise, use "LicenseIncluded"
Configure networking
After your resource group is created, configure the networking resources such as the virtual network, subnets,
network security group, and routing table. This example demonstrates the use of the Delegate subnet for
Managed Instance deployment script, which is available on GitHub as delegate-subnet.ps1.
To do so, execute this PowerShell script:
# Configure virtual network, subnets, network security group, and routing table
$virtualNetwork = New-AzVirtualNetwork `
-ResourceGroupName $resourceGroupName `
-Location $location `
-Name $vNetName `
-AddressPrefix $vNetAddressPrefix
Add-AzVirtualNetworkSubnetConfig `
-Name $miSubnetName `
-VirtualNetwork $virtualNetwork `
-AddressPrefix $miSubnetAddressPrefix |
Set-AzVirtualNetwork
$scriptUrlBase = 'https://raw.githubusercontent.com/Microsoft/sql-server-
samples/master/samples/manage/azure-sql-db-managed-instance/delegate-subnet'
$parameters = @{
subscriptionId = $SubscriptionId
resourceGroupName = $resourceGroupName
virtualNetworkName = $vNetName
subnetName = $miSubnetName
}
# Create credentials
$secpassword = ConvertTo-SecureString $miAdminSqlPassword -AsPlainText -Force
$credential = New-Object System.Management.Automation.PSCredential ($miAdminSqlLogin, $secpassword)
This operation may take some time to complete. To learn more, see Management operations.
Clean up resources
Keep the resource group and managed instance if you want to go on to the next steps and learn how to connect to your
SQL Managed Instance using a client virtual machine.
When you're finished using these resources, you can delete the resource group you created, which also deletes
the managed instance and all other resources within it.
# Clean up deployment
Remove-AzResourceGroup -ResourceGroupName $resourceGroupName
Next steps
After your SQL Managed Instance is created, deploy a client VM to connect to your SQL Managed Instance, and
restore a sample database.
Create client VM Restore database
Quickstart: Create an Azure SQL Managed Instance
using Bicep
This quickstart focuses on the process of deploying a Bicep file to create an Azure SQL Managed Instance and
vNet. Azure SQL Managed Instance is an intelligent, fully managed, scalable cloud database, with almost 100%
feature parity with the SQL Server database engine.
Bicep is a domain-specific language (DSL) that uses declarative syntax to deploy Azure resources. It provides
concise syntax, reliable type safety, and support for code reuse. Bicep offers the best authoring experience for
your infrastructure-as-code solutions in Azure.
Prerequisites
If you don't have an Azure subscription, create a free account.
@description('Enter password.')
@secure()
param administratorLoginPassword string
@description('Enter location. If you leave this field blank resource group location would be used.')
param location string = resourceGroup().location
@description('Enter virtual network name. If you leave this field blank name will be created by the
template.')
param virtualNetworkName string = 'SQLMI-VNET'
CLI
PowerShell
NOTE
Replace <instance-name> with the name of the managed instance. Replace <admin-login> with the administrator
username. You'll be prompted to enter administratorLoginPassword .
When the deployment finishes, you should see a message indicating the deployment succeeded.
CLI
PowerShell
Clean up resources
When no longer needed, use the Azure portal, Azure CLI, or Azure PowerShell to delete the resource group and
its resources.
CLI
PowerShell
Next steps
Configure an Azure VM to connect to Azure SQL Managed Instance
Quickstart: Create an Azure SQL Managed Instance
using an ARM template
This quickstart focuses on the process of deploying an Azure Resource Manager template (ARM template) to
create an Azure SQL Managed Instance and vNet. Azure SQL Managed Instance is an intelligent, fully managed,
scalable cloud database, with almost 100% feature parity with the SQL Server database engine.
An ARM template is a JavaScript Object Notation (JSON) file that defines the infrastructure and configuration for
your project. The template uses declarative syntax. In declarative syntax, you describe your intended deployment
without writing the sequence of programming commands to create the deployment.
If your environment meets the prerequisites and you're familiar with using ARM templates, select the Deploy to
Azure button. The template will open in the Azure portal.
Prerequisites
If you don't have an Azure subscription, create a free account.
{
"$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
"contentVersion": "1.0.0.0",
"metadata": {
"_generator": {
"name": "bicep",
"version": "0.6.1.6515",
"templateHash": "13317687096436273875"
}
},
"parameters": {
"managedInstanceName": {
"type": "string",
"metadata": {
"description": "Enter managed instance name."
}
},
"administratorLogin": {
"type": "string",
"metadata": {
"description": "Enter user name."
}
},
"administratorLoginPassword": {
"type": "secureString",
"metadata": {
"description": "Enter password."
}
},
"location": {
"type": "string",
"defaultValue": "[resourceGroup().location]",
"defaultValue": "[resourceGroup().location]",
"metadata": {
"description": "Enter location. If you leave this field blank resource group location would be
used."
}
},
"virtualNetworkName": {
"type": "string",
"defaultValue": "SQLMI-VNET",
"metadata": {
"description": "Enter virtual network name. If you leave this field blank name will be created by
the template."
}
},
"addressPrefix": {
"type": "string",
"defaultValue": "10.0.0.0/16",
"metadata": {
"description": "Enter virtual network address prefix."
}
},
"subnetName": {
"type": "string",
"defaultValue": "ManagedInstance",
"metadata": {
"description": "Enter subnet name."
}
},
"subnetPrefix": {
"type": "string",
"defaultValue": "10.0.0.0/24",
"metadata": {
"description": "Enter subnet address prefix."
}
},
"skuName": {
"type": "string",
"defaultValue": "GP_Gen5",
"allowedValues": [
"GP_Gen5",
"BC_Gen5"
],
"metadata": {
"description": "Enter sku name."
}
},
"vCores": {
"type": "int",
"defaultValue": 16,
"allowedValues": [
8,
16,
24,
32,
40,
64,
80
],
"metadata": {
"description": "Enter number of vCores."
}
},
"storageSizeInGB": {
"type": "int",
"defaultValue": 256,
"maxValue": 8192,
"minValue": 32,
"metadata": {
"description": "Enter storage size."
}
},
"licenseType": {
"type": "string",
"defaultValue": "LicenseIncluded",
"allowedValues": [
"BasePrice",
"LicenseIncluded"
],
"metadata": {
"description": "Enter license type."
}
}
},
"variables": {
"networkSecurityGroupName": "[format('SQLMI-{0}-NSG', parameters('managedInstanceName'))]",
"routeTableName": "[format('SQLMI-{0}-Route-Table', parameters('managedInstanceName'))]"
},
"resources": [
{
"type": "Microsoft.Network/networkSecurityGroups",
"apiVersion": "2021-08-01",
"name": "[variables('networkSecurityGroupName')]",
"location": "[parameters('location')]",
"properties": {
"securityRules": [
{
"name": "allow_tds_inbound",
"properties": {
"description": "Allow access to data",
"protocol": "Tcp",
"sourcePortRange": "*",
"destinationPortRange": "1433",
"sourceAddressPrefix": "VirtualNetwork",
"destinationAddressPrefix": "*",
"access": "Allow",
"priority": 1000,
"direction": "Inbound"
}
},
{
"name": "allow_redirect_inbound",
"properties": {
"description": "Allow inbound redirect traffic to Managed Instance inside the virtual
network",
"protocol": "Tcp",
"sourcePortRange": "*",
"destinationPortRange": "11000-11999",
"sourceAddressPrefix": "VirtualNetwork",
"destinationAddressPrefix": "*",
"access": "Allow",
"priority": 1100,
"direction": "Inbound"
}
},
{
"name": "deny_all_inbound",
"properties": {
"description": "Deny all other inbound traffic",
"protocol": "*",
"sourcePortRange": "*",
"destinationPortRange": "*",
"sourceAddressPrefix": "*",
"destinationAddressPrefix": "*",
"access": "Deny",
"priority": 4096,
"direction": "Inbound"
}
},
{
"name": "deny_all_outbound",
"properties": {
"description": "Deny all other outbound traffic",
"protocol": "*",
"sourcePortRange": "*",
"destinationPortRange": "*",
"sourceAddressPrefix": "*",
"destinationAddressPrefix": "*",
"access": "Deny",
"priority": 4096,
"direction": "Outbound"
}
}
]
}
},
{
"type": "Microsoft.Network/routeTables",
"apiVersion": "2021-08-01",
"name": "[variables('routeTableName')]",
"location": "[parameters('location')]",
"properties": {
"disableBgpRoutePropagation": false
}
},
{
"type": "Microsoft.Network/virtualNetworks",
"apiVersion": "2021-08-01",
"name": "[parameters('virtualNetworkName')]",
"location": "[parameters('location')]",
"properties": {
"addressSpace": {
"addressPrefixes": [
"[parameters('addressPrefix')]"
]
},
"subnets": [
{
"name": "[parameters('subnetName')]",
"properties": {
"addressPrefix": "[parameters('subnetPrefix')]",
"routeTable": {
"id": "[resourceId('Microsoft.Network/routeTables', variables('routeTableName'))]"
},
"networkSecurityGroup": {
"id": "[resourceId('Microsoft.Network/networkSecurityGroups',
variables('networkSecurityGroupName'))]"
},
"delegations": [
{
"name": "managedInstanceDelegation",
"properties": {
"serviceName": "Microsoft.Sql/managedInstances"
}
}
]
}
}
]
},
"dependsOn": [
"[resourceId('Microsoft.Network/networkSecurityGroups', variables('networkSecurityGroupName'))]",
"[resourceId('Microsoft.Network/routeTables', variables('routeTableName'))]"
]
},
{
"type": "Microsoft.Sql/managedInstances",
"apiVersion": "2021-11-01-preview",
"apiVersion": "2021-11-01-preview",
"name": "[parameters('managedInstanceName')]",
"location": "[parameters('location')]",
"sku": {
"name": "[parameters('skuName')]"
},
"identity": {
"type": "SystemAssigned"
},
"properties": {
"administratorLogin": "[parameters('administratorLogin')]",
"administratorLoginPassword": "[parameters('administratorLoginPassword')]",
"subnetId": "[resourceId('Microsoft.Network/virtualNetworks/subnets',
parameters('virtualNetworkName'), parameters('subnetName'))]",
"storageSizeInGB": "[parameters('storageSizeInGB')]",
"vCores": "[parameters('vCores')]",
"licenseType": "[parameters('licenseType')]"
},
"dependsOn": [
"[resourceId('Microsoft.Network/virtualNetworks', parameters('virtualNetworkName'))]"
]
}
]
}
IMPORTANT
Deploying a managed instance is a long-running operation. Deployment of the first instance in the subnet typically takes
much longer than deploying into a subnet with existing managed instances. For average provisioning times, see SQL
Managed Instance management operations.
PowerShell
Azure CLI
$projectName = Read-Host -Prompt "Enter a project name that is used for generating resource names"
$location = Read-Host -Prompt "Enter the location (i.e. centralus)"
$templateUri = "https://raw.githubusercontent.com/Azure/azure-quickstart-
templates/master/quickstarts/microsoft.sql/sqlmi-new-vnet/azuredeploy.json"
$resourceGroupName = "${projectName}rg"
Clean up resources
Keep the managed instance if you want to continue with the Next steps. Otherwise, delete the managed instance and related resources after you complete any additional tutorials. After you delete a managed instance, see Delete a subnet after deleting a managed instance.
To delete the resource group:
PowerShell
Azure CLI
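For the PowerShell tab, a minimal sketch that deletes the resource group and everything in it (the variable assumes the naming used in the deployment step):
# Deleting the resource group removes the managed instance and all related resources.
Remove-AzResourceGroup -Name $resourceGroupName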
Next steps
Configure an Azure VM to connect to Azure SQL Managed Instance
Deploy Azure SQL Managed Instance to an instance pool
9/13/2022 • 8 minutes to read
PowerShell
Azure CLI
To use PowerShell, install the latest version of PowerShell Core, and follow instructions to Install the Azure
PowerShell module.
Available PowerShell commands: the Az.Sql module provides instance pool cmdlets such as New-AzSqlInstancePool, Get-AzSqlInstancePool, Set-AzSqlInstancePool, Remove-AzSqlInstancePool, and Get-AzSqlInstancePoolUsage.
For operations on instances, whether inside a pool or standalone, use the standard managed instance commands. When you run these commands against an instance in a pool, the instance pool name property must be populated.
Deployment process
To deploy a managed instance into an instance pool, you must first deploy the instance pool, which is a one-time
long-running operation where the duration is the same as deploying a single instance created in an empty
subnet. After that, you can deploy a managed instance into the pool, which is a relatively fast operation that
typically takes up to five minutes. The instance pool parameter must be explicitly specified as part of this
operation.
In public preview, both actions are only supported using PowerShell and Azure Resource Manager templates.
The Azure portal experience is not currently available.
After a managed instance is deployed to a pool, you can use the Azure portal to change its properties on the
pricing tier page.
IMPORTANT
Deploying an instance pool is a long-running operation that takes approximately 4.5 hours.
PowerShell
Azure CLI
$instancePool = New-AzSqlInstancePool `
-ResourceGroupName "myResourceGroup" `
-Name "mi-pool-name" `
-SubnetId $subnet.Id `
-LicenseType "LicenseIncluded" `
-VCore 8 `
-Edition "GeneralPurpose" `
-ComputeGeneration "Gen5" `
-Location "westeurope"
IMPORTANT
Because deploying an instance pool is a long-running operation, you need to wait until it completes before running any of the following steps in this article.
Deploying an instance inside a pool takes a couple of minutes. After the first instance has been created,
additional instances can be created:
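A minimal sketch of that deployment (instance name, pool name, credential, and sizes are placeholders, and the exact set of required New-AzSqlInstance parameters may vary by Az.Sql version):
# Prompt for the instance admin login and password.
$cred = Get-Credential -Message "Instance admin login and password"
# -InstancePoolName places the new instance inside the existing pool.
$instanceTwo = New-AzSqlInstance `
    -ResourceGroupName "myResourceGroup" `
    -Name "mi-two-name" `
    -InstancePoolName "mi-pool-name" `
    -AdministratorCredential $cred `
    -VCore 2 `
    -StorageSizeInGB 256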
Create a database
To create and manage databases in a managed instance that's inside a pool, use the single instance commands.
To create a database inside a managed instance:
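A minimal sketch (resource group, instance, and database names are placeholders) using the standard New-AzSqlInstanceDatabase cmdlet:
# Creates a database on an instance that lives inside the pool.
New-AzSqlInstanceDatabase `
    -ResourceGroupName "myResourceGroup" `
    -InstanceName "mi-one-name" `
    -Name "mi-pool-db"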
To get a detailed usage overview of the pool and the instances inside it:
$instancePool | Get-AzSqlInstancePoolUsage
NOTE
To check the limits on the number of databases per instance pool, and per managed instance deployed inside the pool, see the Instance pool resource limits section.
Scale
After populating a managed instance with databases, you may hit instance limits on storage or performance. In that case, if pool usage has not been exceeded, you can scale your instance. Scaling a managed instance inside a pool is an operation that takes a couple of minutes. The prerequisite for scaling is that sufficient vCores and storage are available at the instance pool level.
To update the number of vCores and storage size:
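A minimal sketch (names and target sizes are placeholders) using the standard Set-AzSqlInstance cmdlet; the operation succeeds only if the pool still has that many unused vCores and that much unused storage:
# Scale an instance that lives inside the pool.
Set-AzSqlInstance `
    -ResourceGroupName "myResourceGroup" `
    -Name "mi-one-name" `
    -VCore 4 `
    -StorageSizeInGB 512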
Connect
To connect to a managed instance in a pool, the following two steps are required:
1. Enable the public endpoint for the instance.
2. Add an inbound rule to the network security group (NSG).
After both steps are complete, you can connect to the instance by using a public endpoint address, port, and
credentials provided during instance creation.
Enable the public endpoint
Enabling the public endpoint for an instance can be done through the Azure portal or by using the following
PowerShell command:
$instanceOne | Set-AzSqlInstance -InstancePoolName "pool-mi-001" -PublicDataEndpointEnabled $true
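Step 2, allowing traffic on the NSG, can also be scripted. A minimal sketch (NSG name, resource group, source address, and rule priority are placeholders) that opens the public endpoint port 3342 with Az.Network cmdlets:
# Add an inbound rule in front of deny_all_inbound, then persist the change.
$nsg = Get-AzNetworkSecurityGroup -Name "mi-nsg-name" -ResourceGroupName "myResourceGroup"
$nsg | Add-AzNetworkSecurityRuleConfig `
    -Name "allow_public_endpoint_inbound" `
    -Description "Allow public endpoint traffic" `
    -Access Allow -Protocol Tcp -Direction Inbound -Priority 300 `
    -SourceAddressPrefix "<your-client-ip>" -SourcePortRange "*" `
    -DestinationAddressPrefix "*" -DestinationPortRange 3342
$nsg | Set-AzNetworkSecurityGroup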
Restore-AzSqlInstanceDatabase -FromPointInTimeBackup `
-ResourceGroupName $resourceGroupName `
-InstanceName $managedInstanceName `
-Name $databaseName `
-PointInTime $pointInTime `
-TargetInstanceDatabaseName $targetDatabase `
-TargetResourceGroupName $targetResourceGroupName `
-TargetInstanceName $targetInstanceName
4. Point your application to the new instance and resume its workloads.
If there are multiple databases, repeat the process for each database.
Next steps
For a features and comparison list, see SQL common features.
For more information about VNet configuration, see SQL Managed Instance VNet configuration.
For a quickstart that creates a managed instance and restores a database from a backup file, see Create a
managed instance.
For a tutorial about using Azure Database Migration Service for migration, see SQL Managed Instance
migration using Database Migration Service.
For advanced monitoring of SQL Managed Instance database performance with built-in troubleshooting
intelligence, see Monitor Azure SQL Managed Instance using Azure SQL Analytics.
For pricing information, see SQL Managed Instance pricing.
Create an Azure SQL Managed Instance with a user-assigned managed identity
9/13/2022 • 10 minutes to read
NOTE
If you are looking for a guide on Azure SQL Database, see Create an Azure SQL logical server using a user-assigned
managed identity
This how-to guide outlines the steps to create an Azure SQL Managed Instance with a user-assigned managed
identity. For more information on the benefits of using a user-assigned managed identity for the server identity
in Azure SQL Database, see User-assigned managed identity in Azure AD for Azure SQL.
Prerequisites
To provision a managed instance with a user-assigned managed identity, you need the SQL Managed Instance Contributor role (or a role with greater permissions), along with an Azure RBAC role that contains the following action:
Microsoft.ManagedIdentity/userAssignedIdentities/*/assign/action - For example, the Managed
Identity Operator has this action.
Create a user-assigned managed identity and assign it the necessary permission to be a server or managed
instance identity. For more information, see Manage user-assigned managed identities and user-assigned
managed identity permissions for Azure SQL.
Az.Sql module 3.4 or higher is required when using PowerShell for user-assigned managed identities.
The Azure CLI 2.26.0 or higher is required to use the Azure CLI with user-assigned managed identities.
For a list of limitations and known issues with using user-assigned managed identity, see User-assigned
managed identity in Azure AD for Azure SQL
Portal
The Azure CLI
PowerShell
REST API
ARM Template
1. Browse to the Select SQL deployment option page in the Azure portal.
2. If you aren't already signed in to Azure portal, sign in when prompted.
3. Under SQL managed instances , leave Resource type set to Single instance , and select Create .
4. Fill out the mandatory information required on the Basics tab for Project details and Managed
Instance details . This is a minimum set of information required to provision a SQL Managed Instance.
For more information on the configuration options, see Quickstart: Create an Azure SQL Managed
Instance.
5. Under Authentication , select a preferred authentication model. If you're looking to only configure Azure
AD-only authentication, see our guide here.
6. Next, go through the Networking tab configuration, or leave the default settings.
7. On the Security tab, under Identity , select Configure Identities .
8. On the Identity blade, under User assigned managed identity , select Add . Select the desired
Subscription and then under User assigned managed identities select the desired user assigned
managed identity from the selected subscription. Then select the Select button.
9. Under Primary identity, select the same user-assigned managed identity selected in the previous step.
NOTE
If the system-assigned managed identity is the primary identity, the Primary identity field must be empty.
See also
User-assigned managed identity in Azure AD for Azure SQL
Create an Azure SQL logical server using a user-assigned managed identity
Enabling service-aided subnet configuration for Azure SQL Managed Instance
9/13/2022 • 2 minutes to read
IMPORTANT
Once subnet delegation is turned on, you can't turn it off until the last virtual cluster is removed from the subnet. For more details on virtual cluster lifetime, see the following article.
NOTE
Because service-aided subnet configuration is an essential feature for maintaining the SLA, starting May 1st, 2020, it won't be possible to deploy managed instances in subnets that are not delegated to the managed instance resource provider. On July 1st, 2020, all subnets containing managed instances will be automatically delegated to the managed instance resource provider.
Import-Module Az.Accounts
Import-Module Az.Sql
Connect-AzAccount
# Replace rg-name with the resource group for your managed instance, and replace mi-name with the name of your managed instance
$mi = Get-AzSqlInstance -Name "mi-name" -ResourceGroupName "rg-name"
$mi.SubnetId
Once you find the managed instance subnet, you need to delegate it to the Microsoft.Sql/managedInstances resource provider as described in the following article. Note that the referenced article uses the Microsoft.DBforPostgreSQL/serversv2 resource provider as an example; you'll need to use the Microsoft.Sql/managedInstances resource provider instead.
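A minimal sketch of that delegation (virtual network, subnet, and delegation names are placeholders) using Az.Network cmdlets:
# Load the subnet, add the Microsoft.Sql/managedInstances delegation, and save the virtual network.
$vnet = Get-AzVirtualNetwork -Name "vnet-name" -ResourceGroupName "rg-name"
$subnet = Get-AzVirtualNetworkSubnetConfig -Name "mi-subnet-name" -VirtualNetwork $vnet
$subnet = Add-AzDelegation -Name "miDelegation" -ServiceName "Microsoft.Sql/managedInstances" -Subnet $subnet
Set-AzVirtualNetwork -VirtualNetwork $vnet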
IMPORTANT
Enabling service-aided configuration doesn't cause failover or interruption in connectivity for managed instances that are
already in the subnet.
Configure public endpoint in Azure SQL Managed Instance
9/13/2022 • 3 minutes to read
Permissions
Due to the sensitivity of data in a managed instance, enabling the managed instance public endpoint requires a two-step process. This security measure adheres to separation of duties (SoD):
Enabling the public endpoint on a managed instance needs to be done by the managed instance admin. The managed instance admin can be found on the Overview page of your managed instance resource.
Allowing traffic through a network security group needs to be done by a network admin. For more information, see network security group permissions.
Install-Module -Name Az
Import-Module Az.Accounts
Import-Module Az.Sql
Connect-AzAccount
# Replace rg-name with the resource group for your managed instance, and replace mi-name with the name of your managed instance
Set-AzSqlInstance -Name "mi-name" -ResourceGroupName "rg-name" -PublicDataEndpointEnabled $true
2. Select the Subnets tab on the left configuration pane of your Virtual network, and make note of the
SECURITY GROUP for your managed instance.
3. Go back to your resource group that contains your managed instance. You should see the Network
security group name noted above. Select the name to go into the network security group configuration
page.
4. Select the Inbound security rules tab, and Add a rule that has higher priority than the
deny_all_inbound rule with the following settings:
Source | Any IP address or Service tag | For Azure services like Power BI, select the Azure Cloud Service Tag. For your computer or Azure virtual machine, use your NAT IP address.
NOTE
Port 3342 is used for public endpoint connections to managed instance, and cannot be changed at this point.
Next steps
Learn about using Azure SQL Managed Instance securely with public endpoint.
Configure minimal TLS version in Azure SQL Managed Instance
9/13/2022 • 2 minutes to read
The Minimal Transport Layer Security (TLS) Version setting allows customers to control the version of TLS used
by their Azure SQL Managed Instance.
At present we support TLS 1.0, 1.1, and 1.2. Setting a minimal TLS version ensures that the specified version and all newer TLS versions are supported. For example, choosing a minimal TLS version of 1.1 means only connections with TLS 1.1 and 1.2 are accepted, and TLS 1.0 is rejected. After testing to confirm your applications support it, we recommend setting the minimal TLS version to 1.2, because it includes fixes for vulnerabilities found in previous versions and is the highest version of TLS supported in Azure SQL Managed Instance.
For customers with applications that rely on older versions of TLS, we recommend setting the Minimal TLS
Version per the requirements of your applications. For customers that rely on applications to connect using an
unencrypted connection, we recommend not setting any Minimal TLS Version.
For more information, see TLS considerations for SQL Database connectivity.
After setting the minimal TLS version, login attempts from clients that use a TLS version lower than the minimal TLS version of the server will fail with the following error:
Error 47072
Login failed with invalid TLS version
IMPORTANT
The PowerShell Azure Resource Manager module is still supported by Azure SQL Database, but all future development is
for the Az.Sql module. For these cmdlets, see AzureRM.Sql. The arguments for the commands in the Az module and in the
AzureRm modules are substantially identical. The following script requires the Azure PowerShell module.
The following PowerShell script shows how to Get and Set the Minimal TLS Version property at the
instance level:
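A minimal sketch of that script (instance and resource group names are placeholders), assuming Get-AzSqlInstance and Set-AzSqlInstance with the MinimalTlsVersion property and parameter:
# Read the current minimal TLS version.
$mi = Get-AzSqlInstance -Name "mi-name" -ResourceGroupName "rg-name"
$mi.MinimalTlsVersion
# Raise the minimal TLS version to 1.2.
Set-AzSqlInstance -Name "mi-name" -ResourceGroupName "rg-name" -MinimalTlsVersion "1.2"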
Prerequisites
This quickstart uses the resources created in Create a managed instance as its starting point.
Name | Any valid name | For valid names, see Naming rules and restrictions.
Address range (CIDR block) | A valid range | The default value is good for this quickstart.
Network security group | None | The default value is good for this quickstart.
Service endpoints | 0 selected | The default value is good for this quickstart.
2. Fill out the form using the information in the following table:
SETTING | SUGGESTED VALUE | DESCRIPTION
Resource Group | The resource group that you specified in the Create SQL Managed Instance quickstart | This resource group must be the one in which the VNet exists.
Location | The location for the resource group | This value is populated based on the resource group selected.
Virtual machine name | Any valid name | For valid names, see Naming rules and restrictions.
Admin Username | Any valid username | For valid names, see Naming rules and restrictions. Don't use "serveradmin" as that is a reserved server-level role. You use this username any time you connect to the VM.
Virtual Machine Size | Any valid size | The default in this template of Standard_B2s is sufficient for this quickstart.
Subnet name | The name of the subnet that you created in the previous procedure | Don't choose the subnet in which you created the managed instance.
artifacts Location Sas token | Leave blank | Don't change this value.
If you used the suggested VNet name and the default subnet when creating your SQL Managed Instance, you don't need to change the last two parameters. Otherwise, change these values to the values that you entered when you set up the network environment.
3. Select the I agree to the terms and conditions stated above checkbox.
4. Select Purchase to deploy the Azure VM in your network.
5. Select the Notifications icon to view the status of deployment.
IMPORTANT
Wait approximately 15 minutes after the virtual machine is created, to give the post-creation scripts time to install SQL Server Management Studio, before continuing.
2. Select Connect .
A Remote Desktop Protocol file (.rdp file) form appears with the public IP address and port number for
the virtual machine.
3. Select Download RDP File .
NOTE
You can also use SSH to connect to your VM.
Next steps
For a quickstart showing how to connect from an on-premises client computer using a point-to-site
connection, see Configure a point-to-site connection.
For an overview of the connection options for applications, see Connect your applications to SQL Managed
Instance.
To restore an existing SQL Server database from on-premises to a managed instance, you can use Azure
Database Migration Service for migration or the T-SQL RESTORE command to restore from a database
backup file.
Quickstart: Configure a point-to-site connection to Azure SQL Managed Instance from on-premises
9/13/2022 • 3 minutes to read
Prerequisites
This quickstart:
Uses the resources created in Create a managed instance as its starting point.
Requires PowerShell 5.1 and Azure PowerShell 1.4.0 or later on your on-premises client computer. If
necessary, see the instructions for installing the Azure PowerShell module.
Requires the newest version of SQL Server Management Studio on your on-premises client computer.
$scriptUrlBase = 'https://raw.githubusercontent.com/Microsoft/sql-server-samples/master/samples/manage/azure-sql-db-managed-instance/attach-vpn-gateway'
$parameters = @{
subscriptionId = '<subscriptionId>'
resourceGroupName = '<resourceGroupName>'
virtualNetworkName = '<virtualNetworkName>'
certificateNamePrefix = '<certificateNamePrefix>'
}
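The invocation that follows the parameter block is not shown above; a sketch of the typical pattern (the script file name and argument list are assumptions about the sample repository) downloads the script and runs it with those parameters:
# Download the script body and execute it with the parameters defined above.
$scriptBody = (Invoke-WebRequest -Uri ($scriptUrlBase + '/attach-vpn-gateway.ps1')).Content
Invoke-Command -ScriptBlock ([ScriptBlock]::Create($scriptBody)) -ArgumentList $parameters, $scriptUrlBase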
3. Paste the script in your PowerShell window and provide the required parameters. The values for
<subscriptionId> , <resourceGroup> , and <virtualNetworkName> should match the ones that you used for
the Create a managed instance quickstart. The value for <certificateNamePrefix> can be a string of your
choice.
4. Execute the PowerShell script.
IMPORTANT
Do not continue until the PowerShell script completes.
4. On your on-premises client computer, extract the files from the zip file and then open the folder with the
extracted files.
5. Open the WindowsAmd64 folder and open the VpnClientSetupAmd64.exe file.
6. If you receive a Windows protected your PC message, click More info and then click Run anyway .
7. In the User Account Control dialog box, click Yes to continue.
8. In the dialog box referencing your virtual network, select Yes to install the VPN client for your virtual
network.
4. When you're prompted that Connection Manager needs elevated privileges to update your route table,
choose Continue .
5. Select Yes in the User Account Control dialog box to continue.
You've established a VPN connection to your SQL Managed Instance VNet.
Next steps
For a quickstart showing how to connect from an Azure virtual machine, see Configure a point-to-site
connection.
For an overview of the connection options for applications, see Connect your applications to SQL Managed
Instance.
To restore an existing SQL Server database from on-premises to a managed instance, you can use Azure
Database Migration Service for migration or the T-SQL RESTORE command to restore from a database
backup file.
Manage Azure SQL Managed Instance long-term backup retention
9/13/2022 • 10 minutes to read
Prerequisites
Portal
Azure CLI
PowerShell
1. In the Azure portal, select your managed instance and then click Backups . On the Retention policies
tab, select the database(s) on which you want to set or modify long-term backup retention policies.
Changes will not apply to any databases left unselected.
2. In the Configure policies pane, specify your desired retention period for weekly, monthly, or yearly
backups. Choose a retention period of '0' to indicate that no long-term backup retention should be set.
3. When complete, click Apply .
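For the PowerShell tab, a minimal sketch (instance, database, and retention values are placeholders) that sets a long-term retention policy with the Az.Sql cmdlet:
# Keep weekly backups for 12 weeks, monthly backups for 6 months, and the week-16 backup of each year for 5 years.
Set-AzSqlInstanceDatabaseBackupLongTermRetentionPolicy `
    -ResourceGroupName "myResourceGroup" `
    -InstanceName "mi-name" `
    -DatabaseName "mi-database" `
    -WeeklyRetention "P12W" `
    -MonthlyRetention "P6M" `
    -YearlyRetention "P5Y" `
    -WeekOfYear 16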
IMPORTANT
When you enable a long-term backup retention policy, it may take up to 7 days for the first backup to become visible and
available to restore. For details of the LTR backup cadence, see long-term backup retention.
View the backups that are retained for a specific database with an LTR policy, and restore from those backups.
1. In the Azure portal, select your managed instance and then click Backups . On the Available backups
tab, select the database for which you want to see available backups. Click Manage .
2. In the Manage backups pane, review the available backups.
3. Select the backup from which you want to restore, click Restore , then on the restore page specify the
new database name. The backup and source will be pre-populated on this page.
4. Click Review + Create to review your Restore details. Then click Create to restore your database from
the chosen backup.
5. On the toolbar, click the notification icon to view the status of the restore job.
6. When the restore job is completed, open the Managed Instance Overview page to view the newly restored database.
NOTE
From here, you can connect to the restored database using SQL Server Management Studio to perform needed tasks,
such as to extract a bit of data from the restored database to copy into the existing database or to delete the existing
database and rename the restored database to the existing database name.
Next steps
To learn about service-generated automatic backups, see automatic backups
To learn about long-term backup retention, see long-term backup retention
Quickstart: Restore a database to Azure SQL Managed Instance with SSMS
9/13/2022 • 5 minutes to read
NOTE
For more information on migration using Azure Database Migration Service, see Tutorial: Migrate SQL Server to an
Azure Managed Instance using Database Migration Service.
For more information on various migration methods, see SQL Server to Azure SQL Managed Instance Guide.
Prerequisites
This quickstart:
Uses resources from the Create a managed instance quickstart.
Requires the latest version of SSMS installed.
Requires SSMS to connect to SQL Managed Instance. See these quickstarts on how to connect:
Enable a public endpoint on SQL Managed Instance. This approach is recommended for this
quickstart.
Connect to SQL Managed Instance from an Azure VM.
Configure a point-to-site connection to SQL Managed Instance from on-premises.
NOTE
For more information on backing up and restoring a SQL Server database by using Blob Storage and a shared access
signature key, see SQL Server Backup to URL.
2. In Select backup devices , select Add . In Backup media type , URL is the only option that's available
because it's the only source type that's supported. Select OK .
3. In Select a Backup File Location , choose from one of three options to provide information about the
location of your backup files:
Select a pre-registered storage container from the Azure storage container list.
Enter a new storage container and a shared access signature. A new SQL credential will be registered
for you.
Select Add to browse more storage containers from your Azure subscription.
If you select Add , proceed to the next section, Browse Azure subscription storage containers. If you use a
different method to provide the location of the backup files, skip to Restore the database.
Browse Azure subscription storage containers
1. In Connect to a Microsoft Subscription , select Sign in to sign in to your Azure subscription.
2. Sign in to your Microsoft Account to initiate the session in Azure.
3. Select the subscription of the storage account that contains the backup files.
4. Select the storage account that contains the backup files.
The restore process starts. The duration depends on the size of the backup set.
3. When the restore process finishes, a dialog shows that it was successful. Select OK .
IMPORTANT
CREDENTIAL must match the container path, begin with https, and can't contain a trailing forward slash.
IDENTITY must be SHARED ACCESS SIGNATURE .
SECRET must be the shared access signature token and can't contain a leading ? .
5. Run the following statement to restore the Wide World Importers database.
If the restore process is terminated with the message ID 22003, create a new backup file that contains
backup checksums, and start the restore process again. See Enable or disable backup checksums during
backup or restore.
6. Run the following statement to track the status of your restore process.
7. When the restore process finishes, view the database in Object Explorer . You can verify that the
database is restored by using the sys.dm_operation_status view.
NOTE
A database restore operation is asynchronous and retryable. You might get an error in SSMS if the connection fails or a
time-out expires. SQL Managed Instance keeps trying to restore the database in the background, and you can track the
progress of the restore process by using the sys.dm_exec_requests and sys.dm_operation_status views.
In some phases of the restore process, you see a unique identifier instead of the actual database name in the system
views. To learn about RESTORE statement behavior differences, see T-SQL differences between SQL Server & Azure SQL
Managed Instance.
Next steps
For information about troubleshooting a backup to a URL, see SQL Server Backup to URL best practices and
troubleshooting.
For an overview of app connection options, see Connect your applications to SQL Managed Instance.
To query by using your favorite tools or languages, see Quickstarts: Azure SQL Database connect and query.
Tutorial: Security in Azure SQL Managed Instance using Azure AD server principals (logins)
9/13/2022 • 11 minutes to read
Prerequisites
To complete the tutorial, make sure you have the following prerequisites:
SQL Server Management Studio (SSMS)
A managed instance
Follow this article: Quickstart: Create a managed instance
Able to access your managed instance and provisioned an Azure AD administrator for the managed instance.
To learn more, see:
Connect your application to a managed instance
SQL Managed Instance connectivity architecture
Configure and manage Azure Active Directory authentication with SQL
Limit access
Managed instances can be accessed through a private IP address. Much like an isolated SQL Server
environment, applications or users need access to the SQL Managed Instance network (VNet) before a
connection can be established. For more information, see Connect your application to SQL Managed Instance.
It is also possible to configure a service endpoint on a managed instance, which allows for public connections in
the same fashion as for Azure SQL Database. For more information, see Configure public endpoint in Azure SQL
Managed Instance.
NOTE
Even with service endpoints enabled, Azure SQL Database firewall rules do not apply. Azure SQL Managed Instance has its
own built-in firewall to manage connectivity.
USE master
GO
CREATE LOGIN login_name FROM EXTERNAL PROVIDER
GO
USE master
GO
CREATE LOGIN [[email protected]] FROM EXTERNAL PROVIDER
GO
SELECT *
FROM sys.server_principals;
GO
The following example grants the sysadmin server role to the login
[email protected]
3. In SSMS Object Explorer, right-click the server and choose New Query.
4. In the query window, use the following syntax to create a login for another Azure AD account:
USE master
GO
CREATE LOGIN login_name FROM EXTERNAL PROVIDER
GO
This example creates a login for the Azure AD user [email protected], whose domain aadsqlmi.net is
federated with the Azure AD aadsqlmi.onmicrosoft.com domain.
Execute the following T-SQL command. Federated Azure AD accounts are the SQL Managed Instance
replacements for on-premises Windows logins and users.
USE master
GO
CREATE LOGIN [[email protected]] FROM EXTERNAL PROVIDER
GO
5. Create a database in the managed instance using the CREATE DATABASE syntax. This database will be
used to test user logins in the next section.
a. In Object Explorer, right-click the server and choose New Query.
b. In the query window, use the following syntax to create a database named MyMITestDB .
6. Create a SQL Managed Instance login for a group in Azure AD. The group will need to exist in Azure AD
before you can add the login to SQL Managed Instance. See Create a basic group and add members
using Azure Active Directory. Create a group mygroup and add members to this group.
7. Open a new query window in SQL Server Management Studio.
This example assumes there exists a group called mygroup in Azure AD. Execute the following command:
USE master
GO
CREATE LOGIN [mygroup] FROM EXTERNAL PROVIDER
GO
8. As a test, log into the managed instance with the newly created login or group. Open a new connection to
the managed instance, and use the new login when authenticating.
9. In Object Explorer, right-click the server and choose New Query for the new connection.
10. Check server permissions for the newly created Azure AD server principal (login) by executing the
following command:
Guest users are supported as individual users (without being part of an Azure AD group, although they can be), and their logins can be created in master directly (for example, [email protected]) using the current login syntax.
For more information on granting database permissions, see Getting Started with Database Engine Permissions.
Create an Azure AD user and create a sample table
1. Log into your managed instance using a sysadmin account using SQL Server Management Studio.
2. In Object Explorer, right-click the server and choose New Query.
3. In the query window, use the following syntax to create an Azure AD user from an Azure AD server
principal (login):
The following example creates a user [email protected] from the login [email protected]:
USE MyMITestDB
GO
CREATE USER [[email protected]] FROM LOGIN [[email protected]]
GO
4. It's also supported to create an Azure AD user from an Azure AD server principal (login) that is a group.
The following example creates a login for the Azure AD group mygroup that exists in your Azure AD
instance.
USE MyMITestDB
GO
CREATE USER [mygroup] FROM LOGIN [mygroup]
GO
All users that belong to mygroup can access the MyMITestDB database.
IMPORTANT
When creating a USER from an Azure AD server principal (login), specify the user_name as the same login_name
from LOGIN.
USE MyMITestDB
GO
CREATE TABLE TestTable
(
AccountNum varchar(10),
City varchar(255),
Name varchar(255),
State varchar(2)
);
6. Create a connection in SSMS with the user that was created. You'll notice that you cannot see the table
TestTable that was created by the sysadmin earlier. We need to provide the user with permissions to
read data from the database.
7. You can check the current permission the user has by executing the following command:
The following example provides the user [email protected] and the group mygroup with db_datareader
permissions on the MyMITestDB database:
USE MyMITestDB
GO
ALTER ROLE db_datareader ADD MEMBER [[email protected]]
GO
ALTER ROLE db_datareader ADD MEMBER [mygroup]
GO
4. Check the Azure AD user that was created in the database exists by executing the following command:
5. Create a new connection to the managed instance with the user that has been added to the
db_datareader role.
SELECT *
FROM TestTable
Are you able to see data from the table? You should see the columns being returned.
USE MyMITestDB
GO
CREATE PROCEDURE dbo.usp_Demo
WITH EXECUTE AS '[email protected]'
AS
SELECT user_name();
GO
4. Use the following command to see that the user you're impersonating when executing the stored
procedure is [email protected] .
Exec dbo.usp_Demo
4. In a new query window, execute the following command to create the user mygroup in the new database
MyMITestDB2 , and grant SELECT permissions on that database to mygroup:
USE MyMITestDB2
GO
CREATE USER [mygroup] FROM LOGIN [mygroup]
GO
GRANT SELECT TO [mygroup]
GO
5. Sign into the managed instance using SQL Server Management Studio as a member of the Azure AD
group mygroup. Open a new query window and execute the cross-database SELECT statement:
USE MyMITestDB
SELECT * FROM MyMITestDB2..TestTable2
GO
Next steps
Enable security features
See the SQL Managed Instance security features article for a comprehensive list of ways to secure your
database. The following security features are discussed:
SQL Managed Instance auditing
Always Encrypted
Threat detection
Dynamic data masking
Row-level security
Transparent data encryption (TDE)
SQL Managed Instance capabilities
For a complete overview of SQL Managed Instance capabilities, see:
SQL Managed Instance capabilities
Tutorial: Add SQL Managed Instance to a failover group
9/13/2022 • 34 minutes to read
IMPORTANT
When going through this tutorial, ensure you are configuring your resources with the prerequisites for setting up
failover groups for SQL Managed Instance.
Creating a managed instance can take a significant amount of time. As a result, this tutorial may take several hours to
complete. For more information on provisioning times, see SQL Managed Instance management operations.
Prerequisites
Portal
PowerShell
Create the resource group and your primary managed instance using the Azure portal.
1. Select Azure SQL in the left-hand menu of the Azure portal. If Azure SQL is not in the list, select All
services, and then type Azure SQL in the search box. (Optional) Select the star next to Azure SQL to
favorite it and add it as an item in the left-hand navigation.
2. Select + Add to open the Select SQL deployment option page. You can view additional information
about the different databases by selecting Show details on the Databases tile.
3. Select Create on the SQL Managed Instances tile.
4. On the Create Azure SQL Managed Instance page, on the Basics tab:
a. Under Project Details , select your Subscription from the drop-down and then choose to Create
New resource group. Type in a name for your resource group, such as myResourceGroup .
b. Under SQL Managed Instance Details , provide the name of your managed instance, and the region
where you would like to deploy your managed instance. Leave Compute + storage at default values.
c. Under Administrator Account , provide an admin login, such as azureuser , and a complex admin
password.
5. Leave the rest of the settings at default values, and select Review + create to review your SQL Managed
Instance settings.
6. Select Create to create your primary managed instance.
To verify the subnet range of your primary virtual network, follow these steps:
1. In the Azure portal, navigate to your resource group and select the virtual network for your primary
instance.
2. Select Subnets under Settings and note the Address range of the subnet created automatically during
creation of your primary instance. The subnet IP address range of the virtual network for the secondary
managed instance must not overlap with the IP address range of the subnet hosting primary instance.
To create a virtual network, follow these steps:
1. In the Azure portal, select Create a resource and search for virtual network.
2. Select the Virtual Network option and then select Create on the next page.
3. Fill out the required fields to configure the virtual network for your secondary managed instance, and
then select Create .
The following table shows the required fields and corresponding values for the secondary virtual
network:
FIELD | VALUE
Address space | The address space for your virtual network, such as 10.128.0.0/16.
Portal
PowerShell
1. Select Azure SQL in the left-hand menu of the Azure portal. If Azure SQL is not in the list, select All
services, and then type Azure SQL in the search box. (Optional) Select the star next to Azure SQL to
add it as a favorite item in the left-hand navigation.
2. Select + Add to open the Select SQL deployment option page. You can view additional information
about the different databases by selecting Show details on the Databases tile.
3. Select Create on the SQL managed instances tile.
4. On the Basics tab of the Create Azure SQL Managed Instance page, fill out the required fields to
configure your secondary managed instance.
The following table shows the values necessary for the secondary managed instance:
FIELD | VALUE
SQL Managed Instance name | The name of your new secondary managed instance, such as sql-mi-secondary.
SQL Managed Instance admin login | The login you want to use for your new secondary managed instance, such as azureuser.
5. Under the Networking tab, for the Virtual Network, select from the drop-down list the virtual network
you previously created for the secondary managed instance.
6. Under the Additional settings tab, for Geo-Replication , choose Yes to Use as failover secondary.
Select the primary managed instance from the drop-down.
Be sure that the collation and time zone match that of the primary managed instance. The primary
managed instance created in this tutorial used the default of SQL_Latin1_General_CP1_CI_AS collation and
the (UTC) Coordinated Universal Time time zone.
7. Select Review + create to review the settings for your secondary managed instance.
8. Select Create to create your secondary managed instance.
Portal
PowerShell
1. In the Azure portal, go to the Virtual network resource for your primary managed instance.
2. Select Peerings under Settings and then select + Add.
Peering link name | The name for the peering must be unique within the virtual network.
Traffic forwarded from remote virtual network | Both the Allowed (default) and Block options will work for this tutorial. For more information, see Create a peering.
Virtual network gateway or Route Server | Select None. For more information about the other options available, see Create a peering.
Peering link name | The name of the same peering to be used in the virtual network hosting the secondary instance.
Traffic forwarded from remote virtual network | Both the Allowed (default) and Block options will work for this tutorial. For more information, see Create a peering.
Virtual network gateway or Route Server | Select None. For more information about the other options available, see Create a peering.
2. Click Add to configure the peering with the virtual network you selected. After a few seconds, select the
Refresh button and the peering status will change from Updating to Connected.
Create a failover group
In this step, you will create the failover group and add both managed instances to it.
Portal
PowerShell
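For the PowerShell tab, a minimal sketch (failover group name, instance names, and regions are placeholders) using the instance failover group cmdlet:
# Create the failover group between the primary and secondary managed instances.
New-AzSqlDatabaseInstanceFailoverGroup `
    -ResourceGroupName "myResourceGroup" `
    -Name "failovergroup-name" `
    -Location "eastus" `
    -PrimaryManagedInstanceName "sql-mi-primary" `
    -PartnerRegion "westus" `
    -PartnerManagedInstanceName "sql-mi-secondary"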
5. Once failover group deployment is complete, you will be taken back to the Failover group page.
Test failover
In this step, you will fail your failover group over to the secondary server, and then fail back using the Azure
portal.
Portal
PowerShell
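For the PowerShell tab, a minimal sketch (names and region are placeholders) that fails the group over; running the same command against the other region fails it back:
# Switch the failover group so the instance in the given region becomes primary.
Get-AzSqlDatabaseInstanceFailoverGroup -ResourceGroupName "myResourceGroup" -Location "westus" -Name "failovergroup-name" |
    Switch-AzSqlDatabaseInstanceFailoverGroup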
5. Go to the new secondary managed instance and select Failover once again to fail the primary instance
back to the primary role.
Clean up resources
Clean up resources by first deleting the managed instances, then the virtual cluster, then any remaining resources, and finally the resource group. The failover group is automatically deleted when you delete either of the two instances.
Portal
PowerShell
Full script
PowerShell
Portal
Next steps
In this tutorial, you configured a failover group between two managed instances. You learned how to:
Create a primary managed instance.
Create a secondary managed instance as part of a failover group.
Test failover.
Advance to the next quickstart on how to connect to SQL Managed Instance, and how to restore a database to
SQL Managed Instance:
Connect to SQL Managed Instance Restore a database to SQL Managed Instance
Tutorial: Migrate Windows users and groups in a SQL Server instance to Azure SQL Managed Instance using T-SQL DDL syntax
9/13/2022 • 7 minutes to read
Prerequisites
To complete this tutorial, the following prerequisites apply:
The Windows domain is federated with Azure Active Directory (Azure AD).
Access to Active Directory to create users/groups.
An existing SQL Server in your on-premises environment.
An existing SQL Managed Instance. See Quickstart: Create a SQL Managed Instance.
A sysadmin in the SQL Managed Instance must be used to create Azure AD logins.
Create an Azure AD admin for SQL Managed Instance.
You can connect to your SQL Managed Instance within your network. See the following articles for additional
information:
Connect your application to Azure SQL Managed Instance
Quickstart: Configure a point-to-site connection to an Azure SQL Managed Instance from on-premises
Configure public endpoint in Azure SQL Managed Instance
Arguments
domainName
Specifies the domain name of the user.
userName
Specifies the name of the user identified inside the database.
= [email protected]
Remaps a user to the Azure AD login
groupName
Specifies the name of the group identified inside the database.
Part 1: Create logins in SQL Server for Windows users and groups
IMPORTANT
The following syntax creates a user and a group login in your SQL Server. You'll need to make sure that the user and
group exist inside your Active Directory (AD) before executing the below syntax.
The example below creates a login in SQL Server for an account named testUser1 under the domain aadsqlmi.
-- Sign into SQL Server as a sysadmin or a user that can create logins and databases
use master;
go
/** Create a Windows group login which contains one user [aadsqlmi\testGroupUser].
testGroupUser will need to be added to the migration group in Active Directory
**/
create login [aadsqlmi\migration] from windows;
go
Part 2: Create Windows users and groups, then add roles and
permissions
Use the following syntax to create the test user.
use migration;
go
-- Create a role with some permissions and assign the user to the role
create role UserMigrationRole;
go
Use the following query to display user names assigned to a specific role:
Use the following syntax to create a group. Then add the group to the role db_owner .
Create a test table and add some data using the following syntax:
-- Create a table and add data
create table test ( a int, b int);
go
use master;
go
backup database migration to disk = 'C:\Migration\migration.bak';
go
use master
go
-- Create login for the Azure AD group [migration]. This group contains one user
[[email protected]]
create login [migration] from external provider
go
2. Check your migration for the correct database, table, and principals.
-- Switch to the database migration that is already restored for MI
use migration;
go
3. Use the ALTER USER syntax to map the on-premises user to the Azure AD login.
/** Execute the ALTER USER command to alter the Windows user [aadsqlmi\testUser1]
to map to the Azure AD user [email protected]
**/
alter user [aadsqlmi\testUser1] with login = [[email protected]];
go
4. Use the ALTER USER syntax to map the on-premises group to the Azure AD login.
/** Execute ALTER USER command to alter the Windows group [aadsqlmi\migration]
to the Azure AD group login [migration]
**/
alter user [aadsqlmi\migration] with login = [migration];
-- old group migration is changed to Azure AD migration group
go
2. Using SQL Server Management Studio (SSMS), sign into your SQL Managed Instance using Active Directory Integrated authentication, connecting to the database migration.
a. You can also sign in using the [email protected] credentials with the SSMS option Active Directory – Universal with MFA support. However, in this case, you can't use the Single Sign On mechanism and you must type a password. You won't need to use a federated VM to log in to your SQL Managed Instance.
3. As part of the role member SELECT , you can select from the test table
Test authenticating to a SQL Managed Instance using a member of a Windows group migration . The user
aadsqlmi\testGroupUser should have been added to the group migration before the migration.
1. Log into the federated VM using your Azure SQL Managed Instance subscription as
aadsqlmi\testGroupUser
2. Using SSMS with Active Directory Integrated authentication, connect to the Azure SQL Managed Instance server and the database migration.
a. You can also sign in using the [email protected] credentials with the SSMS option Active Directory – Universal with MFA support. However, in this case, you can't use the Single Sign On mechanism and you must type a password. You won't need to use a federated VM to log into your SQL Managed Instance.
3. As part of the db_owner role, you can create a new table.
NOTE
Due to a known design issue for Azure SQL Database, a CREATE TABLE statement executed as a member of a group will fail with the following error:
Msg 2760, Level 16, State 1, Line 4 The specified schema name "[email protected]" either does
not exist or you do not have permission to use it.
The current workaround is to create the table with an existing schema, in the case above <dbo.new>.
Next steps
Tutorial: Migrate SQL Server to Azure SQL Managed Instance offline using DMS
Tutorial: Configure replication between two managed instances
9/13/2022 • 6 minutes to read
This tutorial is intended for an experienced audience and assumes that the user is familiar with deploying and
connecting to both managed instances and SQL Server VMs within Azure.
NOTE
This article describes the use of transactional replication in Azure SQL Managed Instance. It is unrelated to failover
groups, an Azure SQL Managed Instance feature that allows you to create complete readable replicas of individual
instances. There are additional considerations when configuring transactional replication with failover groups.
Requirements
Configuring SQL Managed Instance to function as a publisher and/or a distributor requires:
That the publisher managed instance is on the same virtual network as the distributor and the subscriber, or
VPN gateways have been configured between the virtual networks of all three entities.
Connectivity uses SQL Authentication between replication participants.
An Azure storage account share for the replication working directory.
Port 445 (TCP outbound) is open in the security rules of NSG for the managed instances to access the Azure
file share. If you encounter the error
failed to connect to azure storage <storage account name> with os error 53 , you will need to add an
outbound rule to the NSG of the appropriate SQL Managed Instance subnet.
You will also need to configure an Azure VM to connect to your managed instances.
Example: \\replstorage.file.core.windows.net\replshare
Example:
DefaultEndpointsProtocol=https;AccountName=replstorage;AccountKey=dYT5hHZVu9aTgIteGfpYE64cfis0mpKTmmc8+EP53GxuRg6TCwe5eTYWrQM4AmQSG5lb3OBskhg==;EndpointSuffix=core.windows.net
USE [master]
GO
USE [ReplTran_PUB]
GO
CREATE TABLE ReplTest (
ID INT NOT NULL PRIMARY KEY,
c1 VARCHAR(100) NOT NULL,
dt1 DATETIME NOT NULL DEFAULT getdate()
)
GO
USE [ReplTran_PUB]
GO
USE [master]
GO
USE [ReplTran_SUB]
GO
CREATE TABLE ReplTest (
ID INT NOT NULL PRIMARY KEY,
c1 VARCHAR(100) NOT NULL,
dt1 DATETIME NOT NULL DEFAULT getdate()
)
GO
6 - Configure distribution
Connect to your sql-mi-pub managed instance using SQL Server Management Studio and run the following T-
SQL code to configure your distribution database.
USE [master]
GO
USE [master]
EXEC sp_adddistpublisher
@publisher = @@ServerName,
@distribution_db = N'distribution',
@security_mode = 0,
@login = N'$(username)',
@password = N'$(password)',
@working_directory = N'$(file_storage)',
@storage_connection_string = N'$(file_storage_key)'; -- Remove this parameter for on-premises publishers
NOTE
Be sure to use only backslashes ( \ ) for the file_storage parameter. Using a forward slash ( / ) can cause an error when
connecting to the file share.
This script configures a local publisher on the managed instance, adds a linked server, and creates a set of jobs
for the SQL Server agent.
Run the following T-SQL command again to set the login timeout back to the default value, should you need to
do so:
10 - Test replication
Once replication has been configured, you can test it by inserting new items on the publisher and watching the
changes propagate to the subscriber.
Run the following T-SQL snippet to view the rows on the subscriber:
Run the following T-SQL snippet to insert additional rows on the publisher, and then check the rows again on
the subscriber.
Clean up resources
To drop the publication, run the following T-SQL command:
To remove the replication option from the database, run the following T-SQL command:
You can clean up your Azure resources by deleting the SQL Managed Instance resources from the resource
group and then deleting the resource group SQLMI-Repl .
Next steps
You can also learn more information about transactional replication with Azure SQL Managed Instance or learn
to configure replication between a SQL Managed Instance publisher/distributor and a SQL on Azure VM
subscriber.
Tutorial: Configure transactional replication between Azure SQL Managed Instance and SQL Server
9/13/2022 • 12 minutes to read
This tutorial is intended for an experienced audience and assumes that the user is familiar with deploying and
connecting to both managed instances and SQL Server VMs within Azure.
NOTE
This article describes the use of transactional replication in Azure SQL Managed Instance. It is unrelated to failover groups,
an Azure SQL Managed Instance feature that allows you to create complete readable replicas of individual instances.
There are additional considerations when configuring transactional replication with failover groups.
Prerequisites
To complete the tutorial, make sure you have the following prerequisites:
An Azure subscription.
Experience with deploying two managed instances within the same virtual network.
A SQL Server subscriber, either on-premises or on an Azure VM. This tutorial uses an Azure VM.
SQL Server Management Studio (SSMS) 18.0 or greater.
The latest version of Azure PowerShell.
Ports 445 and 1433 allow SQL traffic on both the Azure firewall and the Windows firewall.
# set variables
$ResourceGroupName = "SQLMI-Repl"
$Location = "East US 2"
NOTE
For the sake of simplicity, and because it is the most common configuration, this tutorial suggests placing the distributor
managed instance within the same virtual network as the publisher. However, it's possible to create the distributor in a
separate virtual network. To do so, you will need to configure VNet peering between the virtual networks of the publisher
and distributor, and then configure VNet peering between the virtual networks of the distributor and subscriber.
For more information about deploying a SQL Server VM to Azure, see Quickstart: Create a SQL Server VM.
# Set variables
$SubscriptionId = '<SubscriptionID>'
$resourceGroup = 'SQLMI-Repl'
$pubvNet = 'sql-mi-publisher-vnet'
$subvNet = 'sql-vm-sub-vnet'
$pubsubName = 'Pub-to-Sub-Peer'
$subpubName = 'Sub-to-Pub-Peer'
$virtualNetwork1 = Get-AzVirtualNetwork `
-ResourceGroupName $resourceGroup `
-Name $pubvNet
$virtualNetwork2 = Get-AzVirtualNetwork `
-ResourceGroupName $resourceGroup `
-Name $subvNet
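The snippet above only loads the two virtual networks; a minimal sketch of the remaining step creates the peering in each direction with Add-AzVirtualNetworkPeering, reusing the variables defined above:
# Peer the publisher VNet to the subscriber VNet, then the subscriber VNet back to the publisher VNet.
Add-AzVirtualNetworkPeering `
    -Name $pubsubName `
    -VirtualNetwork $virtualNetwork1 `
    -RemoteVirtualNetworkId $virtualNetwork2.Id
Add-AzVirtualNetworkPeering `
    -Name $subpubName `
    -VirtualNetwork $virtualNetwork2 `
    -RemoteVirtualNetworkId $virtualNetwork1.Id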
Once VNet peering is established, test connectivity by launching SQL Server Management Studio (SSMS) on
SQL Server and connecting to both managed instances. For more information on connecting to a managed
instance using SSMS, see Use SSMS to connect to SQL Managed Instance.
7. Select Review + create . Review the parameters for your private DNS zone and then select Create to
create your resource.
Create an A record
1. Go to your new Private DNS zone and select Overview.
2. Select + Record set to create a new A record.
3. Provide the name of your SQL Server VM as well as the private internal IP address.
Example: \\replstorage.file.core.windows.net\replshare
Copy the storage access key connection string in the format of:
DefaultEndpointsProtocol=https;AccountName=<Storage-Account-
Name>;AccountKey=****;EndpointSuffix=core.windows.net
Example:
DefaultEndpointsProtocol=https;AccountName=replstorage;AccountKey=dYT5hHZVu9aTgIteGfpYE64cfis0mpKTmmc8+EP53GxuRg6TCwe5eTYWrQM4AmQSG5lb3OBskhg==;EndpointSuffix=core.windows.net
Create a database
Create a new database on the publisher managed instance. To do so, follow these steps:
1. Launch SQL Server Management Studio on SQL Server.
2. Connect to the sql-mi-publisher managed instance.
3. Open a New Quer y window and execute the following T-SQL query to create the database.
-- Create table
USE [ReplTutorial]
GO
CREATE TABLE ReplTest (
ID INT NOT NULL PRIMARY KEY,
c1 VARCHAR(100) NOT NULL,
dt1 DATETIME NOT NULL DEFAULT getdate()
)
GO
Configure distribution
Once connectivity is established and you have a sample database, you can configure distribution on your
sql-mi-distributor managed instance. To do so, follow these steps:
1. Launch SQL Server Management Studio on SQL Server.
2. Connect to the sql-mi-distributor managed instance.
3. Open a New Quer y window and run the following Transact-SQL code to configure distribution on the
distributor managed instance:
NOTE
Be sure to use only backslashes ( \ ) for the @working_directory parameter. Using a forward slash ( / ) can cause
an error when connecting to the file share.
Use MASTER
EXEC sys.sp_adddistributor @distributor = 'sql-mi-distributor.b6bf57.database.windows.net', @password
= '<distributor_admin_password>'
use [ReplTutorial]
exec sp_addsubscription
@publication = N'ReplTest',
@subscriber = N'sql-vm-sub.repldns.com', -- include the DNS configured in the private DNS zone
@destination_db = N'ReplSub',
@subscription_type = N'Push',
@sync_type = N'automatic',
@article = N'all',
@update_mode = N'read only',
@subscriber_type = 0
exec sp_addpushsubscription_agent
@publication = N'ReplTest',
@subscriber = N'sql-vm-sub.repldns.com', -- include the DNS configured in the private DNS zone
@subscriber_db = N'ReplSub',
@job_login = N'azureuser',
@job_password = '<Complex Password>',
@subscriber_security_mode = 0,
@subscriber_login = N'azureuser',
@subscriber_password = '<Complex Password>',
@dts_package_location = N'Distributor'
GO
Test replication
Once replication has been configured, you can test it by inserting new items on the publisher and watching the
changes propagate to the subscriber.
Run the following T-SQL snippet to view the rows on the subscriber:
Use ReplSub
select * from dbo.ReplTest
Run the following T-SQL snippet to insert additional rows on the publisher, and then check the rows again on
the subscriber.
Use ReplTutorial
INSERT INTO ReplTest (ID, c1) VALUES (15, 'pub')
Clean up resources
1. Navigate to your resource group in the Azure portal.
2. Select the managed instance(s) and then select Delete . Type yes in the text box to confirm you want to
delete the resource and then select Delete . This process may take some time to complete in the background,
and until it's done, you will not be able to delete the virtual cluster or any other dependent resources.
Monitor the delete in the Activity tab to confirm your managed instance has been deleted.
3. Once the managed instance is deleted, delete the virtual cluster by selecting it in your resource group, and
then choosing Delete . Type yes in the text box to confirm you want to delete the resource and then select
Delete .
4. Delete any remaining resources. Type yes in the text box to confirm you want to delete the resource and
then select Delete .
5. Delete the resource group by selecting Delete resource group , typing in the name of the resource group,
myResourceGroup , and then selecting Delete .
Known errors
Windows logins are not supported
Exception Message: Windows logins are not supported in this version of SQL Server.
The agent was configured with a Windows login and needs to use a SQL Server login instead. Use the Agent
Security page of the Publication proper ties to change the login credentials to a SQL Server login.
Failed to connect to Azure Storage
Connecting to Azure Files Storage '\\replstorage.file.core.windows.net\replshare' Failed to connect to Azure
Storage '' with OS error: 53.
2019-11-19 02:21:05.07 Obtained Azure Storage Connection String for replstorage 2019-11-19 02:21:05.07
Connecting to Azure Files Storage '\replstorage.file.core.windows.net\replshare' 2019-11-19 02:21:31.21 Failed
to connect to Azure Storage '' with OS error: 53.
This is likely because port 445 is closed in either the Azure firewall, the Windows firewall, or both.
Connecting to Azure Files Storage '\\replstorage.file.core.windows.net\replshare' Failed to connect to Azure
Storage '' with OS error: 55.
Using a forward slash instead of backslash in the file path for the file share can cause this error.
This is okay: \\replstorage.file.core.windows.net\replshare
This can cause an OS 55 error: '\\replstorage.file.core.windows.net/replshare'
Could not connect to Subscriber
The process could not connect to Subscriber 'SQL-VM-SUB Could not open a connection to SQL Server [53].
A network-related or instance-specific error has occurred while establishing a connection to SQL Server.
Server is not found or not accessible. Check if instance name is correct and if SQL Server is configured to
allow remote connections.
Possible solutions:
Ensure port 1433 is open.
Ensure TCP/IP is enabled on the subscriber.
Confirm the DNS name was used when creating the subscriber.
Verify that your virtual networks are correctly linked in the private DNS zone.
Verify your A record is configured correctly.
Verify your VNet peering is configured correctly.
No publications to which you can subscribe
When you're adding a new subscription using the New Subscription wizard, on the Publication page, you
may find that there are no databases and publications listed as available options, and you might see the
following error message:
There are no publications to which you can subscribe, either because this server has no publications or
because you do not have sufficient privileges to access the publications.
While it's possible that this error message is accurate, and there really aren't publications available on the
publisher you connected to, or you're lacking sufficient permissions, this error could also be caused by an older
version of SQL Server Management Studio. Try upgrading to SQL Server Management Studio 18.0 or greater to
rule this out as a root cause.
Next steps
Enable security features
See the What is Azure SQL Managed Instance? article for a comprehensive list of ways to secure your database.
The following security features are discussed:
SQL Managed Instance auditing
Always Encrypted
Threat detection
Dynamic data masking
Row-level security
Transparent data encryption (TDE)
SQL Managed Instance capabilities
For a complete overview of managed instance capabilities, see:
SQL Managed Instance capabilities
Migration guide: IBM Db2 to Azure SQL Managed
Instance
9/13/2022 • 6 minutes to read • Edit Online
Prerequisites
To migrate your Db2 database to SQL Managed Instance, you need:
To verify that your source environment is supported.
To download SQL Server Migration Assistant (SSMA) for Db2.
A target instance of Azure SQL Managed Instance.
Connectivity and sufficient permissions to access both source and target.
Pre-migration
After you have met the prerequisites, you're ready to discover the topology of your environment and assess the
feasibility of your migration.
Assess and convert
Create an assessment by using SQL Server Migration Assistant.
To create an assessment, follow these steps:
1. Open SSMA for Db2.
2. Select File > New Project .
3. Provide a project name and a location to save your project. Then select Azure SQL Managed Instance as
the migration target from the drop-down list, and select OK .
4. On Connect to Db2 , enter values for the Db2 connection details.
5. Right-click the Db2 schema you want to migrate, and then choose Create report . This will generate an
HTML report. Alternatively, you can choose Create report from the navigation bar after selecting the
schema.
6. Review the HTML report to understand conversion statistics and any errors or warnings. You can also
open the report in Excel to get an inventory of Db2 objects and the effort required to perform schema
conversions. The default location for the report is in the report folder within SSMAProjects.
For example: drive:\<username>\Documents\SSMAProjects\MyDb2Migration\report\report_<date> .
7. Right-click the schema, and then choose Convert Schema . Alternatively, you can choose Convert
Schema from the top navigation bar after selecting your schema.
8. After the conversion completes, compare and review the structure of the schema to identify potential
problems. Address the problems based on the recommendations.
9. In the Output pane, select Review results . In the Error list pane, review errors.
10. Save the project locally for an offline schema remediation exercise. From the File menu, select Save
Project . This gives you an opportunity to evaluate the source and target schemas offline, and perform
remediation before you publish the schema to SQL Managed Instance.
Migrate
After you have completed assessing your databases and addressing any discrepancies, the next step is to
execute the migration process.
To publish your schema and migrate your data, follow these steps:
1. Publish the schema. In Azure SQL Managed Instance Metadata Explorer , from the Databases node,
right-click the database. Then select Synchronize with Database .
2. Migrate the data. Right-click the database or object you want to migrate in Db2 Metadata Explorer , and
choose Migrate data . Alternatively, you can select Migrate Data from the navigation bar. To migrate
data for an entire database, select the check box next to the database name. To migrate data from
individual tables, expand the database, expand Tables , and then select the check box next to the table. To
omit data from individual tables, clear the check box.
3. Provide connection details for both Db2 and SQL Managed Instance.
4. After migration completes, view the Data Migration Report .
5. Connect to your instance of Azure SQL Managed Instance by using SQL Server Management Studio.
Validate the migration by reviewing the data and schema:
Post-migration
After the migration is complete, you need to go through a series of post-migration tasks to ensure that
everything is functioning as smoothly and efficiently as possible.
Remediate applications
After the data is migrated to the target environment, all the applications that formerly consumed the source
need to start consuming the target. Accomplishing this will in some cases require changes to the applications.
Perform tests
Testing consists of the following activities:
1. Develop validation tests : To test database migration, you need to use SQL queries. You must create the
validation queries to run against both the source and the target databases. Your validation queries should
cover the scope you have defined.
2. Set up the test environment : The test environment should contain a copy of the source database and the
target database. Be sure to isolate the test environment.
3. Run validation tests : Run the validation tests against the source and the target, and then analyze the
results.
4. Run performance tests : Run performance tests against the source and the target, and then analyze and
compare the results.
Advanced features
Be sure to take advantage of the advanced cloud-based features offered by Azure SQL Managed Instance, such
as built-in high availability, threat detection, and monitoring and tuning your workload.
Some SQL Server features are only available when the database compatibility level is changed to the latest
compatibility level.
Migration assets
For additional assistance, see the following resources, which were developed in support of a real-world
migration project engagement:
ASSET | DESCRIPTION
Data workload assessment model and tool | This tool provides suggested "best fit" target platforms, cloud readiness, and application/database remediation level for a given workload. It offers simple, one-click calculation and report generation that helps to accelerate large estate assessments by providing an automated and uniform target platform decision process.
Db2 zOS data assets discovery and assessment package | After running the SQL script on a database, you can export the results to a file on the file system. Several file formats are supported, including *.csv, so that you can capture the results in external tools such as spreadsheets. This method can be useful if you want to easily share results with teams that do not have the workbench installed.
IBM Db2 LUW inventory scripts and artifacts | This asset includes a SQL query that hits IBM Db2 LUW version 11.1 system tables and provides a count of objects by schema and object type, a rough estimate of "raw data" in each schema, and the sizing of tables in each schema, with results stored in a CSV format.
IBM Db2 to SQL MI - Database Compare utility | The Database Compare utility is a Windows console application that you can use to verify that the data is identical both on source and target platforms. You can use the tool to efficiently compare data down to the row or column level in all or selected tables, rows, and columns.
The Data SQL Engineering team developed these resources. This team's core charter is to unblock and accelerate
complex modernization for data platform migration projects to Microsoft's Azure data platform.
Next steps
For Microsoft and third-party services and tools to assist you with various database and data migration
scenarios, see Service and tools for data migration.
To learn more about Azure SQL Managed Instance, see:
An overview of SQL Managed Instance
Azure total cost of ownership calculator
To learn more about the framework and adoption cycle for cloud migrations, see:
Cloud Adoption Framework for Azure
Best practices for costing and sizing workloads migrated to Azure
To assess the application access layer, see Data Access Migration Toolkit.
For details on how to perform data access layer A/B testing, see Database Experimentation Assistant.
Migration guide: Oracle to Azure SQL Managed
Instance
9/13/2022 • 9 minutes to read • Edit Online
Prerequisites
Before you begin migrating your Oracle schema to SQL Managed Instance:
Verify your source environment is supported.
Download SSMA for Oracle.
Have a SQL Managed Instance target.
Obtain the necessary permissions for SSMA for Oracle and provider.
Pre-migration
After you've met the prerequisites, you're ready to discover the topology of your environment and assess the
feasibility of your migration. This part of the process involves conducting an inventory of the databases that you
need to migrate, assessing those databases for potential migration issues or blockers, and then resolving any
items you might have uncovered.
Assess
By using SSMA for Oracle, you can review database objects and data, assess databases for migration, migrate
database objects to SQL Managed Instance, and then finally migrate data to the database.
To create an assessment:
1. Open SSMA for Oracle.
2. Select File , and then select New Project .
3. Enter a project name and a location to save your project. Then select Azure SQL Managed Instance as
the migration target from the drop-down list and select OK .
4. Select Connect to Oracle . Enter values for Oracle connection details in the Connect to Oracle dialog
box.
7. Review the HTML report to understand conversion statistics and any errors or warnings. You can also
open the report in Excel to get an inventory of Oracle objects and the effort required to perform schema
conversions. The default location for the report is in the report folder within SSMAProjects.
For example, see
drive:\<username>\Documents\SSMAProjects\MyOracleMigration\report\report_2020_11_12T02_47_55\ .
Validate the data types
Validate the default data type mappings and change them based on requirements if necessary. To do so, follow
these steps:
1. In SSMA for Oracle, select Tools , and then select Project Settings .
2. Select the Type Mapping tab.
3. You can change the type mapping for each table by selecting the table in Oracle Metadata Explorer .
Convert the schema
To convert the schema:
1. (Optional) Add dynamic or ad-hoc queries to statements. Right-click the node, and then select Add
statements .
2. Select the Connect to Azure SQL Managed Instance tab.
a. Enter connection details to connect your database in SQL Database Managed Instance .
b. Select your target database from the drop-down list, or enter a new name, in which case a database
will be created on the target server.
c. Enter authentication details, and select Connect .
3. In Oracle Metadata Explorer , right-click the Oracle schema and then select Convert Schema . Or, you
can select your schema and then select the Convert Schema tab.
4. After the conversion finishes, compare and review the converted objects to the original objects to identify
potential problems and address them based on the recommendations.
5. Compare the converted Transact-SQL text to the original code, and review the recommendations.
6. In the output pane, select Review results and review the errors in the Error List pane.
7. Save the project locally for an offline schema remediation exercise. On the File menu, select Save
Project . This step gives you an opportunity to evaluate the source and target schemas offline and
perform remediation before you publish the schema to SQL Managed Instance.
Migrate
After you've completed assessing your databases and addressing any discrepancies, the next step is to run the
migration process. Migration involves two steps: publishing the schema and migrating the data.
To publish your schema and migrate your data:
1. Publish the schema by right-clicking the database from the Databases node in Azure SQL Managed
Instance Metadata Explorer and selecting Synchronize with Database .
2. Review the mapping between your source project and your target.
3. Migrate the data by right-clicking the schema or object you want to migrate in Oracle Metadata
Explorer and selecting Migrate Data . Or, you can select the Migrate Data tab. To migrate data for an
entire database, select the check box next to the database name. To migrate data from individual tables,
expand the database, expand Tables , and then select the checkboxes next to the tables. To omit data from
individual tables, clear the checkboxes.
4. Enter connection details for both Oracle and SQL Managed Instance.
5. After the migration is completed, view the Data Migration Report .
6. Connect to your instance of SQL Managed Instance by using SQL Server Management Studio, and
validate the migration by reviewing the data and schema.
Or, you can also use SQL Server Integration Services to perform the migration. To learn more, see:
Getting started with SQL Server Integration Services
SQL Server Integration Services for Azure and Hybrid Data Movement
Post-migration
After you've successfully completed the migration stage, you need to complete a series of post-migration tasks
to ensure that everything is functioning as smoothly and efficiently as possible.
Remediate applications
After the data is migrated to the target environment, all the applications that formerly consumed the source
need to start consuming the target. Accomplishing this step will require changes to the applications in some
cases.
The Data Access Migration Toolkit is an extension for Visual Studio Code that allows you to analyze your Java
source code and detect data access API calls and queries. The toolkit provides you with a single-pane view of
what needs to be addressed to support the new database back end. To learn more, see the Migrate our Java
application from Oracle blog post.
Perform tests
The test approach to database migration consists of the following activities:
1. Develop validation tests : To test the database migration, you need to use SQL queries. You must create the
validation queries to run against both the source and the target databases. Your validation queries should
cover the scope you've defined.
2. Set up a test environment : The test environment should contain a copy of the source database and the
target database. Be sure to isolate the test environment.
3. Run validation tests : Run validation tests against the source and the target, and then analyze the results.
4. Run performance tests : Run performance tests against the source and the target, and then analyze and
compare the results.
Validate migrated objects
Microsoft SQL Server Migration Assistant for Oracle Tester (SSMA Tester) allows you to test migrated database
objects. The SSMA Tester is used to verify that the converted objects behave the same way as the source objects.
Create test case
1. Open SSMA for Oracle, select Tester followed by New Test Case .
3. Select the objects that are part of the test case from the Oracle object tree located on the left side.
In this example, stored procedure ADD_REGION and table REGION are selected.
To learn more, see Selecting and configuring objects to test.
4. Next, select the tables, foreign keys and other dependent objects from the Oracle object tree in the left
window.
6. Review the report after the test is completed. The report provides statistics, any errors encountered during the
test run, and a detailed report.
7. Click details to get more information.
Example of positive data validation.
NOTE
For more information about these issues and the steps to mitigate them, see the Post-migration validation and
optimization guide.
Migration assets
For more assistance with completing this migration scenario, see the following resources. They were developed
in support of a real-world migration project engagement.
TITLE/LINK | DESCRIPTION
Data Workload Assessment Model and Tool | This tool provides suggested "best fit" target platforms, cloud readiness, and application or database remediation level for a given workload. It offers simple, one-click calculation and report generation that helps to accelerate large estate assessments by providing an automated and uniform target platform decision process.
Oracle Inventory Script Artifacts | This asset includes a PL/SQL query that hits Oracle system tables and provides a count of objects by schema type, object type, and status. It also provides a rough estimate of raw data in each schema and the sizing of tables in each schema, with results stored in a CSV format.
Automate SSMA Oracle Assessment Collection & Consolidation | This set of resources uses a .csv file as entry (sources.csv in the project folders) to produce the xml files that are needed to run an SSMA assessment in console mode. The source.csv is provided by the customer based on an inventory of existing Oracle instances. The output files are AssessmentReportGeneration_source_1.xml, ServersConnectionFile.xml, and VariableValueFile.xml.
Oracle to SQL MI - Database Compare utility | SSMA for Oracle Tester is the recommended tool to automatically validate the database object conversion and data migration, and it's a superset of Database Compare functionality.
The Data SQL Engineering team developed these resources. This team's core charter is to unblock and accelerate
complex modernization for data platform migration projects to Microsoft's Azure data platform.
Next steps
For a matrix of Microsoft and third-party services and tools that are available to assist you with various
database and data migration scenarios and specialty tasks, see Services and tools for data migration.
To learn more about SQL Managed Instance, see:
An overview of Azure SQL Managed Instance
Azure Total Cost of Ownership (TCO) Calculator
To learn more about the framework and adoption cycle for cloud migrations, see:
Cloud Adoption Framework for Azure
Best practices for costing and sizing workloads for migration to Azure
For video content, see:
Overview of the migration journey and the tools and services recommended for performing
assessment and migration
Migration overview: SQL Server to Azure SQL
Managed Instance
9/13/2022 • 17 minutes to read • Edit Online
Overview
Azure SQL Managed Instance is a recommended target option for SQL Server workloads that require a fully
managed service without having to manage virtual machines or their operating systems. SQL Managed Instance
enables you to move your on-premises applications to Azure with minimal application or database changes. It
offers complete isolation of your instances with native virtual network support.
Be sure to review the SQL Server database engine features available in Azure SQL Managed Instance to validate
the supportability of your migration target.
Considerations
The key factors to consider when you're evaluating migration options are:
Number of servers and databases
Size of databases
Acceptable business downtime during the migration process
One of the key benefits of migrating your SQL Server databases to SQL Managed Instance is that you can
choose to migrate the entire instance or just a subset of individual databases. Carefully plan to include the
following in your migration process:
All databases that need to be colocated to the same instance
Instance-level objects required for your application, including logins, credentials, SQL Agent jobs and
operators, and server-level triggers
NOTE
Azure SQL Managed Instance guarantees 99.99 percent availability, even in critical scenarios. Overhead caused by some
features in SQL Managed Instance can't be disabled. For more information, see the Key causes of performance differences
between SQL Managed Instance and SQL Server blog entry.
Choose an appropriate target
You can use the Azure SQL migration extension for Azure Data Studio to get a right-sized Azure SQL Managed
Instance recommendation. The extension collects performance data from your source SQL Server instance to
provide a right-sized Azure recommendation that meets your workload's performance needs with minimal cost.
To learn more, see Get right-sized Azure recommendation for your on-premises SQL Server database(s).
The following general guidelines can help you choose the right service tier and characteristics of SQL Managed
Instance to help match your performance baseline:
Use the CPU usage baseline to provision a managed instance that matches the number of cores that your
instance of SQL Server uses (a query sketch follows this list). It might be necessary to scale resources to match
the hardware configuration characteristics.
Use the memory usage baseline to choose a vCore option that appropriately matches your memory
allocation.
Use the baseline I/O latency of the file subsystem to choose between the General Purpose (latency greater
than 5 ms) and Business Critical (latency less than 3 ms) service tiers.
Use the baseline throughput to preallocate the size of the data and log files to achieve expected I/O
performance.
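As a quick, hedged starting point for the core and memory comparison described above, you can read the source instance's current hardware configuration from sys.dm_os_sys_info (available in SQL Server 2012 and later). This sketch only reports what the instance sees, not what the workload actually needs:
SELECT cpu_count,                                    -- logical CPUs visible to SQL Server
       physical_memory_kb / 1024 AS physical_memory_mb,
       sqlserver_start_time                          -- how long the instance has been accumulating baseline data
FROM sys.dm_os_sys_info;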
You can choose compute and storage resources during deployment and then change them afterward by using
the Azure portal, without incurring downtime for your application.
IMPORTANT
Any discrepancy in the virtual network requirements for managed instances can prevent you from creating new instances
or using existing ones. Learn more about creating new and configuring existing networks.
Another key consideration in the selection of the target service tier in Azure SQL Managed Instance (General
Purpose versus Business Critical) is the availability of certain features, like In-Memory OLTP, that are available
only in the Business Critical tier.
SQL Server VM alternative
Your business might have requirements that make SQL Server on Azure Virtual Machines a more suitable target
than Azure SQL Managed Instance.
If one of the following conditions applies to your business, consider moving to a SQL Server virtual machine
(VM) instead:
You require direct access to the operating system or file system, such as to install third-party or custom
agents on the same virtual machine with SQL Server.
You have strict dependency on features that are still not supported, such as FileStream/FileTable, PolyBase,
and cross-instance transactions.
You need to stay at a specific version of SQL Server (2012, for example).
Your compute requirements are much lower than a managed instance offers (one vCore, for example), and
database consolidation is not an acceptable option.
Migration tools
We recommend the following migration tools:
TECHNOLOGY | DESCRIPTION
Azure SQL migration extension for Azure Data Studio | The Azure SQL migration extension for Azure Data Studio provides both the SQL Server assessment and migration capabilities in Azure Data Studio. It supports migrations in either online (for migrations that require minimal downtime) or offline (for migrations where downtime persists through the duration of the migration) modes.
Azure Migrate | This Azure service helps you discover and assess your SQL data estate at scale on VMware. It provides Azure SQL deployment recommendations, target sizing, and monthly estimates.
Azure Database Migration Service | This Azure service supports migration in the offline mode for applications that can afford downtime during the migration process. Unlike the continuous migration in online mode, offline mode migration runs a one-time restore of a full database backup from the source to the target.
Native backup and restore | SQL Managed Instance supports restore of native SQL Server database backups (.bak files). It's the easiest migration option for customers who can provide full database backups to Azure Storage.
Log Replay Service | This cloud service is enabled for SQL Managed Instance based on SQL Server log-shipping technology. It's a migration option for customers who can provide full, differential, and log database backups to Azure Storage. Log Replay Service is used to restore backup files from Azure Blob Storage to SQL Managed Instance.
Managed Instance link | This feature enables online migration to Managed Instance using Always On technology. It's a migration option for customers who require the database on Managed Instance to be accessible in read-only (R/O) mode while migration is in progress, who need to keep the migration running for prolonged periods of time (weeks or months at a time), who require true online replication to the Business Critical service tier, and who require the most performant minimum-downtime migration.
Transactional replication | Replicate data from source SQL Server database tables to SQL Managed Instance by providing a publisher-subscriber type migration option while maintaining transactional consistency.
Bulk copy | The bulk copy program (bcp) tool copies data from an instance of SQL Server into a data file. Use the tool to export the data from your source and import the data file into the target SQL managed instance.
Import Export Wizard/BACPAC | BACPAC is a Windows file with a .bacpac extension that encapsulates a database's schema and data. You can use BACPAC to both export data from a SQL Server source and import the data back into Azure SQL Managed Instance.
Azure Data Factory | The Copy activity in Azure Data Factory migrates data from source SQL Server databases to SQL Managed Instance by using built-in connectors and an integration runtime.
MIGRATION OPTION | WHEN TO USE | CONSIDERATIONS
Azure SQL migration extension for Azure Data Studio
When to use: Migrate single databases or multiple databases at scale. Can run in both online (minimal downtime) and offline (acceptable downtime) modes. Supported sources: SQL Server (2005 to 2019) on-premises or Azure VM, Amazon EC2, GCP Compute SQL Server VM.
Considerations: Easy to set up and get started. Requires setup of a self-hosted integration runtime to access on-premises SQL Server and backups. Includes both assessment and migration capabilities.
Azure Database Migration Service
When to use: Migrate single databases or multiple databases at scale. Can run in both online (minimal downtime) and offline modes. Supported sources: SQL Server (2005 to 2019) on-premises or Azure VM, Amazon EC2, GCP Compute SQL Server VM.
Considerations: Migrations at scale can be automated via PowerShell. Time to complete migration depends on database size and is affected by backup and restore time. Sufficient downtime might be required.
Native backup and restore
When to use: Migrate individual line-of-business application databases. Quick and easy migration without a separate migration service or tool. Supported sources: SQL Server (2005 to 2019) on-premises or Azure VM, Amazon EC2, GCP Compute SQL Server VM.
Considerations: Database backup uses multiple threads to optimize data transfer to Azure Blob Storage, but partner bandwidth and database size can affect transfer rate. Downtime should accommodate the time required to perform a full backup and restore (which is a size-of-data operation).
Log Replay Service
When to use: Migrate individual line-of-business application databases. More control is needed for database migrations. Supported sources: SQL Server (2008 to 2019) on-premises or Azure VM, Amazon EC2, GCP Compute SQL Server VM.
Considerations: The migration entails making full database backups on SQL Server and copying backup files to Azure Blob Storage. Log Replay Service is used to restore backup files from Azure Blob Storage to SQL Managed Instance. Databases being restored during the migration process will be in a restoring mode and can't be used for read or write workloads until the process is complete.
Link feature for Azure SQL Managed Instance
When to use: Migrate individual line-of-business application databases. More control is needed for database migrations. Minimum downtime migration is needed. Supported sources: SQL Server (2016 to 2019) on-premises or Azure VM, Amazon EC2, GCP Compute SQL Server VM.
Considerations: The migration entails establishing a network connection between SQL Server and SQL Managed Instance, and opening communication ports. Uses Always On availability group technology to replicate the database in near real time, making an exact replica of the SQL Server database on SQL Managed Instance. The database can be used for read-only access on SQL Managed Instance while migration is in progress. Provides the best performance during migration with minimum downtime.
Bulk copy
When to use: Do full or partial data migrations. Can accommodate downtime. Supported sources: SQL Server (2005 to 2019) on-premises or Azure VM, Amazon EC2, Amazon RDS, GCP Compute SQL Server VM.
Considerations: Requires downtime for exporting data from the source and importing into the target. The file formats and data types used in the export or import need to be consistent with table schemas.
Feature interoperability
There are more considerations when you're migrating workloads that rely on other SQL Server features.
SQL Server Integration Services
Migrate SQL Server Integration Services (SSIS) packages and projects in SSISDB to Azure SQL Managed
Instance by using Azure Database Migration Service.
Only SSIS packages in SSISDB starting with SQL Server 2012 are supported for migration. Convert older SSIS
packages before migration. See the project conversion tutorial to learn more.
SQL Server Reporting Services
You can migrate SQL Server Reporting Services (SSRS) reports to paginated reports in Power BI. Use the RDL
Migration Tool to help prepare and migrate your reports. Microsoft developed this tool to help customers
migrate Report Definition Language (RDL) reports from their SSRS servers to Power BI. It's available on GitHub,
and it documents an end-to-end walkthrough of the migration scenario.
SQL Server Analysis Services
SQL Server Analysis Services tabular models from SQL Server 2012 and later can be migrated to Azure Analysis
Services, which is a platform as a service (PaaS) deployment model for the Analysis Services tabular model in
Azure. You can learn more about migrating on-premises models to Azure Analysis Services in this video tutorial.
Alternatively, you can consider migrating your on-premises Analysis Services tabular models to Power BI
Premium by using the new XMLA read/write endpoints.
High availability
The SQL Server high-availability features Always On failover cluster instances and Always On availability groups
become obsolete on the target SQL managed instance. High-availability architecture is already built into both
General Purpose (standard availability model) and Business Critical (premium availability model) service tiers
for SQL Managed Instance. The premium availability model also provides read scale-out that allows connecting
into one of the secondary nodes for read-only purposes.
Beyond the high-availability architecture that's included in SQL Managed Instance, the auto-failover groups
feature allows you to manage the replication and failover of databases in a managed instance to another region.
SQL Agent jobs
Use the offline Azure Database Migration Service option to migrate SQL Agent jobs. Otherwise, script the jobs in
Transact-SQL (T-SQL) by using SQL Server Management Studio and then manually re-create them on the target
SQL managed instance.
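Before scripting jobs out, it can help to inventory them and their step types, since only T-SQL steps can be migrated automatically (see the note that follows). A sketch against msdb on the source instance:
SELECT j.name AS job_name,
       s.step_id,
       s.subsystem,   -- 'TSQL' steps can be handled by Database Migration Service; others (for example, 'SSIS') need manual migration
       s.command
FROM msdb.dbo.sysjobs AS j
JOIN msdb.dbo.sysjobsteps AS s
  ON s.job_id = j.job_id
ORDER BY j.name, s.step_id;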
IMPORTANT
Currently, Azure Database Migration Service supports only jobs with T-SQL subsystem steps. Jobs with SSIS package
steps have to be manually migrated.
IMPORTANT
In-Memory OLTP is supported only in the Business Critical tier in Azure SQL Managed Instance. It's not supported in the
General Purpose tier.
If you have memory-optimized tables or memory-optimized table types in your on-premises SQL Server
instance and you want to migrate to Azure SQL Managed Instance, you should either:
Choose the Business Critical tier for your target SQL managed instance that supports In-Memory OLTP.
If you want to migrate to the General Purpose tier in Azure SQL Managed Instance, remove memory-
optimized tables, memory-optimized table types, and natively compiled SQL modules that interact with
memory-optimized objects before migrating your databases. You can use the following T-SQL query to
identify all objects that need to be removed before migration to the General Purpose tier:
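The query referenced above isn't reproduced here; the following is a minimal sketch that finds these objects through the standard catalog views (run it in each user database you plan to migrate):
-- Memory-optimized tables
SELECT SCHEMA_NAME(schema_id) AS schema_name, name AS table_name
FROM sys.tables
WHERE is_memory_optimized = 1;
-- Memory-optimized table types
SELECT SCHEMA_NAME(schema_id) AS schema_name, name AS type_name
FROM sys.table_types
WHERE is_memory_optimized = 1;
-- Natively compiled modules (procedures, functions, triggers)
SELECT OBJECT_SCHEMA_NAME(object_id) AS schema_name, OBJECT_NAME(object_id) AS module_name
FROM sys.sql_modules
WHERE uses_native_compilation = 1;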
To learn more about in-memory technologies, see Optimize performance by using in-memory technologies in
Azure SQL Database and Azure SQL Managed Instance.
Advanced features
Be sure to take advantage of the advanced cloud-based features in SQL Managed Instance. For example, you
don't need to worry about managing backups because the service does it for you. You can restore to any point
in time within the retention period. Additionally, you don't need to worry about setting up high availability,
because high availability is built in.
To strengthen security, consider using Azure AD authentication, auditing, threat detection, row-level security,
and dynamic data masking.
In addition to advanced management and security features, SQL Managed Instance provides advanced tools that
can help you monitor and tune your workload. Azure SQL Analytics allows you to monitor a large set of
managed instances in a centralized way. Automatic tuning in managed instances continuously monitors
performance of your SQL plan execution and automatically fixes the identified performance problems.
Some features are available only after the database compatibility level is changed to the latest compatibility
level (150).
Migration assets
For more assistance, see the following resources that were developed for real-world migration projects.
Data workload assessment model and tool | This tool provides suggested "best fit" target platforms, cloud readiness, and an application/database remediation level for a workload. It offers simple, one-click calculation and report generation that helps to accelerate large estate assessments by providing an automated and uniform decision process for target platforms.
Utility to move on-premises SQL Server logins to Azure SQL Managed Instance | A PowerShell script can create a T-SQL command script to re-create logins and select database users from on-premises SQL Server to Azure SQL Managed Instance. The tool allows automatic mapping of Windows Server Active Directory accounts to Azure AD accounts, along with optionally migrating SQL Server native logins.
Perfmon data collection automation by using Logman | You can use the Logman tool to collect Perfmon data (to help you understand baseline performance) and get migration target recommendations. This tool uses logman.exe to create the command that will create, start, stop, and delete performance counters set on a remote SQL Server instance.
The Data SQL Engineering team developed these resources. This team's core charter is to unblock and accelerate
complex modernization for data platform migration projects to Microsoft's Azure data platform.
Next steps
To start migrating your SQL Server databases to Azure SQL Managed Instance, see the SQL Server to
Azure SQL Managed Instance migration guide.
For a matrix of services and tools that can help you with database and data migration scenarios as well as
specialty tasks, see Services and tools for data migration.
To learn more about Azure SQL Managed Instance, see:
Service tiers in Azure SQL Managed Instance
Differences between SQL Server and Azure SQL Managed Instance
Azure Total Cost of Ownership Calculator
To learn more about the framework and adoption cycle for cloud migrations, see:
Cloud Adoption Framework for Azure
Best practices for costing and sizing workloads migrated to Azure
To assess the application access layer, see Data Access Migration Toolkit (Preview).
For details on how to perform A/B testing at the data access layer, see Database Experimentation
Assistant.
Migration guide: SQL Server to Azure SQL
Managed Instance
9/13/2022 • 15 minutes to read • Edit Online
Prerequisites
To migrate your SQL Server to Azure SQL Managed Instance, make sure you have:
Chosen a migration method and the corresponding tools for your method.
Installed the Azure SQL migration extension for Azure Data Studio.
Installed the Data Migration Assistant (DMA) on a machine that can connect to your source SQL Server.
Created a target Azure SQL Managed Instance.
Configured connectivity and proper permissions to access both source and target.
Reviewed the SQL Server database engine features available in Azure SQL Managed Instance.
Pre-migration
After you've verified that your source environment is supported, start with the pre-migration stage. Discover all
of the existing data sources, assess migration feasibility, and identify any blocking issues that might prevent
your migration.
Discover
In the Discover phase, scan the network to identify all SQL Server instances and features used by your
organization.
Use Azure Migrate to assess migration suitability of on-premises servers, perform performance-based sizing,
and provide cost estimations for running them in Azure.
Alternatively, use the Microsoft Assessment and Planning Toolkit (the "MAP Toolkit") to assess your current IT
infrastructure. The toolkit provides a powerful inventory, assessment, and reporting tool to simplify the
migration planning process.
For more information about tools available to use for the Discover phase, see Services and tools available for
data migration scenarios.
After data sources have been discovered, assess any on-premises SQL Server instance(s) that can be migrated
to Azure SQL Managed Instance to identify migration blockers or compatibility issues. Proceed to the following
steps to assess and migrate databases to Azure SQL Managed Instance:
Assess SQL Managed Instance compatibility where you should ensure that there are no blocking issues that
can prevent your migrations. This step also includes creation of a performance baseline to determine
resource usage on your source SQL Server instance. This step is needed if you want to deploy a properly
sized managed instance and verify that performance after migration isn't affected.
Choose app connectivity options.
Deploy to an optimally sized managed instance where you'll choose technical characteristics (number of
vCores, amount of memory) and performance tier (Business Critical, General Purpose) of your managed
instance.
Select migration method and migrate where you migrate your databases using offline migration or online
migration options.
Monitor and remediate applications to ensure that you have expected performance.
Assess
NOTE
If you are assessing the entire SQL Server data estate at scale on VMware, use Azure Migrate to get Azure SQL
deployment recommendations, target sizing, and monthly estimates.
Determine whether SQL Managed Instance is compatible with the database requirements of your application.
SQL Managed Instance is designed to provide easy lift and shift migration for most existing applications that use
SQL Server. However, you may sometimes require features or capabilities that aren't yet supported and the cost
of implementing a workaround is too high.
The Azure SQL migration extension for Azure Data Studio provides a seamless wizard-based experience to
assess, get Azure recommendations, and migrate your on-premises SQL Server databases to Azure SQL
Managed Instance. Besides highlighting any migration blockers or warnings, the extension also includes an
option for Azure recommendations to collect your databases' performance data and recommend a right-sized
Azure SQL Managed Instance that meets the performance needs of your workload at the lowest cost.
You can also use the Data Migration Assistant (version 4.1 and later) to assess databases to get:
Azure target recommendations
Azure SKU recommendations
To assess your environment using the Data Migration Assistant, follow these steps:
1. Open the Data Migration Assistant (DMA).
2. Select File and then choose New assessment .
3. Specify a project name, select SQL Server as the source server type, and then select Azure SQL Managed
Instance as the target server type.
4. Select the type(s) of assessment reports that you want to generate. For example, database compatibility and
feature parity. Based on the type of assessment, the permissions required on the source SQL Server can be
different. DMA will highlight the permissions required for the chosen advisor before running the assessment.
The feature parity category provides a comprehensive set of recommendations, alternatives
available in Azure, and mitigating steps to help you plan your migration project. (sysadmin
permissions required)
The compatibility issues category identifies partially supported or unsupported feature
compatibility issues that might block migration, and recommendations to address them ( CONNECT SQL ,
VIEW SERVER STATE , and VIEW ANY DEFINITION permissions required).
5. Specify the source connection details for your SQL Server and connect to the source database.
6. Select Start assessment .
7. When the process is complete, select and review the assessment reports for migration blocking and feature
parity issues. The assessment report can also be exported to a file that can be shared with other teams or
personnel in your organization.
8. Determine the database compatibility level that minimizes post-migration efforts.
9. Identify the best Azure SQL Managed Instance SKU for your on-premises workload.
To learn more, see Perform a SQL Server migration assessment with Data Migration Assistant.
If SQL Managed Instance isn't a suitable target for your workload, SQL Server on Azure VMs might be a viable
alternative target for your business.
Scaled assessments and analysis
If you have multiple servers or databases that require Azure readiness assessment, you can automate the
process by using scripts with one of the following options. To learn more about scripting, see Migrate
databases at scale using automation.
Az.DataMigration PowerShell module
az datamigration CLI extension
Data Migration Assistant command-line interface
Data Migration Assistant also supports consolidation of the assessment reports for analysis. If you have multiple
servers and databases that need to be assessed and analyzed at scale to provide a wider view of the data estate,
see the following links to learn more.
Performing scaled assessments using PowerShell
Analyzing assessment reports using Power BI
IMPORTANT
Running assessments at scale for multiple databases can also be automated using DMA's Command Line Utility which
also allows the results to be uploaded to Azure Migrate for further analysis and target readiness.
To learn how to create the VNet infrastructure and a managed instance, see Create a managed instance.
IMPORTANT
It is important to keep your destination VNet and subnet in accordance with managed instance VNet requirements. Any
incompatibility can prevent you from creating new instances or using those that you already created. Learn more about
creating new and configuring existing networks.
Migrate
After you have completed tasks associated with the Pre-migration stage, you're ready to perform the schema and
data migration.
Migrate your data using your chosen migration method.
SQL Managed Instance targets user scenarios requiring mass database migration from on-premises or Azure
VM database implementations. It's the optimal choice when you need to lift and shift the back end of
applications that regularly use instance-level and/or cross-database functionalities. If this is your scenario, you
can move an entire instance to a corresponding environment in Azure without the need to rearchitect your
applications.
To move SQL instances, you need to plan carefully:
The migration of all databases that need to be collocated (ones running on the same instance).
The migration of instance-level objects that your application depends on, including logins, credentials, SQL
Agent jobs and operators, and server-level triggers.
SQL Managed Instance is a managed service that allows you to delegate some of the regular DBA activities to
the platform as they're built in. Therefore, some instance-level data doesn't need to be migrated, such as
maintenance jobs for regular backups or Always On configuration, as high availability is built in.
This article covers two of the recommended migration options:
Azure SQL migration extension for Azure Data Studio - migration with near-zero downtime.
Native RESTORE DATABASE FROM URL - uses native backups from SQL Server and requires some downtime.
This guide describes the two most popular options - Azure Database Migration Service (DMS) and native
backup and restore.
For other migration tools, see Compare migration options.
Migrate using the Azure SQL migration extension for Azure Data Studio (minimal downtime )
To perform a minimal downtime migration using Azure Data Studio, follow the high level steps below. For a
detailed step-by-step tutorial, see Migrate SQL Server to an Azure SQL Managed Instance online using Azure
Data Studio:
1. Download and install Azure Data Studio and the Azure SQL migration extension.
2. Launch the Migrate to Azure SQL wizard in the extension in Azure Data Studio.
3. Select databases for assessment and view migration readiness or issues (if any). Additionally, collect
performance data and get right-sized Azure recommendation.
4. Select your Azure account and your target Azure SQL Managed Instance from your subscription.
5. Select the location of your database backups. Your database backups can either be located on an on-premises
network share or in an Azure storage blob container.
6. Create a new Azure Database Migration Service using the wizard in Azure Data Studio. If you've previously
created an Azure Database Migration Service using Azure Data Studio, you can reuse the same if desired.
7. Optional: If your backups are on an on-premises network share, download and install self-hosted integration
runtime on a machine that can connect to the source SQL Server, and the location containing the backup
files.
8. Start the database migration and monitor the progress in Azure Data Studio. You can also monitor the
progress under the Azure Database Migration Service resource in Azure portal.
9. Complete the cutover.
a. Stop all incoming transactions to the source database.
b. Make application configuration changes to point to the target database in Azure SQL Managed
Instance.
c. Take any tail log backups for the source database in the backup location specified.
d. Ensure all database backups have the status Restored in the monitoring details page.
e. Select Complete cutover in the monitoring details page.
Backup and restore
One of the key capabilities of Azure SQL Managed Instance to enable quick and easy database migration is the
native restore of database backup ( .bak ) files stored on Azure Storage. Backup and restore are asynchronous
operations; how long they take depends on the size of your database.
The following diagram provides a high-level overview of the process:
NOTE
The time to take the backup, upload it to Azure storage, and perform a native restore operation to Azure SQL Managed
Instance is based on the size of the database. Factor in sufficient downtime to accommodate the operation for large
databases.
The following table provides more information regarding the methods you can use depending on source SQL
Server version you're running:
Put backup to Azure Storage | Prior to 2012 SP1 CU2 | Upload .bak file directly to Azure Storage
IMPORTANT
When you're migrating a database protected by Transparent Data Encryption to a managed instance using native
restore option, the corresponding certificate from the on-premises or Azure VM SQL Server needs to be migrated
before database restore. For detailed steps, see Migrate a TDE cert to a managed instance.
Restore of system databases is not supported. To migrate instance-level objects (stored in the master or msdb
databases), we recommend scripting them out and running the T-SQL scripts on the destination instance.
4. Restore the backup from the Azure storage blob container. A sketch of the RESTORE statement follows this list.
5. Once restore completes, view the database in Object Explorer within SQL Server Management Studio.
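A minimal sketch of the restore referenced in step 4, with placeholder names; it assumes a SAS-based credential whose name matches the container URL has already been created on the managed instance:
-- Placeholder database, storage account, container, and file names
RESTORE DATABASE [TargetDatabase]
FROM URL = 'https://<storageaccount>.blob.core.windows.net/<container>/<backupfile>.bak';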
To learn more about this migration option, see Restore a database to Azure SQL Managed Instance with SSMS.
NOTE
A database restore operation is asynchronous and can be retried. You might get an error in SQL Server Management
Studio if the connection breaks or a time-out expires. Azure SQL Managed Instance will keep trying to restore the database in the
background, and you can track the progress of the restore using the sys.dm_exec_requests and sys.dm_operation_status
views.
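For example, a simple way to watch the progress from another session is to query one of the views mentioned above (a sketch):
SELECT session_id, command, status, percent_complete, start_time
FROM sys.dm_exec_requests
WHERE command LIKE 'RESTORE%';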
IMPORTANT
For details on the specific steps associated with performing a cutover as part of migrations using DMS, see Performing
migration cutover.
Post-migration
After you've successfully completed the migration stage, go through a series of post-migration tasks to ensure
that everything is functioning smoothly and efficiently.
The post-migration phase is crucial for reconciling any data accuracy issues and verifying completeness, and
addressing performance issues with the workload.
Monitor and remediate applications
Once you've completed the migration to a managed instance, you should track the application behavior and
performance of your workload. This process includes the following activities:
Compare performance of the workload running on the managed instance with the performance baseline that
you created on the source SQL Server instance.
Continuously monitor performance of your workload to identify potential issues and improvements.
Perform tests
The test approach for database migration consists of the following activities:
1. Develop validation tests : To test database migration, you need to use SQL queries. You must create the
validation queries to run against both the source and the target databases. Your validation queries should
cover the scope you've defined.
2. Set up test environment : The test environment should contain a copy of the source database and the
target database. Be sure to isolate the test environment.
3. Run validation tests : Run the validation tests against the source and the target, and then analyze the
results.
4. Run performance tests : Run performance tests against the source and the target, and then analyze and
compare the results.
Next steps
See Service and tools for data migration for a matrix of the Microsoft and third-party services and tools
that are available to assist you with various database and data migration scenarios as well as specialty
tasks.
To learn more about Azure SQL Managed Instance see:
Service Tiers in Azure SQL Managed Instance
Differences between SQL Server and Azure SQL Managed Instance
Azure total Cost of Ownership Calculator
To learn more about the framework and adoption cycle for Cloud migrations, see
Cloud Adoption Framework for Azure
Best practices for costing and sizing workloads migrate to Azure
To assess the Application access layer, see Data Access Migration Toolkit (Preview)
For details on how to perform Data Access Layer A/B testing see Database Experimentation Assistant.
Migration performance: SQL Server to Azure SQL
Managed Instance performance baseline
9/13/2022 • 4 minutes to read • Edit Online
Create a baseline
Ideally, performance is similar or better after migration, so it is important to measure and record baseline
performance values on the source and then compare them to the target environment. A performance baseline is
a set of parameters that define your average workload on your source.
Select a set of queries that are important to, and representative of your business workload. Measure and
document the min/average/max duration and CPU usage for these queries, as well as performance metrics on
the source server, such as average/max CPU usage, average/max disk IO latency, throughput, IOPS, average/max
page life expectancy, and average/max size of tempdb.
The following resources can help define a performance baseline:
Monitor CPU usage
Monitor memory usage and determine the amount of memory used by different components such as buffer
pool, plan cache, column-store pool, In-Memory OLTP, etc. In addition, you should find average and peak
values of the Page Life Expectancy memory performance counter.
Monitor disk IO usage on the source SQL Server instance using the sys.dm_io_virtual_file_stats view
or performance counters (a query sketch follows this list).
Monitor workload and query performance by examining Dynamic Management Views (or Query Store if you
are migrating from SQL Server 2016 and later). Identify average duration and CPU usage of the most
important queries in your workload.
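The following sketch captures two of the baseline numbers mentioned in the list above on the source instance: average I/O latency per database file from sys.dm_io_virtual_file_stats, and the current Page Life Expectancy counter. The file statistics are cumulative since the instance last started, so sample them over a representative period:
-- Average read/write latency per database file (cumulative since instance startup)
SELECT DB_NAME(vfs.database_id) AS database_name,
       mf.physical_name,
       CASE WHEN vfs.num_of_reads = 0 THEN 0
            ELSE vfs.io_stall_read_ms / vfs.num_of_reads END AS avg_read_latency_ms,
       CASE WHEN vfs.num_of_writes = 0 THEN 0
            ELSE vfs.io_stall_write_ms / vfs.num_of_writes END AS avg_write_latency_ms
FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
JOIN sys.master_files AS mf
  ON mf.database_id = vfs.database_id AND mf.file_id = vfs.file_id
ORDER BY avg_read_latency_ms DESC;
-- Current Page Life Expectancy (seconds)
SELECT cntr_value AS page_life_expectancy_s
FROM sys.dm_os_performance_counters
WHERE object_name LIKE '%Buffer Manager%'
  AND counter_name = 'Page life expectancy';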
Any performance issues on the source SQL Server should be addressed prior to migration. Migrating known
issues to any new system might cause unexpected results and invalidate any performance comparison.
Compare performance
After you have defined a baseline, compare similar workload performance on the target SQL Managed Instance.
For accuracy, it is important that the SQL Managed Instance environment is comparable to the SQL Server
environment as much as possible.
There are SQL Managed Instance infrastructure differences that make matching performance exactly unlikely.
Some queries may run faster than expected, while others may be slower. The goal of this comparison is to verify
that workload performance in the managed instance matches the performance on SQL Server (on average) and
to identify any critical queries whose performance doesn't match your original performance.
Performance comparison is likely to result in the following outcomes:
Workload performance on the managed instance is aligned or better than the workload performance on
your source SQL Server. In this case, you have successfully confirmed that migration is successful.
The majority of performance parameters and queries in the workload perform as expected, with some
exceptions resulting in degraded performance. In this case, identify the differences and their importance.
If there are some important queries with degraded performance, investigate whether the underlying SQL
plans have changed or whether queries are hitting resource limits. You can mitigate this by applying
some hints on critical queries (for example, change the compatibility level or the legacy cardinality estimator)
either directly or using plan guides (a sketch of these settings follows this list). Ensure statistics and indexes
are up to date and equivalent in both environments.
Most queries are slower on a managed instance compared to your source SQL Server instance. In this
case, try to identify the root causes of the difference, such as reaching a resource limit (IO,
memory, or instance log rate limits). If there are no resource limits causing the difference, try changing the
compatibility level of the database or change database settings like legacy cardinality estimation and
rerun the test. Review the recommendations provided by the managed instance or Query Store views to
identify the queries with regressed performance.
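A sketch of the database-level mitigations mentioned in the list above (the database name is a placeholder; both settings affect the whole database, so test them before applying in production):
USE [MyDatabase];
GO
-- Pin the database to a specific compatibility level
ALTER DATABASE [MyDatabase] SET COMPATIBILITY_LEVEL = 150;
-- Switch the current database to the legacy cardinality estimator
ALTER DATABASE SCOPED CONFIGURATION SET LEGACY_CARDINALITY_ESTIMATION = ON;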
SQL Managed Instance has a built-in automatic plan correction feature that is enabled by default. This feature
ensures that queries that worked fine in the past do not degrade in the future. If this feature is not enabled, run
the workload with the old settings so SQL Managed Instance can learn the performance baseline. Then, enable
the feature and run the workload again with the new settings.
Make changes in the parameters of your test or upgrade to higher service tiers to reach the optimal
configuration for the workload performance that fits your needs.
Monitor performance
SQL Managed Instance provides advanced tools for monitoring and troubleshooting, and you should use them
to monitor performance on your instance. Some of the key metrics to monitor are:
CPU usage on the instance to determine if the number of vCores that you provisioned is the right match for
your workload.
Page-life expectancy on your managed instance to determine if you need additional memory.
Wait statistics like INSTANCE_LOG_GOVERNOR or PAGEIOLATCH that identify storage IO issues, especially on the
General Purpose tier, where you might need to pre-allocate files to get better IO performance (a query sketch
follows this list).
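A sketch for spotting the waits called out above; the filter assumes the managed instance log governor waits begin with INSTANCE_LOG, and that PAGEIOLATCH waits indicate storage I/O pressure:
SELECT TOP (10) wait_type, waiting_tasks_count, wait_time_ms
FROM sys.dm_os_wait_stats
WHERE wait_type LIKE 'PAGEIOLATCH%'
   OR wait_type LIKE 'INSTANCE_LOG%'
ORDER BY wait_time_ms DESC;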
Considerations
When comparing performance, consider the following:
Settings match between source and target. Validate that various instance, database, and tempdb settings
are equivalent between the two environments. Differences in configuration, compatibility levels,
encryption settings, trace flags etc., can all skew performance.
Storage is configured according to best practices. For example, for General Purpose, you may need to
pre-allocate the size of the files to improve performance.
There are key environment differences that might cause the performance differences between a managed
instance and SQL Server. Identify risks relevant to your environment that might contribute to a
performance issue.
Query store and automatic tuning should be enabled on your SQL Managed Instance as they help you
measure workload performance and automatically mitigate potential performance issues.
Next steps
For more information to optimize your new Azure SQL Managed Instance environment, see the following
resources:
How to identify why workload performance on Azure SQL Managed Instance is different than SQL Server?
Key causes of performance differences between SQL Managed Instance and SQL Server
Storage performance best practices and considerations for Azure SQL Managed Instance (General Purpose)
Real-time performance monitoring for Azure SQL Managed Instance
Assessment rules for SQL Server to Azure SQL
Managed Instance migration
9/13/2022 • 20 minutes to read
Rules Summary
AnalysisCommand job
Title: AnalysisCommand job step is not supported in Azure SQL Managed Instance.
Category: Warning
Description
The AnalysisCommand job step runs an Analysis Services command. This job step is not supported in Azure
SQL Managed Instance.
Recommendation
Review impacted objects section in Azure Migrate to see all jobs using Analysis Service Command job step and
evaluate if the job step or the impacted object can be removed. Alternatively, migrate to SQL Server on Azure
Virtual Machine.
More information: SQL Server Agent differences in Azure SQL Managed Instance
AnalysisQuery job
Title: AnalysisQuery job step is not supported in Azure SQL Managed Instance.
Category: Warning
Description
The AnalysisQuery job step runs an Analysis Services query. This job step is not supported in Azure SQL
Managed Instance.
Recommendation
Review impacted objects section in Azure Migrate to see all jobs using Analysis Service Query job step and
evaluate if the job step or the impacted object can be removed. Alternatively, migrate to SQL Server on Azure
Virtual Machine.
More information: SQL Server Agent differences in Azure SQL Managed Instance
Bulk insert
Title: BULK INSERT with non-Azure blob data source is not supported in Azure SQL Managed
Instance.
Category: Issue
Description
Azure SQL Managed Instance cannot access file shares or Windows folders. See the "Impacted Objects" section
for the specific uses of BULK INSERT statements that do not reference an Azure blob. Objects with 'BULK INSERT'
where the source is not Azure blob storage will not work after migrating to Azure SQL Managed Instance.
Recommendation
You will need to convert BULK INSERT statements that use local files or file shares to use files from Azure blob
storage instead, when migrating to Azure SQL Managed Instance.
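As a rough sketch of that conversion, the example below uses a hypothetical table dbo.Sales, a placeholder storage account URL, and a placeholder SAS token; it creates a database scoped credential and an external data source of type BLOB_STORAGE, then points BULK INSERT at it.

```sql
-- Hypothetical names throughout; the storage URL and SAS token are placeholders.
-- Requires an existing database master key before creating the scoped credential.
CREATE DATABASE SCOPED CREDENTIAL BlobCredential
WITH IDENTITY = 'SHARED ACCESS SIGNATURE',
     SECRET = '<SAS token without the leading ?>';

CREATE EXTERNAL DATA SOURCE MyAzureBlobStorage
WITH (TYPE = BLOB_STORAGE,
      LOCATION = 'https://mystorageaccount.blob.core.windows.net/data',
      CREDENTIAL = BlobCredential);

-- The file name is now relative to the blob container instead of a local path or file share.
BULK INSERT dbo.Sales
FROM 'sales.csv'
WITH (DATA_SOURCE = 'MyAzureBlobStorage', FORMAT = 'CSV', FIRSTROW = 2);
```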
More information: Bulk Insert and OPENROWSET differences in Azure SQL Managed Instance
CLR Security
Title: CLR assemblies marked as SAFE or EXTERNAL_ACCESS are considered UNSAFE
Category: Warning
Description
CLR Strict Security mode is enforced in Azure SQL Managed Instance. This mode is enabled by default and
introduces breaking changes for databases containing user-defined CLR assemblies marked either SAFE or
EXTERNAL_ACCESS.
Recommendation
CLR uses Code Access Security (CAS) in the .NET Framework, which is no longer supported as a security
boundary. Beginning with the SQL Server 2017 (14.x) database engine, an sp_configure option called clr strict
security was introduced to enhance the security of CLR assemblies. Clr strict security is enabled by default, and
treats SAFE and EXTERNAL_ACCESS CLR assemblies as if they were marked UNSAFE. When clr strict security is
disabled, a CLR assembly created with PERMISSION_SET = SAFE may be able to access external system
resources, call unmanaged code, and acquire sysadmin privileges. After enabling strict security, any assemblies
that are not signed will fail to load. Also, if a database has SAFE or EXTERNAL_ACCESS assemblies, RESTORE or
ATTACH DATABASE statements can complete, but the assemblies may fail to load. To load the assemblies, you
must either alter or drop and recreate each assembly so that it is signed with a certificate or asymmetric key that
has a corresponding login with the UNSAFE ASSEMBLY permission on the server.
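One possible mitigation, sketched below for a hypothetical assembly named MyClrAssembly, is to register the assembly in the trusted assemblies list by its SHA-512 hash instead of re-signing it; signing with a certificate or asymmetric key as described above remains the documented approach.

```sql
-- Illustrative alternative for a hypothetical assembly named MyClrAssembly:
-- allow it to load under clr strict security by registering its SHA-512 hash.
DECLARE @asm_hash varbinary(64);

SELECT @asm_hash = HASHBYTES('SHA2_512', af.content)
FROM sys.assemblies AS a
JOIN sys.assembly_files AS af
  ON a.assembly_id = af.assembly_id
WHERE a.name = N'MyClrAssembly'
  AND af.file_id = 1;

EXEC sys.sp_add_trusted_assembly @hash = @asm_hash,
                                 @description = N'MyClrAssembly';
```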
More information: CLR strict security
Compute clause
Title: COMPUTE clause is no longer supported and has been removed.
Category: Warning
Description
The COMPUTE clause generates totals that appear as additional summary columns at the end of the result set.
However, this clause is no longer supported in Azure SQL Managed Instance.
Recommendation
The T-SQL module needs to be rewritten using the ROLLUP operator instead. The code below demonstrates
how COMPUTE can be replaced with ROLLUP:
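The original code sample is not reproduced here; the following sketch, which assumes the AdventureWorks sample table Sales.SalesOrderDetail, illustrates the pattern.

```sql
-- Legacy syntax (removed):
-- SELECT SalesOrderID, UnitPrice
-- FROM Sales.SalesOrderDetail
-- ORDER BY SalesOrderID
-- COMPUTE SUM(UnitPrice) BY SalesOrderID;

-- Equivalent totals using GROUP BY ROLLUP (per-order subtotals plus a grand total):
SELECT SalesOrderID, SUM(UnitPrice) AS TotalUnitPrice
FROM Sales.SalesOrderDetail
GROUP BY ROLLUP (SalesOrderID);
```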
Cryptographic provider
Title: A use of CREATE CRYPTOGRAPHIC PROVIDER or ALTER CRYPTOGRAPHIC PROVIDER was
found, which is not supported in Azure SQL Managed Instance.
Category: Issue
Description
Azure SQL Managed Instance does not support CRYPTOGRAPHIC PROVIDER statements because it cannot
access files. See the Impacted Objects section for the specific uses of CRYPTOGRAPHIC PROVIDER statements.
Objects with 'CREATE CRYPTOGRAPHIC PROVIDER' or 'ALTER CRYPTOGRAPHIC PROVIDER' will not work
correctly after migrating to Azure SQL Managed Instance.
Recommendation
Review objects with 'CREATE CRYPTOGRAPHIC PROVIDER' or 'ALTER CRYPTOGRAPHIC PROVIDER'. In any such
objects that are required, remove the uses of these features. Alternatively, migrate to SQL Server on Azure
Virtual Machine.
More information: Cryptographic provider differences in Azure SQL Managed Instance
Database compatibility
Title: Database compatibility level below 100 is not supported
Category: Warning
Description
Database compatibility level is a valuable tool to assist in database modernization: it allows the SQL Server
Database Engine to be upgraded while keeping connecting applications functional by maintaining the
same pre-upgrade database compatibility level. Azure SQL Managed Instance doesn't support compatibility
levels below 100. When a database with a compatibility level below 100 is restored on Azure SQL Managed
Instance, the compatibility level is upgraded to 100.
Recommendation
Evaluate if the application functionality is intact when the database compatibility level is upgraded to 100 on
Azure SQL Managed Instance. Alternatively, migrate to SQL Server on Azure Virtual Machine.
More information: Supported compatibility levels in Azure SQL Managed Instance
DISABLE_DEF_CNST_CHK option
Title: SET option DISABLE_DEF_CNST_CHK is no longer supported and has been removed.
Category: Issue
Description
SET option DISABLE_DEF_CNST_CHK is no longer supported and has been removed in Azure SQL Managed
Instance.
More information: Discontinued Database Engine Functionality in SQL Server
FASTFIRSTROW hint
Title: FASTFIRSTROW query hint is no longer supported and has been removed.
Category: Warning
Description
FASTFIRSTROW query hint is no longer supported and has been removed in Azure SQL Managed Instance.
Recommendation
Instead of the FASTFIRSTROW query hint, use OPTION (FAST n).
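For illustration, assuming a hypothetical dbo.Orders table:

```sql
-- Before (removed table hint):
-- SELECT OrderID, OrderDate FROM dbo.Orders WITH (FASTFIRSTROW);

-- After: optimize for fast retrieval of the first n rows with a query hint.
SELECT OrderID, OrderDate
FROM dbo.Orders
OPTION (FAST 1);
```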
More information: Discontinued Database Engine Functionality in SQL Server
FileStream
Title: Filestream and Filetable are not supported in Azure SQL Managed Instance.
Category: Issue
Description
The Filestream feature, which allows you to store unstructured data such as text documents, images, and videos
in the NTFS file system, is not supported in Azure SQL Managed Instance. This database can't be migrated because
the backup containing Filestream filegroups can't be restored on Azure SQL Managed Instance.
Recommendation
Upload the unstructured files to Azure Blob storage and store metadata related to these files (name, type, URL
location, storage key etc.) in Azure SQL Managed Instance. You may have to re-engineer your application to
enable streaming blobs to and from Azure SQL Managed Instance. Alternatively, migrate to SQL Server on
Azure Virtual Machine.
More information: Streaming Blobs To and From SQL Azure blog
Heterogeneous MS DTC
Title: BEGIN DISTRIBUTED TRANSACTION with non-SQL Server remote server is not supported in
Azure SQL Managed Instance.
Category: Issue
Description
A distributed transaction started by the Transact-SQL BEGIN DISTRIBUTED TRANSACTION statement and managed by Microsoft
Distributed Transaction Coordinator (MS DTC) is not supported in Azure SQL Managed Instance if the remote
server is not SQL Server.
Recommendation
Review impacted objects section in Azure Migrate to see all objects using BEGIN DISTRIBUTED TRANSACTION.
Consider migrating the participant databases to Azure SQL Managed Instance where distributed transactions
across multiple instances are supported (Currently in preview). Alternatively, migrate to SQL Server on Azure
Virtual Machine.
More information: Transactions across multiple servers for Azure SQL Managed Instance
Homogenous MS DTC
Title: BEGIN DISTRIBUTED TRANSACTION is supported across multiple servers for Azure SQL
Managed Instance.
Category: Issue
Description
A distributed transaction started by the Transact-SQL BEGIN DISTRIBUTED TRANSACTION statement and managed by Microsoft
Distributed Transaction Coordinator (MS DTC) is supported across multiple servers for Azure SQL Managed
Instance.
Recommendation
Review impacted objects section in Azure Migrate to see all objects using BEGIN DISTRIBUTED TRANSACTION.
Consider migrating the participant databases to Azure SQL Managed Instance where distributed transactions
across multiple instances are supported (Currently in preview). Alternatively, migrate to SQL Server on Azure
Virtual Machine.
More information: Transactions across multiple servers for Azure SQL Managed Instance
Merge job
Title: Merge job step is not supported in Azure SQL Managed Instance.
Category: Warning
Description
The Merge job step activates the replication Merge Agent. The Replication Merge Agent is a utility executable that
applies the initial snapshot held in the database tables to the Subscribers. It also merges incremental data
changes that occurred at the Publisher after the initial snapshot was created, and reconciles conflicts either
according to the rules you configure or using a custom resolver you create. Merge job step is not supported in
Azure SQL Managed Instance.
Recommendation
Review impacted objects section in Azure Migrate to see all jobs using Merge job step and evaluate if the job
step or the impacted object can be removed. Alternatively, migrate to SQL Server on Azure Virtual Machine.
More information: SQL Server Agent differences in Azure SQL Managed Instance
MI database size
Title: Azure SQL Managed Instance does not support database size greater than 8 TB.
Category: Issue
Description
The size of the database is greater than the maximum instance reserved storage. This database can't be selected
for migration because its size exceeds the allowed limit.
Recommendation
Evaluate whether the data can be archived, compressed, or sharded into multiple databases. Alternatively, migrate to
SQL Server on Azure Virtual Machine.
More information: Hardware characteristics of Azure SQL Managed Instance
MI instance size
Title: Maximum instance storage size in Azure SQL Managed Instance cannot be greater than 8 TB.
Category: Warning
Description
The combined size of all databases is greater than the maximum instance reserved storage.
Recommendation
Consider migrating the databases to different Azure SQL Managed Instances or to SQL Server on Azure Virtual
Machine if all the databases must exist on the same instance.
More information: Hardware characteristics of Azure SQL Managed Instance
Multiple log files
Title: Azure SQL Managed Instance does not support multiple log files.
Category: Issue
Description
SQL Server allows a database to log to multiple files. This database has multiple log files, which is not supported
in Azure SQL Managed Instance. This database can't be migrated as the backup can't be restored on Azure
SQL Managed Instance.
Recommendation
Azure SQL Managed Instance supports only a single log file per database. You need to delete all but one of the log
files before migrating this database to Azure:
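The original sample is not reproduced here; a minimal sketch with placeholder database and logical file names might look like this:

```sql
-- Shrink the extra log file; the file must be empty before it can be removed,
-- which may require taking a log backup first so the active log moves off this file.
USE MyDatabase;
GO
DBCC SHRINKFILE (N'MyDatabase_log2');
GO
ALTER DATABASE MyDatabase REMOVE FILE [MyDatabase_log2];
GO
```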
Next column
Title: Tables and columns named NEXT will lead to an error in Azure SQL Managed Instance.
Category: Issue
Description
Tables or columns named NEXT were detected. Sequences, introduced in Microsoft SQL Server, use the ANSI
standard NEXT VALUE FOR function. Tables or columns named NEXT, with a column aliased as VALUE and the
ANSI standard AS keyword omitted, can cause an error.
Recommendation
Rewrite statements to include the ANSI standard AS keyword when aliasing a table or column. For example,
when a column is named NEXT and that column is aliased as VALUE, the query SELECT NEXT VALUE FROM
TABLE will cause an error and should be rewritten as SELECT NEXT AS VALUE FROM TABLE. Similarly, for a table
named NEXT and aliased as VALUE, the query SELECT Col1 FROM NEXT VALUE will cause an error and should
be rewritten as SELECT Col1 FROM NEXT AS VALUE.
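A minimal sketch of the column case, assuming a hypothetical table dbo.MyTable that has a column named NEXT:

```sql
-- Ambiguous: with AS omitted, NEXT VALUE is parsed as the sequence function and raises an error.
-- SELECT NEXT VALUE FROM dbo.MyTable;

-- Rewritten with the ANSI AS keyword so NEXT is read as the column name, aliased as VALUE:
SELECT NEXT AS VALUE FROM dbo.MyTable;
```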
RAISERROR
Title: Legacy style RAISERROR calls should be replaced with modern equivalents.
Category: Warning
Description
RAISERROR calls like the following example are termed legacy-style because they do not include the commas
and the parentheses: RAISERROR 50001 'this is a test'. This method of calling RAISERROR is no longer
supported and has been removed in Azure SQL Managed Instance.
Recommendation
Rewrite the statement using the current RAISERROR syntax, or evaluate if the modern approach of
BEGIN TRY { } END TRY BEGIN CATCH { THROW; } END CATCH is feasible.
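For illustration, the legacy call above could be rewritten as follows:

```sql
-- Legacy-style call (removed):
-- RAISERROR 50001 'this is a test'

-- Current RAISERROR syntax:
RAISERROR (N'this is a test', 16, 1);

-- Or the TRY...CATCH / THROW pattern:
BEGIN TRY
    RAISERROR (N'this is a test', 16, 1);
END TRY
BEGIN CATCH
    THROW;
END CATCH;
```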
SQL Mail
Title: SQL Mail is no longer supported.
Category: Warning
Description
SQL Mail is no longer supported and has been removed in Azure SQL Managed Instance.
Recommendation
Use Database Mail.
More information: Discontinued Database Engine Functionality in SQL Server
SystemProcedures110
Title: Detected statements that reference removed system stored procedures that are not available
in Azure SQL Managed Instance.
Category: Warning
Description
The following unsupported system and extended stored procedures cannot be used in Azure SQL Managed
Instance: sp_dboption, sp_addserver, sp_dropalias, sp_activedirectory_obj, sp_activedirectory_scp, and
sp_activedirectory_start.
Recommendation
Remove references to unsupported system procedures that have been removed in Azure SQL Managed
Instance.
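To help locate such references in a database you're assessing, a query along these lines (illustrative only) searches module definitions for the removed procedure names:

```sql
-- Illustrative search of module definitions for the removed procedure names.
SELECT OBJECT_SCHEMA_NAME(m.object_id) AS schema_name,
       OBJECT_NAME(m.object_id)        AS module_name
FROM sys.sql_modules AS m
WHERE m.definition LIKE '%sp_dboption%'
   OR m.definition LIKE '%sp_addserver%'
   OR m.definition LIKE '%sp_dropalias%'
   OR m.definition LIKE '%sp_activedirectory%';
```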
More information: Discontinued Database Engine Functionality in SQL Server
Transact-SQL job
Title: TSQL job step includes unsupported commands in Azure SQL Managed Instance
Category: Warning
Description
The TSQL job step runs TSQL scripts at a scheduled time. This job step includes commands that are not
supported in Azure SQL Managed Instance.
Recommendation
Review impacted objects section in Azure Migrate to see all jobs that include unsupported commands in Azure
SQL Managed Instance and evaluate if the job step or the impacted object can be removed. Alternatively,
migrate to SQL Server on Azure Virtual Machine.
More information: SQL Server Agent differences in Azure SQL Managed Instance
Trace flags
Title: Trace flags not supported in Azure SQL Managed Instance were found
Category: Warning
Description
Azure SQL Managed Instance supports only a limited number of global trace flags. Session trace flags aren't
supported.
Recommendation
Review impacted objects section in Azure Migrate to see all trace flags that are not supported in Azure SQL
Managed Instance and evaluate if they can be removed. Alternatively, migrate to SQL Server on Azure Virtual
Machine.
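To list the global trace flags currently enabled on the source instance before comparing them against the supported list, you can run, for example:

```sql
-- List all global trace flags currently enabled on the source instance.
DBCC TRACESTATUS (-1);
```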
More information: Trace flags
Windows authentication
Title: Database users mapped with Windows authentication (integrated security) are not
supported in Azure SQL Managed Instance
Category: Warning
Description
Azure SQL Managed Instance supports two types of authentication:
SQL Authentication, which uses a username and password
Azure Active Directory Authentication, which uses identities managed by Azure Active Directory and is
supported for managed and integrated domains.
Database users mapped with Windows authentication (integrated security) are not supported in Azure SQL
Managed Instance.
Recommendation
Federate the local Active Directory with Azure Active Directory. The Windows identity can then be replaced with
the equivalent Azure Active Directory identities. Alternatively, migrate to SQL Server on Azure Virtual Machine.
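As a sketch, assuming a federated identity user@contoso.com and a hypothetical database MyDatabase, the Windows login and user could be replaced with Azure AD equivalents like this:

```sql
-- Placeholder principal and database names.
CREATE LOGIN [user@contoso.com] FROM EXTERNAL PROVIDER;
GO
USE MyDatabase;
GO
CREATE USER [user@contoso.com] FROM LOGIN [user@contoso.com];
GO
```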
More information: SQL Managed Instance security capabilities
XP_cmdshell
Title: xp_cmdshell is not supported in Azure SQL Managed Instance.
Category: Issue
Description
Xp_cmdshell, which spawns a Windows command shell and passes in a string for execution, isn't supported in
Azure SQL Managed Instance.
Recommendation
Review impacted objects section in Azure Migrate to see all objects using xp_cmdshell and evaluate if the
reference to xp_cmdshell or the impacted object can be removed. Consider exploring Azure Automation, which
delivers a cloud-based automation and configuration service. Alternatively, migrate to SQL Server on Azure
Virtual Machine.
More information: Stored Procedure differences in Azure SQL Managed Instance
Next steps
To start migrating your SQL Server to Azure SQL Managed Instance, see the SQL Server to SQL Managed
Instance migration guide.
For a matrix of the Microsoft and third-party services and tools that are available to assist you with
various database and data migration scenarios as well as specialty tasks, see Service and tools for data
migration.
To learn more about Azure SQL Managed Instance, see:
Service Tiers in Azure SQL Managed Instance
Differences between SQL Server and Azure SQL Managed Instance
Azure Total Cost of Ownership Calculator
To learn more about the framework and adoption cycle for Cloud migrations, see
Cloud Adoption Framework for Azure
Best practices for costing and sizing workloads migrated to Azure
To assess the Application access layer, see Data Access Migration Toolkit (Preview)
For details on how to perform Data Access Layer A/B testing, see Database Experimentation Assistant.
Connectivity architecture for Azure SQL Managed
Instance
9/13/2022 • 12 minutes to read
Communication overview
The following diagram shows entities that connect to SQL Managed Instance. It also shows the resources that
need to communicate with a managed instance. The communication process at the bottom of the diagram
represents customer applications and tools that connect to SQL Managed Instance as data sources.
SQL Managed Instance is a platform as a service (PaaS) offering. Azure uses automated agents (management,
deployment, and maintenance) to manage this service based on telemetry data streams. Because Azure is
responsible for management, customers can't access the SQL Managed Instance virtual cluster machines
through Remote Desktop Protocol (RDP).
Some operations started by end users or applications might require SQL Managed Instance to interact with the
platform. One case is the creation of a SQL Managed Instance database. This resource is exposed through the
Azure portal, PowerShell, Azure CLI, and the REST API.
SQL Managed Instance depends on Azure services such as Azure Storage for backups, Azure Event Hubs for
telemetry, Azure Active Directory (Azure AD) for authentication, Azure Key Vault for Transparent Data Encryption
(TDE), and a couple of Azure platform services that provide security and supportability features. SQL Managed
Instance makes connections to these services.
All communications are encrypted and signed using certificates. To check the trustworthiness of communicating
parties, SQL Managed Instance constantly verifies these certificates through certificate revocation lists. If the
certificates are revoked, SQL Managed Instance closes the connections to protect the data.
Azure management and deployment services run outside the virtual network. SQL Managed Instance and Azure
services connect over the endpoints that have public IP addresses. When SQL Managed Instance creates an
outbound connection, on the receiving end Network Address Translation (NAT) makes the connection look like
it's coming from this public IP address.
Management traffic flows through the customer's virtual network. That means that elements of the virtual
network's infrastructure can affect management traffic, causing the instance to fail and become unavailable.
IMPORTANT
To improve customer experience and service availability, Azure applies a network intent policy on Azure virtual network
infrastructure elements. The policy can affect how SQL Managed Instance works. This platform mechanism transparently
communicates networking requirements to users. The policy's main goal is to prevent network misconfiguration and to
ensure normal SQL Managed Instance operations. When you delete a managed instance, the network intent policy is also
removed.
Virtual cluster connectivity architecture
Let's take a deeper dive into connectivity architecture for SQL Managed Instance. The following diagram shows
the conceptual layout of the virtual cluster.
Clients connect to SQL Managed Instance by using a host name that has the form
<mi_name>.<dns_zone>.database.windows.net . This host name resolves to a private IP address, although it's
registered in a public Domain Name System (DNS) zone and is publicly resolvable. The zone-id is automatically
generated when you create the cluster. If a newly created cluster hosts a secondary managed instance, it shares
its zone ID with the primary cluster. For more information, see Use auto failover groups to enable transparent
and coordinated failover of multiple databases.
This private IP address belongs to the internal load balancer for SQL Managed Instance. The load balancer
directs traffic to the SQL Managed Instance gateway. Because multiple managed instances can run inside the
same cluster, the gateway uses the SQL Managed Instance host name to redirect traffic to the correct SQL
engine service.
Management and deployment services connect to SQL Managed Instance by using a management endpoint
that maps to an external load balancer. Traffic is routed to the nodes only if it's received on a predefined set of
ports that only the management components of SQL Managed Instance use. A built-in firewall on the nodes is
set up to allow traffic only from Microsoft IP ranges. Certificates mutually authenticate all communication
between management components and the management plane.
Management endpoint
Azure manages SQL Managed Instance by using a management endpoint. This endpoint is inside an instance's
virtual cluster. The management endpoint is protected by a built-in firewall on the network level. On the
application level, it's protected by mutual certificate verification. To find the endpoint's IP address, see Determine
the management endpoint's IP address.
When connections start inside SQL Managed Instance (as with backups and audit logs), traffic appears to start
from the management endpoint's public IP address. You can limit access to public services from SQL Managed
Instance by setting firewall rules to allow only the IP address for SQL Managed Instance. For more information,
see Verify the SQL Managed Instance built-in firewall.
NOTE
Traffic that goes to Azure services inside the SQL Managed Instance region is optimized and is therefore not
NATed to the public IP address of the management endpoint. Consequently, if you need to use IP-based firewall rules,
most commonly for storage, the service needs to be in a different region from SQL Managed Instance.
IMPORTANT
Due to control plane configuration specificities, service-aided subnet configuration would not enable service endpoints in
national clouds.
Network requirements
Deploy SQL Managed Instance in a dedicated subnet inside the virtual network. The subnet must have these
characteristics:
Dedicated subnet: The SQL Managed Instance subnet can't contain any other cloud service that's associated
with it, and it can't be a gateway subnet; other managed instances are allowed. The subnet can't contain
any resource except the managed instance(s), and you can't later add other types of resources to the subnet.
Subnet delegation: The SQL Managed Instance subnet needs to be delegated to the
Microsoft.Sql/managedInstances resource provider.
Network security group (NSG): An NSG needs to be associated with the SQL Managed Instance subnet.
You can use an NSG to control access to the SQL Managed Instance data endpoint by filtering traffic on port
1433 and ports 11000-11999 when SQL Managed Instance is configured for redirect connections. The
service will automatically provision and keep current the rules required to allow uninterrupted flow of
management traffic.
User defined route (UDR) table: A UDR table needs to be associated with the SQL Managed Instance
subnet. You can add entries to the route table to route traffic that has on-premises private IP ranges as a
destination through the virtual network gateway or virtual network appliance (NVA). The service will
automatically provision and keep current the entries required to allow uninterrupted flow of management traffic.
Sufficient IP addresses: The SQL Managed Instance subnet must have at least 32 IP addresses. For more
information, see Determine the size of the subnet for SQL Managed Instance. You can deploy managed
instances in the existing network after you configure it to satisfy the networking requirements for SQL
Managed Instance. Otherwise, create a new network and subnet.
Allowed by Azure policies: If you use Azure Policy to deny the creation or modification of resources in the
scope that includes SQL Managed Instance subnet/virtual network, such policies should not prevent
Managed Instance from managing its internal resources. The following resources need to be excluded from
deny effects to enable normal operation:
Resources of type Microsoft.Network/serviceEndpointPolicies, when resource name begins with
_e41f87a2_
All resources of type Microsoft.Network/networkIntentPolicies
All resources of type Microsoft.Network/virtualNetworks/subnets/contextualServiceEndpointPolicies
Locks on virtual network: Locks on the dedicated subnet's virtual network, its parent resource group, or
subscription, may occasionally interfere with SQL Managed Instance's management and maintenance
operations. Take special care when you use such locks.
IMPORTANT
When you create a managed instance, a network intent policy is applied on the subnet to prevent noncompliant changes
to networking setup. This policy is a hidden resource located in the virtual network of the resource group. After the last
instance is removed from the subnet, the network intent policy is also removed. The rules below are for informational
purposes only, and you should not deploy them using an ARM template, PowerShell, or the CLI. If you want to use the latest
official template, you can always retrieve it from the portal. Replication traffic for auto-failover groups between two SQL
Managed Instances should be direct, and not through a hub network.
1 MI SUBNET refers to the IP address range for the subnet in the form x.x.x.x/y. You can find this information in
the Azure portal, in subnet properties.
2 If the destination address is for one of Azure's services, Azure routes the traffic directly to the service over
Azure's backbone network, rather than routing the traffic to the Internet. Traffic between Azure services does not
traverse the Internet, regardless of which Azure region the virtual network exists in, or which Azure region an
instance of the Azure service is deployed in. For more details check UDR documentation page.
In addition, you can add entries to the route table to route traffic that has on-premises private IP ranges as a
destination through the virtual network gateway or virtual network appliance (NVA).
If the virtual network includes a custom DNS, the custom DNS server must be able to resolve public DNS
records. Using additional features like Azure AD Authentication might require resolving additional FQDNs. For
more information, see Set up a custom DNS.
Networking constraints
TLS 1.2 is enforced on outbound connections : In January 2020 Microsoft enforced TLS 1.2 for intra-
service traffic in all Azure services. For Azure SQL Managed Instance, this resulted in TLS 1.2 being enforced on
outbound connections used for replication and linked server connections to SQL Server. If you are using
versions of SQL Server older than 2016 with SQL Managed Instance, please ensure that TLS 1.2 specific updates
have been applied.
The following virtual network features are currently not supported with SQL Managed Instance:
Microsoft peering : Enabling Microsoft peering on ExpressRoute circuits peered directly or transitively with
a virtual network where SQL Managed Instance resides affects traffic flow between SQL Managed Instance
components inside the virtual network and services it depends on, causing availability issues. SQL Managed
Instance deployments to virtual network with Microsoft peering already enabled are expected to fail.
Global virtual network peering : Virtual network peering connectivity across Azure regions doesn't work
for SQL Managed Instances placed in subnets created before 9/22/2020.
AzurePlatformDNS : Using the AzurePlatformDNS service tag to block platform DNS resolution would
render SQL Managed Instance unavailable. Although SQL Managed Instance supports customer-defined
DNS for DNS resolution inside the engine, there is a dependency on platform DNS for platform operations.
NAT gateway : Using Azure Virtual Network NAT to control outbound connectivity with a specific public IP
address would render SQL Managed Instance unavailable. The SQL Managed Instance service is currently
limited to use of basic load balancer that doesn't provide coexistence of inbound and outbound flows with
Virtual Network NAT.
IPv6 for Azure Virtual Network : Deploying SQL Managed Instance to dual stack IPv4/IPv6 virtual
networks is expected to fail. Associating network security group (NSG) or route table (UDR) containing IPv6
address prefixes to SQL Managed Instance subnet, or adding IPv6 address prefixes to NSG or UDR that is
already associated with Managed instance subnet, would render SQL Managed Instance unavailable. SQL
Managed Instance deployments to a subnet with NSG and UDR that already have IPv6 prefixes are expected
to fail.
Azure DNS private zones with a name reserved for Microsoft services : Following is the list of
reserved names: windows.net, database.windows.net, core.windows.net, blob.core.windows.net,
table.core.windows.net, management.core.windows.net, monitoring.core.windows.net,
queue.core.windows.net, graph.windows.net, login.microsoftonline.com, login.windows.net,
servicebus.windows.net, vault.azure.net. Deploying SQL Managed Instance to a virtual network with
associated Azure DNS private zone with a name reserved for Microsoft services would fail. Associating an Azure
DNS private zone that has a reserved name with a virtual network containing a managed instance would render
SQL Managed Instance unavailable. Please follow Azure Private Endpoint DNS configuration for the proper
Private Link configuration.
Next steps
For an overview, see What is Azure SQL Managed Instance?
Learn how to set up a new Azure virtual network or an existing Azure virtual network where you can deploy
SQL Managed Instance.
Calculate the size of the subnet where you want to deploy SQL Managed Instance.
Learn how to create a managed instance:
From the Azure portal.
By using PowerShell.
By using an Azure Resource Manager template.
By using an Azure Resource Manager template (using JumpBox, with SSMS included).
Automated backups in Azure SQL Managed
Instance
9/13/2022 • 20 minutes to read
NOTE
This article provides steps about how to delete personal data from the device or service and can be used to support your
obligations under the GDPR. For general information about GDPR, see the GDPR section of the Microsoft Trust Center
and the GDPR section of the Service Trust portal.
Zone-redundant storage (ZRS) : Copies your backups synchronously across three Azure availability
zones in the primary region. It's currently available in certain regions.
Geo-redundant storage (GRS) : Copies your backups synchronously three times within a single
physical location in the primary region by using LRS. Then it copies your data asynchronously three times
to a single physical location in the paired secondary region.
The result is:
Three synchronous copies in the primary region.
Three synchronous copies in the paired region that were copied over from the primary region to the
secondary region asynchronously.
Geo-zone-redundant storage (GZRS) : Combines the high availability provided by redundancy across
availability zones with protection from regional outages provided by geo-replication. Data in a GZRS
account is copied across three Azure availability zones in the primary region. The data is also replicated to
a secondary geographic region for protection from regional disasters. In that region, you also have three
synchronous copies that were copied over from the primary region to the secondary region
asynchronously.
WARNING
Geo-restore is disabled as soon as a database is updated to use locally redundant or zone-redundant storage.
The storage redundancy diagrams all show regions with multiple availability zones (multi-az). However, there are some
regions which provide only a single availability zone and do not support ZRS or GZRS.
Backup usage
You can use these backups to:
Restore an existing database to a point in time in the past within the retention period by using the Azure
portal, Azure PowerShell, the Azure CLI, or the REST API. This operation creates a new database on either
the same instance as the original database or a different instance in the same subscription and region. It
uses a different name to avoid overwriting the original database.
After the restore finishes, you can delete the original database. Alternatively, you can both rename the
original database and rename the restored database to the original database name.
Restore a deleted database to a point in time within the retention period, including the time of deletion.
You can restore the deleted database only on the same managed instance where you created the original
database. Before you delete a database, the service takes a final transaction log backup to prevent any
data loss.
Restore a database to another geographic region. Geo-restore allows you to recover from a geographic
disaster when you can't access your database or backups in the primary region. It creates a new database
on any existing managed instance in any Azure region.
IMPORTANT
Geo-restore is available only for databases that are configured with geo-redundant backup storage. If you're not
currently using geo-replicated backups for a database, you can change this by configuring backup storage
redundancy.
Restore a database from a specific long-term backup of a database, if the database has been configured
with an LTR policy. LTR allows you to restore an old version of the database by using the Azure portal, the
Azure CLI, or Azure PowerShell to satisfy a compliance request or to run an old version of the application.
For more information, see Long-term retention.
The following summary compares point-in-time restore (PITR), geo-restore, and long-term retention (LTR) backups:
Types of SQL backup
PITR: Full, differential, log.
Geo-restore: Replicated copies of PITR backups.
LTR: Only the full backups.
Recovery point objective (RPO)
PITR: 10 minutes, based on compute size and amount of database activity.
Geo-restore: Up to 1 hour, based on geo-replication.*
LTR: One week (or user's policy).
Recovery time objective (RTO)
PITR: Restore usually takes less than 12 hours but could take longer, depending on size and activity. See Recovery.
Geo-restore: Restore usually takes less than 12 hours but could take longer, depending on size and activity. See Recovery.
LTR: Restore usually takes less than 12 hours but could take longer, depending on size and activity. See Recovery.
Azure storage
PITR: Geo-redundant by default. You can optionally configure zone-redundant or locally redundant storage.
Geo-restore: Available when PITR backup storage redundancy is set to geo-redundant. Not available when PITR backup storage is zone-redundant or locally redundant.
LTR: Geo-redundant by default. You can configure zone-redundant or locally redundant storage.
Restoring a new database in another region
PITR: Not supported.
Geo-restore: Supported in any Azure region.
LTR: Supported in any Azure region.
* For business-critical applications that require large databases and must ensure business continuity, use auto-
failover groups.
** All PITR backups are stored on geo-redundant storage by default, so geo-restore is enabled by default.
*** The workaround is to restore to a new server and use Resource Move to move the server to another
subscription.
Backup scheduling
The first full backup is scheduled immediately after a new database is created or restored, or after backup
redundancy changes. This backup usually finishes within 30 minutes, but it can take longer when the database is
large.
For a new database, the backup is fast. But the backup time can vary for a restored database, and it depends on
the size of the database. For example, the initial backup can take longer on a restored database or a database
copy, which would typically be larger than a new database.
After the first full backup, all further backups are scheduled and managed automatically. The exact timing of all
database backups is determined by the SQL Managed Instance service as it balances the overall system
workload. You can't change the schedule of backup jobs or disable them.
IMPORTANT
For a new, restored, or copied database, the point-in-time restore capability becomes available when the initial transaction
log backup that follows the initial full backup is created.
Backup storage consumption
With SQL Server backup and restore technology, restoring a database to a point in time requires an
uninterrupted backup chain. That chain consists of one full backup, optionally one differential backup, and one
or more transaction log backups.
Azure SQL Managed Instance backup schedules include one full backup every week. To provide PITR within the
entire retention period, the system must store additional full, differential, and transaction log backups for up to a
week longer than the configured retention period.
In other words, for any point in time during the retention period, there must be a full backup that's older than
the oldest time of the retention period. There must also be an uninterrupted chain of differential and transaction
log backups from that full backup until the next full backup.
Backups that are no longer needed to provide PITR functionality are automatically deleted. Because differential
backups and log backups require an earlier full backup to be restorable, all three backup types are purged
together in weekly sets.
For all databases, including TDE-encrypted databases, backups are compressed to reduce backup storage
consumption and costs. The average backup compression ratio is 3 to 4 times. However, it can be significantly lower
or higher depending on the nature of the data and whether data compression is used in the database.
Azure SQL Managed Instance computes your total used backup storage as a cumulative value. Every hour, this
value is reported to the Azure billing pipeline. The pipeline is responsible for aggregating this hourly usage to
calculate your consumption at the end of each month. After the database is deleted, consumption decreases as
backups age out and are deleted. After all backups are deleted and PITR is no longer possible, billing stops.
IMPORTANT
Backups of a database are retained to provide PITR even if the database has been deleted. Although deleting and re-
creating a database might save storage and compute costs, it might increase backup storage costs. The reason is that the
service retains backups for each deleted database, every time it's deleted. To decrease backup costs, you can change the
retention period to 0 days, but this is possible only for deleted databases.
Backup retention
Azure SQL Managed Instance provides both short-term and long-term retention of backups. Short-term
retention allows PITR within the retention period for the database. Long-term retention provides backups for
various compliance requirements.
Short-term retention
For all new, restored, and copied databases, Azure SQL Managed Instance retains sufficient backups to allow
PITR within the last 7 days by default. The service takes regular full, differential, and log backups to ensure that
databases are restorable to any point in time within the retention period that's defined for the database or
managed instance.
You can specify your backup storage redundancy option for STR when you create your instance, and then
change it at a later time. If you change your backup redundancy option after your instance is created, new
backups will use the new redundancy option. Backup copies made with the previous STR redundancy option are
not moved or copied. They're left in the original storage account until the retention period expires, which can be
1 to 35 days.
You can change the backup retention period for each active database in the range of 1 to 35 days. As described
in Backup storage consumption, backups stored to enable PITR might be older than the retention period.
If you delete a database, the system keeps backups in the same way that it would for an online database with its
specific retention period. However, for a deleted database, the retention period is updated from 1-35 days to 0-
35 days, making it possible to delete backups manually. If you need to keep backups for longer than the
maximum short-term retention period of 35 days, you can enable long-term retention.
IMPORTANT
If you delete a managed instance, all databases on that managed instance are also deleted and can't be recovered. You
can't restore a deleted managed instance. But if you've configured long-term retention for a managed instance, LTR
backups are not deleted. You can then use those backups to restore databases to a different managed instance in the
same subscription, to a point in time when an LTR backup was taken. To learn more, review Restore long-term backup.
Long-term retention
With SQL Managed Instance, you can configure full LTR backups for up to 10 years in Azure Blob Storage. After
the LTR policy is configured, full backups are automatically copied to a different storage container weekly.
To meet various compliance requirements, you can select different retention periods for weekly, monthly, and/or
yearly full backups. The frequency depends on the policy. For example, setting W=0, M=1 would create an LTR
copy monthly. For more information about LTR, see Long-term retention.
LTR backup storage redundancy in Azure SQL Managed Instance is inherited from the backup storage
redundancy used by STR at the time that the LTR policy is defined. You can't change it, even if the STR backup
storage redundancy changes in the future.
Storage consumption depends on the selected frequency and retention periods of LTR backups. You can use the
LTR pricing calculator to estimate the cost of LTR storage.
For pricing, review the Azure SQL Managed Instance pricing page.
NOTE
An Azure invoice shows only the excess backup storage consumption, not the entire backup storage consumption. For
example, in a hypothetical scenario, if you have provisioned 4 TB of data storage, you'll get 4 TB of free backup storage
space. If you use a total of 5.8 TB of backup storage space, the Azure invoice will show only 1.8 TB, because you're
charged only for excess backup storage that you've used.
For managed instances, the total size of billable backup storage is aggregated at the instance level and is
calculated as follows:
Total billable backup storage size = (total size of full backups + total size of differential backups + total
size of log backups) – maximum instance data storage
Total billable backup storage, if any, is charged in gigabytes per month for each region, according to the rate of
the backup storage redundancy that you've used. Backup storage consumption depends on the workload and
size of individual databases and managed instances. Heavily modified databases have larger differential and log
backups, because the size of these backups is proportional to the amount of changed data. Therefore, such
databases will have higher backup charges.
As a simplified example, assume that a database has accumulated 744 GB of backup storage and that this
amount stays constant throughout an entire month because the database is completely idle. To convert this
cumulative storage consumption to hourly usage, divide it by 744.0 (31 days per month times 24 hours per
day). SQL Managed Instance will report to the Azure billing pipeline that the database consumed 1 GB of PITR
backup each hour, at a constant rate. Azure billing will aggregate this consumption and show a usage of 744 GB
for the entire month. The cost will be based on the rate for gigabytes per month in your region.
Here's another example. Suppose the same idle database has its retention increased from 7 days to 14 days in
the middle of the month. This increase results in the total backup storage doubling to 1,488 GB. SQL Managed
Instance would report 1 GB of usage for hours 1 through 372 (the first half of the month). It would report the
usage as 2 GB for hours 373 through 744 (the second half of the month). This usage would be aggregated to a
final bill of 1,116 GB per month. Retention costs don't increase immediately. They increase gradually every day,
because the backups grow until they reach the maximum retention period of 14 days.
Actual backup billing scenarios are more complex. Because the rate of changes in the database depends on the
workload and is variable over time, the size of each differential and log backup will also vary. The hourly
consumption of backup storage will fluctuate accordingly.
Each differential backup also contains all changes made in the database since the last full backup. So, the total
size of all differential backups gradually increases over the course of a week. Then it drops sharply after an older
set of full, differential, and log backups ages out.
For example, assume that a heavy write activity, such as index rebuild, runs just after a full backup is completed.
The modifications that the index rebuild makes will then be included:
In the transaction log backups taken over the duration of the rebuild.
In the next differential backup.
In every differential backup taken until the next full backup occurs.
For the last scenario in larger databases, an optimization in the service creates a full backup instead of a
differential backup if a differential backup would be excessively large otherwise. This reduces the size of all
differential backups until the following full backup.
Monitor costs
To understand backup storage costs, go to Cost Management + Billing in the Azure portal. Select Cost
Management , and then select Cost analysis . Select the desired subscription for Scope , and then filter for the
time period and service that you're interested in as follows:
1. Add a filter for Service name.
2. In the dropdown list, select sql managed instance for a managed instance.
3. Add another filter for Meter subcategory.
4. To monitor PITR backup costs, in the dropdown list, select managed instance pitr backup storage .
Meters will show up only if backup storage consumption exists.
To monitor LTR backup costs, in the dropdown list, select sql managed instance - ltr backup storage .
Meters will show up only if backup storage consumption exists.
The Storage and compute subcategories might also interest you, but they're not associated with backup
storage costs.
IMPORTANT
Meters are visible only for counters that are currently in use. If a counter is not available, it's likely that the category is not
currently being used. For example, managed instance counters won't be present for customers who do not have a
managed instance deployed. Likewise, storage counters won't be visible for resources that are not consuming storage. If
there is no PITR or LTR backup storage consumption, these meters won't be visible.
Encrypted backups
If your database is encrypted with TDE, backups are automatically encrypted at rest, including LTR backups. All
new databases in Azure SQL are configured with TDE enabled by default. For more information on TDE, see
Transparent data encryption with SQL Managed Instance.
Backup integrity
All database backups are taken with the CHECKSUM option to provide additional backup integrity. Automatic
testing of automated database backups by the Azure SQL engineering team is not currently available for Azure
SQL Managed Instance. Schedule test backup restoration and DBCC CHECKDB on your databases in SQL
Managed Instance around your workload.
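For example, a scheduled integrity check on a hypothetical database MyDatabase could run:

```sql
-- Example integrity check to schedule (for instance, as a SQL Agent job).
DBCC CHECKDB (N'MyDatabase') WITH NO_INFOMSGS, ALL_ERRORMSGS;
```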
IMPORTANT
Azure policies are not enforced when you're creating a database via T-SQL. To enforce data residency when you're creating
a database by using T-SQL, use LOCAL or ZONE as input to the BACKUP_STORAGE_REDUNDANCY parameter in the
CREATE DATABASE statement.
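A minimal sketch of the T-SQL form, assuming a hypothetical database name:

```sql
-- Keep backups within the region by choosing zone-redundant backup storage at creation time.
CREATE DATABASE MyDatabase WITH BACKUP_STORAGE_REDUNDANCY = 'ZONE';
```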
Next steps
To learn about the other SQL Managed Instance business continuity solutions, see Business continuity
overview.
For information about how to configure, manage, and restore from long-term retention of automated
backups in Azure Blob Storage by using the Azure portal, see Manage long-term backup retention by using
the Azure portal.
For information about how to configure, manage, and restore from long-term retention of automated
backups in Azure Blob Storage by using PowerShell, see Manage long-term backup retention by using
PowerShell.
For more information about how to restore a database to a point in time by using the Azure portal, see
Recover using automated database backups.
To learn all about backup storage consumption on Azure SQL Managed Instance, see Backup storage
consumption on SQL Managed Instance explained.
To learn how to fine-tune backup storage retention and costs for SQL Managed Instance, see Fine tuning
backup storage costs on SQL Managed Instance.
Auto-failover groups overview & best practices
(Azure SQL Managed Instance)
9/13/2022 • 23 minutes to read
NOTE
This article covers auto-failover groups for Azure SQL Managed Instance. For Azure SQL Database, see Auto-failover
groups in SQL Database.
Overview
The auto-failover groups feature allows you to manage the replication and failover of a group of databases on a
server or all user databases in a managed instance to another Azure region. It is a declarative abstraction on top
of the active geo-replication feature, designed to simplify deployment and management of geo-replicated
databases at scale.
Automatic failover
You can initiate a geo-failover manually or you can delegate it to the Azure service based on a user-defined
policy. The latter option allows you to automatically recover multiple related databases in a secondary region
after a catastrophic failure or other unplanned event that results in full or partial loss of the SQL Database or
SQL Managed Instance availability in the primary region. Typically, these are outages that cannot be
automatically mitigated by the built-in high availability infrastructure. Examples of geo-failover triggers include
natural disasters, or incidents caused by a tenant or control ring being down due to an OS kernel memory leak
on compute nodes. For more information, see Azure SQL high availability.
Offload read-only workloads
To reduce traffic to your primary databases, you can also use the secondary databases in a failover group to
offload read-only workloads. Use the read-only listener to direct read-only traffic to a readable secondary
database.
Endpoint redirection
Auto-failover groups provide read-write and read-only listener endpoints that remain unchanged during geo-
failovers. This means you do not have to change the connection string for your application after a geo-failover,
because connections are automatically routed to the current primary. Whether you use manual or automatic
failover activation, a geo-failover switches all secondary databases in the group to the primary role. After the
geo-failover is completed, the DNS record is automatically updated to redirect the endpoints to the new region.
For geo-failover RPO and RTO, see Overview of Business Continuity.
Recovering an application
To achieve full business continuity, adding regional database redundancy is only part of the solution. Recovering
an application (service) end-to-end after a catastrophic failure requires recovery of all components that
constitute the service and any dependent services. Examples of these components include the client software
(for example, a browser with a custom JavaScript), web front ends, storage, and DNS. It is critical that all
components are resilient to the same failures and become available within the recovery time objective (RTO) of
your application. Therefore, you need to identify all dependent services and understand the guarantees and
capabilities they provide. Then, you must take adequate steps to ensure that your service functions during the
failover of the services on which it depends.
IMPORTANT
The name of the failover group must be globally unique within the .database.windows.net domain.
Primary
The managed instance that hosts the primary databases in the failover group.
Secondary
The managed instance that hosts the secondary databases in the failover group. The secondary cannot be
in the same Azure region as the primary.
DNS zone
A unique ID that is automatically generated when a new SQL Managed Instance is created. A multi-
domain (SAN) certificate for this instance is provisioned to authenticate the client connections to any
instance in the same DNS zone. The two managed instances in the same failover group must share the
DNS zone.
Failover group read-write listener
A DNS CNAME record that points to the current primary. It's created automatically when the failover
group is created and allows the read-write workload to transparently reconnect to the primary when the
primary changes after failover. When the failover group is created on a SQL Managed Instance, the DNS
CNAME record for the listener URL is formed as <fog-name>.<zone_id>.database.windows.net .
Failover group read-only listener
A DNS CNAME record that points to the current secondary. It's created automatically when the failover
group is created and allows the read-only SQL workload to transparently connect to the secondary when
the secondary changes after failover. When the failover group is created on a SQL Managed Instance, the
DNS CNAME record for the listener URL is formed as
<fog-name>.secondary.<zone_id>.database.windows.net .
NOTE
Because verification of the scale of the outage and how quickly it can be mitigated involves human actions, the
grace period cannot be set below one hour. This limitation applies to all databases in the failover group regardless
of their data synchronization state.
NOTE
The AllowReadOnlyFailoverToPrimary property only has effect if automatic failover policy is enabled and an
automatic geo-failover has been triggered. In that case, if the property is set to True, the new primary will serve
both read-write and read-only sessions.
Planned failover
Planned failover performs full data synchronization between primary and secondary databases before
the secondary switches to the primary role. This guarantees no data loss. Planned failover is used in the
following scenarios:
Perform disaster recovery (DR) drills in production when data loss is not acceptable
Relocate the databases to a different region
Return the databases to the primary region after the outage has been mitigated (failback)
NOTE
If a database contains in-memory OLTP objects, the primary databases and the target secondary geo-replica
databases should have matching service tiers, as in-memory OLTP objects are always hydrated in memory. A
lower service tier on the target geo-replica database may result in out-of-memory issues. If this happens, the
affected geo-secondary database replica may be put into a limited read-only mode called in-memory OLTP
checkpoint-only mode. Read-only table queries are allowed, but read-only in-memory OLTP table queries are
disallowed on the affected geo-secondary database replica. Planned failover is blocked if all replicas in the geo-
secondary database are in checkpoint only mode. Unplanned failover may fail due to out-of-memory issues. To
avoid this, upgrade the service tier of the secondary database to match the primary database during the planned
failover, or drill. Service tier upgrades can be size-of-data operations, and may take a while to finish.
Unplanned failover
Unplanned or forced failover immediately switches the secondary to the primary role without waiting for
recent changes to propagate from the primary. This operation may result in data loss. Unplanned failover
is used as a recovery method during outages when the primary is not accessible. When the outage is
mitigated, the old primary will automatically reconnect and become a new secondary. A planned failover
may be executed to fail back, returning the replicas to their original primary and secondary roles.
Manual failover
You can initiate a geo-failover manually at any time regardless of the automatic failover configuration.
During an outage that impacts the primary, if automatic failover policy is not configured, a manual
failover is required to promote the secondary to the primary role. You can initiate a forced (unplanned) or
friendly (planned) failover. A friendly failover is only possible when the old primary is accessible, and can
be used to relocate the primary to the secondary region without data loss. When a failover is completed,
the DNS records are automatically updated to ensure connectivity to the new primary.
Grace period with data loss
Because the data is replicated to the secondary database using asynchronous replication, an automatic
geo-failover may result in data loss. You can customize the automatic failover policy to reflect your
application’s tolerance to data loss. By configuring GracePeriodWithDataLossHours , you can control how
long the system waits before initiating a forced failover, which may result in data loss.
If your application uses SQL Managed Instance as the data tier, follow the general guidelines and best practices
outlined in this article when designing for business continuity.
For more information about creating the secondary SQL Managed Instance in the same DNS zone as the
primary instance, see Create a secondary managed instance.
IMPORTANT
Global virtual network peering is the recommended way for establishing connectivity between two instances in a failover
group. It provides a low-latency, high-bandwidth private connection between the peered virtual networks using the
Microsoft backbone infrastructure. No public Internet, gateways, or additional encryption is required in the
communication between the peered virtual networks. Global virtual network peering is supported for instances hosted in
subnets created since 9/22/2020. To be able to use global virtual network peering for SQL managed instances hosted in
subnets created before 9/22/2020, consider configuring a non-default maintenance window on the instance, as doing so
moves the instance into a new virtual cluster that supports global virtual network peering.
Regardless of the connectivity mechanism, there are requirements that must be fulfilled for geo-replication
traffic to flow:
The Network Security Group (NSG) rules on the subnet hosting the primary instance allow:
Inbound traffic on port 5022 and port range 11000-11999 from the subnet hosting the secondary
instance.
Outbound traffic on port 5022 and port range 11000-11999 to the subnet hosting the secondary
instance.
The Network Security Group (NSG) rules on the subnet hosting the secondary instance allow:
Inbound traffic on port 5022 and port range 11000-11999 from the subnet hosting the primary
instance.
Outbound traffic on port 5022 and port range 11000-11999 to the subnet hosting the primary
instance.
IP address ranges of the VNets hosting the primary and secondary instances must not overlap.
There must be no indirect overlap of IP address ranges between the VNets hosting the primary and secondary instances
and any other VNets they're peered with via local virtual network peering or other means.
Additionally, if you're using a connectivity mechanism other than the recommended global virtual network
peering, you need to ensure the following:
Any networking devices used, such as firewalls or network virtual appliances (NVAs), don't block the traffic
described above.
Routing is properly configured, and asymmetric routing is avoided.
If you deploy auto-failover groups in a cross-region hub-and-spoke network topology, replication traffic
should go directly between the two managed instance subnets rather than being directed through the hub
networks. This helps you avoid connectivity and replication speed issues.
IMPORTANT
Alternative ways of providing connectivity between the instances that involve additional networking devices can make
troubleshooting connectivity or replication speed issues difficult, require the active involvement of network
administrators, and significantly prolong resolution time.
Initial seeding
When establishing a failover group between managed instances, there's an initial seeding phase before data
replication starts. The initial seeding phase is the longest and most expensive part of the operation. Once initial
seeding completes, data is synchronized, and only subsequent data changes are replicated. The time it takes for
initial seeding to complete depends on the size of the data, the number of replicated databases, the workload
intensity on the primary databases, and the speed of the link between the virtual networks hosting the primary
and secondary instances, which mostly depends on the way connectivity is established. Under normal circumstances,
and when connectivity is established using recommended global virtual network peering, seeding speed is up to
360 GB an hour for SQL Managed Instance. Seeding is performed for a batch of user databases in parallel - not
necessarily for all databases at the same time. Multiple batches may be needed if there are many databases
hosted on the instance.
If the speed of the link between the two instances is slower than what is necessary, the time to seed is likely to
be noticeably impacted. You can use the stated seeding speed, number of databases, total size of data, and the
link speed to estimate how long the initial seeding phase will take before data replication starts. For example, for
a single 100 GB database, the initial seed phase would take about 1.2 hours if the link is capable of pushing 84
GB per hour, and if there are no other databases being seeded. If the link can only transfer 10 GB per hour, then
seeding a 100-GB database will take about 10 hours. If there are multiple databases to replicate, seeding will be
executed in parallel, and, when combined with a slow link speed, the initial seeding phase may take considerably
longer, especially if the parallel seeding of data from all databases exceeds the available link bandwidth.
IMPORTANT
If an extremely low-speed or busy link causes the initial seeding phase to take days, the creation of a failover
group can time out. The creation process is automatically canceled after six days.
IMPORTANT
If a database is dropped on the primary managed instance, it will also be dropped automatically on the geo-secondary
managed instance.
Use the read-write listener (primary MI)
For read-write workloads, use <fog-name>.<zone_id>.database.windows.net as the server name. Connections will be
automatically directed to the primary. This name doesn't change after failover. The geo-failover involves
updating the DNS record, so new client connections are routed to the new primary only after the client DNS
cache is refreshed. Because the secondary instance shares the DNS zone with the primary, the client application
will be able to reconnect to it using the same server-side SAN certificate. The existing client connections need to
be terminated and then re-created to be routed to the new primary. The read-write listener and read-only listener
can't be reached via the public endpoint for managed instance.
DNS update
The DNS update of the read-write listener will happen immediately after the failover is initiated. This operation
won't result in data loss. However, the process of switching database roles can take up to 5 minutes under
normal conditions. Until it's completed, some databases in the new primary instance will still be read-only. If a
failover is initiated using PowerShell, the operation to switch the primary replica role is synchronous. If it's
initiated using the Azure portal, the UI will indicate completion status. If it's initiated using the REST API, use
standard Azure Resource Manager’s polling mechanism to monitor for completion.
IMPORTANT
Use manual planned failover to move the primary back to the original location once the outage that caused the geo-
failover is mitigated.
Scaling instances
You can scale up or scale down the primary and secondary instance to a different compute size within the same
service tier. When scaling up, we recommend that you scale up the geo-secondary first, and then scale up the
primary. When scaling down, reverse the order: scale down the primary first, and then scale down the
secondary. When you scale an instance to a different service tier, this recommendation is enforced.
The sequence is recommended specifically to avoid the problem where the geo-secondary at a lower SKU gets
overloaded and must be re-seeded during an upgrade or downgrade process.
Permissions
Permissions for a failover group are managed via Azure role-based access control (Azure RBAC).
Azure RBAC write access is necessary to create and manage failover groups. The SQL Managed Instance
Contributor has all the necessary permissions to manage failover groups.
For specific permission scopes, review how to configure auto-failover groups in Azure SQL Managed Instance.
Limitations
Be aware of the following limitations:
Failover groups can't be created between two instances in the same Azure region.
Failover groups can't be renamed. You will need to delete the group and re-create it with a different name.
A failover group contains exactly two managed instances. Adding additional instances to the failover group is
unsupported.
An instance can participate only in one failover group at any moment.
Database rename isn't supported for databases in a failover group. You will need to temporarily delete the
failover group to rename a database.
System databases aren't replicated to the secondary instance in a failover group. Therefore, scenarios that
depend on objects from the system databases, such as server logins and Agent jobs, require those objects to be
manually created on the secondary instance and manually kept in sync after any change made on the primary
instance (see the sketch after this list). The only exception is the service master key (SMK) for SQL Managed
Instance, which is replicated automatically to the secondary instance during creation of the failover group. Any
subsequent changes of the SMK on the primary instance, however, won't be replicated to the secondary instance.
To learn more, see how to Enable scenarios dependent on objects from the system databases.
Failover groups can't be created between instances if any of them are in an instance pool.
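As an illustration of keeping server-level objects in sync manually, a SQL login created on the primary can be re-created on the geo-secondary with the same SID so that mapped database users keep working after failover. This is only a hedged sketch: the login name, password placeholder, and SID value below are hypothetical.
-- Run on the geo-secondary; copy the actual SID from sys.server_principals on the primary.
CREATE LOGIN [app_user]
WITH PASSWORD = N'<strong password>',
     SID = 0x241C11948AEEB749B0E531D4FACCC293;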
Programmatically manage failover groups
Auto-failover groups can also be managed programmatically by using Azure PowerShell, the Azure CLI, and the
REST API. Active geo-replication includes a set of Azure Resource Manager APIs for management, including the
Azure SQL Database REST API and Azure PowerShell cmdlets. These APIs require the use of resource groups and
support Azure role-based access control (Azure RBAC). For more information on how to implement access roles,
see Azure role-based access control (Azure RBAC).
Next steps
For detailed tutorials, see
Add a SQL Managed Instance to a failover group
For a sample script, see:
Use PowerShell to create an auto-failover group on a SQL Managed Instance
For a business continuity overview and scenarios, see Business continuity overview
To learn about automated backups, see SQL Database automated backups.
To learn about using automated backups for recovery, see Restore a database from the service-initiated
backups.
T-SQL differences between SQL Server & Azure
SQL Managed Instance
9/13/2022 • 24 minutes to read • Edit Online
There are some PaaS limitations that are introduced in SQL Managed Instance and some behavior changes
compared to SQL Server. The differences are divided into the following categories:
Availability includes the differences in Always On Availability Groups and backups.
Security includes the differences in auditing, certificates, credentials, cryptographic providers, logins and
users, and the service key and service master key.
Configuration includes the differences in buffer pool extension, collation, compatibility levels, database
mirroring, database options, SQL Server Agent, and table options.
Functionalities include BULK INSERT/OPENROWSET, CLR, DBCC, distributed transactions, extended events,
external libraries, filestream and FileTable, full-text Semantic Search, linked servers, PolyBase, Replication,
RESTORE, Service Broker, stored procedures, functions, and triggers.
Environment settings such as VNets and subnet configurations.
Most of these differences are architectural constraints that stem from the design of the service.
Temporary known issues that are discovered in SQL Managed Instance and will be resolved in the future are
described in What's new?.
Availability
Always On Availability Groups
High availability is built into SQL Managed Instance and can't be controlled by users. The following statements
aren't supported:
CREATE ENDPOINT … FOR DATABASE_MIRRORING
CREATE AVAILABILITY GROUP
ALTER AVAILABILITY GROUP
DROP AVAILABILITY GROUP
The SET HADR clause of the ALTER DATABASE statement
Backup
Azure SQL Managed Instance has automatic backups, so users can create full database COPY_ONLY backups.
Differential, log, and file snapshot backups aren't supported.
With a SQL Managed Instance, you can back up an instance database only to an Azure Blob storage account:
Only BACKUP TO URL is supported.
FILE , TAPE , and backup devices aren't supported.
Most of the general WITH options are supported.
COPY_ONLY is mandatory.
FILE_SNAPSHOT isn't supported.
Tape options: REWIND , NOREWIND , UNLOAD , and NOUNLOAD aren't supported.
Log-specific options: NORECOVERY , STANDBY , and NO_TRUNCATE aren't supported.
Limitations:
With a SQL Managed Instance, you can back up an instance database to a backup with up to 32 stripes,
which is enough for databases up to 4 TB if backup compression is used.
You can't execute BACKUP DATABASE ... WITH COPY_ONLY on a database that's encrypted with service-
managed Transparent Data Encryption (TDE). Service-managed TDE forces backups to be encrypted with
an internal TDE key. The key can't be exported, so you can't restore the backup. Use automatic backups
and point-in-time restore, or use customer-managed (BYOK) TDE instead. You also can disable encryption
on the database.
Native backups taken on a SQL Managed Instance can't be restored to SQL Server. This is because
SQL Managed Instance has a higher internal database version than any version of SQL Server.
To back up or restore a database to or from Azure storage, you must create a shared access
signature (SAS), a URI that grants restricted access rights to Azure Storage resources. Using
access keys for these scenarios isn't supported.
The maximum backup stripe size by using the BACKUP command in SQL Managed Instance is 195 GB,
which is the maximum blob size. Increase the number of stripes in the backup command to reduce
individual stripe size and stay within this limit.
TIP
To work around this limitation, when you back up a database from either SQL Server in an on-premises
environment or in a virtual machine, you can:
Back up to DISK instead of backing up to URL .
Upload the backup files to Blob storage.
Restore into SQL Managed Instance.
The Restore command in SQL Managed Instance supports bigger blob sizes in the backup files because a
different blob type is used for storage of the uploaded backup files.
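For reference, a COPY_ONLY backup to Azure Blob storage might look like the following sketch. The storage account, container, credential, and database names are placeholders, and a SAS token for the container is assumed to be available; striping across multiple blobs keeps each stripe under the 195-GB blob limit.
-- Hypothetical names; the credential name must match the container URL.
CREATE CREDENTIAL [https://mystorageaccount.blob.core.windows.net/backups]
WITH IDENTITY = 'SHARED ACCESS SIGNATURE',
     SECRET = '<SAS token without the leading question mark>';
GO
-- COPY_ONLY is mandatory on SQL Managed Instance.
BACKUP DATABASE [MyDatabase]
TO URL = 'https://mystorageaccount.blob.core.windows.net/backups/MyDatabase_1.bak',
   URL = 'https://mystorageaccount.blob.core.windows.net/backups/MyDatabase_2.bak'
WITH COPY_ONLY, COMPRESSION, CHECKSUM;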
For information about backups using T-SQL, see BACKUP.
Security
Auditing
The key differences between auditing in Microsoft Azure SQL and in SQL Server are:
With SQL Managed Instance, auditing works at the server level. The .xel log files are stored in Azure Blob
storage.
With Azure SQL Database, auditing works at the database level. The .xel log files are stored in Azure Blob
storage.
With SQL Server, on-premises or in virtual machines, auditing works at the server level. Events are stored on
file system or Windows event logs.
XEvent auditing in SQL Managed Instance supports Azure Blob storage targets. File and Windows logs aren't
supported.
The key differences in the CREATE AUDIT syntax for auditing to Azure Blob storage are:
A new syntax TO URL is provided that you can use to specify the URL of the Azure Blob storage container
where the .xel files are placed.
The syntax TO FILE isn't supported because SQL Managed Instance can't access Windows file shares.
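For example, a server audit that writes .xel files to Blob storage might look like the following sketch; the audit name and container URL are placeholders, and a SAS credential for the container is assumed to already exist on the instance.
CREATE SERVER AUDIT [MiAuditToBlob]
TO URL (PATH = 'https://mystorageaccount.blob.core.windows.net/sqlauditlogs', RETENTION_DAYS = 30);
GO
ALTER SERVER AUDIT [MiAuditToBlob] WITH (STATE = ON);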
For more information, see:
CREATE SERVER AUDIT
ALTER SERVER AUDIT
Auditing
Certificates
SQL Managed Instance can't access file shares and Windows folders, so the following constraints apply:
The CREATE FROM / BACKUP TO file isn't supported for certificates.
The CREATE / BACKUP certificate from FILE / ASSEMBLY isn't supported. Private key files can't be used.
CREATE CERTIFICATE
FROM BINARY = asn_encoded_certificate
WITH PRIVATE KEY (<private_key_options>)
Credential
Only Azure Key Vault and SHARED ACCESS SIGNATURE identities are supported. Windows users aren't supported.
See CREATE CREDENTIAL and ALTER CREDENTIAL.
Cryptographic providers
SQL Managed Instance can't access files, so cryptographic providers can't be created:
CREATE CRYPTOGRAPHIC PROVIDER isn't supported. See CREATE CRYPTOGRAPHIC PROVIDER.
ALTER CRYPTOGRAPHIC PROVIDER isn't supported. See ALTER CRYPTOGRAPHIC PROVIDER.
Logins and users
SQL logins created by using FROM CERTIFICATE , FROM ASYMMETRIC KEY , and FROM SID are supported. See
CREATE LOGIN.
Azure Active Directory (Azure AD) server principals (logins) created with the CREATE LOGIN syntax or the
CREATE USER FROM LOGIN [Azure AD Login] syntax are supported. These logins are created at the
server level.
SQL Managed Instance supports Azure AD database principals with the syntax
CREATE USER [AADUser/AAD group] FROM EXTERNAL PROVIDER . This feature is also known as Azure AD
contained database users.
Windows logins created with the CREATE LOGIN ... FROM WINDOWS syntax aren't supported. Use Azure
Active Directory logins and users.
The Azure AD admin for the instance has unrestricted admin privileges.
Non-administrator Azure AD database-level users can be created by using the
CREATE USER ... FROM EXTERNAL PROVIDER syntax; see CREATE USER ... FROM EXTERNAL PROVIDER and the sketch after this list.
Azure AD server principals (logins) support SQL features within one SQL Managed Instance only.
Features that require cross-instance interaction, no matter whether they're within the same Azure AD
tenant or different tenants, aren't supported for Azure AD users. Examples of such features are:
SQL transactional replication.
Link server.
Setting an Azure AD login mapped to an Azure AD group as the database owner isn't supported. A
member of the Azure AD group can be a database owner, even if the login hasn't been created in the
database.
Impersonation of Azure AD server-level principals by using other Azure AD principals is supported, such
as the EXECUTE AS clause. EXECUTE AS limitations are:
EXECUTE AS USER isn't supported for Azure AD users when the name differs from the login name.
An example is when the user is created through the syntax
CREATE USER [myAadUser] FROM LOGIN [[email protected]] and impersonation is attempted through
EXEC AS USER = myAadUser . When you create a USER from an Azure AD server principal (login),
specify the user_name as the same login_name from LOGIN .
Only the SQL Server-level principals (logins) that are part of the sysadmin role can execute the
following operations that target Azure AD principals:
EXECUTE AS USER
EXECUTE AS LOGIN
To impersonate a user with the EXECUTE AS statement, the user needs to be mapped directly to an
Azure AD server principal (login). Users that are members of Azure AD groups mapped to Azure AD
server principals can't effectively be impersonated with the EXECUTE AS statement, even though the
caller has impersonate permissions on the specified user name.
Database export/import using bacpac files is supported for Azure AD users in SQL Managed Instance
using either SSMS v18.4 or later, or SqlPackage.exe.
The following configurations are supported using database bacpac file:
Export/import a database between different managed instances within the same Azure AD
domain.
Export a database from SQL Managed Instance and import to SQL Database within the same
Azure AD domain.
Export a database from SQL Database and import to SQL Managed Instance within the same
Azure AD domain.
Export a database from SQL Managed Instance and import to SQL Server (version 2012 or
later).
In this configuration, all Azure AD users are created as SQL Server database principals
(users) without logins. The type of these users is listed as SQL and is visible as SQL_USER in
sys.database_principals. Their permissions and roles remain in the SQL Server
database metadata and can be used for impersonation. However, they can't be used to
access and sign in to SQL Server using their credentials.
Only the server-level principal login, which is created by the SQL Managed Instance provisioning process,
members of the server roles, such as securityadmin or sysadmin , or other logins with ALTER ANY LOGIN
permission at the server level can create Azure AD server principals (logins) in the master database for
SQL Managed Instance.
If the login is a SQL principal, only logins that are part of the sysadmin role can use the create command
to create logins for an Azure AD account.
The Azure AD login must be a member of the Azure AD directory that's used for Azure
SQL Managed Instance.
Azure AD server principals (logins) are visible in Object Explorer starting with SQL Server Management
Studio 18.0 preview 5.
A server principal with sysadmin access level is automatically created for the Azure AD admin account
once it's enabled on an instance.
During authentication, the following sequence is applied to resolve the authenticating principal:
1. If the Azure AD account exists as directly mapped to the Azure AD server principal (login), which is
present in sys.server_principals as type "E," grant access and apply permissions of the Azure AD
server principal (login).
2. If the Azure AD account is a member of an Azure AD group that's mapped to the Azure AD server
principal (login), which is present in sys.server_principals as type "X," grant access and apply
permissions of the Azure AD group login.
3. If the Azure AD account exists as directly mapped to an Azure AD user in a database, which is present
in sys.database_principals as type "E," grant access and apply permissions of the Azure AD database
user.
4. If the Azure AD account is a member of an Azure AD group that's mapped to an Azure AD user in a
database, which is present in sys.database_principals as type "X," grant access and apply permissions
of the Azure AD group user.
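To illustrate the supported syntax for Azure AD principals described in this list, a minimal sketch follows. The login and group names are hypothetical and must exist in the Azure AD directory associated with the instance.
-- In the master database: create an Azure AD server principal (login).
CREATE LOGIN [dba@contoso.com] FROM EXTERNAL PROVIDER;
GO
-- In a user database: create a contained Azure AD database user for a group and grant it a role.
CREATE USER [AppReaders] FROM EXTERNAL PROVIDER;
ALTER ROLE db_datareader ADD MEMBER [AppReaders];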
Service key and service master key
Master key backup isn't supported (managed by SQL Database service).
Master key restore isn't supported (managed by SQL Database service).
Service master key backup isn't supported (managed by SQL Database service).
Service master key restore isn't supported (managed by SQL Database service).
Configuration
Buffer pool extension
Buffer pool extension isn't supported.
ALTER SERVER CONFIGURATION SET BUFFER POOL EXTENSION isn't supported. See ALTER SERVER CONFIGURATION.
Collation
The default instance collation is SQL_Latin1_General_CP1_CI_AS and can be specified as a creation parameter. See
Collations.
Compatibility levels
Supported compatibility levels are 100, 110, 120, 130, 140 and 150.
Compatibility levels below 100 aren't supported.
The default compatibility level for new databases is 140. For restored databases, the compatibility level
remains unchanged if it was 100 and above.
See ALTER DATABASE Compatibility Level.
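For example, a restored database that kept an older compatibility level can be raised manually; the database name is a placeholder.
ALTER DATABASE [MyDatabase] SET COMPATIBILITY_LEVEL = 150;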
Database mirroring
Database mirroring isn't supported.
ALTER DATABASE SET PARTNER and SET WITNESS options aren't supported.
CREATE ENDPOINT … FOR DATABASE_MIRRORING isn't supported.
For more information, see ALTER DATABASE SET PARTNER and SET WITNESS and CREATE ENDPOINT … FOR
DATABASE_MIRRORING.
Database options
Multiple log files aren't supported.
In-memory objects aren't supported in the General Purpose service tier.
There's a limit of 280 files per General Purpose instance, which implies a maximum of 280 files per database.
Both data and log files in the General Purpose tier are counted toward this limit. The Business Critical tier
supports 32,767 files per database.
The database can't contain filegroups that contain filestream data. Restore fails if .bak contains FILESTREAM
data.
Every file is placed in Azure Blob storage. IO and throughput per file depend on the size of each individual
file.
CREATE DATABASE statement
The following limitations apply to CREATE DATABASE :
Files and filegroups can't be defined.
The CONTAINMENT option isn't supported.
WITH options aren't supported.
TIP
As a workaround, use ALTER DATABASE after CREATE DATABASE to set database options to add files or to set
containment.
Some ALTER DATABASE statements (for example, SET CONTAINMENT) might transiently fail, for example during
the automated database backup or right after a database is created. In this case, the ALTER DATABASE statement
should be retried. For more information on related error messages, see the Remarks section.
For more information, see ALTER DATABASE.
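A minimal sketch of this workaround follows; the database name and the options set are illustrative only.
-- Files, filegroups, CONTAINMENT, and WITH options can't be specified at creation time.
CREATE DATABASE [MyDatabase];
GO
-- Apply options afterwards with ALTER DATABASE; retry the statement if it transiently fails.
ALTER DATABASE [MyDatabase] SET QUERY_STORE = ON;
ALTER DATABASE [MyDatabase] SET AUTO_UPDATE_STATISTICS_ASYNC ON;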
SQL Server Agent
Enabling and disabling SQL Server Agent is currently not supported in SQL Managed Instance. SQL Agent is
always running.
Job schedule trigger based on an idle CPU is not supported.
SQL Server Agent settings are read only. The procedure sp_set_agent_properties isn't supported in SQL
Managed Instance.
Jobs
T-SQL job steps are supported.
The following replication jobs are supported:
Transaction-log reader
Snapshot
Distributor
SSIS job steps are supported.
Other types of job steps aren't currently supported:
The merge replication job step isn't supported.
Queue Reader isn't supported.
Command shell isn't yet supported.
SQL Managed Instance can't access external resources, for example, network shares via robocopy.
SQL Server Analysis Services isn't supported.
Notifications are partially supported.
Email notification is supported, although it requires that you configure a Database Mail profile. SQL Server
Agent can use only one Database Mail profile, and it must be called AzureManagedInstance_dbmail_profile (see
the sketch at the end of this section).
Pager isn't supported.
NetSend isn't supported.
Alerts aren't yet supported.
Proxies aren't supported.
EventLog isn't supported.
A user must be directly mapped to an Azure AD server principal (login) to create, modify, or execute SQL Agent
jobs. Users that aren't directly mapped (for example, users that belong to an Azure AD group that has rights to
create, modify, or execute SQL Agent jobs) won't effectively be able to perform those actions. This is due to SQL
Managed Instance impersonation and EXECUTE AS limitations.
The Multi Server Administration feature for master/target (MSX/TSX) jobs isn't supported.
For information about SQL Server Agent, see SQL Server Agent.
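The following sketch shows the Database Mail profile that SQL Server Agent expects for email notifications. The SMTP account details are placeholders and further Database Mail configuration is omitted; only the profile name is fixed by the service.
-- Hypothetical SMTP account; only the profile name AzureManagedInstance_dbmail_profile is mandatory.
EXEC msdb.dbo.sysmail_add_account_sp
     @account_name = N'AgentAlerts',
     @email_address = N'alerts@contoso.com',
     @mailserver_name = N'smtp.contoso.com';
EXEC msdb.dbo.sysmail_add_profile_sp
     @profile_name = N'AzureManagedInstance_dbmail_profile';
EXEC msdb.dbo.sysmail_add_profileaccount_sp
     @profile_name = N'AzureManagedInstance_dbmail_profile',
     @account_name = N'AgentAlerts',
     @sequence_number = 1;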
Tables
The following table types aren't supported:
FILESTREAM
FILETABLE
EXTERNAL TABLE (except PolyBase)
MEMORY_OPTIMIZED (not supported in the General Purpose tier; supported in the Business Critical tier)
For information about how to create and alter tables, see CREATE TABLE and ALTER TABLE.
Functionalities
Bulk insert / OPENROWSET
SQL Managed Instance can't access file shares and Windows folders, so the files must be imported from Azure
Blob storage:
DATA_SOURCE is required in the BULK INSERT command when you import files from Azure Blob storage. See
BULK INSERT.
DATA_SOURCE is required in the OPENROWSET function when you read the content of a file from Azure Blob
storage. See OPENROWSET.
OPENROWSET can be used to read data from Azure SQL Database, Azure SQL Managed Instance, or SQL Server
instances. Other sources such as Oracle databases or Excel files are not supported.
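A minimal sketch of importing from Blob storage follows; the external data source, credential, container URL, table, and file names are placeholders, and the SAS credential is assumed to already exist.
-- Hypothetical external data source over a Blob storage container.
CREATE EXTERNAL DATA SOURCE MyBlobStorage
WITH (TYPE = BLOB_STORAGE,
      LOCATION = 'https://mystorageaccount.blob.core.windows.net/data',
      CREDENTIAL = MyBlobSasCredential);
GO
-- BULK INSERT requires DATA_SOURCE when the file lives in Azure Blob storage.
BULK INSERT dbo.Product
FROM 'product.csv'
WITH (DATA_SOURCE = 'MyBlobStorage', FORMAT = 'CSV', FIRSTROW = 2);
GO
-- OPENROWSET can read the same file directly.
SELECT BulkColumn
FROM OPENROWSET(BULK 'product.csv', DATA_SOURCE = 'MyBlobStorage', SINGLE_CLOB) AS f;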
CLR
A SQL Managed Instance can't access file shares and Windows folders, so the following constraints apply:
Only CREATE ASSEMBLY FROM BINARY is supported. See CREATE ASSEMBLY FROM BINARY.
CREATE ASSEMBLY FROM FILE isn't supported. See CREATE ASSEMBLY FROM FILE.
ALTER ASSEMBLY can't reference files. See ALTER ASSEMBLY.
RESTORE statement
Limitations:
Backups of corrupted databases might be restored depending on the type of corruption, but
automated backups won't be taken until the corruption is fixed. Make sure that you run DBCC CHECKDB on
the source SQL Managed Instance and use backup WITH CHECKSUM in order to prevent this issue.
A .BAK file of a database that contains any limitation described in this document (for example,
FILESTREAM or FILETABLE objects) can't be restored on SQL Managed Instance.
.BAK files that contain multiple backup sets can't be restored.
.BAK files that contain multiple log files can't be restored.
Backups that contain databases bigger than 8 TB, active in-memory OLTP objects, or number of files that
would exceed 280 files per instance can't be restored on a General Purpose instance.
Backups that contain databases bigger than 4 TB or in-memory OLTP objects with the total size larger than
the size described in resource limits cannot be restored on Business Critical instance. For information about
restore statements, see RESTORE statements.
IMPORTANT
The same limitations apply to built-in point-in-time restore operation. As an example, General Purpose database greater
than 4 TB cannot be restored on Business Critical instance. Business Critical database with In-memory OLTP files or more
than 280 files cannot be restored on General Purpose instance.
Service broker
Cross-instance service broker message exchange is supported only between Azure SQL Managed Instances:
CREATE ROUTE : You can't use CREATE ROUTE with ADDRESS other than LOCAL or DNS name of another SQL
Managed Instance. Port is always 4022.
ALTER ROUTE : You can't use ALTER ROUTE with ADDRESS other than LOCAL or DNS name of another SQL
Managed Instance. Port is always 4022.
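For example, a route to a service hosted on another managed instance might look like the following sketch; the route, service, and host names are placeholders.
-- The ADDRESS must be the DNS name of another managed instance; the port is always 4022.
CREATE ROUTE [ExpensesRoute]
WITH SERVICE_NAME = '//contoso.com/ExpensesService',
     ADDRESS = 'TCP://target-instance.abc123456789.database.windows.net:4022';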
Transport security is supported, dialog security is not:
CREATE REMOTE SERVICE BINDING is not supported.
Service broker is enabled by default and cannot be disabled. The following ALTER DATABASE options are not
supported:
ENABLE_BROKER
DISABLE_BROKER
Environment constraints
Subnet
You cannot place any other resources (for example virtual machines) in the subnet where you have deployed
your SQL Managed Instance. Deploy these resources using a different subnet.
Subnet must have sufficient number of available IP addresses. Minimum is to have at least 32 IP addresses in
the subnet.
The number of vCores and types of instances that you can deploy in a region have some constraints and
limits.
There is a networking configuration that must be applied on the subnet.
VNET
The VNet can be deployed using the Resource Manager deployment model; the classic deployment model for VNet isn't
supported.
After a SQL Managed Instance is created, moving the SQL Managed Instance or VNet to another resource
group or subscription isn't supported.
For SQL Managed Instances hosted in virtual clusters that were created before September 22, 2020, global
peering isn't supported. You can connect to these resources via ExpressRoute or VNet-to-VNet through
VNet gateways.
Failover groups
System databases are not replicated to the secondary instance in a failover group. Therefore, scenarios that
depend on objects from the system databases will be impossible on the secondary instance unless the objects
are manually created on the secondary.
TEMPDB
The maximum file size of the tempdb system database can't be greater than 24 GB per core on a General
Purpose tier. The maximum tempdb size on a Business Critical tier is limited by the SQL Managed Instance
storage size. Tempdb log file size is limited to 120 GB on General Purpose tier. Some queries might return an
error if they need more than 24 GB per core in tempdb or if they produce more than 120 GB of log data.
Tempdb is always split into 12 data files: 1 primary, also called master, data file and 11 non-primary data files.
The file structure cannot be changed and new files cannot be added to tempdb .
Memory-optimized tempdb metadata, a new SQL Server 2019 in-memory database feature, is not
supported.
Objects created in the model database cannot be auto-created in tempdb after a restart or a failover because
tempdb does not get its initial object list from the model database. You must create objects in tempdb
manually after each restart or a failover.
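To inspect the fixed tempdb layout on an instance, a simple query such as the following sketch can be used.
-- Lists the 12 data files and the log file of tempdb with their current sizes in MB.
SELECT name, type_desc, size * 8 / 1024 AS size_mb
FROM tempdb.sys.database_files
ORDER BY file_id;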
MSDB
The following schemas in the msdb system database in SQL Managed Instance must be owned by their
respective predefined roles:
General roles
TargetServersRole
Fixed database roles
SQLAgentUserRole
SQLAgentReaderRole
SQLAgentOperatorRole
DatabaseMail roles:
DatabaseMailUserRole
Integration services roles:
db_ssisadmin
db_ssisltduser
db_ssisoperator
IMPORTANT
Changing the predefined role names, schema names and schema owners by customers will impact the normal operation
of the service. Any changes made to these will be reverted back to the predefined values as soon as detected, or at the
next service update at the latest to ensure normal service operation.
Error logs
SQL Managed Instance places verbose information in error logs. Many internal system events are logged in the
error log. Use a custom procedure to read the error logs that filters out irrelevant entries. For more information,
see SQL Managed Instance – sp_readmierrorlog or the SQL Managed Instance extension (preview) for Azure Data
Studio.
Next steps
For more information about SQL Managed Instance, see What is SQL Managed Instance?
For a features and comparison list, see Azure SQL Managed Instance feature comparison.
For release updates, see What's new?.
For issues, workarounds, and resolutions, see Known issues.
For a quickstart that shows you how to create a new SQL Managed Instance, see Create a SQL Managed
Instance.
Transactional replication with Azure SQL Managed
Instance (Preview)
9/13/2022 • 7 minutes to read • Edit Online
Overview
You can use transactional replication to push changes made in an Azure SQL Managed Instance to:
A SQL Server database - on-premises or on Azure VM
A database in Azure SQL Database
An instance database in Azure SQL Managed Instance
NOTE
To use all the features of Azure SQL Managed Instance, you must be using the latest versions of SQL Server
Management Studio (SSMS) and SQL Server Data Tools (SSDT).
Components
The key components in transactional replication are the Publisher , Distributor , and Subscriber , as shown in
the following picture:
ROLE | AZURE SQL DATABASE | AZURE SQL MANAGED INSTANCE
Publisher | No | Yes
Distributor | No | Yes
The Publisher publishes changes made on some tables (articles) by sending the updates to the Distributor. The
publisher can be an Azure SQL Managed Instance or a SQL Server instance.
The Distributor collects changes in the articles from a Publisher and distributes them to the Subscribers. The
Distributor can be either an Azure SQL Managed Instance or a SQL Server instance (any version, as long as it is
equal to or higher than the Publisher version).
The Subscriber receives changes made on the Publisher. A SQL Server instance and Azure SQL Managed
Instance can both be push and pull subscribers, though a pull subscription is not supported when the distributor
is an Azure SQL Managed Instance and the subscriber is not. A database in Azure SQL Database can only be a
push subscriber.
Azure SQL Managed Instance can support being a Subscriber from the following versions of SQL Server:
SQL Server 2016 and later
SQL Server 2014 RTM CU10 (12.0.4427.24) or SP1 CU3 (12.0.2556.4)
SQL Server 2012 SP2 CU8 (11.0.5634.1) or SP3 (11.0.6020.0) or SP4 (11.0.7001.0)
NOTE
For other versions of SQL Server that don't support publishing to objects in Azure, it's possible to use the
republishing data method to move data to newer versions of SQL Server.
Attempting to configure replication using an older version can result in error MSSQL_REPL20084 (The
process could not connect to Subscriber.) and MSSQL_REPL40532 (Cannot open server <name> requested by
the login. The login failed.).
Types of replication
There are different types of replication:
REPLICATION TYPE | AZURE SQL DATABASE | AZURE SQL MANAGED INSTANCE
Merge replication | No | No
Peer-to-peer | No | No
Bidirectional | No | Yes
Updatable subscriptions | No | No
Supportability Matrix
The transactional replication supportability matrix for Azure SQL Managed Instance is the same as the one for
SQL Server.
When to use
Transactional replication is useful in the following scenarios:
Publish changes made in one or more tables in a database and distribute them to one or many databases in
a SQL Server instance or Azure SQL Database that subscribed for the changes.
Keep several distributed databases in synchronized state.
Migrate databases from one SQL Server instance or Azure SQL Managed Instance to another database by
continuously publishing the changes.
Compare Data Sync with Transactional Replication
Common configurations
In general, the publisher and the distributor must be either in the cloud or on-premises. The following
configurations are supported:
Publisher with local Distributor on SQL Managed Instance
Publisher and distributor are configured within a single SQL Managed Instance and distributing changes to
another SQL Managed Instance, SQL Database, or SQL Server instance.
Publisher with remote distributor on SQL Managed Instance
In this configuration, one managed instance publishes changes to a distributor placed on another SQL Managed
Instance that can serve many source SQL Managed Instances and distribute changes to one or many targets on
Azure SQL Database, Azure SQL Managed Instance, or SQL Server.
Publisher and distributor are configured on two managed instances. There are some constraints with this
configuration:
Both managed instances must be in the same vNet.
Both managed instances must be in the same location.
On-premises Publisher/Distributor with remote subscriber
In this configuration, a database in Azure SQL Database or Azure SQL Managed Instance is a subscriber. This
configuration supports migration from on-premises to Azure. If a subscriber is a database in Azure SQL
Database, it must be in push mode.
Requirements
Use SQL Authentication for connectivity between replication participants.
Use an Azure Storage Account share for the working directory used by replication.
Open TCP outbound port 445 in the subnet security rules to access the Azure file share.
Open TCP outbound port 1433 when the SQL Managed Instance is the Publisher/Distributor and the
Subscriber is not. You may also need to change the SQL Managed Instance NSG outbound security rule for
allow_linkedserver_outbound so that the port 1433 Destination Service tag is changed from virtualnetwork to
internet.
Place both the publisher and distributor in the cloud, or both on-premises.
Configure VPN peering between the virtual networks of replication participants if the virtual networks are
different.
NOTE
You may encounter error 53 when connecting to an Azure Storage File if the outbound network security group (NSG) port
445 is blocked when the distributor is an Azure SQL Managed Instance database and the subscriber is on-premises.
Update the vNet NSG to resolve this issue.
3. Drop subscription metadata from the subscriber. Run the following script on the subscription database on
subscriber SQL Managed Instance:
EXEC sp_subscription_cleanup
@publisher = N'<full DNS of publisher, e.g. example.ac2d23028af5.database.windows.net>',
@publisher_db = N'<publisher database>',
@publication = N'<name of publication>';
4. Forcefully drop all replication objects from publisher by running the following script in the published
database:
EXEC sp_removedbreplication
5. Forcefully drop old distributor from original primary SQL Managed Instance (if failing back over to an old
primary that used to have a distributor). Run the following script on the master database in old
distributor SQL Managed Instance:
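A minimal sketch follows, assuming the standard replication cleanup procedure applies; the parameters force removal without consistency checks.
EXEC sp_dropdistributor @no_checks = 1, @ignore_distributor = 1;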
If a subscriber SQL Managed Instance is in a failover group, the publication should be configured to connect to
the failover group listener endpoint for the subscriber managed instance. In the event of a failover, subsequent
action by the managed instance administrator depends on the type of failover that occurred:
For a failover with no data loss, replication will continue working after failover.
For a failover with data loss, replication will work as well. It will replicate the lost changes again.
For a failover with data loss, but the data loss is outside of the distribution database retention period, the SQL
Managed Instance administrator will need to reinitialize the subscription database.
Next steps
For more information about configuring transactional replication, see the following tutorials:
Configure replication between a SQL Managed Instance publisher and subscriber
Configure replication between a SQL Managed Instance publisher, SQL Managed Instance distributor, and
SQL Server subscriber
Create a publication.
Create a push subscription by using the server name as the subscriber (for example,
N'azuresqldbdns.database.windows.net') and the database in Azure SQL Database as the destination
database (for example, Adventureworks).
See also
Replication with a SQL Managed Instance and a failover group
Replication to SQL Database
Replication to managed instance
Create a Publication
Create a Push Subscription
Types of Replication
Monitoring (Replication)
Initialize a Subscription
Link feature for Azure SQL Managed Instance
(preview)
9/13/2022 • 10 minutes to read • Edit Online
Requirements
To use the link feature, you'll need a supported version of SQL Server. The following table lists the supported
versions.
SQL SERVER VERSION | EDITIONS | HOST OS | SERVICING UPDATE REQUIREMENT
SQL Server 2022 (16.x) Preview | Evaluation Edition | Windows Server | Must sign up at https://aka.ms/mi-link-2022-signup to participate in the preview experience.
SQL Server 2019 (15.x) | Enterprise, Standard, or Developer | Windows Server | SQL Server 2019 CU15 (KB5008996), or above, for Enterprise and Developer editions; CU17 (KB5016394), or above, for Standard editions.
SQL Server 2016 (13.x) | Enterprise, Standard, or Developer | Windows Server | SQL Server 2016 SP3 (KB 5003279) and SQL Server 2016 Azure Connect pack (KB 5014242)
In addition, SSMS 18.12.1 or higher is required. SQL Server Management Studio (SSMS) is the easiest way to use the
SQL Managed Instance link; it provides graphical wizards for automated link setup and failover for SQL Server 2016,
2019, and 2022.
NOTE
SQL Managed Instance link feature is available in all public Azure regions. National cloud support is provided for Azure for
US Government only, and no other national clouds at this time.
Overview
The underlying technology of near real-time data replication between SQL Server and SQL Managed Instance is
based on distributed availability groups, part of the well-known and proven Always On availability group
technology stack. Extend your SQL Server on-premises availability group to SQL Managed Instance in Azure in a
safe and secure manner.
There's no need to have an existing availability group or multiple nodes. The link supports single node SQL
Server instances without existing availability groups, and also multiple-node SQL Server instances with existing
availability groups. Through the link, you can use the modern benefits of Azure without migrating your entire
SQL Server data estate to the cloud.
You can keep running the link for as long as you need it, for months and even years at a time. And for your
modernization journey, if or when you're ready to migrate to Azure, the link enables a considerably improved
migration experience with the minimum possible downtime compared to all other options available today,
providing a true online migration to SQL Managed Instance.
Supported scenarios
Data replicated through the link feature from SQL Server to Azure SQL Managed Instance can be used with
several scenarios, such as:
Use Azure ser vices without migrating to the cloud
Offload read-only workloads to Azure
Migrate to Azure
Use Azure services
Use the link feature to leverage Azure services using SQL Server data without migrating to the cloud. Examples
include reporting, analytics, backups, machine learning, and other jobs that send data to Azure.
Offload workloads to Azure
You can also use the link feature to offload workloads to Azure. For example, an application could use SQL
Server for read-write workloads, while offloading read-only workloads to SQL Managed Instance in any Azure
region worldwide. Once the link is established, the primary database on SQL Server is read/write accessible,
while replicated data to SQL Managed Instance in Azure is read-only accessible. This allows for various scenarios
where replicated databases on SQL Managed Instance can be used for read scale-out and offloading read-only
workloads to Azure. SQL Managed Instance, in parallel, can also host independent read/write databases. This
allows for copying the replicated database to another read/write database on the same managed instance for
further data processing.
The link is database scoped (one link per one database), allowing for consolidation and deconsolidation of
workloads in Azure. For example, you can replicate databases from multiple SQL Servers to a single SQL
Managed Instance in Azure (consolidation), or replicate databases from a single SQL Server to multiple
managed instances via a 1 to 1 relationship between a database and a managed instance - to any of Azure's
regions worldwide (deconsolidation). The latter provides you with an efficient way to quickly bring your
workloads closer to your customers in any region worldwide, which you can use as read-only replicas.
Migrate to Azure
The link feature also facilitates migrating from SQL Server to SQL Managed Instance, enabling:
The most performant minimum downtime migration compared to all other solutions available today
True online migration to SQL Managed Instance in any service tier
Since the link feature enables minimum downtime migration, you can migrate to your managed instance while
maintaining your primary workload online. While online migration was possible to achieve previously with
other solutions when migrating to the General Purpose service tier, the link feature now also allows for true
online migrations to the Business Critical service tier as well.
How it works
The underlying technology behind the link feature for SQL Managed Instance is distributed availability groups.
The solution supports single-node systems without existing availability groups, or multiple node systems with
existing availability groups.
Secure connectivity, such as VPN or ExpressRoute, is used between an on-premises network and Azure. If SQL
Server is hosted on an Azure VM, the internal Azure backbone can be used between the VM and the managed
instance, such as global VNet peering. The trust between the two systems is established using
certificate-based authentication, in which SQL Server and SQL Managed Instance exchange their public keys.
Up to 100 links can be established from the same or different SQL Server sources to a single SQL Managed
Instance. This limit is governed by the number of databases that can be hosted on a managed instance at this
time. Likewise, a single SQL Server instance can establish multiple parallel database replication links with
several managed instances in different Azure regions, in a 1-to-1 relationship between a database and a managed
instance. The feature requires CU13 or higher to be installed on SQL Server 2019.
Limitations
This section describes the product’s functional limitations.
General functional limitations
Managed Instance link has a set of general limitations, and those are listed in this section. Listed limitations are
of a technical nature and are unlikely to be addressed in the foreseeable future.
Only user databases can be replicated. Replication of system databases isn't supported.
The solution doesn't replicate server level objects, agent jobs, nor user logins from SQL Server to SQL
Managed Instance.
Only one database can be placed into a single Availability Group per one Distributed Availability Group link.
Link can't be established between SQL Server and SQL Managed Instance if functionality used on SQL Server
isn't supported on SQL Managed Instance.
File tables and file streams aren't supported for replication, as SQL Managed Instance doesn't support
this.
Replicating Databases using Hekaton (In-Memory OLTP) isn't supported on SQL Managed Instance
General Purpose service tier. Hekaton is only supported on SQL Managed Instance Business Critical
service tier.
For the full list of differences between SQL Server and SQL Managed Instance, see this article.
If change data capture (CDC), log shipping, or Service Broker is used with databases replicated from SQL
Server and the database is migrated to SQL Managed Instance, then during failover to Azure, clients will need to
connect using the instance name of the current global primary replica. These settings should be manually
reconfigured.
If transactional replication is used with a database on SQL Server in the case of a migration scenario, during
failover to Azure, transactional replication on SQL Managed Instance will fail and should be manually
reconfigured.
If distributed transactions are used with a database replicated from SQL Server, then in a migration scenario,
the DTC capabilities won't be transferred at the cutover to the cloud. The migrated database won't be able to
participate in distributed transactions with SQL Server, because SQL Managed Instance doesn't support
distributed transactions with SQL Server at this time. For reference, SQL Managed Instance today supports
distributed transactions only between other SQL Managed Instances; see this article.
Managed Instance link can replicate a database of any size if it fits into the chosen storage size of the target
SQL Managed Instance.
Client operating systems (Windows 10 and 11) can't be used to host your SQL Server, because it isn't possible
to enable the Always On feature required for the link. SQL Server must be hosted on Windows Server 2012 or higher.
SQL Server 2008, 2012, and 2014 aren't supported for the link feature, because the database engines of these
releases don't have built-in support for distributed availability groups, which the link requires. Upgrading to a
newer version of SQL Server is required to use the link.
Preview limitations
Some Managed Instance link features and capabilities are limited at this time. Details can be found in the
following list:
Product version requirements as listed in Requirements. At this time SQL Server 2017 (14.x) is not
supported.
Private endpoint (VPN/VNET) is supported to establish the link with SQL Managed Instance. Public endpoint
can't be used to establish the link with SQL Managed Instance.
Managed Instance link authentication between SQL Server instance and SQL Managed Instance is certificate-
based, available only through exchange of certificates. Windows authentication between SQL Server and
managed instance isn't supported.
Replication of user databases from SQL Server to SQL Managed Instance is one-way. User databases from
SQL Managed Instance can't be replicated back to SQL Server.
Auto failover groups replication to secondary SQL Managed Instance can't be used in parallel while
operating the Managed Instance link with SQL Server.
The link can be used with only a single SQL Server instance installed on the OS. Using the link with SQL
Server named instances (multiple SQL Servers installed on the same OS) is not supported.
Replicated R/O databases aren't part of auto-backup process on SQL Managed Instance.
Next steps
If you're interested in using Link feature for Azure SQL Managed Instance with versions and editions that are
currently not supported, sign-up here.
For more information on the link feature, see the following:
Managed Instance link – connecting SQL Server to Azure reimagined.
Prepare for SQL Managed Instance link.
Use SQL Managed Instance link via SSMS to replicate database.
Use SQL Managed Instance link via SSMS to migrate database.
For other replication scenarios, consider:
Transactional replication with Azure SQL Managed Instance (Preview)
What is an Azure SQL Managed Instance pool
(preview)?
9/13/2022 • 9 minutes to read • Edit Online
Key capabilities
Instance pools provide the following benefits:
1. Ability to host 2-vCore instances (only available for instances in instance pools).
2. Predictable and fast instance deployment time (up to 5 minutes).
3. Minimal IP address allocation.
The following diagram illustrates an instance pool with multiple managed instances deployed within a virtual
network subnet.
Instance pools enable deployment of multiple instances on the same virtual machine, where the virtual
machine's compute size is based on the total number of vCores allocated for the pool. This architecture allows
partitioning of the virtual machine into multiple instances, which can be any supported size, including 2 vCores
(2-vCore instances are only available for instances in pools).
After initial deployment, management operations on instances in a pool are much faster. This is because the
deployment or extension of a virtual cluster (dedicated set of virtual machines) is not part of provisioning the
managed instance.
Because all instances in a pool share the same virtual machine, the total IP allocation does not depend on the
number of instances deployed, which is convenient for deployment in subnets with a narrow IP range.
Each pool has a fixed IP allocation of only nine IP addresses (not including the five IP addresses in the subnet
that are reserved for its own needs). For details, see the subnet size requirements for single instances.
Application scenarios
The following list provides the main use cases where instance pools should be considered:
Migration of a group of SQL Server instances at the same time, where the majority is a smaller size (for
example 2 or 4 vCores).
Scenarios where predictable and short instance creation or scaling is important. For example, deployment of
a new tenant in a multi-tenant SaaS application environment that requires instance-level capabilities.
Scenarios where having a fixed cost or spending limit is important. For example, running shared dev-test or
demo environments of a fixed (or infrequently changing) size, where you periodically deploy managed
instances when needed.
Scenarios where minimal IP address allocation in a VNet subnet is important. All instances in a pool are
sharing a virtual machine, so the number of allocated IP addresses is lower than in the case of single
instances.
Architecture
Instance pools have a similar architecture to regular (single) managed instances. To support deployments within
Azure virtual networks and to provide isolation and security for customers, instance pools also rely on virtual
clusters. Virtual clusters represent a dedicated set of isolated virtual machines deployed inside the customer's
virtual network subnet.
The main difference between the two deployment models is that instance pools allow multiple SQL Server
process deployments on the same virtual machine node, which are resource governed using Windows job
objects, while single instances are always alone on a virtual machine node.
The following diagram shows an instance pool and two individual instances deployed in the same subnet and
illustrates the main architectural details for both deployment models:
Every instance pool creates a separate virtual cluster underneath. Instances within a pool and single instances
deployed in the same subnet do not share compute resources allocated to SQL Server processes and gateway
components, which ensures performance predictability.
Resource limitations
There are several resource limitations regarding instance pools and instances inside pools:
Instance pools are available only on Gen5 hardware.
Managed instances within a pool have dedicated CPU and RAM, so the aggregated number of vCores across
all instances must be less than or equal to the number of vCores allocated to the pool.
All instance-level limits apply to instances created within a pool.
In addition to instance-level limits, there are also two limits imposed at the instance pool level:
Total storage size per pool (8 TB).
Total number of user databases per pool. This limit depends on the pool vCores value:
8 vCores pool supports up to 200 databases,
16 vCores pool supports up to 400 databases,
24 and larger vCores pool supports up to 500 databases.
Azure AD authentication can be used after creating or setting a managed instance with the -AssignIdentity
flag. For more information, see New-AzSqlInstance and Set-AzSqlInstance. Users can then set an Azure AD
admin for the instance by following Provision Azure AD admin (SQL Managed Instance).
Total storage allocation and number of databases across all instances must be lower than or equal to the limits
exposed by instance pools.
Instance pools support 8, 16, 24, 32, 40, 64, and 80 vCores.
Managed instances inside pools support 2, 4, 8, 16, 24, 32, 40, 64, and 80 vCores.
Managed instances inside pools support storage sizes between 32 GB and 8 TB, except:
2 vCore instances support sizes between 32 GB and 640 GB,
4 vCore instances support sizes between 32 GB and 2 TB.
Managed instances inside pools have a limit of up to 100 user databases per instance, except 2-vCore instances,
which support up to 50 user databases per instance.
The service tier property is associated with the instance pool resource, so all instances in a pool must have the
same service tier as the pool. At this time, only the General Purpose service tier is available (see the following
section on limitations in the current preview).
Public preview limitations
The public preview has the following limitations:
Currently, only the General Purpose service tier is available.
Instance pools cannot be scaled during the public preview, so careful capacity planning before deployment is
important.
Azure portal support for instance pool creation and configuration is not yet available. All operations on
instance pools are supported through PowerShell only (see the PowerShell sketch after this list). Initial instance
deployment in a pre-created pool is also supported through PowerShell only. Once deployed into a pool,
managed instances can be updated using the Azure portal.
Managed instances created outside of the pool cannot be moved into an existing pool, and instances created
inside a pool cannot be moved outside as a single instance or to another pool.
Reserve capacity instance pricing is not available.
Failover groups are not supported for instances in the pool.
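Because pool management is PowerShell-only during the preview, the following is a minimal sketch of creating a pool and then deploying a 2-vCore instance into it. The resource names, subnet ID, and region are hypothetical placeholders, and the New-AzSqlInstancePool and New-AzSqlInstance cmdlets are assumed to be available in your installed Az.Sql module version.

# Create an 8-vCore General Purpose instance pool (hypothetical names and subnet ID)
New-AzSqlInstancePool -ResourceGroupName "rg-pools" -Name "pool-01" -Location "westeurope" `
    -SubnetId "/subscriptions/<subscription-id>/resourceGroups/rg-pools/providers/Microsoft.Network/virtualNetworks/vnet-pools/subnets/mi-subnet" `
    -VCore 8 -Edition "GeneralPurpose" -ComputeGeneration "Gen5" -LicenseType "LicenseIncluded"

# Deploy a 2-vCore managed instance inside the pool
New-AzSqlInstance -ResourceGroupName "rg-pools" -Name "mi-pooled-01" -InstancePoolName "pool-01" `
    -VCore 2 -StorageSizeInGB 64 -AdministratorCredential (Get-Credential)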
Performance considerations
Although managed instances within pools do have dedicated vCore and RAM, they share local disk (for tempdb
usage) and network resources. It's not likely, but it is possible to experience the noisy neighbor effect if multiple
instances in the pool have high resource consumption at the same time. If you observe this behavior, consider
deploying these instances to a bigger pool or as single instances.
Security considerations
Because instances deployed in a pool share the same virtual machine, you may want to consider disabling
features that introduce higher security risks, or to firmly control access permissions to these features. For
example, CLR integration, native backup and restore, Database Mail, etc.
If you are experiencing issues related to a single managed instance or database within a pool, you should create
a regular support ticket for Azure SQL Managed Instance.
To create larger SQL Managed Instance deployments (with or without instance pools), you may need to obtain a
larger regional quota. For more information, see Request quota increases for Azure SQL Database. The
deployment logic for instance pools compares total vCore consumption at the pool level against your quota to
determine whether you are allowed to create new resources without further increasing your quota.
If you create instance pools on subscriptions eligible for dev-test benefit, you automatically receive discounted
rates of up to 55 percent on Azure SQL Managed Instance.
For full details on instance pool pricing, refer to the instance pools section on the SQL Managed Instance pricing
page.
Next steps
To get started with instance pools, see SQL Managed Instance pools how-to guide.
To learn how to create your first managed instance, see Quickstart guide.
For a features and comparison list, see Azure SQL common features.
For more information about VNet configuration, see SQL Managed Instance VNet configuration.
For a quickstart that creates a managed instance and restores a database from a backup file, see Create a
managed instance.
For a tutorial about using Azure Database Migration Service for migration, see SQL Managed Instance
migration using Database Migration Service.
For advanced monitoring of SQL Managed Instance database performance with built-in troubleshooting
intelligence, see Monitor Azure SQL Managed Instance using Azure SQL Analytics.
For pricing information, see SQL Managed Instance pricing.
Data virtualization with Azure SQL Managed
Instance
Overview
Data virtualization provides two ways of querying files intended for different sets of scenarios:
OPENROWSET syntax – optimized for ad-hoc querying of files. Typically used to quickly explore the content
and the structure of a new set of files.
External tables – optimized for repetitive querying of files using identical syntax as if data were stored locally
in the database. External tables require several preparation steps compared to the OPENROWSET syntax, but
allow for more control over data access. External tables are typically used for analytical workloads and
reporting.
File formats
Parquet and delimited text (CSV) file formats are directly supported. The JSON file format is indirectly supported
by specifying the CSV file format where queries return every document as a separate row. You can parse rows
further using JSON_VALUE and OPENJSON .
Storage types
Files can be stored in Azure Data Lake Storage Gen2 or Azure Blob Storage. To query files, you need to provide
the location in a specific format and use the location type prefix corresponding to the type of external source
and endpoint/protocol. For example, Azure Blob Storage locations use the abs:// prefix
(abs://<container>@<storage_account>.blob.core.windows.net/<path>), while Azure Data Lake Storage Gen2
locations use the adls:// prefix (adls://<container>@<storage_account>.dfs.core.windows.net/<path>).
IMPORTANT
The provided Location type prefix is used to choose the optimal protocol for communication and to leverage any
advanced capabilities offered by the particular storage type. Using the generic https:// prefix is disabled. Always use
endpoint-specific prefixes.
Getting started
If you're new to data virtualization and want to quickly test functionality, start by querying public data sets
available in Azure Open Datasets, like the Bing COVID-19 dataset allowing anonymous access.
Use the following endpoints to query the Bing COVID-19 data sets:
Parquet:
abs://[email protected]/curated/covid-19/bing_covid-19_data/latest/bing_covid-19_data.parquet
CSV:
abs://[email protected]/curated/covid-19/bing_covid-19_data/latest/bing_covid-19_data.csv
For a quick start, run this simple T-SQL query to get first insights into the data set:
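A minimal sketch of such a query, reading the public Parquet file directly by its full path (the endpoint is the one listed above; the resulting column set is inferred from the file):

SELECT TOP 10 *
FROM OPENROWSET(
    BULK 'abs://[email protected]/curated/covid-19/bing_covid-19_data/latest/bing_covid-19_data.parquet',
    FORMAT = 'parquet'
) AS filerows;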
You can continue data set exploration by appending WHERE, GROUP BY and other clauses based on the result
set of the first query.
If the first query fails on your managed instance, access to Azure storage accounts is likely restricted for that
instance. Talk to your networking expert to enable access before you proceed with querying.
Once you get familiar with querying public data sets, consider switching to non-public data sets that require
providing credentials, granting access rights and configuring firewall rules. In many real-world scenarios you
will operate primarily with private data sets.
Managed Identity
Shared access signature
Managed Identity, also known as MSI, is a feature of Azure Active Directory (Azure AD) that provides instances
of Azure services - like Azure SQL Managed Instance - with an automatically managed identity in Azure AD. This
identity can be used to authorize requests for data access in non-public storage accounts.
Before accessing the data, the Azure storage administrator must grant permissions to Managed Identity to
access the data. Granting permissions to Managed Identity of the managed instance is done the same way as
granting permission to any other Azure AD user.
Creating a database scoped credential for managed identity authentication is straightforward:
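A minimal sketch, assuming no database master key exists yet and using the credential name MyCredential that is referenced by the external data source below:

--A database master key is required before creating a database scoped credential
CREATE MASTER KEY ENCRYPTION BY PASSWORD = '<strong password>';
GO
--Create the database scoped credential that authenticates as the instance's managed identity
CREATE DATABASE SCOPED CREDENTIAL [MyCredential]
WITH IDENTITY = 'Managed Identity';
GO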
When accessing non-public storage accounts, along with the location, you also need to reference a database
scoped credential with encapsulated authentication parameters:
--Create external data source pointing to the file path, and referencing database-scoped credential:
CREATE EXTERNAL DATA SOURCE DemoPrivateExternalDataSource
WITH (
LOCATION = 'abs://[email protected]/curated/covid-19/bing_covid-19_data/latest'
CREDENTIAL = [MyCredential]
)
SELECT TOP 10 *
FROM OPENROWSET(
BULK 'bing_covid-19_data.parquet',
DATA_SOURCE = 'DemoPublicExternalDataSource',
FORMAT = 'parquet'
) AS filerows
--Query all files with .parquet extension in folders matching name pattern:
SELECT TOP 10 *
FROM OPENROWSET(
BULK 'taxi/year=*/month=*/*.parquet',
DATA_SOURCE = 'NYCTaxiDemoDataSource',--You need to create the data source first
FORMAT = 'parquet'
) AS filerows
When querying multiple files or folders, all files accessed with the single OPENROWSET must have the same
structure (such as the same number of columns and data types). Folders can't be traversed recursively.
Schema inference
Automatic schema inference helps you quickly write queries and explore data when you don't know file
schemas. Schema inference only works with parquet format files.
While convenient, the cost is that inferred data types may be larger than the actual data types. This can lead to
poor query performance since there may not be enough information in the source files to ensure the
appropriate data type is used. For example, parquet files don't contain metadata about maximum character
column length, so the instance infers it as varchar(8000).
Use the sp_describe_first_result_set stored procedure to check the resulting data types of your query, such as in
the following example:
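A minimal sketch, reusing the DemoPublicExternalDataSource data source from the earlier example (the data source name is an assumption carried over from that example):

EXEC sp_describe_first_result_set N'
    SELECT TOP 10 *
    FROM OPENROWSET(
        BULK ''bing_covid-19_data.parquet'',
        DATA_SOURCE = ''DemoPublicExternalDataSource'',
        FORMAT = ''parquet''
    ) AS filerows';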
Once you know the data types, you can then specify them using the WITH clause to improve performance:
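For example, a sketch that pins down a few columns of the Bing COVID-19 Parquet file (the column names and sizes are assumptions about that data set; substitute the ones reported by sp_describe_first_result_set):

SELECT TOP 10 *
FROM OPENROWSET(
    BULK 'bing_covid-19_data.parquet',
    DATA_SOURCE = 'DemoPublicExternalDataSource',
    FORMAT = 'parquet'
)
WITH (
    [updated] date,
    [country_region] varchar(50) COLLATE Latin1_General_BIN2,
    [confirmed] int
) AS filerows;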
Since the schema of CSV files can't be automatically determined, explicitly specify columns using the WITH
clause:
SELECT TOP 10 *
FROM OPENROWSET(
BULK 'population/population.csv',
DATA_SOURCE = 'PopulationDemoDataSourceCSV',
FORMAT = 'CSV')
WITH (
[country_code] VARCHAR (5) COLLATE Latin1_General_BIN2,
[country_name] VARCHAR (100) COLLATE Latin1_General_BIN2,
[year] smallint,
[population] bigint
) AS filerows
When called without a parameter, the filepath function returns the file path that the row originates from. When
DATA_SOURCE is used in OPENROWSET, it returns the path relative to the DATA_SOURCE; otherwise, it returns
the full file path.
When called with a parameter, it returns part of the path that matches the wildcard on the position specified in
the parameter. For example, parameter value 1 would return part of the path that matches the first wildcard.
The Filepath function can also be used for filtering and aggregating rows:
SELECT
r.filepath() AS filepath
,r.filepath(1) AS [year]
,r.filepath(2) AS [month]
,COUNT_BIG(*) AS [rows]
FROM OPENROWSET(
BULK 'taxi/year=*/month=*/*.parquet',
DATA_SOURCE = 'NYCTaxiDemoDataSource',
FORMAT = 'parquet'
) AS r
WHERE
r.filepath(1) IN ('2017')
AND r.filepath(2) IN ('10', '11', '12')
GROUP BY
r.filepath()
,r.filepath(1)
,r.filepath(2)
ORDER BY
filepath;
It's also convenient to add columns with the file location data to a view using the Filepath function for easier
and more performant filtering. Using views can reduce the number of files and the amount of data the query on
top of the view needs to read and process when filtered by any of those columns:
CREATE VIEW TaxiRides AS
SELECT *
,filerows.filepath(1) AS [year]
,filerows.filepath(2) AS [month]
FROM OPENROWSET(
BULK 'taxi/year=*/month=*/*.parquet',
DATA_SOURCE = 'NYCTaxiDemoDataSource',
FORMAT = 'parquet'
) AS filerows
Views also enable reporting and analytic tools like Power BI to consume results of OPENROWSET .
External tables
External tables encapsulate access to files making the querying experience almost identical to querying local
relational data stored in user tables. Creating an external table requires the external data source and external file
format objects to exist:
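A minimal sketch of the two required objects plus the table itself, assuming the NYCTaxiDemoDataSource data source from the earlier examples already exists; the file format name and the column list are hypothetical and should be adapted to the actual files:

--Create a Parquet external file format (one per database is enough)
CREATE EXTERNAL FILE FORMAT DemoParquetFormat
WITH ( FORMAT_TYPE = PARQUET );
GO
--Create the external table on top of the folder structure
CREATE EXTERNAL TABLE tbl_TaxiRides (
    [vendor_id] VARCHAR(100) COLLATE Latin1_General_BIN2,
    [pickup_datetime] DATETIME2,
    [dropoff_datetime] DATETIME2,
    [fare_amount] FLOAT
)
WITH (
    LOCATION = 'taxi/year=*/month=*/*.parquet',
    DATA_SOURCE = NYCTaxiDemoDataSource,
    FILE_FORMAT = DemoParquetFormat
);
GO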
Once the external table is created, you can query it just like any other table:
SELECT TOP 10 *
FROM tbl_TaxiRides
Just like OPENROWSET , external tables allow querying multiple files and folders by using wildcards. Schema
inference and filepath/filename functions aren't supported with external tables.
Performance considerations
There's no hard limit to the number of files or the amount of data that can be queried, but query performance
depends on the amount of data, data format, the way data is organized, and complexity of queries and joins.
Querying partitioned data
Data is often organized in subfolders also called partitions. You can instruct managed instance to query only
particular folders and files. Doing so reduces the number of files and the amount of data the query needs to
read and process, resulting in better performance. This type of query optimization is known as partition pruning
or partition elimination. You can eliminate partitions from query execution by using metadata function filepath()
in the WHERE clause of the query.
The following sample query reads NYC Yellow Taxi data files only for the last three months of 2017:
SELECT
r.filepath() AS filepath
,r.filepath(1) AS [year]
,r.filepath(2) AS [month]
,COUNT_BIG(*) AS [rows]
FROM OPENROWSET(
BULK 'csv/taxi/yellow_tripdata_*-*.csv',
DATA_SOURCE = 'SqlOnDemandDemo',
FORMAT = 'CSV',
PARSER_VERSION = '2.0',
FIRSTROW = 2
)
WITH (
vendor_id INT
) AS [r]
WHERE
r.filepath(1) IN ('2017')
AND r.filepath(2) IN ('10', '11', '12')
GROUP BY
r.filepath()
,r.filepath(1)
,r.filepath(2)
ORDER BY
filepath;
If your stored data isn't partitioned, consider partitioning it to improve query performance.
Statistics
Collecting statistics on your external data is one of the most important things you can do for query optimization.
The more the instance knows about your data, the faster it can execute queries. The SQL engine query optimizer
is a cost-based optimizer. It compares the cost of various query plans, and then chooses the plan with the lowest
cost. In most cases, it chooses the plan that will execute the fastest.
Automatic creation of statistics
Azure SQL Managed Instance analyzes incoming user queries for missing statistics. If statistics are missing, the
query optimizer automatically creates statistics on individual columns in the query predicate or join condition to
improve cardinality estimates for the query plan. Automatic creation of statistics is done synchronously so you
may incur slightly degraded query performance if your columns are missing statistics. The time to create
statistics for a single column depends on the size of the files targeted.
OPENROWSET manual statistics
Single-column statistics for the OPENROWSET path can be created using the sp_create_openrowset_statistics
stored procedure, by passing the select query with a single column as a parameter:
EXEC sys.sp_create_openrowset_statistics N'
SELECT pickup_datetime
FROM OPENROWSET(
BULK ''abs://[email protected]/curated/covid-19/bing_covid-
19_data/latest/*.parquet'',
FORMAT = ''parquet'') AS filerows
'
By default, the instance uses 100% of the data provided in the dataset to create statistics. You can optionally
specify the sample size as a percentage using the TABLESAMPLE options. To create single-column statistics for
multiple columns, execute the stored procedure for each of the columns. You can't create multi-column statistics
for the OPENROWSET path.
To update existing statistics, drop them first using the sp_drop_openrowset_statistics stored procedure, and
then recreate them using the sp_create_openrowset_statistics :
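A minimal sketch, mirroring the creation example above (the same inner SELECT text is passed to both procedures):

EXEC sys.sp_drop_openrowset_statistics N'
SELECT pickup_datetime
FROM OPENROWSET(
    BULK ''abs://[email protected]/curated/covid-19/bing_covid-19_data/latest/*.parquet'',
    FORMAT = ''parquet'') AS filerows
';
GO
EXEC sys.sp_create_openrowset_statistics N'
SELECT pickup_datetime
FROM OPENROWSET(
    BULK ''abs://[email protected]/curated/covid-19/bing_covid-19_data/latest/*.parquet'',
    FORMAT = ''parquet'') AS filerows
';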
For external tables, single-column statistics are created with the CREATE STATISTICS command. The WITH
options are mandatory, and for the sample size, the allowed options are FULLSCAN and SAMPLE n percent. To
create single-column statistics for multiple columns, run the command once for each of the columns.
Multi-column statistics are not supported.
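A minimal sketch against the tbl_TaxiRides external table defined earlier (the statistics name and column are hypothetical):

CREATE STATISTICS stat_pickup_datetime
ON tbl_TaxiRides (pickup_datetime)
WITH FULLSCAN, NORECOMPUTE;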
Troubleshooting
Issues with query execution are typically caused by the managed instance not being able to access the file
location. The related error messages may report insufficient access rights, a non-existing location or file path, a
file being used by another process, or that the directory cannot be listed. In most cases this indicates that access
to the files is blocked by network traffic control policies or by a lack of access rights. Check the following:
Wrong or mistyped location path.
SAS key validity: the key could be expired (out of its validity period), contain a typo, or start with a question
mark.
SAS key permissions allowed: Read at a minimum, and List if wildcards are used.
Blocked inbound traffic on the storage account. Check Managing virtual network rules for Azure Storage for
more details and make sure that access from managed instance VNet is allowed.
Outbound traffic blocked on the managed instance using storage endpoint policy. Allow outbound traffic to
the storage account.
Managed Identity access rights: make sure the Azure AD service principal representing managed identity of
the instance has access rights granted on the storage account.
Next steps
To learn more about syntax options available with OPENROWSET, see OPENROWSET T-SQL.
For more information about creating external table in SQL Managed Instance, see CREATE EXTERNAL TABLE.
To learn more about creating external file format, see CREATE EXTERNAL FILE FORMAT
Overview of Azure SQL Managed Instance
management operations
Duration
Operations on the virtual cluster can vary in duration, but they are typically the longest-running steps.
The following table lists the long-running steps that can be triggered as part of a create, update, or delete
operation. The table also lists the durations that you can typically expect, based on existing service telemetry data:
STEP | DESCRIPTION | ESTIMATED DURATION
Virtual cluster creation | Creation is a synchronous step in instance management operations. | 90% of operations finish in 4 hours.
Virtual cluster resizing (expansion or shrinking) | Expansion is a synchronous step, while shrinking is performed asynchronously (without impact on the duration of instance management operations). | 90% of cluster expansions finish in less than 2.5 hours.
Virtual cluster deletion | Virtual cluster deletion can be synchronous or asynchronous. Asynchronous deletion is performed in the background and is triggered in the case of multiple virtual clusters inside the same subnet, when the last instance in a non-last cluster in the subnet is deleted. Synchronous deletion of the virtual cluster is triggered as part of the very last instance deletion in the subnet. | 90% of cluster deletions finish in 1.5 hours.
Seeding database files 1 | A synchronous step, triggered during compute (vCores) or storage scaling in the Business Critical service tier, as well as when changing the service tier from General Purpose to Business Critical (or vice versa). The duration of this operation is proportional to the total database size as well as current database activity (number of active transactions). Database activity when updating an instance can introduce significant variance to the total duration. | 90% of these operations execute at 220 GB/hour or higher.
1 When scaling compute (vCores) or storage in Business Critical service tier, or switching service tier from
General Purpose to Business Critical, seeding also includes Always On availability group seeding.
IMPORTANT
Scaling storage up or down in the General Purpose service tier consists of updating metadata and propagating the response
for the submitted request. It is a fast operation that completes in up to 5 minutes, without downtime or failover.
Category: Create

OPERATION | LONG-RUNNING SEGMENT | ESTIMATED DURATION
First instance in an empty subnet | Virtual cluster creation | 90% of operations finish in 4 hours.
First instance of another hardware or maintenance window in a non-empty subnet (for example, first Premium-series instance in a subnet with Standard-series instances) | Virtual cluster creation 1 | 90% of operations finish in 4 hours.
Subsequent instance creation within the non-empty subnet (2nd, 3rd, etc. instance) | Virtual cluster resizing | 90% of operations finish in 2.5 hours.
1 A separate virtual cluster is created for each hardware configuration and for each maintenance window
configuration.
Category: Update

OPERATION | LONG-RUNNING SEGMENT | ESTIMATED DURATION
Instance storage scaling up/down (General Purpose) | No long-running segment | 99% of operations finish in 5 minutes.
Instance storage scaling up/down (Business Critical) | Virtual cluster resizing; Always On availability group seeding | 90% of operations finish in 2.5 hours + time to seed all databases (220 GB/hour).
Instance compute (vCores) scaling up and down (General Purpose) | Virtual cluster resizing | 90% of operations finish in 2.5 hours.
Instance compute (vCores) scaling up and down (Business Critical) | Virtual cluster resizing; Always On availability group seeding | 90% of operations finish in 2.5 hours + time to seed all databases (220 GB/hour).
Instance service tier change (General Purpose to Business Critical and vice versa) | Virtual cluster resizing; Always On availability group seeding | 90% of operations finish in 2.5 hours + time to seed all databases (220 GB/hour).
Instance hardware or maintenance window change (General Purpose) | Virtual cluster creation or resizing 1 | 90% of operations finish in 4 hours (creation) or 2.5 hours (resizing).
Instance hardware or maintenance window change (Business Critical) | Virtual cluster creation or resizing 1; Always On availability group seeding | 90% of operations finish in 4 hours (creation) or 2.5 hours (resizing) + time to seed all databases (220 GB/hour).
1 Managed instance must be placed in a virtual cluster with the corresponding hardware and maintenance
window. If there is no such virtual cluster in the subnet, a new one must be created first to accommodate the
instance.
Category: Delete

OPERATION | LONG-RUNNING SEGMENT | ESTIMATED DURATION
Non-last instance deletion | Log tail backup for all databases | 90% of operations finish in up to 1 minute. 1
Last instance deletion | Log tail backup for all databases; Virtual cluster deletion | 90% of operations finish in up to 1.5 hours. 2
1 In case of multiple virtual clusters in the subnet, if the last instance in the virtual cluster is deleted, this
operation will immediately trigger asynchronous deletion of the virtual cluster.
2 Deletion of last instance in the subnet immediately triggers synchronous deletion of the virtual cluster.
IMPORTANT
As soon as the delete operation is triggered, billing for SQL Managed Instance stops. The duration of the delete operation
does not impact billing.
Instance availability
SQL Managed Instance is available during update operations , except for a short downtime caused by the
failover that happens at the end of the update. The downtime typically lasts up to 10 seconds, even in the case of
interrupted long-running transactions, thanks to accelerated database recovery.
NOTE
Scaling General Purpose managed instance storage will not cause a failover at the end of update.
SQL Managed Instance is not available to client applications during deployment and deletion operations.
IMPORTANT
It's not recommended to scale compute or storage of Azure SQL Managed Instance or to change the service tier at the
same time as long-running transactions (data import, data processing jobs, index rebuild, etc.). The failover of the
database at the end of the operation cancels all ongoing transactions.
Instance creation steps:

STEP NAME | STEP DESCRIPTION
Virtual cluster resizing / creation | Depending on the state of the subnet, the virtual cluster goes into creation or resizing.
New SQL instance startup | The SQL process is started on the deployed virtual cluster.

Instance update steps:

STEP NAME | STEP DESCRIPTION
Virtual cluster resizing / creation | Depending on the state of the subnet, the virtual cluster goes into creation or resizing.
New SQL instance startup | The SQL process is started on the deployed virtual cluster.
Seeding database files / attaching database files | Depending on the type of the update operation, either database seeding or attaching database files is performed.
Preparing failover and failover | After data has been seeded or database files reattached, the system is prepared for the failover. When everything is set, failover is performed with a short downtime.
Old SQL instance cleanup | Removing the old SQL process from the virtual cluster.

Instance deletion steps:

STEP NAME | STEP DESCRIPTION
SQL instance cleanup | Removing the SQL process from the virtual cluster.
Virtual cluster deletion | Depending on whether the instance being deleted is the last in the subnet, the virtual cluster is synchronously deleted as the last step.
NOTE
As a result of scaling instances, the underlying virtual cluster goes through a process of releasing unused capacity and
possible capacity defragmentation, which could impact instances that did not participate in the creation or scaling operations.
Next steps
To learn how to create your first managed instance, see Quickstart guide.
For a features and comparison list, see Common SQL features.
For more information about VNet configuration, see SQL Managed Instance VNet configuration.
For a quickstart that creates a managed instance and restores a database from a backup file, see Create a
managed instance.
For a tutorial about using Azure Database Migration Service for migration, see SQL Managed Instance
migration using Database Migration Service.
Monitoring Azure SQL Managed Instance
management operations
Overview
All management operations can be categorized as follows:
Instance deployment (new instance creation).
Instance update (changing instance properties, such as vCores or reserved storage).
Instance deletion.
Most management operations are long running. Therefore, you may need to monitor the status or
follow the progress of operation steps.
There are several ways to monitor managed instance management operations:
Resource group deployments
Activity log
Managed instance operations API
The following table compares management operation monitoring options:
OPTION | RETENTION | SUPPORTS CANCEL | CREATE | UPDATE | DELETE | CANCEL | STEPS
Resource group deployments | Infinite 1 | No 2 | Visible | Visible | Not visible | Visible | Not visible
2 Deployments that are scheduled but not yet started when the cancel action is performed will be canceled. The
ongoing deployment is not canceled when the resource group deployment is canceled. Since managed instance
deployment consists of one long-running step (from the Azure Resource Manager perspective), canceling the
resource group deployment will not cancel the managed instance deployment, and the operation will complete.
Managed Instance Operations - Cancel | Cancels the asynchronous operation on the managed instance.
Managed Instance Operations - List By Managed Instance | Gets a list of operations performed on the managed instance.
NOTE
Use API version 2020-02-02 to see the managed instance create operation in the list of operations. This is the default
version used in the Azure portal and the latest PowerShell and Azure CLI packages.
Monitor operations
In the Azure portal, use the managed instance Overview page to monitor managed instance operations.
For example, the Create operation is visible at the start of the creation process on the Overview page.
Select Ongoing operation to open the Ongoing operation page and view Create or Update operations.
You can also cancel operations from this page.
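Management operations can also be listed through the operations API; here's a minimal PowerShell sketch using the Az.Sql Get-AzSqlInstanceOperation cmdlet (the resource group and instance names are hypothetical):

# List all management operations recorded for a managed instance
Get-AzSqlInstanceOperation -ResourceGroupName "rg-mi" -InstanceName "mi-demo"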
NOTE
Create operations submitted through Azure portal, PowerShell, Azure CLI or other tooling using REST API version 2020-
02-02 can be canceled. REST API versions older than 2020-02-02 used to submit a create operation will start the instance
deployment, but the deployment won't be listed in the Operations API and can't be cancelled.
Next steps
To learn how to create your first managed instance, see Quickstart guide.
For a features and comparison list, see common SQL features.
For more information about VNet configuration, see SQL Managed Instance VNet configuration.
For a quickstart that creates a managed instance and restores a database from a backup file, see Create a
managed instance.
For a tutorial about using Azure Database Migration Service for migration, see SQL Managed Instance
migration using Database Migration Service.
Canceling Azure SQL Managed Instance
management operations
Overview
All management operations can be categorized as follows:
Instance deployment (new instance creation).
Instance update (changing instance properties, such as vCores or reserved storage).
Instance deletion.
You can monitor progress and status of management operations and cancel some of them if necessary.
The following table summarizes management operations, whether or not you can cancel them, and their typical
overall duration:
CATEGORY | OPERATION | CANCELABLE | ESTIMATED CANCEL DURATION
To cancel management operations using the Azure portal, follow these steps:
1. Go to the Azure portal
2. Go to the Overview blade of your SQL Managed Instance.
3. Select the Notification box next to the ongoing operation to open the Ongoing Operation page.
If the cancel request fails or the Cancel button is not active, the management operation has entered a
non-cancelable state and will finish shortly. The management operation continues its execution until it is
completed.
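Cancellation can also be requested programmatically; a minimal PowerShell sketch, assuming the Az.Sql Stop-AzSqlInstanceOperation cmdlet and hypothetical resource names (the operation name is the GUID returned by Get-AzSqlInstanceOperation):

# Cancel a specific ongoing management operation by its operation name (GUID)
Stop-AzSqlInstanceOperation -ResourceGroupName "rg-mi" -InstanceName "mi-demo" -Name "<operation-id-guid>"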
Next steps
To learn how to create your first managed instance, see Quickstart guide.
For a features and comparison list, see Common SQL features.
For more information about VNet configuration, see SQL Managed Instance VNet configuration.
For a quickstart that creates a managed instance and restores a database from a backup file, see Create a
managed instance.
For a tutorial about using Azure Database Migration Service for migration, see SQL Managed Instance
migration using Database Migration Service.
Managed API reference for Azure SQL Managed
Instance
IMPORTANT
The PowerShell Azure Resource Manager module is still supported by Azure SQL Database, but all future development is
for the Az.Sql module. For these cmdlets, see AzureRM.Sql. The arguments for the commands in the Az module and in the
AzureRM modules are substantially identical.
To create and manage managed instances with Azure PowerShell, use the following PowerShell cmdlets. If you
need to install or upgrade PowerShell, see Install the Azure PowerShell module.
TIP
For PowerShell example scripts, see Quickstart script: Create a managed instance using a PowerShell library.
CMDLET | DESCRIPTION
TIP
For an Azure CLI quickstart, see Working with SQL Managed Instance using Azure CLI.
CMDLET | DESCRIPTION
TIP
For quickstarts showing you how to configure and connect to a managed instance using SQL Server Management Studio
on Microsoft Windows, see Quickstart: Configure Azure VM to connect to Azure SQL Managed Instance and Quickstart:
Configure a point-to-site connection to Azure SQL Managed Instance from on-premises.
IMPORTANT
You cannot create or delete a managed instance using Transact-SQL.
Managed Instances - List By Resource Group | Returns a list of managed instances in a resource group.
Managed Instance Operations - List By Managed Instance | Gets a list of management operations performed on the managed instance.
Managed Instance Operations - Get | Gets the specific management operation performed on the managed instance.
Managed Instance Operations - Cancel | Cancels the specific management operation performed on the managed instance.
Next steps
To learn about migrating a SQL Server database to Azure, see Migrate to Azure SQL Database.
For information about supported features, see Features.
Machine Learning Services in Azure SQL Managed
Instance
Machine Learning Services is a feature of Azure SQL Managed Instance that provides in-database machine
learning, supporting both Python and R scripts. The feature includes Microsoft Python and R packages for high-
performance predictive analytics and machine learning. The relational data can be used in scripts through stored
procedures, T-SQL script containing Python or R statements, or Python or R code containing T-SQL.
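To run such scripts, external script execution must first be enabled on the instance. A minimal sketch of the command (as noted later in this guide, the instance restarts briefly when extensibility is toggled):

-- Enable external script execution (Python and R)
EXEC sp_configure 'external scripts enabled', 1;
RECONFIGURE WITH OVERRIDE;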
For details on how this command affects SQL Managed Instance resources, see Resource Governance.
Enable Machine Learning Services in a failover group
In a failover group, system databases are not replicated to the secondary instance (see Limitations of failover
groups for more information).
If the SQL Managed Instance you're using is part of a failover group, do the following:
Run the sp_configure and RECONFIGURE commands on each instance of the failover group to enable
Machine Learning Services.
Install the R/Python libraries on a user database rather than the master database.
Next steps
See the key differences from SQL Server Machine Learning Services.
To learn how to use Python in Machine Learning Services, see Run Python scripts.
To learn how to use R in Machine Learning Services, see Run R scripts.
For more information about machine learning on other SQL platforms, see the SQL machine learning
documentation.
Key differences between Machine Learning Services
in Azure SQL Managed Instance and SQL Server
This article describes the few, key differences in functionality between Machine Learning Services in Azure SQL
Managed Instance and SQL Server Machine Learning Services.
Language support
Machine Learning Services in both SQL Managed Instance and SQL Server support the Python and R
extensibility framework. A key difference in SQL Managed Instance is that only Python and R are supported, and
external languages such as Java cannot be added.
The initial versions of Python and R are different in SQL Managed Instance and SQL Server:
SQL Server 2017 3.5.2 and 3.7.2 (CU22 and later) 3.3.3 and 3.5.2 (CU22 and later)
SQL Server 2016 Not available 3.2.2 and 3.5.2 (SP2 CU14 and later)
* Beginning with SQL Server 2022, runtimes for R, Python, and Java are no longer shipped or installed with
SQL Server Setup. Instead, install your desired R and/or Python custom runtime(s) and packages. For more
information, see Install SQL Server 2022 Machine Learning Services (Python and R) on Windows.
Resource governance
In SQL Managed Instance, it's not possible to limit R resources through Resource Governor, and external
resource pools are not supported.
By default, R resources are set to a maximum of 20% of the available SQL Managed Instance resources when
extensibility is enabled. To change this default percentage, create an Azure support ticket at
https://azure.microsoft.com/support/create-ticket/.
Extensibility is enabled with the following SQL commands (SQL Managed Instance will restart and be
unavailable for a few seconds):
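A minimal sketch of the enable commands described here:

sp_configure 'external scripts enabled', 1;
RECONFIGURE WITH OVERRIDE;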
To disable extensibility and restore 100% of memory and CPU resources to SQL Server, use the following
commands:
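And a corresponding sketch of the disable commands:

sp_configure 'external scripts enabled', 0;
RECONFIGURE WITH OVERRIDE;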
The total resources available to SQL Managed Instance depend on which service tier you choose. For more
information, see Azure SQL Database purchasing models.
Insufficient memory error
Memory usage depends on how much is used in your R scripts and the number of parallel queries being
executed. If there is insufficient memory available for R, you'll get an error message. Common error messages
are:
Unable to communicate with the runtime for 'R' script for request id: *******. Please check the
requirements of 'R' runtime
'R' script error occurred during execution of 'sp_execute_external_script' with HRESULT 0x80004004. ...an
external script error occurred: "..could not allocate memory (0 Mb) in C function 'R_AllocStringBuffer'"
An external script error occurred: Error: cannot allocate vector of size.
If you receive one of these errors, you can resolve it by scaling your database to a higher service tier.
If you encounter out of memory errors in Azure SQL Managed Instance, review
sys.dm_os_out_of_memory_events.
Next steps
See the overview, Machine Learning Services in Azure SQL Managed Instance.
To learn how to use Python in Machine Learning Services, see Run Python scripts.
To learn how to use R in Machine Learning Services, see Run R scripts.
Get started with Azure SQL Managed Instance
auditing
IMPORTANT
Auditing for Azure SQL Database, Azure Synapse and Azure SQL Managed Instance is optimized for availability and
performance. During very high activity, or high network load, Azure SQL Database, Azure Synapse and Azure SQL
Managed Instance allow operations to proceed and may not record some audited events.
IMPORTANT
Use a storage account in the same region as the managed instance to avoid cross-region reads/writes.
If your storage account is behind a Virtual Network or a Firewall, please see Grant access from a virtual
network.
If you change the retention period from 0 (unlimited retention) to any other value, note that
retention only applies to logs written after the retention value was changed (logs written during the
period when retention was set to unlimited are preserved, even after retention is enabled).
IMPORTANT
Customers wishing to configure an immutable log store for their server- or database-level audit events should
follow the instructions provided by Azure Storage. (Please ensure you have selected Allow additional appends
when you configure the immutable blob storage.)
3. After you create the container for the audit logs, there are two ways to configure it as the target for the
audit logs: using T-SQL or using the SQL Server Management Studio (SSMS) UI:
Configure blob storage for audit logs using T-SQL:
a. In the containers list, click the newly created container and then click Container properties.
b. Copy the container URL by clicking the copy icon and save the URL (for example, in
Notepad) for future use. The container URL format should be
https://<StorageName>.blob.core.windows.net/<ContainerName>
c. Generate an Azure Storage SAS token to grant managed instance auditing access rights to
the storage account:
Navigate to the Azure storage account where you created the container in the
previous step.
Click on Shared access signature in the Storage Settings menu.
NOTE
Renew the token upon expiry to avoid audit failures.
IMPORTANT
Remove the question mark (“?”) character from the beginning of the token.
d. Connect to your managed instance via SQL Server Management Studio or any other
supported tool.
e. Execute the following T-SQL statement to create a new credential using the container
URL and SAS token that you created in the previous steps:
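A minimal sketch of that statement; the credential name must be the container URL, and the SAS token is pasted without the leading question mark (placeholders as used above):

CREATE CREDENTIAL [https://<StorageName>.blob.core.windows.net/<ContainerName>]
WITH IDENTITY = 'SHARED ACCESS SIGNATURE',
SECRET = '<SAS_token_without_leading_question_mark>';
GO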
f. Execute the following T-SQL statement to create a new server audit (choose your own audit
name, and use the container URL that you created in the previous steps). If not specified, the
RETENTION_DAYS default is 0 (unlimited retention):
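A minimal sketch, using a hypothetical audit name and the same container URL placeholder:

CREATE SERVER AUDIT [MyStorageAudit]
TO URL ( PATH = 'https://<StorageName>.blob.core.windows.net/<ContainerName>',
         RETENTION_DAYS = 30 );
GO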
NOTE
When using the SQL Server Management Studio UI to create the audit, a credential to the container with the
SAS key will be created automatically.
h. After you configure the blob container as target for the audit logs, create and enable a
server audit specification or database audit specification as you would for SQL Server:
Create server audit specification T-SQL guide
Create database audit specification T-SQL guide
4. Enable the server audit that you created in step 3:
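A minimal sketch, assuming the audit name used above:

ALTER SERVER AUDIT [MyStorageAudit]
WITH (STATE = ON);
GO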
Set up auditing for your server to Event Hubs or Azure Monitor logs
Audit logs from a managed instance can be sent to Azure Event Hubs or Azure Monitor logs. This section
describes how to configure this:
1. Navigate in the Azure portal to the managed instance.
2. Click on Diagnostic settings .
3. Click on Turn on diagnostics . If diagnostics is already enabled, +Add diagnostic setting will show
instead.
4. Select SQLSecurityAuditEvents in the list of logs.
5. Select a destination for the audit events: Event Hubs, Azure Monitor logs, or both. Configure for each
target the required parameters (e.g. Log Analytics workspace).
6. Click Save .
7. Connect to the managed instance using SQL Server Management Studio (SSMS) or any other
supported client.
8. Execute the following T-SQL statement to create a server audit:
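A minimal sketch; an audit that targets Event Hubs or Azure Monitor logs is created with the EXTERNAL_MONITOR target (the audit name is hypothetical):

CREATE SERVER AUDIT [MyMonitorAudit]
TO EXTERNAL_MONITOR;
GO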
9. Create and enable a server audit specification or database audit specification as you would for SQL
Server:
Create Server audit specification T-SQL guide
Create Database audit specification T-SQL guide
10. Enable the server audit created in step 8:
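A minimal sketch, assuming the audit name from step 8:

ALTER SERVER AUDIT [MyMonitorAudit]
WITH (STATE = ON);
GO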
NOTE
This article was recently updated to use the term Azure Monitor logs instead of Log Analytics. Log data is still stored in a
Log Analytics workspace and is still collected and analyzed by the same Log Analytics service. We are updating the
terminology to better reflect the role of logs in Azure Monitor. See Azure Monitor terminology changes for details.
Next steps
For a full list of audit log consumption methods, refer to Get started with Azure SQL Database auditing.
For more information about Azure programs that support standards compliance, see the Azure Trust Center,
where you can find the most current list of compliance certifications.
Use Azure SQL Managed Instance securely with
public endpoints
Scenarios
Azure SQL Managed Instance provides a private endpoint to allow connectivity from inside its virtual network.
The default option is to provide maximum isolation. However, there are scenarios where you need to provide a
public endpoint connection:
The managed instance must integrate with multi-tenant-only platform-as-a-service (PaaS) offerings.
You need higher throughput of data exchange than is possible when you're using a VPN.
Company policies prohibit PaaS inside corporate networks.
Next steps
Learn how to configure a public endpoint for managed instances: Configure public endpoint
Set up trust between instances with server trust
group (Azure SQL Managed Instance)
Set up group
A server trust group can be set up via Azure PowerShell or the Azure CLI, as shown in the sketch below.
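A minimal PowerShell sketch, assuming the Az.Sql New-AzSqlServerTrustGroup cmdlet and hypothetical instance and group names; the group members are the resource IDs of the managed instances:

# Collect the resource IDs of the managed instances that should trust each other
$mi1 = (Get-AzSqlInstance -ResourceGroupName "rg-mi" -Name "mi-primary").Id
$mi2 = (Get-AzSqlInstance -ResourceGroupName "rg-mi" -Name "mi-secondary").Id

# Create the server trust group; the trust scope enables cross-instance scenarios such as distributed transactions
New-AzSqlServerTrustGroup -ResourceGroupName "rg-mi" -Location "westeurope" -Name "trust-group-01" `
    -GroupMember $mi1, $mi2 -TrustScope "GlobalTransactions"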
To create a server trust group by using the Azure portal, follow these steps:
1. Go to the Azure portal.
2. Navigate to Azure SQL Managed Instance that you plan to add to a server trust group.
3. On the Security settings, select the SQL trust groups tab.
4. On the SQL trust groups configuration page, select the New Group icon.
5. On the SQL trust group create blade, set the Group name . It needs to be unique within the group's
subscription, resource group, and region. Trust scope defines the type of cross-instance scenario that is
enabled with the server trust group. The trust scope is fixed: all available functionalities are preselected and
this cannot be changed. Select the Subscription and Resource group to choose the managed instances
that will be members of the group.
Edit group
To edit a server trust group, follow these steps:
1. Go to Azure portal.
2. Navigate to a managed instance that belongs to the trust group.
3. On the Security settings select the SQL trust groups tab.
4. Select the trust group you want to edit.
5. Click Configure group .
Delete group
To delete a server trust group, follow these steps:
1. Go to the Azure portal.
2. Navigate to a managed instance that belongs to the SQL trust group.
3. On the Security settings select the SQL trust groups tab.
4. Select the trust group you want to delete.
NOTE
Deleting the SQL trust group might not immediately remove the trust between the two managed instances. Trust removal
can be enforced by invoking a failover of managed instances. Check the Known issues for the latest updates on this.
Limitations
The following limitations apply to server trust groups:
Group can contain only instances of Azure SQL Managed Instance.
Trust scope cannot be changed when a group is created or modified.
The name of the server trust group must be unique for its subscription, resource group and region.
Next steps
For more information about distributed transactions in Azure SQL Managed Instance, see Distributed
transactions.
For release updates and known issues state, see What's new?.
If you have feature requests, add them to the Managed Instance forum.
What is Windows Authentication for Azure Active
Directory principals on Azure SQL Managed
Instance?
Azure SQL Managed Instance is the intelligent, scalable cloud database service that combines the broadest SQL
Server database engine compatibility with the benefits of a fully managed and evergreen platform as a service.
Kerberos authentication for Azure Active Directory (Azure AD) enables Windows Authentication access to Azure
SQL Managed Instance. Windows Authentication for managed instances empowers customers to move existing
services to the cloud while maintaining a seamless user experience and provides the basis for infrastructure
modernization.
Next steps
Learn more about implementing Windows Authentication for Azure AD principals on Azure SQL Managed
Instance:
How Windows Authentication for Azure SQL Managed Instance is implemented with Azure Active Directory
and Kerberos (Preview)
How to set up Windows Authentication for Azure SQL Managed Instance using Azure Active Directory and
Kerberos (Preview)
How Windows Authentication for Azure SQL
Managed Instance is implemented with Azure
Active Directory and Kerberos
Windows Authentication for Azure AD principals on Azure SQL Managed Instance enables customers to move
existing services to the cloud while maintaining a seamless user experience and provides the basis for security
infrastructure modernization. To enable Windows Authentication for Azure Active Directory (Azure AD)
principals, you will turn your Azure AD tenant into an independent Kerberos realm and create an incoming trust
in the customer domain.
This configuration allows users in the customer domain to access resources in your Azure AD tenant. It will not
allow users in the Azure AD tenant to access resources in the customer domain.
The following diagram gives an overview of how Windows Authentication is implemented for a managed
instance using Azure AD and Kerberos:
Next steps
Learn more about enabling Windows Authentication for Azure AD principals on Azure SQL Managed Instance:
How to set up Windows Authentication for Azure Active Directory with the modern interactive flow
How to set up Windows Authentication for Azure AD with the incoming trust-based flow
Configure Azure SQL Managed Instance for Windows Authentication for Azure Active Directory
Troubleshoot Windows Authentication for Azure AD principals on Azure SQL Managed Instance
How to set up Windows Authentication for Azure
SQL Managed Instance using Azure Active
Directory and Kerberos
This article gives an overview of how to set up infrastructure and managed instances to implement Windows
Authentication for Azure AD principals on Azure SQL Managed Instance.
There are two phases to set up Windows Authentication for Azure SQL Managed Instance using Azure Active
Directory (Azure AD) and Kerberos.
One-time infrastructure setup.
Synchronize Active Directory (AD) and Azure AD, if this hasn't already been done.
Enable the modern interactive authentication flow, when available. The modern interactive flow is
recommended for organizations with Azure AD joined or Hybrid AD joined clients running Windows
10 20H1 / Windows Server 2022 and higher where clients are joined to Azure AD or Hybrid AD.
Set up the incoming trust-based authentication flow. This is recommended for customers who can’t
use the modern interactive flow, but who have AD joined clients running Windows 10 / Windows
Server 2012 and higher.
Configuration of Azure SQL Managed Instance.
Create a system assigned service principal for each managed instance.
Modern interactive authentication flow
The following prerequisites are required to implement the modern interactive authentication flow:
PREREQUISITE | DESCRIPTION
Clients must be joined to Azure AD or Hybrid Azure AD. | You can determine if this prerequisite is met by running the dsregcmd command: dsregcmd.exe /status
Application must connect to the managed instance via an interactive session. | This supports applications such as SQL Server Management Studio (SSMS) and web applications, but won't work for applications that run as a service.
Azure AD tenant. |
Azure AD Connect installed. | Hybrid environments where identities exist both in Azure AD and AD.
See How to set up Windows Authentication for Azure Active Directory with the modern interactive flow
(Preview) for steps to enable this authentication flow.
Incoming trust-based authentication flow
The following prerequisites are required to implement the incoming trust-based authentication flow:
PREREQUISITE | DESCRIPTION
Clients must be joined to AD. The domain must have a functional level of Windows Server 2012 or higher. | You can determine if the client is joined to AD by running the dsregcmd command: dsregcmd.exe /status
Azure AD Hybrid Authentication Management Module. | This PowerShell module provides management features for on-premises setup.
Azure tenant. |
Azure AD Connect installed. | Hybrid environments where identities exist both in Azure AD and AD.
See How to set up Windows Authentication for Azure Active Directory with the incoming trust based flow
(Preview) for instructions on enabling this authentication flow.
The following prerequisites are required to configure the managed instance:
PREREQUISITE | DESCRIPTION
Az.Sql PowerShell module | This PowerShell module provides management cmdlets for Azure SQL resources. Install this module by running the following PowerShell command: Install-Module -Name Az.Sql
Azure Active Directory PowerShell Module | This module provides management cmdlets for Azure AD administrative tasks such as user and service principal management. Install this module by running the following PowerShell command: Install-Module -Name AzureAD
A managed instance | You may create a new managed instance or use an existing managed instance.
Limitations
The following limitations apply to Windows Authentication for Azure AD principals on Azure SQL Managed
Instance:
Not available for Linux clients
Windows Authentication for Azure AD principals is currently supported only for client machines running
Windows.
Azure AD cached logon
Windows limits how often it connects to Azure AD, so there is a potential for user accounts not to have a
refreshed Kerberos Ticket Granting Ticket (TGT) within 4 hours of an upgrade or fresh deployment of a client
machine. User accounts that do not have a refreshed TGT will see failed ticket requests from Azure AD.
As an administrator, you can trigger an online logon immediately to handle upgrade scenarios by running the
following command on the client machine, then locking and unlocking the user session to get a refreshed TGT:
dsregcmd.exe /RefreshPrt
Next steps
Learn more about implementing Windows Authentication for Azure AD principals on Azure SQL Managed
Instance:
What is Windows Authentication for Azure Active Directory principals on Azure SQL Managed Instance?
(Preview)
How Windows Authentication for Azure SQL Managed Instance is implemented with Azure Active Directory
and Kerberos (Preview)
How to set up Windows Authentication for Azure Active Directory with the modern interactive flow (Preview)
How to set up Windows Authentication for Azure AD with the incoming trust-based flow (Preview)
Configure Azure SQL Managed Instance for Windows Authentication for Azure Active Directory (Preview)
How to set up Windows Authentication for Azure
Active Directory with the modern interactive flow
This article describes how to implement the modern interactive authentication flow to allow enlightened clients
running Windows 10 20H1, Windows Server 2022, or a higher version of Windows to authenticate to Azure
SQL Managed Instance using Windows Authentication. Clients must be joined to Azure Active Directory (Azure
AD) or Hybrid Azure AD.
Enabling the modern interactive authentication flow is one step in setting up Windows Authentication for Azure
SQL Managed Instance using Azure Active Directory and Kerberos (Preview). The incoming trust-based flow
(Preview) is available for AD joined clients running Windows 10 / Windows Server 2012 and higher.
With this preview, Azure AD becomes its own independent Kerberos realm. Windows 10 21H1 clients are already
enlightened and will be redirected to Azure AD Kerberos to request a Kerberos ticket. The capability for
clients to access Azure AD Kerberos is switched off by default and can be enabled by modifying group policy.
Group policy can be used to deploy this feature in a staged manner by choosing specific clients you want to pilot
on and then expanding it to all the clients across your environment.
Prerequisites
There is no AD to Azure AD set up required for enabling software running on Azure AD Joined VMs to access
Azure SQL Managed Instance using Windows Authentication. The following prerequisites are required to
implement the modern interactive authentication flow:
PREREQUISITE | DESCRIPTION
Clients must be joined to Azure AD or Hybrid Azure AD. | You can determine if this prerequisite is met by running the dsregcmd command: dsregcmd.exe /status
Application must connect to the managed instance via an interactive session. | This supports applications such as SQL Server Management Studio (SSMS) and web applications, but won't work for applications that run as a service.
Azure AD tenant. |
Azure AD Connect installed. | Hybrid environments where identities exist both in Azure AD and AD.
dsregcmd.exe /RefreshPrt
Next steps
Learn more about implementing Windows Authentication for Azure AD principals on Azure SQL Managed
Instance:
What is Windows Authentication for Azure Active Directory principals on Azure SQL Managed Instance?
(Preview)
How Windows Authentication for Azure SQL Managed Instance is implemented with Azure Active Directory
and Kerberos (Preview)
How to set up Windows Authentication for Azure AD with the incoming trust-based flow (Preview)
Configure Azure SQL Managed Instance for Windows Authentication for Azure Active Directory (Preview)
Troubleshoot Windows Authentication for Azure AD principals on Azure SQL Managed Instance
How to set up Windows Authentication for Azure
AD with the incoming trust-based flow
This article describes how to implement the incoming trust-based authentication flow to allow Active Directory
(AD) joined clients running Windows 10, Windows Server 2012, or higher versions of Windows to authenticate
to an Azure SQL Managed Instance using Windows Authentication. This article also shares steps to rotate a
Kerberos Key for your Azure Active Directory (Azure AD) service account and Trusted Domain Object, and steps
to remove a Trusted Domain Object and all Kerberos settings, if desired.
Enabling the incoming trust-based authentication flow is one step in setting up Windows Authentication for
Azure SQL Managed Instance using Azure Active Directory and Kerberos (Preview). The modern interactive flow
(Preview) is available for enlightened clients running Windows 10 20H1, Windows Server 2022, or a higher
version of Windows.
Permissions
To complete the steps outlined in this article, you will need:
An on-premises Active Directory administrator username and password.
Azure AD global administrator account username and password.
Prerequisites
To implement the incoming trust-based authentication flow, first ensure that the following prerequisites have
been met:
PREREQUISITE | DESCRIPTION
Clients must be joined to AD. The domain must have a functional level of Windows Server 2012 or higher. | You can determine if the client is joined to AD by running the dsregcmd command: dsregcmd.exe /status
Azure AD Hybrid Authentication Management Module. | This PowerShell module provides management features for on-premises setup.
Azure tenant. |
Azure AD Connect installed. | Hybrid environments where identities exist both in Azure AD and AD.
[Net.ServicePointManager]::SecurityProtocol = [Net.SecurityProtocolType]::Tls12
NOTE
If you wish to use your current Windows login account for your on-premises Active Directory access, you can skip
the step where credentials are assigned to the $domainCred parameter. If you take this approach, do not include
the -DomainCredential parameter in the PowerShell commands following this step.
$domain = "your on-premesis domain name, for example contoso.com"
$domainCred = Get-Credential
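With these variables set, the current Azure AD Kerberos configuration can be inspected. A minimal sketch, assuming the Get-AzureAdKerberosServer cmdlet from the Azure AD Hybrid Authentication Management Module and a hypothetical global administrator UPN:

# View the current Azure AD Kerberos server settings for the on-premises domain
Get-AzureAdKerberosServer -Domain $domain -DomainCredential $domainCred `
    -UserPrincipalName "[email protected]"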
If this is the first time you're calling an Azure AD Kerberos command, you will be prompted to sign in for Azure AD cloud access.
Enter the password for your Azure AD global administrator account.
If your organization uses other modern authentication methods such as Azure AD Multi-Factor Authentication (MFA) or smart card, follow the sign-in instructions as requested.
If this is the first time you're configuring Azure AD Kerberos settings, the Get-AzureAdKerberosServer
cmdlet will display empty information, as in the following sample output:
ID :
UserAccount :
ComputerAccount :
DisplayName :
DomainDnsName :
KeyVersion :
KeyUpdatedOn :
KeyUpdatedFrom :
CloudDisplayName :
CloudDomainDnsName :
CloudId :
CloudKeyVersion :
CloudKeyUpdatedOn :
CloudTrustDisplay :
If your domain already supports FIDO authentication, the Get-AzureAdKerberosServer cmdlet will display
Azure AD Service account information, as in the following sample output. Note that the
CloudTrustDisplay field returns an empty value.
ID : 25614
UserAccount : CN=krbtgt-AzureAD, CN=Users, DC=aadsqlmi, DC=net
ComputerAccount : CN=AzureADKerberos, OU=Domain Controllers, DC=aadsqlmi, DC=net
DisplayName : krbtgt_25614
DomainDnsName : aadsqlmi.net
KeyVersion : 53325
KeyUpdatedOn : 2/24/2022 9:03:15 AM
KeyUpdatedFrom : ds-aad-auth-dem.aadsqlmi.net
CloudDisplayName : krbtgt_25614
CloudDomainDnsName : aadsqlmi.net
CloudId : 25614
CloudKeyVersion : 53325
CloudKeyUpdatedOn : 2/24/2022 9:03:15 AM
CloudTrustDisplay :
After creating the Trusted Domain Object, you can check the updated Kerberos Settings using the
Get-AzureAdKerberosServer PowerShell cmdlet, as shown in the previous step. If the
Set-AzureAdKerberosServer cmdlet has been run successfully with the -SetupCloudTrust parameter, the
CloudTrustDisplay field should now return Microsoft.AzureAD.Kdc.Service.TrustDisplay , as in the
following sample output:
ID : 25614
UserAccount : CN=krbtgt-AzureAD, CN=Users, DC=aadsqlmi, DC=net
ComputerAccount : CN=AzureADKerberos, OU=Domain Controllers, DC=aadsqlmi, DC=net
DisplayName : krbtgt_25614
DomainDnsName : aadsqlmi.net
KeyVersion : 53325
KeyUpdatedOn : 2/24/2022 9:03:15 AM
KeyUpdatedFrom : ds-aad-auth-dem.aadsqlmi.net
CloudDisplayName : krbtgt_25614
CloudDomainDnsName : aadsqlmi.net
CloudId : 25614
CloudKeyVersion : 53325
CloudKeyUpdatedOn : 2/24/2022 9:03:15 AM
CloudTrustDisplay : Microsoft.AzureAD.Kdc.Service.TrustDisplay
Once the key is rotated, it takes several hours to propagate the changed key between the Kerberos KDC servers.
Due to this key distribution timing, you are limited to rotating the key once within 24 hours. If you need to rotate the
key again within 24 hours for any reason, for example just after creating the Trusted Domain Object, you can add the
-Force parameter to the key rotation command.
Removing the Trusted Domain Object removes only that object. If your domain supports FIDO authentication, you
can remove the Trusted Domain Object while maintaining the Azure AD service account required for the FIDO
authentication service.
Next steps
Learn more about implementing Windows Authentication for Azure AD principals on Azure SQL Managed
Instance:
Configure Azure SQL Managed Instance for Windows Authentication for Azure Active Directory (Preview)
What is Windows Authentication for Azure Active Directory principals on Azure SQL Managed Instance?
(Preview)
How to set up Windows Authentication for Azure SQL Managed Instance using Azure Active Directory and
Kerberos (Preview)
Configure Azure SQL Managed Instance for
Windows Authentication for Azure Active Directory
9/13/2022 • 2 minutes to read • Edit Online
This article describes how to configure a managed instance to support Windows Authentication for Azure AD
principals. The steps to set up Azure SQL Managed Instance are the same for both the incoming trust-based
authentication flow and the modern interactive authentication flow.
Prerequisites
The following prerequisites are required to configure a managed instance for Windows Authentication for Azure
AD principals:
Az.Sql PowerShell module. This PowerShell module provides management cmdlets for Azure SQL resources.
Azure Active Directory PowerShell module. This module provides management cmdlets for Azure AD administrative tasks such as user and service principal management.
A managed instance. You may create a new managed instance or use an existing managed instance. You must enable Azure AD authentication on the managed instance.
5. Select the application with the display name matching your managed instance. The name will be in the
format: <managedinstancename> principal .
6. Select API permissions .
7. Select Grant admin consent .
Next steps
Learn more about implementing Windows Authentication for Azure AD principals on Azure SQL Managed
Instance:
Troubleshoot Windows Authentication for Azure AD principals on Azure SQL Managed Instance
What is Windows Authentication for Azure Active Directory principals on Azure SQL Managed Instance?
(Preview)
How to set up Windows Authentication for Azure SQL Managed Instance using Azure Active Directory and
Kerberos (Preview)
Run a trace against Azure SQL Managed Instance
using Windows Authentication for Azure Active
Directory principals
9/13/2022 • 3 minutes to read • Edit Online
This article shows how to connect and run a trace against Azure SQL Managed Instance using Windows
Authentication for Azure Active Directory (Azure AD) principals. Windows authentication provides a convenient
way for customers to connect to a managed instance, especially for database administrators and developers
who are accustomed to launching SQL Server Management Studio (SSMS) with their Windows credentials.
This article shares two options to run a trace against a managed instance: you can trace with extended events or
with SQL Server Profiler. While SQL Server Profiler may still be used, the trace functionality used by SQL Server
Profiler is deprecated and will be removed in a future version of Microsoft SQL Server.
Prerequisites
To use Windows Authentication to connect to and run a trace against a managed instance, you must first meet
the following prerequisites:
Set up Windows Authentication for Azure SQL Managed Instance using Azure Active Directory and Kerberos
(Preview).
Install SQL Server Management Studio (SSMS) on the client that is connecting to the managed instance. The
SSMS installation includes SQL Server Profiler and built-in components to create and run extended events
traces.
Enable tooling on your client machine to connect to the managed instance. This may be done by any of the
following:
Configure an Azure VM to connect to Azure SQL Managed Instance.
Configure a point-to-site connection to Azure SQL Managed Instance from on-premises.
Configure a public endpoint in Azure SQL Managed Instance.
To create or modify extended events sessions, ensure that your account has the server permission of ALTER
ANY EVENT SESSION on the managed instance.
To create or modify traces in SQL Server Profiler, ensure that your account has the server permission of
ALTER TRACE on the managed instance.
If you have not yet enabled Windows authentication for Azure AD principals against your managed instance,
you may run a trace against a managed instance using an Azure AD Authentication option, including:
'Azure Active Directory - Password'
'Azure Active Directory - Universal with MFA'
'Azure Active Directory – Integrated'
5. Select Connect .
Now that Object Explorer is connected, you can create and run an extended events trace. Follow the steps in
Quick Start: Extended events in SQL Server to learn how to create, test, and display the results of an extended
events session.
4. Select Connect .
5. Follow the steps in Create a Trace (SQL Server Profiler) to configure the trace.
6. Select Run after configuring the trace.
Next steps
Learn more about Windows Authentication for Azure AD principals with Azure SQL Managed Instance:
What is Windows Authentication for Azure Active Directory principals on Azure SQL Managed Instance?
(Preview)
How to set up Windows Authentication for Azure SQL Managed Instance using Azure Active Directory and
Kerberos (Preview)
How Windows Authentication for Azure SQL Managed Instance is implemented with Azure Active Directory
and Kerberos (Preview)
Extended Events
Troubleshoot Windows Authentication for Azure AD
principals on Azure SQL Managed Instance
9/13/2022 • 2 minutes to read • Edit Online
This article contains troubleshooting steps for use when implementing Windows Authentication for Azure AD
principals.
The klist get MSSQLSvc command should return a ticket from the kerberos.microsoftonline.com realm with a
Service Principal Name (SPN) to MSSQLSvc/<miname>.<dnszone>.database.windows.net:1433 .
Next steps
Learn more about implementing Windows Authentication for Azure AD principals on Azure SQL Managed
Instance:
What is Windows Authentication for Azure Active Directory principals on Azure SQL Managed Instance?
(Preview)
How to set up Windows Authentication for Azure SQL Managed Instance using Azure Active Directory and
Kerberos (Preview)
How Windows Authentication for Azure SQL Managed Instance is implemented with Azure Active Directory
and Kerberos (Preview)
How to set up Windows Authentication for Azure Active Directory with the modern interactive flow (Preview)
How to set up Windows Authentication for Azure AD with the incoming trust-based flow (Preview)
Azure SQL Managed Instance content reference
9/13/2022 • 3 minutes to read • Edit Online
Load data
SQL Server to Azure SQL Managed Instance Guide: Learn about the recommended migration process and
tools for migration to Azure SQL Managed Instance.
Migrate TDE cert to Azure SQL Managed Instance: If your SQL Server database is protected with transparent
data encryption (TDE), you would need to migrate the certificate that SQL Managed Instance can use to
decrypt the backup that you want to restore in Azure.
Import a DB from a BACPAC
Export a DB to BACPAC
Load data with BCP
Load data with Azure Data Factory
Network configuration
Determine subnet size: Since the subnet cannot be resized after SQL Managed Instance is deployed, you need
to calculate what IP range of addresses is required for the number and types of managed instances you plan
to deploy to the subnet.
Create a new VNet and subnet: Configure the virtual network and subnet according to the network
requirements.
Configure an existing VNet and subnet: Verify network requirements and configure your existing virtual
network and subnet to deploy SQL Managed Instance.
Configure service endpoint policies for Azure Storage (Preview): Secure your subnet against erroneous or
malicious data exfiltration into unauthorized Azure Storage accounts.
Configure custom DNS: Configure custom DNS to grant external resource access to custom domains from
SQL Managed Instance via a linked server or db mail profiles.
Find the management endpoint IP address: Determine the public endpoint that SQL Managed Instance is
using for management purposes.
Verify built-in firewall protection: Verify that SQL Managed Instance allows traffic only on necessary ports,
and other built-in firewall rules.
Connect applications: Learn about different patterns for connecting the applications to SQL Managed
Instance.
Feature configuration
Configure Azure AD auth
Configure conditional access
Multi-factor Azure AD auth
Configure multi-factor auth
Configure auto-failover group to automatically failover all databases on an instance to a secondary instance
in another region in the event of a disaster.
Configure a temporal retention policy
Configure In-Memory OLTP
Configure Azure Automation
Transactional replication enables you to replicate your data between managed instances, or from SQL Server
on-premises to SQL Managed Instance, and vice versa.
Configure threat detection – threat detection is a built-in Azure SQL Managed Instance feature that detects
various potential attacks such as SQL injection or access from suspicious locations.
Creating alerts enables you to set up alerts on monitored metrics such as CPU utilization, storage space
consumption, IOPS and others for SQL Managed Instance.
Transparent Data Encryption
Configure TDE with BYOK
Rotate TDE BYOK keys
Remove a TDE protector
Managed Instance link feature
Prepare environment for link feature
Replicate database with link feature in SSMS
Replicate database with Azure SQL Managed Instance link feature with T-SQL and PowerShell scripts
Failover database with link feature in SSMS - Azure SQL Managed Instance
Failover (migrate) database with Azure SQL Managed Instance link feature with T-SQL and PowerShell scripts
Best practices with link feature for Azure SQL Managed Instance
Operations
User-initiated manual failover on SQL Managed Instance
Develop applications
Connectivity
Use Spark Connector
Authenticate an app
Use batching for better performance
Connectivity guidance
DNS aliases
Set up a DNS alias by using PowerShell
Ports - ADO.NET
C and C++
Excel
Design applications
Design for disaster recovery
Design for elastic pools
Design for app upgrades
Design Multi-tenant SaaS applications
SaaS design patterns
SaaS video indexer
SaaS app security
Next steps
Get started by deploying SQL Managed Instance.
Connect your application to Azure SQL Managed
Instance
9/13/2022 • 8 minutes to read • Edit Online
IMPORTANT
You can also enable data access to your managed instance from outside a virtual network. You can access your
managed instance from multi-tenant Azure services like Power BI or Azure App Service, or from an on-premises network that
isn't connected to a VPN, by using the public endpoint on a managed instance. You will need to enable the public endpoint on
the managed instance and allow public endpoint traffic in the network security group associated with the managed
instance subnet. See more important details in Configure public endpoint in Azure SQL Managed Instance.
IMPORTANT
On 9/22/2020, support for global virtual network peering for newly created virtual clusters was announced. This means that
global virtual network peering is supported for SQL managed instances created in empty subnets after the
announcement date, as well as for all subsequent managed instances created in those subnets. For all other SQL
managed instances, peering support is limited to the networks in the same region due to the constraints of global virtual
network peering. See also the relevant section of the Azure Virtual Networks frequently asked questions article for more
details. To use global virtual network peering for SQL managed instances from virtual clusters created before
the announcement date, consider configuring a maintenance window on the instances, as it will move the instances into
new virtual clusters that support global virtual network peering.
Once you have the basic infrastructure set up, you need to modify some settings so that the VPN gateway can
see the IP addresses in the virtual network that hosts SQL Managed Instance. To do so, make the following very
specific changes under the Peering settings .
1. In the virtual network that hosts the VPN gateway, go to Peerings , go to the peered virtual network
connection for SQL Managed Instance, and then click Allow Gateway Transit .
2. In the virtual network that hosts SQL Managed Instance, go to Peerings , go to the peered virtual network
connection for the VPN gateway, and then click Use remote gateways .
As shown in this image, there are two entries for each virtual network involved and a third entry for the
VPN endpoint that is configured in the portal.
Another way to check the routes is via the following command. The output shows the routes to the
various subnets:
C:\ >route print -4
===========================================================================
Interface List
14...54 ee 75 67 6b 39 ......Intel(R) Ethernet Connection (3) I218-LM
57...........................rndatavnet
18...94 65 9c 7d e5 ce ......Intel(R) Dual Band Wireless-AC 7265
1...........................Software Loopback Interface 1
===========================================================================
If you're using virtual network peering, ensure that you have followed the instructions for setting Allow
Gateway Transit and Use Remote Gateways.
If you're using virtual network peering to connect an Azure App Service hosted application, and the SQL
Managed Instance virtual network has a public IP address range, make sure that your hosted application
settings allow your outbound traffic to be routed to public IP networks. Follow the instructions in
Regional virtual network integration.
Next steps
For information about SQL Managed Instance, see What is SQL Managed Instance?.
For a tutorial showing you how to create a new managed instance, see Create a managed instance.
Automate management tasks using SQL Agent jobs
in Azure SQL Managed Instance
9/13/2022 • 8 minutes to read • Edit Online
NOTE
SQL Agent is not available in Azure SQL Database or Azure Synapse Analytics. Instead, we recommend Job automation
with Elastic Jobs.
Job steps
SQL Agent job steps are sequences of actions that SQL Agent should execute. Every step defines the next step to
execute if the step succeeds or fails, and the number of retries in case of failure.
SQL Agent enables you to create different types of job steps, such as Transact-SQL job steps that execute a single
Transact-SQL batch against the database, OS command/PowerShell steps that execute a custom OS script, SSIS job
steps that load data using the SSIS runtime, and replication steps that publish changes from your database to other
databases.
NOTE
For more information on leveraging the Azure SSIS Integration Runtime with SSISDB hosted by SQL Managed Instance,
see Use Azure SQL Managed Instance with SQL Server Integration Services (SSIS) in Azure Data Factory.
Transactional replication can replicate the changes from your tables into other databases in SQL Managed
Instance, Azure SQL Database, or SQL Server. For information, see Configure replication in Azure SQL Managed
Instance.
Other types of job steps are not currently supported in SQL Managed Instance, including Merge replication and
Queue Reader steps.
Job schedules
A schedule specifies when a job runs. More than one job can run on the same schedule, and more than one
schedule can apply to the same job.
A schedule can define the following conditions for the time when a job runs:
Whenever SQL Server Agent starts. Job is activated after every failover.
One time, at a specific date and time, which is useful for delayed execution of some job.
On a recurring schedule.
For more information on scheduling a SQL Agent job, see Schedule a Job.
NOTE
Azure SQL Managed Instance currently does not enable you to start a job when the CPU is idle.
Job notifications
SQL Agent jobs enable you to get notifications when the job finishes successfully or fails. You can receive
notifications via email.
If it isn't already enabled, first you would need to configure the Database Mail feature on SQL Managed Instance:
EXEC sp_configure 'show advanced options', 1;
GO
RECONFIGURE;
GO
EXEC sp_configure 'Database Mail XPs', 1;
GO
RECONFIGURE;
GO
As an example exercise, set up the email account that will be used to send the email notifications, and assign the
account to an email profile called AzureManagedInstance_dbmail_profile . To send email using SQL Agent jobs in
SQL Managed Instance, the profile must be named AzureManagedInstance_dbmail_profile ; otherwise, SQL Managed
Instance will be unable to send emails via SQL Agent.
NOTE
For the mail server, we recommend you use authenticated SMTP relay services to send email. These relay services typically
connect through TCP ports 25 or 587 for connections over TLS, or port 465 for SSL connections, however Database Mail
can be configured to use any port. These ports require a new outbound rule in your managed instance's network security
group. These services are used to maintain IP and domain reputation to minimize the possibility that external domains
reject your messages or put them to the SPAM folder. Consider an authenticated SMTP relay service already in your on-
premises servers. In Azure, SendGrid is one such SMTP relay service, but there are others.
Use the following sample script to create a Database Mail account and profile, then associate them together:
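For example, a minimal script might look like the following; the SMTP server, port, credentials, and email addresses are placeholder values that you replace with your own:
EXEC msdb.dbo.sysmail_add_account_sp
    @account_name = N'AzureManagedInstance_dbmail_account',  -- placeholder account name
    @email_address = N'[email protected]',             -- placeholder sender address
    @display_name = N'Managed instance notifications',
    @mailserver_name = N'smtp.example.com',                   -- placeholder SMTP relay host
    @port = 587,
    @enable_ssl = 1,
    @username = N'smtp-user',                                 -- placeholder SMTP credentials
    @password = N'smtp-password';

-- The profile must be named AzureManagedInstance_dbmail_profile.
EXEC msdb.dbo.sysmail_add_profile_sp
    @profile_name = N'AzureManagedInstance_dbmail_profile',
    @description = N'Profile used for SQL Agent job notifications';

-- Associate the account with the profile.
EXEC msdb.dbo.sysmail_add_profileaccount_sp
    @profile_name = N'AzureManagedInstance_dbmail_profile',
    @account_name = N'AzureManagedInstance_dbmail_account',
    @sequence_number = 1;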
Test the Database Mail configuration via T-SQL using the sp_send_dbmail system stored procedure:
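A minimal test might look like the following, where the recipient address is a placeholder:
EXEC msdb.dbo.sp_send_dbmail
    @profile_name = N'AzureManagedInstance_dbmail_profile',
    @recipients = N'[email protected]',   -- placeholder recipient
    @subject = N'Database Mail test',
    @body = N'This is a test message sent from Azure SQL Managed Instance.';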
You can notify the operator that something happened with your SQL Agent jobs. An operator defines contact
information for an individual responsible for the maintenance of one or more instances in SQL Managed
Instance. Sometimes, operator responsibilities are assigned to one individual.
In systems with multiple instances in SQL Managed Instance or SQL Server, many individuals can share operator
responsibilities. An operator does not contain security information, and does not define a security principal.
Ideally, an operator is not an individual whose responsibilities may change, but an email distribution group.
You can create operators using SQL Server Management Studio (SSMS) or the Transact-SQL script shown in the
following example:
EXEC msdb.dbo.sp_add_operator
@name=N'AzureSQLTeam',
@enabled=1,
@email_address=N'[email protected]';
Confirm the email's success or failure via the Database Mail Log in SSMS.
You can then modify any SQL Agent job and assign operators that will be notified via email if the job completes,
fails, or succeeds using SSMS or the following T-SQL script:
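As an example, the following sketch uses msdb.dbo.sp_update_job to notify the AzureSQLTeam operator when a job fails; the job name is a placeholder:
EXEC msdb.dbo.sp_update_job
    @job_name = N'MyAgentJob',                      -- placeholder job name
    @notify_level_email = 2,                        -- 1 = on success, 2 = on failure, 3 = on completion
    @notify_email_operator_name = N'AzureSQLTeam';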
Job history
SQL Managed Instance currently doesn't allow you to change any SQL Agent properties because they are stored
in the underlying registry values. This means options for adjusting the Agent retention policy for job history
records are fixed at the default of 1000 total records and max 100 history records per job.
For more information, see View SQL Agent job history.
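For example, you can review the retained history for a single job with the msdb.dbo.sp_help_jobhistory procedure; the job name below is a placeholder:
EXEC msdb.dbo.sp_help_jobhistory
    @job_name = N'MyAgentJob';   -- placeholder job name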
The following script creates a user in the master database for a login and grants the execute permissions needed to work with SQL Agent job information:
USE [master]
GO
CREATE USER [login_name] FOR LOGIN [login_name];
GO
GRANT EXECUTE ON master.dbo.xp_sqlagent_enum_jobs TO [login_name];
GRANT EXECUTE ON master.dbo.xp_sqlagent_is_starting TO [login_name];
GRANT EXECUTE ON master.dbo.xp_sqlagent_notify TO [login_name];
Learn more
What is Azure SQL Managed Instance?
What's new in Azure SQL Managed Instance?
Azure SQL Managed Instance T-SQL differences from SQL Server
Features comparison: Azure SQL Database and Azure SQL Managed Instance
Next steps
Configure Database Mail
Troubleshoot outbound SMTP connectivity problems in Azure
Detectable types of query performance bottlenecks
in SQL Server and Azure SQL Managed Instance
9/13/2022 • 12 minutes to read • Edit Online
In this example, t1.c1 takes @p1 , but t2.c2 continues to take GUID as literal. In this case, if you change the
value for c2 , the query is treated as a different query, and a new compilation will happen. To reduce
compilations in this example, you would also parameterize the GUID.
The following query shows the count of queries by query hash to determine whether a query is properly
parameterized:
SELECT TOP 10
q.query_hash
, count (distinct p.query_id ) AS number_of_distinct_query_ids
, min(qt.query_sql_text) AS sampled_query_text
FROM sys.query_store_query_text AS qt
JOIN sys.query_store_query AS q
ON qt.query_text_id = q.query_text_id
JOIN sys.query_store_plan AS p
ON q.query_id = p.query_id
JOIN sys.query_store_runtime_stats AS rs
ON rs.plan_id = p.plan_id
JOIN sys.query_store_runtime_stats_interval AS rsi
ON rsi.runtime_stats_interval_id = rs.runtime_stats_interval_id
WHERE
rsi.start_time >= DATEADD(hour, -2, GETUTCDATE())
AND query_parameterization_type_desc IN ('User', 'None')
GROUP BY q.query_hash
ORDER BY count (distinct p.query_id) DESC;
Waiting-related problems
Once you have eliminated a suboptimal plan and execution-related problems, the remaining performance problem is
generally that queries are waiting for some resource. Waiting-related problems might be caused by:
Blocking :
One query might hold the lock on objects in the database while others try to access the same objects. You
can identify blocking queries by using DMVs. For more information, see Understand and resolve blocking
problems.
IO problems
Queries might be waiting for the pages to be written to the data or log files. In this case, check the
INSTANCE_LOG_RATE_GOVERNOR , WRITE_LOG , or PAGEIOLATCH_* wait statistics in the DMV. See using DMVs to
identify IO performance issues.
Tempdb problems
If the workload uses temporary tables or there are tempdb spills in the plans, the queries might have a
problem with tempdb throughput. To investigate further, review identify tempdb issues.
Memory-related problems
If the workload doesn't have enough memory, the page life expectancy might drop, or the queries might
get less memory than they need. In some cases, built-in intelligence in Query Optimizer will fix memory-
related problems. See using DMVs to identify memory grant issues. If you encounter out of memory
errors, review sys.dm_os_out_of_memory_events. Consider also the memory optimized premium-series tier
of Azure SQL Managed Instance hardware with higher ratios of memory to vCores.
Methods to show top wait categories
These methods are commonly used to show the top categories of wait types:
Use Query Store to find wait statistics for each query over time. In Query Store, wait types are combined into
wait categories. You can find the mapping of wait categories to wait types in sys.query_store_wait_stats.
Use sys.dm_os_wait_stats to return information about all the waits encountered by threads that executed
during a query operation; a minimal example follows this list. You can use this aggregated view to diagnose
performance problems with the Azure SQL Managed Instance or SQL Server instance. Queries can be waiting on
resources, queue waits, or external waits.
Use sys.dm_os_waiting_tasks to return information about the queue of tasks that are waiting on some
resource.
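As a minimal example of the second method, the following query returns the wait types that have accumulated the most wait time on the instance since the statistics were last cleared:
SELECT TOP 10
    wait_type,
    waiting_tasks_count,
    wait_time_ms,
    signal_wait_time_ms
FROM sys.dm_os_wait_stats
ORDER BY wait_time_ms DESC;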
In high-CPU scenarios, Query Store and wait statistics might not reflect CPU usage if:
High-CPU-consuming queries are still executing.
The high-CPU-consuming queries were running when a failover happened.
DMVs that track Query Store and wait statistics show results for only successfully completed and timed-out
queries. They don't show data for currently executing statements until the statements finish. Use the dynamic
management view sys.dm_exec_requests to track currently executing queries and the associated worker time.
TIP
Additional tools:
TigerToolbox waits and latches
TigerToolbox usp_whatsup
Next steps
Configure the max degree of parallelism Server Configuration Option
Understand and resolve SQL Server blocking problems
Monitoring Microsoft Azure SQL Managed Instance performance using dynamic management views
Tune nonclustered indexes with missing index suggestions
sys.server_resource_stats (Azure SQL Managed Instance)
Overview of Azure SQL Managed Instance resource limits
Monitoring Microsoft Azure SQL Managed Instance
performance using dynamic management views
9/13/2022 • 16 minutes to read • Edit Online
Permissions
In Azure SQL Managed Instance, querying a dynamic management view requires VIEW SERVER STATE
permissions.
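For example, to grant this permission to a login (the login name below is a placeholder), run the following in the master database:
GRANT VIEW SERVER STATE TO [monitoring_login];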
In an instance of SQL Server and in Azure SQL Managed Instance, dynamic management views return server
state information.
Once you identify the problematic queries, it's time to tune those queries to reduce CPU utilization. If you don't
have time to tune the queries, you may also choose to upgrade the SLO of the managed instance to work
around the issue.
PAGEIOLATCH_* waits (including PAGEIOLATCH_SH , PAGEIOLATCH_EX , and PAGEIOLATCH_UP ) indicate data file IO
issues. If the wait type name has IO in it, it points to an IO issue. If there is no IO in the page latch wait name, it
points to a different type of problem (for example, tempdb contention). WRITE_LOG waits indicate transaction log IO
issues.
SELECT TOP 10
CONVERT(VARCHAR(30), GETDATE(), 121) AS runtime,
r.session_id,
r.blocking_session_id,
r.cpu_time,
r.total_elapsed_time,
r.reads,
r.writes,
r.logical_reads,
r.row_count,
wait_time,
wait_type,
r.command,
OBJECT_NAME(txt.objectid, txt.dbid) 'Object_Name',
TRIM(REPLACE(
REPLACE(
REPLACE(
SUBSTRING(
SUBSTRING(
text,
(r.statement_start_offset / 2) + 1,
((CASE r.statement_end_offset
WHEN -1 THEN
DATALENGTH(text)
ELSE
r.statement_end_offset
END - r.statement_start_offset
) / 2
) + 1
),
1,
1000
),
CHAR(10),
' '
),
CHAR(13),
' '
)
) stmt_text,
mg.dop, --Degree of parallelism
mg.request_time, --Date and time when this query requested the memory grant
mg.grant_time, --NULL means memory has not been granted
mg.requested_memory_kb / 1024.0 AS requested_memory_mb, --Total requested amount of memory in megabytes
mg.granted_memory_kb / 1024.0 AS granted_memory_mb, --Total amount of memory actually granted in megabytes; NULL if not granted
mg.required_memory_kb / 1024.0 AS required_memory_mb, --Minimum memory required to run this query in megabytes
max_used_memory_kb / 1024.0 AS max_used_memory_mb,
mg.query_cost, --Estimated query cost
mg.timeout_sec, --Time-out in seconds before this query gives up the memory grant request
mg.resource_semaphore_id, --Non-unique ID of the resource semaphore on which this query is waiting
mg.wait_time_ms, --Wait time in milliseconds; NULL if the memory is already granted
CASE mg.is_next_candidate --Is this process the next candidate for a memory grant
    WHEN 1 THEN 'Yes'
    WHEN 0 THEN 'No'
    ELSE 'Memory has been granted'
END AS 'Next Candidate for Memory Grant',
qp.query_plan
FROM sys.dm_exec_requests AS r
JOIN sys.dm_exec_query_memory_grants AS mg
ON r.session_id = mg.session_id
AND r.request_id = mg.request_id
CROSS APPLY sys.dm_exec_sql_text(mg.sql_handle) AS txt
CROSS APPLY sys.dm_exec_query_plan(r.plan_handle) AS qp
ORDER BY mg.granted_memory_kb DESC;
The following query returns the size of individual objects (in megabytes) in your database:
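A query based on sys.dm_db_partition_stats can provide this information; the following is a minimal sketch that you run in the context of the database you want to analyze:
SELECT
    o.name AS object_name,
    SUM(ps.reserved_page_count) * 8.0 / 1024 AS size_mb
FROM sys.dm_db_partition_stats AS ps
JOIN sys.objects AS o
    ON ps.object_id = o.object_id
WHERE o.is_ms_shipped = 0          -- exclude system objects
GROUP BY o.name
ORDER BY size_mb DESC;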
Monitoring connections
You can use the sys.dm_exec_connections view to retrieve information about the connections established to a
specific managed instance and the details of each connection. In addition, the sys.dm_exec_sessions view is
helpful when retrieving information about all active user connections and internal tasks.
The following query retrieves information on the current connection:
SELECT
c.session_id, c.net_transport, c.encrypt_option,
c.auth_scheme, s.host_name, s.program_name,
s.client_interface_name, s.login_name, s.nt_domain,
s.nt_user_name, s.original_login_name, c.connect_time,
s.login_time
FROM sys.dm_exec_connections AS c
JOIN sys.dm_exec_sessions AS s
ON c.session_id = s.session_id
WHERE c.session_id = @@SPID;
1. The following example returns the average compute (CPU) utilization of your instance over the last seven days:
DECLARE @s datetime;
DECLARE @e datetime;
SET @s= DateAdd(d,-7,GetUTCDate());
SET @e= GETUTCDATE();
SELECT AVG(avg_cpu_percent) AS Average_Compute_Utilization
FROM sys.server_resource_stats
WHERE start_time BETWEEN @s AND @e;
GO
2. The following example returns the average storage space used by your instance per day, to allow for
growth trending analysis:
DECLARE @s datetime;
DECLARE @e datetime;
SET @s= DateAdd(d,-7,GetUTCDate());
SET @e= GETUTCDATE();
SELECT Day = convert(date, start_time), AVG(storage_space_used_mb) AS Average_Space_Used_mb
FROM sys.server_resource_stats
WHERE start_time BETWEEN @s AND @e
GROUP BY convert(date, start_time)
ORDER BY convert(date, start_time);
GO
To analyze the workload of an individual database, modify this query to filter on the specific database you want
to analyze. For example, if you have a database named MyDatabase , this Transact-SQL query returns the count of
concurrent requests in that database:
SELECT COUNT(*) AS [Concurrent_Requests]
FROM sys.dm_exec_requests R
INNER JOIN sys.databases D ON D.database_id = R.database_id
AND D.name = 'MyDatabase';
This is just a snapshot at a single point in time. To get a better understanding of your workload and concurrent
request requirements, you'll need to collect many samples over time.
Maximum concurrent logins
You can analyze your user and application patterns to get an idea of the frequency of logins. You also can run
real-world loads in a test environment to make sure that you're not hitting this or other limits we discuss in this
article. There isn't a single query or dynamic management view (DMV) that can show you concurrent login
counts or history.
If multiple clients use the same connection string, the service authenticates each login. If 10 users
simultaneously connect to a database by using the same username and password, there would be 10 concurrent
logins. This limit applies only to the duration of the login and authentication. If the same 10 users connect to the
database sequentially, the number of concurrent logins would never be greater than 1.
Maximum sessions
To see the number of current active sessions, run this Transact-SQL query on your database:
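A minimal sketch based on sys.dm_exec_sessions follows; it counts user sessions and can be filtered by database if needed:
SELECT COUNT(*) AS active_sessions
FROM sys.dm_exec_sessions
WHERE is_user_process = 1;
-- To focus on a single database, add: AND database_id = DB_ID('MyDatabase')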
If you're analyzing a SQL Server workload, modify the query to focus on a specific database. This query helps
you determine possible session needs for the database if you are considering moving it to Azure.
Again, these queries return a point-in-time count. If you collect multiple samples over time, you'll have the best
understanding of your session use.
You can get historical statistics on sessions by querying the sys.resource_stats view and reviewing the
active_session_count column.
SELECT
highest_cpu_queries.plan_handle,
highest_cpu_queries.total_worker_time,
q.dbid,
q.objectid,
q.number,
q.encrypted,
q.[text]
FROM
(SELECT TOP 50
qs.plan_handle,
qs.total_worker_time
FROM
sys.dm_exec_query_stats qs
ORDER BY qs.total_worker_time desc) AS highest_cpu_queries
CROSS APPLY sys.dm_exec_sql_text(plan_handle) AS q
ORDER BY highest_cpu_queries.total_worker_time DESC;
See also
Dynamic Management Views and Functions (Transact-SQL)
System Dynamic Management Views
Next steps
Introduction to Azure SQL Database and Azure SQL Managed Instance
Tune applications and databases for performance in Azure SQL Database and Azure SQL Managed Instance
Understand and resolve SQL Server blocking problems
Analyze and prevent deadlocks in Azure SQL Managed Instance
sys.server_resource_stats (Azure SQL Managed Instance)
Monitor Azure SQL Managed Instance with Azure
Monitor
9/13/2022 • 5 minutes to read • Edit Online
NOTE
Azure SQL Analytics (preview) is an integration with Azure Monitor, where many monitoring solutions are no longer in
active development. For more monitoring options, see Monitoring and performance tuning in Azure SQL Managed
Instance and Azure SQL Database.
Monitoring data
Azure SQL Managed Instance collects the same kinds of monitoring data as other Azure resources that are
described in Monitoring data from Azure resources.
See the Monitoring Azure SQL Managed Instance with Azure Monitor reference for detailed information on the
metrics and logs created by Azure SQL Managed Instance.
Analyzing metrics
You can analyze metrics for Azure SQL Managed Instance alongside metrics from other Azure services using the
metrics explorer by opening Metrics from the Monitor menu in the Azure portal. See Getting started with
Azure Metrics Explorer for details on using this tool.
For a list of the platform metrics collected for Azure SQL Managed Instance, see the metrics section of the
Monitoring Azure SQL Managed Instance data reference.
For reference, you can see a list of all resource metrics supported in Azure Monitor.
Analyzing logs
Data in Azure Monitor Logs is stored in tables where each table has its own set of unique properties. This data is
optionally collected via Diagnostic settings.
All resource logs in Azure Monitor have the same fields followed by service-specific fields. The common schema
is outlined in Azure Monitor resource log schema.
The Activity log is a type of platform log in Azure that provides insight into subscription-level events. You can
view it independently or route it to Azure Monitor Logs, where you can do much more complex queries using
Log Analytics.
For a list of the types of resource logs collected for Azure SQL Managed Instance, see Resource Logs for Azure
SQL Managed Instance.
For a list of the tables used by Azure Monitor Logs and queryable by Log Analytics, see Azure Monitor Logs
tables for Azure SQL Managed Instance.
Sample Kusto queries
IMPORTANT
When you select Logs from the Monitoring menu of an Azure SQL Managed Instance, Log Analytics is opened with the
query scope set to the current Azure SQL Managed Instance. If you want to run a query that includes data from
databases or data from other Azure services, select Select scope from the query menu. See Log query scope and time
range in Azure Monitor Log Analytics for details.
NOTE
After creating a diagnostic setting for a resource, it might take up to 15 minutes between when an event is emitted and
when it appears in a Log Analytics workspace.
Use the following sample queries to help you monitor your Azure SQL Managed Instance:
Example A: Display all managed instances with avg_cpu utilization over 95%.
Example B: Display all managed instances with storage utilization over 90%, dividing storage_space_used_mb_s
by reserved_storage_mb_s .
Alerts
Azure Monitor alerts proactively notify you when important conditions are found in your monitoring data.
These metrics in Azure Monitor are always collected. They allow you to identify and address issues in your Azure
SQL Managed Instance before your customers notice them. You can set alerts on metrics, logs, and the activity
log.
If you are creating or running an application in Azure, Azure Monitor Application Insights may offer additional
types of alerts.
The following table lists common and recommended alert rules for Azure SQL Managed Instance. You may see
different options available depending on your purchase model.
Next steps
See Monitoring Azure SQL Managed Instance data reference for a reference of the metrics, logs, and other
important values created by Azure SQL Managed Instance.
For Azure SQL Database, see Monitoring Azure SQL Database with Azure Monitor.
See Monitoring Azure resources with Azure Monitor for details on monitoring Azure resources.
Monitoring Azure SQL Managed Instance data
reference
9/13/2022 • 2 minutes to read • Edit Online
Metrics
For more on using Azure Monitor SQL Insights (preview) for all products in the Azure SQL family, see Monitor
your SQL deployments with SQL Insights (preview).
For data specific to Azure SQL Managed Instance, see Data for Azure SQL Managed Instance.
For a complete list of metrics, see Microsoft.Sql/managedInstances.
Resource logs
This section lists the types of resource logs you can collect for Azure SQL Managed Instance.
For reference, see a list of all resource logs category types supported in Azure Monitor.
For a reference of resource log types collected for Azure SQL Managed Instance, see Streaming export of Azure
SQL Managed Instance Diagnostic telemetry for export
Resource type: AzureActivity. Notes: Entries from the Azure Activity log that provide insight into any
subscription-level or management group level events that have occurred in Azure.
Next steps
Monitoring Azure SQL Managed Instance with Azure Monitor
Monitoring Azure SQL Database with Azure Monitor
Monitoring Azure resources with Azure Monitor
Time zones in Azure SQL Managed Instance
9/13/2022 • 8 minutes to read • Edit Online
NOTE
Azure SQL Database does not support time zone settings; it always follows UTC. Use AT TIME ZONE in SQL Database if
you need to interpret date and time information in a non-UTC time zone.
NOTE
On August 8, 2022, the Chilean government made an official announcement about a Daylight-Saving Time (DST) time
zone change. Starting at 12:00 a.m. Saturday, September 10, 2022, until 12:00 a.m. Saturday, April 1, 2023, the official
time will advance 60 minutes. The change affects the following three time zones: Pacific SA Standard Time , Easter
Island Standard Time and Magallanes Standard Time . Azure SQL Managed Instances using affected time zones will
not reflect the changes until Microsoft releases an OS update to support this and Azure SQL Managed Instance service
absorbs the update on the OS level. If you need to alter affected time zones for your managed instances, please be aware
of the limitations and follow the guidance from the documentation.
NOTE
The time zone of an existing managed instance can't be changed.
"properties": {
"administratorLogin": "[parameters('user')]",
"administratorLoginPassword": "[parameters('pwd')]",
"subnetId": "[parameters('subnetId')]",
"storageSizeInGB": 256,
"vCores": 8,
"licenseType": "LicenseIncluded",
"hardwareFamily": "Gen5",
"collation": "Serbian_Cyrillic_100_CS_AS",
"timezoneId": "Central European Standard Time"
},
A list of supported values for the timezoneId property is at the end of this article.
If not specified, the time zone is set to UTC.
Check the time zone of an instance
The CURRENT_TIMEZONE function returns a display name of the time zone of the instance.
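For example, the following query returns the configured time zone; the output depends on how the instance was provisioned:
SELECT CURRENT_TIMEZONE() AS instance_time_zone;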
Cross-feature considerations
Restore and import
You can restore a backup file or import data to a managed instance from an instance or a server with different
time zone settings. Make sure to do so with caution. Analyze the application behavior and the results of the
queries and reports, just like when you transfer data between two SQL Server instances with different time zone
settings.
Point-in-time restore
When you perform a point-in-time restore, the time to restore to is interpreted as UTC time. This way any
ambiguities due to daylight saving time and its potential changes are avoided.
Auto -failover groups
Using the same time zone across a primary and secondary instance in a failover group isn't enforced, but we
strongly recommend it.
WARNING
We strongly recommend that you use the same time zone for the primary and secondary instance in a failover group.
Because of certain rare use cases keeping the same time zone across primary and secondary instances isn't enforced. It's
important to understand that in the case of manual or automatic failover, the secondary instance will retain its original
time zone.
Limitations
The time zone of the existing managed instance can't be changed. As a workaround, create a new managed
instance with the proper time zone and then either perform a manual backup and restore, or what we
recommend, perform a cross-instance point-in-time restore.
External processes launched from the SQL Server Agent jobs don't observe the time zone of the instance.
For example, the supported value FLE Standard Time corresponds to (UTC+02:00) Helsinki, Kyiv, Riga, Sofia, Tallinn, Vilnius.
See also
CURRENT_TIMEZONE (Transact-SQL)
CURRENT_TIMEZONE_ID (Transact-SQL)
AT TIME ZONE (Transact-SQL)
sys.time_zone_info (Transact-SQL)
Azure SQL Managed Instance connection types
9/13/2022 • 2 minutes to read • Edit Online
Connection types
Azure SQL Managed Instance supports the following two connection types:
Redirect (recommended): Clients establish connections directly to the node hosting the database. To
enable connectivity using redirect, you must open firewalls and Network Security Groups (NSG) to allow
access on port 1433 and ports 11000-11999. Packets go directly to the database, so redirect offers latency and
throughput performance improvements over proxy. The impact of planned maintenance events on the gateway
component is also minimized with the redirect connection type compared to proxy because established connections
have no dependency on the gateway.
Proxy (default): In this mode, all connections use a proxy gateway component. To enable connectivity,
only port 1433 for private networks and port 3342 for public connections need to be opened. Choosing this
mode can result in higher latency and lower throughput, depending on the nature of the workload. Also, planned
maintenance events of the gateway component break all live connections in proxy mode. We highly recommend
the redirect connection policy over the proxy connection policy for the lowest latency, highest throughput,
and minimized impact of planned maintenance.
NOTE
This article uses the Azure Az PowerShell module, which is the recommended PowerShell module for interacting with
Azure. To get started with the Az PowerShell module, see Install Azure PowerShell. To learn how to migrate to the Az
PowerShell module, see Migrate Azure PowerShell from AzureRM to Az.
The following PowerShell script shows how to change the connection type for a managed instance to Redirect .
Install-Module -Name Az
Import-Module Az.Accounts
Import-Module Az.Sql
Connect-AzAccount
# Get your SubscriptionId from the Get-AzSubscription command
Get-AzSubscription
# Use your SubscriptionId in place of {subscription-id} below
Select-AzSubscription -SubscriptionId {subscription-id}
# Replace {rg-name} with the resource group for your managed instance, and replace {mi-name} with the name
of your managed instance
$mi = Get-AzSqlInstance -ResourceGroupName {rg-name} -Name {mi-name}
$mi = $mi | Set-AzSqlInstance -ProxyOverride "Redirect" -force
Next steps
Restore a database to SQL Managed Instance
Learn how to configure a public endpoint on SQL Managed Instance
Learn about SQL Managed Instance connectivity architecture
Create alerts for Azure SQL Managed Instance
using the Azure portal
9/13/2022 • 5 minutes to read • Edit Online
Overview
You can receive an alert based on monitoring metrics for, or events on, your Azure services.
Metric values - The alert triggers when the value of a specified metric crosses a threshold you assign in
either direction. That is, it triggers both when the condition is first met and then afterwards when that
condition is no longer being met.
You can configure an alert to do the following when it triggers:
Send email notifications to the service administrator and coadministrators
Send email to additional emails that you specify.
Call a phone number with voice prompt
Send text message to a phone number
Call a webhook
Call Azure Function
Call Azure runbook
Call an external ticketing ITSM compatible system
You can configure and get information about alert rules using the Azure portal, PowerShell or the Azure CLI or
Azure Monitor REST API.
The following managed instance metrics are available for alerting configuration:
Virtual core count: vCores provisioned for the managed instance; changes with resource scaling operations. Range: 4-80 vCores.
3. On the drop-down menu, select one of the metrics you wish to set up your alert on (Storage space used
is shown in the example).
4. Select aggregation period - average, minimum, or maximum reached in the given time period (Avg, Min,
or Max).
5. Select New alert rule
6. In the Create alert rule pane click on Condition name (Storage space used is shown in the example)
7. On the Configure signal logic pane, define Operator, Aggregation type, and Threshold value
Operator type options are greater than, equal and less than (the threshold value)
Aggregation type options are min, max or average (in the aggregation granularity period)
Threshold value is the alert value which will be evaluated based on the operator and aggregation
criteria
In the example shown in the screenshot, value of 1840876 MB is used representing a threshold value of
1.8 TB. As the operator in the example is set to greater than, the alert will be created if the storage space
consumption on the managed instance goes over 1.8 TB. Note that the threshold value for storage space
metrics must be expressed in MB.
8. Set the evaluation period - aggregation granularity in minutes and frequency of evaluation. The
frequency of evaluation will denote time the alerting system will periodically check if the threshold
condition has been met.
9. Select an action group. The Action group pane will show up, through which you will be able to select an existing
action group or create a new one. This action defines what will happen upon triggering an alert (for example, sending
an email, calling you on the phone, or executing a webhook, Azure function, or runbook).
To create a new action group, select +Create action group
Define how you want to be alerted: Enter the action group name, short name, action name, and
select the Action Type. The Action Type defines whether you will be notified via email, text message, or voice call,
or whether a webhook, Azure function, or runbook will be executed, or an ITSM ticket will be created in
your compatible system.
10. Fill in the alert rule details for your records and select the severity type.
Complete creating the alert rule by clicking the Create alert rule button.
The new alert rule will become active within a few minutes and will be triggered based on your settings.
Verifying alerts
NOTE
To suppress noisy alerts, see Suppression of alerts using action rules.
Upon setting up an alerting rule, verify that you are satisfied with the alerting trigger and its frequency. For the
example shown on this page for setting up an alert on storage space used, if your alerting option was email, you
might receive an email such as the one shown below.
The email shows the alert name and the details of the threshold and why the alert was triggered, helping you to
verify and troubleshoot your alert. You can use the See in Azure portal button to view the alert received via email
in the Azure portal.
Alternatively, you could also click on Alerts on the Azure navigation bar, if you have it configured.
2. On the Alerts pane, select Manage alert rules.
A list of existing alerts will show up. Select an individual existing alert rule to manage it. Existing active
rules can be modified and tuned to your preference. Active rules can also be suspended without being
deleted.
Next steps
Learn about Azure Monitor alerting system, see Overview of alerts in Microsoft Azure
Learn more about metric alerts, see Understand how metric alerts work in Azure Monitor
Learn about configuring a webhook in alerts, see Call a webhook with a classic metric alert
Learn about configuring and managing alerts using PowerShell, see Action rules
Learn about configuring and managing alerts using API, see Azure Monitor REST API reference
Configure Advanced Threat Protection in Azure
SQL Managed Instance
9/13/2022 • 2 minutes to read • Edit Online
Azure portal
1. Sign in to the Azure portal.
2. Navigate to the configuration page of the instance of SQL Managed Instance you want to protect. Under
Security , select Defender for SQL .
3. In the Microsoft Defender for SQL configuration page:
Turn ON Microsoft Defender for SQL.
Configure the Send alerts to email address to receive security alerts upon detection of anomalous
database activities.
Select the Azure storage account where anomalous threat audit records are saved.
Select the Advanced Threat Protection types that you would like configured. Learn more about
Advanced Threat Protection alerts.
4. Click Save to save the new or updated Microsoft Defender for SQL policy.
Next steps
Learn more about Advanced Threat Protection.
Learn about managed instances, see What is an Azure SQL Managed Instance.
Learn more about Advanced Threat Protection for Azure SQL Database.
Learn more about SQL Managed Instance auditing.
Learn more about Microsoft Defender for Cloud.
Determine required subnet size and range for Azure
SQL Managed Instance
9/13/2022 • 5 minutes to read • Edit Online
IMPORTANT
A subnet size of 16 IP addresses (subnet mask /28) allows the deployment of a single managed instance inside it. It
should be used only for evaluation or for dev/test scenarios where scaling operations won't be performed.
IMPORTANT
It's not possible to change the subnet address range if any resource exists in the subnet. Consider using bigger subnets
rather than smaller ones to prevent issues in the future.
GP 5 6 3 14
BC 5 6 5 16
Update scenarios
During a scaling operation, instances temporarily require additional IP capacity that depends on pricing tier:
GP, scaling vCores: 3 additional addresses
GP, scaling storage: 0 additional addresses
GP, switching to BC: 5 additional addresses
BC, scaling vCores: 5 additional addresses
BC, scaling storage: 5 additional addresses
BC, switching to GP: 3 additional addresses
NOTE
Though it's possible to deploy managed instances to a subnet with a number of IP addresses that's less than the output
of the subnet formula, always consider using bigger subnets instead. Using a bigger subnet can help avoid future issues
stemming from a lack of IP addresses, such as the inability to create additional instances within the subnet or scale
existing instances.
Next steps
For an overview, see What is Azure SQL Managed Instance?.
Learn more about connectivity architecture for SQL Managed Instance.
See how to create a virtual network where you'll deploy SQL Managed Instance.
For DNS issues, see Configure a custom DNS.
Create a virtual network for Azure SQL Managed
Instance
9/13/2022 • 2 minutes to read • Edit Online
NOTE
You should determine the size of the subnet for SQL Managed Instance before you deploy the first instance. You can't
resize the subnet after you put the resources inside.
If you plan to use an existing virtual network, you need to modify that network configuration to accommodate SQL
Managed Instance. For more information, see Modify an existing virtual network for SQL Managed Instance.
After a managed instance is created, moving the managed instance or virtual network to another resource group or
subscription is not supported.
IMPORTANT
Moving a managed instance from one subnet to another within the same virtual network is allowed and supported.
Moving a managed instance across virtual networks is not supported.
This button opens a form that you can use to configure the network environment where you can deploy
SQL Managed Instance.
NOTE
This Azure Resource Manager template will deploy a virtual network with two subnets. One subnet, called
ManagedInstances , is reserved for SQL Managed Instance and has a preconfigured route table. The other
subnet, called Default , is used for other resources that should access SQL Managed Instance (for example, Azure
Virtual Machines).
3. Configure the network environment. On the following form, you can configure parameters of your
network environment:
You might change the names of the virtual network and subnets, and adjust the IP ranges associated with
your networking resources. After you select the Purchase button, this form will create and configure
your environment. If you don't need two subnets, you can delete the default one.
Next steps
For an overview, see What is SQL Managed Instance?.
Learn about connectivity architecture in SQL Managed Instance.
Learn how to modify an existing virtual network for SQL Managed Instance.
For a tutorial that shows how to create a virtual network, create a managed instance, and restore a database
from a database backup, see Create a managed instance.
For DNS issues, see Configure a custom DNS.
Configure an existing virtual network for Azure SQL
Managed Instance
9/13/2022 • 2 minutes to read • Edit Online
NOTE
You can create a managed instance only in virtual networks created through the Azure Resource Manager deployment
model. Azure virtual networks created through the classic deployment model are not supported. Calculate subnet size by
following the guidelines in the Determine the size of subnet for SQL Managed Instance article. You can't resize the subnet
after you deploy the resources inside.
After the managed instance is created, you can move the instance to another subnet inside the same virtual network,
but moving the instance or virtual network to another resource group or subscription is not supported.
$scriptUrlBase = 'https://raw.githubusercontent.com/Microsoft/sql-server-
samples/master/samples/manage/azure-sql-db-managed-instance/delegate-subnet'
$parameters = @{
subscriptionId = '<subscriptionId>'
resourceGroupName = '<resourceGroupName>'
virtualNetworkName = '<virtualNetworkName>'
subnetName = '<subnetName>'
}
Key benefits
Configuring Virtual network Azure Storage service endpoint policies for your Azure SQL Managed Instance
provides the following benefits:
Improved security for your Azure SQL Managed Instance traffic to Azure Storage : Endpoint
policies establish a security control that prevents erroneous or malicious exfiltration of business-critical
data. Traffic can be limited to only those storage accounts that are compliant with your data governance
requirements.
Granular control over which storage accounts can be accessed : Service endpoint policies can
permit traffic to storage accounts at a subscription, resource group, and individual storage account level.
Administrators can use service endpoint policies to enforce adherence to the organization's data security
architecture in Azure.
System traffic remains unaffected : Service endpoint policies never obstruct access to storage that is
required for Azure SQL Managed Instance to function. This includes the storage of backups, data files,
transaction log files, and other assets.
IMPORTANT
Service endpoint policies only control traffic that originates from the SQL Managed Instance subnet and terminates in
Azure storage. The policies do not affect, for example, exporting the database to an on-prem BACPAC file, Azure Data
Factory integration, the collection of diagnostic information via Azure Diagnostic Settings, or other mechanisms of data
extraction that do not directly target Azure Storage.
Limitations
Enabling service endpoint policies for your Azure SQL Managed Instance has the following limitations:
While in preview, this feature is available in all Azure regions where SQL Managed Instance is supported,
except for China East 2, China North 2, Central US EUAP, East US 2 EUAP, US Gov Arizona, US Gov
Texas, US Gov Virginia, and West Central US.
The feature is available only to virtual networks deployed through the Azure Resource Manager deployment
model.
The feature is available only in subnets that have service endpoints for Azure Storage enabled.
Enabling service endpoints for Azure Storage also extends to include paired regions where you deploy the
virtual network to support Read-Access Geo-Redundant storage (RA-GRS) and Geo-Redundant storage
(GRS) traffic.
Assigning a service endpoint policy to a service endpoint upgrades the endpoint from regional to global
scope. In other words, all traffic to Azure Storage will go through the service endpoint regardless of the
region in which the storage account resides.
Configure policies
You'll first need to create your service endpoint policy, and then associate the policy with the SQL Managed
Instance subnet. Modify the workflow in this section to suit your business needs.
NOTE
SQL Managed Instance subnets require policies to contain the /Services/Azure/ManagedInstance service alias (See
step 5).
Managed instances deployed to a subnet that already contains service endpoint policies will automatically be
upgraded with the /Services/Azure/ManagedInstance service alias.
6. In Policy definitions, select + Add under Resources and enter or select the following information in the
Add a resource pane:
Service: Select Microsoft.Storage .
Scope: Select All accounts in subscription .
Subscription: Select a subscription containing the storage account(s) to permit. Refer to your inventory
of Azure storage accounts created earlier.
Select Add to finish adding the resource.
Repeat this step to add any additional subscriptions.
7. Optional: you may configure tags on the service endpoint policy under Tags .
8. Select Review + Create . Validate the information and select Create . To make further edits, select
Previous .
TIP
First, configure policies to allow access to entire subscriptions. Validate the configuration by ensuring that all workflows
operate normally. Then, optionally, reconfigure policies to allow individual storage accounts, or accounts in a resource
group. To do so, select Single account or All accounts in resource group in the Scope: field instead and fill in the
other fields accordingly.
WARNING
If the policies on this subnet do not have the /Services/Azure/ManagedInstance alias, you may see the following error:
Failed to save subnet 'subnet'. Error: 'Found conflicts with NetworkIntentPolicy. Details: Service endpoint policies on subnet are missing definitions'.
To resolve this, update all the policies on the subnet to include the /Services/Azure/ManagedInstance alias.
Next steps
Learn more on securing your Azure Storage accounts.
Read about SQL Managed Instance's security capabilities.
Explore the connectivity architecture of SQL Managed Instance.
Move Azure SQL Managed Instance across subnets
9/13/2022 • 9 minutes to read • Edit Online
NOTE
1 Custom rules added to the source subnet configuration are not copied to the destination subnet. Any customization of
the source subnet configuration must be replicated manually to the destination subnet. One way to achieve this is by
using the same route table and network security group for the source and destination subnet.
IMPORTANT
Gen4 hardware is being retired and is not available for new deployments, as announced on December 18, 2019.
Customers using Gen4 for Azure SQL Databases, elastic pools, or SQL managed instances should migrate to currently
available hardware, such as standard-series (Gen5), before January 31, 2023.
For more information on Gen4 hardware retirement and migration to current hardware, see our Blog post on Gen4
retirement. Existing Gen4 databases, elastic pools, and SQL managed instances will be migrated automatically to
equivalent standard-series (Gen5) hardware.
Downtime caused by automatic migration will be minimal and similar to downtime during scaling operations within
selected service tier. To avoid unplanned interruptions to workloads, migrate proactively at the time of your choice before
January 31, 2023.
Operation steps
The following steps occur during the instance move operation:
Virtual cluster resizing / creation: Depending on the state of the destination subnet, the virtual cluster is either created or resized.
New instance startup: The SQL process starts on the deployed virtual cluster in the destination subnet.
Seeding database files / attaching database files: Depending on the service tier, either the database is seeded or the database files are attached.
Preparing failover and failover: After data has been seeded or database files reattached, the system prepares for failover. When everything is ready, the system performs a failover with a short downtime, usually less than 10 seconds.
Old SQL instance cleanup: Removes the old SQL process from the source virtual cluster.
Virtual cluster deletion: If it's the last instance within the source subnet, the final step deletes the virtual cluster synchronously. Otherwise, the virtual cluster is asynchronously defragmented.
A detailed explanation of the operation steps can be found in the overview of Azure SQL Managed Instance
management operations.
Portal
PowerShell
Azure CLI
The option to choose the instance subnet is located on the Networking blade of the Azure portal. The instance
move operation starts when you select a subnet and save your changes.
The first step of the move operation is to prepare the destination subnet for deployment, which may take several
minutes. Once the subnet is ready, the instance move management operation starts and becomes visible in the
Azure portal.
Monitor instance move operations from the Overview blade of the Azure portal. Select the notification to open
an additional blade containing information about the current step, the total steps, and a button to cancel the
operation.
Next steps
To learn how to create your first managed instance, see Quickstart guide.
For a features and comparison list, see common SQL features.
For more information about VNet configuration, see SQL Managed Instance VNet configuration.
For a quickstart that creates a managed instance and restores a database from a backup file, see Create a
managed instance.
For a tutorial about using Azure Database Migration Service for migration, see SQL Managed Instance
migration using Database Migration Service.
Delete a subnet after deleting an Azure SQL
Managed Instance
9/13/2022 • 3 minutes to read • Edit Online
IMPORTANT
There is no need for any manual action on the virtual cluster in order to release the subnet. Once the last virtual cluster is
deleted, you can delete the subnet.
There are rare circumstances in which a create operation can fail and leave behind an empty virtual cluster.
Additionally, because instance creation can be canceled, it's possible for a virtual cluster to be deployed with instances
inside it that are in a failed-to-deploy state. In these situations, virtual cluster removal is initiated automatically
and completes in the background.
IMPORTANT
There are no charges for keeping an empty virtual cluster or instances that have failed to create.
Deletion of a virtual cluster is a long-running operation lasting for about 1.5 hours (see SQL Managed Instance
management operations for up-to-date virtual cluster delete time). The virtual cluster will still be visible in the portal
until this process is completed.
Only one delete operation can be run on the virtual cluster at a time. All subsequent customer-initiated delete requests will
result in an error because a delete operation is already in progress.
To delete a virtual cluster by using the Azure portal, search for the virtual cluster resources.
After you locate the virtual cluster you want to delete, select this resource, and select Delete . You're prompted to
confirm the virtual cluster deletion.
Azure portal notifications will show you a confirmation that the request to delete the virtual cluster has been
successfully submitted. The deletion operation itself will last for about 1.5 hours, during which the virtual cluster
will still be visible in portal. Once the process is completed, the virtual cluster will no longer be visible and the
subnet associated with it will be released for reuse.
TIP
If there are no SQL Managed Instances shown in the virtual cluster, and you are unable to delete the virtual cluster,
ensure that you do not have an ongoing instance deployment in progress. This includes started and canceled
deployments that are still in progress. This is because these operations will still use the virtual cluster, locking it from
deletion. Review the Deployments tab of the resource group where the instance was deployed to see any deployments
in progress. In this case, wait for the deployment to complete, then delete the SQL Managed Instance. The virtual cluster
will be synchronously deleted as part of the instance removal.
To delete a virtual cluster through the API, use the URI parameters specified in the virtual clusters delete method.
Next steps
For an overview, see What is Azure SQL Managed Instance?.
Learn about connectivity architecture in SQL Managed Instance.
Learn how to modify an existing virtual network for SQL Managed Instance.
For a tutorial that shows how to create a virtual network, create an Azure SQL Managed Instance, and restore
a database from a database backup, see Create an Azure SQL Managed Instance (portal).
For DNS issues, see Configure a custom DNS.
Configure a custom DNS for Azure SQL Managed
Instance
9/13/2022 • 2 minutes to read • Edit Online
IMPORTANT
Always use a fully qualified domain name (FQDN) for the mail server, for the SQL Server instance, and for other services,
even if they're within your private DNS zone. For example, use smtp.contoso.com for your mail server because smtp
won't resolve correctly. Creating a linked server or replication that references SQL Server VMs inside the same virtual
network also requires an FQDN and a default DNS suffix. For example, SQLVM.internal.cloudapp.net . For more
information, see Name resolution that uses your own DNS server.
IMPORTANT
Updating virtual network DNS servers won't affect SQL Managed Instance immediately. See how to synchronize virtual
network DNS servers setting on SQL Managed Instance virtual cluster for more details.
Next steps
For an overview, see What is Azure SQL Managed Instance?.
For a tutorial showing you how to create a new managed instance, see Create a managed instance.
For information about configuring a VNet for a managed instance, see VNet configuration for managed
instances.
Synchronize virtual network DNS servers setting on
SQL Managed Instance virtual cluster
9/13/2022 • 2 minutes to read • Edit Online
IMPORTANT
Synchronizing DNS servers setting will affect all of the Managed Instances hosted in the virtual cluster.
Use PowerShell command Invoke-AzResourceAction to synchronize DNS servers configuration for all the virtual
clusters in the subnet.
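The command below assumes that $virtualNetwork already refers to the virtual network hosting the SQL Managed Instance subnet. As a hedged sketch, it could be populated first like this (the resource group and network names are placeholders):
# Resolve the virtual network that hosts the SQL Managed Instance subnet (placeholder names)
$virtualNetwork = Get-AzVirtualNetwork -ResourceGroupName "<resourceGroupName>" -Name "<virtualNetworkName>"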
Get-AzSqlVirtualCluster `
| where SubnetId -match $virtualNetwork.Id `
| select Id `
| Invoke-AzResourceAction -Action updateManagedInstanceDnsServers -Force
Use Azure CLI command az resource invoke-action to synchronize DNS servers configuration for all the virtual
clusters in the subnet.
Next steps
Learn more about configuring a custom DNS Configure a custom DNS for Azure SQL Managed Instance.
For an overview, see What is Azure SQL Managed Instance?.
Determine the management endpoint IP address -
Azure SQL Managed Instance
9/13/2022 • 2 minutes to read • Edit Online
For more information about SQL Managed Instance and connectivity, see Azure SQL Managed Instance
connectivity architecture.
Verify the Azure SQL Managed Instance built-in
firewall
9/13/2022 • 2 minutes to read • Edit Online
Verify firewall
To verify these ports, use any security scanner tool to test these ports. The following screenshot shows how to
use one of these tools.
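If you don't have a dedicated scanner available, a simple hedged alternative is a PowerShell port probe such as the following; the host name and the port list are placeholders, not values from this article:
# Probe the managed instance host name on a few well-known ports (placeholders)
$miHost = "<mi-name>.<dns-zone>.database.windows.net"
foreach ($port in 1433, 3342, 5022) {
    Test-NetConnection -ComputerName $miHost -Port $port |
        Select-Object ComputerName, RemotePort, TcpTestSucceeded
}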
Next steps
For more information about SQL Managed Instance and connectivity, see Azure SQL Managed Instance
connectivity architecture.
Migrate databases from SQL Server to SQL
Managed Instance by using Log Replay Service
(Preview)
9/13/2022 • 24 minutes to read • Edit Online
NOTE
We recommend automating the migration of databases from SQL Server to SQL Managed Instance by using Database
Migration Service. Consider using LRS to orchestrate migrations when Database Migration Service doesn't fully
support your scenarios.
LRS is the only method to restore differential backups on managed instance. It isn't possible to manually restore
differential backups on managed instance, nor to manually set the NORECOVERY mode using T-SQL.
How it works
Building a custom solution to migrate databases to the cloud with LRS requires several orchestration steps, as
shown in the diagram and a table later in this section.
Migration consists of making database backups on SQL Server with CHECKSUM enabled, and copying backup
files to Azure Blob Storage. Full, log, and differential backups are supported. LRS cloud service is used to restore
backup files from Azure Blob Storage to SQL Managed Instance. Blob Storage serves as an intermediary storage
between SQL Server and SQL Managed Instance.
LRS monitors Blob Storage for any new differential or log backups added after the full backup has been
restored. LRS then automatically restores these new files. You can use the service to monitor the progress of
backup files being restored to SQL Managed Instance, and stop the process if necessary.
LRS doesn't require a specific naming convention for backup files. It scans all files placed on Azure Blob Storage
and constructs the backup chain from reading the file headers only. Databases are in a restoring state during
the migration process. Databases are restored in NORECOVERY mode, so they can't be used for read or write
workloads until the migration process completes.
If you're migrating several databases, you need to:
Place backup files for each database in a separate folder on Azure Blob Storage in a flat-file structure. For
example, use separate database folders: blobcontainer/database1/files , blobcontainer/database2/files , etc.
Don't use nested folders inside database folders, because this structure isn't supported. For example, don't use
subfolders such as blobcontainer/database1/subfolder/files .
Start LRS separately for each database.
Specify different URI paths to separate database folders on Azure Blob Storage.
Autocomplete versus Continuous mode migration
You can start LRS in either autocomplete or continuous mode.
Use autocomplete mode in cases when you have the entire backup chain generated in advance, and when you
don't plan to add any more files once the migration has been started. This migration mode is recommended for
passive workloads that don't require data catch-up. Upload all backup files to the Azure Blob Storage, and start
the autocomplete mode migration. The migration completes automatically when the specified last backup file
has been restored. The migrated database then becomes available for read and write access on SQL Managed
Instance.
If you plan to keep adding new backup files while the migration is in progress, use continuous mode. This
mode is recommended for active workloads requiring data catch-up. Upload the currently available backup
chain to Azure Blob Storage, start the migration in continuous mode, and keep adding new backup files from
your workload as needed. The system periodically scans the Azure Blob Storage folder and restores any new log
or differential backup files it finds. When you're ready to cut over, stop the workload on your SQL Server, then generate
and upload the last backup file. Ensure that the last backup file has been restored by verifying that the final log-tail
backup is shown as restored on SQL Managed Instance. Then, initiate the manual cutover. The final cutover step
brings the database online and makes it available for read and write access on SQL Managed Instance.
After LRS is stopped, either automatically through autocomplete, or manually through cutover, you can't resume
the restore process for a database that was brought online on SQL Managed Instance. For example, once
migration completes, you're no longer able to restore more differential backups for an online database. To
restore more backup files after migration completes, you need to delete the database from the managed
instance and restart the migration from the beginning.
Migration workflow
The typical migration workflow consists of the steps outlined below.
Autocomplete mode needs to be used only when all backup chain files are available in advance. This mode is
recommended for passive workloads for which no data catch-up is required.
Continuous mode migration needs to be used when you don't have the entire backup chain in advance, and
when you plan to add new backup files once the migration is in progress. This mode is recommended for active
workloads for which data catch-up is required.
1. Copy database backups from SQL Server to Blob Storage: Copy full, differential, and log backups from SQL Server to a Blob Storage container by using AzCopy or Azure Storage Explorer.
2. Start LRS in the cloud: You can start the service with PowerShell (Start-AzSqlInstanceDatabaseLogReplay) or the Azure CLI (az sql midb log-replay start). Choose between autocomplete or continuous migration modes. After the service starts, it takes backups from the Blob Storage container and starts restoring them to SQL Managed Instance.
2.1. Monitor the operation's progress: You can monitor progress of the restore operation with PowerShell (Get-AzSqlInstanceDatabaseLogReplay) or the Azure CLI (az sql midb log-replay show).
2.2. Stop the operation if required (optional): If you need to stop the migration process, use PowerShell (Stop-AzSqlInstanceDatabaseLogReplay) or the Azure CLI (az sql midb log-replay stop).
3. Cut over to the cloud when you're ready: If LRS was started in autocomplete mode, the migration completes automatically once the specified last backup file has been restored.
Getting started
Consider the requirements in this section to get started with using LRS to migrate.
SQL Server
Make sure you have the following requirements for SQL Server:
SQL Server versions from 2008 to 2022
Full backup of databases (one or multiple files)
Differential backup (one or multiple files)
Log backup (not split for a transaction log file)
CHECKSUM enabled for backups (mandatory)
Azure
Make sure you have the following requirements for Azure:
PowerShell Az.SQL module version 2.16.0 or later (installed or accessed through Azure Cloud Shell)
Azure CLI version 2.19.0 or later (installed)
Azure Blob Storage container provisioned
Shared access signature (SAS) security token with read and list permissions generated for the Blob Storage
container
Azure RBAC permissions
Running LRS through the provided clients requires one of the following Azure roles:
Subscription Owner role
SQL Managed Instance Contributor role
Custom role with the following permission: Microsoft.Sql/managedInstances/databases/*
Requirements
Ensure the following requirements are met:
Use the full recovery model on SQL Server (mandatory).
Use CHECKSUM for backups on SQL Server (mandatory).
Place backup files for an individual database inside a separate folder in a flat-file structure (mandatory).
Nested folders inside database folders aren't supported.
Best practices
We recommend the following best practices:
Run Data Migration Assistant to validate that your databases are ready to be migrated to SQL Managed
Instance.
Split full and differential backups into multiple files, instead of using a single file.
Enable backup compression to help the network transfer speeds.
Use Cloud Shell to run PowerShell or CLI scripts, because it will always be updated to the latest cmdlets
released.
Configure a maintenance window to allow scheduling of system updates for a specific day and time. This
configuration helps make database migration times more predictable, because impactful system
upgrades interrupt migrations in progress.
Plan to complete a single LRS migration job within a maximum of 30 days. On expiry of this timeframe, the
LRS job will be automatically canceled.
IMPORTANT
You can't use databases being restored through LRS until the migration process completes.
LRS doesn't support read-only access to databases during the migration.
After the migration completes, the migration process is finalized and can't be resumed with additional differential
backups.
TIP
System updates on SQL Managed Instance take precedence over database migrations in progress. If a system update occurs
on the managed instance, all pending LRS migrations are suspended and then resumed once the update has been
applied. This system behavior might prolong migration time, especially for large databases. To achieve a
predictable migration time, consider configuring a maintenance window that schedules system updates for a specific
day and time, and run and complete migration jobs outside of the scheduled maintenance window.
Steps to migrate
To migrate using LRS, follow the steps in this section.
Make database backups on SQL Server
You can make database backups on SQL Server by using either of the following options:
Back up to the local disk storage, and then upload files to Azure Blob Storage, if your environment restricts
direct backups to Blob Storage.
Back up directly to Blob Storage with the TO URL option in Transact-SQL (T-SQL), if your environment and
security procedures allow it.
Set databases that you want to migrate to the full recovery model to allow log backups.
-- To permit log backups, before the full database backup, modify the database to use the full recovery model
USE master
ALTER DATABASE SampleDB
SET RECOVERY FULL
GO
To manually make full, differential, and log backups of your database to local storage, use the following sample
T-SQL scripts. Ensure the CHECKSUM option is enabled, as it's mandatory for LRS.
The following example takes a full database backup to the local disk:
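The original sample isn't reproduced here; as a minimal sketch, assuming a database named SampleDB and a local backup folder (adjust names and paths as needed):
-- Hedged sketch: full database backup to local disk with CHECKSUM (mandatory for LRS)
BACKUP DATABASE [SampleDB]
TO DISK = N'C:\Backup\SampleDB_full.bak'
WITH INIT, COMPRESSION, CHECKSUM
GO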
The following example takes a transaction log backup to the local disk:
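Similarly, a minimal log backup sketch under the same placeholder assumptions:
-- Hedged sketch: transaction log backup to local disk with CHECKSUM
BACKUP LOG [SampleDB]
TO DISK = N'C:\Backup\SampleDB_log.trn'
WITH CHECKSUM
GO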
NOTE
To migrate multiple databases using the same Azure Blob Storage container, place all backup files of an individual database
into a separate folder inside the container. Use flat-file structure for each database folder, as nested folders aren't
supported.
-- Place all backup files for database 1 in a separate "database1" folder in a flat-file structure.
-- Don't use nested folders inside database1 folder.
https://<mystorageaccountname>.blob.core.windows.net/<containername>/<database1>/<all-database1-backup-
files>
-- Place all backup files for database 2 in a separate "database2" folder in a flat-file structure.
-- Don't use nested folders inside database2 folder.
https://<mystorageaccountname>.blob.core.windows.net/<containername>/<database2>/<all-database2-backup-
files>
-- Place all backup files for database 3 in a separate "database3" folder in a flat-file structure.
-- Don't use nested folders inside database3 folder.
https://<mystorageaccountname>.blob.core.windows.net/<containername>/<database3>/<all-database3-backup-
files>
4. Select the time frame for token expiration. Ensure the token is valid during your migration.
5. Select the time zone for the token: UTC or your local time.
IMPORTANT
The time zone of the token and your managed instance might differ. Ensure that the SAS token has the
appropriate validity period, taking time zones into consideration. To account for time zone differences, set the
FROM validity time well before your migration window starts, and the TO time well after you expect
your migration to complete.
IMPORTANT
Don't select any other permissions. If you do, LRS won't start. This security requirement is by-design.
7. Select Create .
The SAS authentication is generated with the time validity that you specified. You need the URI version of the
token, as shown in the following screenshot.
NOTE
Using SAS tokens created with permissions set through defining a stored access policy isn't supported at this time. Follow
the instructions in this article to manually specify Read and List permissions for the SAS token.
2. Copy the second part of the token, starting after the question mark ( ? ) all the way until the end of the
string. Use it as the StorageContainerSasToken parameter in PowerShell or the Azure CLI when starting
LRS.
NOTE
Don't include the question mark ( ? ) when you copy either part of the token.
Login-AzAccount
Select the appropriate subscription where your managed instance resides by using the following PowerShell
cmdlet:
Select-AzSubscription -SubscriptionId <subscription ID>
NOTE
When migrating multiple databases, LRS must be started separately for each database pointing to the full URI path of
Azure Blob storage container and the individual database folder.
az sql midb log-replay start -g mygroup --mi myinstance -n mymanageddb -a --last-bn "backup.bak"
--storage-uri "https://<mystorageaccountname>.blob.core.windows.net/<containername>/<databasefolder>"
--storage-sas "sv=2019-02-02&ss=b&srt=sco&sp=rl&se=2023-12-02T00:09:14Z&st=2019-11-
25T16:09:14Z&spr=https&sig=92kAe4QYmXaht%2Fgjocqwerqwer41s%3D"
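The CLI example above starts LRS in autocomplete mode. If you prefer PowerShell, a comparable hedged sketch is shown below; the resource names, collation, storage URI, SAS token, and backup file name are placeholders:
# Hedged sketch: start LRS in autocomplete mode with PowerShell (placeholder values)
Start-AzSqlInstanceDatabaseLogReplay -ResourceGroupName "mygroup" `
    -InstanceName "myinstance" `
    -Name "mymanageddb" `
    -Collation "SQL_Latin1_General_CP1_CI_AS" `
    -StorageContainerUri "https://<mystorageaccountname>.blob.core.windows.net/<containername>/<databasefolder>" `
    -StorageContainerSasToken "<SAS token>" `
    -AutoCompleteRestore `
    -LastBackupName "backup.bak"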
IMPORTANT
Ensure that the entire backup chain has been uploaded to Azure Blob Storage prior to starting the migration in
autocomplete mode. This mode doesn't allow new backup files to be added once the migration is in progress.
Ensure that you have specified the last backup file correctly, and that you have not uploaded more files after it. If the
system detects more backup files beyond the last specified backup file, the migration will fail.
IMPORTANT
Once LRS has been started in continuous mode, you'll be able to add new log and differential backups to Azure Blob
Storage until the manual cutover. Once manual cutover has been initiated, no additional differential files can be added, nor
restored.
When you start a background job, a job object returns immediately, even if the job takes an extended time to
complete. You can continue to work in the session without interruption while the job runs. For details on running
PowerShell as a background job, see the PowerShell Start-Job documentation.
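As a hedged sketch, the start cmdlet could be wrapped in a background job like this (placeholder values; it also assumes your Azure context is available to background jobs, for example via Enable-AzContextAutosave):
# Run the LRS start cmdlet as a background job so the session stays free (sketch)
Start-Job -ScriptBlock {
    Start-AzSqlInstanceDatabaseLogReplay -ResourceGroupName "mygroup" -InstanceName "myinstance" `
        -Name "mymanageddb" -Collation "SQL_Latin1_General_CP1_CI_AS" `
        -StorageContainerUri "<storage container URI>" -StorageContainerSasToken "<SAS token>"
}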
Similarly, to start an Azure CLI command on Linux as a background process, use the ampersand ( & ) at the end
of the LRS start command:
To monitor migration progress through the Azure CLI, use the following command:
To stop the migration process through the Azure CLI, use the following command:
To complete the migration process in LRS continuous mode through the Azure CLI, use the following command:
az sql midb log-replay complete -g mygroup --mi myinstance -n mymanageddb --last-backup-name "backup.bak"
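If you're using PowerShell instead of the Azure CLI, the corresponding cmdlets could be used as follows; this is a sketch, and the resource and backup file names are placeholders:
# Monitor the progress of the restore operation (placeholders)
Get-AzSqlInstanceDatabaseLogReplay -ResourceGroupName "mygroup" -InstanceName "myinstance" -Name "mymanageddb"

# Stop the migration process if needed
Stop-AzSqlInstanceDatabaseLogReplay -ResourceGroupName "mygroup" -InstanceName "myinstance" -Name "mymanageddb"

# Complete a continuous-mode migration after the last backup file has been restored
Complete-AzSqlInstanceDatabaseLogReplay -ResourceGroupName "mygroup" -InstanceName "myinstance" -Name "mymanageddb" -LastBackupName "backup.bak"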
Limitations
Consider the following limitations of LRS:
During the migration process, databases being migrated can't be used for read-only access on SQL Managed
Instance.
Configure maintenance window to allow scheduling of system updates at a specific day/time. Plan to run and
complete migrations outside of the scheduled maintenance window.
LRS requires databases on SQL Server to be backed up with the CHECKSUM option enabled.
The SAS token that LRS uses must be generated for the entire Azure Blob Storage container, and it must have
Read and List permissions only. For example, if you grant Read , List and Write permissions, LRS won't be
able to start because of the extra Write permission.
Using SAS tokens created with permissions set through defining a stored access policy isn't supported.
Follow the instructions in this article to manually specify Read and List permissions for the SAS token.
Backup files containing % and $ characters in the file name can't be consumed by LRS. Consider renaming
such file names.
Backup files for different databases must be placed in separate folders on Blob Storage in a flat-file structure.
Nested folders inside individual database folders aren't supported.
If using autocomplete mode, the entire backup chain needs to be available in advance on Azure Blob Storage.
It isn't possible to add new backup files in autocomplete mode. Use continuous mode if you need to add new
backup files while migration is in progress.
LRS must be started separately for each database pointing to the full URI path containing an individual
database folder.
LRS can support up to 100 simultaneous restore processes per single managed instance.
A single LRS job can run for a maximum of 30 days, after which it is automatically canceled.
TIP
If you require the database to be accessible for read-only workloads during the migration, a much longer timeframe to perform the migration,
or the minimum possible downtime, consider the link feature for Azure SQL Managed Instance as the recommended migration
solution in these cases.
Troubleshooting
After you start LRS, use the monitoring cmdlet (PowerShell: Get-AzSqlInstanceDatabaseLogReplay , or Azure CLI:
az sql midb log-replay show ) to see the status of the operation. If LRS fails to start after some time and you get
an error, check for the most common issues:
Does an existing database on SQL Managed Instance have the same name as the one you're trying to
migrate from SQL Server? Resolve this conflict by renaming one of the databases.
Was the database backup on SQL Server made via the CHECKSUM option?
Are the permissions granted for the SAS token Read and List only?
Did you copy the SAS token for LRS after the question mark ( ? ), with content starting like this:
sv=2020-02-10... ?
Is the SAS token valid for the entire time window of starting and completing the migration?
There might be mismatches due to the different time zones used for SQL Managed Instance and the SAS
token. Try regenerating the SAS token with a validity window that extends before and after
the current date.
Are the database name, resource group name, and managed instance name spelled correctly?
If you started LRS in autocomplete mode, was a valid filename for the last backup file specified?
Next steps
Learn more about migrating to SQL Managed Instance using the link feature.
Learn more about migrating from SQL Server to SQL Managed instance.
Learn more about differences between SQL Server and SQL Managed Instance.
Learn more about best practices to cost and size workloads migrated to Azure.
Migrate a certificate of a TDE-protected database
to Azure SQL Managed Instance
9/13/2022 • 5 minutes to read • Edit Online
IMPORTANT
A migrated certificate is used for restore of the TDE-protected database only. Soon after restore is done, the migrated
certificate gets replaced by a different protector, either a service-managed certificate or an asymmetric key from the key
vault, depending on the type of the TDE you set on the instance.
Prerequisites
To complete the steps in this article, you need the following prerequisites:
Pvk2Pfx command-line tool installed on the on-premises server or other computer with access to the
certificate exported as a file. The Pvk2Pfx tool is part of the Enterprise Windows Driver Kit, a self-contained
command-line environment.
Windows PowerShell version 5.0 or higher installed.
PowerShell
Azure CLI
NOTE
This article uses the Azure Az PowerShell module, which is the recommended PowerShell module for interacting with
Azure. To get started with the Az PowerShell module, see Install Azure PowerShell. To learn how to migrate to the Az
PowerShell module, see Migrate Azure PowerShell from AzureRM to Az.
IMPORTANT
The PowerShell Azure Resource Manager module is still supported by Azure SQL Managed Instance, but all future
development is for the Az.Sql module. For these cmdlets, see AzureRM.Sql. The arguments for the commands in the Az
module and in the AzureRM modules are substantially identical.
USE master
GO
SELECT db.name as [database_name], cer.name as [certificate_name]
FROM sys.dm_database_encryption_keys dek
LEFT JOIN sys.certificates cer
ON dek.encryptor_thumbprint = cer.thumbprint
INNER JOIN sys.databases db
ON dek.database_id = db.database_id
WHERE dek.encryption_state = 3
3. Execute the following script to export the certificate to a pair of files (.cer and .pvk), keeping the public
and private key information:
USE master
GO
BACKUP CERTIFICATE TDE_Cert
TO FILE = 'c:\full_path\TDE_Cert.cer'
WITH PRIVATE KEY (
FILE = 'c:\full_path\TDE_Cert.pvk',
ENCRYPTION BY PASSWORD = '<SomeStrongPassword>'
)
4. Use the PowerShell console to copy certificate information from a pair of newly created files to a .pfx file,
using the Pvk2Pfx tool:
certlm
2. In the Certificates MMC snap-in, expand the path Personal > Certificates to see the list of certificates.
3. Right-click the certificate and click Export .
4. Follow the wizard to export the certificate and private key to a .pfx format.
2. Once all preparation steps are done, run the following commands to upload base-64 encoded certificate
to the target managed instance:
The certificate is now available to the specified managed instance, and the backup of the corresponding TDE-
protected database can be restored successfully.
Next steps
In this article, you learned how to migrate a certificate protecting the encryption key of a database with
Transparent Data Encryption, from the on-premises or IaaS SQL Server instance to Azure SQL Managed
Instance.
See Restore a database backup to an Azure SQL Managed Instance to learn how to restore a database backup to
Azure SQL Managed Instance.
Prepare your environment for a link - Azure SQL
Managed Instance
9/13/2022 • 14 minutes to read • Edit Online
NOTE
The link is a feature of Azure SQL Managed Instance and is currently in preview.
Prerequisites
To use the link with Azure SQL Managed Instance, you need the following prerequisites:
An active Azure subscription. If you don't have one, create a free account.
Supported version of SQL Server with required service update installed.
Azure SQL Managed Instance. Get started if you don't have it.
Ensure that your SQL Server version has the appropriate servicing update installed, as listed below. You must
restart your SQL Server instance during the update.
SQL Server 2022 (16.x): Evaluation Edition (preview); host OS: Windows Server; servicing update requirement: you must sign up at https://aka.ms/mi-link-2022-signup to participate in the preview experience.
SQL Server 2019 (15.x): Enterprise, Standard, or Developer edition; host OS: Windows Server; servicing update requirement: SQL Server 2019 CU15 (KB5008996) or above for Enterprise and Developer editions, and CU17 (KB5016394) or above for Standard edition.
SQL Server 2016 (13.x): Enterprise, Standard, or Developer edition; host OS: Windows Server; servicing update requirement: SQL Server 2016 SP3 (KB5003279) and the SQL Server 2016 Azure Connect pack (KB5014242).
To make sure that you have the database master key, use the following T-SQL script on SQL Server:
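The original script isn't shown here; as a hedged sketch, checks along these lines could be used:
-- Hedged sketch: check for a database master key in the master database
USE master;
SELECT name, create_date
FROM sys.symmetric_keys
WHERE name = '##MS_DatabaseMasterKey##';

-- Hedged sketch: check whether the Always On availability groups feature is enabled (1 = enabled, 0 = disabled)
SELECT SERVERPROPERTY('IsHadrEnabled') AS IsHadrEnabled;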
The query results show whether the Always On availability groups feature is enabled on your SQL Server.
IMPORTANT
For SQL Server 2016, if you need to enable Always On availability group, you will need to complete extra steps
documented in prepare SQL Server 2016 prerequisites. These extra steps are not required for all higher SQL Server
versions (2019-2022) supported by the link.
If the availability groups feature isn't enabled, follow these steps to enable it, or otherwise skip to the next
section:
1. Open SQL Server Configuration Manager.
2. Select SQL Server Services from the left pane.
3. Right-click the SQL Server service, and then select Properties .
If you're using SQL Server 2016 , and the Enable Always On Availability Groups option is disabled with the
message This computer is not a node in a failover cluster. , follow the extra steps described in prepare
SQL Server 2016 prerequisites. Once you've completed those steps, come back and retry this
step again.
6. Select OK in the dialog.
7. Restart the SQL Server service.
Enable startup trace flags
To optimize the performance of your SQL Managed Instance link, we recommend enabling the following trace
flags at startup:
-T1800 : This trace flag optimizes performance when the log files for the primary and secondary replicas in
an availability group are hosted on disks with different sector sizes, such as 512 bytes and 4K. If both primary
and secondary replicas have a disk sector size of 4K, this trace flag isn't required. To learn more, review
KB3009974.
-T9567 : This trace flag enables compression of the data stream for availability groups during automatic
seeding. The compression increases the load on the processor but can significantly reduce transfer time
during seeding.
To enable these trace flags at startup, use the following steps:
1. Open SQL Server Configuration Manager.
2. Select SQL Server Services from the left pane.
3. Right-click the SQL Server service, and then select Properties .
4. Go to the Startup Parameters tab. In Specify a startup parameter , enter -T1800 and select Add to
add the startup parameter. Then enter -T9567 and select Add to add the other trace flag. Select Apply to
save your changes.
5. Select OK to close the Properties window.
To learn more, review the syntax for enabling trace flags.
Restart SQL Server and validate the configuration
After you've ensured that you're on a supported version of SQL Server, enabled the Always On availability
groups feature, and added your startup trace flags, restart your SQL Server instance to apply all of these
changes:
1. Open SQL Server Configuration Manager .
2. Select SQL Server Services from the left pane.
3. Right-click the SQL Server service, and then select Restart .
After the restart, run the following T-SQL script on SQL Server to validate the configuration of your SQL Server
instance:
-- Run on SQL Server
-- Shows the version and CU of SQL Server
SELECT @@VERSION as 'SQL Server version'
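The snippet above only returns the version. The validation described next also covers the Always On setting and the startup trace flags; a hedged sketch of those additional checks:
-- Hedged sketch: shows whether the Always On availability groups feature is enabled (1 = enabled)
SELECT SERVERPROPERTY('IsHadrEnabled') AS [Is Always On enabled];

-- Hedged sketch: lists the trace flags currently enabled on the instance
DBCC TRACESTATUS;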
Your SQL Server version should be one of the supported versions with service updates applied, the Always On
availability groups feature should be enabled, and you should have the trace flags -T1800 and -T9567 enabled.
The following screenshot is an example of the expected outcome for a SQL Server instance that has been
properly configured:
NOTE
Global virtual network peering is enabled by default on managed instances provisioned after November 2020. Raise a
support ticket to enable global virtual network peering on older instances.
TIP
We recommend ExpressRoute for the best network performance when you're replicating data. Provision a gateway with
enough bandwidth for your use case.
Port numbers can't be changed or customized. The IP address ranges of the subnets hosting the managed instance and SQL
Server must not overlap.
The following list describes the port actions for each environment:
SQL Server (in Azure): Open both inbound and outbound traffic on port 5022 for the network firewall to the entire subnet IP range of SQL Managed Instance. If necessary, do the same on the SQL Server host OS (Windows/Linux) firewall. Create a network security group (NSG) rule in the virtual network that hosts the VM to allow communication on port 5022.
SQL Server (outside Azure): Open both inbound and outbound traffic on port 5022 for the network firewall to the entire subnet IP range of SQL Managed Instance. If necessary, do the same on the SQL Server host OS (Windows/Linux) firewall.
SQL Managed Instance: Create an NSG rule in the Azure portal to allow inbound and outbound traffic from the IP address and the network hosting SQL Server on port 5022 and port range 11000-11999.
Use the following PowerShell script on the Windows host OS of the SQL Server instance to open ports in the
Windows firewall:
New-NetFirewallRule -DisplayName "Allow TCP port 5022 inbound" -Direction inbound -Profile Any -Action Allow
-LocalPort 5022 -Protocol TCP
New-NetFirewallRule -DisplayName "Allow TCP port 5022 outbound" -Direction outbound -Profile Any -Action
Allow -LocalPort 5022 -Protocol TCP
To verify that the SQL Server endpoint is receiving connections on port 5022, run the following PowerShell
command on the host operating system of your SQL Server instance:
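The exact command isn't shown here; a minimal hedged sketch is:
# Test whether the local SQL Server endpoint accepts TCP connections on port 5022
Test-NetConnection -ComputerName localhost -Port 5022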
A successful test shows TcpTestSucceeded : True . You can then proceed to creating a SQL Agent job on the
managed instance to try testing the SQL Server test endpoint on port 5022 from the managed instance.
Next, create a SQL Agent job on the managed instance called NetHelper by running the following T-SQL script
on the managed instance. Replace:
<SQL_SERVER_IP_ADDRESS> with the IP address of SQL Server that can be accessed from managed instance.
IF EXISTS(select * from msdb.dbo.sysjobs where name = 'NetHelper') THROW 70000, 'Agent job NetHelper already
exists. Please rename the job, or drop the existing job before creating it again.', 1
-- To delete NetHelper job run: EXEC msdb.dbo.sp_delete_job @job_name=N'NetHelper'
Then, create a stored procedure ExecuteNetHelper that will help run the job and obtain results from the network
probe. Run the following T-SQL script on managed instance:
Run the following query on managed instance to execute the stored procedure that will execute the NetHelper
agent job and show the resulting log:
-- Run on managed instance
EXEC ExecuteNetHelper
If the connection was successful, the log will show True . If the connection was unsuccessful, the log will show
False .
Caution
Proceed with the next steps only if you've validated network connectivity between your source and target
environments. Otherwise, troubleshoot network connectivity issues before proceeding.
Install SSMS
SQL Server Management Studio (SSMS) is the easiest way to use the SQL Managed Instance link. Download
SSMS version 18.12.1 or later, and install it on your client machine.
After installation finishes, open SSMS and connect to your supported SQL Server instance. Right-click a user
database and validate that the Azure SQL Managed Instance link option appears on the menu.
Next steps
After you've prepared your environment, you're ready to start replicating your database. To learn more,
review Link feature for Azure SQL Managed Instance.
Prepare SQL Server 2016 prerequisites - Azure SQL
Managed Instance link
9/13/2022 • 5 minutes to read • Edit Online
Alternatively, you can use Server Manager to install the WSFC module through the graphical user interface.
If you need to remove the cluster in the future, this can be done only with the PowerShell
command Remove-Cluster .
If you've successfully created the cluster using this method, skip ahead to Grant permissions in SQL Server for
WSFC.
Create cluster using Failover Cluster Manager application
Alternatively, a more flexible way to create a cluster on the Windows OS hosting the SQL Server is through the
graphical user interface, using the Failover Cluster Manager application. Follow these steps:
1. Find out your Windows Server name by executing hostname command from the command prompt.
2. Record the output of this command (sample output marked in the image below), or keep this window
open as you'll use this name in one of the next steps.
3. Open Failover Cluster Manager by pressing Windows key + R on the keyboard, type
%windir%\system32\Cluadmin.msc , and click OK.
Alternatively, Failover Cluster Manager can be accessed by opening Server Manager, selecting Tools in
the upper right corner, and then selecting Failover Cluster Manager.
4. In Windows Cluster manager, click on Create Cluster option.
Verification
To verify that single-node WSFC cluster has been created, follow these steps:
1. In the Failover Cluster Manager, click on the cluster name on the left-hand side, and expand it by clicking
on the > arrow.
In case that you've closed and reopened Failover Cluster Manager after its creation, the cluster name
might not show up on the left-hand side (see the image below).
2. Click on Connect to Cluster on the right-hand side, choose to connect to <Cluster on this server...> ,
and click OK.
3. Click on Nodes.
You should see the local machine added as the single node of this cluster, with its Status
shown as Up . This verification confirms the WSFC configuration has been completed successfully. You
can now close the Failover Cluster Manager tool.
Next, verify that Always On option can be enabled on SQL Server by following these steps:
1. Open SQL Server Configuration Manager
2. Double-click on SQL Server
3. Click on the Always On High Availability tab.
You should see the name of the WSFC you've created, and you should be able to select
the Enable Always On Availability Groups option. This verification confirms the configuration
has been completed successfully.
Next, grant permissions on SQL Server to the NT Authority \ System Windows host OS system account, to enable
creation of availability groups in SQL Server using WSFC. Execute the following T-SQL script on your SQL
Server:
1. Log in to your SQL Server, using a client such as SSMS.
2. Execute the following T-SQL script.
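The script itself isn't included here; a hedged sketch of the grants typically needed is shown below (verify against the full guide before running):
-- Hedged sketch: grants typically required so that WSFC (running as NT AUTHORITY\SYSTEM)
-- can create and manage availability groups on this SQL Server instance
GRANT ALTER ANY AVAILABILITY GROUP TO [NT AUTHORITY\SYSTEM];
GRANT CONNECT SQL TO [NT AUTHORITY\SYSTEM];
GRANT VIEW SERVER STATE TO [NT AUTHORITY\SYSTEM];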
Next steps
Continue environment preparation for the link by returning to enable Always On on your SQL Server section
in prepare your environment for a link guide.
To learn more about configuring multiple-node WSFC (not mandatory, and only optional for the link), see
Create a failover cluster guide for Windows Server.
Replicate a database by using the link feature in
SSMS - Azure SQL Managed Instance
9/13/2022 • 3 minutes to read • Edit Online
NOTE
The link is a feature of Azure SQL Managed Instance and is currently in preview.
Prerequisites
To replicate your databases to SQL Managed Instance through the link, you need the following prerequisites:
An active Azure subscription. If you don't have one, create a free account.
Supported version of SQL Server with required service update installed.
Azure SQL Managed Instance. Get started if you don't have it.
SQL Server Management Studio v18.12.1 or later.
A properly prepared environment.
Set up database recovery and backup
All databases that will be replicated via the link must be in full recovery mode and have at least one full backup.
Use SSMS to back up your database. Follow these steps:
1. In SSMS, right-click a database name on SQL Server.
2. Select Tasks, and then select Back Up.
3. Ensure the Backup type is Full.
4. Ensure the Back up to option points to a disk path with sufficient free storage space available.
5. Select OK to complete the full backup.
For more information, see Create a Full Database Backup.
Replicate a database
In the following steps, you use the Managed Instance link wizard in SSMS to create the link between SQL
Server and SQL Managed Instance. After you create the link, your source database gets a read-only replica copy
on your target managed instance.
NOTE
The link supports replication of user databases only. Replication of system databases is not supported. To replicate
instance-level objects (stored in master or msdb databases), we recommend that you script them out and run T-SQL
scripts on the destination instance.
5. On the Select Databases page, choose one or more databases that you want to replicate to SQL
Managed Instance via the link feature. Then select Next .
6. On the Login to Azure and select Managed Instance page, select Sign In to sign in to Microsoft
Azure.
If you're running SSMS on Windows Server, in some cases the sign-in screen might not appear, and you might
instead see the error message
Content within this application coming from the website listed below is being blocked by Internet
Explorer Enhanced Security Configuration.
This happens when Windows Server blocks web content from rendering due to its security settings
configuration. In this case, you'll need to turn off Internet Explorer ESC on the Windows server.
7. On the Login to Azure and select Managed Instance page, choose the subscription, resource group,
and target managed instance from the dropdown lists. Select Login and provide login details for SQL
Managed Instance. After you've provided all necessary information, select Next .
8. Review the prepopulated values on the Specify Distributed AG Options page, and change any that
need customization. When you're ready, select Next .
9. Review the actions on the Summary page. Optionally, select Script to create a script that you can run at
a later time. When you're ready, select Finish .
10. The Executing actions page displays the progress of each action.
11. After all steps finish, the Results page shows check marks next to the successfully completed actions. You
can now close the window.
Connect to your managed instance and use Object Explorer to view your replicated database. Depending on the
database size and network speed, the database might initially be in a Restoring state. After initial seeding
finishes, the database is restored to the managed instance and ready for read-only workloads.
Next steps
To break the link and fail over your database to SQL Managed Instance, see Failover a database. To learn more,
see Link feature for Azure SQL Managed Instance.
Replicate a database with the link feature via T-SQL
and PowerShell scripts - Azure SQL Managed
Instance
9/13/2022 • 21 minutes to read • Edit Online
NOTE
The link is a feature of Azure SQL Managed Instance and is currently in preview.
You can also use a SQL Server Management Studio (SSMS) wizard to set up the link to replicate your database.
Prerequisites
To replicate your databases to SQL Managed Instance, you need the following prerequisites:
An active Azure subscription. If you don't have one, create a free account.
Supported version of SQL Server with required service update installed.
Azure SQL Managed Instance. Get started if you don't have it.
PowerShell module Az.SQL 3.9.0, or higher
A properly prepared environment.
Replicate a database
Use the following instructions to manually set up the link between your SQL Server instance and managed
instance. After the link is created, your source database gets a read-only replica copy on your target managed
instance.
NOTE
The link supports replication of user databases only. Replication of system databases is not supported. To replicate
instance-level objects (stored in master or msdb databases), we recommend that you script them out and run T-SQL
scripts on the destination instance.
SQL Server name: the short, single-word SQL Server name (for example, sqlserver1). To find it, run SELECT @@SERVERNAME from T-SQL.
SQL Server FQDN: the fully qualified domain name (FQDN) of your SQL Server (for example, sqlserver1.domain.com). To find it, see your network (DNS) configuration on-premises, or the server name if you're using an Azure virtual machine (VM).
SQL Managed Instance name: the short, single-word SQL Managed Instance name (for example, managedinstance1). To find it, see the name of your managed instance in the Azure portal.
SQL Managed Instance FQDN: the fully qualified domain name (FQDN) of your SQL Managed Instance (for example, managedinstance1.6d710bcf372b.database.windows.net). To find it, see the host name on the SQL Managed Instance overview page in the Azure portal.
Resolvable domain name: a DNS name that can be resolved to an IP address. For example, running nslookup sqlserver1.domain.com should return an IP address such as 10.0.0.1. To find it, run the nslookup command from the command prompt.
SQL Server IP: the IP address of your SQL Server. In case of multiple IPs on SQL Server, choose the IP address that is accessible from Azure. To find it, run the ipconfig command from the command prompt of the host OS running SQL Server.
Certificate-based trust is the only supported way to secure database mirroring endpoints on SQL Server and
SQL Managed Instance. If you have existing availability groups that use Windows authentication, you need to add
certificate-based trust to the existing mirroring endpoint as a secondary authentication option. You can do this
by using the ALTER ENDPOINT statement, as shown later in this article.
IMPORTANT
Certificates are generated with an expiration date and time. They must be renewed and rotated before they expire.
Here's an overview of the process to secure database mirroring endpoints for both SQL Server and SQL
Managed Instance:
1. Generate a certificate on SQL Server and obtain its public key.
2. Obtain a public key of the SQL Managed Instance certificate.
3. Exchange the public keys between SQL Server and SQL Managed Instance.
4. Import Azure-trusted root certificate authority keys to SQL Server
The following sections describe these steps in detail.
Create a certificate on SQL Server and import its public key to SQL Managed Instance
First, create a database master key in the master database, if one is not already present. Insert your password in place of
<strong_password> in the script below, and keep it in a confidential and secure place. Run this T-SQL script on
SQL Server:
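The script isn't reproduced here; a minimal hedged sketch, with the password as a placeholder, is:
-- Hedged sketch: create a database master key in master if one doesn't already exist
-- (replace <strong_password> with your own strong password)
USE master;
IF NOT EXISTS (SELECT * FROM sys.symmetric_keys WHERE name = '##MS_DatabaseMasterKey##')
    CREATE MASTER KEY ENCRYPTION BY PASSWORD = '<strong_password>';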
Then, generate an authentication certificate on SQL Server. In the script below replace:
@cert_expiry_date with the desired certificate expiration date (future date).
Record this date and set a reminder to rotate (update) the SQL Server certificate before it expires, to ensure
continuous operation of the link.
IMPORTANT
It is strongly recommended to use the auto-generated certificate name from this script. While customizing your own
certificate name on SQL Server is allowed, this name should not contain any \ characters.
-- Create the SQL Server certificate for the instance link
USE MASTER
-- Customize SQL Server certificate expiration date by adjusting the date below
DECLARE @cert_expiry_date AS varchar(max)='03/30/2025'
Then, use the following T-SQL query on SQL Server to verify that the certificate has been created:
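The verification query isn't shown here; as a hedged sketch, you could list the certificates and how their private keys are protected:
-- Hedged sketch: list certificates in master and how their private keys are protected
USE master;
SELECT name, subject, expiry_date, pvt_key_encryption_type_desc
FROM sys.certificates;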
In the query results, you'll see that the certificate has been encrypted with the master key.
Now, you can get the public key of the generated certificate on SQL Server:
Save the values of SQLServerCertName and SQLServerPublicKey from the output, because you'll need them in the next
step.
For the next step, use PowerShell with the installed Az.Sql module 3.9.0, or higher. Or preferably, use Azure
Cloud Shell online from the web browser to run the commands, because it's always updated with the latest
module versions.
First, ensure that you're logged in to Azure and that you've selected the subscription where your managed
instance is hosted. Selecting the proper subscription is especially important if you have more than one Azure
subscription on your account. Replace:
<SubscriptionID> with your Azure subscription ID.
# Run in Azure Cloud Shell (select PowerShell console)
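# (Hedged sketch; the original sign-in commands are not shown in this extract.
# <SubscriptionID> is a placeholder.)
Connect-AzAccount
Select-AzSubscription -SubscriptionId "<SubscriptionID>"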
Then, run the following script in Azure Cloud Shell (PowerShell console). Fill out necessary user information,
copy it, paste it, and then run the script. Replace:
<SQLServerPublicKey> with the public portion of the SQL Server certificate in binary format, which you've
recorded in the previous step. It's a long string value that starts with 0x .
<SQLServerCertName> with the SQL Server certificate name you've recorded in the previous step.
<ManagedInstanceName> with the short name of your managed instance.
# Enter the name for the server SQLServerCertName certificate – for example, "Cert_sqlserver1_endpoint"
$CertificateName = "<SQLServerCertName>"
# Insert the certificate public key blob that you got from SQL Server – for example, "0x1234567..."
$PublicKeyEncoded = "<SQLServerPublicKey>"
# Upload the public key of the authentication certificate from SQL Server to Azure.
New-AzSqlInstanceServerTrustCertificate -ResourceGroupName $ResourceGroup -InstanceName $ManagedInstanceName
-Name $CertificateName -PublicKey $PublicKeyEncoded
The result of this operation will be a summary of the uploaded SQL Server certificate to Azure.
If needed, you can see all SQL Server certificates uploaded to a managed instance by using the
Get-AzSqlInstanceServerTrustCertificate PowerShell command in Azure Cloud Shell. To remove a SQL Server
certificate uploaded to a managed instance, use the Remove-AzSqlInstanceServerTrustCertificate PowerShell
command in Azure Cloud Shell.
Get the certificate public key from SQL Managed Instance and import it to SQL Server
The certificate for securing the link endpoint is automatically generated on Azure SQL Managed Instance. This
section describes how to get the certificate public key from SQL Managed Instance, and how to import it to SQL
Server.
Run the following script in Azure Cloud Shell. Replace:
<SubscriptionID> with your Azure subscription ID.
<ManagedInstanceName> with the short name of your managed instance.
# Run in Azure Cloud Shell (select PowerShell console)
# ===============================================================================
# POWERSHELL SCRIPT TO EXPORT MANAGED INSTANCE PUBLIC CERTIFICATE
# ===== Enter user variables here ====
# Enter your managed instance name – for example, "sqlmi1"
$ManagedInstanceName = "<ManagedInstanceName>"
# Look up the resource group of the managed instance (one possible approach; you can also set it directly)
$ResourceGroup = (Get-AzSqlInstance | Where-Object { $_.ManagedInstanceName -eq $ManagedInstanceName }).ResourceGroupName
# Fetch the public key of the authentication certificate from Managed Instance. Outputs a binary key in the property PublicKey.
Get-AzSqlInstanceEndpointCertificate -ResourceGroupName $ResourceGroup -InstanceName $ManagedInstanceName -EndpointType "DATABASE_MIRRORING" | out-string
Copy the entire PublicKey output (it starts with 0x) from Azure Cloud Shell, because you'll need it in the next step.
Alternatively, if you encounter issues copying the PublicKey from the Azure Cloud Shell console, you can also
run the T-SQL command EXEC sp_get_endpoint_certificate 4 on the managed instance to obtain its public key for
the link endpoint.
Next, import the public key of the managed instance security certificate to SQL Server. Run the following
query on SQL Server. Replace:
<ManagedInstanceFQDN> with the fully qualified domain name of managed instance.
<PublicKey> with the PublicKey value obtained in the previous step (from Azure Cloud Shell, starting with
0x ). You don't need to use quotation marks.
IMPORTANT
The name of the certificate must be the SQL Managed Instance FQDN and must not be modified. The link won't be
operational if you use a custom name.
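A sketch of the import statement, using the standard CREATE CERTIFICATE ... FROM BINARY syntax (substitute the placeholders as described above):
-- Run on SQL Server
-- Import the public key of the managed instance authentication certificate
USE MASTER
CREATE CERTIFICATE [<ManagedInstanceFQDN>]
FROM BINARY = <PublicKey>
GO
The script that follows registers the Microsoft and DigiCert PKI root authorities as trusted issuers for *.database.windows.net certificates.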
--Trust certificates issued by Microsoft PKI root authority for Azure database.windows.net domains
IF NOT EXISTS (SELECT 1 FROM sys.certificates WHERE name = N'MicrosoftPKI')
BEGIN
    -- CREATE CERTIFICATE [MicrosoftPKI] FROM BINARY = 0x... (full certificate binary value omitted)
    DECLARE @CERTID int
    SELECT @CERTID = CERT_ID('MicrosoftPKI')
    EXEC sp_certificate_add_issuer @CERTID, N'*.database.windows.net'
END
ELSE
    PRINT 'Certificate MicrosoftPKI already exists.'
GO
--Trust certificates issued by DigiCert PKI root authority for Azure database.windows.net domains
IF NOT EXISTS (SELECT 1 FROM sys.certificates WHERE name = N'DigiCertPKI')
BEGIN
    -- CREATE CERTIFICATE [DigiCertPKI] FROM BINARY = 0x... (full certificate binary value omitted)
    DECLARE @CERTID int
    SELECT @CERTID = CERT_ID('DigiCertPKI')
    EXEC sp_certificate_add_issuer @CERTID, N'*.database.windows.net'
END
ELSE
    PRINT 'Certificate DigiCertPKI already exists.'
GO
Finally, verify all created certificates by using the following dynamic management view (DMV):
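For example, a simple check against the sys.certificates view (a sketch):
-- Run on SQL Server
-- List all certificates created on the instance
SELECT * FROM sys.certificates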
If the preceding query doesn't show an existing database mirroring endpoint, run the following script on SQL
Server to obtain the name of the SQL Server certificate generated earlier.
Validate that the mirroring endpoint was created by running the following script on SQL Server:
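A sketch of such a validation query against the sys.database_mirroring_endpoints view:
-- Run on SQL Server
-- Check whether a database mirroring endpoint exists on the instance
SELECT name, state_desc, connection_auth_desc, encryption_algorithm_desc
FROM sys.database_mirroring_endpoints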
NOTE
Skip this step if you've just created a new mirroring endpoint. Use this step only if you're using existing availability groups
with an existing database mirroring endpoint.
If you're using existing availability groups for the link, or if there's an existing database mirroring endpoint, first
validate that it satisfies the following mandatory conditions for the link:
Type must be DATABASE_MIRRORING .
Connection authentication must be CERTIFICATE .
Encryption must be enabled.
Encryption algorithm must be AES .
Run the following query on SQL Server to view details for an existing database mirroring endpoint:
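A sketch of a query that returns the properties listed above:
-- Run on SQL Server
-- View authentication and encryption details for the existing database mirroring endpoint
SELECT name, type_desc, state_desc, connection_auth_desc,
    is_encryption_enabled, encryption_algorithm_desc
FROM sys.database_mirroring_endpoints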
On SQL Server, the same database mirroring endpoint is used for both availability groups and distributed
availability groups. If the endpoint's connection_auth_desc is NTLM (Windows authentication) or KERBEROS, and
you need Windows authentication for an existing availability group, you can alter the endpoint to use
multiple authentication methods by switching the authentication option to NEGOTIATE CERTIFICATE. This change
allows the existing availability group to use Windows authentication, while using certificate authentication for
SQL Managed Instance.
Similarly, if encryption doesn't include AES and you need RC4 encryption, it's possible to alter the endpoint to
use both algorithms. For details about possible options for altering endpoints, see the documentation page for
sys.database_mirroring_endpoints.
The following script is an example of how to alter your existing database mirroring endpoint on SQL Server.
Replace:
<YourExistingEndpointName> with your existing endpoint name.
<SQLServerCertName> with the name of the generated SQL Server certificate (obtained in one of the earlier
steps above).
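A sketch of such an ALTER ENDPOINT statement, assuming you want Windows NEGOTIATE plus certificate authentication and AES encryption:
-- Run on SQL Server
-- Alter the existing database mirroring endpoint to add certificate authentication and AES encryption
ALTER ENDPOINT [<YourExistingEndpointName>]
    FOR DATABASE_MIRRORING (
        AUTHENTICATION = WINDOWS NEGOTIATE CERTIFICATE [<SQLServerCertName>],
        ENCRYPTION = REQUIRED ALGORITHM AES
    );
GO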
Depending on your specific configuration, you might need to customize the script further. You can also use
SELECT * FROM sys.certificates to get the name of the created certificate on SQL Server.
After you run the ALTER endpoint query and set the dual authentication mode to Windows and certificate, use
this query again on SQL Server to show details for the database mirroring endpoint:
You've successfully modified your database mirroring endpoint for a SQL Managed Instance link.
First, find out your SQL Server name by running the following T-SQL statement:
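For example:
-- Run on SQL Server
-- Return the name of the local SQL Server instance
SELECT @@SERVERNAME AS SQLServerName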
Then, use the following script to create the availability group on SQL Server. Replace:
<AGName> with the name of your availability group. For multiple databases, you'll need to create multiple
availability groups. A Managed Instance link requires one database per availability group. Consider naming
each availability group so that its name reflects the corresponding database - for example, AG_<db_name> .
<DatabaseName> with the name of database that you want to replicate.
<SQLServerName> with the name of your SQL Server instance obtained in the previous step.
<SQLServerIP> with the SQL Server IP address. You can use a resolvable SQL Server host machine name as
an alternative, but you need to make sure that the name is resolvable from the SQL Managed Instance virtual
network.
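A sketch of the statement (the availability mode, failover mode, and seeding options shown are typical choices, not necessarily the original article's exact values):
-- Run on SQL Server
-- Create the availability group for the database that will be replicated through the link
CREATE AVAILABILITY GROUP [<AGName>]
WITH (CLUSTER_TYPE = NONE)
FOR DATABASE [<DatabaseName>]
REPLICA ON N'<SQLServerName>' WITH
(
    ENDPOINT_URL = 'TCP://<SQLServerIP>:5022',
    AVAILABILITY_MODE = SYNCHRONOUS_COMMIT,
    FAILOVER_MODE = MANUAL,
    SEEDING_MODE = AUTOMATIC
);
GO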
IMPORTANT
For SQL Server 2016, delete WITH (CLUSTER_TYPE = NONE) from the above T-SQL statement. Leave as-is for all higher
SQL Server versions.
Next, create distributed availability group on SQL Server. In the following code, replace:
<DAGName> with the name of your distributed availability group. When you're replicating several databases,
you need one availability group and one distributed availability group for each database. Consider naming
each item accordingly - for example, DAG_<db_name> .
<AGName> with the name of the availability group that you created in the previous step.
<SQLServerIP> with the IP address of SQL Server from the previous step. You can use a resolvable SQL
Server host machine name as an alternative, but make sure that the name is resolvable from the SQL
Managed Instance virtual network (requires configuration of custom Azure DNS for managed instance's
subnet).
<ManagedInstanceName> with the short name of your managed instance.
<ManagedInstanceFQDN> with the fully qualified domain name of your managed instance.
-- Run on SQL Server
-- Create a distributed availability group for the availability group and database
-- ManagedInstanceName example: 'sqlmi1'
-- ManagedInstanceFQDN example: 'sqlmi1.73d19f36a420a.database.windows.net'
USE MASTER
CREATE AVAILABILITY GROUP [<DAGName>]
WITH (DISTRIBUTED)
AVAILABILITY GROUP ON
N'<AGName>' WITH
(
LISTENER_URL = 'TCP://<SQLServerIP>:5022',
AVAILABILITY_MODE = ASYNCHRONOUS_COMMIT,
FAILOVER_MODE = MANUAL,
SEEDING_MODE = AUTOMATIC,
SESSION_TIMEOUT = 20
),
N'<ManagedInstanceName>' WITH
(
LISTENER_URL = 'tcp://<ManagedInstanceFQDN>:5022;Server=[<ManagedInstanceName>]',
AVAILABILITY_MODE = ASYNCHRONOUS_COMMIT,
FAILOVER_MODE = MANUAL,
SEEDING_MODE = AUTOMATIC
);
GO
Alternatively, you can use SSMS Object Explorer to find availability groups and distributed availability groups.
Expand the Always On High Availability folder and then the Availability Groups folder.
Create a link
The final step of the setup process is to create the link.
For simplicity of the process, sign in to the Azure portal and run the following PowerShell script from Azure
Cloud Shell. Replace:
<ManagedInstanceName> with the short name of your managed instance.
<AGName> with the name of the availability group created on SQL Server.
<DAGName> with the name of the distributed availability group created on SQL Server.
<DatabaseName> with the database replicated in the availability group on SQL Server.
<SQLServerIP> with the IP address of your SQL Server. The provided IP address must be accessible by
managed instance.
# Run in Azure Cloud Shell
# =============================================================================
# POWERSHELL SCRIPT FOR CREATING MANAGED INSTANCE LINK
# Instructs Managed Instance to join distributed availability group on SQL Server
# ===== Enter user variables here ====
# Enter your managed instance name – for example, "sqlmi1"
$ManagedInstanceName = "<ManagedInstanceName>"
# Enter the availability group name that was created on SQL Server
$AGName = "<AGName>"
# Enter the distributed availability group name that was created on SQL Server
$DAGName = "<DAGName>"
# Enter the database name that was placed in the availability group for replication
$DatabaseName = "<DatabaseName>"
# Enter the SQL Server IP address
$SQLServerIP = "<SQLServerIP>"
# Build the source endpoint URL from the SQL Server IP (assumes the default mirroring endpoint port 5022)
$SourceIP = "TCP://" + $SQLServerIP + ":5022"
# Look up the resource group of the managed instance (one possible approach; you can also set it directly)
$ResourceGroup = (Get-AzSqlInstance | Where-Object { $_.ManagedInstanceName -eq $ManagedInstanceName }).ResourceGroupName
# Create link on managed instance. Join distributed availability group on SQL Server.
New-AzSqlInstanceLink -ResourceGroupName $ResourceGroup -InstanceName $ManagedInstanceName -Name $DAGName -PrimaryAvailabilityGroupName $AGName -SecondaryAvailabilityGroupName $ManagedInstanceName -TargetDatabase $DatabaseName -SourceEndpoint $SourceIP
The result of this operation will be a time stamp of the successful execution of the request to create a link.
If needed, to see all links on a managed instance, use the Get-AzSqlInstanceLink PowerShell command in
Azure Cloud Shell. To remove an existing link, use the Remove-AzSqlInstanceLink PowerShell command in Azure
Cloud Shell.
NOTE
The link feature supports one database per link. To replicate multiple databases on an instance, create a link for each
individual database. For example, to replicate 10 databases to SQL Managed Instance, create 10 individual links.
After the connection is established, the Managed Instance Databases view in SSMS initially shows the
replicated databases in a Restoring state while the initial seeding phase moves and restores the full backup of the
database. After the database is restored, replication has to catch up to bring the two databases to a synchronized
state. The database will no longer be in the Restoring state after the initial seeding finishes. Seeding small databases
might be fast enough that you won't see the initial Restoring state in SSMS.
IMPORTANT
The link won't work unless network connectivity exists between SQL Server and SQL Managed Instance. To
troubleshoot network connectivity, follow the steps in Test bidirectional network connectivity.
Take regular backups of the log file on SQL Server. If the used log space reaches 100 percent, replication to SQL
Managed Instance stops until space use is reduced. We highly recommend that you automate log backups by setting
up a daily job. For details, see Back up log files on SQL Server.
Next steps
For more information on the link feature, see the following resources:
Managed Instance link – connecting SQL Server to Azure reimagined
Prepare your environment for a Managed Instance link
Use a Managed Instance link with scripts to migrate a database
Use a Managed Instance link via SSMS to replicate a database
Use a Managed Instance link via SSMS to migrate a database
Fail over a database by using the link in SSMS -
Azure SQL Managed Instance
9/13/2022 • 3 minutes to read • Edit Online
NOTE
The link is a feature of Azure SQL Managed Instance and is currently in preview.
Prerequisites
To fail over your databases to SQL Managed Instance, you need the following prerequisites:
An active Azure subscription. If you don't have one, create a free account.
Supported version of SQL Server with required service update installed.
Azure SQL Managed Instance. Get started if you don't have it.
SQL Server Management Studio v18.12.1 or later.
An environment that's prepared for replication.
Setup of the link feature and replication of your database to your managed instance in Azure.
If you're performing a planned manual failover, stop the workload on the source SQL Server database to allow
the replicated database on SQL Managed Instance to completely catch up and fail over without data loss. If you're
performing a forced failover, you might lose data.
1. Open SSMS and connect to your SQL Server instance.
2. In Object Explorer, right-click your database, hover over Azure SQL Managed Instance link , and select
Failover database to open the Failover database to Managed Instance wizard.
3. On the Introduction page of the Failover database to Managed Instance wizard, select Next .
4. On the Log in to Azure page, select Sign-in to provide your credentials and sign in to your Azure
account. Select the subscription that's hosting SQL Managed Instance from the dropdown list, and then
select Next .
5. On the Failover Type page, choose the type of failover you're performing. Select the box to confirm that
you've stopped the workload for a planned failover, or you understand that you might lose data if using a
forced failover. Select Next .
6. On the Clean-up (optional) page, choose to drop the availability group if you created it solely for the
purpose of migrating your database to Azure and you no longer need it. If you want to keep the
availability group, leave the boxes cleared. Select Next .
7. On the Summary page, review the actions that will be performed for your failover. Optionally, select
Script to create a script that you can run at a later time. When you're ready to proceed with the failover,
select Finish .
On successful execution of the failover process, the link is dropped and no longer exists. The source SQL Server
database and the target SQL Managed Instance database can both execute a read/write workload. They're
completely independent. Repoint your application connection string to managed instance to complete the
migration process.
IMPORTANT
On successful failover, manually repoint your application(s) connection string to managed instance FQDN to continue
running in Azure, and to complete the migration process.
Next steps
To learn more, see Link feature for Azure SQL Managed Instance.
Failover (migrate) a database with a link via T-SQL
and PowerShell scripts - Azure SQL Managed
Instance
9/13/2022 • 8 minutes to read • Edit Online
NOTE
The link is a feature of Azure SQL Managed Instance and is currently in preview. You can also use a SQL Server
Management Studio (SSMS) wizard to fail over a database with the link.
Prerequisites
To replicate your databases to SQL Managed Instance, you need the following prerequisites:
An active Azure subscription. If you don't have one, create a free account.
Supported version of SQL Server with required service update installed.
Azure SQL Managed Instance. Get started if you don't have it.
PowerShell module Az.Sql 3.9.0 or higher
A properly prepared environment.
Database failover
Database failover from SQL Server to SQL Managed Instance breaks the link between the two databases.
Failover stops replication and leaves both databases in an independent state, ready for individual read/write
workloads.
To start migrating your database to SQL Managed Instance, first stop any application workloads on SQL Server
during your maintenance hours. This enables SQL Managed Instance to catch up with database replication and
migrate to Azure while mitigating data loss.
While the primary database is a part of an Always On availability group, you can't set it to read-only mode. You
need to ensure that your applications aren't committing transactions to SQL Server prior to the failover.
Ensure that you know the name of the link you would like to fail over. Use the following script in Azure Cloud Shell
to list all active links on the managed instance. Replace:
<ManagedInstanceName> with the short name of your managed instance.
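A sketch (the resource group lookup shown is one possible approach; you can also set the resource group directly):
# Run in Azure Cloud Shell
# Enter your managed instance name – for example, "sqlmi1"
$ManagedInstanceName = "<ManagedInstanceName>"
# Look up the resource group of the managed instance
$ResourceGroup = (Get-AzSqlInstance | Where-Object { $_.ManagedInstanceName -eq $ManagedInstanceName }).ResourceGroupName
# List all links on the managed instance
Get-AzSqlInstanceLink -ResourceGroupName $ResourceGroup -InstanceName $ManagedInstanceName | Format-List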
From the output of the above script, record the Name property of the link you'd like to fail over.
Then, switch the replication mode from async to sync on the managed instance for the identified link by running the
following script in Azure Cloud Shell. Replace:
<ManagedInstanceName> with the short name of your managed instance.
<DAGName> with the name of the link you found out on the previous step (the Name property from the
previous step).
# Run in Azure Cloud Shell
# =============================================================================
# POWERSHELL SCRIPT TO SWITCH LINK REPLICATION MODE (ASYNC\SYNC)
# ===== Enter user variables here ====
Executing the above command indicates success by displaying a summary of the operation, with the
ReplicationMode property shown as Sync.
If you need to revert this operation, run the script again to switch the replication mode, replacing the
Sync string passed to -ReplicationMode with Async.
To confirm that you've changed the link's replication mode successfully, use the following dynamic management
view. The results should indicate the SYNCHRONOUS_COMMIT state.
-- Run on SQL Server
-- Verifies the state of the distributed availability group
SELECT
ag.name, ag.is_distributed, ar.replica_server_name,
ar.availability_mode_desc, ars.connected_state_desc, ars.role_desc,
ars.operational_state_desc, ars.synchronization_health_desc
FROM
sys.availability_groups ag
join sys.availability_replicas ar
on ag.group_id=ar.group_id
left join sys.dm_hadr_availability_replica_states ars
on ars.replica_id=ar.replica_id
WHERE
ag.is_distributed=1
Now that you've switched both SQL Managed Instance and SQL Server to sync mode, the replication between
the two entities is synchronous. If you need to reverse this state, follow the same steps and set the async state
for both SQL Server and SQL Managed Instance.
Check LSN values on both SQL Server and SQL Managed Instance
To complete the migration, confirm that replication has finished. For this, ensure that the log sequence numbers
(LSNs) indicating the log records written for both SQL Server and SQL Managed Instance are the same.
Initially, it's expected that the SQL Server LSN will be higher than the SQL Managed Instance LSN. Network
latency might cause SQL Managed Instance to lag somewhat behind the primary SQL Server instance. Because
the workload has been stopped on SQL Server, you should expect the LSNs to match and stop changing after
some time.
Use the following T-SQL query on SQL Server to read the LSN of the last recorded transaction log. Replace:
<DatabaseName> with your database name and look for the last hardened LSN number.
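A sketch of such a query against sys.dm_hadr_database_replica_states:
-- Run on SQL Server
-- Obtain the last hardened LSN for the database participating in the link
SELECT
    db.name AS [Database name],
    drs.last_hardened_lsn AS [Last hardened LSN]
FROM sys.dm_hadr_database_replica_states drs
INNER JOIN sys.databases db ON db.database_id = drs.database_id
WHERE db.name = '<DatabaseName>'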
Use the following T-SQL query on SQL Managed Instance to read the last hardened LSN for your database.
Replace <DatabaseName> with your database name.
This query will work on a General Purpose managed instance. For a Business Critical managed instance, you
need to uncomment the AND drs.is_primary_replica = 1 filter at the end of the script. On Business Critical, this filter
ensures that only primary replica details are read.
-- Run on a managed instance
-- Obtain the LSN for the database on SQL Managed Instance.
SELECT
db.name AS [Database name],
drs.database_id AS [Database ID],
drs.group_id,
drs.replica_id,
drs.synchronization_state_desc AS [Sync state],
drs.end_of_log_lsn AS [End of log LSN],
drs.last_hardened_lsn AS [Last hardened LSN]
FROM
sys.dm_hadr_database_replica_states drs
inner join sys.databases db on db.database_id = drs.database_id
WHERE
db.name = '<DatabaseName>'
-- for Business Critical, add the following as well
-- AND drs.is_primary_replica = 1
Alternatively, you can use the Get-AzSqlInstanceLink PowerShell command in Azure Cloud Shell to fetch the
LastHardenedLsn property for your link on the managed instance, which provides the same information as
the preceding T-SQL query.
Verify once again that your workload is stopped on SQL Server. Check that LSNs on both SQL Server and SQL
Managed Instance match, and that they remain matched and unchanged for some time. Stable LSNs on both
instances indicate that the tail log has been replicated to SQL Managed Instance and the workload is effectively
stopped.
On successful execution of the failover process, the link is dropped and no longer exists. The source SQL Server
database and the target SQL Managed Instance database can both execute a read/write workload. They're
completely independent. Repoint your application connection string to managed instance to complete the
migration process.
IMPORTANT
On successful failover, manually repoint your application(s) connection string to managed instance FQDN to continue
running in Azure, and to complete the migration process.
With this step, you've finished the migration of the database from SQL Server to SQL Managed Instance.
Next steps
For more information on the link feature, see the following resources:
Managed Instance link – connecting SQL Server to Azure reimagined
Prepare your environment for Managed Instance link
Use a Managed Instance link with scripts to replicate a database
Use a Managed Instance link via SSMS to replicate a database
Use a Managed Instance link via SSMS to migrate a database
Best practices with link feature for Azure SQL
Managed Instance (preview)
9/13/2022 • 2 minutes to read • Edit Online
NOTE
The link feature for Azure SQL Managed Instance is currently in preview.
Use the following Transact-SQL (T-SQL) command to check the log space used by your database on SQL
Server:
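A sketch using DBCC SQLPERF(LOGSPACE), which reports the log size and the percentage of log space used for every database (the original article's exact command may differ):
-- Run on SQL Server
DBCC SQLPERF(LOGSPACE);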
The query output looks like the following example for the sample database tpcc :
In this example, the database has used 76% of the available log, with an absolute log file size of approximately
27 GB (27,971 MB). The thresholds for action may vary based on your workload, but it's typically an indication
that you should take a log backup to truncate the log file and free up some space.
Next steps
To get started with the link feature, prepare your environment for replication.
For more information on the link feature, see the following articles:
Managed Instance link – overview
Managed Instance link – connecting SQL Server to Azure reimagined
Change automated backup settings for Azure SQL
Managed Instance
9/13/2022 • 6 minutes to read • Edit Online
WARNING
If you reduce the current retention period, you lose the ability to restore to points in time older than the new retention
period. Backups that are no longer needed to provide PITR within the new retention period are deleted.
If you increase the current retention period, you don't immediately gain the ability to restore to older points in time within
the new retention period. You gain that ability over time, as the system starts to retain backups for longer periods.
NOTE
These APIs will affect only the PITR retention period. If you configured long-term retention (LTR) for your database, it
won't be affected. For information about how to change long-term retention periods, see Long-term retention.
Azure portal
Azure CLI
PowerShell
Rest API
To change the PITR backup retention period or the differential backup frequency for active databases by using
the Azure portal:
1. Go to the managed instance with the databases whose retention period you want to change.
2. Select Backups on the left pane, and then select the Retention policies tab.
3. Select the databases for which you want to change the PITR backup retention.
4. Select Configure policies from the action bar.
Configure backup storage redundancy
Configure backup storage redundancy for SQL Managed Instance by using the Azure portal, the Azure CLI, and
Azure PowerShell.
Azure portal
Azure CLI
PowerShell
Rest API
In the Azure portal, during instance creation, the default option for the backup storage redundancy is geo-
redundancy. To change it:
1. Go to the Basics tab and select Configure Managed Instance .
2. On the Compute + storage pane, select the option for the type of backup storage redundancy that you
want.
3. Select Apply . For now, this change will be applied only for PITR backups. Long-term retention backups
will retain the old storage redundancy type.
The time it takes to perform the backup redundancy change depends on the size of all the databases within a
single managed instance. Changing the backup redundancy will take more time for instances that have large
databases. It's possible to combine the backup storage redundancy change with the operation to update the
service-level objective (SLO).
Use the Notification pane of the Azure portal to view the status of the change operation.
Next steps
Database backups are an essential part of any business continuity and disaster recovery strategy because
they help protect your data from accidental corruption or deletion. To learn about the other business
continuity solutions for SQL Managed Instance, see Business continuity overview.
For information about how to configure, manage, and restore from long-term retention of automated
backups in Azure Blob Storage by using the Azure portal, see Manage long-term backup retention by using
the Azure portal.
For information about how to configure, manage, and restore from long-term retention of automated
backups in Azure Blob Storage by using PowerShell, see Manage long-term backup retention by using
PowerShell.
Get more information about how to restore a database to a point in time by using the Azure portal.
To learn all about backup storage consumption on Azure SQL Managed Instance, see Backup storage
consumption on Managed Instance explained.
To learn how to fine-tune backup storage retention and costs for Azure SQL Managed Instance, see Fine
tuning backup storage costs on SQL Managed Instance.
Restore a database from a backup in Azure SQL
Managed Instance
9/13/2022 • 8 minutes to read • Edit Online
IMPORTANT
You can't overwrite an existing database during restore.
Recovery time
Several factors affect the recovery time to restore a database through automated database backups:
The size of the database
The compute size of the database
The number of transaction logs involved
The amount of activity that needs to be replayed to recover to the restore point
The network bandwidth if the restore is to a different region
The number of concurrent restore requests that are processed in the target region
For a large or very active database, the restore might take several hours. A prolonged outage in a region might
cause a high number of geo-restore requests for disaster recovery. When there are many requests, the recovery
time for individual databases can increase. Most database restores finish in less than 12 hours.
TIP
For Azure SQL Managed Instance, system updates take precedence over database restores in progress. If there's a system
update for SQL Managed Instance, all pending restores are suspended and then resumed after the update has been
applied. This system behavior might prolong the time of restores and might be especially impactful to long-running
restores.
To achieve a predictable time of database restores, consider configuring maintenance windows that allow scheduling of
system updates at a specific day and time. Also consider running database restores outside the scheduled maintenance
window.
Permissions
To recover by using automated backups, you must be either:
A member of the SQL Server Contributor role or SQL Managed Instance Contributor role (depending on the
recovery destination) in the subscription
The subscription owner
For more information, see Azure RBAC: Built-in roles.
You can recover by using the Azure portal, PowerShell, or the REST API. You can't use Transact-SQL.
Point-in-time restore
You can restore a database to an earlier point in time. The request can specify any service tier or compute size
for the restored database. Ensure that you have sufficient resources on the instance to which you're restoring the
database.
When the restore is complete, it creates a new database on the same instance as the original database. The
restored database is charged at normal rates, based on its service tier and compute size. You don't incur charges
until the database restore is complete.
You generally restore a database to an earlier point for recovery purposes. You can treat the restored database
as a replacement for the original database or use it as a data source to update the original database.
IMPORTANT
You can't perform a point-in-time restore on a geo-secondary database. You can do so only on a primary database.
Database replacement
If you want the restored database to be a replacement for the original database, you should specify the
original database's compute size and service tier. You can then rename the original database and give the
restored database the original name by using the ALTER DATABASE command in T-SQL.
Data recovery
If you plan to retrieve data from the restored database to recover from a user or application error, you
need to write and run a data recovery script that extracts data from the restored database and applies it to
the original database. Although the restore operation might take a long time to complete, the restoring
database is visible in the database list throughout the restore process.
If you delete the database during the restore, the restore operation will be canceled. You won't be charged
for the database that did not complete the restore.
Azure portal
Azure CLI
PowerShell
To recover a database in SQL Managed Instance to a point in time by using the Azure portal, open the database
overview page, and select Restore on the toolbar. Choose the point-in-time backup point from which a new
database will be created.
IMPORTANT
If you delete a managed instance, all its databases are also deleted and can't be recovered. You can't restore a deleted
managed instance.
Azure portal
Azure CLI
PowerShell
To recover a database by using the Azure portal, open the managed instance's overview page and select
Deleted databases . Select a deleted database that you want to restore. Then enter the name for the new
database that will be created with data restored from the backup.
TIP
It might take several minutes for recently deleted databases to appear on the Deleted databases page in the Azure
portal, or when you want to display deleted databases by using the command line.
Geo-restore
IMPORTANT
Geo-restore is available only for managed instances configured with geo-redundant backup storage. If you're not
currently using geo-replicated backups for a database, you can change this by configuring backup storage
redundancy.
You can perform geo-restore on managed instances that reside in the same subscription only.
You can restore a database on any managed instance in any Azure region from the most recent geo-replicated
backups. Geo-restore uses a geo-replicated backup as its source. You can request a geo-restore even if an
outage has made the database or datacenter inaccessible.
Geo-restore is the default recovery option when your database is unavailable because of an incident in the
hosting region. You can restore the database to a server in any other region.
There's a delay between when a backup is taken and when it's geo-replicated to an Azure blob in a different
region. As a result, the restored database can be up to one hour behind the original database. The following
illustration shows a database restore from the last available backup in another region.
Azure portal
Azure CLI
PowerShell
From the Azure portal, you create a new managed instance and select an available geo-restore backup. The
newly created database contains the geo-restored backup data.
To geo-restore a database from the Azure portal to an existing managed instance in a region of your choice,
select the managed instance. Then follow these steps:
1. Select New database .
2. Enter a database name.
3. Under Use existing data , select Backup .
4. Select a backup from the list of available geo-restore backups.
After you complete the process of creating an instance database, it will contain the restored geo-restore backup.
Geo-restore considerations
For detailed information about using geo-restore to recover from an outage, see Recover from an outage.
Geo-restore is the most basic disaster-recovery solution available in SQL Managed Instance. It relies on
automatically created geo-replicated backups with a recovery point objective (RPO) of up to 1 hour and an
estimated recovery time objective (RTO) of up to 12 hours. It doesn't guarantee that the target region will have
the capacity to restore your databases after a regional outage, because a sharp increase of demand is likely. If
your application uses relatively small databases and is not critical to the business, geo-restore is an appropriate
disaster-recovery solution.
For business-critical applications that require large databases and must ensure business continuity, use auto-
failover groups. That feature offers a much lower RPO and RTO, and the capacity is always guaranteed.
For more information about business continuity choices, see Overview of business continuity.
Next steps
SQL Managed Instance automated backups
Long-term retention
To learn about faster recovery options, see Auto-failover groups.
Restore a database in Azure SQL Managed Instance
to a previous point in time
9/13/2022 • 6 minutes to read • Edit Online
Limitations
Point-in-time restore to SQL Managed Instance has the following limitations:
When you're restoring from one instance of SQL Managed Instance to another, both instances must be in the
same subscription and region. Cross-region and cross-subscription restore aren't currently supported.
Point-in-time restore of a whole SQL Managed Instance is not possible. This article explains only what's
possible: point-in-time restore of a database that's hosted on SQL Managed Instance.
WARNING
Be aware of the storage size of your SQL Managed Instance. Depending on the size of the data to be restored, you might run
out of instance storage. If there isn't enough space for the restored data, use a different approach.
The following table shows point-in-time restore scenarios for SQL Managed Instance:
The scenarios are: restore an existing database to the same instance of SQL Managed Instance, restore an existing database to another SQL Managed Instance, restore a dropped database to the same SQL Managed Instance, and restore a dropped database to another SQL Managed Instance.
4. On the Restore page, select the point for the date and time that you want to restore the database to.
5. Select Confirm to restore your database. This action starts the restore process, which creates a new
database and populates it with data from the original database at the specified point in time. For more
information about the recovery process, see Recovery time.
To restore the database to another SQL Managed Instance, also specify the names of the target resource group
and target SQL Managed Instance:
Use one of the following methods to connect to your database in the SQL Managed Instance:
SSMS/Azure Data Studio via an Azure virtual machine
Point-to-site
Public endpoint
Portal
PowerShell
Azure CLI
In the Azure portal, select the database from the SQL Managed Instance, and then select Delete .
Alter the new database name to match the original database name
Connect directly to the SQL Managed Instance and start SQL Server Management Studio. Then, run the
following Transact-SQL (T-SQL) query. The query will change the name of the restored database to that of the
dropped database that you intend to overwrite.
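A sketch of the rename, with placeholder database names:
-- Run on the managed instance
-- Rename the restored database to the name of the dropped database
ALTER DATABASE [<RestoredDatabaseName>] MODIFY NAME = [<OriginalDatabaseName>];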
Use one of the following methods to connect to your database in SQL Managed Instance:
Azure virtual machine
Point-to-site
Public endpoint
Next steps
Learn about automated backups.
Monitor backup activity for Azure SQL Managed
Instance
9/13/2022 • 3 minutes to read • Edit Online
Overview
Azure SQL Managed Instance emits events (also known as Extended Events or XEvents) during backup activity
for the purpose of reporting. Configure an XEvent session to track information such as backup status, backup
type, size, time, and location within the msdb database. This information can be integrated with backup
monitoring software and also used for the purpose of Enterprise Audit.
Enterprise Audits may require proof of successful backups, time of backup, and duration of the backup.
Verbose tracking
Configure a verbose XEvent session to track greater details about your backup activity. This script captures the start
and finish of full, differential, and log backups. Because this script is more verbose, it fills up the ring buffer
faster, so entries may recycle faster than with the simple script.
Use Transact-SQL (T-SQL) to configure the verbose XEvent session:
CREATE EVENT SESSION [Verbose backup trace] ON SERVER
ADD EVENT sqlserver.backup_restore_progress_trace(
WHERE (
[operation_type]=(0) AND (
[trace_message] like '%100 percent%' OR
[trace_message] like '%BACKUP DATABASE%' OR [trace_message] like '%BACKUP LOG%'))
)
ADD TARGET package0.ring_buffer
WITH (MAX_MEMORY=4096 KB,EVENT_RETENTION_MODE=ALLOW_SINGLE_EVENT_LOSS,
MAX_DISPATCH_LATENCY=30 SECONDS,MAX_EVENT_SIZE=0 KB,MEMORY_PARTITION_MODE=NONE,
TRACK_CAUSALITY=OFF,STARTUP_STATE=ON)
WITH
a AS (SELECT xed = CAST(xet.target_data AS xml)
FROM sys.dm_xe_session_targets AS xet
JOIN sys.dm_xe_sessions AS xe
ON (xe.address = xet.event_session_address)
WHERE xe.name = 'Backup trace'),
b AS(SELECT
d.n.value('(@timestamp)[1]', 'datetime2') AS [timestamp],
ISNULL(db.name, d.n.value('(data[@name="database_name"]/value)[1]', 'varchar(200)')) AS database_name,
d.n.value('(data[@name="trace_message"]/value)[1]', 'varchar(4000)') AS trace_message
FROM a
CROSS APPLY xed.nodes('/RingBufferTarget/event') d(n)
LEFT JOIN master.sys.databases db
ON db.physical_database_name = d.n.value('(data[@name="database_name"]/value)[1]', 'varchar(200)'))
SELECT * FROM b
The following screenshot shows an example of the output of the above query:
In this example, five databases were automatically backed up over the course of 2 hours and 30 minutes, and
there are 130 entries in the XEvent session.
Verbose tracking
The following Transact-SQL (T-SQL) code queries the verbose XEvent session and returns the name of the
database, as well as the start and finish of full, differential, and log backups.
WITH
a AS (SELECT xed = CAST(xet.target_data AS xml)
FROM sys.dm_xe_session_targets AS xet
JOIN sys.dm_xe_sessions AS xe
ON (xe.address = xet.event_session_address)
WHERE xe.name = 'Verbose backup trace'),
b AS(SELECT
d.n.value('(@timestamp)[1]', 'datetime2') AS [timestamp],
ISNULL(db.name, d.n.value('(data[@name="database_name"]/value)[1]', 'varchar(200)')) AS database_name,
d.n.value('(data[@name="trace_message"]/value)[1]', 'varchar(4000)') AS trace_message
FROM a
CROSS APPLY xed.nodes('/RingBufferTarget/event') d(n)
LEFT JOIN master.sys.databases db
ON db.physical_database_name = d.n.value('(data[@name="database_name"]/value)[1]', 'varchar(200)'))
SELECT * FROM b
The following screenshot shows an example of a full backup in the XEvent session:
The following screenshot shows an example of an output of a differential backup in the XEvent session:
Next steps
Once your backup has completed, you can then restore to a point in time or configure a long-term retention
policy.
To learn more, see automated backups.
Configure an auto-failover group for Azure SQL
Managed Instance
9/13/2022 • 11 minutes to read • Edit Online
NOTE
This article covers auto-failover groups for Azure SQL Managed Instance. For Azure SQL Database, see Configure auto-
failover groups in SQL Database.
Prerequisites
Consider the following prerequisites:
The secondary managed instance must be empty, that is, it must contain no user databases.
The two instances of SQL Managed Instance must be the same service tier and have the same storage size.
While not required, it's strongly recommended that the two instances have equal compute size, to make sure that
the secondary instance can sustainably process the changes replicated from the primary instance,
including during periods of peak activity.
The IP address range(s) of the virtual network hosting the primary instance must not overlap with IP address
range(s) of the virtual network hosting the secondary instance.
Network Security Groups (NSG) rules on subnet hosting instance must have port 5022 (TCP) and the port
range 11000-11999 (TCP) open inbound and outbound for connections from and to the subnet hosting the
other managed instance. This applies to both subnets, hosting primary and secondary instance.
The secondary SQL Managed Instance is configured during its creation with the correct DNS zone ID. It's
accomplished by passing the primary instance's zone ID as the value of DnsZonePartner parameter when
creating the secondary instance. If not passed as a parameter, the zone ID is generated as a random string
when the first instance is created in each VNet and the same ID is assigned to all other instances in the same
subnet. Once assigned, the DNS zone can't be modified.
The collation and time zone of the secondary managed instance must match that of the primary managed
instance.
Managed instances should be deployed in paired regions for performance reasons. Managed instances
residing in geo-paired regions benefit from significantly higher geo-replication speed compared to unpaired
regions.
Portal
PowerShell
1. In the Azure portal, go to the Virtual network resource for your primary managed instance.
2. Select Peerings under Settings and then select + Add.
Peering link name: The name for the peering must be unique within the virtual network.
Traffic forwarded from remote virtual network: Both the Allowed (default) and Block options will work for this tutorial. For more information, see Create a peering.
Virtual network gateway or Route Server: Select None. For more information about the other options available, see Create a peering.
For the virtual network hosting the secondary instance, use the following settings:
Peering link name: The name of the same peering to be used in the virtual network hosting the secondary instance.
Traffic forwarded from remote virtual network: Both the Allowed (default) and Block options will work for this tutorial. For more information, see Create a peering.
Virtual network gateway or Route Server: Select None. For more information about the other options available, see Create a peering.
2. Select Add to configure the peering with the virtual network you selected. After a few seconds, select the
Refresh button and the peering status will change from Updating to Connected.
Portal
PowerShell
Create the failover group for your SQL Managed Instances by using the Azure portal.
1. Select Azure SQL in the left-hand menu of the Azure portal. If Azure SQL isn't in the list, select All
services, then type Azure SQL in the search box. (Optional) Select the star next to Azure SQL to add it
as a favorite item to the left-hand navigation.
2. Select the primary managed instance you want to add to the failover group.
3. Under Settings , navigate to Instance Failover Groups and then choose to Add group to open the
instance failover group creation page.
4. On the Instance Failover Group page, type the name of your failover group and then choose the
secondary managed instance from the drop-down. Select Create to create your failover group.
5. Once failover group deployment is complete, you'll be taken back to the Failover group page.
Test failover
Test failover of your failover group using the Azure portal or PowerShell.
Portal
PowerShell
IMPORTANT
If roles didn't switch, check the connectivity between the instances and related NSG and firewall rules. Proceed with the
next step only after roles switch.
1. Go to the new secondary managed instance and select Failover once again to fail the primary instance back
to the primary role.
IMPORTANT
Azure portal does not support creation of failover groups across different subscriptions. Also, for the existing failover
groups across different subscriptions and/or resource groups, failover can't be initiated manually via portal from the
primary SQL Managed Instance. Initiate it from the geo-secondary instance instead.
After step 3 and until step 4 is completed, the databases in instance A remain unprotected from a
catastrophic failure of instance A.
IMPORTANT
When the failover group is deleted, the DNS records for the listener endpoints are also deleted. At that point, there's a
non-zero probability of somebody else creating a failover group with the same name. Because failover group names must
be globally unique, this will prevent you from using the same name again. To minimize this risk, don't use generic failover
group names.
Permissions
Permissions for a failover group are managed via Azure role-based access control (Azure RBAC).
Azure RBAC write access is necessary to create and manage failover groups. The SQL Managed Instance
Contributor role has all the necessary permissions to manage failover groups.
The following table lists specific permission scopes for Azure SQL Managed Instance:
Create failover group: Azure RBAC write access, scoped to the primary managed instance and the secondary managed instance.
Fail over failover group: Azure RBAC write access, scoped to the failover group on the new primary managed instance.
Next steps
For detailed steps on configuring a failover group, see the Add a managed instance to a failover group tutorial.
For an overview of the feature, see Auto-failover groups.
User-initiated manual failover on SQL Managed
Instance
9/13/2022 • 5 minutes to read • Edit Online
NOTE
This article is not related to cross-region failovers on auto-failover groups.
NOTE
Ensuring that your applications are failover resilient prior to deploying to production will help mitigate the risk of
application faults in production and will contribute to application availability for your customers. Learn more about testing
your applications for cloud readiness with Testing App Cloud Readiness for Failover Resiliency with SQL Managed Instance
video recording.
Using PowerShell
The minimum version of Az.Sql needs to be v2.9.0. Consider using Azure Cloud Shell from the Azure portal that
always has the latest PowerShell version available.
As a prerequisite, use the following PowerShell script to install the required Azure modules. In addition, select
the subscription hosting the managed instance that you want to fail over.
Install-Module -Name Az.Sql
$subscription = "<SubscriptionID>"
Connect-AzAccount
Select-AzSubscription -SubscriptionId $subscription
Use the Invoke-AzSqlInstanceFailover PowerShell command, as in the following example, to initiate failover of the
primary node. This applies to both the Business Critical (BC) and General Purpose (GP) service tiers.
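A sketch, with placeholder resource group and instance names:
# Initiate failover of the primary node on the managed instance
Invoke-AzSqlInstanceFailover -ResourceGroupName "<ResourceGroupName>" -Name "<ManagedInstanceName>"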
Use the following PowerShell command to fail over the readable secondary node. This applies to the BC service tier only.
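A sketch, assuming the -ReadableSecondary switch is used to target the readable secondary replica:
# Fail over the readable secondary node (Business Critical service tier only)
Invoke-AzSqlInstanceFailover -ResourceGroupName "<ResourceGroupName>" -Name "<ManagedInstanceName>" -ReadableSecondary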
Using CLI
Ensure that you have the latest Azure CLI installed.
Use the az sql mi failover CLI command, as in the following example, to initiate failover of the primary node.
This applies to both the BC and GP service tiers.
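A sketch, with placeholder resource group and instance names:
# Initiate failover of the primary node on the managed instance
az sql mi failover --resource-group <ResourceGroupName> --name <ManagedInstanceName>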
Use the following CLI command to fail over the readable secondary node. This applies to the BC service tier only.
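A sketch, assuming the --replica-type parameter is used to target the readable secondary replica:
# Fail over the readable secondary node (Business Critical service tier only)
az sql mi failover --resource-group <ResourceGroupName> --name <ManagedInstanceName> --replica-type ReadableSecondary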
POST
https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Mic
rosoft.Sql/managedInstances/{managedInstanceName}/failover?api-version=2019-06-01-preview
The following properties need to be passed in the API call:
API PROPERTY | PARAMETER
Before you initiate the failover, the output indicates the current primary replica. On the BC service tier, it contains
one primary and three secondary replicas in the Always On availability group. After you execute a failover, running
this query again should indicate a change of the primary node.
You won't see the same output for the GP service tier as the one shown above for BC, because
the GP service tier is based on a single node. For a GP service tier instance, you can use an alternative T-SQL query
that shows the time when the SQL process started on the node:
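A sketch of such a query:
-- Run on the managed instance
-- Show when the SQL Server process on the current node started
SELECT sqlserver_start_time FROM sys.dm_os_sys_info;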
A short loss of connectivity from your client during the failover, typically lasting under a minute, is the
indication that the failover has executed, regardless of the service tier.
NOTE
Completion of the failover process (not the actual short unavailability) might take several minutes in the case of
high-intensity workloads. This is because the instance engine needs to take care of all current transactions on the primary
node and catch up on the secondary node before it can fail over.
IMPORTANT
Functional limitations of user-initiated manual failover are:
Only one (1) failover can be initiated on the same managed instance every 15 minutes.
For BC instances, a quorum of replicas must exist for the failover request to be accepted.
For BC instances, it isn't possible to specify which readable secondary replica to initiate the failover on.
Failover won't be allowed until the first full backup for a new database is completed by automated backup systems.
Failover won't be allowed if a database restore is in progress.
Next steps
Learn more about testing your applications for cloud readiness with Testing App Cloud Readiness for Failover
Resiliency with SQL Managed Instance video recording.
Learn more about high availability of managed instances in High availability for Azure SQL Managed Instance.
For an overview, see What is Azure SQL Managed Instance?.
Azure CLI samples for Azure SQL Database and
SQL Managed Instance
9/13/2022 • 2 minutes to read • Edit Online
Prerequisites
Use the Bash environment in Azure Cloud Shell. For more information, see Azure Cloud Shell Quickstart -
Bash.
If you prefer to run CLI reference commands locally, install the Azure CLI. If you are running on Windows
or macOS, consider running Azure CLI in a Docker container. For more information, see How to run the
Azure CLI in a Docker container.
If you're using a local installation, sign in to the Azure CLI by using the az login command. To finish
the authentication process, follow the steps displayed in your terminal. For additional sign-in
options, see Sign in with the Azure CLI.
When you're prompted, install Azure CLI extensions on first use. For more information about
extensions, see Use extensions with the Azure CLI.
Run az version to find the version and dependent libraries that are installed. To upgrade to the
latest version, run az upgrade.
Samples
Azure SQL Database
Azure SQL Managed Instance
The following table includes links to Azure CLI script examples to manage single and pooled databases in Azure
SQL Database.
Create databases
Create pooled databases: Creates elastic pools, moves pooled databases, and changes compute sizes.
Scale databases
Scale pooled database: Scales a SQL elastic pool to a different compute size.
Configure geo-replication
Configure failover group: Configures a failover group for a group of databases and fails over databases to the secondary server.
Single database: Creates a database and a failover group, adds the database to the failover group, then tests failover to the secondary server.
Copy a database to a new server: Creates a copy of an existing database in SQL Database in a new server.
Import a database from a BACPAC file: Imports a database to SQL Database from a BACPAC file.
IMPORTANT
For limitations, see supported regions and supported subscription types.
If you don't have an Azure subscription, create an Azure free account before you begin.
Prerequisites
Use the Bash environment in Azure Cloud Shell. For more information, see Azure Cloud Shell Quickstart -
Bash.
If you prefer to run CLI reference commands locally, install the Azure CLI. If you are running on Windows
or macOS, consider running Azure CLI in a Docker container. For more information, see How to run the
Azure CLI in a Docker container.
If you're using a local installation, sign in to the Azure CLI by using the az login command. To finish
the authentication process, follow the steps displayed in your terminal. For additional sign-in
options, see Sign in with the Azure CLI.
When you're prompted, install Azure CLI extensions on first use. For more information about
extensions, see Use extensions with the Azure CLI.
Run az version to find the version and dependent libraries that are installed. To upgrade to the
latest version, run az upgrade.
Sample script
For this script, use Azure CLI locally as it takes too long to run in Cloud Shell.
Sign in to Azure
Use the following script to sign in using a specific subscription.
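A sketch of the sign-in step, with a placeholder subscription ID:
subscription="<subscriptionId>" # Add your subscription ID here
az account set -s $subscription # ...or run 'az login' first to sign in interactively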
# Variable block
let "randomIdentifier=$RANDOM*$RANDOM"
location="East US"
resourceGroup="msdocs-azuresql-rg-$randomIdentifier"
tag="create-managed-instance"
vNet="msdocs-azuresql-vnet-$randomIdentifier"
subnet="msdocs-azuresql-subnet-$randomIdentifier"
nsg="msdocs-azuresql-nsg-$randomIdentifier"
route="msdocs-azuresql-route-$randomIdentifier"
instance="msdocs-azuresql-instance-$randomIdentifier"
login="azureuser"
password="Pa$$w0rD-$randomIdentifier"
echo "Using resource group $resourceGroup with login: $login, password: $password..."
az network nsg rule create --name "allow_management_inbound" --nsg-name $nsg --priority 100 --resource-group
$resourceGroup --access Allow --destination-address-prefixes 10.0.0.0/24 --destination-port-ranges 9000 9003
1438 1440 1452 --direction Inbound --protocol Tcp --source-address-prefixes "*" --source-port-ranges "*"
az network nsg rule create --name "allow_misubnet_inbound" --nsg-name $nsg --priority 200 --resource-group
$resourceGroup --access Allow --destination-address-prefixes 10.0.0.0/24 --destination-port-ranges "*" --
direction Inbound --protocol "*" --source-address-prefixes 10.0.0.0/24 --source-port-ranges "*"
az network nsg rule create --name "allow_health_probe_inbound" --nsg-name $nsg --priority 300 --resource-
group $resourceGroup --access Allow --destination-address-prefixes 10.0.0.0/24 --destination-port-ranges "*"
--direction Inbound --protocol "*" --source-address-prefixes AzureLoadBalancer --source-port-ranges "*"
az network nsg rule create --name "allow_management_outbound" --nsg-name $nsg --priority 1100 --resource-
group $resourceGroup --access Allow --destination-address-prefixes AzureCloud --destination-port-ranges 443
12000 --direction Outbound --protocol Tcp --source-address-prefixes 10.0.0.0/24 --source-port-ranges "*"
az network nsg rule create --name "allow_misubnet_outbound" --nsg-name $nsg --priority 200 --resource-group
$resourceGroup --access Allow --destination-address-prefixes 10.0.0.0/24 --destination-port-ranges "*" --
direction Outbound --protocol "*" --source-address-prefixes 10.0.0.0/24 --source-port-ranges "*"
# This step will take awhile to complete. You can monitor deployment progress in the activity log within the
Azure portal.
echo "Creating $instance with $vNet and $subnet..."
az sql mi create --admin-password $password --admin-user $login --name $instance --resource-group
$resourceGroup --subnet $subnet --vnet-name $vNet --location "$location"
Clean up resources
Use the following command to remove the resource group and all resources associated with it using the az
group delete command - unless you have an ongoing need for these resources. Some of these resources may
take a while to create, as well as to delete.
Sample reference
This script uses the following commands. Each command in the table links to command specific documentation.
Next steps
For more information on Azure CLI, see Azure CLI documentation.
Additional SQL Database CLI script samples can be found in the Azure SQL Database documentation.
Azure CLI script to enable transparent data
encryption using your own key
9/13/2022 • 3 minutes to read • Edit Online
Prerequisites
Use the Bash environment in Azure Cloud Shell. For more information, see Azure Cloud Shell Quickstart -
Bash.
If you prefer to run CLI reference commands locally, install the Azure CLI. If you are running on Windows
or macOS, consider running Azure CLI in a Docker container. For more information, see How to run the
Azure CLI in a Docker container.
If you're using a local installation, sign in to the Azure CLI by using the az login command. To finish
the authentication process, follow the steps displayed in your terminal. For additional sign-in
options, see Sign in with the Azure CLI.
When you're prompted, install Azure CLI extensions on first use. For more information about
extensions, see Use extensions with the Azure CLI.
Run az version to find the version and dependent libraries that are installed. To upgrade to the
latest version, run az upgrade.
Sample script
For this script, use Azure CLI locally as it takes too long to run in Cloud Shell.
Sign in to Azure
Use the following script to sign in using a specific subscription.
# Variable block
location="East US"
vault="msdocssqlvault$randomIdentifier"
key="msdocs-azuresql-key-$randomIdentifier"
echo $instanceId
az keyvault set-policy --name $vault --object-id $instanceId --key-permissions get unwrapKey wrapKey
# keyPath="C:\yourFolder\yourCert.pfx"
# keyPassword="yourPassword"
# az keyvault certificate import --file $keyPath --name $key --vault-name $vault --password $keyPassword
Clean up resources
Use the following command to remove the resource group and all resources associated with it using the az
group delete command - unless you have an ongoing need for these resources. Some of these resources may
take a while to create, as well as to delete.
Sample reference
This script uses the following commands. Each command in the table links to command specific documentation.
Prerequisites
Use the Bash environment in Azure Cloud Shell. For more information, see Azure Cloud Shell Quickstart -
Bash.
If you prefer to run CLI reference commands locally, install the Azure CLI. If you are running on Windows
or macOS, consider running Azure CLI in a Docker container. For more information, see How to run the
Azure CLI in a Docker container.
If you're using a local installation, sign in to the Azure CLI by using the az login command. To finish
the authentication process, follow the steps displayed in your terminal. For additional sign-in
options, see Sign in with the Azure CLI.
When you're prompted, install Azure CLI extensions on first use. For more information about
extensions, see Use extensions with the Azure CLI.
Run az version to find the version and dependent libraries that are installed. To upgrade to the
latest version, run az upgrade.
Sample script
For this script, use Azure CLI locally as it takes too long to run in Cloud Shell.
Sign in to Azure
Use the following script to sign in using a specific subscription.
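As in the previous sample, a minimal sign-in sketch with a placeholder subscription ID:
az login
az account set --subscription "<subscriptionId>"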
let "randomIdentifier=$RANDOM*$RANDOM"
$managedDatabase = "managedDatabase-$randomIdentifier"
# To specify a specific point-in-time (in UTC) to restore from, use the ISO8601 format:
# restorePoint=“2021-07-09T13:10:00Z”
restorePoint=$(date +%s)
restorePoint=$(expr $restorePoint - 60)
restorePoint=$(date -d @$restorePoint +"%Y-%m-%dT%T")
echo $restorePoint
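The snippet stops after computing the restore point; a hedged sketch of restoring the managed database to that point in time as a new database, assuming $instance and $resourceGroup identify the managed instance that hosts $managedDatabase:
# Restore to a new database on the same managed instance.
az sql midb restore --resource-group $resourceGroup --managed-instance $instance --name $managedDatabase --dest-name "restored-$managedDatabase" --time $restorePoint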
Clean up resources
Unless you have an ongoing need for these resources, use the following az group delete command to remove the resource group and all resources associated with it. Some of these resources can take a while to create, as well as to delete.
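Again, a minimal sketch of the clean-up command (assuming $resourceGroup is still set):
az group delete --name $resourceGroup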
Sample reference
This script uses the following commands. Each command in the table links to command-specific documentation.
Next steps
For more information on Azure CLI, see Azure CLI documentation.
Additional SQL Database CLI script samples can be found in the Azure SQL Database documentation.
Azure PowerShell samples for Azure SQL Database
and Azure SQL Managed Instance
9/13/2022 • 4 minutes to read
The following table includes links to sample Azure PowerShell scripts for Azure SQL Database.
LINK | DESCRIPTION
Create a single database and configure a server-level firewall rule | This PowerShell script creates a single database and configures a server-level IP firewall rule.
Create elastic pools and move pooled databases | This PowerShell script creates elastic pools, moves pooled databases, and changes compute sizes.
Configure and fail over a single database using active geo-replication | This PowerShell script configures active geo-replication for a single database and fails it over to the secondary replica.
Configure and fail over a pooled database using active geo-replication | This PowerShell script configures active geo-replication for a database in an elastic pool and fails it over to the secondary replica.
Configure a failover group for a single database | This PowerShell script creates a database and a failover group, adds the database to the failover group, and tests failover to the secondary server.
Configure a failover group for an elastic pool | This PowerShell script creates a database, adds it to an elastic pool, adds the elastic pool to the failover group, and tests failover to the secondary server.
Scale a single database | This PowerShell script monitors the performance metrics of a single database, scales it to a higher compute size, and creates an alert rule on one of the performance metrics.
Scale an elastic pool | This PowerShell script monitors the performance metrics of an elastic pool, scales it to a higher compute size, and creates an alert rule on one of the performance metrics.
Copy a database to a new server | This PowerShell script creates a copy of an existing database in a new server.
Import a database from a bacpac file | This PowerShell script imports a database into Azure SQL Database from a bacpac file.
Sync data between databases | This PowerShell script configures Data Sync to sync between multiple databases in Azure SQL Database.
Sync data between SQL Database and SQL Server on-premises | This PowerShell script configures Data Sync to sync between a database in Azure SQL Database and a SQL Server on-premises database.
Update the SQL Data Sync sync schema | This PowerShell script adds or removes items from the Data Sync sync schema.
Additional resources
The examples listed on this page use the PowerShell cmdlets for creating and managing Azure SQL resources.
Additional cmdlets for running queries and performing many database tasks are located in the sqlserver
module. For more information, see SQL Server PowerShell.
PowerShell script to enable transparent data
encryption using your own key
9/13/2022 • 4 minutes to read
Prerequisites
An existing managed instance. See Use PowerShell to create a managed instance.
If you don't have an Azure subscription, create an Azure free account before you begin.
NOTE
This article uses the Azure Az PowerShell module, which is the recommended PowerShell module for interacting with
Azure. To get started with the Az PowerShell module, see Install Azure PowerShell. To learn how to migrate to the Az
PowerShell module, see Migrate Azure PowerShell from AzureRM to Az.
If you are running PowerShell locally, you also need to run Connect-AzAccount to create a connection with Azure.
Sample scripts
# You will need an existing Managed Instance as a prerequisite for completing this script.
# See https://docs.microsoft.com/en-us/azure/sql-database/scripts/sql-database-create-configure-managed-instance-powershell
# If there are multiple subscriptions, choose the one where AKV is created:
Set-AzContext -SubscriptionId "subscription ID"
# Install the Az.Sql PowerShell package if you are running this PowerShell locally (uncomment below):
# Install-Module -Name Az.Sql
# 1. Create resource group and set up Azure Key Vault (skip if already done)
# Create Resource group (name the resource and specify the location)
$location = "westus2" # specify the location
$resourcegroup = "MyRG" # specify a new RG name
New-AzResourceGroup -Name $resourcegroup -Location $location
# Create new Azure Key Vault with a globally unique VaultName and soft-delete option turned on:
$vaultname = "MyKeyVault" # specify a globally unique VaultName
New-AzKeyVault -VaultName $vaultname -ResourceGroupName $resourcegroup -Location $location
# Authorize Managed Instance to use the AKV (wrap/unwrap key and get public part of key, if public part exists):
$objectid = (Set-AzSqlInstance -ResourceGroupName $resourcegroup -Name "MyManagedInstance" -AssignIdentity).Identity.PrincipalId
Set-AzKeyVaultAccessPolicy -BypassObjectIdValidation -VaultName $vaultname -ObjectId $objectid -PermissionsToKeys get,wrapKey,unwrapKey
# Allow access from your client IP address(es) to be able to complete remaining steps:
Update-AzKeyVaultNetworkRuleSet -VaultName $vaultname -IpAddressRange "xxx.xxx.xxx.xxx/xx"
# First, give yourself necessary permissions on the AKV (specify your account instead of contoso.com):
Set-AzKeyVaultAccessPolicy -VaultName $vaultname -UserPrincipalName "[email protected]" -PermissionsToKeys create,import,get,list
# The recommended way is to import an existing key from a .pfx file. Replace "<PFX private key password>" with the actual password below:
$keypath = "c:\some_path\mytdekey.pfx" # Supply your .pfx path and name
$securepfxpwd = ConvertTo-SecureString -String "<PFX private key password>" -AsPlainText -Force
$key = Add-AzKeyVaultKey -VaultName $vaultname -Name "MyTDEKey" -KeyFilePath $keypath -KeyFilePassword $securepfxpwd
# ...or get an existing key from the vault:
# $key = Get-AzKeyVaultKey -VaultName $vaultname -Name "MyTDEKey"
# Alternatively, generate a new key directly in Azure Key Vault (recommended for test purposes only - uncomment below):
# $key = Add-AzKeyVaultKey -VaultName $vaultname -Name MyTDEKey -Destination Software -Size 2048
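The script above stops once the key exists in the vault. A sketch of the remaining steps that add the key to the managed instance and set it as the TDE protector, assuming the $key object and the instance and resource group names used earlier:
# Add the Key Vault key to the managed instance, then set it as the TDE protector.
Add-AzSqlInstanceKeyVaultKey -ResourceGroupName $resourcegroup -InstanceName "MyManagedInstance" -KeyId $key.Id
Set-AzSqlInstanceTransparentDataEncryptionProtector -ResourceGroupName $resourcegroup -InstanceName "MyManagedInstance" -Type AzureKeyVault -KeyId $key.Id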
Next steps
For more information on Azure PowerShell, see Azure PowerShell documentation.
Additional PowerShell script samples for SQL Managed Instance can be found in Azure SQL Managed Instance
PowerShell scripts.
Azure Resource Manager templates for Azure SQL
Database & SQL Managed Instance
9/13/2022 • 3 minutes to read
The following table includes links to Azure Resource Manager templates for Azure SQL Database.
LINK | DESCRIPTION
Elastic pool | This template allows you to deploy an elastic pool and to assign databases to it.
Failover groups | This template creates two servers, a single database, and a failover group in Azure SQL Database.
Threat Detection | This template allows you to deploy a server and a set of databases with Threat Detection enabled, with an email address for alerts for each database. Threat Detection is part of the SQL Advanced Threat Protection (ATP) offering and provides a layer of security that responds to potential threats over servers and databases.
Auditing to Azure Blob storage | This template allows you to deploy a server with auditing enabled to write audit logs to a Blob storage. Auditing for Azure SQL Database tracks database events and writes them to an audit log that can be placed in your Azure storage account, OMS workspace, or Event Hubs.
Auditing to Azure Event Hub | This template allows you to deploy a server with auditing enabled to write audit logs to an existing event hub. In order to send audit events to Event Hubs, set auditing settings with Enabled State, and set IsAzureMonitorTargetEnabled as true. Also, configure Diagnostic Settings with the SQLSecurityAuditEvents log category on the master database (for server-level auditing). Auditing tracks database events and writes them to an audit log that can be placed in your Azure storage account, OMS workspace, or Event Hubs.
Azure Web App with SQL Database | This sample creates a free Azure web app and a database in Azure SQL Database at the "Basic" service level.
Azure Web App and Redis Cache with SQL Database | This template creates a web app, Redis Cache, and database in the same resource group and creates two connection strings in the web app for the database and Redis Cache.
Import data from Blob storage using ADF V2 | This Azure Resource Manager template creates an instance of Azure Data Factory V2 that copies data from Azure Blob storage to SQL Database.
HDInsight cluster with a database | This template allows you to create an HDInsight cluster, a logical SQL server, a database, and two tables. This template is used by the Use Sqoop with Hadoop in HDInsight article.
Azure Logic App that runs a SQL Stored Procedure on a schedule | This template allows you to create a logic app that will run a SQL stored procedure on schedule. Any arguments for the procedure can be put into the body section of the template.
Provision server with Azure AD-only authentication enabled | This template creates a SQL logical server with an Azure AD admin set for the server and Azure AD-only authentication enabled.
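These templates can be deployed like any other Azure Resource Manager template, for example with the Azure CLI. A generic sketch with placeholder names, not tied to any specific template above:
# Create a resource group, then deploy a template into it.
az group create --name MyTemplateRG --location eastus
az deployment group create --resource-group MyTemplateRG --template-uri "<template-url>"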
Documentation changes for SQL Server on Azure
Virtual Machines
9/13/2022 • 9 minutes to read
July 2022
CHANGES | DETAILS
Azure CLI for SQL best practices assessment | It's now possible to configure the SQL best practices assessment feature using the Azure CLI.
Configure tempdb from Azure portal | It's now possible to configure your tempdb settings, such as the number of files, initial size, and autogrowth ratio for an existing SQL Server instance by using the Azure portal. See manage SQL Server VM from portal to learn more.
May 2022
CHANGES | DETAILS
SDK-style SQL projects | Use Microsoft.Build.Sql for SDK-style SQL projects in the SQL Database Projects extension in Azure Data Studio or VS Code. This feature is currently in preview. To learn more, see SDK-style SQL projects.
April 2022
CHANGES | DETAILS
March 2022
CHANGES | DETAILS
Security best practices | The SQL Server VM security best practices have been rewritten and refreshed!
January 2022
CHANGES | DETAILS
Migrate with distributed AG | It's now possible to migrate your database(s) from a standalone instance of SQL Server or an entire availability group over to SQL Server on Azure VMs using a distributed availability group! See the prerequisites to get started.
2021
CHANGES | DETAILS
Deployment configuration improvements | It's now possible to configure the following options when deploying your SQL Server VM from an Azure Marketplace image: System database location, number of tempdb data files, coll