DP-300: Administering Microsoft Azure SQL Solutions
Notes from 26 July 2024, with minor revisions from 25 October 2024
o Go to Azure SQL
o In additional settings, add existing data if required, either from a backup or sample data.
o You should use Managed Service Accounts (MSA) for a single computer running a service.
▪ A Group Managed Service Account (gMSA) is used for assigning the MSA to multiple
servers.
o You need:
▪ Region,
▪ In Networking, select "Private endpoint", then "+Add private endpoint" and select
the subnet from above.
o When created, in "Firewalls and virtual networks", click "+Add client IP", and "Allow Azure
services and resources to access this server".
Plan and Implement Data Platform Resources
o PaaS SQL Database and Managed Instance have built-in patching, and they always use the
latest stable Database Engine version.
o You have full control of the database engine, e.g. when to apply patches.
▪ You need SQL Server 2008 R2 or above, and Windows Server 2008 R2 or above.
▪ for existing VMs, by going to Azure Portal – the relevant VM – Settings – SQL Server
configuration – Patching.
▪ This daily checks whether there are any unregistered VMs in the subscription, and if
so, registers them in lightweight mode.
- To take advantage of all of the features, you would still need to manually
upgrade.
▪ To do this, go to Azure Portal – SQL virtual machines (plural) – and at the top, click
on "Automatic SQL Server VM registration".
4. evaluate requirements for the deployment
5. evaluate the scalability of the possible database offering
6. evaluate the security aspects of the possible database offering
o Scalable – there are hardware limits, but if you divide data into partitions, each on a separate
server, it can be scaled out.
o Increase performance – Smaller amount of data in a single partition, and multiple data stores
can be accessed at the same time.
o Increase availability – if one instance fails, only that partition is temporarily unreadable.
▪ If some data is fairly static or small, consider replicating it in all partitions, to reduce
cross-partition access.
o Vertical partitioning.
▪ Some columns may be needed less often, and they could be separated away, and
used only when needed.
▪ Some columns may also be more sensitive, and could be separated away.
▪ All partitions would need to be capable of being joined – for instance, by the same
primary key in each.
o Functional partitioning.
▪ Store data could be in one partition, and employee data in another.
▪ Some tables could be more sensitive, and could be separated away into another
partition.
• Consider the backup, archiving (including deleting) and High Availability, Disaster Recovery
requirements for each partition.
o Network bandwidth
• You can:
o Lookup strategy
▪ Have a shard key (an ID), and a map which shows where the data is stored.
o Range strategy
▪ Similar data is kept on the same storage node, so it can retrieve multiple items in a
single operation.
o Hash strategy
▪ Data distributed evenly among the shards. Reduces hotspots (high loads for an
individual server) by using some random element for distribution.
o Single database.
o Elastic pool.
o Compute tier:
▪ Provisioned – for regular usage patterns, or multiple databases with elastic pools.
o Specify separate amount of Number of vCores, memory, and amount/speed of storage. Look
at:
▪ IOPS,
▪ Backup retention.
o Maximum of:
▪ 80 vCores at Gen5,
▪ 4 TB of storage.
▪ Azure Hybrid Benefit allows you to bring in your existing on-prem licenses to the
cloud.
o Choose from:
▪ General purpose (scale compute and storage) – For most business workloads.
Storage latency of 5-10 ms (about the same as SQL Server on a VM).
▪ Hyperscale (on-demand scalable storage) – Only for Azure SQL Database – say 100
TB+ storage.
- You cannot subsequently change out of Hyperscale. Costs the same as Azure
SQL Database.
▪ Zone and Local Redundancy are cheaper for single region data resiliency.
o Tempdb
▪ Azure SQL Database creates 1 file per vCore with 32 GB per file, with a cap of 32
files for serverless compute only.
o Offers bundles of maximum number of compute, memory and I/O (reads/writes) resources
for each class (cannot separate them).
o Uses Azure Premium disks. Provision in increments of 250 GB up to 1 TB, and 256 GB thereafter.
o Choose from:
▪ Please note: Basic and Standard S0, S1 and S2 have less than 1 vCore, and cannot
use "Change data capture".
- Consider Basic, S0 and S1, where database files are stored in Azure
Standard Storage (HDD), for development, testing and infrequently
accessed workloads.
▪ See https://dtucalculator.azurewebsites.net/
o For the DMVs to have accurate figures, you may need to flush the Query Store after re-
scaling. Use:
▪ EXEC sp_query_store_flush_db;
o Server.
▪ This is a logical server, which includes logins, firewall and auditing rules, policies and
failover groups.
o Serverless model.
• Configure network:
o No access.
o Public/private endpoint.
o Choose whether to "Allow Azure services and resources to access this server" (for other
Azure services).
▪ Or if not, you could allow specific Virtual Networks to have access.
• Connection policy
o Default – Redirect if connection originates inside Azure, and Proxy if outside Azure.
• You can have sample data, or data based on the restore from a geo-replicated backup.
o CS/CI = case-[in]sensitive,
o AS/AI = accent-[in]sensitive.
10. configure Azure SQL Managed Instance for scale and performance
• Service Tier:
o General Purpose
o Business Critical
▪ low-latency workloads
▪ Fast Failovers
• Hardware Generation
o Up to 80 vCores,
o 400 GB memory,
o up to 16 TB database size.
o Cross-database queries,
▪ The execution environment for .NET framework code (also known as "managed
code").
o The msdb system database.
o SQL Managed Instance does not support the DTU-based purchasing model.
• Tempdb
11. configure SQL Server in Azure VMs for scale and performance
• SLA for Virtual Machines
o When you need an older version of SQL Server or access to a Windows Operating System.
o When you need SSAS (Analysis), SSIS (Integration) or SSRS (Reporting) (non-Azure services),
o When you need features not available in Azure SQL Database or Azure MI.
o Azure VM marketplace images are configured for optimal SQL Server performance.
▪ Data drives should be put on Premium P30 and P40 disks for cache support.
▪ Log drive should be put on Premium P30 to P80 disks, or Ultra disks for
submillisecond latency.
o Stripe multiple data disks using Storage Spaces (similar to RAID, but done in software) to
increase I/O bandwidth. 3+ drives form a storage pool. Choose a resiliency type:
- Simple
- Mirror
- Parity
o Increases reliability, but reduces capacity.
o Increases resiliency.
▪ Creating a volume.
o Use Local Redundant Storage, not Geo-redundant storage, on the storage account.
▪ Good for testing and development, small-to-medium databases, or low-traffic web servers.
▪ Good for medium traffic web servers, network appliances, batch processes, and
application servers.
▪ Good for relational database servers, medium to large caches, and in-memory
analytics.
▪ Good for Big Data, SQL, NoSQL databases, data warehousing and large transactional
databases.
▪ heavy graphic rendering and video editing, as well as model training and inferencing
(ND) with deep learning.
▪ Automated backup,
▪ Automated patching,
▪ View information in Azure Portal about your SQL Server configuration, and more.
▪ It is installed when you deploy an SQL Server VM from the Azure Marketplace.
o When creating a VM, the "SQL Server settings – Change configuration" shows the storage.
o All of the SQL Server VM marketplace images follow default storage best practices.
o After setting up the VM, when using disk caching for Premium SSD, you can select the disk
caching level (by going to Settings – Disks):
▪ It should be ReadOnly for SQL Server data files, as this improves reads from cache
(VM memory and local SSD), which is much faster than from disk (Azure Blob
storage).
▪ It should be None for SQL Server Log files, as the data is written sequentially.
▪ ReadWrite caching should not be used for the SQL Server files, as SQL Server does
not support data consistency with this cache type. However, it could be used for the
O/S drive, but it is not recommended to change the O/S caching level.
• vCore-based model:
o Business Critical service tier includes 3 replicas (and about 2.7x price)
o Single database.
▪ They can be moved in/out of elastic pool.
▪ They can be dynamically (i.e. manually) scaled (but not autoscaled) up and down.
o Elastic pool.
▪ This is for multiple databases, good when they have variable usage patterns.
▪ Can add databases by going to the pool and clicking on "+Add databases".
• Storage costs:
• For DTU model, consider the following factors when determining how many DTUs you need:
o Note: Unit price for eDTU pools is 1.5x the DTU unit price for a single database.
▪ Price for vCore pools is at the same unit price as for single databases.
o However, it requires extra time and CPU, both to compress and retrieve data.
o You can compress at the row level, the page (8,192 bytes) level, or none.
- Numeric types (apart from tinyint) storage will be reduced, maybe down to
1 byte. Tinyint already takes 1 byte.
- Row compression
- Prefix compression
o If values in the same column start with the same characters, this
can be optimised.
- Dictionary compression
o If values after prefix compression in any column are the same, this
can be optimised.
• Available in:
▪ You cannot use data compression with tables which have SPARSE columns.
▪ To change the compression option in a clustered index, you need to drop the
clustered index, preferably OFFLINE, and then rebuild the table.
▪ EXEC sp_estimate_data_compression_savings
- 'SchemaName',
- 'TableName',
- Index_ID – either zero for a heap, 1 for a clustered index, or >1 for a non-clustered
index. NULL to target the table as a whole, rather than a single index,
◼ To list index IDs: SELECT * FROM sys.indexes
◼ To list partition numbers: SELECT * FROM sys.partitions
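▪ Putting those parameters together, a minimal sketch of the full call (the schema, table and compression choice below are assumed for illustration, not from the notes):
- EXEC sp_estimate_data_compression_savings
- @schema_name = N'Sales',
- @object_name = N'Orders',
- @index_id = NULL, -- NULL = the whole table
- @partition_number = NULL, -- NULL = all partitions
- @data_compression = N'PAGE'; -- 'NONE', 'ROW' or 'PAGE'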
• To enable compression:
o In SSMS
▪ Click next, and select the compression type for each partition.
- You can also click on "Use same compression type for all partitions".
▪ Select whether to run immediately or to create a script (to a file, clipboard, or new
query window).
- If using this on a VM, you may also get “Schedule” – you could select: one
time, recurring (Daily, Weekly or Monthly), when SQL Server Agent starts,
or whenever the CPUs become idle.
o In T-SQL - table
o In T-SQL – index
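▪ For example, a minimal sketch of each (the table and index names are assumed):
- ALTER TABLE dbo.Orders REBUILD WITH (DATA_COMPRESSION = PAGE);
- ALTER INDEX IX_Orders_Date ON dbo.Orders REBUILD WITH (DATA_COMPRESSION = ROW);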
o Indexes work best when you scan large amounts of data, like fact tables in data warehouses.
▪ They are generally clustered. Non-clustered is only used when you have a data type
not supported by a clustered columnstore index – e.g. XML, text and image.
▪ Best used when the data is not often read, but you need the data to be retained for
regulatory or business reasons.
▪ Saves space, but there is a high CPU cost to uncompressing it, which is more than
any I/O saving.
- This will impact whether you can use Azure SQL Database/Managed
Instance, or whether you need a VM.
o Downtime allowances
▪ Are you allowed any downtime at all? If not, you need to do an online migration.
o Security requirements
o Location for data storage (e.g. GDPR, California Consumer Privacy Act, or similar
requirements)
o It can also discover and assess SQL data estate at scale (across your data center).
o Get Azure SQL deployment recommendations, target sizing and monthly estimates.
• Do you need to migrate non-SQL objects, such as Access, DB2, MySQL, Oracle and SAP ASE databases
to SQL Server or Azure SQL?
• Do you need to migrate SQL Server objects to SQL Database/Managed Instance? If so:
▪ If so, use Data Migration Assistant (DMA).
▪ It can also discover and assess SQL data estate, and recommend performance and
reliability improvements for your target environment.
▪ Detect compatibility issues between your current database and a target version of
SQL Server or Azure SQL.
o Do you need to compare workloads between the source and target SQL Server?
o Do you need to migrate open source databases, such as MySQL, PostgreSQL or MariaDB?
▪ Minimal downtime (especially if online using the Premium pricing tier). Good for
large migrations.
▪ You need:
- To allow outbound port 443 (HTTPS) – you may also need 1434 (UDP).
- Does not initiate any backups, and uses existing full and log backups (not
differential).
o Create a Virtual Network for the Azure Database Migration Service using either ExpressRoute
or VPN.
o Enable outbound port 443 of ServiceTag for ServiceBus, Storage and AzureMonitor.
o Allow database engine access in Windows firewall, and open the Windows firewall to TCP
port 1433 (unless you have changed it). You may also need to have UDP port 1434.
o Create a server-level IP firewall rule to allow Azure Database Migration Service access.
o Your credentials need CONTROL SERVER on the SQL Server instance, and CONTROL
DATABASE on Azure SQL.
o In Data Migration Assistant, select +New and Assessment, and enter a project name.
o Select Database Engine, SQL Server and Azure SQL Database, and either/both:
o Select “Schema only” in “Migration scope”.
o In the Azure portal, go to this service and click Create, and select:
- You can have the 4 vCore Premium DMS free for 6 months. You can use it
for a total of 1 year, and create 2 DMS services per subscription.
o In the Azure portal, go to “Azure Database Migration Services”, select the relevant instance,
and select “New Migration Project”.
o Add a project name, SQL Server, Azure SQL Database, and Data migration.
o Select databases, note the Expected downtime, and click “Next: Select target”.
o Click “Next: Map to target databases”. This will be mapping to new databases, unless you
have a database with the same name.
o Click “Next: Summary” and enter an Activity Name for the migration.
o Click “Start migration”. You can monitor the migration from there.
o Once complete, verify that the target database has been migrated.
• Other options:
o Bulk Copy Program (bcp) can be used for connecting from on-prem or a VM to Azure SQL.
• Investigate what effect the database compatibility level may have had
• Azure SQL Database and Azure SQL MI will always use the latest version.
• Are existing queries using the best plan under the new compatibility level?
• Are there regressions? If so, force the last known good plan.
• Look for features which work better in the source database but not in the target.
• Some features may only be available once the database compatibility level has
changed.
• Azure SQL Database has fewer features than Azure SQL MI, which has fewer features than
on-prem or Azure VM.
20. set up SQL Data Sync
• Azure SQL Data Sync allows you to synchronize data across multiple databases.
o Tables need to have a primary key, which cannot be changed (rows can be deleted/recreated
instead).
• Sync Metadata Database contains the metadata and log for Data Sync. It is an Azure SQL Database in
the same region as the Hub Database.
o It should be an empty database. Data Sync creates tables and runs a frequent workload.
• Member databases are either Azure SQL Database or on-prem (not Managed Instance).
o If you are using on-prem, you will need to install and configure a local sync agent.
▪ But if there are several members, this depends on which member syncs first.
• Use in:
o Go to Azure portal – SQL databases.
▪ Automatic Sync (If on, choose from Seconds, Minutes, Hours or Days in Sync
Frequency),
▪ Use private link (a service managed private endpoint). If yes, you will later need to
approve the Private Endpoint Connection.
o Subscription,
o Sync Directions (To the Hub, From the Hub, or Bi-directional Sync),
▪ Select “Create and Generate Key”, and copy it to the clipboard, then click OK.
o In the “Sync Metadata Database Configuration”, enter credentials for the metadata database
server.
▪ If automatically created, this will be the same server as the hub database.
▪ You may need a firewall rule, created in the portal or in SSMS.
o Click Register.
o In the “SQL Server Configuration” box, connect using SQL Server or Windows authentication.
o Provide a name for the new sync member (not the database name) and the Sync Directions.
• To see if it works, go to the Database Sync Group page – Tables, and click on Refresh schema.
• Go to Tasks – Export Data. This will open the SQL Server Import and Export Wizard
(using SSIS).
• This will copy data, but not views, stored procedures, functions etc.
• Right-click on the source database in SSMS and go to Tasks – Export Data-tier
Application.
• This will create a .bacpac file, an archive containing schema and data.
• sqlpackage.exe /a:Export
/SourceServerName:servername.database.windows.net
/SourceDatabaseName:dbname /SourceUser:username
/SourcePassword:password
/TargetFile:C:\Users\user\Desktop\backup150.bacpac
• Then to upload it, assuming you are still in the Command Prompt, run:
• sqlpackage.exe /a:Import
/TargetServerName:ManagedInstancename.appname.database.windows.n
et /TargetDatabaseName:dbname /TargetUser:username
/TargetPassword:password
/SourceFile:C:\Users\user\Desktop\backup150.bacpac
• Uses Bacpac.
• You can check the export status by going to the Azure SQL Server (not the database)
and go to Import/Export history.
• After it has exported, you can then use this for importing into MI using SSMS, or
create a new Azure SQL Database using the Azure Portal.
• Go to the Azure SQL Server (not the database), and click Import database.
• -StorageKey $(Get-AzStorageAccountKey `
-ResourceGroupName "<resourceGroupName>" `
-StorageAccountName "<storageAccountName>").Value[0] `
• -StorageUri
"https://myStorageAccount.blob.core.windows.net/importsample/sample.bacpac" `
• -AdministratorLogin "<userId>" `
• Or the Azure CLI equivalents:
• --storage-uri
"https://myStorageAccount.blob.core.windows.net/importsample/sample.bacpac" `
• -u "<userId>" -p "<password>"
22a. implement Azure SQL Managed Instance database copy and move
• You can copy or move one or more databases from one Azure SQL Managed Instance to another.
This is useful when:
• You can copy/move one or many databases from one Managed Instance to one or many
Managed Instances, without the possibility of data loss.
o If you copy a database, the original remains online. There is no further synchronization, and
the databases on both instances are able to be read and written to.
• You need read permissions from the source and write permissions for both source and the
destination databases.
Implement a Secure Environment
• There must also be sufficient network connectivity between the two Managed Instances.
• To copy/move:
o In the Source details pane, select the source database(s) and Managed Instance.
• After the data is transferred, the status changes to “Copy/Move ready for completion”.
o If it is not completed in 24 hours, then the copy/move is cancelled, and the destination
database is dropped.
o SQL Server authentication (user name and password, sent in plain text), and
o Cloud-only identities,
o Hybrid identities that support cloud authentication with Single Sign-On (SSO), using
password hash or pass-through authentication.
• Decision tree:
o Cloud-only identities
o Federated authentication
▪ If you want to integrate with an existing federation provider, or
o Pass-through authentication
• Other authentications:
o Admin tools on a non-Azure machine that is not domain-joined: use Azure AD integrated
authentication, or Azure AD interactive authentication with multifactor authentication.
o Older apps where you can't change the connection string: SQL authentication.
• Microsoft Entra ID can allow additional security such as Multi-Factor Authentication (MFA).
o Go to the Azure Portal – Active Directory – (The relevant active directory, if more than one),
and Authentication methods. These include:
o Enter:
▪ Name
▪ Groups (Optional).
o Click Create.
▪ GO
o Logins can:
▪ Auditing,
• However, you can create logins from Azure AD users, groups or apps.
▪ <option_list> ::=
- | SID = sid
- | DEFAULT_DATABASE = database
- | DEFAULT_LANGUAGE = language
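- For example, a minimal sketch (login name and password are assumed; note that DEFAULT_DATABASE and DEFAULT_LANGUAGE apply to SQL Server/Managed Instance, not to Azure SQL Database):
- CREATE LOGIN AppLogin
- WITH PASSWORD = 'Str0ng!Passw0rd',
- DEFAULT_DATABASE = master,
- DEFAULT_LANGUAGE = us_english;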
• Create user:
▪ [ { FOR | FROM } LOGIN login_name ]
o [;]
o <limited_options_list> ::=
▪ DEFAULT_SCHEMA = schema_name
▪ | ALLOW_ENCRYPTED_VALUE_MODIFICATIONS = [ ON | OFF ] ]
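o For example, a minimal sketch (the login and user names are assumed):
▪ CREATE USER AppUser FOR LOGIN AppLogin
▪ WITH DEFAULT_SCHEMA = dbo;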
• Both SQL Server Administrators and Microsoft Entra ID Administrators for SQL Server can create:
• You cannot create an SQL Server login from the Azure portal.
▪ Windows authentication,
- Strong verification.
- Uses identities in Azure AD. You can use it when your computer is logged
into Windows but it is not federated with Azure.
▪ Special purpose logins, which cannot connect to SQL Server, but which can own
objects and have permissions:
- “Mapped to [stand-alone] asymmetric key”,
• Create a user using SSMS (Managed Instance and Azure SQL Database):
▪ “SQL user with password”. Also called a "contained database user". You can select
- Can make your database more portable. Allowed in Azure SQL Database
and in a contained database in SQL Server.
- Cannot login to a server, but can be granted permissions and can sign
modules
▪ “Windows user”.
o serveradmin – change server-wide configuration options and shut down the server.
o securityadmin – GRANT, DENY and REVOKE server-level permissions, and any database-level
permissions if they have access to the database.
o public – includes all users, groups and roles. Use when you want the same permission(s) for
everyone.
o db_owner – all configuration and most maintenance activities (in Azure SQL Database, some
activities require server-level permissions), including DROP database.
▪ However, if you give them db_denydatareader or DENY permissions, you can deny
read access to data.
o db_securityadmin – can modify role membership for custom roles only and manage
permissions. Can elevate own permissions.
o db_[deny]datareader – [cannot] read all data from all user tables and views.
• In Azure SQL Databases, there are also two special database roles in the "master" database only:
o dbmanager – can create/delete databases. Connects as the dbo (database owner) user.
o loginmanager – create/delete logins in the "master" database (as per securityadmin server
role in on-prem SQL Server)
o sp_helprotect – returns user permissions for an object (or all objects) in the current
database.
o sp_helprolemember – direct members of a role.
• There are also role-based access control (RBAC), which are security rights outside of databases, which
include:
o SQL DB/Managed Instance/Server Contributor – manage SQL Databases, MIs or Servers, but
not get access to them. Cannot manage security-related policies.
o SQL Security Manager – manage security-related policies for servers and databases, but no
access to them.
• When deploying, Azure uses the "server admin", which is a principal in Azure SQL Database, and a
member of the sysadmin role in MI.
• In a particular login:
o Click Search.
o Select:
▪ “The server”,
▪ “Specific objects”. If so, click “Object Types” and select Endpoints, Logins, Servers,
Availability Groups and/or Server roles.
▪ “All objects of the types” – select Endpoints, Logins, Servers, Availability Groups
and/or Server roles.
o Server
o Database
o Schema
▪ Type
27. apply principle of least privilege for all securables
• Users should have the least privilege that is necessary for them to do their job.
• You can use roles: assign permissions to roles, and then assign users to roles.
o GRANT
▪ Why use REVOKE instead of DENY? REVOKE doesn’t give permissions, but it also
doesn’t block permissions a user still has through another role.
▪ If DENY is applied to the public role, no non-sysadmin will have this permission. (A
sketch of all three statements follows below.)
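o A minimal sketch of the three statements side by side (the role and table names are assumed):
▪ GRANT SELECT ON dbo.Orders TO SalesRole; -- gives the permission
▪ DENY SELECT ON dbo.Orders TO Interns; -- blocks it, even if granted via another role
▪ REVOKE SELECT ON dbo.Orders FROM SalesRole; -- removes the GRANT (or DENY), but does not block other routes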
• You can also prevent users from querying objects directly by allowing only access to procedures or
functions.
o If two objects have the same owner, then permissions in a second object called from the first
are not separately checked.
o SELECT permission in a database includes all (child) schemas, and the tables and views.
o CONTROL gives ownership-like permissions and includes all other permissions, including
ALTER, SELECT, INSERT, UPDATE.
- If this is not used, the private key is encrypted using the database master
key.
▪ EXPIRY_DATE = ‘20291231’;
- You can also have a START_DATE (in UTC). If not specified, START_DATE
defaults to current date, and EXPIRY_DATE (UTC) is one year after
START_DATE.
o GO
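o The EXPIRY_DATE line above appears to be the tail of a CREATE CERTIFICATE statement; a minimal complete sketch (the certificate name, password and subject are assumed):
▪ CREATE CERTIFICATE SalesCert
▪ ENCRYPTION BY PASSWORD = 'Pa55.w.rd!'
▪ WITH SUBJECT = 'Sales data certificate',
▪ EXPIRY_DATE = '20291231';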
o The Azure Key Vault can store customer-managed certificates ("Bring your own Key – BYOK")
• To restore a previously-created certificate, you can also use CREATE CERTIFICATE with FILE = 'path'
o Azure SQL Database does not support creating a certificate from a file or using private key
files.
o You can change the password, but not the SUBJECT or DATEs.
▪ CREATE LOGIN [login_name] FROM EXTERNAL PROVIDER -- the last 3 words indicate
Azure AD.
o To check
o To create a user:
o You can create logins in the master database and then create users based on them, but it is
better practice to do the above:
▪ [In Master]
CREATE LOGIN demo WITH PASSWORD = 'Pa55.w.rd'
- To check
▪ [In database]
CREATE USER demo FROM LOGIN demo
• To check users:
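o A sketch, run in the user database (the type filter covers SQL users plus external, i.e. Entra ID, users and groups):
▪ SELECT name, type_desc, authentication_type_desc
▪ FROM sys.database_principals
▪ WHERE type IN ('S', 'E', 'X');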
• To grant permissions:
▪ For example:
GRANT SELECT ON OBJECT::Region TO Ted [WITH GRANT OPTION];
o PERMISSION can be
- They can also be CONTROL (all rights), REFERENCES (view foreign keys),
TAKE OWNERSHIP, VIEW CHANGE TRACKING and VIEW DEFINITION.
▪ For schema, ALTER permission on a schema is wide-ranging. You can alter, create or
drop any securable in that schema. However, you cannot change ownership.
- For tables and views, ALL means DELETE, INSERT, REFERENCES, SELECT, and
UPDATE.
- For table-valued functions, ALL means DELETE, INSERT, REFERENCES,
SELECT and UPDATE
o The optional [WITH GRANT OPTION] allows you to grant that permission to others.
• To check permissions:
o or if sysadmin in MI or VM:
o For example:
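▪ A sketch of what the missing example may have shown (the object name is assumed):
- SELECT * FROM fn_my_permissions(NULL, 'DATABASE'); -- my permissions on the current database
- SELECT * FROM fn_my_permissions('dbo.Region', 'OBJECT'); -- my permissions on one object
- EXEC sp_helprotect 'Region'; -- grants/denies recorded for the object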
o PERMISSION is:
▪ ALTER ANY [Server_Securable] – CREATE, ALTER and DROP things such as LOGIN.
▪ DELETE/INSERT/SELECT/UPDATE
▪ CREATE Server-/Database-/Schema-Securable.
o OBJECT can be a database, schema or object
▪ All permissions
▪ Specific database.
▪ Specific object.
o database_principal is a database user or user-defined role, but not a fixed database role or a
server principal.
o You need ALTER permission on the role, or ALTER ANY ROLE on the database, or
db_securityadmin or db_owner.
o Don't confuse this with TLS – Transport Layer Security – which encrypts data in transit.
• It is protected by the TDE protector, using a service-managed certificate or an asymmetric key in the
Azure Key Vault.
o For Azure SQL Database, it is set at the server level. New databases are encrypted by default
(but not ones created through restore or database copy).
o For Azure SQL Managed Instance, it is set at the instance level and is inherited to all
encrypted databases.
• To enable it in Azure SQL Database only, go to the Azure Portal, then the relevant database, then go
to “Transparent data encryption” and set “Data encryption” to ON.
o However, you can’t switch the TDE protector to a key in Key Vault in T-SQL.
o Set-AzSqlServerTransparentDataEncryptionProtector
o Add-AzSqlServerKeyVaultKey
o Set-AzSqlDatabaseTransparentDataEncryption
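o In T-SQL, you can still turn encryption on or off for an individual database (a sketch; the database name is assumed):
▪ ALTER DATABASE MyDb SET ENCRYPTION ON;
▪ -- and check progress:
▪ SELECT db_name(database_id) AS name, encryption_state FROM sys.dm_database_encryption_keys;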
o SQL Database communicates over port 1433. You need that opened on your own
computer/server.
o create a reserved IP (classic deployment) for the resource that needs to connect, then
o Server-level firewall rules are for users/apps to have access to all databases.
o Database firewall rules are for an individual or app.
o This applies to all databases in the server on Azure SQL Database only, whether single or
pooled databases. It does not apply to Azure SQL Managed Instance.
o You will need SQL Server Contributor or SQL Security Manager role, or the owner of the
resource that contains the Azure SQL Server.
o Select “Add client IP” to add your current IP address. This opens port 1433.
▪ A firewall rule of 0.0.0.0 enables all Azure services to bypass the server-level
firewall rule – but in the portal, you need to turn on "Allow Azure services and
resources to access this server" instead.
o Click OK. The rules are then stored in the master database.
• In T-SQL:
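o A sketch, run in the master database (the rule name and addresses are assumed):
▪ EXECUTE sp_set_firewall_rule @name = N'AllowMyClient',
@start_ip_address = '203.0.113.5', @end_ip_address = '203.0.113.5';
▪ SELECT * FROM sys.firewall_rules; -- list the rules
▪ EXECUTE sp_delete_firewall_rule N'AllowMyClient'; -- remove a rule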
• You can also manage using PowerShell, CLI (Command Line Interface) or REST API.
o It can only be done using T-SQL statements, and you need CONTROL DATABASE permission
at the database level.
• In T-SQL:
▪ EXECUTE sp_set_database_firewall_rule @name = N'MyDatabaseFirewallRule',
@start_ip_address = '192.168.1.1', @end_ip_address = '192.168.1.10';
• If you wish to use an Azure Key Vault, then you need to create it first
▪ Cryptographic Operations: Decrypt, Encrypt, Unwrap Key, Wrap Key, Verify and
Sign.
o It costs $0.03 for 10,000 transactions. The Premium version allows for a Hardware Security
Module (HSM).
o Select the columns and choose the “Encryption Type”, either Deterministic or Randomized.
▪ Deterministic allows equality joins, GROUP BY, indexes and DISTINCT. Randomized
prevents this.
o In “Master Key Configuration”, you can go to “Select an Azure Key Vault” and select the Key
Vault.
• When the columns are encrypted, then when connecting, go to the “Additional Connection
Parameters” tab, and enter: Column Encryption Setting = Enabled
▪ Needed to access/read the metadata of the column master/encryption keys to
manage keys or query encrypted columns.
o Security Administrator generates columns encryption keys and column master keys.
▪ Needs access to the keys and the key store, but not the database.
o Database Administrator (DBA) manages metadata about the keys in the database.
▪ $storeLocation = "CurrentUser"
▪ Import-Module "SqlServer"
▪ $cmkSettings = New-SqlCertificateStoreColumnMasterKeySettings -
CertificateStoreLocation "CurrentUser" -Thumbprint $cert.Thumbprint
o # Generate a column encryption key, encrypt it with the column master key to produce an
encrypted value of the column encryption key.
▪ $encryptedValue = New-SqlColumnEncryptionKeyEncryptedValue -
TargetColumnMasterKeySettings $cmkSettings
o # Share the location of the column master key and an encrypted value of the column
encryption key with a DBA, via a CSV file on a share drive
▪ $keyDataFile = "Z:\keydata.txt"
▪ $keyData.KeyStoreProviderName
▪ $keyData.KeyPath
▪ $keyData.EncryptedValue
o # Obtain the location of the column master key and the encrypted value of the column
encryption key from your Security Administrator, via a CSV file on a share drive.
o $keyDataFile = "Z:\keydata.txt"
o Import-Module "SqlServer"
o $connStr = "Server = " + $serverName + "; Database = " + $databaseName + "; Integrated
Security = True"
o $cmkName = "CMK1"
o # Generate a column encryption key, encrypt it with the column master key and create
column encryption key metadata in the database.
o $cekName = "CEK1"
o An enclave is something within a bigger something, such as some territory inside bigger
territory.
o You can use this in SQL Server 2019 or later, or Azure SQL Database.
• Always Encrypted protects sensitive data from malware and users who should have access to the
database but not the data by encrypting it on the client, not allowing it to be in plaintext in the
Database Engine.
• However:
o because the data is encrypted, you can only do comparison based on values being the same
(or not), if you are using deterministic encryption.
o you cannot do data encryption, key rotation, or pattern matching in the database.
• To solve this problem, you can use Always Encrypted with secure enclaves. This creates a protected
part of the memory, which can do computations on plaintext data inside the secure enclave.
o It’s like a black box - You cannot view the data or code inside the enclave, even if you used a
debugging system.
▪ This also requires Microsoft Azure Attestation with an Attestation administrator,
which verifies the trustworthiness of the Azure SQL Database, together with an
attestation provider. However, this is not required for the DP-300 exam.
▪ This is available for all versions of Azure SQL Database, including Elastic Pools, or
SQL Server 2019 or later.
▪ It provides some additional protection against OS-level threats. You also have Azure
protection, such as just-in-time-access, multifactor authentication, and secure
monitoring.
▪ However, VBS enclave cannot defend itself from bigger attacks, such as replacing
the enclave program with malware, so if you need strong security isolation, you may
wish to consider the Intel SGX enclave instead.
• Please note – you cannot switch it Off after it has been On.
• You can also enable it in SSMS by right-clicking on the database, selecting Properties, and changing
“Enable Secure Enclaves” to On.
o It prevents access to sensitive data by putting a mask, with none or part of the data (e.g. last
4 digits of a credit card).
o You can select the Schema, Table and Column to define the columns for masking.
- XXXX for string data types. Fewer Xs are used if the value is less than 4 characters.
- Exposes the last 4 digits of the credit card, with a constant string prefix.
▪ Email ([email protected]),
- Exposes the first letter, but replaces everything else with a constant string
prefix.
- Shows the first X characters, the last Y characters, and a custom padding
string in the middle.
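o In T-SQL, a mask is added per column; a minimal sketch (the table and column names are assumed):
▪ ALTER TABLE dbo.Customers ALTER COLUMN Email ADD MASKED WITH (FUNCTION = 'email()');
▪ ALTER TABLE dbo.Customers ALTER COLUMN CardNo ADD MASKED WITH (FUNCTION = 'partial(0, "XXXX-XXXX-XXXX-", 4)');
▪ GRANT UNMASK TO AuditUser; -- let a specific user see the real values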
• You can select specific SQL users to be excluded from masking.
o Note: Administrators are always excluded from Dynamic Data Masking – they can always read
the data.
34. implement Azure Key Vault and disk encryption for Azure VMs
• To encrypt disks for Azure VMs:
o Select Disks (left-hand side),
o Next to “Select key from Azure Key Vault: Key vault”, select “Create new”.
o Add a name (unique amongst Azure Key Vaults) and Resource Group.
o Go to the “Access Policies” tab, click “Enable Access to: Azure Disk Encryption for volume
encryption”.
o After creating the Key Vault, leave the Key field blank, click Select, and Save.
• Packages of data are encrypted from one side and then decrypted by the other side.
• TLS 1.0 was defined in January 1999, and TLS 1.1 in April 2006.
• Both were widely deprecated by web sites around the year 2020. Microsoft no longer
supports them in Microsoft Teams Desktop as of July 7, 2021.
• TLS 1.2 was defined in August 2008, with stronger SHA-256 encryption, improved reliability and better
performance.
• This is the most commonly used TLS version, and creates a secure connection.
• To configure TLS:
• or Azure CLI:
• az sql server update -n sql-server-name -g sql-server-group --set
minimalTlsVersion="1.2"
o Go to the database.
o At the bottom of the screen, you may have “X columns with classification
recommendations”.
- [n/a], Other
- Networking
- Personal data: Contact Info, Name, National ID, SSN, Health, Date of Birth,
- Credentials
- General – Business data not meant for the public, such as emails,
documents and files which do not include confidential data.
▪ You cannot select [n/a] for both Information Type and Sensitivity Label.
• The following roles can modify and read a database’s data classification:
o Owner,
o Contributor,
• Additionally, the following roles can read (but not modify) a database’s data classification:
o Reader, and
• You can use Audit to drill down into "Security Insights", "Access to Sensitive Data" etc.
• You can also use T-SQL, REST API or PowerShell to manage classifications.
• In T-SQL:
o WITH (
▪ Networking, Contact Info, Credentials, Credit Card, Banking, Other, Name, National
ID, SSN, Health, Date of Birth
o )
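o A minimal sketch of the full statement (the table and column names are assumed):
▪ ADD SENSITIVITY CLASSIFICATION TO dbo.Customers.Email
▪ WITH (LABEL = 'Confidential', INFORMATION_TYPE = 'Contact Info');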
• Notes:
o Under high activity, Azure will prioritise other actions and may not record some audited
events.
o Server-policy audits always apply to the database, regardless of any database-level auditing
policies. They can sit side-by-side.
o Microsoft recommends using only server-level auditing, unless you want to audit different
event types/categories for a specific database.
▪ BATCH_COMPLETED_GROUP
• To do this:
o Click “Enable Azure SQL Auditing” to track these events for a particular database or server.
You can select the details to be stored in:
- The Advanced settings allow you to choose the retention period (the
default, zero days, is unlimited),
o If you are in the database, you can click on “View server settings”.
o If you are in the server, you can also audit Microsoft support operations.
▪ Give the container a name, set the Public access level to Private and click OK.
▪ In the Properties, click on Properties and copy the URL for future use.
▪ Add “Blob” to “Allowed services”, choose the Start date as yesterday (to avoid
timezone related problems), and an End date.
▪ Click “Generate SAS” and copy this token for future use.
o You would need to set up a stream to consume these events and write them to a target.
o You can use SSMS, going to File – Open – Merge Audit Files.
▪ In advanced properties, you can change to the secondary access storage key.
▪ Then you can go to your Storage Account – Settings – Access keys, and click the
regenerate icon on the primary access key.
▪ You can then go back to the audit, and change it to the primary key.
▪ You can then go to your Storage Account – Settings – Access keys, and click the
regenerate icon on the secondary access key.
o However, it does not track how many times a row has changed, nor does it keep the historic
data. It is therefore more lightweight and requires less storage than CDC (Change Data Capture).
o It enables applications to determine which rows have changed and request just those rows.
(But you cannot see the previous data.)
o The data is stored in an in-memory rowstore, and flushed to the internal on-disk tables at
every checkpoint.
o You may wish to consider using snapshot isolation for the database, so that changes made
while getting the data are not visible within the transaction:
o In SSMS
▪ Select the Retention Period and Units (by default, 2 Days) – the minimum is 1
Minute; there is no maximum.
- If False, change tracking information will not be removed and will continue
to grow.
o In T-SQL (a sketch; the database name is assumed)
▪ ALTER DATABASE [YourDb] SET CHANGE_TRACKING = ON (CHANGE_RETENTION = 2 DAYS, AUTO_CLEANUP = ON)
o In SSMS
▪ If True, you can also change “Track Columns Updated” to True. This will indicate
whether UPDATEs to individual columns will be tracked.
o In T-SQL (a sketch; the table name is assumed)
▪ ALTER TABLE [dbo].[YourTable] ENABLE CHANGE_TRACKING WITH (TRACK_COLUMNS_UPDATED = ON)
• You can also disable Change Tracking on tables and databases
o However, to disable it on the database, all track changing of tables needs to be disabled first.
▪ ALTER TABLE [dbo].[YourTable] DISABLE CHANGE_TRACKING -- table name assumed
o SELECT * from sys.change_tracking_tables -- this uses the current database. You need:
• To use it:
▪ CT.SYS_CHANGE_COLUMNS, CT.SYS_CHANGE_CONTEXT
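• The line above is a fragment of a CHANGETABLE query; a minimal sketch of the usual pattern (the table and key column names are assumed):
• DECLARE @last_sync_version BIGINT = 0; -- the version saved at the previous sync
• SELECT CT.ProductID, CT.SYS_CHANGE_OPERATION,
• CT.SYS_CHANGE_COLUMNS, CT.SYS_CHANGE_CONTEXT
• FROM CHANGETABLE(CHANGES dbo.Products, @last_sync_version) AS CT;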
• Change Data Capture (CDC) is supported in Azure SQL Database, Azure SQL Managed Instance and
SQL Server on VM.
o Cannot be used in Azure SQL Database Free, Basic or Standard tier Single Database (S0, S1,
S2).
o Cannot be used in Azure SQL Database Elastic Pool with vCore < 1 or eDTUs < 100.
• Before you can enable it for a table, you must switch it on for the database.
o EXEC sys.sp_cdc_enable_db
▪ It creates the Change Data Capture objects, including metadata tables and DDL
triggers.
o EXEC sys.sp_cdc_enable_table (the parameters below belong to this procedure)
▪ @source_schema = N'HumanResources'
▪ , @source_name = N'Department'
▪ , @role_name = N'cdc_admin'
- The database role used to gate access to change data. Could be a new role.
o EXECUTE sys.sp_cdc_help_change_data_capture
o Enter an email address for your scan reports and alerts.
• To view details of the findings, go to Security – Security Center - “View additional findings in
Vulnerability Assessment”.
o Findings include an overview, number of issues found, severity risk summary, and findings
list.
▪ You can “Approve as Baseline” specific results. Any similar results are put in the
“Passed” section.
• It is hard to make sure data is compliant if you don't know where it is.
• Others may not know what data your company has access to.
• Azure Purview catalogs your data, whether it is on-premises, in a machine on the Internet, or in a
cloud using Software-as-a-Service (SaaS).
• It calls itself a Unified Data Governance solution. Costs from US$300 for 10 GB of metadata.
• Azure Purview Data Map captures metadata (information about data) from the various
sources, by scanning and classifying it.
• Azure Purview Data Catalog helps you to find data with classification or metadata filters.
• Azure Purview Data Insights allow you to see where sensitive data is and how it flows from
one data source to another.
• Bank account, business, company, driver's license, medical account, passport, social security,
tax file, and other identification numbers.
• Date of Birth,
• Email,
• Ethnic group,
• IP (Internet Protocol) Addresses.
• You can create scan rule sets which group together the classifications and file types.
• In the Azure SQL Database, go to "Server Firewall", and click on "Allow Azure services and
resources to access this server".
• Fill in a connection Name, and select the "Key Vault name", which should be the Key
Vault you have just created.
• Enter a name, and Select the Subscription, Server name, and collection.
• For this database, click on the new scan (a lot of Cs with a little pencil).
• Important: change the credential to the credential you have set up earlier.
• Unless you want information such as Stored Procedures executions, turn off Lineage
extraction.
• You can delete the reference to the database, or click on "View details".
• You can view the results of the scans by going to the Data catalog.
• You can filter by Object Type, Classification (such as Person's Name), Contact,
Label or Assigned Terms.
41. Implement Azure SQL Database ledger
• You may have data that you need to know has not been tampered with – for example, in the financial
and medical field.
• A Database Ledger protects your data:
• Preserves historical data, by maintaining previous values in a history table, which can
support T-SQL queries for auditing and forensics.
• Manages the process transparently, not requiring application changes.
• Provides cryptographic (secure communication techniques) proof of data to auditors,
reducing the time needed to audit data.
• Any modification is hashed (cryptographically using SHA-256), to create a root hash.
• Root hashes are stored in blocks, which are closed after 30 seconds or 100,000
transactions.
• This block is then hashed along the root hash of the previous block, forming a
blockchain.
• The latest block hash is called the "database digest".
• They can be stored in immutable Azure Blob storage (Write Once, Read
Many or WORM) or Azure Confidential Ledger.
• You can then verify the database's integrity by comparing the database
digest hash against the database calculated hashes.
• You can create ledger databases in SQL Server 2022 and Azure SQL Database.
• You can create "updatable ledger tables". When doing so, the following are created:
• The table itself
• It includes the 4 GENERATED ALWAYS columns ledger_start/end_transaction_id and
ledger_start/end_sequence_number.
• The transaction_id columns are the unique transaction ID (which may contain
multiple rows).
• The sequence_number shows the order the values are inserted in each transaction
(restarting at zero for each transaction).
• A history table, showing the previous version of a row when it has been updated or deleted.
• The 4 GENERATED ALWAYS columns are also created in this table.
• Data cannot be deleted from this table.
• If you don't give it a name, it will generally have the suffix
.MSSQL_LedgerHistoryFor_(GUID).
• A view, which joins the updatable ledger table with the history table.
• It shows the transaction ID, together with whether it was a DELETE or INSERT (an
UPDATE is both).
• Microsoft recommends querying the history of changes using the ledger view,
instead of the history table.
• You can also create "append-only ledger tables".
• You can insert data.
• Updates and deletions are denied, even by system administrators or DBAs.
• You get the error message "Updates are not allowed for the append only Ledger
table 'NAMEOFTABLE'."
• No history table is created, as there are no updates/deletes. However, two GENERATED
ALWAYS columns are automatically added to the main table: ledger_start_transaction_id
and ledger_start_sequence_number.
• A view is created that provides information about the transactions and the user who inserted
the data. However, it is more helpful for updatable ledger tables than for append-only ones
(where you cannot UPDATE or DELETE), and is provided for consistency.
• You can also create ledger databases.
• All your tables are ledger tables (either Updatable or Append-only).
• By default, every table is an Updatable ledger table.
• To do this when creating a database in the Azure Portal:
• go to Security – Ledger, click "Configure ledger", and select "Enable for all future
tables in this database".
• You can also "Enable automatic digest storage", to store the digests automatically in
an Azure Storage account or Azure Confidential Ledger.
• To do this in the Azure Portal for all future tables:
• Go to the database in Azure Portal, and go to Security – Ledger, and select "Enable
for all future tables in this database".
• To do this in T-SQL, end the CREATE DATABASE command with "WITH LEDGER = ON"
• Transaction and block data is stored in:
• sys.database_ledger_transactions – information about each transaction, and
• sys.database_ledger_blocks – a row for every block.
• To create ledger tables in T-SQL:
• You need to have the ENABLE LEDGER permission.
• To create an updatable table in T-SQL, add at the end of the CREATE TABLE statement:
• WITH (SYSTEM_VERSIONING = ON
• (HISTORY_TABLE = [Schema].[TableName]),
• LEDGER = ON);
• Note – LEDGER = ON is optional for ledger databases.
• To create an append-only ledger table in T-SQL, use:
• WITH (LEDGER = ON (APPEND_ONLY = ON));
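• A minimal end-to-end sketch of an append-only ledger table (the table name and columns are assumed):
• CREATE TABLE dbo.AuditEvents
• (EventId INT NOT NULL,
• EventText NVARCHAR(200) NOT NULL)
• WITH (LEDGER = ON (APPEND_ONLY = ON));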
• You cannot convert existing (non-ledger) tables to ledger tables.
• You would need to create new ledger tables, copy the data across, and then (optionally)
rename the ledger tables. You can copy using:
• The stored procedure sp_copy_data_in_batches @source_table_name = N'NAME',
@target_table_name = N'NAME'.
• This splits the copy operation into batches of 10,000-100,000 rows per transaction.
As this is done in parallel, this can speed the copying.
• Alternatively, you can use SELECT INTO or BULK INSERT.
• To verify the ledger database, use:
• T-SQL
• DECLARE @digest_locations NVARCHAR(MAX) = (SELECT * FROM
sys.database_ledger_digest_locations FOR JSON AUTO, INCLUDE_NULL_VALUES);
• SELECT @digest_locations AS digest_locations;
• BEGIN TRY
• EXEC sys.sp_verify_database_ledger_from_digest_storage @digest_locations;
• SELECT 'Ledger verification succeeded.' AS Result;
• END TRY
• BEGIN CATCH
• THROW;
• END CATCH
• This script can be found in the Azure portal – [Name of database] – Security – Ledger – Verify
database.
• If successful, you get a message. The output includes:
• path – the digest locations,
• last_digest_block_id – the last block ID, and
• is_current – whether the "path" is the latest (true) or previous (false) location.
• If unsuccessful, the database has been tampered with. Ideally, you should restore to a point
in time that can be verified, and then manually recreate any later transactions by
investigating backups.
• CREATE USER User1 WITHOUT LOGIN;
• CREATE USER User2 WITHOUT LOGIN;
• CREATE USER Boss WITHOUT LOGIN; -- also needed, as [Boss] is granted permissions below
• -- and a Table with values:
• GO --Create schema must be the first statement in a batch
• CREATE SCHEMA Customers
• GO
• CREATE TABLE Customers.Customers
• (Customer nvarchar(10),
• Status nvarchar(10),
• UserLead nvarchar(10))
• INSERT INTO Customers.Customers VALUES
• ('John', 'A', 'User1'), ('Fred', 'B', 'User2'), ('Trevor', 'A', 'Boss') , ('Alfred', 'B', 'Boss')
• -- Function
• GO
• CREATE SCHEMA RLS;
• GO
• CREATE FUNCTION RLS.rls_security(@User as nvarchar(10), @Status as
nvarchar(10)) RETURNS TABLE
• WITH SCHEMABINDING
• AS
• RETURN SELECT 1 AS rls_security_result
• WHERE @User = USER_NAME() or (USER_NAME() = 'Boss' AND @Status = 'A') ;
• GO
• -- Add SELECT permissions to the function and the table:
• GRANT SELECT ON RLS.rls_security TO [Boss]
• GRANT SELECT ON RLS.rls_security TO [User1]
• GRANT SELECT ON RLS.rls_security TO [User2]
• GRANT SELECT ON Customers.Customers TO [Boss]
• GRANT SELECT ON Customers.Customers TO [User1]
• GRANT SELECT ON Customers.Customers TO [User2]
• GRANT INSERT ON Customers.Customers TO [Boss]
• -- Create the security policy
• CREATE SECURITY POLICY RLSPolicy
• ADD FILTER PREDICATE RLS.rls_security(UserLead, Status)
• ON Customers.Customers,
• ADD BLOCK PREDICATE RLS.rls_security(UserLead, Status)
• ON Customers.Customers AFTER INSERT
• WITH (STATE = ON); -- To enable the policy
• GO
• -- Then you can test:
• EXECUTE AS USER = 'User1'
• SELECT * FROM Customers.Customers
• REVERT
• -- Second test
• EXECUTE AS USER = 'Boss'
• SELECT * FROM Customers.Customers
• INSERT INTO Customers.Customers
• VALUES ('Sally', 'A', 'User1')
• SELECT * FROM Customers.Customers
• INSERT INTO Customers.Customers
• VALUES ('Susan', 'B', 'User1')
• REVERT
• -- Turn off security policy
• ALTER SECURITY POLICY RLSPolicy
• WITH (STATE = OFF);
• -- Do third test
• EXECUTE AS USER = 'User1'
• SELECT * FROM Customers.Customers
• REVERT
• EXECUTE AS USER = 'Boss'
• SELECT * FROM Customers.Customers
• INSERT INTO Customers.Customers
• VALUES ('Susan', 'B', 'User1')
• SELECT * FROM Customers.Customers
• REVERT
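• Expected behaviour (assuming the default case-insensitive collation): in the first test, User1 only
sees rows where UserLead = 'User1'; Boss sees the 'Boss' rows plus all Status 'A' rows. Boss's
INSERT of ('Sally', 'A', 'User1') passes the block predicate, but ('Susan', 'B', 'User1') is blocked.
Once the policy is OFF (third test), all rows are visible again and the INSERT succeeds.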
• You receive alerts on suspicious database activities (including access and query patterns),
possible vulnerabilities, and SQL injection attacks.
• Alerts are integrated with Microsoft Defender for Cloud, which includes recommended
actions.
• You may wish to enable auditing, for writing database events to an Azure log.
• See separate "configure server and database audits" topic for more details.
Monitor, Configure and Optimize Database Resources
• To set it up:
• In the Azure Portal, go to the SQL Server – Security – Microsoft Defender for Cloud.
• Click on Configure.
• In the "Advanced Threat Protection Settings", click "Add your contact details to the
subscription's email settings in Defender for Cloud", and provide which roles should receive
the email notifications, together with any additional address.
• The emails will provide information on the activities, database, server and
application name, and the event time.
• There will be links for "View recent SQL alerts", Investigation steps and Remediation
steps.
• If you want to, click the "Notify about alerts with the following severity (or higher)", and
select a level.
• You will see alerts in the Overview – Notifications, and in Security – Advanced Threat Protection.
o They are stored in a time-series database which is suitable for alerting and fast detection of
issues.
o Select:
▪ Scope,
▪ Metric Namespace,
o To change the date/time range, go to the top-right hand corner (where it says "Local time").
▪ You can also change the "Show time as" from Local to UTC/GMT, and change the
"Time granularity" (how often it does the aggregation).
▪ Only a maximum of 30 days is visible at once, but you can use the left/right arrows
to go back up to 93 days in the past.
o You can:
▪ Change the color of a line (by clicking on the color in the legend – not the line, but
the legend).
▪ Split or filter a metric, if it has a dimension (not applicable to Azure SQL Database).
▪ Add a second metric onto the same chart (e.g. "Data space allocated").
▪ Change the chart type (from Line to Area, Bar, Scatter and Grid).
▪ Move the chart up, down, clone it, delete it, or see more settings (in the … to the
right-hand side).
• Logs are events in the system, which may contain other (non-numerical) data and may be structured
or free-form, with a timestamp.
o Hardware/compute/memory,
o Client applications.
• Azure Monitor allows you to monitor resource metrics, such as processor, memory and I/O resources.
o You may need more CPU or I/O resources if you have high DTU/processor percentage or high
I/O percentage. Alternatively, your queries may need to be optimized.
▪ You get a row for every 15 seconds for about the past hour.
▪ You get a row showing the hourly summary of resource usage data for user
databases. Historical data is retained for 90 days.
▪ However, this is currently in a preview state. It says "Do not take a dependency on
the specific implementation of this feature because the feature might be changed
or removed in a future release."
o Subscription
▪ Azure Activity log includes service health records and records of configuration
changes.
▪ Azure Service Health has information about your Azure services’ health.
o Resources
▪ Resource logs are created internally regarding the internal operation of an Azure
resource.
▪ Azure Diagnostic extension for Azure VM, when enabled, submits logs and metrics
▪ Log Analytics agents can be installed into your Windows or Linux VMs, running in
Azure, another cloud, or on-prem
o Other sources
▪ In Application code, you can enable Application Insights to collect metrics and logs
relating to the performance and operations of the app.
o Blocked by firewall,
o Deadlocks,
o CPU %,
o Data I/O % or Log I/O %,
o Sessions %,
o Workers %,
• Space/components used
o DTU percentage – combines CPU, memory and I/O (DTU-based model only); vCore-based models use separate CPU/memory/I/O percentage metrics instead
o When high, query latency increases and queries may time out.
▪ If this hits 100%, then INSERT, UPDATE, ALTER and CREATE operations will fail
(SELECT and DELETE are fine).
o Data space used percent – if this is getting high, then upgrade to the next service tier, shrink
the database, or scale out using sharding.
▪ This is used for caching. If you get out-of-memory errors, increase the service tier or
compute size, or optimize queries.
• Connections/requests used
o Sessions percentage
o Worker percentage
o Top queries per duration or execution count (Custom – Metric type: Duration or Execution
Count)
47. configure and monitor activity and performance at the infrastructure, server,
service, and database levels
• See topic 38.
o Metrics,
o Performance Overview,
o Performance recommendations, or
o https://docs.microsoft.com/en-us/azure/azure-sql/database/monitoring-with-dmvs
48. Monitor by using SQL Insights
• SQL Insights uses DMVs to monitor health, diagnose problems, and tune performance.
• It supports:
• SQL Server 2012 or later,
• Azure SQL Database (but not with elastic pools, or Basic, S0, S1 or S2 service tiers),
• Azure SQL Managed Instance,
• SQL Server on Azure VMs.
• Data can be gathered for the serverless tier, but doing so will prevent the database from pausing.
• It does not support:
• Monitoring of more than one secondary replica per database,
• Authentication with Azure AD.
• Monitoring agents on dedicated VMs connect to your SQL resources and obtain the data.
• Microsoft recommends 1 Standard_B2s VM for every 100 connection strings.
• This data is then stored in a Log Analytics workspace, and you can use Azure Monitor for
analysis.
• You can view this data from the SQL Insights workbook template or through log queries.
• The costs for SQL Insights are for the dedicated VMs, the Log Analytics workspaces, and any
alert rules.
• To enable SQL Insights:
• Create a Log Analytics workspace to store the data.
• Create a login/user and grant the required permissions:
• In Azure SQL Database, in the relevant (not "master") database, create a user with
a strong password, and grant the required permissions:
• CREATE USER [SQLInsightsUser] WITH PASSWORD = N'P@ssw0rdStr0ng';
• GO
• GRANT VIEW DATABASE STATE TO [SQLInsightsUser];
• In Azure Managed Instance and SQL Server on a VM:
• USE master
• GO
• CREATE LOGIN [SQLInsightsUser] WITH PASSWORD = N'P@ssw0rdStr0ng';
• GO
• GRANT VIEW SERVER STATE TO [SQLInsightsUser];
• GO
• GRANT VIEW ANY DEFINITION TO [SQLInsightsUser];
• Create an Azure Virtual Machine:
• Operating system: Ubuntu 18.04 using Azure Marketplace image.
• Recommended VM: at least Standard_B2s (2 CPUs, 4GB)
• Not currently valid in South Africa West, US Gov Non-Region, DoD Central or East,
China Non-Regional, China East, China North, China North 2, West India.
• Then you need to configure your database:
• For Azure SQL Database, in the Azure Portal, go to Set server firewall, and then add
a firewall rule.
• For Azure SQL MI, either connect inside the same Vnet, or connect in a different
Vnet using Azure Vnet peering or Vnet-to-Vnet VPN gateway.
• For on-premises, you need to use a Site-to-site VPN connection, or an Azure
ExpressRoute connection.
• You can choose to store your SQL user login passwords in a Key Vault.
• To create your SQL monitoring profile:
• In the Azure portal, go to Monitoring, then Insights – SQL.
• Then go to Manage profile and click "Create new profile".
• Enter:
• Name (cannot be edited later),
• Log Analytics workspace,
• Collection frequency (.5, 1, 2, 5 or 10 minutes).
• The higher the frequency and/or the more measures, the higher the cost.
• What to collect:
• Wait statistics,
• Memory clerks,
• Database I/O,
• Server properties,
• Performance counters,
• Requests,
• Schedulers,
• For Azure SQL Database and Azure SQL MI:
• Resource statistics,
• Resource governance.
• For SQL Server (on VM or on prem):
• Volume space,
• SQL Server CPU,
• Availability Replica States and
• Availability Database Replicas.
• Then click "Create monitoring profile", then "Create SQL monitoring profile".
• Add a monitoring machine
• Click on "Add monitoring machine".
• Select your VM.
• Add connection strings.
• For Azure SQL Database, enter in the format:
• "sqlAzureConnections": [
"Server=mysqlserver.database.windows.net;Port=1433;Database=mydatabase;User Id=$username;Password=$password;"
• ]
• Note: if you are not using Azure Key Vault, then you don't need the
semicolon or the dollar sign surrounding "$password;".
• For Azure SQL Managed Instance, enter in the format:
• "sqlManagedInstanceConnections": [
• "Server= mysqlserver.<dns_zone>.database.windows.net;Port=1433;User
Id=$username;Password=$password;"
• ]
• For SQL Server, enter in the format:
• "sqlVmConnections": [
• "Server=SQLServerInstanceIPAddress;Port=1433;User
Id=$username;Password=$password;"
• ]
• Setting up monitoring may take a few minutes. Afterwards, the Status column should change to
"Collecting".
• To open SQL Insights:
• In the Azure portal, go to Azure Monitor, then Insights – SQL, and select a tile.
• You can enable alert rules by:
• Clicking on "Alerts".
• Go to Alert templates, find a template, and click "Create rule".
• Select:
• the alert threshold (in percent),
• the name and severity for the alert, and
• an action group, creating notifications and alerts.
• Click "Enable alert rule", then "Deploy alert rule".
o An Azure Data Explorer cluster – a highly scalable data service for fast input and analytics, or
• You can query data using KQL (Kusto) or T-SQL, in Azure Data Explorer dashboards, Power BI, Grafana
or Excel.
• Creating watchers and dashboards is free – there are no per-resource or per-user charges.
However, you do have to pay for storage.
o Estate dashboards – a high level view:
▪ You can use filters to filter subscription, resource group and resource name.
▪ Active sessions,
▪ Backup history,
o Filters by time.
• You can also download data to Excel and query the data in KQL using Azure Data Explorer.
• You will need to set up access in each database target. The script can be generated in Configuration –
SQL targets, by clicking “+ Add”. It should be run in the “master” database.
• You can have either private or public connectivity from the database watcher to the databases. To
manage a private endpoint, go to Configuration – Managed private endpoints, and click on “+ Add”.
• You can click on Monitoring – Dashboards to show all of the monitored resources.
• The datasets which are captured for SQL Database, Managed Instance and Elastic Pool are:
o Memory utilization
o Out-of-memory events
o Resource utilization
o SOS schedulers
o Storage IO
o Wait statistics
• The datasets which are captured for SQL Database and Managed Instance (not Elastic Pool) are:
o Active sessions
o Backup history
o Connectivity
o Index metadata
o Missing indexes
o Session statistics
o Table metadata
• The datasets which are captured for SQL Managed Instance (not Elastic Pool) are:
o It contains 3 stores:
o Fix queries which are regressed due to changes in the execution plan.
o What are the Top X queries, by execution time, memory consumption, waiting on resources?
o Disabled by default for new SQL Server databases (e.g. on a VM), but enabled by default in Azure SQL Database.
o In SSMS:
▪ Go to the Query Store tab.
o In T-SQL:
• Options:
▪ In T-SQL:
▪ You can choose from 1, 5, 10, 15, 30, 60 or 1440 minutes. A query will have a
maximum of 1 row collected for this time period.
o MAX_STORAGE_SIZE_MB = 500,
◼ To prevent it from reaching the limit, increase the
MAX_STORAGE_SIZE_MB. If you can't allocate extra
space, then decrease the Data Flush time.
o DATA_FLUSH_INTERVAL_SECONDS = 3000,
▪ Use a higher value if you don't have a large number of queries being generated.
However, if the SQL Server crashes or restarts, anything not yet flushed will not be
saved.
▪ Having a lower value may have a negative impact on performance, as it will save
more often.
o SIZE_BASED_CLEANUP_MODE = AUTO,
o OPERATION_MODE = READ_WRITE,
▪ You can automatically delete Query data that you don't need.
o INTERVAL_LENGTH_MINUTES = 15,
o QUERY_CAPTURE_MODE = AUTO,
o MAX_PLANS_PER_QUERY = 1000,
o WAIT_STATS_CAPTURE_MODE = ON);
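• Putting these options together – a minimal sketch (the database name is illustrative):
o ALTER DATABASE [MyDatabase] SET QUERY_STORE = ON
o (OPERATION_MODE = READ_WRITE,
o MAX_STORAGE_SIZE_MB = 500,
o DATA_FLUSH_INTERVAL_SECONDS = 3000,
o INTERVAL_LENGTH_MINUTES = 15,
o SIZE_BASED_CLEANUP_MODE = AUTO,
o QUERY_CAPTURE_MODE = AUTO,
o MAX_PLANS_PER_QUERY = 1000,
o WAIT_STATS_CAPTURE_MODE = ON);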
• To clear the Query Store: ALTER DATABASE [MyDatabase] SET QUERY_STORE CLEAR;
52. identify sessions that cause blocking
• Blocking can occur when:
o Session 1 locks a resource (e.g. row, page or entire table), and then
o Session 2 requests a conflicting lock on the same resource, and has to wait until Session 1 releases it.
o Explicit transactions require you to add the BEGIN, and COMMIT/ROLLBACK TRANSACTION.
• Session 1
o BEGIN TRANSACTION
o UPDATE [SalesLT].[Address] SET City = 'Seattle' WHERE AddressID = 9; -- SET/WHERE values are illustrative
o -- no COMMIT yet, so the locks are still held
• Session 2 (will block until Session 1 commits or rolls back)
o BEGIN TRANSACTION
o UPDATE [SalesLT].[Address] SET City = 'Portland' WHERE AddressID = 9;
• To view locks, use sys.dm_tran_locks.
• To view blocking:
o SELECT session_id, blocking_session_id, wait_type, wait_time,
o DB_NAME(database_id) as [database],
o open_transaction_count
o FROM sys.dm_exec_requests
o WHERE blocking_session_id <> 0;
• For the session_id, look at the numbers in brackets at the top of SSMS.
• To reduce blocking, you can change the TRANSACTION ISOLATION LEVEL of a session:
o SET TRANSACTION ISOLATION LEVEL …
o READ COMMITTED – no dirty reads, as it will not read data that has been modified
but not committed.
o SNAPSHOT – the data read remains the same until the end of the transaction. No blocks,
unless the database is being recovered.
o SERIALIZABLE – no dirty reads, as it will not read data that has been modified but
not committed. However, it blocks updates/inserts.
o DBCC USEROPTIONS
▪ DML statements start generating row versions – allows snapshots but doesn't
enable it.
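• A minimal sketch of using SNAPSHOT (database/table names are illustrative; the database option must be ON first):
o ALTER DATABASE MyDatabase SET ALLOW_SNAPSHOT_ISOLATION ON;
o SET TRANSACTION ISOLATION LEVEL SNAPSHOT;
o BEGIN TRANSACTION;
o SELECT * FROM [SalesLT].[Address]; -- reads a consistent snapshot, without blocking writers
o COMMIT;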
▪ Observing Memory Pressure in your database
▪ Profiler Equivalents,
- TSQL_Locks (deadlocks),
▪ Query Execution,
▪ System Monitoring
o Such as "session_id".
o Event Tracing for Windows (ETW)
o Event Counter
▪ Counts how many times each event occurs. Processes data synchronously
o Histogram
▪ Counts how many times each event occurs, for event fields and actions separately
(asynchronous).
o Pair Matching
o Ring Buffer
53. determine the appropriate Dynamic Management Views (DMVs) to gather query
performance information
o SELECT *
o FROM sys.dm_exec_cached_plans AS cp
o Extended Events
▪ Lightweight profiling
o -- top 5 queries by average CPU time (reconstructed from the standard DMV example):
o SELECT TOP 5 query_stats.query_hash AS "Query Hash",
o SUM(query_stats.total_worker_time) / SUM(query_stats.execution_count) AS "Avg CPU Time",
o MIN(query_stats.statement_text) AS "Statement Text"
o FROM
o (SELECT QS.*,
o SUBSTRING(ST.text, (QS.statement_start_offset/2) + 1,
o ((CASE statement_end_offset
o WHEN -1 THEN DATALENGTH(ST.text)
o ELSE QS.statement_end_offset END
o - QS.statement_start_offset)/2) + 1) AS statement_text
o FROM sys.dm_exec_query_stats AS QS
o CROSS APPLY sys.dm_exec_sql_text(QS.sql_handle) AS ST) AS query_stats
o GROUP BY query_stats.query_hash
o ORDER BY 2 DESC;
o -- top 50 queries by total worker time (reconstructed):
o SELECT
o highest_cpu_queries.plan_handle,
o highest_cpu_queries.total_worker_time,
o q.[text]
o FROM
o (SELECT TOP 50 qs.plan_handle, qs.total_worker_time
o FROM sys.dm_exec_query_stats qs
o ORDER BY qs.total_worker_time DESC) AS highest_cpu_queries
o CROSS APPLY sys.dm_exec_sql_text(plan_handle) AS q
o ORDER BY highest_cpu_queries.total_worker_time DESC;
o SELECT TOP 10 req.session_id, req.start_time, cpu_time AS 'cpu_time_ms',
o OBJECT_NAME(ST.objectid, ST.dbid) AS 'ObjectName',
o SUBSTRING(REPLACE(REPLACE(SUBSTRING(ST.text, (req.statement_start_offset / 2) + 1,
o ((CASE statement_end_offset WHEN -1 THEN DATALENGTH(ST.text)
o ELSE req.statement_end_offset END - req.statement_start_offset) / 2) + 1),
o CHAR(10), ' '), CHAR(13), ' '), 1, 512) AS statement_text
o FROM sys.dm_exec_requests AS req
o CROSS APPLY sys.dm_exec_sql_text(req.sql_handle) AS ST
o ORDER BY cpu_time DESC; -- the FROM/ORDER BY clauses are completed here; the SELECT list is as in the notes
o USE master
o GO
o Small column size (the best are numeric, but smaller text columns are OK too).
o Use columns which are in WHERE (SARGable columns) and JOIN clauses.
▪ If using LIKE '%text%', then an index (apart from a full-text index) will not help.
▪ Additional columns can be included using INCLUDE (covered queries). This can make
the index key smaller and more efficient.
o Clustered or Non-clustered?
▪ Only one clustered index per table. It is also used in PRIMARY KEYs. It re-sorts the
table. Use for frequently used queries and range queries.
- Should be used with the UNIQUE property – but it is possible to create one
which doesn't.
- IDENTITY
- Frequently used.
▪ As many non-clustered indexes as you want. It creates a separate index.
o If you INSERT, UPDATE, DELETE or MERGE, then all indexes need to be adjusted.
• Create in T-SQL:
• Create in SSMS:
o Right-hand click on Indexes in the relevant table and select "New Index" – "[Non-]Clustered
Index".
▪ In Azure SQL Database, only gives information about databases to which user has
access.
▪ In creating the index, put equality before inequality – both of these should be the
key – and INCLUDE the included columns.
o -- missing-index query, reconstructed from the fragments in these notes:
o SELECT
o mig.index_group_handle
o , mid.index_handle
o , migs.*
o , mid.database_id
o , mid.[object_id]
o , mid.equality_columns
o , mid.inequality_columns
o , mid.included_columns
o FROM sys.dm_db_missing_index_groups AS mig
o INNER JOIN sys.dm_db_missing_index_group_stats AS migs
o ON migs.group_handle = mig.index_group_handle
o INNER JOIN sys.dm_db_missing_index_details AS mid
o ON mig.index_handle = mid.index_handle
o WHERE mid.database_id = DB_ID()
o AND mid.inequality_columns IS NOT NULL;
o Microsoft says that the Query Optimizer typically selects the best execution plan, so only use
this as a last resort.
o KEEPFIXED PLAN
▪ The query won't be recompiled because the statistics change. It will only recompile
if the schema of the underlying tables changes or sp_recompile is run against these
tables.
o KEEP PLAN
o ROBUST PLAN
▪ Creates a plan that works for the maximum potential row size, possibly at the
expense of performance.
o or
• Otherwise, the stored procedure will be optimised as per the first running.
▪ or use
- GO
◼ or SHOWPLAN_TEXT
o Nested Loops joins
▪ Use when
- Input1 is small.
- Input2 is large.
- Input2 is indexed on the join.
▪ It uses the top input (in the execution plan) and takes 1 row.
o Merge joins
▪ Use when
- Input1 and Input2 are sorted on their join – or if not, possibly when Input1
and Input2 are of a similar size. Then, the Sort might be worth the time
compared with the Hash Join.
o Hash joins
▪ Also used in the middle of complex queries, as intermediate results are often not
indexed or suitably sorted.
o Adaptive joins – this converts into a Hash Join or Nested Loops join after the first input has
been scanned, when it uses Batch mode.
o Can you narrow down the columns? If so, maybe you can then use indexes.
▪ Is there a Sort? It's expensive – do you really need it? If so, could you have an Index
which has already sorted on those columns?
▪ Do you use parameters? If so, and performance varies depending on the parameter
values, can you add WITH RECOMPILE to the stored procedure, or use OPTION
(RECOMPILE) for queries?
- don't use ISNULL(X, 'Y') function – use (X IS NULL or X = 'Y')
▪ If so, could you use an INCLUDE with the index? This writes the data into the index,
but in a separate part of the index away from the Key – so it's quicker, but doesn't
slow down the index much.
- It's also useful for Unique indexes – INCLUDE columns are included in the
Uniqueness test.
▪ This will increase the row size, increasing time to retrieve the data.
• Different joins
o Are you using a Hash Join when, with some changes, a Merge Join or Nested Loop could be
used?
▪ ON Qry.query_text_id = Txt.query_text_id ;
• To use in SSMS:
o Note – you can click on "Configure" to change the time period. You can also click on "Track
the selected query in a new window".
▪ Regressed Queries
- Has your query speed got worse? Have a look at Duration, CPU Time,
Logical Reads, Physical Reads, and more.
- Are the resources used more during particular days, or day/night?
- The most extreme values in Duration, Execution Count, CPU Time etc.
- You can click on the categories (e.g. High Memory, Lock, Buffer I/O or CPU
waits) to get detail on that category.
▪ Tracked Queries
57a. assess database performance by using Intelligent Insights for Azure SQL Database
and Managed Instance
• Not available in some regions.
• Compares the current database workload (the last hour) with the previous 7 days.
o Uses data from the Query Store (see topic 48), which is enabled by default in Azure SQL
Database.
• Uses artificial intelligence on operational thresholds to monitor the database, detecting issues
with high wait times, critical exceptions, and query parameterization.
o Impacted metrics include increased query duration, excessive waiting, and timed-out or
errored-out requests.
o Includes a “root cause analysis” in a readable form. It may also contain a recommendation.
o A Log Analytics workspace can be used with Azure SQL Analytics (a cloud-based-only
monitoring solution) to see insights in the Azure portal. This is the typical way to view insights.
▪ To add Azure SQL Analytics, go to Home in the Azure portal, click “+Create a
resource”, and search for “Azure SQL analytics”
• How to connect
o Go to the database in the Azure Portal, and go to Monitoring – Diagnostic settings – Add.
▪ Add all the Category Details (log and metric), and in “Destination details” check
“Send to Log Analytics workspace”.
▪ DTUs, worker threads and login sessions reaching resource limits for Azure SQL
Database.
o Workload increase
o Memory pressure
o Data locking
▪ When there are more Parallel workers than there should have been.
o Missing indexes
o Pricing tier downgrade.
• In Azure SQL Database, in the Azure Portal – you can go to Intelligent Performance – Automatic
tuning.
o You can click on a “Create index” or “Drop index” and implement it.
• For VMs, you can also use the Database Engine Tuning Advisor.
o You then need to give it details such as the Query Store or a T-SQL file (.sql extension) with
your workload.
o The statistics contain information about the distribution of values in tables or indexed views’
columns.
o This enables the Query Optimizer to create better quality plans (e.g. seek vs scan).
• Usually, the Query Optimizer determines when statistics might be out of date and then updates them.
However, you may wish to manually update them if:
o After maintenance operations, such as a bulk insert (but not rebuilding or reorganizing an
index, as they do not change the data distribution).
• The stored procedure sp_updatestats updates statistics for all user-defined and internal tables.
▪ WITH FULLSCAN – this scans all of the rows. It is the same as SAMPLE 100 PERCENT.
o You can append this with
• To use database DMVs, you need to have VIEW DATABASE STATE permission on the database.
▪ SLO is the Service Level Objective, which includes the deployment option, service tier,
hardware, and compute amount.
▪ You get a row for every 15 seconds for about the past hour.
• Waiting on resources:
▪ Returns information about all the waits encountered by threads that executed.
- Governor
o LOG_RATE_GOVERNOR – waits for Azure SQL Database
o INSTANCE_LOG_GOVERNOR – MI waits
- IO
- Parallel
o Possible blocking
• To use Server-scoped DMVs, you need VIEW SERVER STATE permission on the server.
• In addition:
▪ msdb, tempdb and model are not listed in Azure SQL Database.
o SELECT SERVERPROPERTY('EngineEdition');
▪ Returns 5 for SQL Database, 8 for Managed Instance, and <5 for on-prem/VM.
o Runs DBCC CHECKALLOC, which checks the consistency of disk space allocation structures
o Runs DBCC CHECKTABLE for all tables and indexed views. The DBCC checks the integrity of all
pages and structures in a particular table or indexed view, including:
▪ Every row in a table has a matching row in a nonclustered index (and the other way
round), and is in the correct partition.
o Runs DBCC CHECKCATALOG which checks for catalog consistency, using an internal database
snapshot to provide transaction consistency to perform these checks.
▪ Does not work on tempdb or Filestream data (binary large objects or BLOBs on the
file system).
o Validates the contents of every indexed view in the database, and link-level consistency
between table metadata and file system directories and files.
o Relevant Database
- Do only repairs which have no chance of data loss. Includes quick repairs
(e.g. missing rows in non-clustered indexes), and time-consuming repairs
(building an index).
• WITH Arguments
o TABLOCK – obtains exclusive locks, which will speed it up, but reduce concurrency.
o ESTIMATEONLY – No database checks are done, but displays the amount of tempdb space
needed to do it.
o PHYSICAL_ONLY – limits checking to page structure integrity, record header integrity, and
consistency of the database.
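• A minimal sketch (the database name is illustrative):
o DBCC CHECKDB (N'MyDatabase') WITH NO_INFOMSGS, PHYSICAL_ONLY;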
• Best practices:
o BEGIN TRANSACTION beforehand, so the user can confirm that they want to accept the
results.
o After using DBCC CHECKDB with a repair option, you should inspect the referential integrity
of the database using DBCC CHECKCONSTRAINTS. This checks the integrity of a specified
constraint, all constraints in a table, or all constraints in the database.
o Go to “Automatic tuning”.
o This says that the last good plan should be forced whenever a plan-change regression is
found – when the estimated gain is >10 seconds, or the number of errors in the new plan is
greater than in the recommended plan.
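• The same setting can be applied in T-SQL – a minimal sketch:
o ALTER DATABASE CURRENT SET AUTOMATIC_TUNING (FORCE_LAST_GOOD_PLAN = ON);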
• In Azure SQL Database only, you can automate index maintenance by:
o You can change “Create Index” and “Drop Index” from Inherit (from server) to OFF or ON
[the default for servers is OFF for both of these]. These will override the server settings.
o Indexes will only be auto-created if the CPU, data I/O and log I/O are lower than 80%.
o The performance of queries using the auto-created index will be reviewed. If it doesn’t
improve performance, the index is automatically dropped.
63. configure Resource Governor for performance
• Resource Governor is used in Azure SQL Database. However, it is not configurable.
• In VMs and Azure SQL MI, you can use Resource Governor to balance resources used by different
sessions.
o You can divide resources (CPU, physical I/O, and memory) differently, based on which
workload it is in. This can improve performance on critical workloads.
• Terminology:
o Resource pool – the physical resources. Two resource pools are created when SQL Server is
installed: internal and default.
▪ Without Resource Governor classification, all new sessions are classified into the default
workload group, and system requests into the internal workload group.
o Workload group – a container for requests which have similar criteria, and
o Classifier function – a user-defined function which assigns each new session to a workload group.
o In SSMS
o In T-SQL
▪ GO
o In SSMS
▪ Double-click the empty cell in the Name, and enter the resource pool Name.
o In T-SQL:
▪ GO
▪ GO
o Settings:
▪ MIN_CPU_PERCENT and MAX_CPU_PERCENT
- e.g. Department A has min of 60%, and Department B has max of 40%.
▪ CAP_CPU_PERCENT
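• A minimal sketch of creating a pool with these settings (the pool name is illustrative):
o CREATE RESOURCE POOL poolDeptA
o WITH (MIN_CPU_PERCENT = 20, MAX_CPU_PERCENT = 60, CAP_CPU_PERCENT = 80);
o GO
o ALTER RESOURCE GOVERNOR RECONFIGURE;
o GO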
• Workload Groups:
o In SSMS
▪ Go down to the "Workload groups for resource pool", and enter a name, with any
other values.
o In T-SQL
▪ CREATE WORKLOAD GROUP myGroup -- or ALTER, if you wish to change it, or DROP
to delete it.
▪ GO
o CREATE FUNCTION dbo.fnTimeClassifier() -- the function name is illustrative
o RETURNS sysname
o WITH SCHEMABINDING
o AS
o BEGIN
o if DATEPART(HOUR,GETDATE())<8 or DATEPART(HOUR,GETDATE())>17
▪ BEGIN
- RETURN 'gOutsideOfficeHours';
▪ END
o RETURN 'gInsideOfficeHours';
o END
o GO
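• Once created, the classifier function has to be registered, and Resource Governor reconfigured – a
minimal sketch (assuming the illustrative function name above):
o ALTER RESOURCE GOVERNOR WITH (CLASSIFIER_FUNCTION = dbo.fnTimeClassifier);
o ALTER RESOURCE GOVERNOR RECONFIGURE;
o GO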
• T-SQL
▪ sys.dm_resource_governor_resource_pools – returns information about the current
resource pool state, the current configuration of resource pools, and resource pool
statistics.
▪ sys.dm_resource_governor_workload_groups – returns workload group statistics and
the current in-memory configuration of the workload group.
• GLOBAL_TEMPORARY_TABLE_AUTO_DROP
• LAST_QUERY_PLAN_STATS
• LEGACY_CARDINALITY_ESTIMATION
o The query optimizer cardinality estimation model changed in SQL 2014. Should only be
turned on for compatibility purposes.
o Too high a MAXDOP may cause performance problems when executing multiple queries at
the same time, as it may starve new queries of resources. You could reduce MAXDOP if this
happens.
o The default for new Azure SQL Databases is 8, which is best for most typical workloads.
• OPTIMIZE_FOR_AD_HOC_WORKLOADS
o Stores a compiled plan stub when a batch is compiled for the first time, which has a smaller
memory footprint. When it is compiled/executed again, it will be replaced with a full
compiled plan.
• PARAMETER_SNIFFING
▪ No need to spend time and CPU evaluating. However, may be suboptimal for certain
parameters.
• QUERY_OPTIMIZER_HOTFIXES
▪ So you can have a compatibility level for SQL Server 2012, but have query
optimization hotfixes that were released after this version.
• There are many more, but these are the main ones.
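• A minimal sketch of changing database scoped configurations (run in the relevant database):
o ALTER DATABASE SCOPED CONFIGURATION SET MAXDOP = 4;
o ALTER DATABASE SCOPED CONFIGURATION SET LEGACY_CARDINALITY_ESTIMATION = ON;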
o AUTO_CLOSE ON/OFF
▪ Whether the database is shut down after the last user exits.
o AUTO_CREATE_STATISTICS ON/OFF
▪ Creates statistics on single columns in query predicates, to improve query plans and
performance.
o AUTO_UPDATE_STATISTICS[_ASYNC]
▪ Query Optimizer updates statistics when they are used by a query and might be out-
of-date, after insert/update/delete/merge operations change the data distribution.
_ASYNC specifies whether it is done asynchronously or not.
o AUTO_SHRINK ON/OFF
▪ Shrinks when more than 25% of the file contains unused space. Recommended to
leave OFF.
o READ_ONLY / READ_WRITE
▪ Can users only read from the database (not modify it).
o SINGLE_USER / RESTRICTED_USER / MULTI_USER
▪ Only one user at a time (SINGLE_USER); only members of the db_owner fixed
database role and the dbcreator and sysadmin fixed server roles, any number of
them (RESTRICTED_USER); or all users which have appropriate permissions
(MULTI_USER).
o RECOVERY FULL / BULK_LOGGED / SIMPLE
▪ Changes the recovery option. FULL uses transaction log backups. BULK_LOGGED
only minimally logs certain large-scale (bulk) operations. SIMPLE only allows for
complete backups.
o COMPATIBILITY_LEVEL = 100 (SQL Server 2008 and R2), 110, 120, 130, 140, 150 (SQL Server
2019)
▪ In Azure SQL Database and MI and SQL Server 2014, you cannot set it below SQL
Server 2008 (100).
o ALTER DATABASE MyDatabase MODIFY FILE
o (NAME = NameFile, FILEGROWTH = 40MB); -- or FILEGROWTH = 40%
o AUTOGROW_ALL_FILES
▪ If any file in a filegroup meets the autogrow threshold, all files in the filegroup will
grow.
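• A minimal sketch of setting some of these options (the database name is illustrative):
o ALTER DATABASE MyDatabase SET AUTO_SHRINK OFF;
o ALTER DATABASE MyDatabase SET RECOVERY FULL;
o ALTER DATABASE MyDatabase MODIFY FILEGROUP [PRIMARY] AUTOGROW_ALL_FILES;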
o EXEC sp_spaceused
o FROM sys.database_files
• To view the number of pages used as well as total free space for a particular database, you can use
sys.dm_db_file_space_usage.
▪ Returns space usage information for each data file in the database.
• You can also go to Reports – Standard Reports – Disk Usage on Azure VM.
o There are 7 different features, some of which are also available on lower levels.
• You can disable any of them (except APPROX_COUNT_DISTINCT) for all queries in a single database,
or for a single query:
o All queries: ALTER DATABASE SCOPED CONFIGURATION SET X = OFF. 'X' is the first heading.
• [DISABLE_ ] BATCH_MODE_ADAPTIVE_JOINS
o For Azure SQL Database, and SQL Server 2017 or higher. Needs a Columnstore index in the
query or a table being referenced in the join, or batch mode enabled for rowstore.
o Selects the Join type (Hash Join or Nested Loops Join) during runtime based on actual input
rows, when it has scanned the first input.
o It defines a threshold (where the small number of rows makes a Nested Loops join better
than a Hash join) that is used to decide when to switch to a Nested Loops plan.
o Enabled by default under compatibility level 140 or higher (both SQL Server 2017+ and
Azure SQL Database).
• APPROX_COUNT_DISTINCT
o Provides an approximate COUNT DISTINCT for big data – decreases memory and
performance requirement. It guarantees up to a 2% error rate (within a 97% probability).
o Available in all compatibility levels of Azure SQL Database, and in SQL Server 2019 or higher.
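• A minimal sketch (table and column names are illustrative):
o SELECT APPROX_COUNT_DISTINCT(OrderID) AS ApproxDistinctOrders
o FROM dbo.Orders;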
• BATCH_MODE_ON_ROWSTORE / DISALLOW_BATCH_MODE
o Queries can work on batches of rows instead of one row at a time, when cached.
o This happens automatically when the query plan decides it is appropriate – under
compatibility level 140 for batch mode on columnstore, and 150 (SQL Server 2019+) for
batch mode on rowstore. No changes are required.
• [DISABLE_ ] INTERLEAVED_EXECUTION_TVF
o Enabled by default in (Azure or SQL Server 2017+) and Compatibility Level 140+.
o Use the actual cardinality of a multi-statement table valued functions on first compilation,
rather than a fixed guess (100 rows from SQL Server 2014).
• [DISABLE_ ] BATCH_MODE_MEMORY_GRANT_FEEDBACK
o Enabled by default in (Azure or SQL Server 2017+) and Compatibility Level 140+.
o SQL Server looks at how much memory was actually used by a cached query, and then
allocates the same amount of memory next time (instead of guessing, then adding more,
more, more).
▪ If a query spills to disk, add more memory for consecutive executions. If it wastes
50+% of the memory, reduce memory for consecutive executions.
• [DISABLE_ ] TSQL_SCALAR_UDF_INLINING
o Scalar UDFs are transformed into equivalent relational expressions inlined into the query,
often resulting in performance gains.
▪ Does not work with all UDFs, including those which have multiple RETURN
statements.
▪ Can also be disabled for a specific UDF by adding "WITH INLINE = OFF" before "AS
BEGIN".
• [DISABLE_ ] DEFERRED_COMPILATION_TV
o Use the actual cardinality of the table variable encountered on first compilation instead of a
fixed guess (1 row).
• This is for SQL Server on a VM, and Azure SQL MI, but not Azure SQL Database, as it uses SQL Server
Agent.
o SQL Server Agent doesn't need to be enabled on Azure SQL MI – it is always running.
o It doesn't have all of the functionality of on-prem SQL Server, but it has most of it.
o Go to SQL Server Agent (right-hand click it and Start if needed on a VM) – Jobs.
o Enter the First Step name, select the database, and which user is running the command, and
enter your T-SQL command.
o Enter:
▪ A name,
▪ Whether it is:
- One time,
- "Start automatically when SQL Server Agent starts" – this setting is not
supported in MI.
o If you subsequently want to Edit or Remove it, you can click those buttons.
o If you want to import a previously made schedule, click "Pick" and then choose the schedule.
• To do this in T-SQL:
▪ USE msdb ;
▪ GO
▪ EXEC sp_add_schedule
▪ @schedule_name = N'ScheduleName' ,
▪ @freq_type = 4, -- 4 = daily
▪ @freq_interval = 1, -- every 1 day (required for a daily schedule)
▪ @active_start_time = 012345 ; -- HHMMSS
▪ GO
▪ EXEC sp_attach_schedule
▪ @job_name = N'JobName',
▪ @schedule_name = N'ScheduleName' ;
▪ GO
• To view schedules:
o USE msdb ;
o GO
o SELECT * FROM sysschedules ;
• To create an operator:
o Go to SQL Server Agent (right-hand click it and Start if needed on a VM) – Operators.
o Enter Name and e-mail name and/or pager e-mail name (and pager timings).
▪ Pager functionality has been deprecated, and will be removed in a future version.
• In T-SQL, use:
o USE msdb ;
o GO
o EXEC dbo.sp_add_operator
▪ @name = N'OperatorName',
▪ @email_address = N'EmailAddress'
• To configure notifications:
o Go to SQL Server Agent (right-hand click it and Start if needed on a VM) – Jobs.
o Go to Notifications, and
▪ Select Email, Page(r), "Write to the Windows Application event log" and
"Automatically delete job"
• In T-SQL, use:
o USE msdb ;
o GO
o EXEC dbo.sp_add_notification
o @alert_name = N'NameOfAlert',
o @operator_name = N'OperatorName', -- sp_add_notification also requires an operator
o @notification_method = 1 ; -- 1 = email
o Create a Database Mail account for the SQL Server Agent service account to use.
o Create a Database Mail profile for the SQL Server Agent service account to use and add the
user to the DatabaseMailUserRole in the msdb database.
o Set the profile as the default profile for the msdb database.
o Written in JSON.
o https://docs.microsoft.com/en-us/azure/azure-sql/virtual-machines/windows/create-sql-
vm-resource-manager-template?tabs=CLI
o Using PowerShell
▪ https://docs.microsoft.com/en-us/azure/azure-sql/database/single-database-
create-quickstart?tabs=azure-powershell
▪ https://docs.microsoft.com/en-us/azure/azure-sql/managed-
instance/scripts/create-configure-managed-instance-powershell
o If you are using an Azure Pipeline, you can use a DACPAC (data-tier application portable
artifact)
▪ This gets added to your azure-pipelines.yml (yml stands for “Yet Another Markup
Language”).
• Bicep, a Domain Specific Language (DSL), uses declarative syntax to deploy Azure resources.
• You can use Bicep Extension for VS Code to create and deploy your files.
• "quickstarts/microsoft.sql/sql-database",
• "quickstarts/microsoft.sql/sqlmi-new-vnet", or
• "quickstarts/microsoft.sqlvirtualmachine/sql-vm-new-storage".
• Alternatively, you can click "Build your own template in the editor".
• Microsoft.Sql/managedInstances
• Microsoft.Compute/virtualMachines and
Microsoft.SqlVirtualMachine/sqlVirtualMachines
• For Bicep:
• You can also go from ARM to Bicep by clicking on the "Decompile" button, or in Azure CLI:
az bicep decompile --file template.json
• The PowerShell fragments in these notes correspond to creating a server and a database – roughly
(a reconstruction; the New-AzSqlServer/New-AzSqlDatabase cmdlet names and the credential
parameter are assumptions based on the standard Az.Sql quickstart):
$server = New-AzSqlServer -ResourceGroupName $resourceGroup `
-ServerName "sqldatabase220714-5ps" `
-Location "eastus" `
-SqlAdministratorCredentials (Get-Credential)
$server
$database = New-AzSqlDatabase -ResourceGroupName $resourceGroup `
-ServerName "sqldatabase220714-5ps" `
-DatabaseName "mydatabase" `
-Edition Basic
$database
• You can create elastic job agents to automate maintenance tasks and/or run T-SQL queries.
o Update reference data or load or summarise data from databases or Azure Blob storage.
o Targets can be in different servers, subscriptions or regions, but must be in the same Azure
cloud.
▪ One or more databases, all databases in a server or elastic pool or shard map.
o This is the equivalent of SQL Agent Jobs, which are available in SQL MI, but are not available
in Azure SQL Database.
• You need:
o Elastic Job agent – the Azure resource which runs the jobs. This is free.
o Job database – an existing Azure SQL Database stores job related data, such as metadata,
logs, results and job definitions. It also contains stored procedures and other objects for jobs.
o Target group – servers, elastic pools, databases and databases of shard map(s) which are
affected.
▪ If a server or elastic group, all databases in the server at the time of running the job
will be affected. You will need to give the master database credential, so the
databases can be enumerated. You can also exclude individual databases or all
databases in an elastic pool.
o Job – unit of work which contained job steps, each of which specify the T-SQL script and
other details.
▪ Scripts must be "idempotent", capable of running twice with the same result.
o EXEC jobs.sp_add_target_group_member
▪ @target_group_name = 'GrpDatabase',
▪ @target_type = 'SqlDatabase'
- or 'SqlServer', -- or 'PoolGroup'
▪ @server_name = 'DataBaseName.database.windows.net';
o To view the recently created target group and target group members, query the
jobs.target_groups and jobs.target_group_members views.
• In each database, you will need a job agent credential in each affected database. You could use
PowerShell for this.
• @credential_name='RunJob',
• @target_group_name='GrpDatabase'
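• A minimal sketch of creating a job and a step using the names above (the T-SQL command itself is
illustrative, and must be idempotent):
o EXEC jobs.sp_add_job @job_name = 'Sample T-SQL', @description = 'Example job';
o EXEC jobs.sp_add_jobstep
o @job_name = 'Sample T-SQL',
o @command = N'SELECT 1;',
o @credential_name = 'RunJob',
o @target_group_name = 'GrpDatabase';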
o EXEC jobs.sp_update_job
▪ @job_name='Sample T-SQL',
▪ @enabled=1,
▪ @schedule_interval_count=1
• For MI and VM, you need a master server and one or more target servers.
o Right-hand click on SQL Server Agent, and go to Multi Server Administration – Make this a
Master.
o Add your target servers (by clicking on "Add Connection", if they are not already registered).
o After checking that the servers are compatible, you can "create a new login if necessary and
assign it rights to the MSX".
o Right-hand click on SQL Server Agent, and go to Multi Server Administration – Make this a
Target.
o You can "create a new login if necessary and assign it rights to the MSX".
o You can go to the Targets page and select "Target local server" or "Target multiple servers".
o This is done through the installation of the SQL Server IaaS Agent extension to enable
automated backups (this can be done through the “Create a Virtual Machine” process).
o Needs to be:
▪ Windows Server 2012 and SQL Server 2014 Standard/Enterprise (for Automated
Backup version 1), or
▪ Storage account.
o You can back up the default instance or a single named instance. If there is no default
instance and multiple named instances, it will fail.
o You have no control over when it happens, but it has minimal impact if you use “retry logic”.
o If you have a database quorum, there should be at least one primary replica online.
o Business Critical and Premium databases should also have at least one secondary replica
online.
o If you are intending to have this run to a schedule, click "Enabled" if you want the schedule
to be enabled.
o In this new box, enter a name, a facet, and what you are checking (at least one field, an
operator and a value).
▪ These conditions are what SHOULD be – the policy will fail if this is NOT the case.
o In the Against targets, select target types. If this is blank, then it will be targeted against the
server.
▪ "On demand",
• A workflow is multiple steps which define an overall process. It starts with a trigger, and
continues with multiple actions.
• Examples:
• detects a threat
• creates a recommendation
• Consumption – about US$1 for every 40,000 actions, and US$1.25 for 10,000 standard
connector executions per day.
• Triggers are the first step – why should the workflow start?
• Actions
• Connections
• Azure AD Integrated,
• If it is Azure SQL Database, then you don't need a Gateway. This is for SQL Server on
prem.
• You can combine this with other connections – for example, Azure Storage.
• Go to the Logic App – API connections to view the API connections used by the Logic App.
• In the Azure Portal, go to API connections (not Logic App) to view all connections.
(intvalue int,
messagetext varchar(20),
BEGIN
FROM SalesLT.NewTable
END
select @MyOutput
▪ Instance – a database.
▪ Alert if counter falls below, becomes equal to, or rises above a Value.
▪ You can click New Job, or View [Existing] job (once you have selected one),
▪ You can click "New Operator", or View [Existing] operator (once you have selected
one).
o Have a delay between responses. 0 minutes and 0 seconds indicate that you want a response
for every occurrence of the alert.
o If "Metrics":
▪ Select a metric.
o If "Alerts":
▪ Select a metric.
o Click on Conditions:
- Dynamic thresholds learn from the data and model it using algorithms and
methods, detecting patterns such as seasonality (hourly, daily, weekly).
▪ If static, select:
▪ If dynamic, select
- Select the operator (greater than the upper threshold and/or below the
lower threshold)
▪ Aggregation granularity period – how often the measures are grouped together,
▪ Email,
o The name
o Description (optional),
- The line turns from blue to red dots, and the background turns light red as
well.
o Go to Monitoring – Logs.
▪ Measure,
▪ Aggregation granularity (5, 10, 15, 30 or 45 minutes, 1-6 hours, or 1-2 days).
▪ Frequency of evaluation (5, 10, 15, 30 or 45 minutes, 1-6 hours, or 1-2 days).
▪ Email,
▪ Voice.
o Going to the Azure Portal, and the specific database, and go to Monitoring – Alerts, "+New
alert rule", and selecting:
▪ Resource,
▪ Alert Details.
o GO
o RECONFIGURE
o GO
o GO
o RECONFIGURE
o GO
o In SSMS, you can right-hand click on the server instance (not the database),
▪ 99.9% for zero replicas (8 hours 45 minutes over a year, or 43 minutes 48 seconds
over a month),
▪ 99.95% for one replica (4 hours 22 minutes over a year, or 21 minutes 54 seconds
over a month).
▪ 99.99% (52 minutes over a year, or 4 minutes 23 seconds over a month) – this is for
other Azure SQL Database tiers and Azure SQL Managed Instance.
▪ However, if you are in Business Critical/Premium tiers, and you have Zone
Redundant Deployments, this increases to 99.995% (26 minutes over a year, or 2
minutes 11 seconds over a month).
o However, the SQL Server may fail even though the VM is healthy – so the actual SLA will be
lower.
• Terminology:
o RPO – Recovery Point Objective of 5 seconds (how much data you can lose)
o RTO – Recovery Time Objective of 30 seconds (how long until you can use it again –
maximum "Failover" time)
▪ If exceeded, you get a credit of 100% of the total monthly cost of the Secondary
o Auto-failover groups
o 2-9 SQL Server instances, on Azure VMs, or a mix of VMs and an on-premises data center.
o You can use synchronous commit for secondary replica in on-prem network.
▪ Transactions are not committed on the primary until they can be committed on the
secondary.
o Availability replicas running Azure VMs allow for DR. It uses an asynchronous commit.
o You also need a VPN connection for the entire failover cluster, using a multi-subnet failover
cluster.
o For DR purposes, you also need a replica domain controller at the disaster recovery site.
• Database mirroring
o No VPN required, and they don't have to be in the same Active Directory domain (they can
be – but then you will need a VPN and a replica domain controller).
• Replicate and fail over SQL Server to Azure with Azure Storage.
• Log shipping
o As log shipping requires Windows file sharing, you would need a VPN tunnel.
▪ Designed to protect against network card or disk failure – but there are other
solutions in Azure.
▪ Storage Spaces Direct (S2D) for a Storage Area Network for Windows Server 2016 or
later.
▪ Premium File Shares for Windows Server 2012 or later. They use SSDs, have low
latency, and are supported for Failover Cluster Instances.
• For MI:
▪ They need to have at least the same service tier as the primary.
- You don't need to disconnect the secondaries unless you change between
General Purpose and Business Critical.
▪ More than 1 secondary means that, even if one fails, there will still be at least one
until it is recreated.
▪ Uses snapshot isolation mode, so updates from the primary are not delayed by
long-running queries on the secondary.
o Simple DR (but not HA) of Azure VMs from a primary to a secondary region.
o Can replicate using recovery point snapshots; they capture disk data, data in memory, and
transactions in process.
o Click on Settings – Failover groups – and the name of the failover group.
o You can "edit the configuration" (read/write failover policy and grace period).
o Once you have done all of your changes, click "Save" or "Discard".
o Click on Settings – Failover groups – and the name of the failover group.
o Go to SSMS and the server which hosts a SECONDARY replica of the availability group.
o Right-hand click the availability group to be failed over, and click on "Failover".
o If the Introduction page of the wizard says "Perform a planned failover for this availability
group", then you can do this without data loss.
o In the "Select New Primary Replica" page, you can view the status of:
- Forced quorum
- Not applicable.
▪ "Data loss, Warnings (X)", where X shows the number of warnings – this would have
to be a forced failover.
o The relevant secondary replica will then become the new primary replica.
o In the "Connect to Replica" page, you can connect to the failover target.
o Recovery model.
o Backup component:
o Back up to:
▪ Contents shows the media contents for the selected disk/tape (not URL).
▪ Overwrite all existing backup sets, replacing prior backups with the current backup.
▪ Check media set name and backup set expiration – requires the backup operation to
verify name and expiration date.
o Backup to a new media set, and erase all existing backup sets.
o Reliability
o Transaction log
▪ Backup the transaction log and truncate it to free log space. The database remains
online.
▪ Backup the transaction log tail (tail-log backup), and leave the database in a
restoring state (not available to users until it is completely restored).
▪ Specific date.
o Encrypt backup, using AES 128, AES 192, AES 256 and Triple DES.
▪ Only enabled if you append to existing backup set. Backup your certificate or keys to
a different location.
• If you have a VM with the IaaS Agent extension, you can configure backups in the Azure Portal.
• You can:
o This exists in sysadmin and dbcreator fixed server roles, and dbo (owner) for existing
databases.
o Source
▪ Database – this list only contains databases backed up, based on the msdb backup
history.
▪ Device – tape, URL or file. This is required if the backup was taken on a different SQL
Server instance.
o Destination
▪ Database to restore.
▪ Restore to.
- Alternatively, you can select [Backup] Timeline, which shows the database
backup history as a timeline.
o Restore plan
▪ File Type,
▪ Only relevant if a database was replicated when the backup was created, and when
restoring a published database to a different server (other than the creation server).
o Recovery state:
- Only choose this option in a full or bulk-logged recovery model if you are
also restoring all log files at the same time.
o Tail-log backup.
o Server connections
▪ Restore options may fail if there are active connections to the database.
▪ The "Continue with Restore" dialog box will be displayed after each backup is
restored.
▪ If you click "No", the database will be left in the Restoring state.
• For Azure SQL MI, to restore an Azure SQL database to a different region:
o Go to the MI, click on "+New database", select the database name, and change "Use existing
data" to "Backup" and select the backup.
• Database backups for Azure SQL Database and Azure SQL MI are done automatically.
o You can do a:
- You can change it to 1-35 days optionally (apart from Hyperscale and Basic
tier databases – basic has a maximum of 7 days).
- Note: In MI, PITR is available for individual databases, but not for the entire
instance.
o The first backup is scheduled immediately after a new database is created or restored.
• To restore a database:
o Click "Restore".
o You cannot restore over an existing database (but you can rename it afterwards).
o You can use PowerShell cmdlets to restore an existing database, but again, you can't restore
over an existing database.
o In the "Additional settings" tab, change "Use existing data" to "Backup", and select a backup.
o It may take up to 7 days before the first LTR backup will be shown in the list of available
backups.
o Ensure that you have an LTR policy on secondary databases; LTR backups will only be
created once they become primary.
o Backups are stored in Azure Blob storage – a different storage container is used each week.
• To configure this, go to Azure portal – the server – Backups – Retention policies – select the
database(s), and configure the LTR:
o Weekly backups,
o Monthly backups,
o WeekOfYear backups.
• To view backups, go to Azure portal – the server – Backups – Available backups – and next to the
relevant database, under “Available LTR backups”, select Manage.
o You can click on an LTR backup, and select Restore (which creates a new database) or Delete.
▪ GO
o They are already granted in the sysadmin fixed server role, and the db_owner and
db_backupoperator fixed database roles.
o TO MyPreviouslyCreatedNamedBackupDevice
o NORECOVERY, NO_TRUNCATE
- Useful when failing over to a secondary database or when saving the tail
before a RESTORE.
o GO
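• Piecing the fragments above together, a minimal tail-log backup might look like this (the device name is from above; the database name is hypothetical):
     BACKUP LOG MyDatabase
     TO MyPreviouslyCreatedNamedBackupDevice
     WITH NORECOVERY, NO_TRUNCATE;
     GO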
• Use:
o FROM MyPreviouslyCreatedNamedBackupDevice
o [FILE = BackupSetFileNumber]
▪ NORECOVERY is useful when you are restoring a single file, but you need to restore
more.
▪ Use RECOVERY when you have finished restoring, and you want the database to be
online.
• For example:
o RESTORE … WITH FILE = 6, NORECOVERY, STOPAT = 'Jun 19, 2024 12:00 PM';
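• Putting the options together, a hedged sketch of a full point-in-time restore sequence (the device, database name and file numbers are illustrative):
     -- Restore the full backup, leaving the database in the restoring state
     RESTORE DATABASE MyDatabase
     FROM MyPreviouslyCreatedNamedBackupDevice
     WITH FILE = 1, NORECOVERY;
     -- Restore the log backups; stop at the required time on the last one
     RESTORE LOG MyDatabase
     FROM MyPreviouslyCreatedNamedBackupDevice
     WITH FILE = 6, NORECOVERY, STOPAT = 'Jun 19, 2024 12:00 PM';
     -- Bring the database online
     RESTORE DATABASE MyDatabase WITH RECOVERY;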
• You can only use T-SQL in an MI when doing a complete restore from an Azure Blob Storage Account:
o WITH COPY_ONLY
o [COMPRESSION | NO_COMPRESSION]
o [STATS = X]
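• As a sketch, an MI native restore from Blob storage might look like this (the storage account URL is hypothetical, and a credential with a SAS token for the container must already exist):
     RESTORE DATABASE MyDatabase
     FROM URL = N'https://mystorageaccount.blob.core.windows.net/backups/MyDatabase.bak'
     WITH STATS = 10;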
• This is for VMs (and MIs if using COPY_ONLY). The syntax is:
o TO MyPreviouslyCreatedNamedBackupDevice
o [MIRROR TO AnotherBackupDevice]
o [WITH
▪ COPY_ONLY
- Creates a full backup, but it is not treated as a full backup for purposes of
future DIFFERENTIAL or TRANSACTION LOG backups.
▪ DIFFERENTIAL
▪ COMPRESSION | NO_COMPRESSION
▪ CREDENTIAL
▪ ENCRYPTION
- If you encrypt, you will also need to use SERVER CERTIFICATE or SERVER
ASYMMETRIC KEY.
▪ FILE_SNAPSHOT
- Used when creating a snapshot of the database files and storing them in
Azure Blobs.
• [WITH
o NOINIT | INIT
▪ Whether the backup operation appends to/overwrites the existing backup sets on
the backup media. The default is NOINIT (append).
o NOSKIP | SKIP
▪ Checks whether a backup operation checks the expiration date and time of the
backup sets on the media before overwriting them. The default is NOSKIP (Check
the date/time).
o NOFORMAT | FORMAT
▪ Whether the media header should be written on the volumes used for the backup
operation, overwriting any existing media header and backup sets. The default is
NOFORMAT.
- Be careful when using FORMAT, as it renders the entire media set unusable.
o NO_CHECKSUM | CHECKSUM
▪ Whether backup checksums are enabled – this validates the backup. The default is
NO_CHECKSUM (no generation of backup checksums).
o STOP_ON_ERROR | CONTINUE_AFTER_ERROR
o STATS = X
o REWIND | NOREWIND
o UNLOAD | NOUNLOAD
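• A sketch combining some of these media options (the path and database name are hypothetical):
     BACKUP DATABASE MyDatabase
     TO DISK = N'C:\Backups\MyDatabase.bak'
     WITH INIT,      -- overwrite existing backup sets on this media
     SKIP,           -- do not check backup set expiration before overwriting
     CHECKSUM,       -- generate checksums to validate the backup
     STATS = 10;     -- report progress every 10%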
• [WITH
▪ NORECOVERY
- Backs up the tail of the log and leaves the database in the RESTORING
state. Useful when failing over to a secondary database or when saving the
tail of the log before a RESTORE operation.
▪ STANDBY = standby_file_name
- Backs up the tail of the log and leaves the database in a read-only and
STANDBY state.
▪ NO_TRUNCATE
- The log is not truncated, and SQL Server attempts the backup regardless
of the state of the database.
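• For instance, a tail-log backup that leaves the database read-only in STANDBY (the file names are hypothetical):
     BACKUP LOG MyDatabase
     TO DISK = N'C:\Backups\MyDatabase_tail.trn'
     WITH STANDBY = N'C:\Backups\MyDatabase_undo.dat';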
o One or more domain-joined VMs in Azure running SQL Server 2012+ Enterprise, or SQL
Server 2016+ Standard in:
▪ They need to be registered with the SQL IaaS Agent extension in full manageability
mode, and must use the same domain account for the SQL Server service on each
VM.
▪ One for the availability group listener within the same subnet as the availability
group.
o 1-8 secondary replicas (only 1 allowed in SQL Server Standard), each of which hosts
a set of secondary databases (this does not replace backups).
• Note:
o The primary replica sends transaction log records to every secondary database ("data
synchronization").
o You can configure 1+ secondary replicas to support read-only access to secondary databases,
and/or to permit backups on secondary databases.
o Asynchronous-commit mode. Minimizes transaction latency, but there is a lag before
the data is committed onto the secondaries.
o Synchronous-commit mode. The primary replica does not commit until the secondary replica
has hardened the log.
• Failover:
o This is when the target secondary replica transitions to being the new primary replica.
▪ Automatic failover – no data loss – occurs when there is a failure to the primary
replica – for synchronous-commit mode only. Needs to have a Windows Server
Failover Cluster quorum and be synchronized.
o Forced manual failover (also known as "forced failover"). For asynchronous-commit mode.
This is a DR option.
▪ The only type of failover that is possible if the target secondary replica is not
synchronized with the primary replica.
o After failover, Azure SQL connections are automatically redirected to the new primary node.
o Name the cluster, and provide a Storage Account to act as the Cloud Witness.
▪ Storage Account name: 3-24 characters using numbers and lower-case letters.
o Click Apply.
o In Azure portal, go to the VM – Settings – SQL Server configuration – Open - High Availability.
- Type: "Internal" allows apps in the same Virtual Network to connect to the
availability group.
- The "Resource group" and "Location" should be that where the SQL Server
instances are in.
- The Probe Port is for the internal load balancer, which is 59999 by default.
o Click "Apply".
o Click "Apply".
• In the Azure Portal – Settings – High Availability, the status of the availability group(s) are shown.
• However, you can also do this from within SQL Server – and this is the way I do it in the videos
for this course.
o Secondary databases do not exist until backups of the new primary database are restored to
the secondary replicas (use RESTORE WITH NORECOVERY).
• In SSMS
o Connect to one of your SQL Server VMs using (for example) RDP.
o In SSMS, go to your SQL Server instance – Always On High Availability – Availability Groups.
o Click "OK".
o GO
• To configure it in SSMS,
o Enter the listener DNS name – in SSMS, that is up to 15 letters, numbers, hyphens and
underscores.
▪ Static IP.
- You must specify a static IP address for every subnet that hosts an
availability replica, including Subnet and IP Address.
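• The equivalent T-SQL is shown below as a sketch – the listener name, IP address and subnet mask are hypothetical; the availability group name is the one used later in this course:
     ALTER AVAILABILITY GROUP [SQLAVAILABILITYGROUP]
     ADD LISTENER N'SQLLISTENER' (
     WITH IP ((N'10.0.0.10', N'255.255.255.0')),  -- one static IP per subnet
     PORT = 1433
     );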
• It monitors network connections and the health of the nodes (clustered servers)
• A two-node cluster will function without a quorum resource, but its use is recommended.
o This then provides an odd number of votes, with a minimum of 3 quorum votes.
• To configure:
o Right-hand click the cluster, and go to More Actions – Configure Cluster Quorum Settings.
o Cloud Witness
▪ Uses only about 1 MB of storage.
▪ Recommended to use whenever possible, unless you have a failover cluster solution
with shared storage.
▪ Use a general-purpose Standard Storage Account (Blob storage accounts and
Premium storage are not supported).
▪ Once finished, you can see this witness in the Failover Cluster Manager snap-in.
o Disk Witness.
▪ The disk is highly available (most resilient) and can fail over between nodes.
▪ Can only be used with a cluster which uses Azure Shared Disks.
▪ By default, all nodes have a vote, but you can assign votes to only some nodes.
▪ You could also have "No nodes", which is then the same as "No majority (disk witness
only)" – see below.
▪ The cluster quorum is the majority of voting nodes in the active cluster
membership.
o Node majority with witness ("Node and File Share Majority" or "Node and Disk Majority")
VM for SQLSERVER1
• Virtual machine name: SQLSERVER1
VM for SQLSERVER2
• Virtual machine name: SQLSERVER2
Connect to VMDOMAINCONTROLLER
• Go to VM – VMDOMAINCONTROLLER – Connect
• Enter credentials.
• Do you want your computer to be discoverable by other PCs and devices on this network? Yes
Join VM1 to DC
• Go to SQLSERVER1 (then SQLSERVER2) – Networking – Network Interface hyperlink – DNS servers –
Custom – enter the DNS server IP address – X.Y.0.4
• Do you want your computer to be discoverable by other PCs and devices on this network? Yes
• After reboot, go to All Services, right-click on SQLSERVER1 and select “Failover Cluster Manager”.
• Right-hand click on SQL Server (in SQL Server Services) – go to the “Always On Availability Groups”
tab and check “Enable Always On Availability Groups”.
• Go to SQL Server Configuration Manager – SQL Server Network Configuration – Protocols – TCP/IP
and Enable.
• Go to Windows Defender Firewall – New Rule – Port – TCP 1433 (all others as default).
• New
Configure Witness
• Go to Failover Cluster Manager – the actual failover cluster (in my case, SQLCLUSTER.filecats.co.uk)
• Click Next.
• Click Next x 3.
• Next
• Create the backup (right-hand click on the database – Tasks – Back Up…)
• Select the replicas… Click “Add Replica” and log into SQLSERVER2.
• Look at availability mode, automatic failover, and readable secondaries. (Synchronous good if you
have close physical distance.)
• Finish the wizard (It’s OK for the purposes of the DP-300 course if the listener configuration has a
warning).
Add listener.
• In SSMS, go to Always On High Availability – Availability Groups – NameOfGroup – right-hand click on
Availability Group Listener – Add a Listener.
• Port – 1433
Test failover
• Go to Always On High Availability – Availability Group – SQLAVAILABILITYGROUP
• Finish.
• You can use transactional replication to push changes made in an Azure MI to:
• Useful for:
o Distributing changes to one or more databases in SQL Server, Azure SQL MI or Azure SQL
Database.
o Publisher
▪ Publishes changes made on some tables ("articles"), and sends the updates to the
Distributor.
▪ Cannot be Azure SQL Database (need to use Data Sync – topic 14 – for this).
o Distributor
- Can be the same Azure SQL MI as the Publisher, but a different database.
▪ If it is a SQL Server instance, its version needs to be the same as or higher than the
Publisher's version.
o Pull subscriber
▪ Can be Azure SQL MI or an SQL Server instance, but needs to be the same type as the
Distributor.
o Push subscriber.
• Create a Publication:
o Specify a Distributor.
- You will need to specify a default snapshot folder – a directory that agents
can read from and write to.
▪ Transactional replication – changes occur in near real time, applied to the Subscriber in
the same order as they occurred on the Publisher.
▪ Merge replication – Data can be changed on both the Publisher and Subscriber.
- When connected to the network, all rows which have changed between
Publisher and Subscriber are synchronised. Available for on-prem, VM and MI.
▪ Snapshot replication – distributes data at a specific moment of time, and does not
monitor for updates to the data.
o Select data, database objects and filter columns and rows from table articles to publish.
o Enter the logins and passwords for connections made by replication agents.
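• As a minimal sketch of some of the T-SQL that underlies these steps (the database, publication and article names are hypothetical; distributor setup and agent security are omitted):
     -- Run at the Publisher, in the database to publish
     EXEC sp_replicationdboption @dbname = N'SalesDb', @optname = N'publish', @value = N'true';
     EXEC sp_addpublication @publication = N'SalesPub', @status = N'active';
     EXEC sp_addarticle @publication = N'SalesPub', @article = N'Orders',
     @source_object = N'Orders', @source_owner = N'dbo';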
• You can create readable secondary databases in the same or different region.
▪ Azure SQL Database and Azure SQL MI can both use auto-failover groups.
o Database migration from one server to another with minimum downtime, and
o It uses asynchronous replication, so the transactions are committed on the primary before
being replicated.
• To configure geo-replication:
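o For Azure SQL Database, one option is T-SQL run in the master database of the primary logical server – a sketch with hypothetical server and database names (the secondary server must already exist):
     ALTER DATABASE MyDatabase
     ADD SECONDARY ON SERVER mysecondaryserver
     WITH (ALLOW_CONNECTIONS = ALL);  -- readable secondary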
• You can upgrade to SQL Server 2019 from SQL Server 2012+.
• You can also upgrade to a higher edition, except from Enterprise (the highest):
o Standard (or in older versions, Workgroup or Small Business) can upgrade to Standard or
Enterprise.
• If an application requires a previous version, you can use that version’s compatibility level.
o You cannot add new features during the upgrade (but you can do it afterwards).
• Online
o You need to do a side-by-side installation, and then decommission the previous SQL Server.
o You can choose what features to use, and you can install a 64-bit instance, even if your
previous version is 32-bit.
o Verify that the hardware and software you intend to use is supported.
▪ If using Analysis Services, make sure you install the correct server mode – tabular or
multidimensional.
o Reboot if necessary.
o Add more space to, or decrease the maximum capacity of, a database elastic pool.
• Terminology:
▪ Generally increases with inserts and decreases with deletes, but dependent on
fragmentation.
▪ Can grow automatically, but does not automatically decrease after deletes.
• This applies to Azure SQL Database, not Azure SQL Managed Instance.
▪ SELECT file_id, size FROM sys.database_files WHERE type = 1 -- "1" = Log file. Size is
in 8 KB pages.
▪ Will impact database performance while running; should be done when the database is less heavily used.
▪ DBCC SHRINKDATABASE(MyDatabase) will shrink all the data and log files in the
MyDatabase database. (Note the lack of quote marks.)
o For a database
▪ FROM sys.resource_stats
▪ ORDER BY start_time
▪ FROM sys.database_files
▪ WHERE type = 1
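o Filling in the elided SELECT lists, the two queries above might look like this (the column choices are illustrative):
     -- Resource usage for an Azure SQL database (run in master)
     SELECT start_time, avg_cpu_percent, avg_data_io_percent, avg_log_write_percent
     FROM sys.resource_stats
     WHERE database_name = 'MyDatabase'
     ORDER BY start_time;
     -- Log file size, in 8 KB pages (run in the database itself)
     SELECT file_id, name, size
     FROM sys.database_files
     WHERE type = 1;  -- 1 = log file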
o FROM sys.dm_db_index_physical_stats(NULL,NULL,NULL,NULL,NULL)
▪ --The arguments are: database_id (use db_id to look it up), object_id (use object_id
to look it up), index_id, partition_number and mode (the scan level).
o DBCC SHOWCONTIG
▪ Where more than 20% of rows have been deleted (due to DELETE or UPDATE),
reorganize. This removes rows marked as deleted.
o REORGANIZE
o REBUILD
- An offline rebuild is generally quicker, but locks the index during this time.
• Generally, REORGANIZE if fragmentation is >10% and <30%, and REBUILD if >30% – but this is a guide only (see the sketch below).
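• A hedged sketch applying those thresholds (the table and index names are hypothetical):
     -- Check fragmentation of the clustered index on one table
     SELECT avg_fragmentation_in_percent
     FROM sys.dm_db_index_physical_stats(DB_ID(), OBJECT_ID('dbo.MyTable'), 1, NULL, 'LIMITED');
     -- Roughly 10-30% fragmented: reorganize (always an online operation)
     ALTER INDEX IX_MyTable_MyColumn ON dbo.MyTable REORGANIZE;
     -- Over roughly 30%: rebuild (ONLINE = ON needs Enterprise edition or Azure SQL)
     ALTER INDEX IX_MyTable_MyColumn ON dbo.MyTable REBUILD WITH (ONLINE = ON);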
o The database generates information about the contents of each column. Can be useful for
deciding whether to use a scan or seek.
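o For example, to refresh statistics manually (the table name is hypothetical):
     UPDATE STATISTICS dbo.MyTable WITH FULLSCAN;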
• Auto Shrink
• Simplify queries
o Requirements
o Actions
o Requirements
o Actions
o Requirements
▪ Values that are not part of a record’s key are to be removed from the table.
o Action
o Requirements
o Actions
o Requirements
o Actions
o decimal/numeric
o money, smallmoney
• Approximate numerics
o float, real
• Character strings
o char, varchar
• Binary strings
o binary, varbinary
o Azure SQL Database supports only one database file (except in Hyperscale).
o Primary file
▪ Start-up information.
o Secondary file
▪ Additional, but optional, user-defined data files (zero to multiple). Cannot be used
in Azure SQL Database.
o Transaction Log
o Simple databases can have a single data file and a single transaction log file.
o O/S file name – its location, including directory path (you can set this on VM only).
• Storage size
• Filegroups
o Contains multiple files for admin, data allocation or storage purposes. Not used in Azure SQL
Database.
o By default, the "default" filegroup is the PRIMARY filegroup. However, you can change it.
o The primary filegroup contains the primary file, system tables. The default filegroup (which
may be the same) contains any other objects where you have not specified a filegroup.
o If you use multiple data files, Microsoft recommends that you create a second file group for
the other files and make that filegroup the default filegroup.
o GO
▪ FILENAME = N'C:\PathToData\NewData.ndf' ,
◼ or FILEGROWTH = 10%
▪ TO FILEGROUP [NewFileGroup]
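• Reassembling the fragments above into complete statements (the file name and path are as shown; the database name is hypothetical):
     ALTER DATABASE MyDatabase ADD FILEGROUP [NewFileGroup];
     GO
     ALTER DATABASE MyDatabase
     ADD FILE (
     NAME = N'NewData',
     FILENAME = N'C:\PathToData\NewData.ndf',
     SIZE = 8MB,
     FILEGROWTH = 64MB  -- or FILEGROWTH = 10%
     ) TO FILEGROUP [NewFileGroup];
     GO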
o sp_help 'Schema.TableName'