DP-300: Administering Relational Databases on Microsoft Azure
Plan and Implement Data Platform Resources
o Go to Azure SQL
o In additional settings, add existing data if required, either from a backup or sample data.
o You should use Managed Service Accounts (MSA) for a single computer running a service.
▪ A Group Managed Service Account (gMSA) is used for assigning the MSA to multiple
servers.
o You need:
▪ Region,
▪ In Networking, select "Private endpoint", then "+Add private endpoint" and select
the subnet from above.
o When created, in "Firewalls and virtual networks", click "+Add client IP", and "Allow Azure
services and resources to access this server".
o PaaS SQL Database and Managed Instance have built-in patching, and they always use the
latest stable Database Engine version.
o You have full control of the database engine, e.g. when to apply patches.
▪ You need SQL Server 2008 R2 or above, and Windows Server 2008 R2 or above.
▪ for existing VMs, by going to Azure Portal – the relevant VM – Settings – SQL Server
configuration – Patching.
▪ This checks daily whether there are any unregistered SQL Server VMs in the subscription, and if so, registers them in lightweight mode.
- To take advantage of all of the features, you would still need to manually
upgrade.
▪ To do this, go to Azure Portal – SQL virtual machines (plural) – and at the top, click
on "Automatic SQL Server VM registration".
4. evaluate requirements for the deployment
6. evaluate the scalability of the possible database offering
8. evaluate the security aspects of the possible database offering
o Single database.
o Elastic pool.
o Compute tier:
▪ Provisioned – for regular usage patterns, or multiple databases with elastic pools.
o Specify separately the number of vCores, memory, and amount/speed of storage. Look at:
▪ IOPS,
▪ Backup retention.
o Maximum of:
▪ 80 vCores at Gen5,
▪ 4 TB storage, and
▪ Azure Hybrid Benefit allows you to bring in your existing on-prem licenses to the
cloud.
▪ Reserved capacity is paying in advance at a discount.
o Choose from:
▪ General purpose (scale compute and storage) – For most business workloads. Storage latency of 5-10 ms (about the same as SQL Server on a VM).
▪ Hyperscale (on-demand scalable storage) – Only for Azure SQL Database – scales up to 100 TB of storage.
- You cannot subsequently change out of Hyperscale. Costs the same as Azure SQL Database.
▪ Zone and Local Redundancy are cheaper for single region data resiliency.
o Tempdb
▪ Azure SQL Database creates 1 file per vCore with 32 GB per file, with caps of up to 32 files for serverless compute only.
o Offers bundles of maximum number of compute, memory and I/O (reads/writes) resources
for each class (cannot separate them).
o Uses Azure Premium disks. Provision in increments of 250 GB up to 1 TB, and in increments of 256 GB thereafter.
o Choose from:
▪ Standard (for typical performance)
▪ Please note: Basic and Standard S0, S1 and S2 have less than 1 vCore, and cannot
use "Change data capture".
- Consider Basic, S0 and S1, where database files are stored in Azure
Standard Storage (HDD), for development, testing and infrequently
accessed workloads.
▪ See https://dtucalculator.azurewebsites.net/
o For the DMVs to have accurate figures, you may need to flush the Query Store after re-
scaling. Use:
▪ EXEC sp_query_store_flush_db;
o Server.
▪ This is a logical server, which includes logins, firewall and auditing rules, policies and
failover groups.
o Serverless model.
• Configure network:
o No access.
o Public/private endpoint.
o Choose whether to "Allow Azure services and resources to access this server" (for other
Azure services).
• Connection policy
o Proxy – uses Azure SQL Database gateways,
o Default – Redirect if connection originates inside Azure, and Proxy if outside Azure.
• You can have sample data, or data based on the restore from a geo-replicated backup.
o CS/CI = case-[in]sensitive,
o AS/AI = accent-[in]sensitive.
10. configure Azure SQL Managed Instance for scale and performance
• Service Tier:
o General Purpose
o Business Critical
▪ low-latency workloads
▪ Fast Failovers
• Hardware Generation
o Up to 80 vCores,
o 400 GB memory,
o up to 16 TB database size.
o Cross-database queries,
▪ The execution environment for .NET framework code (also known as "managed
code").
o The msdb system database.
o SQL Managed Instance does not support the DTU-based purchasing model.
• Tempdb
11. configure SQL Server in Azure VMs for scale and performance
• SLA for Virtual Machines
o When you need an older version of SQL Server, or access to the Windows operating system.
o When you need SSAS (Analysis), SSIS (Integration) or SSRS (Reporting) (non-Azure services),
o When you need features not available in Azure SQL Database or Azure MI.
o Azure VM marketplace images are configured for optimal SQL Server performance.
▪ Data drives should be put on Premium P30 and P40 disks for cache support.
▪ Log drive should be put on Premium P30 to P80 disks, or Ultra disks for submillisecond latency.
o Stripe multiple data disks using Storage Spaces (similar to RAID, but done in software) to increase I/O bandwidth. 3+ drives form a storage pool. Choose a resiliency type:
- Simple
- Mirror
- Parity
o Increases resiliency.
▪ Creating a volume.
o Use Local Redundant Storage, not Geo-redundant storage, on the storage account.
▪ Good for testing and development, small-medium databases, or low-traffic web servers.
▪ Good for medium traffic web servers, network appliances, batch processes, and
application servers.
▪ Good for relational database servers, medium to large caches, and in-memory
analytics.
▪ Good for Big Data, SQL, NoSQL databases, data warehousing and large transactional
databases.
▪ heavy graphic rendering and video editing, as well as model training and inferencing
(ND) with deep learning.
▪ Automated backup,
▪ Automated patching,
▪ View information in Azure Portal about your SQL Server configuration, and more.
▪ It is installed when you deploy an SQL Server VM from the Azure Marketplace.
12. calculate resource requirements
• Purchasing models:
• vCore-based model:
o Business Critical service tier includes 3 replicas (and about 2.7x price)
o Single database.
▪ They can be dynamically (i.e. manually) scaled (but not autoscaled) up and down.
o Elastic pool.
▪ This is for multiple databases, good when they have variable usage patterns.
▪ Can add databases by going to the pool and clicking on "+Add databases".
• Storage costs:
• For DTU model, consider the following factors when determining how many DTUs you need:
o Note: Unit price for eDTU pools is 1.5x the DTU unit price for a single database.
▪ Price for vCore pools is at the same unit price as for single databases.
▪ Exceeding this may result in time-outs.
o Network bandwidth
• You can:
o Lookup strategy (see the sketch after this list)
▪ Have a shard key (an ID), and a map which shows where the data is stored.
o Range strategy
▪ Similar data is kept on the same storage node, so it can retrieve multiple items in a
single operation.
o Hash strategy
▪ Data distributed evenly among the shards. Reduces hotspots (high loads for an
individual server) by using some random element for distribution.
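• A minimal T-SQL sketch of a lookup-strategy shard map, assuming hypothetical table and column names:
o CREATE TABLE dbo.ShardMap (
o     CustomerId INT PRIMARY KEY, -- the shard key
o     ShardServer SYSNAME NOT NULL, -- server holding that customer's data
o     ShardDatabase SYSNAME NOT NULL);
o -- The application looks up the shard before connecting:
o SELECT ShardServer, ShardDatabase FROM dbo.ShardMap WHERE CustomerId = 12345;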
14. set up SQL Data Sync
• Azure SQL Data Sync allows you to synchronize data across multiple databases.
o Tables need to have a primary key, which cannot be changed (rows can be deleted/recreated
instead).
• Sync Metadata Database contains the metadata and log for Data Sync. It is an Azure SQL Database in
the same region as the Hub Database.
o It should be an empty database. Data Sync creates tables and runs a frequent workload.
• Member databases are either Azure SQL Database or on-prem (not Managed Instance).
o If you are using on-prem, you will need to install and configure a local sync agent.
▪ But if there are several members, this depends on which member syncs first.
• Use in:
o Go to the Hub database.
▪ Automatic Sync (If on, choose from Seconds, Minutes, Hours or Days in Sync
Frequency),
▪ Use private link (a service managed private endpoint). If yes, you will later need to
approve the Private Endpoint Connection.
o Subscription,
o Sync Directions (To the Hub, From the Hub, or Bi-directional Sync),
▪ Select “Create and Generate Key”, and copy it to the clipboard, then click OK.
o In the “Sync Metadata Database Configuration”, enter credentials for the metadata database
server.
▪ If automatically created, this will be the same server as the hub database.
o Click Register.
o In the “SQL Server Configuration” box, connect using SQL Server or Windows authentication.
o Provide a name for the new sync member (not the database name) and the Sync Directions.
• To see if it works, go to the Database Sync Group page – Tables, and click on Refresh schema.
- This will impact whether you can use Azure SQL Database/Managed Instance, or whether you need a VM.
o Downtime allowances
▪ Are you allowed any downtime at all? If not, you need to do an online migration.
o Security requirements
o Location for data storage (e.g. GDPR, California Consumer Privacy Act, or similar
requirements)
o It can also discover and assess SQL data estate at scale (across your data center).
o Get Azure SQL deployment recommendations, target sizing and monthly estimates.
• Do you need to migrate non-SQL objects, such as Access, DB2, MySQL, Oracle and SAP ASE databases
to SQL Server or Azure SQL?
• Do you need to migrate SQL Server objects to SQL Database/Managed Instance? If so:
o Do you need to migrate and/or upgrade SQL Server?
▪ It can also discover and assess SQL data estate, and recommend performance and
reliability improvements for your target environment.
▪ Detect compatibility issues between your current database and a target version of
SQL Server or Azure SQL.
o Do you need to compare workloads between the source and target SQL Server?
o Do you need to migrate open source databases, such as MySQL, PostgreSQL or MariaDB?
▪ Minimal downtime (especially if online using the Premium pricing tier). Good for
large migrations.
▪ You need:
- To allow outbound port 443 (HTTPS) – you may also need 1434 (UDP).
- Does not initiate any backups, and uses existing full and log backups (not
differential).
• You can upgrade to SQL Server 2019 from SQL Server 2012+.
• You can also upgrade to a higher edition, except from Enterprise (the highest):
o Standard (or in older versions, Workgroup or Small Business) can upgrade to Standard or
Enterprise.
o Developer can upgrade to Developer, or SQL Server 2019 (only) Web, Standard or Enterprise.
• If an application requires a previous version, you can use that version’s compatibility level.
o You cannot add new features during the upgrade (but you can do it afterwards).
• Online
o You need to do a side-by-side installation, and then decommission the previous SQL Server.
o You can choose what features to use, and you can install a 64-bit instance, even if your
previous version is 32-bit.
20. implement an offline migration strategy
• Migrating from SQL Server to Azure SQL Database - Prerequisites:
o Create a Virtual Network for the Azure Database Migration Service using either ExpressRoute
or VPN.
o Enable outbound port 443 of ServiceTag for ServiceBus, Storage and AzureMonitor.
o Allow database engine access in Windows firewall, and open the Windows firewall to TCP
port 1433 (unless you have changed it). You may also need to have UDP port 1434.
o Create a server-level IP firewall rule to allow Azure Database Migration Service access.
o Your credentials need CONTROL SERVER on the SQL Server instance, and CONTROL
DATABASE on Azure SQL.
o In Data Migration Assistant, select +New and Assessment, and enter a project name.
o Select Database Engine, SQL Server and Azure SQL Database, and either/both:
o In the Azure Database Migration Service subscription, click on Resource providers.
o In the Azure portal, go to this service and click Create, and select:
- You can have the 4 vCore Premium DMS free for 6 months. You can use it
for a total of 1 year, and create 2 DMS services per subscription.
o In the Azure portal, go to “Azure Database Migration Services”, select the relevant instance,
and select ”New Migration Project”.
o Add a project name, SQL Server, Azure SQL Database, and Data migration.
o Select databases, note the Expected downtime, and click “Next: Select target”.
o Click “Next: Map to target databases”. This will be mapping to new databases, unless you
have a database with the same name.
o Click “Next: Summary” and enter an Activity Name for the migration.
o Click “Start migration”. You can monitor the migration from there.
o Once complete, verify that the target database has been migrated.
• Other options:
o Bulk Copy Program (bcp) can be used for connecting from on-prem or a VM to Azure SQL.
Implement a Secure Environment
o SSIS packages – ETL (extract, transform and load).
o Verify that the hardware and software you intend to use is supported.
▪ If using Analysis Services, make sure you install the correct server mode – tabular or
multidimensional.
o Reboot if necessary.
o SQL Server authentication (user name and password, sent in plain text), and
o Cloud-only identities,
o Hybrid identities that support cloud authentication with Single Sign-On (SSO), using
password hash or pass-through authentication.
o Hybrid identities that support federated authentication.
• Decision tree:
o Cloud-only identities
o Federated authentication
o Pass-through authentication
• Other authentications:
o Admin tools on a non-Azure machine that is not domain-joined: use Azure AD integrated
authentication, or Azure AD interactive authentication with multifactor authentication.
o Older apps where you can't change the connection string: SQL authentication.
o Go to the Azure Portal – Active Directory – (The relevant active directory, if more than one),
and Authentication methods. These include:
o Enter:
▪ Name
▪ Groups (Optional).
o Click Create.
▪ GO
o Logins can:
▪ Auditing,
• However, you can create logins from Azure AD users, groups or apps.
▪ <option_list> ::=
- PASSWORD = {'password'} – this cannot be used when FROM EXTERNAL PROVIDER is used.
- | SID = sid
- | DEFAULT_DATABASE = database
- | DEFAULT_LANGUAGE = language
• Create user (see the example sketch after this option list):
o [;]
o <limited_options_list> ::=
▪ DEFAULT_SCHEMA = schema_name
▪ | ALLOW_ENCRYPTED_VALUE_MODIFICATIONS = [ ON | OFF ] ]
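• A hedged sketch of both statements, with hypothetical names and password:
o CREATE LOGIN MyLogin WITH PASSWORD = 'Str0ng!Passw0rd1'; -- server level
o CREATE USER MyUser FOR LOGIN MyLogin; -- database level
o CREATE USER ContainedUser WITH PASSWORD = 'Str0ng!Passw0rd1'; -- contained database user, no login
o CREATE USER [user@contoso.com] FROM EXTERNAL PROVIDER; -- from an Azure AD principal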
• Both SQL Server Administrators and Azure Active Directory Administrators for SQL Server can create:
• Azure Active Directory Administrators for SQL Server only can create:
• You cannot create an SQL Server login from the Azure portal.
▪ Windows authentication,
- Strong verification.
- Uses identities in Azure AD. You can use it when your computer is logged
into Windows but it is not federated with Azure.
▪ Special purpose logins, which cannot connect to SQL Server, but which can own
objects and have permissions:
• Create a user using SSMS (Managed Instance and Azure SQL Database):
▪ “SQL user with password”. Also called a "contained database user". You can select
- Can make your database more portable. Allowed in Azure SQL Database
and in a contained database in SQL Server.
- “User mapped to a certificate” – cannot login to a server, but can be granted permissions and can sign modules.
- “User mapped to an asymmetric key” – cannot login to a server, but can be granted permissions and can sign modules.
▪ “Windows user”.
25. configure security principals
• A principal is an entity that can receive permissions.
o serveradmin – change server-wide configuration options and shut down the server.
o securityadmin – GRANT, DENY and REVOKE server-level permissions, and any database-level
permissions if they have access to the database.
o public – includes all users, groups and roles. Use when you want the same permission(s) for everyone.
o db_owner – all configuration and most maintenance activities (in Azure SQL Database, some
activities require server-level permissions), including DROP database.
▪ However, if you give them db_denydatareader or DENY permissions, you can deny
read access to data.
o db_securityadmin – can modify role membership for custom roles only and manage permissions. Can potentially elevate their own permissions.
o db_[deny]datareader – [cannot] read all data from all user tables and views.
• In Azure SQL Databases, there are also two special database roles in the "master" database only:
o dbmanager – can create/delete databases. Connects as the dbo (database owner) user.
o loginmanager – create/delete logins in the "master" database (as per securityadmin server
role in on-prem SQL Server)
o sp_helprotect – returns user permissions for an object (or all objects) in the current
database.
• There are also role-based access control (RBAC), which are security rights outside of databases, which
include:
o SQL DB/Managed Instance/Server Contributor – manage SQL Databases, MIs or Servers, but not get access to them. Cannot manage security-related policies.
o SQL Security Manager – manage security-related policies for servers and databases, but no access to them.
• When deploying, Azure uses the "server admin", which is a principal in Azure SQL Database, and a
member of the sysadmin role in MI.
• In a particular login:
o Click Search.
o Select:
▪ “The server”,
▪ “Specific objects”. If so, click “Object Types” and select Endpoints, Logins, Servers,
Availability Groups and/or Server roles.
▪ “All objects of the types” – select Endpoints, Logins, Servers, Availability Groups
and/or Server roles.
o Server
o Database
o Schema
▪ Type
• You can assign permissions to roles, and then add users to roles.
o GRANT
▪ Why use REVOKE instead of DENY? REVOKE doesn’t give permissions, but unlike DENY it doesn’t block a permission that the user has through another role.
▪ If DENY is applied to the public role, no non-sysadmin will have this permission.
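• A hedged example of the three keywords, with hypothetical object and role names:
o GRANT SELECT ON dbo.Customers TO SalesRole;
o DENY SELECT ON dbo.Customers TO TempContractor; -- blocks, even if granted via another role
o REVOKE SELECT ON dbo.Customers FROM SalesRole; -- removes the grant, without blocking other paths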
• You can also prevent users from querying objects directly by allowing only access to procedures or
functions.
o If two objects have the same owner, then permissions in a second object called from the first
are not separately checked.
o SELECT permission in a database includes all (child) schemas, and the tables and views.
o CONTROL gives ownership-like permissions and includes all other permissions, including
ALTER, SELECT, INSERT, UPDATE.
o Don't confuse this with TLS – Transport Layer Security – which encrypts data in transit.
• It is protected by the TDE protector, using a service-managed certificate or an asymmetric key in the
Azure Key Vault.
o For Azure SQL Database, it is set at the server level. New databases are encrypted by default
(but not ones created through restore or database copy).
o For Azure SQL Managed Instance, it is set at the instance level and is inherited to all
encrypted databases.
• To enable it in Azure SQL Database only, go to the Azure Portal, then the relevant database, then go
to “Transparent data encryption” and set “Data encryption” to ON.
o However, you can’t switch the TDE protector to a key in Key Vault in T-SQL.
o Set-AzSqlServerTransparentDataEncryptionProtector
o Add-AzSqlServerKeyVaultKey
o Set-AzSqlDatabaseTransparentDataEncryption
o It prevents access to sensitive data by putting a mask, with none or part of the data (e.g. last
4 digits of a credit card).
o You may see recommended fields to mask.
o You can select the Schema, Table and Column to define the columns for masking.
- XXXX for string data types. You can use fewer Xs if the field is less than 4 characters.
- Exposes the last 4 digits of the credit card, with a constant string prefix.
▪ Email ([email protected]),
- Exposes the first letter, but replaces everything else with a constant string
prefix.
- Shows the first X characters, the last Y characters, and a custom padding
string in the middle.
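• A hedged T-SQL sketch of applying masks, with hypothetical table and column names:
o ALTER TABLE dbo.Customers ALTER COLUMN Email ADD MASKED WITH (FUNCTION = 'email()');
o ALTER TABLE dbo.Customers ALTER COLUMN CardNumber ADD MASKED WITH (FUNCTION = 'partial(0,"XXXX-XXXX-XXXX-",4)');
o GRANT UNMASK TO AuditorRole; -- excluded principals see the real data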
• You can select specific SQL users who are excluded from masking.
o Note: Administrators are always excluded from Dynamic Data Masking – they can always read the data.
31. implement Azure Key Vault and disk encryption for Azure VMs
• To encrypt disks for Azure VMs:
o Next to “Select key from Azure Key Vault: Key vault”, select “Create new”.
o Add a name (unique amongst Azure Key Vaults) and Resource Group.
o Go to the “Access Policies” tab, click “Enable Access to: Azure Disk Encryption for volume
encryption”.
o After creating the Key Vault, leave the Key field blank, click Select, and Save.
o SQL Database communicates over port 1433. You need that opened on your own
computer/server.
o create a reserved IP (classic deployment) for the resource that needs to connect, then
o Server-level firewall rules are for users/apps to have access to all databases.
o This applies to all databases in the server on Azure SQL Database only, whether single or
pooled databases. It does not apply to Azure SQL Managed Instance.
o You will need SQL Server Contributor or SQL Security Manager role, or the owner of the
resource that contains the Azure SQL Server.
o Select “Add client IP” to add your current IP address. This opens port 1433.
▪ A firewall rule of 0.0.0.0 enables all Azure services to bypass the server-level
firewall rule – but in the portal, you need to turn on "Allow Azure services and
resources to access this server" instead.
o Click OK. The rules are then stored in the master database.
• In T-SQL:
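o -- A sketch using sp_set_firewall_rule (run in the master database); rule name and IP range are hypothetical:
o EXECUTE sp_set_firewall_rule @name = N'AllowOffice',
o     @start_ip_address = '203.0.113.1', @end_ip_address = '203.0.113.10';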
• You can also manage using PowerShell, CLI (Command Line Interface) or REST API.
o It can only be done using T-SQL statements, and you need CONTROL DATABASE permission
at the database level.
• In T-SQL:
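o -- A sketch using sp_set_database_firewall_rule (run in the user database); values hypothetical:
o EXECUTE sp_set_database_firewall_rule @name = N'AllowApp',
o     @start_ip_address = '198.51.100.5', @end_ip_address = '198.51.100.5';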
• If you wish to use an Azure Key Vault, then you need to create it first
▪ Cryptographic Operations: Decrypt, Encrypt, Unwrap Key, Wrap Key, Verify and
Sign.
o It costs $0.03 for 10,000 transactions. The Premium version allows for a Hardware Security
Module (HSM).
o Select the columns and choose the “Encryption Type”, either Deterministic or Randomized.
▪ Deterministic allows equality joins, GROUP BY, indexes and DISTINCT. Randomized
prevents this.
o In “Master Key Configuration”, you can go to “Select an Azure Key Vault” and select the Key
Vault.
• When the columns are encrypted, then when connecting, go to the “Additional Connection
Parameters” tab, and enter:
o Security Administrator generates columns encryption keys and column master keys.
▪ Needs access to the keys and the key store, but not the database.
o Database Administrator (DBA) manages metadata about the keys in the database.
▪ If so, then you can only use PowerShell.
▪ $storeLocation = "CurrentUser"
▪ Import-Module "SqlServer"
▪ $cmkSettings = New-SqlCertificateStoreColumnMasterKeySettings -
CertificateStoreLocation "CurrentUser" -Thumbprint $cert.Thumbprint
o # Generate a column encryption key, encrypt it with the column master key to produce an
encrypted value of the column encryption key.
▪ $encryptedValue = New-SqlColumnEncryptionKeyEncryptedValue -
TargetColumnMasterKeySettings $cmkSettings
o # Share the location of the column master key and an encrypted value of the column
encryption key with a DBA, via a CSV file on a share drive
▪ $keyDataFile = "Z:\keydata.txt"
▪ $keyData.KeyStoreProviderName
▪ $keyData.KeyPath
▪ $keyData.EncryptedValue
o # Obtain the location of the column master key and the encrypted value of the column
encryption key from your Security Administrator, via a CSV file on a share drive.
o $keyDataFile = "Z:\keydata.txt"
o Import-Module "SqlServer"
o $connStr = "Server = " + $serverName + "; Database = " + $databaseName + "; Integrated
Security = True"
o $cmkName = "CMK1"
o # Generate a column encryption key, encrypt it with the column master key and create
column encryption key metadata in the database.
o $cekName = "CEK1"
o Go to the database.
o At the bottom of the screen, you may have “X columns with classification
recommendations”.
- [n/a], Other
- Networking
- Personal data: Contact Info, Name, National ID, SSN, Health, Date of Birth,
- Credentials
- General – Business data not meant for the public, such as emails,
documents and files which do not include confidential data.
▪ You cannot select [n/a] for both Information Type and Sensitivity Label.
• The following roles can modify and read a database’s data classification:
o Owner,
o Contributor,
• Additionally, the following roles can read (but not modify) a database’s data classification:
o Reader, and
• You can use Audit to drill down into "Security Insights", "Access to Sensitive Data" etc.
• You can also use T-SQL, REST API or PowerShell to manage classifications.
• In T-SQL (a sketch with a hypothetical target column and label):
o ADD SENSITIVITY CLASSIFICATION TO dbo.MyTable.MyColumn
o WITH (
▪ LABEL = 'Confidential',
▪ INFORMATION_TYPE = 'Credit Card' -- one of: Networking, Contact Info, Credentials, Credit Card, Banking, Other, Name, National ID, SSN, Health, Date of Birth
o )
o To check sensitivity classifications:
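▪ SELECT * FROM sys.sensitivity_classifications; -- catalog view listing classified columns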
• Notes:
o Under high activity, Azure will prioritise other actions and may not record some audited
events.
o Server policy audits always apply to the database, regardless of any database-level auditing policies. They can sit side-by-side.
o Microsoft recommends using only server-level auditing, unless you want to audit different
event types/categories for a specific database.
▪ BATCH_COMPLETED_GROUP
• To do this:
o Click “Enable Azure SQL Auditing” to track these events for a particular database or server.
You can select the details to be stored in:
- The Advanced settings allow you to choose the retention period (the
default, zero days, is unlimited),
- This Advanced setting only applies to new audits.
o If you are in the database, you can click on “View server settings”.
o If you are in the server, you can also audit Microsoft support operations.
▪ Give the container a name, set the Public access level to Private and click OK.
▪ In the Properties, click on Properties and copy the URL for future use.
▪ Add “Blob” to “Allowed services”, choose the Start date as yesterday (to avoid
timezone related problems), and an End date.
▪ Click “Generate SAS” and copy this token for future use.
o You would need to set up a stream to consume these events and write them to a target.
o You can use SSMS, going to File – Open – Merge Audit Files.
▪ In advanced properties, you can change to the secondary access storage key.
▪ Then you can go to your Storage Account – Settings – Access keys, and click the
regenerate icon on the primary access key.
▪ You can then go back to the audit, and change it to the primary key.
▪ You can then go to your Storage Account – Settings – Access keys, and click the
regenerate icon on the secondary access key.
o However, it does not track how many times a row changed, nor does it keep historic data. It is therefore more lightweight and requires less storage than CDC (Change Data Capture).
o It therefore enables applications to determine which rows have changed, and request those rows. (But you cannot see the previous data.)
o The data is stored in an in-memory rowstore, and flushed to internal on-disk tables at every checkpoint.
o You may wish to consider using snapshot isolation for the database, so that changes made
while getting the data are not visible within the transaction:
o In SSMS
▪ Select the Retention Period and Units (by default, 2 Days) – the minimum is 1
Minute; there is no maximum.
- If False, change tracking information will not be removed and will continue
to grow.
o In T-SQL
▪ ALTER DATABASE MyDatabase -- database name hypothetical
▪ SET CHANGE_TRACKING = ON (CHANGE_RETENTION = 2 DAYS, AUTO_CLEANUP = ON);
• However, you still need to enable it in a particular table.
o In SSMS
▪ If True, you can also change “Track Columns Updated” to True. This will indicate
whether UPDATEs to individual columns will be tracked.
o In T-SQL
▪ ALTER TABLE dbo.MyTable -- table name hypothetical
▪ ENABLE CHANGE_TRACKING WITH (TRACK_COLUMNS_UPDATED = ON);
o However, to disable it on the database, change tracking must first be disabled on all tracked tables.
▪ ALTER TABLE dbo.MyTable DISABLE CHANGE_TRACKING;
o SELECT * from sys.change_tracking_tables -- this uses the current database. You need:
• To use it (a sketch; table name and sync version variable are hypothetical):
▪ SELECT CT.SYS_CHANGE_OPERATION,
▪ CT.SYS_CHANGE_COLUMNS, CT.SYS_CHANGE_CONTEXT
▪ FROM CHANGETABLE(CHANGES dbo.MyTable, @last_sync_version) AS CT;
o Check that you don’t have to refresh the entire table:
• Change Data Capture (CDC) is supported in Azure SQL Database, Azure SQL Managed Instance and
SQL Server on VM.
o Cannot be used in Azure SQL Database Free, Basic or Standard tier Single Database (S0, S1,
S2).
o Cannot be used in Azure SQL Database Elastic Pool with vCore < 1 or eDTUs < 100.
• Before you can enable it for a table, you must switch it on for the database.
o EXEC sys.sp_cdc_enable_db
▪ It creates the Change Data Capture objects, including metadata tables and DDL
triggers.
▪ @source_schema = N'HumanResources'
▪ , @source_name = N'Department'
▪ , @role_name = N'cdc_admin'
- The database role used to gate access to change data. Could be a new role.
o EXECUTE sys.sp_cdc_help_change_data_capture
Monitor and Optimize Operational Resources
o SELECT * FROM cdc.fn_cdc_get_all_changes_HR_Department (@from_lsn, @to_lsn, N'all');
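o The LSN range parameters can be obtained first – a sketch, using the capture instance from above:
▪ DECLARE @from_lsn binary(10) = sys.fn_cdc_get_min_lsn('HR_Department');
▪ DECLARE @to_lsn binary(10) = sys.fn_cdc_get_max_lsn();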
• To view details of the findings, go to Security – Security Center - “View additional findings in
Vulnerability Assessment”.
o Findings include an overview, number of issues found, severity risk summary, and findings
list.
▪ You can “Approve as Baseline” specific results. Any similar results are put in the
“Passed” section.
o They are stored in a time-series database which is suitable for alerting and fast detection of
issues.
o Select:
▪ Scope,
▪ Metric Namespace,
o To change the date/time range, go to the top-right hand corner (where it says "Local time").
▪ You can also change the "Show time as" from Local to UTC/GMT, and change the
"Time granularity" (how often it does the aggregation).
▪ Only a maximum of 30 days is visible at once, but you can use the arrows at the left/right to go back up to 93 days in the past.
o You can:
▪ Change the color of a line (by clicking on the color in the legend – not the line, but
the legend).
▪ Split or filter a metric, if it has a dimension (not applicable to Azure SQL Database).
▪ Add a second metric onto the same chart (e.g. "Data space allocated").
▪ Change the chart type (from Line to Area, Bar, Scatter and Grid).
▪ Move the chart up, down, clone it, delete it, or see more settings (in the … to the
right-hand side).
• Logs are events in the system, which may contain other (non-numerical) data and may be structured
or free-form, with a timestamp.
o Hardware/compute/memory,
o Client applications.
• Azure Monitor allows you to monitor resource metrics, such as processor, memory and I/O resources.
o You may need more CPU or I/O resources if you have high DTU/processor percentage or high
I/O percentage. Alternatively, your queries may need to be optimized.
▪ You get a row for every 15 seconds for about the past hour.
o SELECT * FROM sys.resource_usage
▪ You get a row showing the hourly summary of resource usage data for user
databases. Historical data is retained for 90 days.
▪ However, this is currently in a preview state. It says "Do not take a dependency on
the specific implementation of this feature because the feature might be changed
or removed in a future release."
o Subscription
▪ Azure Activity log includes service health records and records of configuration
changes.
▪ Azure Service Health has information about your Azure services’ health
o Resources
▪ Resource logs are created internally regarding the internal operation of an Azure
resource.
▪ Azure Diagnostic extension for Azure VM, when enabled, submits logs and metrics
▪ Log Analytics agents can be installed into your Windows or Linux VMs, running in
Azure, another cloud, or on-prem
o Other sources
▪ In Application code, you can enable Application Insights to collect metrics and logs
relating to the performance and operations of the app.
o I/O bytes read/written,
o Blocked by firewall,
o Deadlocks,
o CPU %,
o Sessions %,
o Workers %,
• Space/components used
o DTU percentage – combined CPU, memory and I/O (DTU-based model only; the vCore model reports these separately)
o When high, query latency increases and queries may time out.
▪ If this hits 100%, then INSERT, UPDATE, ALTER and CREATE operations will fail
(SELECT and DELETE are fine).
o Data space used percent – if this is getting high, then upgrade to the next service tier, shrink the database, or scale out using sharding.
▪ This is used for caching. If you get out-of-memory errors, increase the service tier or compute size, or optimize queries.
• Connections/requested used
o Sessions percentage
o Worker percentage
o Top queries per duration or execution count (Custom – Metric type: Duration or Execution
Count)
41. assess database performance by using Intelligent Insights for Azure SQL Database
and Managed Instance
• Not available in some regions
• Compares the current database workload (last hour) with the past 7 days.
o Uses data from the Query Store (see topic 48), which is enabled by default in Azure SQL
Database.
• Monitors operational thresholds using Artificial Intelligence, and detects issues with high wait times, critical exceptions, and query parameterizations.
o Impacted metrics are increased query duration, excessive waiting, and timed-out or errored-out requests.
o Includes a “root cause analysis” in a readable form. May also contain a recommendation.
o A Log Analytics workspace can be used with Azure SQL Analytics (a cloud-only monitoring solution) to see insights in the Azure portal. This is the typical way to view insights.
▪ To add Azure SQL Analytics, go to Home in the Azure portal, click “+Create a
resource”, and search for “Azure SQL analytics”
• How to connect
o Go to the database in the Azure Portal, and go to Monitoring – Diagnostic settings – Add.
▪ Add all the Category Details (log and metric), and in “Destination details” check
“Send to Log Analytics workspace”.
▪ DTUs, worker threads and login sessions reaching resource limits for Azure SQL
Database.
o Workload increase
o Memory pressure
o Data locking
▪ When there are more parallel workers than expected.
o Missing indexes
o Multiple threads using the same TempDB resource.
42. configure and monitor activity and performance at the infrastructure, server,
service, and database levels
• See topic 38.
o Metrics,
o Performance Overview,
o Performance recommendations, or
o https://docs.microsoft.com/en-us/azure/azure-sql/database/monitoring-with-dmvs
• In Azure SQL Database, in the Azure Portal – you can go to Intelligent Performance – Automatic
tuning.
o You can click on a “Create index” or “Drop index” and implement it.
• For VMs, you can also use the Database Engine Tuning Advisor.
o You then need to give it details such as the Query Store or a T-SQL file (.sql extension) with
your workload.
o The statistics contain information about the distribution of values in tables or indexed views’
columns.
o This enables the Query Optimizer to create better quality plans (e.g. seek vs scan).
• Usually, the Query Optimizer determines when statistics might be out of date and then updates them.
However, you may wish to manually update them if:
o After maintenance operations, such as a bulk insert (but not rebuilding or reorganizing an
index, as they do not change the data distribution).
• The stored procedure sp_updatestats updates statistics for all user-defined and internal tables.
▪ WITH FULLSCAN – This scans all rows. It is the same as SAMPLE 100 PERCENT.
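▪ For example (hypothetical table name):
- UPDATE STATISTICS dbo.Orders WITH FULLSCAN;
- EXEC sp_updatestats; -- all user-defined and internal tables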
o Go to “Automatic tuning”.
o This says that the last good plan should be forced whenever some plan change regression is found – when the estimated gain is >10 seconds, or the number of errors in the new plan is greater than in the recommended plan.
• In Azure SQL Database only, you can automate index maintenance by:
o You can change “Create Index” and “Drop Index” from Inherit (from server) to OFF or ON
[the default for servers is OFF for both of these]. These will override the server settings.
o Indexes will only be auto-created if the CPU, data I/O and log I/O are lower than 80%.
o The performance of queries using the auto-created index will be reviewed. If it doesn’t improve performance, the index is automatically dropped.
• You can create elastic job agents to automate maintenance tasks and/or run T-SQL queries.
o Update reference data or load or summarise data from databases or Azure Blob storage.
o Targets can be in different servers, subscriptions or regions, but must be in the same Azure
cloud.
▪ One or more databases, all databases in a server or elastic pool or shard map.
o This is the equivalent of SQL Agent Jobs, which are available in SQL MI, but are not available
in Azure SQL Database.
• You need:
o Elastic Job agent – the Azure resource which runs the jobs. This is free.
o Job database – an existing Azure SQL Database stores job related data, such as metadata,
logs, results and job definitions. It also contains stored procedures and other objects for jobs.
▪ You need a Standard (S0 or above) or Premium service tier. S1 or above is
recommended, but if you run frequent jobs or against a big target group, you may
need more.
o Target group – servers, elastic pools, databases and databases of shard map(s) which are
affected.
▪ If a server or elastic group, all databases in the server at the time of running the job
will be affected. You will need to give the master database credential, so the
databases can be enumerated. You can also exclude individual databases or all
databases in an elastic pool.
o Job – unit of work which contains job steps, each of which specify the T-SQL script and other details.
▪ Scripts must be "idempotent", capable of running twice with the same result.
o EXEC jobs.sp_add_target_group_member
▪ @target_group_name = 'GrpDatabase',
▪ @target_type = 'SqlDatabase'
- or 'SqlServer', -- or 'PoolGroup'
◼ If targeting a server or pool, @refresh_credential_name =
'RefreshPassword',
▪ @server_name = 'DataBaseName.database.windows.net';
o To view the recently created target group and target group members
• In each database, you will need a job agent credential in each affected database. You could use
PowerShell for this.
• @credential_name='RunJob',
• @target_group_name='GrpDatabase'
o EXEC jobs.sp_update_job
▪ @job_name='Sample T-SQL',
▪ @enabled=1,
▪ @schedule_interval_count=1
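o A hedged end-to-end sketch using the jobs stored procedures (job name, command and credential are hypothetical):
▪ EXEC jobs.sp_add_job @job_name = 'Sample T-SQL';
▪ EXEC jobs.sp_add_jobstep @job_name = 'Sample T-SQL',
▪     @command = N'UPDATE STATISTICS dbo.Orders;', -- must be idempotent
▪     @credential_name = 'RunJob',
▪     @target_group_name = 'GrpDatabase';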
o Add more space to, or decrease the maximum capacity of, a database or elastic pool.
• Terminology:
▪ Generally increases with inserts and decreases with deletes, but dependent on
fragmentation.
▪ Can grow automatically, but does not automatically decrease after deletes.
• This applies to Azure SQL Database, not Azure SQL Managed Instance.
- SELECT elastic_pool_name, elastic_pool_storage_limit_mb,
avg_allocated_storage_percent FROM
master.sys.elastic_pool_resource_stats
▪ SELECT file_id, size FROM sys.database_files WHERE type = 1 -- "1" = Log file. Size is in 8 KB pages.
▪ Will impact database performance while running; should be done at times of low usage.
▪ DBCC SHRINKDATABASE(MyDatabase) will shrink all the data and log files in the
MyDatabase database. (Note the lack of quote marks.)
o It contains 3 stores: a plan store, a runtime stats store, and a wait stats store.
o Fix queries which are regressed due to changes in the execution plan.
o What are the Top X queries, by execution time, memory consumption, waiting on resources?
o Disabled by default for new SQL Server databases (e.g. on a VM), but enabled by default in Azure SQL Database.
o In SSMS:
▪ To enable Query Store generally, change "Operation Mode (Requested)" to "Read
write".
o In T-SQL:
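▪ ALTER DATABASE [MyDatabase] SET QUERY_STORE = ON; -- database name hypothetical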
• Options:
▪ In T-SQL:
▪ You can choose from 1, 5, 10, 15, 30, 60 or 1440 minutes. A query will have a
maximum of 1 row collected for this time period.
o MAX_STORAGE_SIZE_MB = 500,
o DATA_FLUSH_INTERVAL_SECONDS = 3000,
▪ Have a higher value if you don't have a large number of queries being generated. However, if the SQL Server crashes or restarts, then anything new will not be saved.
▪ Having a lower value may have a negative impact on performance, as it will save more often.
o SIZE_BASED_CLEANUP_MODE = AUTO,
o OPERATION_MODE = READ_WRITE,
▪ You can automatically delete Query Store data that you don't need.
o INTERVAL_LENGTH_MINUTES = 15,
o QUERY_CAPTURE_MODE = AUTO,
o MAX_PLANS_PER_QUERY = 1000,
o WAIT_STATS_CAPTURE_MODE = ON);
• To clear:
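o ALTER DATABASE [MyDatabase] SET QUERY_STORE CLEAR; -- database name hypothetical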
o Session 1 locks a resource (e.g. row, page or entire table), and then Session 2 is blocked when it tries to modify the same resource.
o Explicit transactions require you to add the BEGIN, and COMMIT/ROLLBACK TRANSACTION.
• Session 1
o BEGIN TRANSACTION
o UPDATE [SalesLT].[Address]
o SET City = 'Seattle' WHERE AddressID = 9; -- hypothetical row; transaction left open, so the lock is held
• Session 2
o BEGIN TRANSACTION
o UPDATE [SalesLT].[Address]
o SET City = 'Redmond' WHERE AddressID = 9; -- blocked until Session 1 commits or rolls back
• To view locks:
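o SELECT resource_type, request_mode, request_status
o FROM sys.dm_tran_locks WHERE resource_database_id = DB_ID(); -- a sketch using the locks DMV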
• To view blocking:
o SELECT session_id, blocking_session_id, wait_type,
o DB_NAME(database_id) AS [database],
o open_transaction_count
o FROM sys.dm_exec_requests
o WHERE blocking_session_id <> 0; -- sessions currently blocked
• For the session_id, look at the numbers in brackets at the top of SSMS.
• To reduce blocking, you can change the TRANSACTION ISOLATION LEVEL of a session:
o READ COMMITTED – No dirty reads, as it will not read data that has been modified but not committed.
o SNAPSHOT – The data read remains the same until the end of the transaction. No blocks
unless the database is being recovered.
o SERIALIZABLE – No dirty, non-repeatable or phantom reads. However, it blocks other transactions' updates/inserts until it completes.
o DBCC USEROPTIONS
▪ DML statements start generating row versions – allows snapshots but doesn't
enable it.
• To create a new session:
▪ Profiler Equivalents,
- TSQL_Locks (deadlocks),
▪ Query Execution,
▪ System Monitoring
o Such as "session_id".
▪ Correlates SQL Server events with Windows OS events. Processes data
synchronously
o Event Counter
▪ Counts how many times each event occurs. Processes data synchronously
o Histogram
▪ Counts how many times events occur, for event fields and actions separately (asynchronous).
o Pair Matching
o Ring Buffer
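• A hedged sketch of creating and starting a session with a ring_buffer target (session name hypothetical; use ON SERVER for VM/MI):
o CREATE EVENT SESSION [TrackDeadlocks] ON DATABASE
o ADD EVENT sqlserver.lock_deadlock
o ADD TARGET package0.ring_buffer;
o ALTER EVENT SESSION [TrackDeadlocks] ON DATABASE STATE = START;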
o For a database
▪ FROM sys.resource_stats
▪ ORDER BY start_time
▪ FROM sys.database_files
▪ WHERE type = 1
• To assess fragmentation of database indexes:
o SELECT object_id, index_id, avg_fragmentation_in_percent
o FROM sys.dm_db_index_physical_stats(NULL,NULL,NULL,NULL,NULL)
▪ --The arguments are: database_id (use db_id to look it up), object_id (use object_id
to look it up), index_id, partition_number and mode (the scan level).
o You can also check it by right-clicking on the index in SSMS and going to Properties – Fragmentation.
o DBCC SHOWCONTIG
▪ Where more than 20% of rows have been deleted (due to DELETE or UPDATE), reorganize. This removes rows marked as deleted.
o REORGANIZE
o REBUILD
- An offline rebuild is generally quicker, but locks the index during this time.
• Generally, REORGANIZE if >10% and <30%, and REBUILD if >30% – but this is a guide only.
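o For example (hypothetical index and table):
▪ ALTER INDEX IX_Orders_CustomerId ON dbo.Orders REORGANIZE;
▪ ALTER INDEX IX_Orders_CustomerId ON dbo.Orders REBUILD WITH (ONLINE = ON);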
o The database generates information about the contents of each column. Can be useful for
deciding whether to use a scan or seek.
• Auto Shrink
o When creating a VM, the "SQL Server settings – Change configuration" shows the storage.
o All of the SQL Server VM marketplace images follow default storage best practices.
o After setting the VM, when using disk caching for Premium SSD, you can select the disk
caching level (by going to Settings – Disks):
▪ It should be ReadOnly for SQL Server data files, as this improves reads from cache
(VM memory and local SSD), which is much faster than from disk (Azure Blob
storage).
▪ It should be None for SQL Server Log files, as the data is written sequentially.
▪ ReadWrite caching should not be used for the SQL Server files, as SQL Server does
not support data consistency with this cache type. However, it could be used for the
O/S drive, but it is not recommended to change the O/S caching level.
• In VMs and Azure SQL MI, you can use Resource Governor to balance resources used by different
sessions.
o You can divide resources (CPU, physical I/O, and memory) differently, based on which
workload it is in. This can improve performance on critical workloads.
• Terminology:
o Resource pool – the physical resources. Two resource pools are created when SQL Server is
installed: internal and default.
▪ Without Resource Governor enabled, all new sessions are classified into the default workload group, and system requests into the internal workload group.
o Workload group – a container for requests which have similar criteria, and
o In SSMS
o In T-SQL
▪ ALTER RESOURCE GOVERNOR RECONFIGURE; -- a sketch: this enables Resource Governor
▪ GO
o In SSMS
▪ Double-click the empty cell in the Name, and enter the resource pool Name.
o In T-SQL:
▪ CREATE RESOURCE POOL MyPool; -- pool name hypothetical; or ALTER/DROP
▪ GO
▪ ALTER RESOURCE GOVERNOR RECONFIGURE;
▪ GO
o Settings:
- e.g. Department A has min of 60%, and Department B has max of 40%.
▪ CAP_CPU_PERCENT
▪ MIN_ and MAX_MEMORY_PERCENT
• Workload Groups:
o In SSMS
▪ Go down to the "Workload groups for resource pool", and enter a name, with any
other values.
o In T-SQL
▪ CREATE WORKLOAD GROUP myGroup -- or ALTER, if you wish to change it, or DROP
to delete it.
▪ GO
o CREATE FUNCTION dbo.fnClassifierTime() -- name as registered with ALTER RESOURCE GOVERNOR below
o RETURNS sysname
o WITH SCHEMABINDING
o AS
o BEGIN
o if DATEPART(HOUR,GETDATE())<8 or DATEPART(HOUR,GETDATE())>17
▪ BEGIN
- RETURN 'gOutsideOfficeHours';
▪ END
o RETURN 'gInsideOfficeHours';
o END
o ALTER RESOURCE GOVERNOR with (CLASSIFIER_FUNCTION = dbo.fnClassifierTime);
o GO
• T-SQL
o sys.dm_resource_governor_resource_pools
▪ Returns information about the current resource pool state, the current configuration of resource pools, and resource pool statistics.
o sys.dm_resource_governor_workload_groups
▪ Returns workload group statistics and the current in-memory configuration of the workload group.
• GLOBAL_TEMPORARY_TABLE_AUTO_DROP
• LAST_QUERY_PLAN_STATS
• LEGACY_CARDINALITY_ESTIMATION
o The query optimizer cardinality estimation model changed in SQL 2014. Should only be
turned on for compatibility purposes.
o Too high a MAXDOP may cause performance problems when executing multiple queries at the same time, as it may starve new queries of resources. Consider reducing MAXDOP if this happens.
o The default for new Azure SQL Databases is 8, which is best for most typical workloads.
• OPTIMIZE_FOR_AD_HOC_WORKLOADS
o Stores a compiled plan stub when a batch is compiled for the first time, which has a smaller
memory footprint. When it is compiled/executed again, it will be replaced with a full
compiled plan.
• PARAMETER_SNIFFING
▪ No need to spend time and CPU evaluating. However, may be suboptimal for certain
parameters.
• QUERY_OPTIMIZER_HOTFIXES
▪ So you can have a compatibility level for SQL Server 2012, but have query
optimization hotfixes that were released after this version.
• There are many more, but these are the main ones.
o There are 7 different features, some of which are also available on lower levels.
• You can disable any of them (except APPROX_COUNT_DISTINCT) for all queries in a single database,
or for a single query:
o All queries: ALTER DATABASE SCOPED CONFIGURATION SET X = OFF. 'X' is the first heading.
o One query – add at the end of the query: OPTION (USE HINT('Y')). 'Y' is the second heading.
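o For example, for the first feature below (query and table hypothetical):
▪ ALTER DATABASE SCOPED CONFIGURATION SET BATCH_MODE_ADAPTIVE_JOINS = OFF;
▪ SELECT OrderId FROM dbo.Orders
▪ OPTION (USE HINT('DISABLE_BATCH_MODE_ADAPTIVE_JOINS'));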
• [DISABLE_ ] BATCH_MODE_ADAPTIVE_JOINS
o For Azure SQL Database, and SQL Server 2017 or higher. Needs a Columnstore index in the
query or a table being referenced in the join, or batch mode enabled for rowstore.
o Selects the Join type (Hash Join or Nested Loops Join) during runtime based on actual input
rows, when it has scanned the first input.
o It defines a threshold (where the small number of rows makes a Nested Loops join better
than a Hash join) that is used to decide when to switch to a Nested Loops plan.
o Enabled by default in SQL Server 2017 under compatibility level 140, and Azure under
compatibility level 140.
• APPROX_COUNT_DISTINCT
o Provides an approximate COUNT DISTINCT for big data – decreases memory and
performance requirement. It guarantees up to a 2% error rate (within a 97% probability).
o Available in all compatibility levels of Azure SQL Database, and in SQL Server 2019 or higher.
• BATCH_MODE_ON_ROWSTORE / DISALLOW_BATCH_MODE
o Queries can work on batches of rows instead of one row at a time.
o This happens automatically when the query plan decides it is appropriate in Compatibility
Mode 140 for Batch Mode, and Mode 150 (SQL Server 2019+) for Row mode. No changes are
required.
• [DISABLE_ ] INTERLEAVED_EXECUTION_TVF
o Enabled by default in (Azure or SQL Server 2017+) and Compatibility Level 140+.
o Use the actual cardinality of a multi-statement table valued functions on first compilation,
rather than a fixed guess (100 rows from SQL Server 2014).
• [DISABLE_ ] BATCH_MODE_MEMORY_GRANT_FEEDBACK
o Enabled by default in (Azure or SQL Server 2017+) and Compatibility Level 140+.
o SQL Server looks at how much memory was allocated to a cached query, and then allocates the same amount of memory next time (instead of guessing, then adding more, more, more).
▪ If a query spills to disk, add more memory for consecutive executions. If it wastes
50+% of the memory, reduce memory for consecutive executions.
• [DISABLE_ ] TSQL_SCALAR_UDF_INLINING
o Enabled by default in (Azure or SQL Server 2019+) and Compatibility Level 150+.
Optimize Query Performance
▪ Running multiple times, once per row.
o Scalar UDFs are transformed into equivalent relational expressions inlined into the query,
often resulting in performance gains.
▪ Does not work with all UDFs, including those which have multiple RETURN
statements.
▪ Can also be disabled for a specific UDF by adding "WITH INLINE = OFF" before "AS
BEGIN".
• [DISABLE_ ] DEFERRED_COMPILATION_TV
o Use the actual cardinality of the table variable encountered on first compilation instead of a
fixed guess (1 row).
▪ or use
- SET SHOWPLAN_ALL ON;
- GO
◼ or SHOWPLAN_TEXT / SHOWPLAN_XML
o Nested Loops joins.
▪ Use when
- Input1 is small.
- Input2 is large.
▪ It uses the top input (in the execution plan) and takes 1 row.
o Merge joins
▪ Use when
- Input1 and Input2 are sorted on their join – or if not, possibly when Input1
and Input2 are of a similar size. Then, the Sort might be worth the time
compared with the Hash Join.
o Hash joins
▪ Also used in the middle of complex queries, as intermediate results are often not
indexed or suitably sorted.
o An Adaptive Join converts into a Hash Join or Nested Loops join after the first input has been scanned, when it uses Batch mode.
o Can you narrow down the columns? If so, maybe you can then use indexes.
▪ Is there a Sort? It's expensive – do you really need it? If so, could you have an index which has already sorted on those columns?
▪ Do you use parameters? If so, and performance varies depending on the parameter values (parameter sniffing), you can add WITH RECOMPILE to the stored procedure, or use OPTION (RECOMPILE) for queries.
▪ Are you using a Heap? Do you need a clustered index?
▪ Are you returning extra columns? If so, could you use an INCLUDE with the index? This writes the data into the index, but in a separate part of the index away from the key – so it's quicker to retrieve, but doesn't slow down the index much.
- It's also useful for Unique indexes – INCLUDE columns are not part of the uniqueness test.
▪ This will increase the index row size, increasing the time to retrieve the data.
• Different joins
o Are you using a Hash Join when, with some changes, a Merge Join or Nested Loops join could be used?
o You can retrieve query text from the Query Store by joining sys.query_store_query to sys.query_store_query_text, e.g.:
▪ ON Qry.query_text_id = Txt.query_text_id ;
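o A minimal sketch of such a Query Store query (the aliases Qry and Txt match the join above; the column choice is illustrative):
▪ SELECT Qry.query_id, Txt.query_sql_text
▪ FROM sys.query_store_query AS Qry
▪ JOIN sys.query_store_query_text AS Txt
▪ ON Qry.query_text_id = Txt.query_text_id ;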
• To use the Query Store reports in SSMS:
o Note – you can click on "Configure" to change the time period. You can also click on "Track
the selected query in a new window".
▪ Regressed Queries
- Has your query speed got worse? Have a look at Duration, CPU Time, Logical Reads, Physical Reads, and more.
▪ Top Resource Consuming Queries
- The most extreme values in Duration, Execution Count, CPU Time etc.
▪ Query Wait Statistics
- You can click on the categories (e.g. High Memory, Lock, Buffer I/O or CPU waits) to get detail on that category.
▪ Tracked Queries
61. determine the appropriate Dynamic Management Views (DMVs) to gather query
performance information
o SELECT *
o FROM sys.dm_exec_cached_plans AS cp
o Extended Events
▪ Lightweight profiling
o To find the top five queries by average CPU time:
o SELECT TOP 5 query_stats.query_hash AS query_hash,
▪ SUM(query_stats.total_worker_time) / SUM(query_stats.execution_count) AS avg_cpu_time,
▪ MIN(query_stats.statement_text) AS statement_text
o FROM
o (SELECT QS.*,
▪ SUBSTRING(ST.text, (QS.statement_start_offset/2) + 1,
▪ ((CASE statement_end_offset
- WHEN -1 THEN DATALENGTH(ST.text)
- ELSE QS.statement_end_offset END
- QS.statement_start_offset)/2) + 1) AS statement_text
o FROM sys.dm_exec_query_stats AS QS
o CROSS APPLY sys.dm_exec_sql_text(QS.sql_handle) AS ST) AS query_stats
o GROUP BY query_stats.query_hash
o ORDER BY 2 DESC;
o To find the queries using the most total CPU, with their text:
o SELECT
o highest_cpu_queries.plan_handle,
o highest_cpu_queries.total_worker_time,
o q.[text]
o FROM
o (SELECT TOP 50 qs.plan_handle, qs.total_worker_time
o FROM sys.dm_exec_query_stats qs
o ORDER BY qs.total_worker_time desc) AS highest_cpu_queries
o CROSS APPLY sys.dm_exec_sql_text(plan_handle) AS q
o ORDER BY highest_cpu_queries.total_worker_time DESC;
o USE master
o GO
o Small column size (the best are numeric, but smaller text columns are OK too).
o Use columns which are in WHERE (SARGable columns) and JOIN clauses.
▪ If using LIKE '%text%', then an index (apart from a full-text index) will not help.
▪ Additional columns can be included using INCLUDE (covered queries). This can make
the index key smaller and more efficient.
o Clustered or Non-clustered?
▪ Only one clustered index per table. It is also used by PRIMARY KEYs by default. It re-sorts the table. Use for frequently used queries and range queries.
- Should be created with the UNIQUE property – but it is possible to create one which isn't unique.
- IDENTITY
- Frequently used.
o If you INSERT, UPDATE, DELETE or MERGE, then all indexes need to be adjusted.
• Create in T-SQL:
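o A minimal sketch (the table, column and index names are hypothetical):
▪ CREATE NONCLUSTERED INDEX IX_OrderLines_OrderID
▪ ON Sales.OrderLines (OrderID)
▪ INCLUDE (Quantity, UnitPrice);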
• Create in SSMS:
o Right-hand click on Indexes in the relevant table and select "New Index" – "[Non-]Clustered
Index".
▪ In Azure SQL Database, this only gives information about databases to which the user has access.
▪ In creating the index, put equality before inequality – both of these should be the
key – and INCLUDE the included columns.
o A query over the missing-index DMVs:
o SELECT
o mig.index_group_handle
o , mid.index_handle
o , CONVERT(decimal(28,1), migs.avg_total_user_cost * migs.avg_user_impact *
▪ (migs.user_seeks + migs.user_scans)) AS improvement_measure
o , migs.*
o , mid.database_id
o , mid.[object_id]
o FROM sys.dm_db_missing_index_groups AS mig
o INNER JOIN sys.dm_db_missing_index_group_stats AS migs
o ON migs.group_handle = mig.index_group_handle
o INNER JOIN sys.dm_db_missing_index_details AS mid
o ON mig.index_handle = mid.index_handle
o ORDER BY improvement_measure DESC;
o Microsoft says that the Query Optimizer typically selects the best execution plan, so only use
this as a last resort.
o KEEPFIXED PLAN
▪ The query won't be recompiled because the statistics change. It will only recompile
if the schema of the underlying tables changes or sp_recompile is run against these
tables.
o KEEP PLAN
▪ Relaxes the recompile threshold, so the query is recompiled less frequently as data changes.
o ROBUST PLAN
▪ Creates a plan that works for the maximum potential row size, possibly at the expense of performance.
o or
• Otherwise, the stored procedure will be optimised according to the parameter values of its first run.
• Simplify queries
o Requirements
o Actions
o Requirements
o Actions
o Requirements
▪ Values that are not part of a record’s key are to be removed from the table.
o Action
o Requirements
o Actions
o Requirements
o Actions
o decimal/numeric
o money, smallmoney
• Approximate numerics
o float, real
• Character strings
o char, varchar; Unicode: nchar, nvarchar
• Binary strings
o binary, varbinary
o Azure SQL Database supports only one database file (except in Hyperscale).
o Primary file
▪ Start-up information.
o Secondary file
▪ Additional, but optional, user-defined data files (zero to multiple). Cannot be used
in Azure SQL Database.
o Transaction Log
o Simple databases can have a single data file and a single transaction log file.
o O/S file name – its location, including directory path (you can set this on VM only).
• Storage size
• Filegroups
o Contains multiple files for admin, data allocation or storage purposes. Not used in Azure SQL
Database.
o By default, the "default" filegroup is the PRIMARY filegroup. However, you can change it.
o The primary filegroup contains the primary file and the system tables. The default filegroup (which may be the same) contains any other objects where you have not specified a filegroup.
▪ There are other filegroups, called "Memory Optimized Data" and "Filestream".
o If you use multiple data files, Microsoft recommends that you create a second file group for
the other files and make that filegroup the default filegroup.
o ALTER DATABASE DatabaseName ADD FILE
o ( NAME = N'NewData' ,
▪ FILENAME = N'C:\PathToData\NewData.ndf' ,
▪ FILEGROWTH = 64MB )
◼ or FILEGROWTH = 10%
▪ TO FILEGROUP [NewFileGroup] ;
o GO
o sp_help 'Schema.TableName'
o Scalable – there are hardware limits, but if you divide data into partitions, each on a separate
server, it can be scaled out.
o Increase performance – Smaller amount of data in a single partition, and multiple data stores
can be accessed at the same time.
o Have different hardware or services – Premium or Standard where needed.
o Increase availability – if one instance fails, only that partition is temporarily unreadable.
▪ If some data is fairly static or small, consider replicating it in all partitions, to reduce
cross-partition access.
o Vertical partitioning.
▪ Some columns may be needed less often, and they could be separated away, and
used only when needed.
▪ Some columns may also be more sensitive, and could be separated away.
▪ All partitions would need to be capable of being joined – for instance, by the same
primary key in each.
o Functional partitioning.
▪ Some tables could be more sensitive, and could be separated away into another
partition.
• Consider the backup, archiving (including deleting) and High Availability, Disaster Recovery
requirements for each partition.
o However, it requires extra time and CPU, both to compress and retrieve data.
o You can compress at the row level, the page (8,192 bytes) level, or none.
- Numeric types (apart from tinyint) may have their storage reduced, maybe down to 1 byte. Tinyint already takes 1 byte.
- nchar and nvarchar (but not nvarchar(max)) benefit from Unicode compression: up to 50% in English, German, Hindi and Turkish, but only up to 40% in Vietnamese and 15% in Japanese.
- Page compression consists of:
- Row compression
- Prefix compression
o If values in the same column start with the same characters, this can be optimised.
- Dictionary compression
o If values after prefix compression in any column are the same, this can be optimised.
• Available in Enterprise edition and, from SQL Server 2016 SP1 onwards, in all editions (including Azure SQL Database and MI). Restrictions:
▪ You cannot use data compression with tables which have SPARSE columns.
▪ To change the compression option in a clustered index, you need to drop the
clustered index, preferably OFFLINE, and then rebuild the table.
• Would compression be useful?
▪ EXEC sp_estimate_data_compression_savings
- @schema_name = 'SchemaName',
- @object_name = 'TableName',
- @index_id = NULL, -- zero for a Heap, 1 for a clustered index, >1 for a non-clustered index; NULL for the whole table
- @partition_number = NULL,
- @data_compression = 'PAGE' ;
▪ To find index IDs, SELECT * FROM sys.indexes; to find partition numbers, SELECT * FROM sys.partitions.
• To enable compression:
o In SSMS
▪ Click next, and select the compression type for each partition.
- You can also click on "Use same compression type for all partitions".
▪ Select whether to run immediately or to create a script (to a file, clipboard, or new
query window).
- If using this on a VM, you may also get "Schedule" – you could select: one time, recurring (Daily, Weekly or Monthly), when SQL Server Agent starts, or whenever the CPUs become idle.
o In T-SQL - table
o In T-SQL – index
▪ REBUILD PARTITION = ALL WITH (DATA_COMPRESSION = PAGE | ROW | NONE);
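o For example, minimal sketches for a table and then an index (the names are hypothetical):
▪ ALTER TABLE Sales.OrderLines
▪ REBUILD PARTITION = ALL WITH (DATA_COMPRESSION = PAGE);
▪ ALTER INDEX IX_OrderLines_OrderID ON Sales.OrderLines
▪ REBUILD PARTITION = ALL WITH (DATA_COMPRESSION = ROW);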
o Columnstore indexes work best when you scan large amounts of data, like fact tables in data warehouses.
▪ They are generally clustered. Non-clustered columnstore is only used when you have a data type not supported by a clustered columnstore index – e.g. XML, text and image.
▪ COLUMNSTORE_ARCHIVE compression is best used when the data is not often read, but you need the data to be retained for regulatory or business reasons.
▪ It saves space, but there is a high CPU cost to uncompressing it, which can outweigh any I/O saving.
• This is for SQL Server on a VM, and Azure SQL MI, but not Azure SQL Database, as it uses SQL Server
Agent.
o SQL Server Agent doesn't need to be enabled on Azure SQL MI – it is always running.
o It doesn't have all of the functionality of on-prem SQL Server, but it has most of it.
o Go to SQL Server Agent (right-hand click it and Start if needed on a VM) – Jobs.
o Enter the First Step name, select the database, and which user is running the command, and
enter your T-SQL command.
o Go to SQL Server Agent (right-hand click it and Start if needed on a VM) – Jobs.
o Enter:
▪ A name,
▪ Whether it is:
- One time,
- "Start automatically when SQL Server Agent starts" – this setting is not
supported in MI.
o If you subsequently want to Edit or Remove it, you can click those buttons.
o If you want to import a previously made schedule, click "Pick" and then choose the schedule.
• To do this in T-SQL:
▪ USE msdb ;
▪ GO
▪ EXEC sp_add_schedule
▪ @schedule_name = N'ScheduleName' ,
▪ @freq_type = 4, -- daily
▪ @freq_interval = 1, -- every 1 day; required when @freq_type = 4
▪ @active_start_time = 012345 ;
▪ GO
▪ EXEC sp_attach_schedule
▪ @job_name = N'JobName',
▪ @schedule_name = N'ScheduleName' ;
▪ GO
• To view schedules:
o USE msdb ;
o GO
o SELECT *
o FROM dbo.sysschedules ;
• For MI and VM, you need a master server and one or more target servers.
o Right-hand click on SQL Server Agent, and go to Multi Server Administration – Make this a
Master.
o Add your target servers (by clicking on "Add Connection", if they are not already registered).
o After checking that the servers are compatible, you can "create a new login if necessary and
assign it rights to the MSX".
o Right-hand click on SQL Server Agent, and go to Multi Server Administration – Make this a
Target.
o You can "create a new login if necessary and assign it rights to the MSX".
o You can go to the Targets page and select "Target local server" or "Target multiple servers".
• To create an operator:
o Go to SQL Server Agent (right-hand click it and Start if needed on a VM) – Operators.
o Enter Name and e-mail name and/or pager e-mail name (and pager timings).
▪ Pager functionality has been deprecated, and will be removed in a future version.
• In T-SQL, use:
o USE msdb ;
o GO
o EXEC dbo.sp_add_operator
▪ @name = N'OperatorName',
▪ @email_address = N'EmailAddress'
• To configure notifications:
o Go to SQL Server Agent (right-hand click it and Start if needed on a VM) – Jobs.
o Go to Notifications, and
▪ Select Email, Page(r), "Write to the Windows Application event log" and
"Automatically delete job"
• In T-SQL, use:
o USE msdb ;
o GO
o EXEC dbo.sp_add_notification
o @alert_name = N'NameOfAlert',
o @operator_name = N'OperatorName',
o @notification_method = 1 ;
o Create a Database Mail account for the SQL Server Agent service account to use.
o Create a Database Mail profile for the SQL Server Agent service account to use and add the
user to the DatabaseMailUserRole in the msdb database.
o Set the profile as the default profile for the msdb database.
▪ In the "Manage Profile Security", "Default Profile" should say "Yes".
• Go to SQL Server Agent (right-hand click it and Start if needed on a VM) – Alerts.
▪ Instance – a database.
▪ Alert if counter falls below, becomes equal to, or rises above a Value.
▪ You can click New Job, or View [Existing] job (once you have selected one),
▪ You can click "New Operator", or View [Existing] operator (once you have selected
one).
o Have a delay between responses. 0 minutes and 0 seconds indicate that you want a response
for every occurrence of the alert.
o If "Metrics":
▪ Select a metric.
o If "Alerts":
▪ Select a metric.
o Click on Conditions:
- Dynamic thresholds learn from the data and model it using algorithms and methods, detecting patterns such as seasonality (hourly, daily, weekly).
▪ If static, select:
▪ If dynamic, select
- Select the operator (greater than the upper threshold and/or below the
lower threshold)
▪ Aggregation granularity period – how often the measures are grouped together,
o Click on "Create action group", and select:
▪ Email,
▪ Voice.
o The name
o Description (optional),
- The line turns from blue to red dots, and the background turns light red as
well.
o Go to Monitoring – Logs.
▪ Measure,
▪ Aggregation granularity (5, 10, 15, 30 or 45 minutes, 1-6 hours, or 1-2 days).
▪ Frequency of evaluation (5, 10, 15, 30 or 45 minutes, 1-6 hours, or 1-2 days).
▪ Email,
▪ Voice.
o Going to the Azure Portal, and the specific database, and go to Monitoring – Alerts, "+New
alert rule", and selecting:
▪ Resource,
- The signal could be a platform metric, or an activity log (an administrative
operation).
▪ Alert Details.
o GO
o RECONFIGURE
o GO
o GO
o RECONFIGURE
o GO
o In SSMS, you can right-hand click on the server instance (not the database),
o ARM (Azure Resource Manager) templates, written in JSON.
o https://docs.microsoft.com/en-us/azure/azure-sql/virtual-machines/windows/create-sql-
vm-resource-manager-template?tabs=CLI
o Using PowerShell
▪ https://docs.microsoft.com/en-us/azure/azure-sql/database/single-database-
create-quickstart?tabs=azure-powershell
▪ https://docs.microsoft.com/en-us/azure/azure-sql/managed-
instance/scripts/create-configure-managed-instance-powershell
▪ https://docs.microsoft.com/en-us/azure/azure-sql/database/single-database-
create-quickstart?tabs=azure-cli
o If you are using an Azure Pipeline, you can use a DACPAC (data-tier application package).
▪ This gets added to your azure-pipelines.yml (YAML originally stood for "Yet Another Markup Language"; it now stands for "YAML Ain't Markup Language").
o This is done through the installation of the SQL Server IaaS Agent Extension to enable automated backups (this can be done through the "Create a Virtual Machine" process).
o Needs to be:
▪ Windows Server 2012 and SQL Server 2014 Standard/Enterprise (for Automated Backup version 1), or Windows Server 2012 R2 or later and SQL Server 2016+ Standard/Enterprise (for Automated Backup v2), plus a
▪ Storage account.
o You can back up the default instance or a single named instance. If there is no default
instance and multiple named instances, it will fail.
o You have no control over when it happens, but it has minimal impact if you use “retry logic”.
o If you have a database quorum, there should be at least one primary replica online.
o Business Critical and Premium databases should also have at least one secondary replica
online.
o If you are intending to have this run to a schedule, click "Enabled" if you want the schedule
to be enabled.
o In this new box, enter a name, a facet, and what you are checking (at least one field, an
operator and a value).
▪ These conditions are what SHOULD be – the policy will fail if this is NOT the case.
o In the Against targets, select target types. If this is blank, then it will be targeted against the
server.
▪ "On demand",
▪ 99.9% for zero replicas (8 hours 45 minutes over a year, or 43 minutes 48 seconds over a month),
▪ 99.95% for one replica (4 hours 22 minutes over a year, or 21 minutes 54 seconds
over a month).
▪ 99.99% (52 minutes over a year, or 4 minutes 23 seconds over a month) – this is for
other Azure SQL Database tiers and Azure SQL Managed Instance.
▪ However, if you are in Business Critical/Premium tiers, and you have Zone
Redundant Deployments, this increases to 99.995% (26 minutes over a year, or 2
minutes 11 seconds over a month).
• For VMs:
o However, SQL Server may fail even though the VM is healthy – so the actual SLA will be lower.
• Terminology:
o RPO – Recovery Point Objective of 5 seconds (how much data you can lose)
o RTO – Recovery Time Objective of 30 seconds (how long until you can use it again –
maximum "Failover" time)
▪ If exceeded, you get a credit of 100% of the total monthly cost of the Secondary
o Auto-failover groups
o 2-9 SQL Server instances on VMs or VMs and on-premises data center.
o You can use synchronous commit for a secondary replica in the on-prem network.
▪ Transactions are not committed on the primary until they can be committed on the
secondary.
o You need a domain controller VM, as it requires an Active Directory domain.
o Availability replicas running Azure VMs allow for DR. It uses an asynchronous commit.
o You also need a VPN connection for the entire failover cluster, using a multi-subnet failover
cluster.
o For DR purposes, you also need a replica domain controller at the disaster recovery site.
• Database mirroring
o An Azure VM running at least SQL Server 2012, and another SQL Server running on-prem
running at least SQL Server 2008R2, using server certificates.
o No VPN required, and they don't have to be in the same Active Directory domain (but they can be – you will then need a VPN and a replica domain controller).
• Replicate and fail over SQL Server to Azure with Azure Storage.
• Log shipping
o As log shipping requires Windows file sharing, you would need a VPN tunnel.
▪ Designed to protect against network card or disk failure – but there are other
solutions in Azure.
▪ Storage Spaces Direct (S2D) for a Storage Area Network, for Windows Server 2016 or later.
▪ Premium File Shares for Windows Server 2012 or later. They use SSDs, have low latency, and are supported for Failover Cluster Instances.
• For MI:
▪ Updates are then replicated automatically.
▪ They need to have at least the same service tier as the primary.
- You don't need to disconnect the secondaries unless you change between
General Purpose and Business Critical.
▪ More than 1 secondary means that, even if one fails, there will still be at least one
until it is recreated.
▪ Uses snapshot isolation mode, so updates from the primary are not delayed by
long-running queries on the secondary.
o Simple DR (but not HA) of Azure VMs from a primary to a secondary region.
o Not a SQL Server-specific solution, but it can be used with SQL Server on the VMs.
o Can replicate using recovery point snapshots; they capture disk data, data in memory, and
transactions in process.
o You can then add eligible databases into the failover group.
o Click on Settings – Failover groups – and the name of the failover group.
o You can "edit the configuration" (read/write failover policy and grace period).
o Once you have done all of your changes, click "Save" or "Discard".
o Click on Settings – Failover groups – and the name of the failover group.
o Go to SSMS and the server which hosts a SECONDARY replica of the availability group.
o Right-hand click the availability group to be failed over, and click on "Failover".
o If the Introduction page of the wizard says "Perform a planned failover for this availability
group", then you can do this without data loss.
o In the "Select New Primary Replica" page, you can view the status of:
- Normal quorum
- Forced quorum
- Not applicable.
▪ "Data loss, Warnings (X)", where X shows the number of warnings – this would have
to be a forced failover.
o The relevant secondary replica will then become the new primary replica.
o In the "Connect to Replica" page, you can connect to the failover target.
o Recovery model.
o Backup component:
o Back up to:
▪ Contents shows the media contents for the selected disk/tape (not URL).
▪ Overwrite all existing backup set, replacing prior backups with the current backup.
▪ Check media set name and backup set expiration – requires the backup operation to
verify name and expiration date.
o Backup to a new media set, and erase all existing backup sets.
o Reliability
o Transaction log
▪ Backup the transaction log and truncate it to free log space. The database remains
online.
▪ Backup the transaction log tail (tail-log backup), and leave the database in a
restoring state (not available to users until it is completely restored).
▪ Specific date.
o Encrypt backup, using AES 128, AES 192, AES 256 and Triple DES.
▪ Only enabled if you back up to a new media set (not when appending to an existing backup set). Back up your certificate or keys to a different location.
• If you have a VM with the IaaS Agent Extension, you can configure backups in the Azure Portal.
• You can:
o Restore permissions exist in the sysadmin and dbcreator fixed server roles, and dbo (owner) for existing databases.
o Source
▪ Database – this list only contains databases backed up, based on the msdb backup
history.
▪ Device – tape, URL or file. This is required if the backup was taken on a different SQL
Server instance.
o Destination
▪ Database to restore.
- Alternatively, you can select [Backup] Timeline, which shows the database
backup history as a timeline.
o Restore plan
▪ File Type,
▪ Only relevant if a database was replicated when the backup was created, and when
restoring a published database to a different server (other than the creation server).
o Recovery state:
- Only choose this option in a full or bulk-logged recovery model if you are
also restoring all log files at the same time.
o Tail-log backup.
o Server connections
▪ Restore options may fail if there are active connections to the database.
▪ The "Continue with Restore" dialog box will be displayed after each backup is
restored.
▪ If you click "No", the database will be left in the Restoring state.
• For Azure SQL MI, to restore an Azure SQL database to a different region:
o Go to the MI, click on "+New database", select the database name, and change "Use existing
data" to "Backup" and select the backup.
• Database backups for Azure SQL Database and Azure SQL MI are done automatically.
o You can do a:
- You can change it to 1-35 days optionally (apart from Hyperscale and Basic
tier databases – basic has a maximum of 7 days).
- Note: In MI, PITR is available for individual databases, but not for the entire
instance.
o The first backup is scheduled immediately after a new database is created or restored.
• To restore a database:
o Click "Restore".
o You cannot restore over an existing database (but you can rename it afterwards).
o In the "Additional settings" tab, change "Use existing data" to "Backup", and select a backup.
o You cannot control the timing, nor manually create an LTR backup.
o It may take up to 7 days before the first LTR backup will be shown in the list of available
backups.
o You can set an LTR policy on secondary databases, but LTR backups will only be created once a database becomes primary.
o Backups are stored in Azure Blob storage – a different storage container weekly.
• To configure this, go to Azure portal – the server – Backups – Retention policies – select the
database(s), and configure the LTR:
o Weekly backups,
o Monthly backups,
o Yearly backups (choosing the WeekOfYear in which to take them).
• To view backups, go to Azure portal – the server – Backups – Available backups – and next to the
relevant database, under “Available LTR backups”, select Manage.
o You can click on an LTR backup, and select Restore (which creates a new database) or Delete.
• You can use transactional replication to push changes made in an Azure MI to:
• Useful for:
o Distributing changes to one or more databases in SQL Server, Azure SQL MI or Azure SQL
Database.
o Publisher
▪ Publishes changes made on some tables ("articles"), and send the updates to the
Distributor.
▪ Cannot be Azure SQL Database (need to use Data Sync – topic 14 – for this).
o Distributor
- Can be the same Azure SQL MI as the Publisher, but a different database.
▪ If SQL Server instance, version needs to be the same or higher than the Publisher
version.
▪ Can be Azure SQL MI or a SQL Server instance, but it needs to be the same type as the Distributor.
o Push subscriber.
• Create a Publication:
o Specify a Distributor.
- You will need to specify a default snapshot folder, a directory that agents
can read from and write to this folder.
▪ Transactional replication – changes occur in near real time, applied to the Subscriber in the same order as they occurred on the Publisher.
▪ Merge replication – Data can be changed on both the Publisher and Subscriber.
- When connected to the network, all rows which have changed between
Publisher and Subscriber are synchronised. For on-prem, VM and MI
▪ Snapshot replication – distributes data at a specific moment of time, and does not
monitor for updates to the data.
o Select data, database objects and filter columns and rows from table articles to publish.
o Enter the logins and passwords for connections made by replication agents.
• You can create readable secondary databases in the same or different region.
▪ Azure SQL Database and Azure SQL MI can both use auto-failover groups.
o Database migration from one server to another with minimum downtime, and
o It uses asynchronous replication, so the transactions are committed on the primary before
being replicated.
• To configure geo-replication:
o One or more domain-joined VMs in Azure running SQL Server 2012+ Enterprise, or SQL
Server 2016+ Standard in:
▪ They need to be registered with the SQL IaaS Agent extension in full manageability
mode and are using the same domain account for the SQL Server service on each
VM.
▪ One for the availability group listener within the same subnet as the availability
group.
o 1-8 sets of secondary replicas (only 1 allowed in SQL Server Standard), each of which hosts the secondary databases (this does not replace backups).
• Note:
o The primary replica sends transaction log records to every secondary database ("data synchronization").
o You can configure 1+ secondary replicas to support read-only access to secondary databases,
and/or to permit backups on secondary databases.
o Synchronous-commit mode. The primary replica does not commit until the secondary replica
has hardened the log.
• Failover:
o This is when the target secondary replica transitions to being the new primary replica.
▪ Automatic failover – no data loss – occurs when there is a failure to the primary
replica – for synchronous-commit mode only. Needs to have a Windows Server
Failover Cluster quorum and be synchronized.
o Forced manual failover (also known as "forced failover"). For asynchronous-commit mode.
This is a DR option.
▪ The only type of failover that is possible if the target secondary replica is not
synchronized with the primary replica.
o After failover, Azure SQL connections are automatically redirected to the new primary node.
o Name the cluster, and give a Storage Account which is the Cloud Witness.
▪ Storage Account name: 3-24 characters using numbers and lower-case letters.
o Click Apply.
o In Azure portal, go to the VM – Settings – SQL Server configuration – Open - High Availability.
- Type: "Internal" allows apps in the same Virtual Network to connect to the
availability group.
- The "Resource group" and "Location" should be that where the SQL Server
instances are in.
- The Probe Port is for the internal load balancer, which is 59999 by default.
o Click "Apply".
o Click "Apply".
• In the Azure Portal – Settings – High Availability, the status of the availability group(s) are shown.
• However, you can also use the SQL Server to do this as well – and this is the way I do this in the videos
to this course.
o It is mission critical,
o The name of the secondary database must be the same as the primary database.
o Secondary databases do not exist until backups of the new primary database are restored to
the secondary replicas (use RESTORE WITH NORECOVERY).
• In SSMS
o Connect to one of your SQL Server VMs using (for example) RDP.
o In SSMS, go to your SQL Server instance – Always On High Availability – Availability Groups.
o Click "OK".
o GO
• It monitors network connections and the health of the nodes (clustered servers)
• A two-node cluster will function without a quorum resource, but its use is recommended.
o A witness then provides an odd number of votes, with a minimum of 3 quorum votes.
• To configure:
o Right-hand click the cluster, and go to More Actions – Configure Cluster Quorum Settings.
o Cloud Witness
▪ Uses only about 1 MB of storage (a blob in an Azure Storage Account).
▪ Recommended to use whenever possible, unless you have a failover cluster solution
with shared storage.
▪ Use General Purpose and Standard Storage (Blob storage and Premium storage are
not supported).
- The endpoint server name, if you are using a different Azure service
endpoint, such as Microsoft Azure in China.
▪ Once finished, you can see this witness in the Failover Cluster Manager snap-in.
o Disk Witness.
▪ The disk is highly available (most resilient) and can fail over between nodes.
▪ Only can be used with a cluster which uses Azure Shared Disks.
▪ By default, all nodes have a vote, but you can assign votes to only some nodes.
▪ You could also have "No nodes", which is then the same as "No majority (disk witness only)" – see below.
o Node majority with witness ("Node and File Share Majority" or "Node and Disk Majority")
• To configure it in SSMS,
o Enter the listener DNS name – in SSMS, that is up to 15 letters, numbers, hyphens and
underscores.
▪ Static IP.
- You must specify a static IP address for every subnet that hosts an
availability replica, including Subnet and IP Address.
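o A minimal T-SQL sketch of adding a listener with a static IP (the group name, listener name, IP address and subnet mask are hypothetical):
▪ ALTER AVAILABILITY GROUP [SQLAVAILABILITYGROUP]
▪ ADD LISTENER N'AGLISTENER' (
▪ WITH IP ((N'10.0.0.10', N'255.255.255.0')),
▪ PORT = 1433);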
VM for SQLSERVER1
• Virtual machine name: SQLSERVER1
VM for SQLSERVER2
• Virtual machine name: SQLSERVER2
Connect to VMDOMAINCONTROLLER
• Go to VM – VMDOMAINCONTROLLER – Connect
• Enter credentials.
• Do you want your computer to be discoverable by other PCs and devices on this network? Yes
Join VM1 to DC
• Go to SQLSERVER1 (then SQLSERVER2) – Networking – Network Interface hyperlink – DNS servers –
Custom – enter the DNS server IP address – X.Y.0.4
• Do you want your computer to be discoverable by other PCs and devices on this network? Yes
• After reboot, go to All Servers, right-hand click on SQLSERVER1 and select "Failover Cluster Manager".
• Right-hand click on SQL Server (in SQL Server Services) – go to the “Always On Availability Groups”
and check “Always on Availability Group”.
• Go to SQL Server Configuration Manager – SQL Server Network Configuration – Protocols – TCP/IP
and Enable.
• Go to Windows Defender Firewall – New Rule – Port – TCP 1433 (all others as default).
• New
Configure Witness
• Go to Failover Cluster Manager – the actual failover cluster (in my case, SQLCLUSTER.filecats.co.uk)
• Click Next.
• Click Next x 3.
• Next
• Create the backup (right-hand click on the database – Tasks – Back Up…)
• Select the replicas… Click “Add Replica” and log into SQLSERVER2.
• Look at availability mode, automatic failover, and readable secondaries. (Synchronous good if you
have close physical distance.)
• Select Initial Data Synchronization – automatic seeding, full database and log backup, join only, or
skip.
• Finish the wizard (It’s OK for the purposes of the DP-300 course if the listener configuration has a
warning).
Add listener.
• In SSMS, go to Always On High Availability – Availability Groups – NameOfGroup – right-hand click on
Availability Group Listener – Add a Listener.
• Port – 1433
Test failover
• Go to Always On High Availability – Availability Group – SQLAVAILABILITYGROUP
• Finish.
• To use database DMVs, you need to have VIEW DATABASE STATE permission on the database.
▪ SLO is the Service Level Objective, which includes the deployment option, service tier, hardware generation, and compute amount.
▪ You get a row for every 15 seconds for about the past hour.
• Waiting on resources:
▪ Returns information about all the waits encountered by threads that executed.
- Governor
o INSTANCE_LOG_GOVERNOR – MI waits
- IO
- Parallel
o Possible blocking
• To use Server-scoped DMVs, you need VIEW SERVER STATE permission on the server.
• In addition:
▪ msdb, tempdb and model are not listed in Azure SQL Database.
o SELECT SERVERPROPERTY('EngineEdition');
▪ Returns 5 for SQL Database, 8 for Managed Instance, and <5 for on-prem/VM.
o Runs DBCC CHECKALLOC, which checks the consistency of disk space allocation structures.
o Runs DBCC CHECKTABLE for all tables and indexed views. The DBCC checks the integrity of all pages and structures in a particular table or indexed view, including:
▪ Every row in a table has a matching row in a nonclustered index (and the other way
round), and is in the correct partition.
o Runs DBCC CHECKCATALOG which checks for catalog consistency, using an internal database
snapshot to provide transaction consistency to perform these checks.
▪ Does not work on tempdb or Filestream data (binary large objects or BLOBs on the
file system).
o Validates the contents of every indexed view in the database, and the link-level consistency between table metadata and file system directories and files (when using Filestream).
o Relevant Database – repair options:
- REPAIR_REBUILD: do only repairs which have no chance of data loss. Includes quick repairs (e.g. missing rows in non-clustered indexes), and time-consuming repairs (rebuilding an index).
• WITH Arguments
o TABLOCK – obtains exclusive locks, which will speed it up, but reduce concurrency.
o ESTIMATEONLY – No database checks are done, but displays the amount of tempdb space
needed to do it.
o PHYSICAL_ONLY – limits checking to page structure integrity, record header integrity, and the allocation consistency of the database.
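o For example, a minimal sketch (the database name is hypothetical):
▪ DBCC CHECKDB (N'MyDatabase') WITH NO_INFOMSGS, PHYSICAL_ONLY;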
• Best practices:
o When using a repair option, use BEGIN TRANSACTION beforehand, so the user can confirm that they want to accept the results (COMMIT) or undo them (ROLLBACK).
o After using DBCC CHECKDB with a repair option, you should inspect the referential integrity of the database using DBCC CHECKCONSTRAINTS. This checks the integrity of a specified constraint, all constraints in a table, or all constraints in the database.
o ALTER DATABASE DatabaseName MODIFY FILE
o (NAME = NameFile, FILEGROWTH = 40MB); -- or FILEGROWTH = 40%
o AUTOGROW_ALL_FILES
▪ If any file in a filegroup meets the autogrow threshold, all files in the filegroup will
grow.
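▪ A minimal sketch of enabling it for a filegroup (the database name is hypothetical):
- ALTER DATABASE MyDatabase
- MODIFY FILEGROUP [PRIMARY] AUTOGROW_ALL_FILES;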
o EXEC sp_spaceused ;
o SELECT * FROM sys.database_files ;
• To view the number of pages used, as well as total free space, for a particular database, you can use sys.dm_db_file_space_usage.
▪ Returns space usage information for each data file in the database.
• You can also go to Reports – Standard Reports – Disk Usage on Azure VM.
o AUTO_CLOSE ON/OFF
▪ Whether the database is shut down after the last user exits.
o AUTO_CREATE_STATISTICS ON/OFF
▪ Creates statistics on single columns in query predicates, to improve query plans and
performance.
o AUTO_UPDATE_STATISTICS[_ASYNC]
▪ The Query Optimizer updates statistics when they are used by a query and might be out-of-date, after insert/update/delete/merge operations change the data distribution. The _ASYNC variant updates them asynchronously, so the triggering query does not wait.
o AUTO_SHRINK ON/OFF
▪ Shrinks when more than 25% of the file contains unused space. Recommended to
leave OFF.
o READ_ONLY / READ_WRITE
▪ Whether users can only read from the database (not modify it).
o SINGLE_USER / RESTRICTED_USER / MULTI_USER
▪ Only one user at a time; or only members of the db_owner fixed database role and the dbcreator and sysadmin fixed server roles (any number); or all users which have appropriate permissions.
o RECOVERY FULL / BULK_LOGGED / SIMPLE
▪ Changes the recovery option. FULL uses transaction log backups. BULK_LOGGED only minimally logs certain large-scale (bulk) operations. SIMPLE does not allow transaction log backups (only full and differential backups).
o COMPATIBILITY_LEVEL = 100 (SQL Server 2008 and R2), 110, 120, 130, 140, 150 (SQL Server
2019)
▪ e.g. (with a hypothetical database name): ALTER DATABASE MyDatabase SET COMPATIBILITY_LEVEL = 150 ;
▪ GO
o BACKUP DATABASE and BACKUP LOG permissions are already granted in the sysadmin fixed server role, and the db_owner and db_backupoperator fixed database roles.
o TO MyPreviouslyCreatedNamedBackupDevice
o NORECOVERY, NO_TRUNCATE
▪ NORECOVERY backs up the tail of the log and leaves the database in the RESTORING state.
- Useful when failing over to a secondary database or when saving the tail
before a RESTORE.
o GO
• Use:
o FROM MyPreviouslyCreatedNamedBackupDevice
o [FILE = BackupSetFileNumber]
▪ NORECOVERY is useful when you are restoring a single file, but you need to restore
more.
▪ Use RECOVERY when you have finished restoring, and you want the database to be
online.
• For example:
o RESTORE … WITH FILE = 6, NORECOVERY, STOPAT = 'Jun 19, 2024 12:00 PM';
• You can only use T-SQL in an MI when doing a complete restore from an Azure Blob Storage Account:
o WITH COPY_ONLY
o [COMPRESSION | NO_COMPRESSION]
o [STATS = X]
• This is for VMs (and MIs if using COPY_ONLY). The syntax is:
o TO MyPreviouslyCreatedNamedBackupDevice
o [MIRROR TO AnotherBackupDevice]
o [WITH
▪ COPY_ONLY
- Creating a full backup, but is not treated as a full backup for purposes of
future DIFFERENTIAL or TRANSACTION LOG backups.
▪ DIFFERENTIAL
▪ COMPRESSION | NO_COMPRESSION
▪ CREDENTIAL
▪ ENCRYPTION
- If you encrypt, you will also need to use SERVER CERTIFICATE or SERVER
ASYMMETRIC KEY.
▪ FILE_SNAPSHOT
- Used when creating a snapshot of the database files and storing them in Azure Blobs.
• [WITH
o NOINIT | INIT
▪ Whether the backup operation appends to/overwrites the existing backup sets on
the backup media. The default is NOINIT (append).
o NOSKIP | SKIP
▪ Whether the backup operation checks the expiration date and time of the backup sets on the media before overwriting them. The default is NOSKIP (check the date/time).
o NOFORMAT | FORMAT
▪ Whether the media header should be written on the volumes used for the backup
operation, overwriting any existing media header and backup sets. The default is
NOFORMAT.
- Be careful when using FORMAT, as it renders the entire media set unusable.
o NO_CHECKSUM | CHECKSUM
▪ Whether backup checksums are enabled – this validates the backup. The default is
NO_CHECKSUM (no generation of backup checksums).
o STOP_ON_ERROR | CONTINUE_AFTER_ERROR
▪ Whether the backup stops after a checksum error. The default is STOP_ON_ERROR.
o STATS = X
▪ Reports progress every X percent. The default is roughly every 10%.
o REWIND | NOREWIND
o UNLOAD | NOUNLOAD
▪ Tape-only options – whether to rewind the tape, and whether to unload it, when finished.
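• For example, a minimal sketch combining some of these options (the database name and path are hypothetical):
o BACKUP DATABASE MyDatabase
o TO DISK = N'D:\Backups\MyDatabase.bak'
o WITH NOINIT, CHECKSUM, STATS = 10, COMPRESSION;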
• [WITH
▪ NORECOVERY
- Backs up the tail of the log and leaves the database in the RESTORING
state. Useful when failing over to a secondary database or when saving the
tail of the log before a RESTORE operation.
▪ STANDBY
- Backs up the tail of the log and leaves the database in a read-only and STANDBY state.
▪ NO_TRUNCATE
- The log is not truncated, and SQL Server attempts the backup regardless of the state of the database.
- If this is not used, the private key is encrypted using the database master
key.
▪ EXPIRY_DATE = ‘20291231’;
- You can also have a START_DATE (in UTC). If not specified, START_DATE
defaults to current date, and EXPIRY_DATE (UTC) is one year after
START_DATE.
o GO
o The Azure Key Vault can store customer-managed certificates ("Bring your own Key – BYOK")
• To restore a previously-created certificate, you can also use CREATE CERTIFICATE with FILE = 'path'
o Azure SQL Database does not support creating a certificate from a file or using private key
files.
o You can change the password, but not the SUBJECT or DATEs.
▪ CREATE LOGIN [login_name] FROM EXTERNAL PROVIDER -- the last 3 words indicate
Azure AD.
o To check
o To create a user:
o You can create logins in the master database, then create a user based on the login, but it is better practice to do the above:
▪ [In Master]
CREATE LOGIN demo WITH PASSWORD = 'Pa55.w.rd'
- To check
▪ [In database]
CREATE USER demo FROM LOGIN demo
• To check users:
• To grant permissions:
▪ For example:
GRANT SELECT ON OBJECT::Region TO Ted [WITH GRANT OPTION];
o PERMISSION can be
▪ For schema, ALTER permission on a schema is wide-ranging. You can alter, create or
drop any securable in that schema. However, you cannot change ownership.
- For tables and views, ALL means DELETE, INSERT, REFERENCES, SELECT, and
UPDATE.
o The optional [WITH GRANT OPTION] allows you to grant that permission to others.
• To check permissions:
o or if sysadmin in MI or VM:
o For example:
o PERMISSION is:
▪ ALTER ANY [Server_Securable] – CREATE, ALTER and DROP things such as LOGIN.
▪ DELETE/INSERT/SELECT/UPDATE
▪ CREATE Server-/Database-/Schema-Securable.
▪ All permissions
▪ Specific database.
▪ Specific object.
o You need ALTER permission on the role, or ALTER ANY ROLE on the database, or
db_securityadmin or db_owner.