20765C ENU TrainerHandbook
Provisioning SQL Databases
Information in this document, including URL and other Internet Web site references, is subject to change
without notice. Unless otherwise noted, the example companies, organizations, products, domain names,
e-mail addresses, logos, people, places, and events depicted herein are fictitious, and no association with
any real company, organization, product, domain name, e-mail address, logo, person, place or event is
intended or should be inferred. Complying with all applicable copyright laws is the responsibility of the
user. Without limiting the rights under copyright, no part of this document may be reproduced, stored in
or introduced into a retrieval system, or transmitted in any form or by any means (electronic, mechanical,
photocopying, recording, or otherwise), or for any purpose, without the express written permission of
Microsoft Corporation.
Microsoft may have patents, patent applications, trademarks, copyrights, or other intellectual property
rights covering subject matter in this document. Except as expressly provided in any written license
agreement from Microsoft, the furnishing of this document does not give you any license to these
patents, trademarks, copyrights, or other intellectual property.
The names of manufacturers, products, or URLs are provided for informational purposes only and
Microsoft makes no representations and warranties, either expressed, implied, or statutory, regarding
these manufacturers or the use of the products with any Microsoft technologies. The inclusion of a
manufacturer or product does not imply endorsement of Microsoft of the manufacturer or product. Links
may be provided to third party sites. Such sites are not under the control of Microsoft and Microsoft is not
responsible for the contents of any linked site or any link contained in a linked site, or any changes or
updates to such sites. Microsoft is not responsible for webcasting or any other form of transmission
received from any linked site. Microsoft is providing these links to you only as a convenience, and the
inclusion of any link does not imply endorsement of Microsoft of the site or the products contained
therein.
© 2018 Microsoft Corporation. All rights reserved.
Released: 01/2018
MICROSOFT LICENSE TERMS
MICROSOFT INSTRUCTOR-LED COURSEWARE
These license terms are an agreement between Microsoft Corporation (or based on where you live, one of its
affiliates) and you. Please read them. They apply to your use of the content accompanying this agreement which
includes the media on which you received it, if any. These license terms also apply to Trainer Content and any
updates and supplements for the Licensed Content unless other terms accompany those items. If so, those terms
apply.
BY ACCESSING, DOWNLOADING OR USING THE LICENSED CONTENT, YOU ACCEPT THESE TERMS.
IF YOU DO NOT ACCEPT THEM, DO NOT ACCESS, DOWNLOAD OR USE THE LICENSED CONTENT.
If you comply with these license terms, you have the rights below for each license you acquire.
1. DEFINITIONS.
a. “Authorized Learning Center” means a Microsoft IT Academy Program Member, Microsoft Learning
Competency Member, or such other entity as Microsoft may designate from time to time.
b. “Authorized Training Session” means the instructor-led training class using Microsoft Instructor-Led
Courseware conducted by a Trainer at or through an Authorized Learning Center.
c. “Classroom Device” means one (1) dedicated, secure computer that an Authorized Learning Center owns
or controls that is located at an Authorized Learning Center’s training facilities that meets or exceeds the
hardware level specified for the particular Microsoft Instructor-Led Courseware.
d. “End User” means an individual who is (i) duly enrolled in and attending an Authorized Training Session
or Private Training Session, (ii) an employee of a MPN Member, or (iii) a Microsoft full-time employee.
e. “Licensed Content” means the content accompanying this agreement which may include the Microsoft
Instructor-Led Courseware or Trainer Content.
f. “Microsoft Certified Trainer” or “MCT” means an individual who is (i) engaged to teach a training session
to End Users on behalf of an Authorized Learning Center or MPN Member, and (ii) currently certified as a
Microsoft Certified Trainer under the Microsoft Certification Program.
g. “Microsoft Instructor-Led Courseware” means the Microsoft-branded instructor-led training course that
educates IT professionals and developers on Microsoft technologies. A Microsoft Instructor-Led
Courseware title may be branded as MOC, Microsoft Dynamics or Microsoft Business Group courseware.
h. “Microsoft IT Academy Program Member” means an active member of the Microsoft IT Academy
Program.
i. “Microsoft Learning Competency Member” means an active member of the Microsoft Partner Network
program in good standing that currently holds the Learning Competency status.
j. “MOC” means the “Official Microsoft Learning Product” instructor-led courseware known as Microsoft
Official Course that educates IT professionals and developers on Microsoft technologies.
k. “MPN Member” means an active Microsoft Partner Network program member in good standing.
l. “Personal Device” means one (1) personal computer, device, workstation or other digital electronic device
that you personally own or control that meets or exceeds the hardware level specified for the particular
Microsoft Instructor-Led Courseware.
m. “Private Training Session” means the instructor-led training classes provided by MPN Members for
corporate customers to teach a predefined learning objective using Microsoft Instructor-Led Courseware.
These classes are not advertised or promoted to the general public and class attendance is restricted to
individuals employed by or contracted by the corporate customer.
n. “Trainer” means (i) an academically accredited educator engaged by a Microsoft IT Academy Program
Member to teach an Authorized Training Session, and/or (ii) a MCT.
o. “Trainer Content” means the trainer version of the Microsoft Instructor-Led Courseware and additional
supplemental content designated solely for Trainers’ use to teach a training session using the Microsoft
Instructor-Led Courseware. Trainer Content may include Microsoft PowerPoint presentations, trainer
preparation guide, train the trainer materials, Microsoft One Note packs, classroom setup guide and Pre-
release course feedback form. To clarify, Trainer Content does not include any software, virtual hard
disks or virtual machines.
2. USE RIGHTS. The Licensed Content is licensed, not sold. The Licensed Content is licensed on a one copy
per user basis, such that you must acquire a license for each individual that accesses or uses the Licensed
Content.
2.1 Below are five separate sets of use rights. Only one set of rights applies to you.
2.2 Separation of Components. The Licensed Content is licensed as a single unit and you may not
separate its components and install them on different devices.
2.3 Redistribution of Licensed Content. Except as expressly provided in the use rights above, you may
not distribute any Licensed Content or any portion thereof (including any permitted modifications) to any
third parties without the express written permission of Microsoft.
2.4 Third Party Notices. The Licensed Content may include third party code content that Microsoft, not the
third party, licenses to you under this agreement. Notices, if any, for the third party code content are included
for your information only.
2.5 Additional Terms. Some Licensed Content may contain components with additional terms,
conditions, and licenses regarding its use. Any non-conflicting terms in those conditions and licenses also
apply to your use of that respective component and supplements the terms described in this agreement.
a. Pre-Release Licensed Content. This Licensed Content subject matter is on the Pre-release version of
the Microsoft technology. The technology may not work the way a final version of the technology will
and we may change the technology for the final version. We also may not release a final version.
Licensed Content based on the final version of the technology may not contain the same information as
the Licensed Content based on the Pre-release version. Microsoft is under no obligation to provide you
with any further content, including any Licensed Content based on the final version of the technology.
b. Feedback. If you agree to give feedback about the Licensed Content to Microsoft, either directly or
through its third party designee, you give to Microsoft without charge, the right to use, share and
commercialize your feedback in any way and for any purpose. You also give to third parties, without
charge, any patent rights needed for their products, technologies and services to use or interface with
any specific parts of a Microsoft technology, Microsoft product, or service that includes the feedback.
You will not give feedback that is subject to a license that requires Microsoft to license its technology,
technologies, or products to third parties because we include your feedback in them. These rights
survive this agreement.
c. Pre-release Term. If you are a Microsoft IT Academy Program Member, Microsoft Learning
Competency Member, MPN Member or Trainer, you will cease using all copies of the Licensed Content on
the Pre-release technology upon (i) the date which Microsoft informs you is the end date for using the
Licensed Content on the Pre-release technology, or (ii) sixty (60) days after the commercial release of the
technology that is the subject of the Licensed Content, whichever is earliest (“Pre-release term”).
Upon expiration or termination of the Pre-release term, you will irretrievably delete and destroy all copies
of the Licensed Content in your possession or under your control.
4. SCOPE OF LICENSE. The Licensed Content is licensed, not sold. This agreement only gives you some
rights to use the Licensed Content. Microsoft reserves all other rights. Unless applicable law gives you more
rights despite this limitation, you may use the Licensed Content only as expressly permitted in this
agreement. In doing so, you must comply with any technical limitations in the Licensed Content that only
allows you to use it in certain ways. Except as expressly permitted in this agreement, you may not:
• access or allow any individual to access the Licensed Content if they have not acquired a valid license
for the Licensed Content,
• alter, remove or obscure any copyright or other protective notices (including watermarks), branding
or identifications contained in the Licensed Content,
• modify or create a derivative work of any Licensed Content,
• publicly display, or make the Licensed Content available for others to access or use,
• copy, print, install, sell, publish, transmit, lend, adapt, reuse, link to or post, make available or
distribute the Licensed Content to any third party,
• work around any technical limitations in the Licensed Content, or
• reverse engineer, decompile, remove or otherwise thwart any protections or disassemble the
Licensed Content except and only to the extent that applicable law expressly permits, despite this
limitation.
5. RESERVATION OF RIGHTS AND OWNERSHIP. Microsoft reserves all rights not expressly granted to
you in this agreement. The Licensed Content is protected by copyright and other intellectual property laws
and treaties. Microsoft or its suppliers own the title, copyright, and other intellectual property rights in the
Licensed Content.
6. EXPORT RESTRICTIONS. The Licensed Content is subject to United States export laws and regulations.
You must comply with all domestic and international export laws and regulations that apply to the Licensed
Content. These laws include restrictions on destinations, end users and end use. For additional information,
see www.microsoft.com/exporting.
7. SUPPORT SERVICES. Because the Licensed Content is “as is”, we may not provide support services for it.
8. TERMINATION. Without prejudice to any other rights, Microsoft may terminate this agreement if you fail
to comply with the terms and conditions of this agreement. Upon termination of this agreement for any
reason, you will immediately stop all use of and delete and destroy all copies of the Licensed Content in
your possession or under your control.
9. LINKS TO THIRD PARTY SITES. You may link to third party sites through the use of the Licensed
Content. The third party sites are not under the control of Microsoft, and Microsoft is not responsible for
the contents of any third party sites, any links contained in third party sites, or any changes or updates to
third party sites. Microsoft is not responsible for webcasting or any other form of transmission received
from any third party sites. Microsoft is providing these links to third party sites to you only as a
convenience, and the inclusion of any link does not imply an endorsement by Microsoft of the third party
site.
10. ENTIRE AGREEMENT. This agreement, and any additional terms for the Trainer Content, updates and
supplements are the entire agreement for the Licensed Content, updates and supplements.
12. LEGAL EFFECT. This agreement describes certain legal rights. You may have other rights under the laws
of your country. You may also have rights with respect to the party from whom you acquired the Licensed
Content. This agreement does not change your rights under the laws of your country if the laws of your
country do not permit it to do so.
13. DISCLAIMER OF WARRANTY. THE LICENSED CONTENT IS LICENSED "AS-IS" AND "AS
AVAILABLE." YOU BEAR THE RISK OF USING IT. MICROSOFT AND ITS RESPECTIVE
AFFILIATES GIVES NO EXPRESS WARRANTIES, GUARANTEES, OR CONDITIONS. YOU MAY
HAVE ADDITIONAL CONSUMER RIGHTS UNDER YOUR LOCAL LAWS WHICH THIS AGREEMENT
CANNOT CHANGE. TO THE EXTENT PERMITTED UNDER YOUR LOCAL LAWS, MICROSOFT AND
ITS RESPECTIVE AFFILIATES EXCLUDES ANY IMPLIED WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT.
14. LIMITATION ON AND EXCLUSION OF REMEDIES AND DAMAGES. YOU CAN RECOVER FROM
MICROSOFT, ITS RESPECTIVE AFFILIATES AND ITS SUPPLIERS ONLY DIRECT DAMAGES UP
TO US$5.00. YOU CANNOT RECOVER ANY OTHER DAMAGES, INCLUDING CONSEQUENTIAL,
LOST PROFITS, SPECIAL, INDIRECT OR INCIDENTAL DAMAGES.
It also applies even if Microsoft knew or should have known about the possibility of the damages. The
above limitation or exclusion may not apply to you because your country may not allow the exclusion or
limitation of incidental, consequential or other damages.
Please note: As this Licensed Content is distributed in Quebec, Canada, some of the clauses in this
agreement are provided below in French.
Remarque : Ce contenu sous licence étant distribué au Québec, Canada, certaines des clauses
dans ce contrat sont fournies ci-dessous en français.
EXONÉRATION DE GARANTIE. Le contenu sous licence visé par une licence est offert « tel quel ». Toute
utilisation de ce contenu sous licence est à votre seule risque et péril. Microsoft n’accorde aucune autre garantie
expresse. Vous pouvez bénéficier de droits additionnels en vertu du droit local sur la protection des
consommateurs, que ce contrat ne peut modifier. Là où elles sont permises par le droit local, les garanties
implicites de qualité marchande, d’adéquation à un usage particulier et d’absence de contrefaçon sont exclues.
EFFET JURIDIQUE. Le présent contrat décrit certains droits juridiques. Vous pourriez avoir d’autres droits
prévus par les lois de votre pays. Le présent contrat ne modifie pas les droits que vous confèrent les lois de votre
pays si celles-ci ne le permettent pas.
Acknowledgements
Microsoft Learning would like to acknowledge and thank the following for their contribution towards
developing this title. Their effort at various stages in the development has ensured that you have a good
classroom experience.
Contents
Module 1: SQL Server Components
Module Overview 1-1
Lesson 1: Introduction to the SQL Server Platform 1-2
Lesson 3: Side-by-Side Upgrade: Migrating SQL Server Data and Applications 3-20
Lesson 1: SQL Server on Virtual Machines and Azure SQL Database 7-2
Course Description
Note: This version of the course has been updated to reflect the changes in SQL Server
2017
This five-day instructor-led course provides students with the knowledge and skills to provision a
Microsoft SQL Server database. The course covers SQL Server provisioning both on-premises and in Azure,
and covers both new installations and migrations from existing installations.
Audience
The primary audience for this course is individuals who administer and maintain SQL Server databases.
These individuals perform database administration and maintenance as their primary area of
responsibility, or work in environments where databases play a key role in their primary job.
The secondary audiences for this course are individuals who develop applications that deliver content
from SQL Server databases.
Student Prerequisites
This course requires that you meet the following prerequisites:
Basic knowledge of the Microsoft Windows operating system and its core functionality.
These prerequisites can be achieved by attending course 20761A – Querying Data with Transact-SQL.
Course Objectives
After completing this course, students will be able to:
Course Outline
The course outline is as follows:
Course Materials
The following materials are included with your kit:
Course Handbook: a succinct classroom learning guide that provides the critical technical
information in a crisp, tightly-focused format, which is essential for an effective in-class learning
experience.
o Lessons: guide you through the learning objectives and provide the key points that are critical to
the success of the in-class learning experience.
o Labs: provide a real-world, hands-on platform for you to apply the knowledge and skills learned
in the module.
o Module Reviews and Takeaways: provide on-the-job reference material to boost knowledge
and skills retention.
o Lab Answer Keys: provide step-by-step lab solution guidance.
Modules: include companion content, such as questions and answers, detailed demo steps and
additional reading links, for each lesson. Additionally, they include Lab Review questions and answers
and Module Reviews and Takeaways sections, which contain the review questions and answers, best
practices, common issues and troubleshooting tips with answers, and real-world issues and scenarios
with answers.
Resources: include well-categorized additional resources that give you immediate access to the most
current premium content on TechNet, MSDN®, or Microsoft® Press®.
Course evaluation: at the end of the course, you will have the opportunity to complete an online
evaluation to provide feedback on the course, training facility, and instructor.
Note: At the end of each lab, you must revert the virtual machines to a snapshot. You can
find the instructions for this procedure at the end of each lab.
The following table shows the role of each virtual machine that is used in this course:
Software Configuration
The following software is installed:
Course Files
The files associated with the labs in this course are located in the D:\Labfiles folder on the 20765C-MIA-
SQL virtual machine.
Classroom Setup
Each classroom computer will have the same virtual machine configured in the same way.
Hardware Level 6+
8GB or higher
DVD drive
Network adapter with Internet connectivity
*Striped
In addition, the instructor computer must be connected to a projection display device that supports SVGA
1024 x 768 pixels, 16 bit colors.
Module 1
SQL Server Components
Contents:
Module Overview 1-1
Lesson 1: Introduction to the SQL Server Platform 1-2
Module Overview
This module introduces the Microsoft® SQL Server® platform. It describes the components, editions, and
versions of SQL Server, and the tasks that a database administrator commonly performs to configure a
SQL Server instance.
Objectives
At the end of this module, you should be able to:
Lesson 1
Introduction to the SQL Server Platform
As a DBA, it is important to be familiar with the database management system used to store your data.
SQL Server is a platform for developing business applications that are data focused. Rather than being a
single monolithic application, SQL Server is structured as a series of components. It is important to
understand how you use each of these.
You can install more than one copy of SQL Server on a server. Each of these is referred to as an instance
and can be configured and managed independently.
SQL Server ships in a variety of editions, each with a particular set of capabilities for different scenarios. It’s
important to understand the target business cases for each of the SQL Server editions—and how the
evolution, through a series of improving versions over many years, results in today’s stable and robust
platform.
Lesson Objectives
After completing this lesson, you will be able to:
Explain the role of each component that makes up the SQL Server platform.
Describe the functionality that SQL Server instances provide.
Database Engine: The SQL Server database engine is the heart of the SQL Server platform. It provides a
high-performance, scalable relational database engine based on the SQL language that can be used to host
online transaction processing (OLTP) databases for business applications and data warehouse solutions.
SQL Server 2017 also includes a memory-optimized database engine that uses in-memory technology to
improve performance for high-volume, short-running transactions.

Analysis Services: SQL Server Analysis Services (SSAS) is an online analytical processing (OLAP) engine that
works with analytic cubes and tables. It’s used to implement enterprise BI solutions for data analysis and
data mining.

Integration Services: SQL Server Integration Services (SSIS) is an extract, transform, and load (ETL) platform
tool for orchestrating the movement of data in both directions between SQL Server components and
external systems.

Reporting Services: SQL Server Reporting Services (SSRS) is a reporting engine, based on web services, that
provides a web portal and end-user reporting tools. It can be installed in native mode or integrated with
Microsoft SharePoint® Server.

Master Data Services: SQL Server Master Data Services (MDS) provides tooling and a hub for managing
master or reference data.

Machine Learning Services: This feature enables the creation of distributed machine learning solutions using
a range of enterprise data sources. This feature was first introduced in SQL Server 2016 supporting the R
language only, but in SQL Server 2017 it supports both R and Python.

Data Quality Services: SQL Server Data Quality Services (DQS) is a knowledge-driven data quality tool for
data cleansing and matching.

StreamInsight: SQL Server StreamInsight provides a platform for building applications that perform complex
event processing for streams of real-time data.

Full-Text Search: Full-Text Search is a feature of the database engine that provides a sophisticated semantic
search facility for text-based data.

Replication: The SQL Server database engine includes Replication, a set of technologies for synchronizing
data between servers to meet data distribution needs.

PowerPivot for SharePoint: PowerPivot for SharePoint is a specialized implementation of SQL Server Analysis
Services that you can install in a Microsoft SharePoint Server farm to enable tabular data modeling in shared
Microsoft Excel® workbooks. PowerPivot is also available natively in Excel.

Power View for SharePoint: Power View for SharePoint is a component of SQL Server Reporting Services
when using SharePoint-integrated mode. It provides an interactive data exploration, visualization, and
presentation experience that encourages intuitive, impromptu reporting. Power View is also available
natively in Excel.

PolyBase: PolyBase is an extension to the database engine with which you can query distributed datasets
held in Hadoop or Microsoft Azure® Blob storage from Transact-SQL statements.
Additional instances of SQL Server require an instance name that you use in conjunction with the server
name, and are known as named instances. If you want all your instances to be named instances, you simply
provide a name for each one when you install them. Not all components of SQL Server can be installed in
more than one instance. To access a named instance, client applications use the address
ServerName\InstanceName.
For example, a named instance called Test on a Windows server called APPSERVER1 could be addressed
as APPSERVER1\Test. The default instance on the same APPSERVER1 server could be addressed as
APPSERVER1 or APPSERVER1\MSSQLSERVER.
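From an existing connection, you can confirm which instance you are connected to with a short Transact-SQL
query. The following is a minimal sketch (the APPSERVER1\Test values in the comments are illustrative);
SERVERPROPERTY('InstanceName') returns NULL when you are connected to the default instance:

SELECT @@SERVERNAME AS FullServerName,                  -- for example, APPSERVER1\Test
       SERVERPROPERTY('MachineName') AS MachineName,    -- for example, APPSERVER1
       SERVERPROPERTY('InstanceName') AS InstanceName;  -- for example, Test (NULL for the default instance)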
There is no need to install SQL Server tools and utilities more than once on a server. You can use a single
installation of the tools to manage and configure all instances.
Principal Editions
Edition Business Use Case
Specialized Editions
Specialized editions are targeted at specific classes of business workloads.
Web: Provides a secure, cost-effective, and scalable platform for public websites, website service providers,
and applications.
Breadth Editions
Breadth editions are intended for specific customer scenarios and are offered free or at low cost.
Developer: You can build, test, and demonstrate all SQL Server functionality.
Express: Provides a free, entry-level edition suitable for learning and lightweight desktop or web
applications. SQL Server Express can be seamlessly upgraded to other editions of SQL Server.
Cloud Editions
Microsoft Azure SQL Database: You can build database applications on a scalable and robust cloud
platform.
SQL Server Parallel Data Warehouse uses massively parallel processing (MPP) to execute queries against
vast amounts of data quickly. Parallel Data Warehouse systems are sold as a complete hardware and
software “appliance” rather than through standard software licenses.
For more information, see the Editions and supported features of SQL Server 2017 topic online at:
Early Versions
The earliest versions (1.0 and 1.1) were based on the
OS/2 operating system. SQL Server 4.2 and later
moved to the Microsoft Windows operating system,
initially on Windows NT.
SQL Server 2000 featured support for multiple instances and collations. It also introduced support for data
mining. After the product release, SQL Server Reporting Services (SSRS) was introduced as an add-on
enhancement to the product, along with support for 64-bit processors.
SQL Server Management Studio (SSMS) was released to replace several previous administrative tools.
The Transact-SQL language was substantially enhanced, including structured exception handling.
Dynamic Management Views (DMVs) and Dynamic Management Functions (DMFs) were introduced
for detailed health monitoring, performance tuning, and troubleshooting.
High availability improvements were included in the product; in particular, database mirroring was
introduced.
Specialized date- and time-related data types were introduced, including support for time zones
within date and time data.
Full-text indexing was integrated directly within the database engine. (Previously, full-text indexing
was based on interfaces to the operating system level services.)
A policy-based management framework was introduced to assist with a move to more declarative-
based management practices, rather than reactive practices.
Support for managing reference data was provided with the introduction of Master Data Services.
StreamInsight provided the ability to query data that is arriving at high speed, before storing it in a
database.
The introduction of tabular data models into SQL Server Analysis Services (SSAS).
Memory-optimized tables.
Updatable columnstore indexes (previously, columnstore indexes could be created but not updated).
SQL Server 2016 enhances the features in SQL Server 2014. These enhancements include:
New Query Store feature for storing query texts, execution plans, and performance metrics. This
feature enables you to identify which queries are consuming the most resources through a
dashboard (a simple example of enabling it appears after this list).
Temporal tables that record the history of data changes, along with date and time stamps.
Polybase query engine to enable integration of SQL Server with external data sources such as Hadoop
or Azure Blob storage.
Stretch Database feature that enables you to dynamically and securely archive data from a local SQL
Server database to an Azure SQL database in Microsoft Azure.
New security features, such as Always Encrypted, dynamic data masking, and row-level security.
SQL Server 2017 further enhances the platform. These enhancements include:
The ability to run the SQL Server database engine on Linux and in Docker containers.
Adaptive query processing, in which query plans automatically optimize themselves on future runs.
Enhancements to machine learning, with support for Python and open source packages (such as
TensorFlow).
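As a simple illustration of the Query Store feature listed above, the following Transact-SQL sketch (which
assumes a database named SalesDB exists; the name is illustrative) enables the Query Store for a database
and then checks its state:

ALTER DATABASE SalesDB SET QUERY_STORE = ON;            -- enable the Query Store for the database
SELECT actual_state_desc, readonly_reason
FROM SalesDB.sys.database_query_store_options;          -- confirm that the Query Store is active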
This demonstration shows five methods to identify the edition and version of a running SQL Server instance.
Demonstration Steps
1. Ensure that the 20765C-MIA-DC and 20765C-MIA-SQL virtual machines are running and log on to
20765C-MIA-SQL as ADVENTUREWORKS\Student with the password Pa55w.rd.
2. On the taskbar, click the Microsoft SQL Server Management Studio 17 shortcut.
3. In the Connect to Server dialog box, click Connect.
6. In Object Explorer, point to the server name (MIA-SQL) and show that the version number is in
parentheses after the server name.
7. In Object Explorer, right-click the server name MIA-SQL, and then click Properties.
8. On the General page, note that the Product and Version properties are visible, and then click Cancel.
9. Start File Explorer. Navigate to C:\Program Files\Microsoft SQL
Server\MSSQL14.MSSQLSERVER\MSSQL\Log.
12. In the How do you want to open this file? dialog box, click Notepad, and then click OK.
13. The first entry in the file displays the version name, version number and edition, amongst other
information. Close Notepad.
14. In SQL Server Management Studio, select the code under the comment Method 4, and then click
Execute.
15. Select the code under the comment Method 5, and then click Execute.
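The Transact-SQL methods used in the demonstration are not reproduced here in full, but queries along the
following lines (a sketch only; the exact scripts ship with the demonstration files) return the same version and
edition information:

-- Returns the full version string, including edition and build number.
SELECT @@VERSION;

-- Returns individual properties of the running instance.
SELECT SERVERPROPERTY('ProductVersion') AS ProductVersion,
       SERVERPROPERTY('ProductLevel') AS ProductLevel,
       SERVERPROPERTY('Edition') AS Edition;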
Categorize Activity
Place each piece of SQL Server terminology into the appropriate category. Indicate your answer by writing
the category number to the right of each item.
Items
Lesson 2
Overview of SQL Server Architecture
Before you start looking at the resources that SQL Server requires from the underlying server platform,
you need to know how SQL Server functions, so you can understand why each resource requirement
exists. To interpret SQL Server documentation, you also need to become familiar with some of the
terminology used when describing how the product functions.
The most important resources used by SQL Server from a server platform are CPU, memory, and I/O. In
this lesson, you will see how SQL Server is structured and how it uses these resources.
Lesson Objectives
After completing this lesson, you will be able to:
The parser checks that you have followed the rules of the Transact-SQL language and outputs a
syntax tree, which is a simplified representation of the queries to be executed. The parser outputs
what you want to achieve in your queries.
The algebrizer converts the syntax tree into a relational algebra tree, where operations are
represented by logic objects, rather than words. The aim of this phase is to take the list of what you
want to achieve, and convert it to a logical series of operations representing the work that needs to
be performed.
The query optimizer then considers the different ways that your queries could be executed and finds
an acceptable plan. Costs are based on the required operation and the volume of data that needs to
be processed—this is calculated taking into account the distribution statistics. For example, the query
optimizer considers the data that needs to be retrieved from a table and the indexes available on the
table, to decide how to access data in the table. It is important to realize that the query optimizer
does not always look for the lowest cost plan because, in some situations, this might take too long.
Instead, the query optimizer finds a plan that it considers satisfactory. The query optimizer also
manages a query plan cache to avoid the overhead of performing all this work, when another similar
query is received for execution.
The page cache manages the storage of cached copies of data pages. Caching of data pages is used
to minimize the time it takes to access them. The page cache places data pages into memory, so they
are present when needed for query execution.
The locking and transaction management components work together to maintain consistency of your
data. This includes the maintenance of transactional integrity, with the help of the database log file.
SQLOS Layer
SQL Server Operating System (SQLOS) is the layer of SQL Server that provides operating system
functionality to the SQL Server components. All SQL Server components use programming interfaces
provided by SQLOS to access memory, schedule tasks, or perform I/O.
The abstraction layer provided by SQLOS avoids the need for resource-related code to be present
throughout the SQL Server database engine code. The most important functions provided by this layer
are memory management and scheduling. These two aspects are discussed in more detail later in this
lesson.
When a SQL Server component needs to execute code, the component creates a task that represents the
unit of work to be done. For example, if you send a batch of Transact-SQL commands to the server, it’s
likely that the batch will be executed within a task.
When a SQL Server component creates a task, it is assigned the next available worker thread that is not in
use. If no worker threads are available, SQL Server will try to retrieve another Windows thread, up to the
point that the max worker threads configuration limit is reached. At that point, the new task would need
to wait to get a worker thread. All tasks are arranged by the SQL Server scheduler until they are complete.
Affinity Mask
Schedulers can be enabled and disabled by setting the CPU affinity mask on the instance. The affinity
mask is a configurable bitmap that determines which CPUs from the host system should be used for SQL
Server—and can be changed without needing to reboot. By default, SQL Server will assume that it can use
all CPUs on the host system. While you can configure the affinity mask bitmap directly by using
sp_configure, this method is marked for deprecation in a future version of SQL Server. Use the Properties
dialog box for the server instance in SQL Server Management Studio (SSMS) to modify processor affinity.
Processor affinity can be modified at the level of individual processors, or at the level of NUMA nodes.
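Processor affinity can also be set with Transact-SQL. The following is a sketch only (the CPU numbers are
illustrative); it avoids the deprecated affinity mask option by using ALTER SERVER CONFIGURATION:

-- Bind the instance's schedulers to CPUs 0 through 3.
ALTER SERVER CONFIGURATION SET PROCESS AFFINITY CPU = 0 TO 3;

-- Revert to the default behavior of using all available CPUs.
ALTER SERVER CONFIGURATION SET PROCESS AFFINITY CPU = AUTO;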
Note: Whenever the term CPU is used here in relation to SQL Server internal architecture, it
refers to any logical CPU, regardless of whether core or hyper-threading CPUs are being used.
When a task needs to wait for a resource, it is placed on a list until the resource is available; the task is
then signaled that it can continue, though it still needs to wait for another share of CPU time. This
allocation of CPUs to resources is a function of the SQLOS.
SQL Server keeps detailed internal records of how long tasks spend waiting, and of the types of resources
they are waiting for. Wait statistics information can be a useful resource for troubleshooting performance
problems. You can see these details by querying the following system views:
sys.dm_os_waiting_tasks;
sys.dm_os_wait_stats;
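For example, a query such as the following (a minimal sketch) lists the wait types on which the instance has
accumulated the most wait time since the statistics were last cleared:

SELECT TOP (10) wait_type, wait_time_ms, waiting_tasks_count
FROM sys.dm_os_wait_stats
ORDER BY wait_time_ms DESC;   -- wait types with the most accumulated wait time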
Parallelism
All operations run against SQL Server are capable of running sequentially on a single task. To reduce
the overall run time (at the expense of additional CPU time), SQL Server can distribute elements of an
operation over several tasks, so they execute in parallel.
Another configuration value, cost threshold for parallelism, determines the cost that a query must
meet before a parallel query plan will even be considered.
Query cost is determined based on the amount of data the query optimizer anticipates will need to
be read to complete the operation. This information is drawn from table and index statistics.
If a query is expensive enough to consider a parallel plan, SQL Server might still decide to use a sequential
plan that is lower in overall cost.
Controlling Parallelism
The query optimizer only decides whether to create a parallel plan; it is not involved in deciding the
MAXDOP value. This value can be configured at the server level and overridden at the query level via a
query hint. Even if the query optimizer creates a parallel query plan, the execution engine might decide to
use only a single CPU, based on the resources available when it is time to execute the query.
In earlier versions of SQL Server, it was common to disable parallel queries on systems that were primarily
used for transaction processing. This limitation was implemented by adjusting the server setting for
MAXDOP to the value 1. In SQL Server 2016 and later, this is no longer generally considered a good
practice.
A better practice is to raise the value of cost threshold for parallelism so that a parallel plan is only
considered for higher cost queries.
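As a sketch of these settings (the values shown are illustrative, not recommendations), the following
Transact-SQL raises the cost threshold for parallelism at the instance level, and shows a query hint
overriding the instance-level MAXDOP for a single statement:

EXEC sys.sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sys.sp_configure 'cost threshold for parallelism', 50;   -- only costlier queries are considered for parallel plans
RECONFIGURE;

-- Override the instance-level max degree of parallelism for one query.
SELECT COUNT(*) FROM sys.objects OPTION (MAXDOP 1);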
For further information, see the Configure the cost threshold for parallelism Server Configuration Option
topic online:
Free Pages. Pages that are not yet used but are
kept to satisfy new memory requests.
The data cache implements a least recently used (LRU) algorithm to determine candidate pages to be
dropped from the cache as space is needed—after they have been flushed to disk (if necessary) by the
checkpoint process. The process that performs the task of dropping pages is known as the lazy writer—
this performs two core functions. By removing pages from the data cache, the lazy writer attempts to
keep sufficient free space in the buffer cache for SQL Server to operate. The lazy writer also monitors the
overall size of the buffer cache to avoid taking too much memory from the Windows operating system.
SQL Server 2014, SQL Server 2016 and SQL Server 2017 include a feature where the buffer pool can be
extended onto fast physical storage (such as a solid-state drive); this can significantly improve
performance.
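Buffer pool extension is configured with Transact-SQL. The following is a sketch only (the file path and size
are illustrative assumptions and must point to suitable local solid-state storage):

-- Place a buffer pool extension file on fast solid-state storage.
ALTER SERVER CONFIGURATION
SET BUFFER POOL EXTENSION ON (FILENAME = 'S:\SSDCache\BufferPoolExtension.bpe', SIZE = 32 GB);

-- Disable the extension if it is no longer required.
ALTER SERVER CONFIGURATION SET BUFFER POOL EXTENSION OFF;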
Providing SQL Server stays within the target memory, it requests additional memory from Windows when
required. If the target memory value is reached, the memory manager answers requests from components
by freeing up the memory of other components. This can involve evicting pages from caches. You can
control the target memory value by using the min server memory and max server memory instance
configuration options.
It is good practice to reduce the max server memory to a value where a SQL Server instance will not
attempt to consume all the memory available to the host operating system.
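For example, the following sketch (the value is illustrative and depends on the total memory of the host)
caps the instance at 28 GB so that memory remains available for the operating system:

EXEC sys.sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sys.sp_configure 'max server memory (MB)', 28672;   -- 28 GB
RECONFIGURE;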
For more information, see the Server Memory Server Configuration Options topic online:
Server Memory Server Configuration Options
https://aka.ms/Do9z5e
You need to minimize the time taken by each I/O operation that is still required; for example, by
ensuring that SQL Server data files are held on sufficiently fast storage.
One of the major goals of query optimization is to reduce the number of logical I/O operations. The side
effect of this is a reduction in the number of physical I/O operations.
Note: Logical I/O counts can be difficult to interpret as certain operations can cause the counts to be
artificially inflated, due to multiple accesses to the same page. However, in general, lower counts are
better than higher counts.
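You can observe the logical and physical I/O generated by an individual query with SET STATISTICS IO. For
example (a minimal sketch that uses a system catalog view as the query target):

SET STATISTICS IO ON;
SELECT name, object_id FROM sys.objects;   -- the Messages tab reports logical and physical reads
SET STATISTICS IO OFF;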
The overall physical I/O operations occurring on the system can be seen by querying the
sys.dm_io_virtual_file_stats system function. The values returned by this function are cumulative from the
point that the system was last restarted.
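For example, the following query (a sketch) returns cumulative read and write statistics for every database
file on the instance; passing NULL for both parameters returns all databases and all files:

SELECT DB_NAME(vfs.database_id) AS database_name,
       vfs.file_id,
       vfs.num_of_reads,
       vfs.num_of_writes,
       vfs.io_stall_read_ms,
       vfs.io_stall_write_ms
FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
ORDER BY vfs.io_stall_read_ms + vfs.io_stall_write_ms DESC;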
Demonstration Steps
1. Ensure that the 20765C-MIA-DC and 20765C-MIA-SQL virtual machines are running and log on to
20765C-MIA-SQL as ADVENTUREWORKS\Student with the password Pa55w.rd.
6. In Object Explorer, right-click the MIA-SQL server and click Properties. Note the values for
Platform, Memory and Processors.
7. Select the Processors tab, fully expand the tree under the Processor column heading. Note the
setting for Max Worker Threads.
8. Select the Advanced tab and in the Parallelism group, review the default values for Cost Threshold
for Parallelism and for Max Degree of Parallelism.
9. Select the Memory tab and review the default memory configurations. Click Cancel to close the
Server Properties window.
10. Execute the query below Step 5.
11. Close SQL Server Management Studio without saving any changes.
Sequencing Activity
Put the following SQL Server architectural layers in order from highest (closest to the client application) to
lowest (closest to the operating system) by numbering each to indicate the correct order.
Steps
Query Execution
Layer
Storage Engine
SQLOS
Lesson 3
SQL Server Services and Configuration Options
A SQL Server instance consists of several services running on a Windows server. Whether you are
managing the configuration of existing instances or planning to install new instances, it is important to
understand the function of the various SQL Server services, in addition to the tools available to assist in
configuring those services. The SQL Server Configuration Manager is the principal configuration tool you
will learn about in this lesson.
Note: Many topics in this lesson do not apply to an Azure SQL Database. The configuration
and management of service accounts for Azure SQL Database is not exposed to Azure users.
However, these topics apply to SQL Server instances running on virtual machines in the Azure
cloud.
Lesson Objectives
At the end of this lesson you will be able to:
Describe the SQL Server services.
Understand the requirements for service accounts for SQL Server services.
Describe the configuration of SQL Server network ports, aliases, and listeners.
SQL Server Integration Services. This service performs the tasks associated with extract, transform,
and load (ETL) operations. SQL Server includes the new Scale out for SSIS feature that provides high
availability and also supports SQL Server on Linux.
SQL Server Browser. By default, named instances use dynamic TCP port assignment, which means
that, when the SQL Server service starts, they select a TCP port from the available ports. With the SQL
Browser service, you can connect to named SQL Server instances when the connection uses dynamic
port assignment. The service is not required for connections to default SQL Server instances, which
use port 1433, the well-known port number for SQL Server. For connections to named instances that
use a TCP-specific port number, or which specify a port number in the connection string, the SQL
Server Browser service is not required and you can disable it.
SQL Full-Text Filter Daemon Launcher. This service supports the operation of full-text search; this is
used for the indexing and searching of unstructured and semi-structured data.
SQL Server VSS Writer. Use this to back up and restore by using the Windows Volume Shadow Copy
Service (VSS). This service is disabled by default; you should only enable it if you intend to use VSS
backups.
SQL Server PolyBase Engine. This service provides the engine used to query Hadoop and Azure Blob
storage data from the SQL Server database engine.
SQL Server PolyBase Data Movement. This service carries out the transfer of data between the SQL
Server database engine and Hadoop or Azure Blob storage.
SQL Server Launchpad. This service supports the integration of R and Python language scripts into
Transact-SQL queries.
SQL Server Reporting Services is no longer available to install through SQL Server setup. Go to the
Microsoft Download Center to download Microsoft SQL Server Reporting Services.
Note: The service names in the above list match the names that appear in both the SQL
Server Configuration and Windows Services tools for SQL Server 2017 services.
In these tools, the names of services that are instance-aware will be followed by the name of the
instance to which they refer in brackets—for example, the service name for the database engine
service for an instance named MSSQLSERVER is SQL Server (MSSQLSERVER).
Configuring Services
You use SQL Server Configuration Manager (SSCM) to configure SQL Server services, and the network
libraries exposed by SQL Server services, in addition to configuring how client connections are made to
SQL Server.
You can use SSCM to control (that is, start, stop, and configure) each service independently; to set the
startup mode (automatic, manual, or disabled) of each service; and to set the service account identity for
each service.
You can also set startup parameters to start SQL Server services with specific configuration settings for
troubleshooting purposes.
Each service account should be granted only the minimum privileges that the service requires for doing its
job. Ideally, each service should run under its own dedicated account. This minimizes risk because, if one
account is compromised, other services remain unaffected.
When you use the SQL Server Setup installation program (or the SQL Server Configuration Manager) to
specify a service account, the account is automatically added to the relevant security group to ensure that
it has the appropriate rights.
Note: Because it automatically assigns accounts to the correct security groups, use the SQL
Server Configuration Manager to amend SQL Server service accounts. You should not use the
Windows Services tool to manage SQL Server service accounts.
The different types of accounts that you can use for SQL Server services include:
Domain user account. A nonadministrator domain user account is a suitable choice for service
accounts in a domain environment.
Local user account. A nonadministrator local user account is a secure choice for service accounts in a
nondomain environment, such as a perimeter network.
Local system account. The local system account is a highly privileged account that is used by various
Windows services. Consequently, you should avoid using it to run SQL Server services.
Local service account. You use the local service account, a predefined account with restricted
privileges, to access local resources. This account is used by Windows services and other applications
that do not require access to remote resources; generally, a dedicated service account for each SQL
Server service is preferred. If your database server runs on Windows Server 2008 R2 or later, you can
use a virtual service account instead (see below).
Network service account. The network service account has fewer privileges than the local system
account, but it does give a service access to network resources. However, because this account is
often used by multiple services, including Windows services, you should avoid using it where possible.
If your database server runs on Windows Server 2008 R2 or later, you can use a virtual service account
instead (see below).
Managed service account. Managed service accounts are available if the host operating system is
Windows Server 2008 R2 or later (Windows 7 or later also support managed service accounts). SQL
Server support for managed service accounts was introduced in SQL Server 2012. A managed service
account is a type of domain account that is associated with a single server, and which you can use to
manage services. You cannot use a managed service account to log on to a server, so it is more
secure than a domain user account. Additionally, unlike a domain user account, you do not need to
manage passwords for managed service accounts manually. However, a domain administrator needs
to create and configure a managed service account before you can use it.
Virtual service account. Virtual service accounts are available if the host operating system is
Windows Server 2008 R2 or later (Windows 7 or later also support virtual service accounts). SQL
Server support for virtual service accounts was introduced in SQL Server 2012. A virtual service
account is like a managed service account, except that it is a type of local account that you can use to
manage services, rather than a domain account. Unlike managed service accounts, an administrator
does not need to create or configure a virtual service account. This is because a virtual service account
is a virtualized instance of the built-in network service account with its own unique identifier.
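Whichever account type you choose, you can confirm which accounts the services of an instance are
currently using by querying the sys.dm_server_services dynamic management view, as in the following
sketch:

SELECT servicename, service_account, startup_type_desc, status_desc
FROM sys.dm_server_services;   -- service accounts and startup modes for this instance's services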
TCP/IP
Named pipes
Shared Memory
The configuration for the TCP/IP protocol provides different settings on each configured IP address if
required—or a general set of configurations that are applied to all IP addresses.
The protocols that are enabled by default depend on the edition of SQL Server that has been installed.
For more information, see the Default SQL Server Network Protocol Configuration topic online:
Default SQL Server Network Protocol Configuration
https://aka.ms/Fx0cnj
Other protocols, where client applications can connect over a network interface, must be enabled by an
administrator; this is done to minimize the exposure of a new SQL Server instance to security threats.
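You can see which protocol each current connection is actually using by querying sys.dm_exec_connections,
as in the following minimal sketch:

SELECT session_id,
       net_transport,     -- Shared memory, Named pipe, or TCP
       protocol_type,
       local_tcp_port
FROM sys.dm_exec_connections;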
Aliases
Connecting to a SQL Server service can involve multiple settings such as server address, protocol, and
port. If you hard-code these connection details in your client applications—and then any of the details
change—your application will no longer work. To avoid this issue and to make the connection process
simpler, you can use SSCM to create aliases for server connections.
You create a server alias and associate it with a server, protocol, and port (if required). Client applications
can then connect to the alias by name without any concern about how those connections are made.
Aliases can provide a mechanism to move databases between SQL Server instances without having to
amend client connection strings. You can configure one or more aliases for each client system that uses
SNAC (including the server itself). Aliases for 32-bit applications are configured independently of those for
64-bit applications.
Shared Memory
Named Pipes
TCP/IP
Review Question(s)
Question: On a single server, when might you use multiple installations of SQL Server and
when might you use multiple SQL Server instances?
Module 2
Installing SQL Server
Contents:
Module Overview 2-1
Lesson 1: Considerations for Installing SQL Server 2-2
Module Overview
One of the key responsibilities of a database administrator (DBA) is to provision servers and databases.
This includes planning and performing the installation of Microsoft SQL Server® on physical servers and
virtual machines.
This module explains how to assess resource requirements for SQL Server and how to install it.
Objectives
After completing this module, you will be able to:
Lesson 1
Considerations for Installing SQL Server
This lesson covers the hardware and software requirements for installing SQL Server. You will learn about
the minimum requirements specified by Microsoft, in addition to tools and techniques to assess hardware
performance before you install SQL Server.
Note: This course covers SQL Server on Windows and in Microsoft Azure. Course 10999A –
SQL Server on Linux covers Linux-based installations.
Lesson Objectives
After completing this lesson, you will be able to:
Describe minimum hardware and software requirements for running SQL Server.
Understand considerations when designing I/O subsystems for SQL Server installations.
Use the SQLIOSim tool to assess the suitability of I/O subsystems for use with SQL Server.
Use the Diskspd tool for load generation and performance testing of storage I/O subsystems.
Software resource requirements. For SQL Server, the host operating system has to meet certain
software prerequisites.
For more information, see the Planning a SQL Server Installation topic in the SQL Server online reference:
Hardware Requirements
In earlier versions of SQL Server, it was necessary to focus on minimum requirements for processor, disk
space and memory. Nowadays, this is much less of a concern because the minimum hardware
requirements for running SQL Server are well below the specification of most modern systems. However,
SQL Server has limitations on the maximum CPU count and memory it will support. These limits vary
between different SQL Server editions—they might also vary on the edition of Microsoft Windows®
hosting the SQL Server instance.
Processors
The minimum processor requirements for x64 processor architectures are shown in the following table:
Processor core count, rather than processor speed, is more of a concern when planning a SQL Server
installation. While you might want to add as many CPUs as possible, it is important to consider that there
is a trade-off between the number of CPU cores and license costs. In addition, not all computer
architectures support the addition of CPUs. Adding CPU resources might then require architectural
upgrades to computer systems, not just the additional CPUs.
Memory
The minimum memory requirements are shown in the following table:
Edition               Minimum Memory    Recommended Memory
All other editions    1 GB              4 GB
Memory requirements for SQL Server are discussed further in the next topic.
Disk
SQL Server requires a minimum disk space of 6 GB to install (this might vary according to the set of
features you install) but this is not the only value to consider; the size of user databases can be far larger
than the space needed to install the SQL Server software.
Because of the large amount of data that must be moved between storage and memory when SQL Server
is in normal operation, I/O subsystem performance is critical.
Tools for assessing I/O subsystem performance are discussed later in this module.
Software Requirements
Like any server product, SQL Server requires specific combinations of operating system and software
before installation.
Operating System
Operating system requirements vary between different editions of SQL Server. The SQL Server online
documentation provides a precise list of supported versions and editions.
You can install some versions of SQL Server on client operating systems, such as Windows 10.
It is strongly recommended that you do not install SQL Server on a domain controller.
Prerequisite Software
SQL Server requires .NET Framework for the Database Engine, Master Data Services, or Replication
components to install. Running SQL Server setup automatically installs .NET Framework 4.6 if it is not
present.
The installer for SQL Server will install the SNAC and the SQL Server setup support files.
However, to minimize the installation time for SQL Server, particularly in busy production environments, it
is useful to preinstall these components during any available planned downtime. Components such as the
.NET Framework often require a reboot after installation, so the pre-installation of these components can
further reduce downtime during installations or upgrades. You can manually install the .NET Framework
by using the Microsoft .NET Framework 4.6 (Web Installer) for Windows.
Note: The information in this link has not been fully updated to SQL Server 2017.
CPU
CPU utilization for a SQL Server instance largely depends upon the types of queries that are running on
the system. Processor planning is often considered relatively straightforward, in that few system
architectures provide fine-grained control of the available processor resources. Testing with realistic
workloads is the best option.
Increasing the number of available CPUs will provide SQL Server with more scope for creating parallel
query plans. Even without parallel query plans, SQL Server workloads will make good use of multiple
processors when working with simple query workloads from a large number of concurrent users. Parallel
query plans are particularly useful when large amounts of data must be processed to return output.
Whenever possible, try to ensure that your server is dedicated to SQL Server. Most servers that are
running production workloads on SQL Server should have no other significant services running on the
same system. This particularly applies to other server applications such as Microsoft Exchange Server.
Many new systems are based on Non-Uniform Memory Access (NUMA) architectures. In a traditional
symmetric multiprocessing (SMP) system, all CPUs and memory are bound to a single system bus. The bus
can become a bottleneck when additional CPUs are added. On a NUMA-based system, each set of CPUs
has its own bus, complete with local memory. In some systems, the local bus might also include separate
I/O channels. These CPU sets are called NUMA nodes. Each NUMA node can access the memory of other
nodes, but access to local memory is much faster. The best performance is achieved if the CPUs
mostly access their own local memory. Windows and SQL Server are both NUMA-aware and try to make
use of these advantages.
Optimal NUMA configuration is highly dependent on the hardware. Special configurations in the system
BIOS might be needed to achieve optimal performance. It is crucial to check with the hardware vendor for
the optimal configuration for SQL Server on the specific NUMA-based hardware.
Memory
The availability of large amounts of memory for SQL Server to use is now one of the most important
factors when sizing systems.
While SQL Server will operate in relatively small amounts of memory, when memory configuration
challenges arise they tend to relate to the maximum, not the minimum, values. For example, the Express
Edition of SQL Server will not use more than 1 GB of memory, regardless of how much memory is installed
in the system.
The 64-bit operating system has a single address space that can directly access large amounts of memory.
SQL Server 2012 and SQL Server 2014 no longer support the use of Address Windowing Extensions
(AWE)-based memory to increase the address space for 32-bit systems. Consequently, 32-bit installations
on these earlier versions of SQL Server are effectively limited to accessing 4 GB of memory. SQL Server
2017 is only supported on 64-bit operating systems.
Determining Requirements
In the first phase of planning, you should
determine the requirements of the application,
including the I/O patterns that must be satisfied.
These include the frequency and size of reads and
writes sent by the application. As a general rule,
OLTP systems produce a high number of random I/O operations on the data files and sequential write
operations on database log files. By comparison, data warehouse-based applications tend to generate
large scans on data files, which are more typically sequential I/O operations on the data files.
Storage Styles
The second planning phase involves determining the style of storage to be used. With direct attached
storage (DAS), it is easier to get good predictable performance. On storage area network (SAN) systems,
more work is often required to get good performance; however, SAN storage typically provides a wide
variety of management capabilities and storage consolidation.
One particular challenge for SQL Server administrators is that SAN administrators are generally more
concerned with the disk space that is allocated to applications, rather than the performance requirements
of individual files. Rather than attempting to discuss file layouts with a SAN administrator, try to
concentrate on your performance requirements for specific files. Leave the decisions about how to
achieve those goals to the SAN administrator. In these discussions, you should focus on what is needed
rather than on how it can be achieved.
RAID Systems
In SAN-based systems, you will not often be concerned about the redundant array of independent disks
(RAID) levels being used. If you have specified the required performance on a file basis, the SAN
administrator will need to select appropriate RAID levels and physical disk layouts to achieve that.
For DAS storage, you should become aware of different RAID levels. While other RAID levels exist, RAID
levels 1, 5, and 10 are the most common ones used in SQL Server systems.
Number of Drives
For most current systems, the number of drives (or spindles, as they are still sometimes called) matters
more than the size of the disk. It is easy to find large disks that will hold substantial databases, but often a
single large disk will not be able to provide sufficient I/O operations per second or enough data
throughput (megabytes per second) to be workable. Solid state drive (SSD)-based systems are quickly
changing the available options in this area.
Drive Caching
Disk and disk array read caches are unlikely to have a significant effect on I/O performance because SQL
Server already manages its own caching system. It is unlikely that SQL Server will need to re-read a page
from disk that it has recently written, unless the system is low on memory.
Write caches can substantially improve SQL Server I/O performance, but make sure that hardware caches
guarantee a write, even after a system failure. Many drive write caches cannot survive failures, and this can
lead to database corruption.
SQLIOSim is installed as part of a SQL Server installation, and you can also download it from the Microsoft
website. The download package includes several sample test configuration files for different usage
scenarios; these are not included on the installation media.
For more information about SQLIOSim and to download the utility, see the How to use the SQLIOSim
utility to simulate SQL Server activity on a disk subsystem topic on the Microsoft Support website:
How to use the SQLIOSim utility to simulate SQL Server activity on a disk subsystem
http://aka.ms/itwi63
Demonstration Steps
1. Ensure that the 20765C-MIA-DC and 20765C-MIA-SQL virtual machines are running and log on to
20765C-MIA-SQL as ADVENTUREWORKS\Student with the password Pa55w.rd.
2. In the D:\Demofiles\Mod02 folder, right-click Setup.cmd, and then click Run as administrator.
3. In the User Account Control dialog box, click Yes, and wait for the script to finish.
7. In the Files and Configuration dialog box, in the System Level Configurations section, in the Cycle
Duration (sec) box, type 30.
9. In the Error Log (XML) box, type D:\sqliosim.log.xml, and then click OK.
10. In SQLIOSim, on the Simulator menu, click Start.
15. In File Explorer, go to D:\, and open sqliosim.log.xml with Office XML Handler.
16. Review the test results in the XML file, and then close the file without saving changes.
Detailed instructions on simulating SQL Server I/O activity, in addition to interpreting the results, are
included in a document packaged with the Diskspd download (UsingDiskspdforSQLServer.docx).
Demonstration Steps
1. Ensure that the 20765C-MIA-DC and 20765C-MIA-SQL virtual machines are running and log on to
20765C-MIA-SQL as ADVENTUREWORKS\Student with the password Pa55w.rd.
4. In Windows PowerShell, at the command prompt, type the following, and then press ENTER:
cd D:\Demofiles\Mod02\Diskspd-v2.0.17\amd64fre
6. Wait for the command to complete, and then review the output of the tool.
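The exact Diskspd command used in the demonstration is not reproduced here. For illustration only, a Diskspd test might take the following general shape; the test file path, file size, duration, and thread counts below are assumptions rather than values from the lab:

# Illustrative example only: 60-second random I/O test with 30% writes against a 10 GB test file,
# using 4 threads, 32 outstanding I/Os per thread, and 64 KB blocks; -L collects latency statistics.
.\diskspd.exe -c10G -d60 -r -w30 -t4 -o32 -b64K -L D:\Demofiles\Mod02\testfile.dat

Interpreting the resulting throughput, IOPS, and latency figures is covered in the document mentioned above (UsingDiskspdforSQLServer.docx).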
and easier to utilize the large resources available on modern server hardware. You can run SQL Server in
two kinds of virtual environment.
Hyper-V also supports snapshots of each image. A snapshot records the complete status of the image,
including the contents of all virtual hard drives and the virtual memory. You can, for example, use a
snapshot to trial a new configuration or custom code. If a problem arises, you can apply the snapshot to
return to the previous state.
You can use the same virtual hard disk image to create multiple VMs providing the image has been
generalized—for example, by using the sysprep.exe tool. In a generalized image, unique values such as
computer security identifiers (SIDs) have been removed. Multiple VMs created from such an image can
coexist on the same network without conflicts. You can use a single generalized image that includes SQL
Server to deploy multiple SQL Server instances quickly.
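As a minimal sketch only, the following Windows PowerShell commands show how a copy of a generalized virtual hard disk might be used to create and start a new VM, and how a checkpoint (snapshot) can be taken before trialing a configuration change. The file paths, VM name, and memory size are hypothetical, and the commands assume the Hyper-V PowerShell module is available:

# Hypothetical paths, names, and sizes for illustration only.
Copy-Item 'D:\Images\SQL2017-Generalized.vhdx' 'D:\VMs\SQLTEST01.vhdx'
New-VM -Name 'SQLTEST01' -MemoryStartupBytes 8GB -Generation 2 -VHDPath 'D:\VMs\SQLTEST01.vhdx'
Start-VM -Name 'SQLTEST01'
# Record the current state of the VM before applying a new configuration.
Checkpoint-VM -Name 'SQLTEST01' -SnapshotName 'Before-config-change'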
For more information about Hyper-V and VMs, see:
Docker is the server component that hosts the containers in which applications execute. Docker manages
containers and enables containers to access shared resources including network cards and the single
kernel image.
Windows Server 2016 includes the Windows Containers server feature for container and Docker support.
Containers can be started more quickly than VMs, because a full boot-up sequence is unnecessary.
A container does not provide a user interface. Instead, you must interact with the container and its
applications by using the command prompt and the docker.exe executable. For an example of SQL Server
installation in a container, see:
SQL Server 2017 on Windows, Linux, and Docker is now generally available
https://aka.ms/Aevy61
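As an illustrative sketch, running SQL Server 2017 in a Docker container follows the general pattern below. The image name and tag, the container name, and the sa password are examples and not values required by the course:

# Example only: download a SQL Server 2017 image and start a container listening on port 1433.
docker pull microsoft/mssql-server-linux:2017-latest
docker run -e "ACCEPT_EULA=Y" -e "SA_PASSWORD=Pa55w.rd" -p 1433:1433 --name sqltest -d microsoft/mssql-server-linux:2017-latest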
Many organizations are now adopting cloud solutions such as Microsoft Azure. In Azure, virtual machines
(VMs) can run Linux distributions just like on-premises servers.
By creating SQL Server for Linux, Microsoft has ensured that you have maximum flexibility in the platform
and tools that you use. This means that you can choose SQL Server without being bound to the complete
Microsoft platform—instead, you choose the components that suit your business model and IT
environment.
Trends in Virtualization
Virtual machines (VMs), both on-premises and running in the cloud, are now firmly established as options
to host server workloads such as databases. A VM is a software environment that simulates a complete
hardware server. Within a VM, you can run an operating system such as Windows or Linux that uses its
normal drivers and interfaces to access hardware such as network cards and hard drives. However, that
operating system is in fact hosted within a virtualization host, such as Microsoft Hyper-V®. You use this
model to host multiple VMs with different operating systems on a single host machine. Within each VM,
software such as database servers run in a completely isolated environment.
One problem with VMs is that, because each one hosts a complete operating system, they use a lot of
memory and CPU resources. To run a large number of VMs on a single hardware server requires a
powerful machine. Containers are a new form of virtualization that addresses this problem. A
containerized virtualization system, such as Docker, provides a similar isolated environment to
applications such as database servers. However, all containers share the host operating system. In this
way, you can run multiple containers, each of which appears to its software as a complete operating
system, while using fewer resources on the host server. Containers are, therefore, much more scalable
than VMs.
SQL Server 2017 is the first version of SQL Server that is available in containers—whether the containers
are running on Linux or Windows.
Course 10999A – SQL Server on Linux covers this topic in greater detail.
Lesson 2
tempdb Files
tempdb is a special database available as a resource to all users of a SQL Server instance; you use it to
hold temporary objects that users, or the database engine, create.
Because it is a shared object, the performance of tempdb can have a significant impact on the overall
performance of a SQL Server instance; you can take steps to improve the performance of tempdb.
Lesson Objectives
At the end of this lesson, you will be able to:
Describe special considerations needed when designing storage and allocating data files for tempdb.
Because users and the database engine both use tempdb to hold large temporary objects, it is
common for tempdb memory requirements to exceed the capacity of the buffer pool—in which case,
the data will spool to the I/O subsystem. The performance of the I/O subsystem that holds tempdb
data files can therefore significantly impact the performance of the system as a whole. If the
performance of tempdb is a bottleneck in your system, you might decide to place tempdb files on
very fast storage, such as an array of SSDs.
Although it uses the same file structure, tempdb has a usage pattern unlike user databases. By their
nature, objects in tempdb are likely to be short-lived, and might be created and dropped in large
numbers. Under certain workloads—especially those that make heavy use of temporary objects—this
can lead to heavy contention for special system data pages, which can mean a significant drop in
performance. One mitigation for this problem is to create multiple data files for tempdb; this is
covered in more detail in the next topic.
When SQL Server recreates the tempdb database following a restart of the SQL Server service, the
size of the tempdb files returns to a preconfigured value. The tempdb data files and log file are
configured to autogrow by default, so if subsequent workloads require more space in tempdb than is
currently available, SQL Server will request more disk space from the operating system. If the initial
size of tempdb and the autogrowth increment set on the data files is small, SQL Server might need to
request additional disk space for tempdb many times before it reaches a stable size. Because a file-
growth operation requires an exclusive lock on the entire tempdb database, and tempdb is central
to the function of SQL Server, each file-growth operation can pause the database engine for its
duration. To avoid this problem, specify file size and autogrowth settings for tempdb that minimize
the number of autogrowth operations during normal running.
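For example, a minimal sketch of presizing the primary tempdb data file and the log file uses ALTER DATABASE ... MODIFY FILE. The sizes shown are placeholders, not recommendations; choose values appropriate to your workload:

-- Example sizes only; tempdev and templog are the default logical file names for tempdb.
ALTER DATABASE tempdb MODIFY FILE (NAME = tempdev, SIZE = 4096MB, FILEGROWTH = 512MB);
ALTER DATABASE tempdb MODIFY FILE (NAME = templog, SIZE = 1024MB, FILEGROWTH = 256MB);
GO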
For more information, see the tempdb Database topic in the SQL Server online documentation:
tempdb Database
https://aka.ms/Qf9y7y
This is a change from earlier versions of SQL Server, where configuring tempdb was a post-installation
task.
With all versions of SQL Server, you can specify the location of the tempdb data files and log files during
installation.
You can configure these settings on the TempDB tab in the Database Engine Configuration section of
the installation process.
Lesson 3
Installing SQL Server
After making the decisions about your SQL Server configuration, you can move to installation. In this
lesson, you will see the phases that installation goes through and how SQL Server checks your system for
compatibility by using the System Configuration Checker tool.
For most users, the setup program will report that all was installed as expected. For the rare situations
where this does not occur, you will also learn how to carry out post-installation checks and
troubleshooting.
Lesson Objectives
After completing this lesson, you will be able to:
Installation Wizard
The SQL Server installation wizard provides a
simple user interface for installing SQL Server. You
can use it to select all the components of SQL
Server that you want to install. In addition to using
it to create a new installation on the server, you
can use it to add components to an existing one.
Note: You must be a local administrator to run the installation wizard on the local
computer. When installing from a remote share, you need read and execute permissions.
Command Prompt
You can also run the SQL Server setup program from the command prompt, using switches to specify the
options that you require. You can configure it for users to fully interact with the setup program, to view
the progress without requiring any input, or to run it in quiet mode without any user interface. Unless you
are using a volume licensing or third-party agreement, a user will always need to confirm acceptance of
the software license terms.
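As a hedged illustration, a quiet command-prompt installation of just the database engine might take the following shape. The feature list, instance name, and accounts are placeholders (the account names reuse names from the course environment purely as examples):

REM Example only: quiet installation of the database engine into a named instance.
Setup.exe /q /ACTION=Install /FEATURES=SQLEngine /INSTANCENAME=SQLTEST ^
    /SQLSVCACCOUNT="ADVENTUREWORKS\ServiceAcct" /SQLSVCPASSWORD="Pa55w.rd" ^
    /SQLSYSADMINACCOUNTS="ADVENTUREWORKS\Student" /IACCEPTSQLSERVERLICENSETERMS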
Configuration File
In addition to using switches to provide information to the command prompt setup, you can use a
configuration file. This can simplify the task of installing identically-configured instances across your
enterprise.
The configuration file is a text file containing name/value pairs. You can manually create this file by
running the installation wizard, selecting all your required options, and then, instead of installing the
product, generating a configuration file of those options—or you could take the configuration file from a
previously successful installation.
If you use a configuration file in conjunction with command prompt switches, the command prompt
values will override any values in your configuration file.
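For instance, the following sketch supplies a configuration file and overrides the sensitive values on the command line; the file path and passwords are placeholders:

REM Example only: values supplied as switches override values in the configuration file.
Setup.exe /ConfigurationFile=D:\Setup\ConfigurationFile.ini /SQLSVCPASSWORD="Pa55w.rd" /SAPWD="Pa55w.rd"

Passwords are not written to the configuration file that the wizard generates, so they are typically supplied at the command prompt in this way.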
For more information, see the Install SQL Server topic in the SQL Server online documentation:
Product Updates. The installation process checks for any updates to prerequisite software.
Install Setup Files. The installation process installs the setup files required to install SQL Server.
Install Rules. The installation process checks for known potential issues that can occur during setup
and requires you to rectify any that it finds before continuing.
Setup Role. You must select the type of installation that you need to ensure that the process includes
the correct feature components you require. The options are:
o SQL Server Feature Installation. This option installs the key components of SQL Server,
including the database engine, Analysis Services, Reporting Services, and Integration Services.
o All Features with Defaults. This option installs all SQL Server features and uses the default
options for the service accounts.
After you select one of these options, you can further customize the features to install on the next page of
the wizard.
Feature Selection. You can use this page to select the exact features you want to install. You can also
specify where to install the instance features and the shared features.
Feature Rules. The installation process checks for prerequisites of the features you have marked for
installation.
Instance Configuration. You must specify whether to install a default or named instance (if a default
instance is already present, installing a named instance is your only option) and, if you opt for a
named instance, the name that you want to use.
Server Configuration. Specify the service account details and startup type for each service that you
are installing.
Ready to Install. Use this page to review the options you have selected throughout the wizard
before performing the installation.
Complete. When the installation is complete, you might need to reboot the server.
Note: This is the sequence of pages in the Installation Wizard when you install SQL Server
on a server with no existing SQL Server instances. If previous SQL Server instances are already
installed, most of the same pages are displayed; some pages appear at different positions in the
sequence.
For more information, see the Install SQL Server from the Installation Wizard (Setup) topic in the SQL
Server online documentation:
You do not need to check the contents of the SQL Server setup log files after installation, because the
installer program will indicate any errors and attempt to reverse any of the SQL Server setup that has been
completed to that point. When errors occur during the SQL Server setup phase, the installation of the SQL
Server Native Client and the setup components is not reversed.
Typically, you only need to view the setup log files in two scenarios:
If setup is failing and the error information displayed by the installer does not help you to resolve the
issue.
If you contact Microsoft Product Support and they ask for detailed information.
If you do require the log files, you will find them in the %ProgramFiles%\Microsoft SQL Server\140\Setup
Bootstrap\Log folder.
Categorize Activity
Which of the following methods can be used to install SQL Server? Indicate your answer by writing the
category number to the right of each item.
Items
1 Installation Wizard
2 Windows Update
3 Command Prompt
5 PowerShell
Category 1 Category 2
Lesson 4
Automating Installation
Having learnt how to install SQL Server in the previous lesson, we will now look at ways to automate the
installation.
Lesson Objectives
After completing this lesson, you will be able to:
Cumulative Updates (CUs) are periodic roll-up releases of hotfixes that have received further testing
as a group.
Service Packs (SPs) are periodic releases where full regression testing has been performed. Microsoft
recommends applying SPs to all systems after appropriate levels of organizational testing.
The simplest way to keep SQL Server up to date is to enable automatic updates from the Microsoft
Update service. Larger organizations, or those with documented configuration management processes,
should exercise caution when applying automatic updates. Updates should typically be applied to test
or staging environments before they are applied to production environments.
SQL Server can also have product SPs slipstreamed into the installation process to avoid having to apply
them after installation.
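As a hedged example, the product update parameters let setup apply updates from a local folder or share during installation; the UNC path below is hypothetical, and other installation parameters are omitted for brevity:

REM Example only: enable product updates from a local share during installation.
Setup.exe /ACTION=Install /UpdateEnabled=TRUE /UpdateSource="\\fileserver\SQLUpdates" /IACCEPTSQLSERVERLICENSETERMS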
For more information, see the Install SQL Server Servicing Updates topic in the SQL Server online
documentation:
Unattended Installation
In many organizations, senior IT administrators
create script files for standard builds of software
installations and use them to ensure consistency.
Unattended installations can help with the
deployment of multiple identical installations of
SQL Server across an enterprise. Unattended
installations can also facilitate the delegation of
the installation to another person.
In both examples on the slide, the second method has been used. The first example shows a typical
installation command and the second shows how an upgrade could be performed using the same
method.
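The slide commands are not reproduced in this handbook. As hedged illustrations of their general shape, an unattended installation driven by a configuration file, and an unattended upgrade of an existing instance, might look like the following; the file path and instance name are placeholders:

REM Example only: quiet installation driven by a configuration file.
Setup.exe /q /ACTION=Install /ConfigurationFile=D:\Setup\ConfigurationFile.ini /IACCEPTSQLSERVERLICENSETERMS

REM Example only: "quiet simple" upgrade of an existing named instance.
Setup.exe /qs /ACTION=Upgrade /INSTANCENAME=SQLTEST /IACCEPTSQLSERVERLICENSETERMS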
/q Switch
The "/q" switch shown in the examples specifies "quiet mode"—no user interface is provided. An
alternative switch "/qs" specifies "quiet simple" mode. In the quiet simple mode, the installation runs and
shows progress in the UI but does not accept any input.
Note: You can use the installation wizard to create new installation .ini files
without carrying out a manual install. If you use the installation wizard all the way through to the
Ready to Install stage, the wizard generates a new ConfigurationFile.ini with the settings you
selected in C:\Program Files\Microsoft SQL Server\140\Setup Bootstrap\Log.
For more information, see the Install SQL Server from the Command Prompt topic in the SQL Server online
documentation:
For more information, see the Install SQL Server Using a Configuration File topic in the SQL Server online
documentation:
Demonstration Steps
1. Ensure that the 20765C-MIA-DC and 20765C-MIA-SQL virtual machines are running and log on to
20765C-MIA-SQL as ADVENTUREWORKS\Student with the password Pa55w.rd.
4. Review the content in conjunction with the Install SQL Server From the Command Prompt topic in the
SQL Server online documentation. In particular, note the values of the following properties:
a. INSTANCEID
b. ACTION
c. FEATURES
d. QUIET
e. QUIETSIMPLE
f. INSTALLSHAREDDIR
g. INSTANCEDIR
h. INSTANCENAME
i. AGTSVCSTARTUPTYPE
j. SQLCOLLATION
k. SQLSVCACCOUNT
l. SQLSYSADMINACCOUNTS
m. TCPENABLED
5. Close Notepad.
Objectives
After completing this lab, you will be able to:
2. Keep the SQL Server Installation Center window open. You will use it again in a later exercise.
Results: After this exercise, you should have run the SQL Server setup program and used the tools in the
SQL Server Installation Center to assess the computer’s readiness for SQL Server installation.
Startup: Both SQL Server and SQL Server Agent should start manually
SA Password: Pa55w.rd
o On the Feature Selection page, select only the features that are required.
o On the Server Configuration page, configure the service account name and password, the
startup type for the SQL Server Agent and SQL Server Database Engine services, and verify the
collation.
o On the Database Engine Configuration page, configure the authentication mode and the SA
password; add the current user (Student) to the SQL Server administrators list, specify the
required data directories, and verify that Filestream is not enabled.
Results: After this exercise, you should have installed an instance of SQL Server.
2. Verify that the service is configured to log on as ADVENTUREWORKS\ServiceAcct, and then start
the service.
2. View the SQL Server Native Client 32-bit client protocols and verify that the TCP/IP protocol is
enabled. Create an alias named Test that uses TCP/IP to connect to the MIA-SQL\SQLTEST instance
from 32-bit clients.
3. View the SQL Server Native Client protocols and verify that the TCP/IP protocol is enabled. Create an
alias named Test that uses TCP/IP to connect to the MIA-SQL\SQLTEST instance from 64-bit clients.
SELECT @@ServerName;
GO
a. View the properties of the Test instance and verify that the value of the Name property is MIA-
SQL\SQLTEST.
Results: After this exercise, you should have started the SQL Server service and connected using SSMS.
The configuration of these instances should match the configuration of the SQLTEST instance you have
just installed, with the following differences:
User database files and log files should all be placed in C:\devdb.
The development servers have two CPU cores; you should configure tempdb accordingly.
2. Review the content of the file, paying particular attention to the following properties:
a. INSTANCEID
b. INSTANCENAME
c. ACTION
d. FEATURES
e. TCPENABLED
f. SQLUSERDBDIR
g. SQLUSERDBLOGDIR
h. SQLTEMPDBFILECOUNT
a. Amend the file to reflect the changes needed for this task.
d. User database files and log files should all be placed in C:\devdb.
e. The development servers have two CPU cores; tempdb should be configured accordingly.
2. Save the file and compare your changes to the solution shown in the file
D:\Labfiles\Lab02\Solution\SolutionConfigurationFile.ini.
Results: After this exercise, you will have reviewed and edited an unattended installation configuration
file.
Review Question(s)
Question: What are the considerations for installing additional named instances on a server
where SQL Server is already installed?
Module 3
Upgrading SQL Server to SQL Server 2017
Contents:
Module Overview 3-1
Lesson 1: Upgrade Requirements 3-2
Lesson 3: Side-by-Side Upgrade: Migrating SQL Server Data and Applications 3-20
Lab: Upgrading SQL Server 3-31
Module Overview
Occasionally, you might need to upgrade existing Microsoft® SQL Server® instances, services, and
databases to a new version of SQL Server. This might arise for a number of reasons—for example:
To continue to receive support and security patches for SQL Server from Microsoft (because
mainstream support or extended support for your version is ending).
To move a SQL Server instance between different editions of the same version.
This module will cover points you might need to consider when planning an upgrade to SQL Server 2017,
and the different strategies you might use to deliver it.
For full details of the lifecycle dates for versions of SQL Server, see Microsoft Lifecycle Policy on the
Microsoft Support site:
Objectives
At the end of this module, you will be able to:
Describe the different strategies available for upgrading existing SQL Server instances, services, and
databases to SQL Server 2017.
Lesson 1
Upgrade Requirements
Before beginning an upgrade of an existing SQL Server to SQL Server 2017, you should draw up an
upgrade plan. This lesson covers the points you will need to consider.
Lesson Objectives
At the end of this lesson, you will be able to:
Identify the versions and editions of SQL Server suitable for upgrading to SQL Server 2017.
Describe the in-place and side-by-side strategies for upgrading to SQL Server 2017.
Use the SQL Server 2017 Data Migration Assistant.
You might decide to plan for application testing to confirm that your application will continue to operate
correctly with SQL Server 2017. Changes to your application could be required to correct any issues
discovered during testing.
An in-place upgrade.
A side-by-side upgrade.
The recommended practice is to treat a SQL Server upgrade as you would any other IT project by
assembling a team with the appropriate skills and developing an upgrade plan.
For more information about all aspects of upgrading to SQL Server 2017, see Upgrade SQL Server in the
SQL Server online documentation:
o SQL Server 2017 can restore database engine and Analysis Services backups from SQL Server
2005.
o SQL Server 2017 can attach database data and log files from SQL Server 2005.
No in-place upgrade path is available for SQL Server versions before 2005.
o To upgrade in-place from an older version, an interim upgrade to a version where a supported
upgrade path is available must be carried out. For example, a SQL Server 2000 instance could be
upgraded to SQL Server 2008 R2 SP2, then upgraded to SQL Server 2017.
In-place upgrades are permitted only between compatible editions of SQL Server. In general, you can
move from a lower-featured edition to an equally featured or higher featured edition as part of an
upgrade to SQL Server 2017; however, you cannot move from a higher featured edition to a lower
featured edition.
For example:
A SQL Server 2008 R2 Standard Edition instance could be upgraded to SQL Server 2017 Standard
Edition or Enterprise Edition.
A SQL Server 2008 R2 Enterprise Edition instance could be upgraded to SQL Server 2017 Enterprise
Edition. An upgrade to SQL Server 2017 Standard Edition would not be permitted.
For a full list of valid migration sources and targets, see Supported Version and Edition Upgrades for SQL
Server 2017 in the SQL Server online documentation:
For more information about upgrading to SQL Server 2017 from SQL Server 2005, see Are you upgrading
from SQL Server 2005? in the SQL Server online documentation:
In-Place Upgrade
An in-place upgrade occurs when you replace the
installed version of SQL Server with a new version.
This is a highly automated, and therefore easier,
method of upgrading. However, an in-place
upgrade is not without risks. If the upgrade fails, it
is much harder to return to the previous operating
state by reverting to the older version of SQL
Server. For your organization, you will need to decide whether this risk outweighs the benefit of a more
straightforward upgrade process.
When you are weighing up this risk, you need to consider that it might not be the SQL Server upgrade
that fails. Even if the SQL Server upgrade works as expected, a client application might fail to operate as
anticipated on the new version of SQL Server. In this case, the need to recover the situation quickly will be
just as important as if the upgrade of SQL Server software had failed.
In-place upgrades have the added advantage of minimizing the need for additional hardware resources
and avoiding the redirection of client applications that are configured to work with the existing server.
It is not possible to upgrade from a 32-bit version of SQL Server, as SQL Server 2017 is 64-bit only.
Side-by-Side Upgrade
In a side-by-side upgrade, databases are migrated from the existing SQL Server instance to a new SQL
Server 2017 instance. The migration might be carried out using database backups, detach and reattach of
data files, or through data transfer between databases using the Copy Database Wizard, BCP, SSIS, or
another ETL tool.
A side-by-side upgrade is subject to less risk than an in-place upgrade, because the original system stays
in place; it can be quickly returned to production should an upgrade issue arise. However, side-by-side
upgrades involve extra work and more hardware resources.
One server. A SQL Server 2017 instance is installed alongside the instance to be upgraded on the
same hardware. One-server side-by-side upgrades are less common in virtualized and cloud IT
infrastructures.
Two servers. SQL Server 2017 is installed on different hardware from the old instance.
A side-by-side upgrade offers a method to upgrade between versions and editions of SQL Server where
no in-place upgrade path exists—such as moving a database from an Enterprise Edition instance to a
Standard Edition instance.
Whether one or two servers are used, you will need enough hardware resources to provide for both the
original and the new systems to perform a side-by-side upgrade. Common issues associated with side-by-
side upgrades include:
Configuration of server-level objects and services (for example, logins and SQL Server Agent jobs) on
the new SQL Server 2017 instance.
Not all versions of SQL Server are supported when installed side-by-side on the same hardware.
For information on versions of SQL Server that might be installed side-by-side on the same server, see the
topic Using SQL Server Side-By-Side with Previous Versions of SQL Server, in Work with Multiple Versions
and Instances of SQL Server, in the SQL Server online documentation:
Hybrid Options
You can also use some elements of an in-place upgrade and a side-by-side upgrade together. For
example, rather than copying all the user databases, after installing the new version of SQL Server beside
the old version—and migrating all the server objects such as logins—you could detach user databases
from the old server instance and reattach them to the new one.
After user databases have been attached to a newer version of SQL Server, they cannot be reattached to
an older version, even if the database compatibility settings have not been upgraded. You need to
consider this risk when you use a hybrid approach.
Rolling Upgrade
To maximize up time and minimize risk, a more complex approach might be required if you are
upgrading an instance that employs High Availability (HA) functionality.
The details of your upgrade plan will vary, depending on which HA features you are using, including:
Failover clustering
Mirroring
Log shipping
Replication
To assist in your upgrade planning, see Choose a Database Engine Upgrade Method in the SQL Server
online documentation:
The tool generates a report that lists any issues found and rates them with a severity:
High. The issue is a breaking change that will cause problems after migration.
Where relevant, the report will include affected database object names, the affected line(s) of code, and a
suggested resolution.
Lists of issues are generated for SQL Server 2017 and also for future versions (listing features that are still
supported in SQL Server 2017 but marked for deprecation in a future version of SQL Server).
Reports generated by Data Migration Assistant can be exported to .html or .csv files.
If any high severity issues are reported by Data Migration Assistant for SQL Server 2017, you must modify
your application to resolve them before proceeding with the upgrade.
Note: Data Migration Assistant can only check compatibility of code in database objects
with SQL Server 2017 (user-defined functions, stored procedures, and so on). Database users and
client applications might generate and execute Transact-SQL statements against the instance to
be upgraded; Data Migration Assistant cannot assess the compatibility of SQL statements
generated by users and applications.
The Distributed Replay Utility uses as its input a file of trace data captured by the SQL Server Profiler
utility that records all client activity against the source SQL Server instance.
Note: The results from testing with the Distributed Replay Utility are only as representative
as the content of the source trace data files. You will need to consider how long the trace that
captures client application activity needs to run to capture a representative workload, and trade
this off against the size of the captured data.
Additional Reading: For more information on the SQL Server Profiler utility and SQL
Server Profiler traces, see course 20764: Administering a SQL Database Infrastructure.
SQL Server Profiler trace data files must be preprocessed before they are used by the Distributed Replay
Utility.
The Distributed Replay Utility consists of two components: a server and a client. Either or both of these
components can be installed during SQL Server installation.
When using the Distributed Replay Utility, you will have one server and one or more clients—up to a
maximum of 16. The server coordinates test activity, whilst the clients replay commands as assigned to
them by the server. The server and clients might be installed on the same or different hardware.
Distributing the replay across multiple clients results in a better simulation of activity on the source
system. Workloads from source systems with high levels of activity, which would not be possible to
replicate using a single replay client, can also be replayed.
At the end of the replay, you will manually review the output from each replay client, looking for
commands that generated error messages. These errors could indicate client application code that is not
compatible with the target server.
For more information on installing, configuring and using the Distributed Replay Utility, see SQL Server
Distributed Replay in the SQL Server online documentation:
Demonstration Steps
1. Ensure that the 20765C-MIA-DC-UPGRADE and 20765C-MIA-SQL-UPGRADE virtual machines are
running and log on to 20765C-MIA-SQL-UPGRADE as ADVENTUREWORKS\Student with the
password Pa55w.rd.
2. Run Setup.cmd in the D:\Demofiles\Mod03 folder as Administrator.
3. When the script has completed, press any key to close the window.
4. Navigate to https://www.microsoft.com/en-us/download/details.aspx?id=42642.
5. In the Microsoft .NET Framework 4.5.2 (Offline Installer) page, ensure that English is selected,
then click Download.
9. Check I have read and accept the license terms, then click Install.
10. When prompted, restart the 20765C-MIA-SQL-UPGRADE computer and log on again as
ADVENTUREWORKS\Student with a password of Pa55w.rd.
14. In the Microsoft Data Migration Assistant Setup window, click Next.
15. Select the I accept the terms in the License Agreement check box, and then click Next.
16. Select the I agree to the Privacy Policy check box, and then click Install.
17. In the User Account Control dialog box, click Yes, and then click Finish.
18. On the Start screen, type Microsoft Data Migration Assistant, and then click Microsoft Data
Migration Assistant.
19. In the Data Migration Assistant, on the left-hand side, click the + sign.
22. Ensure that Source server type and Target server type are both set to SQL Server.
25. In the SERVER NAME box, type MIA-SQL, check that Authentication type is set to Windows
Authentication.
26. Check the Trust server certificate box and then click Connect.
27. In the Select sources pane, under the list of databases, check TSQL and MDS, and click Add.
29. When analysis is complete, in the left-hand pane under MIA-SQL (SQL Server 2014), click TSQL.
30. In the Compatibility 140 (1) blade, under Behavior changes (1), click SET ROWCOUNT used in
the context of DML statements such as INSERT, UPDATE, or DELETE.
31. Show the students the output from this check and the implications when using SET ROWCOUNT
statements.
Lesson 2
Upgrade of SQL Server Services
Your upgrade plan should take account of all the SQL Server features that are installed on an instance that
you wish to upgrade to SQL Server 2017.
Some SQL Server features might require additional work to facilitate the upgrade process; the nature of
this work might vary, depending on whether you are planning to use an in-place or side-by-side
migration strategy.
Lesson Objectives
At the end of this lesson, you will be able to:
For more information on this topic, see Upgrade Analysis Services in the SQL Server online documentation:
Objects stored in system databases (master, msdb) must also be transferred to the target server. This
might include:
Logins.
Server-level triggers.
For more information on this topic, see Upgrade Database Engine in the SQL Server online
documentation:
Upgrade Database Engine
https://aka.ms/Xxr9pi
Whether you undertake an in-place or side-by-side upgrade, there are two additional steps you must
carry out to upgrade DQS:
Upgrade the schema of the DQS databases using the command-line utility DQSInstaller.exe.
For more information on this topic, see Upgrade Data Quality Services in the SQL Server online
documentation:
The behavior of the upgrade will vary, depending on whether SSIS is installed on the same machine as the
database engine, and whether both the database engine and SSIS are upgraded at the same time.
When SSIS and the database engine are upgraded together on the same machine, the upgrade
process will remove the system tables used to store SSIS packages in earlier versions of SQL Server,
and replace them with the SQL Server 2017 versions of these tables. This means that earlier versions
of the SQL Server client tools cannot manage or execute SSIS packages stored in SQL Server.
When only the database engine is upgraded, the old versions of the tables continue to be used in
SQL Server 2017.
Regardless of whether SSIS and the database engine are upgraded at the same time or not, the upgrade
process will not:
Remove earlier versions of the Integration Services service.
Migrate existing SSIS packages to the new package format used by SQL Server 2017.
Move packages from file system locations other than the default location.
Update SQL Agent job steps that call the dtexec utility with the file system path for the SQL Server
2017 version of dtexec.
For more information on this topic, see Upgrade Integration Services in the SQL Server online
documentation:
o Any customizations you have made to objects in the MDS database will be overwritten during
this upgrade.
Install the SQL Server 2017 Master Data Services web application.
o When the schema upgrade of the MDS database is complete, any old version of the MDS web
application will no longer be able to connect to the database.
Upgrade any clients using the Master Data Services Add-In for Excel® to the SQL Server 2017 version
of the add-in.
o When the schema upgrade of the MDS database is complete, clients using any old version of the
Master Data Services Add-In for Excel can no longer connect to the database.
You can carry out an in-place upgrade of Master Data Services to SQL Server 2017 without upgrading the
database engine hosting the MDS database.
For more information on this topic, see Upgrade Master Data Services in the SQL Server online
documentation:
When you upgrade Power Pivot for SharePoint to SQL Server 2017, you need to install or upgrade
SharePoint components on your SharePoint server or server farm.
Upgrade prerequisites and the details of the upgrade procedure for Power Pivot for SharePoint vary,
depending on whether you are using SharePoint 2010 or SharePoint 2013. However, the summary of the
upgrade is identical for both versions of SharePoint:
Upgrade all servers running Analysis Services in SharePoint mode (at a minimum, the POWERPIVOT
instance must be upgraded).
Install and configure the SQL Server 2017 Power Pivot for SharePoint Add-In on all servers in the
SharePoint farm. In a multiserver farm, when the upgrade is completed on one server, the remaining
servers in the farm become unavailable until they are upgraded.
The upgrade will not upgrade Power Pivot workbooks that run on the SharePoint servers, but workbooks
created using previous versions of Power Pivot for Excel will continue to function.
The exception to this is workbooks using scheduled data refresh. These workbooks must be using a
version of Power Pivot for Excel that matches the server version. You must manually upgrade these
workbooks, or use the auto-upgrade for data refresh feature in SharePoint 2010.
For more information, including the detailed steps required to upgrade SharePoint 2010 and SharePoint
2013 servers, see the topic Upgrade Power Pivot for SharePoint in the SQL Server online documentation:
Upgrade Power Pivot for SharePoint
https://aka.ms/O7opz3
SQL Server supports replication of data between nodes running different versions of SQL Server. As a
result, nodes in a SQL Server replication topology can be upgraded to SQL Server 2017 independently of
one another, without halting activity across the whole topology.
A Distributor must be running a version of SQL Server greater than or equal to the Publisher version.
Correspondingly, the Publisher must be running a lesser or equal version of SQL Server than its
Distributor.
Subscribers to a transactional publication must be running a version within two versions of the
Publisher version. For example, a SQL Server 2017 publisher can have Subscribers running SQL Server
2014 or 2016.
When upgrading instances where the log reader agent is running for transactional replication, you must
stop activity on the database and allow the log reader agent to process any pending operations before
stopping it. When this process is complete, you can upgrade to SQL Server 2017.
For merge replication instances, the Merge Agent and Snapshot Agent should be run for each
subscription following an upgrade.
For more information, see the topic Upgrade Replicated Databases in the SQL Server online
documentation:
Back up customizations to Reporting Services virtual directories in Internet Information Services (IIS).
Remove invalid/expired SSL certificates from IIS. The presence of an invalid SSL certificate will cause
the installation of Reporting Services to fail.
The upgrade process for Reporting Services will be different, depending on whether you use Reporting
Services in native mode or in SharePoint mode. When Reporting Services is running in SharePoint mode,
you will need to upgrade the Reporting Services Add-In for SharePoint after the Reporting Services service
has been upgraded.
If you are using Reporting Services in native mode with a scale-out deployment (where the deployment
includes more than one report server), you must remove all the members from the scaled-out deployment
group before upgrading them. As servers are upgraded, they can be added back into the scaled-out
deployment group.
For more information, see the topic Upgrade and Migrate Reporting Services in the SQL Server online
documentation:
For more information on the steps needed to complete a two-server side-by-side upgrade of Reporting
Services, see the topic Migrate a Reporting Services Installation (Native Mode) in the SQL Server online
documentation:
You should always manage SQL Server 2017 instances through the SQL Server management tools.
Note: You might need to update the PATH environment variable (or specify a full path for
executable files) to ensure that you can use the new version of command-line tools and utilities.
For more information, see the topic Upgrade SQL Server Management Tools in the SQL Server online
documentation:
For a full list of supported upgrades between different editions of SQL Server 2017, see Supported Version
and Edition Upgrades for SQL Server 2017 in the SQL Server online documentation:
For more information on using SQL Server setup to amend the edition of a SQL Server 2017 installation,
see the topic Upgrade to a Different Edition of SQL Server (Setup) in the SQL Server online documentation:
Note: You can carry out an upgrade or change of edition using the command-line
interface for setup.exe.
Demonstration Steps
1. Ensure that the 20765C-MIA-DC-UPGRADE and 20765C-MIA-SQL-UPGRADE virtual machines are
running, and log on to 20765C-MIA-SQL-UPGRADE as ADVENTUREWORKS\Student with the
password Pa55w.rd.
4. In SQL Server Installation Center, click Installation, and then click Upgrade from a previous version
of SQL Server.
6. On the License Terms page, select I accept the license terms, and then click Next.
7. On the Product Updates page, click Next. Any error relating to a failure to search for updates
through Windows Update can be ignored.
8. On the Select Instance page, set the value of the Instance to upgrade box to MSSQLSERVER, and
then click Next.
9. On the Reporting Services Migration page, check Uninstall Reporting Services, then click Next.
14. The demonstration stops at this point because you cannot complete the upgrade with an Evaluation
version of SQL Server.
15. On the Feature Rules page, click Cancel, and then click Yes to Cancel the installation.
Categorize Activity
Place each SQL Server component into the appropriate category. Indicate your answer by writing the
category number to the right of each item.
Items
1 Database Engine
2 Integration Services
3 Reporting Services
5 Analysis Services
Category 1 Category 2
Lesson 3
Side-by-Side Upgrade: Migrating SQL Server Data and
Applications
In a side-by-side upgrade, user databases are transferred from an existing SQL Server instance to a new
SQL Server 2017 instance. The new SQL Server 2017 instance can be installed on the same hardware as the
instance you wish to upgrade (a one-server upgrade) or new hardware (a two-server upgrade). The points
covered in this lesson will apply whether you are upgrading to SQL Server 2017 using a one-server
upgrade or a two-server upgrade.
During a side-by-side upgrade to SQL Server 2017, much of your time and effort will be consumed by
moving or copying databases between SQL Server instances.
However, it is common for a SQL Server application to consist of a mixture of database-level objects
(tables and stored procedures, for example) and server-level objects (such as logins and SQL Server Agent
jobs). Consequently, when you are carrying out a side-by-side migration to SQL Server 2017, you will need
to take some manual steps to ensure that your applications continue to operate correctly after migration.
Note: The techniques for migrating databases discussed in this lesson are not only suitable
for carrying out a side-by-side database upgrade; these same techniques can also be used to
move databases between different instances of the same version of SQL Server.
Lesson Objectives
At the end of this lesson, you will be able to:
Describe considerations for upgrading data and applications.
Explain why you might need to create logins on the SQL Server 2017 instance.
Restricting client application access to the databases is likely to cause a period of partial loss of function or
complete downtime for those applications. You must determine the duration of loss of function that is
acceptable to your organization.
SQL Server will automatically carry out these metadata changes whenever a database is restored, or a
database file attached, to a new instance. However, upgrading a database from an older version of SQL
Server to a newer version is the only supported path; you cannot downgrade a database from a newer
version of SQL Server to an older version. You should consider this limitation when designing a rollback
plan for an upgrade to SQL Server 2017.
The database compatibility level means that SQL Server 2017 can provide partial backward compatibility
with previous versions of the database engine. Databases that rely on behaviors that have changed in, or
are deprecated in, SQL Server 2017 can often continue to be used in SQL Server 2017 without modification.
Database Engine Version                 Compatibility Level
SQL Server 2017                         140
SQL Server 2016                         130
SQL Server 2014                         120
SQL Server 2012                         110
SQL Server 2008 / SQL Server 2008 R2    100
When you restore or attach a database to a new SQL Server instance, the database compatibility level
remains unchanged until you manually modify it; you might do this if you want to start using new
features. An administrator can modify the compatibility level of a database, either up or down, at any
time.
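For example, you might check a migrated database's current level and raise it to the SQL Server 2017 level only when testing is complete. This is a minimal sketch using a hypothetical database name:

-- Check the current compatibility level (SalesDB is an example name).
SELECT name, compatibility_level FROM sys.databases WHERE name = N'SalesDB';

-- Raise the database to the SQL Server 2017 compatibility level when ready.
ALTER DATABASE SalesDB SET COMPATIBILITY_LEVEL = 140;
GO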
For more information on database compatibility level, see the topic ALTER DATABASE Compatibility Level
(Transact-SQL) in the SQL Server online documentation:
ALTER DATABASE Compatibility Level (Transact-SQL)
https://aka.ms/Liad9k
Additional Reading: For more information on installing SQL Server 2017, see Module 2 of
this course: Installing SQL Server.
When these steps are complete, you can proceed with the side-by-side migration using one of the
following techniques. Each technique is discussed in more detail later in the lesson:
All these methods can be used to upgrade databases in scenarios that would be unsupported for an in-
place upgrade, such as a move from a 32-bit to a 64-bit version of SQL Server.
Backups are usually smaller than the data files they contain, because free space is not backed up, and
only the active tail of the log file is backed up. To further reduce file size, SQL Server can compress
backups as they are taken (Standard and Enterprise Editions only), or you can compress them with a
compression utility before transfer.
If the database is in full recovery mode, incremental transaction log backups taken from the old
instance of SQL Server can be applied to the SQL Server 2017 copy of the database while it is in
NORECOVERY mode (after an initial full backup has been restored). Doing this can substantially
reduce downtime during the upgrade.
The original database remains unchanged and available for rollback if a problem is found with the
upgrade.
Using backup and restore commands has the following disadvantages:
Sufficient disk space must be available to store both the database backup and the database files, after
they have been restored.
If the database is not in full recovery mode, or you choose not to apply incremental backups,
downtime for the upgrade will include time taken to run a final backup on the old version of SQL
Server; transfer the backup to the new hardware (in a two-server side-by-side upgrade); and restore
the backup to SQL Server 2017.
Because the source database remains available after the backup has been taken, you must manage
client application connections and activity to prevent data changes that will not be reflected in the
upgraded database.
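A minimal sketch of the backup and restore sequence is shown below; the database name, backup path, and logical and physical file names are assumptions for illustration only. The MOVE clauses relocate the data and log files on the new instance:

-- On the old instance: take a full backup (WITH COMPRESSION requires an edition that supports it).
BACKUP DATABASE SalesDB TO DISK = N'D:\Backups\SalesDB.bak' WITH COMPRESSION;

-- On the SQL Server 2017 instance: restore the backup, relocating the files.
RESTORE DATABASE SalesDB
FROM DISK = N'D:\Backups\SalesDB.bak'
WITH MOVE N'SalesDB' TO N'D:\Data\SalesDB.mdf',
     MOVE N'SalesDB_log' TO N'D:\Logs\SalesDB_log.ldf',
     RECOVERY;
GO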
For more information, see the topic Copy Databases with Backup and Restore in the SQL Server online
documentation:
Copy Databases with Backup and Restore
https://aka.ms/Cdrkab
Because the database on the old instance becomes unavailable when the data files are detached, the
cut-off point between the old version of SQL Server and SQL Server 2017 is very clear. There is no risk
of client applications continuing to make changes to the old version of the database after migration
has started.
If you copy (rather than move) the database files, you can roll back to the old version of SQL Server
more quickly, without needing to wait for a database restore to complete, if there’s an issue with
migration.
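As an illustrative sketch of the detach and attach approach (the database name and file paths are placeholders), the database is detached on the old instance, the files are copied or moved, and the copies are attached on the SQL Server 2017 instance:

-- On the old instance: detach the database (example name only).
EXEC sp_detach_db @dbname = N'SalesDB';

-- Copy or move the .mdf and .ldf files to the new server, then on the SQL Server 2017 instance:
CREATE DATABASE SalesDB
ON (FILENAME = N'D:\Data\SalesDB.mdf'),
   (FILENAME = N'D:\Logs\SalesDB_log.ldf')
FOR ATTACH;
GO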
Note: If data files and log files for your SQL Server databases are held on a SAN volume,
you might be able to save the time taken to copy files across the network by detaching or
cloning the SAN volume from your old SQL Server and attaching it to your SQL Server 2017
hardware.
Cloning the SAN volume requires more storage space but will keep the old volume unchanged
should you need to roll back the upgrade. You should discuss this with your SAN administrator.
For more information, see the topic Database Detach and Attach (SQL Server) in the SQL Server online
documentation.
Detach/attach. This method uses detach and attach commands exactly as described earlier in this
lesson. The source database files are detached and duplicated, and the duplicates attached to the
target server. As this method has already been discussed, it is not covered further in this topic.
SQL Server Management Objects (SMO). This method uses two steps:
a. The SMO API is used to generate scripts for objects in the database from the source database
(tables, views, stored procedures, and so on). These scripts are applied to the target database to
create copies of the database objects.
b. A SQL Server Integration Services (SSIS) package is generated to transfer data from the source
database to the target database. As an option, the SSIS package can be saved for reference or
modification.
Using the SMO method of the Copy Database Wizard has the following advantages:
When the wizard is used to copy a database between SQL Server instances, options are available to
create server-level objects related to the database being transferred (including logins, SQL Server
Agent jobs, and SSIS packages).
The source database remains available whilst the copy is being carried out.
Storage space is only required for the source and target databases. No storage space is required for
database files or backups.
Using the SMO method of the Copy Database Wizard has the following disadvantages:
Copying data between databases like this will usually be significantly slower than using a backup or
“detach and attach”.
The source database remains available whilst the copy is being carried out. If client applications are
permitted to make changes to the source database whilst the data transfer is taking place, the data
copied to the target system might not be completely consistent; changes to one table could take
place before it is copied and changes to another table might take place after it is copied.
Note: To carry out similar steps to the Copy Database Wizard manually, you could:
Write your own SMO scripts or script database objects using Transact-SQL.
Create your own SSIS package to transfer data, or use another tool such as bcp.exe.
Similar advantages and disadvantages apply to manually created steps as to the Copy
Database Wizard.
For more information, see the topic Use the Copy Database Wizard in the SQL Server online
documentation.
When a database is migrated between SQL Server instances, as part of an upgrade or for any other
reason, database users and groups are copied with it (because the users and groups are stored within the
database). However, the server-level logins associated with the users will not be automatically transferred.
Unless you plan to migrate your database with the Copy Database Wizard SQL Server Management
Objects method (which can optionally be configured to create logins), you will need to create the same
logins on the target SQL Server instance as the ones used to access the database being transferred on the
source SQL Server instance.
Logins can be created with Transact-SQL using the CREATE LOGIN command, or through SQL Server
Management Objects (SMO) using the Login.Create method.
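For example, a sketch of creating a SQL Server authentication login and a Windows authentication login with Transact-SQL (the names and password are hypothetical):

CREATE LOGIN app_login WITH PASSWORD = N'Str0ng_P@ssw0rd!';     -- SQL Server login
CREATE LOGIN [ADVENTUREWORKS\AppService] FROM WINDOWS;          -- Windows login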
Scripts to recreate existing logins can be generated from SQL Server Management Studio (SSMS); in
Object Explorer, under the server’s Security node, expand Logins, right-click the login you want to script,
point to Script Login as, click CREATE To, and then click an output method.
When generating a script for a SQL Server login, SSMS will not include the actual password value; this is a
security feature designed to prevent compromise of SQL Server logins. Instead, the script generated by
SSMS will have a random value for the login’s password.
If you wish to use the same password for a SQL Server login on your source and target systems, two
options are available:
Determine the current password for the login on the source system and create the login on the target
system using a CREATE LOGIN WITH PASSWORD Transact-SQL command.
Identify the hashed password value SQL Server uses to store the password and use the password hash
to create the login on the target system, using a CREATE LOGIN WITH PASSWORD HASHED Transact-
SQL command.
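A sketch of the second option; the hash and SID values shown are placeholders and must be replaced with the values returned from the source instance:

-- On the source instance, retrieve the password hash and SID of the login.
SELECT name, sid, password_hash
FROM sys.sql_logins
WHERE name = N'appuser';

-- On the target instance, create the login using the hashed password and the original SID.
CREATE LOGIN appuser
WITH PASSWORD = 0x0200A1B2C3D4E5F6 HASHED,       -- placeholder: use the password_hash value returned above
     SID = 0x912AB3C4D5E6F70819203A4B5C6D7E8F;   -- placeholder: use the sid value returned above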
Contained Databases
SQL Server 2012 and later versions support contained databases. A contained database enables many
features that are normally provided at instance level (such as user authentication) to be handled at
database level instead. Contained database users (that is, users with a password stored in the database)
can be used to break the dependency between logins and database users. Contained databases might be
partially or fully contained; partial containment allows some features to be contained in the database and
others to be carried out at server level. You should confirm which features are contained when planning
to upgrade a contained database.
Orphaned Users
If a database user is linked to a login that does not
exist on the SQL Server instance hosting the
database, the user is said to be orphaned, or an
orphan.
The user belongs to a database that has been restored or attached from another SQL Server instance;
the linked login has never been created on the instance now hosting the database.
Users associated with a SQL Server login can be repaired using the system stored procedure
sp_change_users_login.
Users associated with a SQL Server login or a Windows login can be repaired with the ALTER USER
WITH LOGIN Transact-SQL command.
An orphaned user can also be repaired by creating a SQL Server login with a security identifier (SID) that
matches the SID found in the user’s definition (see the system view sys.database_principals for details). The
user definition is attached to the login’s SID, not the login name. The CREATE LOGIN command can take a
SID as an optional parameter.
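A brief sketch of detecting and repairing orphaned users (the database, user, and login names are hypothetical):

USE SalesDB;

-- Report database users whose SIDs do not match any login on this instance.
EXEC sp_change_users_login @Action = 'Report';

-- Repair an orphaned user by remapping it to an existing login.
ALTER USER appuser WITH LOGIN = appuser;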
For more information, see the topic Troubleshoot Orphaned Users (SQL Server) in the SQL Server online
documentation.
The following example demonstrates how to enable the show advanced options setting:
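EXEC sp_configure 'show advanced options', 1;   -- enable display of advanced settings
RECONFIGURE;                                    -- apply the change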
By running this command on the source and target SQL Server instances involved in your upgrade, you
can compare the settings from both servers to determine whether any settings should be changed.
Note: Settings can be added and removed between versions of SQL Server. The list of
settings returned by your source and target servers might not return an identical list of settings.
The value of an individual setting might be changed by executing sp_configure with two parameters—the
first parameter being the setting to change, and the second parameter being the new value. After a
setting has been amended, a RECONFIGURE or RECONFIGURE WITH OVERRIDE command must be issued
to apply the new setting. Alternatively, the new setting can be applied by restarting the SQL Server
instance.
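For example, a sketch of changing a setting and applying it; the option and value here are purely illustrative, and max degree of parallelism is an advanced option, so show advanced options must already be enabled:

EXEC sp_configure 'max degree of parallelism', 4;  -- first parameter: the setting; second: the new value
RECONFIGURE;                                       -- apply the amended setting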
For more information, see the topic Server Configuration Options (SQL Server) in the SQL Server online
documentation.
Demonstration Steps
1. Ensure that the 20765C-MIA-DC-UPGRADE and 20765C-MIA-SQL-UPGRADE virtual machines are
running, and log on to 20765C-MIA-SQL-UPGRADE as ADVENTUREWORKS\Student with the
password Pa55w.rd.
2. On the taskbar, click the SQL Server Management Studio shortcut.
6. Select the code under the comment Demonstration - Login and User, and then click Execute.
7. Select the code under the comment Step 1, and then click Execute.
8. Select the code under the comment Step 2, and then click Execute.
9. Select the code under the comment Step 3, and then click Execute.
10. Select the code under the comment Step 4, and then click Execute.
13. Examine the generated script. Note that the password is not correct, and then close the tab.
14. Select the code under the comment Step 6, and then click Execute.
15. Close SQL Server Management Studio without saving changes.
Verify the correctness of the statement by placing a mark in the column to the right.
Statement Answer
You have been given a full database backup and a transaction log backup taken from the SQL Server 2014
instance. You must restore the database to upgrade it to SQL Server 2017, and create any missing logins.
Objectives
After completing this lab, you will be able to:
Create a database login with SSMS and via the CREATE LOGIN command.
2. Using the SSMS UI, create a login with the following credentials:
o Login: appuser
o Password: Pa55w.rd1
2. Write a CREATE LOGIN statement to create a login with the name reportuser and the SID and
password hash values specified in the Transact-SQL script.
Results: After this exercise, you should be able to create a login using SSMS and the CREATE LOGIN
command.
3. Update Statistics
Restore a database backup taken from one SQL Server instance and restore it to another.
Detect and repair orphaned users.
Question: In the task where you ran a report for orphaned users, why was only one
orphaned user found, even though the database had two users?
You should be able to carry out an in-place or a side-by-side upgrade to SQL Server 2017, and be aware
of any necessary post-upgrade tasks.
Review Question(s)
Question: Which upgrade strategy would best suit your organization? Why?
Module 4
Working with Databases
Contents:
Module Overview 4-1
Lesson 1: Introduction to Data Storage with SQL Server 4-2
Module Overview
One of the most important roles for database administrators who work with Microsoft® SQL Server® is the
management of databases. This module provides information about how you can manage your system
and user databases, and associated files.
Objectives
After completing this module, you will be able to:
Lesson 1
Introduction to Data Storage with SQL Server
To effectively create and manage databases, you must understand SQL Server files, file location, and
planning for growth, in addition to how data is stored.
Lesson Objectives
After completing this lesson, you will be able to:
Explain how specific redundant array of independent disks (RAID) systems work.
Determine appropriate file placement and the number of files for SQL Server databases.
Database Files
Three types of database file are used by SQL
Server—primary data files, secondary data files, and
transaction log files.
When data pages need to be changed, they are fetched into memory and changed there. The changed
pages are marked as “dirty”. Log records describing the changes are written to the transaction log in a
synchronous manner. Later, during a background process known as a “checkpoint”, the dirty pages are
written to the database files. For this reason, the records in the transaction log are critical to the ability of
SQL Server to recover the database to a known committed state. Transaction logs are discussed in detail in this course.
For more information about transaction log files, see SQL Server Transaction Log Architecture and
Management in TechNet.
Note: A logical write occurs when data is changed in memory (buffer cache).
A physical write occurs when data is changed on disk.
Note: The log file is also used by other SQL Server features, such as transactional
replication, database mirroring, and change data capture. These are advanced topics and beyond
the scope of this course.
Extents
Groups of eight contiguous pages are referred to as an extent. SQL Server uses extents to simplify the
management of data pages. There are two types of extents:
Uniform Extents. All pages within the extent contain data from only one object.
Mixed Extents. The pages of the extent can hold data from different objects.
The first allocation for an object is at the page level, and always comes from a mixed extent. If they are
free, other pages from the same mixed extent will be allocated to the object as needed. Once the object
has grown bigger than its first extent, all future allocations come from uniform extents.
In both primary and secondary data files, a small number of pages are allocated to track the usage of
extents in the file.
Windows Storage Pools. In Windows storage pools, you can group drives together in a pool, and
then create storage spaces which are virtual drives. You can then use commodity storage hardware to
create large storage spaces and add more drives when you run low on pool capacity. You can create
storage pools from internal and external hard drives (including USB, SATA, and SAS), and from solid-
state drives.
RAID Levels
Many storage solutions use RAID hardware to
provide fault tolerance through data redundancy,
and in some cases, to improve performance. You
can also implement software-controlled RAID 0,
RAID 1, and RAID 5 by using the Windows Server
operating system; other levels might be supported
by third-party SANs. Commonly used types of RAID
include:
RAID 0, Disk Striping. A stripe set is a logical storage volume based on space striped across two or more
disks. Striping can improve I/O performance, particularly when each disk device has its own hardware
controller. RAID 0 offers no redundancy, and if a single disk fails, the volume becomes inaccessible.
RAID 1, Disk Mirroring. A mirror set is a logical storage volume based on space from two disks, with
one disk storing a redundant copy of the data on the other. Mirroring can provide good read
performance, but write performance can suffer. RAID 1 is expensive in terms of storage because 50
percent of the available disk space is used to store redundant data.
RAID 5, Disk Striping with Parity. RAID 5 offers fault tolerance through the use of parity data that is
written across all the disks in a striped volume, comprised of space from three or more disks. RAID 5
typically performs better than RAID 1. However, if a disk in the set fails, performance degrades. RAID
5 is less costly than RAID 1 in terms of disk space because parity data only requires the equivalent of
one disk in the set to store it. For example, in an array of five disks, four would be available for data
storage, which represents 80 percent of the total disk space.
RAID 10, Mirroring with Striping. In RAID 10, a nonfault tolerant RAID 0 stripe set is mirrored. This
arrangement delivers the excellent read/write performance of RAID 0, combined with the fault
tolerance of RAID 1. However, RAID 10 can be expensive to implement because, like RAID 1, 50
percent of the total space is used to store redundant data.
Write operations on RAID 5 can sometimes be relatively slow compared to RAID 1, because of the
need to calculate parity data (RAID 5). If you have a high proportion of write activity, therefore, RAID
5 might not be the best candidate.
Consider the cost per GB. For example, implementing a 500 GB database on a RAID 1 mirror set
would require (at least) two 500 GB disks. Implementing the same database on a RAID 5 array would
require substantially less storage space.
Many databases use a SAN, and the performance characteristics can vary between SAN vendors. For
this reason, if you use a SAN, you should consult your vendors to identify the optimal solution for
your requirements.
With Windows storage spaces, you can create extensible RAID storage solutions that use commodity
disks. This solution offers many of the benefits of a specialist SAN hardware solution, at a significantly
lower cost.
For more information about RAID levels (SQL Server 2008 notes), see:
RAID Levels and SQL Server
http://aka.ms/A87k06
Access Patterns
The access patterns of log and data files are very different. Data access on log files consists primarily of
sequential, synchronous writes, with occasional random disk access. Data access on data files consists
predominantly of asynchronous, random reads and writes. A single physical storage device does not tend
to provide good response times when these types of data access are combined.
Recovery
While RAID volumes provide some protection from physical storage device failures, complete volume
failures can still occur. If a SQL Server data file is lost, the database can be restored from a backup and the
transaction log reapplied to recover the database to a recent point in time. If a SQL Server log file is lost,
the database can be forced to recover from the data files, with the possibility of some data loss or
inconsistency in the database. However, if both the data and log files are on a single disk subsystem that is
lost, the recovery options usually involve restoring the database from an earlier backup and losing all
transactions since that time. Isolating data and log files can help to avoid the worst impacts of drive
subsystem failures.
Note: Storage solutions use logical volumes as units of storage; a common mistake is to
place data files and log files on different volumes that are based on the same physical storage
devices. When isolating data and log files, ensure that volumes on which you store data and log
files are based on separate underlying physical storage devices.
A reduction in recovery time when separately restoring a database file (for example, if only part of the
data is corrupt).
The ability to have databases larger than the maximum size of a single Windows file.
Many administrators are concerned that larger database files will somehow increase the time it takes to
perform backups. The size of a SQL Server backup is not related directly to the size of the database files
because only pages that actually contain data are backed up.
One significant issue that arises with autogrowth is a trade-off related to the size of the growth
increments. If a large increment is specified, there might be a significant delay in the execution of the
Transact-SQL statement that triggers the need for growth. If the specified increment is too small, the
filesystem can become very fragmented and the database performance can suffer, because the data files
have been allocated in small chunks all over a disk subsystem.
In addition to expanding the size of the transaction log, you can also truncate a log file. Truncating the
log purges the file of inactive, committed, transactions and means the SQL Server database engine can
reuse this part of the transaction log. However, you should be careful when truncating the transaction log,
because doing so might affect the recoverability of the database in the event of a failure. Generally, log
truncation is managed as part of a backup strategy.
SAN
DAS
Question: When determining file placement and the number of files, what should you
consider?
Lesson 2
Managing Storage for System Databases
SQL Server uses system databases to maintain internal metadata. Database administrators should be
familiar with the SQL Server system databases and how to manage them.
Lesson Objectives
After completing this lesson, you will be able to:
Configure tempdb.
master
The master database contains all system-wide
information. Anything that is defined at the server
instance level is typically stored in the master
database. If the master database is damaged or
corrupted, SQL Server will not start, so you must
back it up on a regular basis.
msdb
The msdb database holds information about database maintenance tasks; in particular, it contains
information used by the SQL Server Agent for maintenance automation, including jobs, operators, and
alerts. It is also important to regularly back up the msdb database, to ensure that jobs, schedules, history
for backups, restores, and maintenance plans are not lost. In earlier versions of SQL Server, SQL Server
Integration Services (SSIS) packages were often stored within the msdb database. From SQL Server 2014
onward, you should store them in the dedicated SSIS catalog database instead.
model
The model database is the template on which all user databases are established. Any new database uses
the model database as a template. If you create any objects in the model database, they will then be
present in all new databases on the server instance. Many sites never modify the model database. Note
that, even though the model database does not seem overly important, SQL Server will not start if the
model database is not present.
tempdb
The tempdb database holds temporary data. SQL Server re-creates this database every time it starts, so
there is no need to perform a backup. In fact, there is no option to perform a backup of the tempdb
database.
resource
The resource database is a read-only hidden database that contains system objects mapped to the sys
schema in every database. This database also holds all system stored procedures, system views and system
functions. In SQL Server versions before SQL Server 2005, these objects were defined in the master
database.
2. In the SQL Server Services node, right-click the instance of SQL Server, click Properties, and then click
the Startup Parameters tab.
3. Edit the Startup Parameters values to point to the planned location for the master database data (-
d parameter) and log (-l parameter) files.
Note: Working with internal objects is an advanced concept beyond the scope of this
course.
Row Versions. Transactions that are associated with snapshot-related transaction isolation levels can
cause alternate versions of rows to be briefly maintained in a special row version store within
tempdb. Row versions can also be produced by other features, such as online index rebuilds, Multiple
Active Result Sets (MARS), and triggers.
User Objects. Most objects that reside in the tempdb database are user-generated and consist of
temporary tables, table variables, result sets of multistatement table-valued functions, and other
temporary row sets.
Because tempdb is used for so many purposes, it is difficult to predict its required size in advance. You
should carefully test and monitor the sizes of your tempdb database in real-life scenarios for new
installations. Running out of disk space in the tempdb database can cause significant disruptions in the
SQL Server production environment, in addition to preventing running applications from completing their
operations. You can use the sys.dm_db_file_space_usage dynamic management view to monitor the disk
space that the files are using. Additionally, to monitor the page allocation or deallocation activity in
tempdb at the session or task level, you can use the sys.dm_db_session_space_usage and
sys.dm_db_task_space_usage dynamic management views.
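For example, a sketch of a query that summarizes current tempdb space usage (page counts are multiplied by 8 to convert them to KB):

SELECT SUM(user_object_reserved_page_count)     * 8 AS user_objects_kb,
       SUM(internal_object_reserved_page_count) * 8 AS internal_objects_kb,
       SUM(version_store_reserved_page_count)   * 8 AS version_store_kb,
       SUM(unallocated_extent_page_count)       * 8 AS free_space_kb
FROM tempdb.sys.dm_db_file_space_usage;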
By default, the tempdb database automatically grows as space is required, because the MAXSIZE of the
files is set to UNLIMITED. Therefore, tempdb can continue growing until space on the disk that contains it
is exhausted.
Note: SQL Server setup adds one tempdb data file for each logical processor, up to a maximum of
eight. If there are more than eight logical processors, setup defaults the number of files to eight.
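Moving the tempdb files is done with ALTER DATABASE … MODIFY FILE; the new locations take effect only after the SQL Server service restarts. A minimal sketch, assuming the default logical file names tempdev and templog and a hypothetical target path:

ALTER DATABASE tempdb MODIFY FILE (NAME = tempdev, FILENAME = 'T:\tempdb.mdf');
ALTER DATABASE tempdb MODIFY FILE (NAME = templog, FILENAME = 'T:\templog.ldf');
-- Restart the SQL Server service for the files to be created in the new location.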
Demonstration Steps
Move tempdb Files
1. Ensure that the MT17B-WS2016-NAT, 20765C-MIA-DC, and 20765C-MIA-SQL virtual machines are
running, and log on to 20765C-MIA-SQL as ADVENTUREWORKS\Student with the password
Pa55w.rd.
2. In the D:\Demofiles\Mod04 folder, run Setup.cmd as Administrator. Click Yes when prompted.
3. Start SQL Server Management Studio and connect to the MIA-SQL database engine using Windows
authentication.
4. In Object Explorer, expand Databases, expand System Databases, and then right-click tempdb and
click Properties.
5. In the Database Properties - tempdb dialog box, on the Files page, note the current files and their
location, and then click Cancel.
7. View the code in the script, and then click Execute. Note the message that is displayed after the code
has run.
8. View the contents of T:\ and note that no files have been created in that location, because the SQL
Server service has not yet been restarted.
9. In Object Explorer, right-click MIA-SQL and click Restart. Click Yes when prompted.
10. If the User Account prompt is displayed, click Yes to allow SQL Server to make changes. When
prompted to allow changes, to restart the service, and to stop the dependent SQL Server Agent
service, click Yes.
11. View the contents of T:\ and note that the tempdb.mdf and templog.ldf files have been moved to
this location.
12. Keep SQL Server Management Studio open for the next demonstration.
master
adventureworks
model
tempdb
resource
Sequencing Activity
Put the following steps in order by numbering each to indicate the correct order.
Steps
Lesson 3
Managing Storage for User Databases
User databases are nonsystem databases that you create for applications. Creating databases is a core
competency for database administrators working with SQL Server. In addition to understanding how to
create them, you need to be aware of the impact of file initialization options and know how to alter
existing databases.
When creating databases, you should also consider where the data and logs will be stored on the file
system. You might also want to change this or provide additional storage when the database is in use.
When databases become larger, the data should be allocated across different volumes, rather than storing
it in a single large disk volume. This allocation of data is configured using filegroups and is used to
address both performance and ongoing management needs within databases.
Lesson Objectives
After completing this lesson, you will be able to:
Create databases
Alter databases
CREATE DATABASE
Database names must be unique within an instance
of SQL Server and comply with the rules for
identifiers. A database name is of data type sysname, which is defined as nvarchar(128). This means that
up to 128 characters can be present in the database name and that each character can be chosen from
the double-byte Unicode character set. Although database names can therefore be quite long, very long
names quickly become awkward to work with.
Data Files
As discussed earlier in this module, a database must have at least one primary data file and one log file.
The ON and LOG ON clauses of the CREATE DATABASE command specify the name and path to use.
In the following code, a database named Sales is being created, comprising two files—a primary data
file located at M:\Data\Sales.mdf and a log file located at L:\Logs\Sales.ldf.
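(The logical file names below are assumptions; the size and growth settings match the description that follows.)

CREATE DATABASE Sales
ON PRIMARY
(   NAME = Sales_data,
    FILENAME = 'M:\Data\Sales.mdf',
    SIZE = 100MB,
    MAXSIZE = 500MB,
    FILEGROWTH = 20% )
LOG ON
(   NAME = Sales_log,
    FILENAME = 'L:\Logs\Sales.ldf',
    SIZE = 20MB,
    MAXSIZE = UNLIMITED,
    FILEGROWTH = 10MB );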
Each file includes a logical file name in addition to a physical file path. Because operations in SQL Server
use the logical file name to reference the file, the logical file name must be unique within each database.
In this example, the primary data file has an initial file size of 100 MB and a maximum file size of 500 MB.
It will grow by 20 percent of its current size whenever autogrowth needs to occur. The log file has an
initial file size of 20 MB and has no limit on maximum file size. Each time it needs to autogrow, it will grow
by a fixed 10 MB allocation.
While it is possible to create a database by providing just the database name, this results in a database
that is based on the model database—with the data and log files in the default locations—which is
unlikely to be the configuration that you require.
Deleting Databases
To delete (or “drop”) a database, right-click it in Object Explorer and click Delete or use the DROP
DATABASE Transact-SQL statement. Dropping a database automatically deletes all of its files.
Dropping a Database
DROP DATABASE Sales;
Categories of Options
There are several categories of database options:
Auto Options. They control certain automatic behaviors. As a general guideline, Auto Close and Auto
Shrink should be turned off on most systems but Auto Create Statistics and Auto Update Statistics
should be turned on.
Cursor Options. They control cursor behavior and scope. In general, the use of cursors when working
with SQL Server is not recommended, apart from for particular applications such as utilities. Cursors
are not discussed further in this course but it should be noted that their overuse is a common cause
of performance issues.
Database Availability Options. They control whether the database is online or offline, who can
connect to it, and whether or not it is in read-only mode.
o Recovery Model. For more information about database recovery models, refer to course 20764B:
Administering a SQL Server Database Infrastructure.
o Page Verify. Early versions of SQL Server offered an option called Torn Page Detection. This
option caused SQL Server to write a small bitmap across each disk drive sector within a database
page. There are 512 bytes per sector, meaning that there are 16 sectors per database page (8 KB).
This was a fairly crude, yet reasonably effective, way to detect a situation where only some of the
sectors required to write a page were in fact written. In SQL Server 2005, a new CHECKSUM
verification option was added. The use of this option causes SQL Server to calculate and add a
checksum to each page as it is written and to recheck the checksum whenever a page is retrieved
from disk.
Note: Page checksums are only added the next time that any page is written. Enabling the
option does not cause every page in the database to be rewritten with a checksum.
Demonstration Steps
Create a Database by Using SQL Server Management Studio
1. Ensure that you have completed the previous demonstration. If not, start the 20765C-MIA-DC and
20765C-MIA-SQL virtual machines, log on to 20765C-MIA-SQL as ADVENTUREWORKS\Student
with the password Pa55w.rd, and run D:\Demofiles\Mod04\Setup.cmd as Administrator.
2. If SQL Server Management Studio is not open, start it and connect to the MIA-SQL database engine
using Windows authentication.
o DemoDB1:
o Path: D:\Demofiles\Mod04
o DemoDB1_log:
o Path: D:\Demofiles\Mod04
8. On the Options tab, review the database options, and then click Cancel.
2. Select the code under the comment Create a database and click Execute to create a database
named DemoDB2.
3. Select the code under the comment View database info and click Execute. View the information
that is returned.
4. Keep SQL Server Management Studio open for the next demonstration.
Note: Many of the database set options that you configure by using the ALTER DATABASE
statement can be overridden using a session level set option. This means users or applications can
execute a SET statement to configure the setting just for the current session.
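A brief sketch, using the ANSI_NULLS option and a hypothetical Sales database:

ALTER DATABASE Sales SET ANSI_NULLS ON;   -- database-level default
SET ANSI_NULLS OFF;                       -- a session can override the database setting for its own connection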
For more information about database set options, see ALTER DATABASE SET Options (Transact-SQL) in the
SQL Server online documentation.
The values you can use are described in the following table:
If a database has already exhausted the space allocated to it and cannot grow a data file automatically,
error 1105 is raised. (The equivalent error number for the inability to grow a transaction log file is 9002.)
This can happen if the database is not set to grow automatically, or if there is not enough disk space on
the hard drive.
Adding Files
One option for expanding the size of a database is to add files. You can do this by using either SSMS or by
using the ALTER DATABASE … ADD FILE statement.
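A sketch of adding a secondary data file (the logical name, path, and sizes are hypothetical):

ALTER DATABASE Sales
ADD FILE
(   NAME = Sales_data2,
    FILENAME = 'M:\Data\Sales_data2.ndf',
    SIZE = 100MB,
    FILEGROWTH = 100MB );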
Expanding Files
When expanding a database, you must increase its size by at least 1 MB. Ideally, any file size increase
should be much larger than this. Increases of 100 MB or more are common.
When you expand a database, the new space is immediately made available to either the data or
transaction log file, depending on which file was expanded. When you expand a database, you should
specify the maximum size to which the file is permitted to grow. This prevents the file from growing until
disk space is exhausted. To specify a maximum size for the file, use the MAXSIZE parameter of the ALTER
DATABASE statement, or use the Restrict filegrowth (MB) option when you use the Properties dialog box
in SSMS to expand the database.
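A sketch of expanding a file and restricting its growth (the logical file name and sizes are hypothetical):

ALTER DATABASE Sales MODIFY FILE (NAME = Sales_data, SIZE = 200MB);    -- expand the file
ALTER DATABASE Sales MODIFY FILE (NAME = Sales_data, MAXSIZE = 1GB);   -- cap its maximum size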
Transaction Log
If the transaction log is not set up to expand automatically, it can run out of space when certain types of
activity occur in the database. In addition to expanding the size of the transaction log, the log file can be
truncated. Truncating the log purges the file of inactive, committed transactions and allows the SQL
Server database engine to reuse this unused part of the transaction log. If there are active transactions,
the log file might not be able to be truncated—expanding it might be the only available option.
Shrinking a Database
You can reduce the size of the files in a database by removing unused pages. Although the database
engine will reuse space effectively, there are times when a file no longer needs to be as large as it once
was. Shrinking the file might then become necessary, but it should be considered a rarely used option.
You can shrink both data and transaction log files—this can be done manually, either as a group or
individually, or you can set the database to shrink automatically at specified intervals.
Note: Shrinking a file usually involves moving pages within the files, which can take a long
time.
Regular shrinking of files tends to lead to regrowth of files. For this reason, even though SQL
Server provides an option to automatically shrink databases, this should only be rarely used. As in
most databases, enabling this option will cause substantial fragmentation issues on the disk
subsystem. It is best practice to only perform shrink operations if absolutely necessary.
Truncate Only
TRUNCATEONLY is an additional option of DBCC SHRINKFILE that releases all free space at the end of
the file to the operating system, but does not perform any page movement inside the file. The data file is
shrunk only to the last allocated extent. This option often does not shrink the file as effectively as a
standard DBCC SHRINKFILE operation, but is less likely to cause substantial fragmentation and is much
faster.
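A sketch of both forms of the command (the logical file name and target size are hypothetical):

DBCC SHRINKFILE (N'Sales_data', TRUNCATEONLY);  -- release free space at the end of the file only
DBCC SHRINKFILE (N'Sales_data', 150);           -- standard shrink towards a 150 MB target size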
Introduction to Filegroups
As you have seen in this module, databases consist
of at least two files: a primary data file and a log
file. To improve performance and manageability of
large databases, you can add secondary files.
Every database has a primary filegroup (named PRIMARY); when you add secondary data files to the
database, they automatically become part of the primary filegroup, unless you specify a different
filegroup.
When planning to use filegroups, consider the following facts:
Additionally, you can back up and restore files and filegroups individually. This means you can achieve
faster backup times because you only need to back up the files or filegroups that have changed, instead
of backing up the entire database. Similarly, you can achieve efficiencies when it comes to restoring data.
SQL Server also supports partial backups. You can use a partial backup to separately back up read-only
and read/write filegroups. You can then use these backups to perform a piecemeal restore—to restore
individual filegroups one by one, and bring the database back online, filegroup by filegroup.
Note: You will learn more about partial backups and piecemeal restores later in this course.
When a filegroup contains multiple files, SQL Server can write to all of the files simultaneously, and it
populates them by using a proportional fill strategy. Files of the same size will have the same amount of
data written to them, ensuring that they fill at a consistent rate. Files of different sizes will have different
amounts of data written to them to ensure that they fill up at a proportionally consistent rate. The fact
that SQL Server can write to filegroup files simultaneously means you can use a filegroup to implement a
simple form of striping. You can create a filegroup that contains two or more files, each of which is on a
separate disk. When SQL Server writes to the filegroup, it can use the separate I/O channel for each disk
concurrently, which results in faster write times.
Note: Generally, you should use filegroups primarily to improve manageability and rely on
storage device configuration for I/O performance. However, when a striped storage volume is not
available, using a filegroup to spread data files across physical disks can be an effective
alternative.
Creating Filegroups
You can create additional filegroups and assign files
to them as you create a database; or you can add
new filegroups and files to an existing database.
In the following code example, the Sales database includes a primary filegroup containing a single file
named sales.mdf, a filegroup named Transactions containing two files named sales_tran1.ndf and
sales_tran2.ndf, and a filegroup named Archive containing two files named sales_archive1.ndf and
sales_archive2.ndf.
Creating Filegroups
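(The logical file names and drive paths below are assumptions; the physical file names match the description above.)

CREATE DATABASE Sales
ON PRIMARY
(   NAME = Sales_data, FILENAME = 'M:\Data\sales.mdf' ),
FILEGROUP Transactions
(   NAME = Sales_tran1, FILENAME = 'M:\Data\sales_tran1.ndf' ),
(   NAME = Sales_tran2, FILENAME = 'M:\Data\sales_tran2.ndf' ),
FILEGROUP Archive
(   NAME = Sales_archive1, FILENAME = 'N:\Data\sales_archive1.ndf' ),
(   NAME = Sales_archive2, FILENAME = 'N:\Data\sales_archive2.ndf' )
LOG ON
(   NAME = Sales_log, FILENAME = 'L:\Logs\sales.ldf' );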
To add filegroups to an existing database, you can use the ALTER DATABASE … ADD FILEGROUP
statement. To add files to the new filegroup, you can then use the ALTER DATABASE … ADD FILE
statement.
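For example (the filegroup name, file name, and path are hypothetical):

ALTER DATABASE Sales ADD FILEGROUP History;
ALTER DATABASE Sales
ADD FILE (NAME = Sales_history1, FILENAME = 'N:\Data\sales_history1.ndf')
TO FILEGROUP History;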
The following code example changes the default filegroup in the Sales database to the Transactions
filegroup:
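ALTER DATABASE Sales MODIFY FILEGROUP Transactions DEFAULT;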
To make a filegroup read-only, use the ALTER DATABASE … MODIFY FILEGROUP statement with the
READONLY option.
To make a read-only filegroup writable, use the ALTER DATABASE … MODIFY FILEGROUP statement with
the READWRITE option.
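For example, for the Archive filegroup used in the earlier examples:

ALTER DATABASE Sales MODIFY FILEGROUP Archive READONLY;    -- make the filegroup read-only
ALTER DATABASE Sales MODIFY FILEGROUP Archive READWRITE;   -- make it writable again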
Verify the correctness of the statement by placing a mark in the column to the right.
Statement: True or false? Database files can belong to many filegroups.
Answer:
Verify the correctness of the statement by placing a mark in the column to the right.
Statement: When expanding a database, you must increase its size by at least 1 MB.
Answer:
Question: What Transact-SQL would you use to modify the default filegroup in the
Customers database to the Transactions filegroup?
Lesson 4
Moving and Copying Database Files
Along with adding and removing files from a database, you might sometimes need to move database
files, or even whole databases. You might also need to copy a database.
Lesson Objectives
After completing this lesson, you will be able to:
The following example shows how to move the data file for the AdventureWorks database:
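(The logical data file name and the target path below are assumptions.)

ALTER DATABASE AdventureWorks
MODIFY FILE (NAME = AdventureWorks_Data, FILENAME = 'N:\Data\AdventureWorks_Data.mdf');
-- Take the database offline, move the physical file to the new location using the
-- operating system, and then bring the database back online:
ALTER DATABASE AdventureWorks SET OFFLINE;
ALTER DATABASE AdventureWorks SET ONLINE;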
5. Copy other database objects, jobs, user-defined stored procedures, and error messages.
Note: You can also copy database metadata, such as login information and other necessary
master database objects.
Detaching Databases
You detach databases from an instance of SQL
Server by using SSMS or the sp_detach_db stored
procedure. Detaching a database does not remove
the data from the data files or remove the data files
from the server. It removes the metadata entries for
that database from the system databases on that
SQL Server instance. The detached database then no longer appears in the list of databases in SSMS or in
the results of the sys.databases view. After you have detached a database, you can move or copy it, and
then attach it to another instance of SQL Server.
UPDATE STATISTICS
SQL Server maintains a set of statistics on the distribution of data in tables and indexes. As part of the
detach process, you can specify an option to perform an UPDATE STATISTICS operation on table and
index statistics. While this is useful if you are going to reattach the database as a read-only database, in
general it is not a good option to use while detaching a database.
Detachable Databases
Not all databases can be detached. Databases that are configured for replication, mirrored, or in a suspect
state, cannot be detached.
Note: Replicated and mirrored databases are advanced topics beyond the scope of this
course.
A more common problem that prevents a database from being detached when you attempt to perform
the operation, is that connections are open to the database. You must ensure that all connections are
dropped before detaching the database. SSMS offers an option to force connections to be dropped
during this operation.
Attaching Databases
You can also use SSMS or the CREATE DATABASE … FOR ATTACH statement to attach databases.
Note: You may find many references to the sp_attach_db and sp_attach_single_file_db
stored procedures. These older system stored procedures are replaced by the FOR ATTACH
option to the CREATE DATABASE statement. Note also that there is no equivalent replacement
for the sp_detach_db procedure.
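A brief sketch of detaching a database and attaching it on another instance (the database name and paths are hypothetical):

-- On the source instance:
EXEC sp_detach_db @dbname = N'Sales', @skipchecks = N'true';

-- Copy or move the files, then on the target instance:
CREATE DATABASE Sales
ON (FILENAME = N'M:\Data\Sales.mdf'),
   (FILENAME = N'L:\Logs\Sales.ldf')
FOR ATTACH;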
Note: A common problem when databases are reattached is that database users can
become orphaned. For more information about this problem and its solutions, see Module 3.
Detach a database.
Attach a database.
Demonstration Steps
Detach a Database
1. Ensure that you have completed the previous demonstrations in this module, and that you have
created a database named DemoDB2.
2. In Object Explorer, right-click the Databases folder and click Refresh; verify that the DemoDB2
database is listed.
4. In the Detach Database dialog box, select Drop Connections and Update Statistics, and then click
OK.
5. View the M:\Data and L:\Logs folders and verify that the DemoDB2.mdf and DemoDB2.ldf files
have not been deleted.
Attach a Database
1. In SQL Server Management Studio, in Object Explorer, in the Connect drop-down list, click Database
Engine.
3. In Object Explorer, under MIA-SQL\SQL2, expand Databases and view the databases on this
instance.
4. In Object Explorer, under MIA-SQL\SQL2, right-click Databases and click Attach.
6. In the Locate Database Files - MIA-SQL\SQL2 dialog box, select the M:\Data\DemoDB2.mdf
database file, then click OK.
7. In the Attach Databases dialog box, after you have added the master database file, note that all of
the database files are listed, then click OK.
8. In Object Explorer, under MIA-SQL\SQL2, under Databases, verify that DemoDB2 is now listed.
MOVE DATABASE
MODIFY FILE
UPDATE PATH
ALTER PATH
ALTER DATABASE
Verify the correctness of the statement by placing a mark in the column to the right.
Statement Answer
Lesson 5
Configuring the Buffer Pool Extension
The topics in this module so far have discussed the storage of system and user database files. However,
SQL Server also supports the use of high performance storage devices, such as solid-state disks (SSDs), to
extend the buffer pool (the in-memory cache that SQL Server uses to read and modify data pages).
Lesson Objectives
After completing this lesson, you will be able to:
Explain the considerations needed when working with the buffer pool extension.
Configure the buffer pool extension.
Only clean pages, containing data that is committed, are stored in the buffer pool extension, ensuring that
there is no risk of data loss in the event of a storage device failure. Additionally, if a storage device
containing the buffer pool extension fails, the extension is automatically disabled. You can easily re-enable
the extension when the failed storage device has been replaced.
Performance gains on online transaction processing (OLTP) applications with a high amount of read
operations can be improved significantly.
SSD devices are often less expensive per megabyte than physical memory, making this a cost-
effective way to improve performance in I/O-bound databases.
The buffer pool extension is easily enabled and requires no changes to existing applications.
Note: The buffer pool extension is not supported in all SQL Server versions.
The buffer pool extension file is stored on high throughput SSD storage.
Scenarios where the buffer pool extension is unlikely to significantly improve performance include:
Data warehouse workloads.
You can view the status of the buffer pool extension by querying the
sys.dm_os_buffer_pool_extension_configuration dynamic management view. You can monitor its
usage by querying the sys.dm_os_buffer_descriptors dynamic management view.
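For example, a sketch of querying these views:

SELECT path, state_description, current_size_in_kb
FROM sys.dm_os_buffer_pool_extension_configuration;

SELECT is_in_bpool_extension, COUNT(*) AS cached_pages
FROM sys.dm_os_buffer_descriptors
GROUP BY is_in_bpool_extension;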
To disable the buffer pool extension, use the ALTER SERVER CONFIGURATION statement with the SET
BUFFER POOL EXTENSION OFF clause.
To resize or relocate the buffer pool extension file, you must disable the buffer pool extension, and then
re-enable it with the required configuration. When you disable the buffer pool extension, SQL Server will
have less buffer memory available, which might cause an immediate increase in memory pressure and I/O,
resulting in performance degradation. You should therefore carefully plan reconfiguration of the buffer
pool extension to minimize disruption to application users.
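A sketch of disabling the extension and re-enabling it with a new size (the file path and size are hypothetical):

ALTER SERVER CONFIGURATION SET BUFFER POOL EXTENSION OFF;
ALTER SERVER CONFIGURATION
SET BUFFER POOL EXTENSION ON (FILENAME = N'T:\MyCache.bpe', SIZE = 32 GB);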
Demonstration Steps
Enable the Buffer Pool Extension
1. Ensure that you have completed the previous demonstration. If not, start the MT17B-WS2016-NAT,
20765C-MIA-DC, and 20765C-MIA-SQL virtual machines, log on to 20765C-MIA-SQL as
ADVENTUREWORKS\Student with the password Pa55w.rd, and then run
D:\Demofiles\Mod04\Setup.cmd as Administrator.
2. If SQL Server Management Studio is not open, start it and connect to the MIA-SQL database engine
using Windows authentication.
4. Review the code under the comment Enable buffer pool extension, and note that it creates a buffer
pool extension file named MyCache.bpe on T:\. On a production system, this file location would
typically be on an SSD device.
5. Use File Explorer to view the contents of the T:\ folder and note that no MyCache.bpe file exists.
6. In SQL Server Management Studio, select the code under the comment Enable buffer pool
extension, then click Execute.
7. Use File Explorer to view the contents of the T:\ folder and note that the MyCache.bpe file has been
created.
8. Select the code under the comment View buffer pool extension details, click Execute, then review
the output in the Results tab and note that buffer pool extension is enabled.
9. Select the code under the comment Monitor buffer pool extension, click Execute, then review the
output in the Results tab.
1. In SQL Server Management Studio, select the code under the comment Disable buffer pool
extension, then click Execute.
2. Use File Explorer to view the contents of the T:\ folder and note that the MyCache.bpe file has been
deleted.
3. In SQL Server Management Studio, select the code under the comment View buffer pool extension
details again, and click Execute. Review the row in the Results tab, and note that the buffer pool
extension is disabled.
4. Close SQL Server Management Studio, without saving any changes.
As a database administrator at Adventure Works Cycles, you are responsible for managing system and
user databases on the MIA-SQL instance of SQL Server. There are several new applications that require
databases, which you must create and configure.
Objectives
After completing this lab, you will be able to:
Configure tempdb.
Create databases.
Attach a database.
Estimated Time: 45 minutes
Password: Pa55w.rd
2. Alter tempdb so that the database files match the following specification:
o Tempdev:
Initial Size: 10 MB
File growth: 5 MB
Maximum size: Unlimited
Results: After this exercise, you should have inspected and configured the tempdb database.
The Internet Sales application is a new e-commerce website and must support a heavy workload that
will capture a large volume of sales order data.
The main tasks for this exercise are as follows:
Logical Name | Filegroup | Initial Size | Growth | Path
HumanResources_log | – | 5 MB | 1 MB / Unlimited | L:\Logs\HumanResources.ldf
Logical Name | Filegroup | Initial Size | Growth | Path
2. Execute the code under the comment View page usage and note the UsedPages and TotalPages
values for the SalesData filegroup.
3. Execute the code under the comments Create a table on the SalesData filegroup and Insert 10,000
rows.
4. Execute the code under the comment View page usage again and verify that the data in the table is
spread across the files in the filegroup.
Results: After this exercise, you should have created a new HumanResources database and an
InternetSales database that includes multiple filegroups.
The database has multiple filegroups, including a filegroup for archive data, which should be configured
as read-only.
2. Configure Filegroups
o AWDataWarehouse.mdf
o AWDataWarehouse_archive.ndf
o AWDataWarehouse_current.ndf
2. Attach the AWDataWarehouse database by selecting the AWDataWarehouse.mdf file and
ensuring that the other files are found automatically.
5. Edit the dbo.FactInternetSales table and modify a record to verify that the table is updateable.
6. Edit the dbo.FactInternetSalesArchive table and attempt to modify a record to verify that the table
is read-only.
Results: After this exercise, you should have attached the AWDataWarehouse database to MIA-SQL.
Best Practice: When working with database storage, consider the following best practices:
Create the database with an appropriate initial size so that it does not have to be expanded too often.
Review Question(s)
Question: Why is it typically sufficient to have one log file in a database?
Question: Why should only temporary data be stored in the tempdb system database?
Module 5
Performing Database Maintenance
Contents:
Module Overview 5-1
Lesson 1: Ensuring Database Integrity 5-2
Module Overview
The Microsoft® SQL Server® database engine is capable of managing itself for long periods with minimal
ongoing maintenance. However, obtaining the best performance from the database engine requires a
schedule of routine maintenance operations.
Database corruption is relatively rare but one of the most important tasks in the ongoing maintenance
schedule is to ensure that no corruption has occurred in the databases. The likely success of recovering
from corruption depends upon the amount of time between the corruption occurring, and the
administrator identifying and addressing the issues that caused it. SQL Server indexes can also continue to
work without any maintenance, but they will perform better if you periodically review their performance
and remove any fragmentation that occurs within them. SQL Server includes a Maintenance Plan Wizard
to assist in creating SQL Server Agent jobs that perform these and other ongoing maintenance tasks.
Objectives
After completing this module, you will be able to:
Maintain database integrity by using the DBCC CHECKDB Database Console Command.
Lesson 1
Ensuring Database Integrity
It is rare for the database engine to cause corruption directly. However, the database engine depends
upon the hardware platform that it runs on—and that can cause corruption. In particular, issues in the
memory and I/O subsystems can lead to corruption within databases.
If you do not detect corruption soon after it has occurred, then further issues can arise. For example, it
would be difficult to trust the recovery of a corrupt database from a set of backup copies that had
themselves become increasingly corrupted.
You can use the DBCC CHECKDB command to detect, and in some circumstances correct, database
corruption. It is therefore important that you are familiar with how DBCC CHECKDB works.
Lesson Objectives
After completing this lesson, you will be able to:
Describe database integrity.
Without regular checking of the database files, any loss of integrity in the database might lead to bad
information being derived from it. A backup does not check database integrity; it only checks the page
checksums, and then only when you use the WITH CHECKSUM option on the BACKUP command. Although
the CHECKSUM database option is important, the checksum is only verified when SQL Server reads the
data. The exception to this is when SQL Server is backing up the database using the WITH CHECKSUM
option. Archive data is, by its nature, not read frequently, and this can allow corruption within the
database to go undetected. If corrupt data is not checked as it is backed up, it may not be found for months.
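For example, a backup that verifies existing page checksums as it reads the pages can be taken as follows (the database name and path are hypothetical):

BACKUP DATABASE AdventureWorks
TO DISK = N'R:\Backups\AdventureWorks.bak'
WITH CHECKSUM;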
Question: If you have a perfectly good data archiving process, and a regularly tested restoral
system, do you still need the DBCC commands?
DBCC CHECKDB
The DBCC CHECKDB command makes a thorough check of the structure of a database, to detect almost
all forms of potential corruption. The functionality that DBCC CHECKDB contains is also available as a
range of options if required. The most important of these are described in the following table:
Option | Description
DBCC CHECKALLOC | Checks the consistency of the disk space allocation units in the database files.
DBCC CHECKTABLE | Checks the pages associated with a specified table and the pointers between pages that are associated with the table. DBCC CHECKDB executes DBCC CHECKTABLE for every user table in the database.
DBCC CHECKDB also performs checks on other types of objects, such as the links for FILESTREAM objects
and consistency checks on the Service Broker objects.
Note: FILESTREAM and Service Broker are advanced topics that are beyond the scope of
this module.
Repair Options
Even though DBCC CHECKDB provides repair options, you cannot always repair a database without data
loss. Usually, the best method for database recovery is to restore a backup of the database by
synchronizing the execution of DBCC CHECKDB with your backup retention policy. This ensures that you
can always restore a database from an uncorrupted database and that all required log backups from that
time onward are available.
DBCC CHECKDB can take a long time to execute and consumes considerable I/O and CPU resources, so
you will often have to run it while the database is in use. Therefore, DBCC CHECKDB works using internal
database snapshots to ensure that the utility works with a consistent view of the database. If the
performance requirements for having the database activity running while DBCC CHECKDB is executing are
too high, running DBCC CHECKDB against a restored backup of your database is an alternative option. It’s
not ideal, but it’s better than not running DBCC CHECKDB at all.
Disk Space
The use of an internal snapshot causes DBCC CHECKDB to need additional disk space. DBCC CHECKDB
creates hidden files (using NTFS Alternate Streams) on the same volumes where the database files are
located. Sufficient free space on the volumes must be available for DBCC CHECKDB to run successfully.
The amount of disk space required on the volumes depends on how much data is changed during the
execution of DBCC CHECKDB.
DBCC CHECKDB also uses space in the tempdb database while executing. To provide an estimate of the
amount of space required in tempdb, DBCC CHECKDB has an ESTIMATEONLY option.
MAXDOP Override
You can reduce the impact of the DBCC utility on other services running on your server by setting the
MAXDOP option to more than 0 and less than the maximum number of processors in your system. A
configuration value of 0 for any server resource means the server can determine the maximum degree of
parallelism, potentially affecting other tasks.
DBCC CHECKFILEGROUP
The DBCC CHECKFILEGROUP command runs checks against the user objects in a specified filegroup, as
opposed to the complete database. Although this has the potential to save considerable checking time, it
is only worthwhile if you have created a configuration with multiple database files and filegroups. You can
query the sys.sysfilegroups view to see if this is the case. If the query only returns the name of the
PRIMARY filegroup, all of your data is in a single filegroup, so you will not be able to take advantage of
filegroup-level checks or advanced recovery options. However, the command will still run against the
data objects stored in the default PRIMARY filegroup.
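The following sketch shows how you might list the filegroups in the current database and then check a single user filegroup; the SALES filegroup name is hypothetical:

-- List the filegroups in the current database.
SELECT * FROM sys.sysfilegroups;

-- Check only the objects in the (hypothetical) SALES filegroup.
DBCC CHECKFILEGROUP (N'SALES') WITH NO_INFOMSGS;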
PHYSICAL_ONLY
You can use the PHYSICAL_ONLY option to limit checking to the integrity of the physical structure of pages
and record headers, and to the allocation consistency of the database. This smaller-overhead check can
detect torn pages, checksum failures, and common hardware failures, and it typically takes much less time
than a full DBCC CHECKDB on a large database.
NOINDEX
You can use the NOINDEX option to specify not to perform the intensive checks of nonclustered indexes
for user tables. This decreases the overall execution time but does not affect system tables because the
integrity of system table indexes is always checked. The assumption that you are making when using the
NOINDEX option is that you can rebuild the nonclustered indexes if they become corrupt.
EXTENDED_LOGICAL_CHECKS
You can only perform the EXTENDED_LOGICAL_CHECKS when the database is in database compatibility
level 100 (SQL Server 2008) or above. This option performs detailed checks of the internal structure of
objects such as CLR user-defined data types and spatial data types.
TABLOCK
You can use the TABLOCK option to request that DBCC CHECKDB takes a table lock on each table while
performing consistency checks, rather than using the internal database snapshots. This reduces the disk
space requirement at the cost of preventing other users from updating the tables.
ALL_ERRORMSGS and NO_INFOMSGS
The ALL_ERRORMSGS and NO_INFOMSGS options only affect the output from the command, not the
operations that the command performs.
ESTIMATEONLY
The ESTIMATEONLY option estimates the space requirements in the tempdb database if you were to run
the DBCC CHECKDB command. You can then find out how much space the utility will need and avoid the
risk of the process terminating partway through because tempdb is out of space.
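As an illustration, the following statements show two of these options in use against the AdventureWorks database; the first estimates tempdb usage without performing the checks, and the second performs a faster physical-only check with informational messages suppressed:

DBCC CHECKDB (AdventureWorks) WITH ESTIMATEONLY;

DBCC CHECKDB (AdventureWorks) WITH PHYSICAL_ONLY, NO_INFOMSGS;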
In addition to providing details of errors, the output of DBCC CHECKDB shows the minimum repair option that you
have to use to correct the problem. DBCC CHECKDB offers two repair options. For both options, the
database needs to be in single user mode. The options are:
REPAIR_REBUILD. This performs repairs that have no possibility of data loss, such as rebuilding corrupt
nonclustered indexes. It only works with certain mild forms of corruption.
REPAIR_ALLOW_DATA_LOSS. This will almost certainly lose some data. It de-allocates any corrupt
pages it finds and updates other pages that reference the corrupt pages so that they no longer point to them.
After the operation completes, the database will be consistent, but only from a physical database
integrity point of view; significant loss of data could have occurred. Repair operations do not consider
any of the constraints that might exist on or between tables. If the repaired table is involved in one or
more constraints, you should execute DBCC CHECKCONSTRAINTS after running the repair operation.
In the example on the slide, SQL Server has identified several consistency check errors, and the
REPAIR_ALLOW_DATA_LOSS option is required to repair the database.
If the transaction log becomes corrupt and no backups are available, you can use a special feature called
an emergency mode repair. To do this, put the database in emergency mode and single-user mode, and
then run DBCC CHECKDB with the REPAIR_ALLOW_DATA_LOSS option.
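The following is a minimal sketch of that sequence, using the CorruptDB database from the demonstration; it assumes that no valid backup remains and that some data loss is acceptable:

ALTER DATABASE CorruptDB SET EMERGENCY;
ALTER DATABASE CorruptDB SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
DBCC CHECKDB (CorruptDB, REPAIR_ALLOW_DATA_LOSS);
ALTER DATABASE CorruptDB SET MULTI_USER;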
Demonstration Steps
1. Ensure that the MT17B-WS2016-NAT, 20765C-MIA-DC, and 20765C-MIA-SQL virtual machines are
running, and log on to 20765C-MIA-SQL as ADVENTUREWORKS\Student with the password
Pa55w.rd.
2. In the D:\Demofiles\Mod05 folder, run Setup.cmd as Administrator. Click Yes when prompted.
3. Start Microsoft SQL Server Management Studio and connect to the MIA-SQL database engine
instance using Windows authentication.
5. Select the code under the comment -- Run DBCC CHECKDB with default options and click
Execute. This checks the integrity of the AdventureWorks database and provides maximum
information.
6. Select the code under the comment -- Run DBCC CHECKDB without informational messages and
click Execute. This code checks the integrity of the AdventureWorks database and only displays
messages if errors were present.
7. Select the code under the comment -- Run DBCC CHECKDB against CorruptDB and click Execute.
This checks the integrity of the CorruptDB database and identifies some consistency errors in the
dbo.Orders table in this database. The last line of output tells you the minimum repair level required.
8. Select the code under the comment -- Try to access the Orders table and click Execute. This
attempts to query the dbo.Orders table in the CorruptDB database, and returns an error because of
a logical consistency issue.
9. Select the code under the comment -- Access a specific order and click Execute. This succeeds,
indicating that only some data pages are affected by the inconsistency issue.
10. Select the code under the comment -- Repair the database and click Execute. Note that this
technique is a last resort, when no valid backup is available. There is no guarantee of logical
consistency in the database, such as the checking of foreign key constraints. These will need checking
after running this command.
11. Select the code under the comment -- Access the Orders table and click Execute. This succeeds,
indicating that the physical consistency is re-established.
12. Select the code under the comment -- Check the internal database structure and click Execute.
Note that no error messages appear, indicating that the database structure is now consistent.
13. Select the code under the comment -- Check data loss and click Execute. Note that a number of
order details records have no matching order records. The foreign key constraint between these
tables originally enforced a relationship, but some data has been lost.
DBCC CHECKDB incorporates the functionality of the DBCC CHECKTABLE, DBCC CHECKALLOC, and DBCC CHECKCATALOG commands.
Lesson 2
Maintaining Indexes
Another important aspect of SQL Server that requires ongoing maintenance for optimal performance is
the management of indexes. The database engine can use indexes to optimize data access in tables on
the physical media. Over time, through the manipulation of the data, indexed data becomes fragmented.
This fragmentation reduces the performance of storage access operations at the physical level.
Defragmenting or rebuilding the indexes will restore the performance of the database, as will using newer
techniques such as memory optimized tables and indexed views, in addition to appropriate partitioning of
the data as it is stored.
Index management options are often included in regular database maintenance plan schedules. Before
learning how to set up the maintenance plans, it is important to understand more about how indexes
work and how to maintain them.
Lesson Objectives
After completing this lesson, you will be able to:
Indexes can help to improve searching, sorting, and join performance, but they can also impede the
performance of data modification. They require ongoing management and demand additional disk space.
If the query optimizer calculates that it can improve performance, SQL Server will create its own
temporary indexes to do so. However, this is beyond the control of the database administrator or
programmer. An important side effect of the optimizer creating temporary indexes is that, if you spot this
happening, you can determine whether those indexes would be beneficial for all users and create them
yourself. This then means that the indexes are also available to the query optimizer whenever it needs them.
Types of Indexes
Rather than storing rows of data as a heap of data pages, where the data is not stored in any specific
logical or physical order (merely appended to a chain of relevant pages as allocated), you can design
tables with an internal logical and/or physical ordering. To implement physical ordering of the data
pages, you create an index known as a clustered index. On top of a clustered index, you can also apply
logical orderings by using another type of index called a nonclustered index. For each clustered index,
you can have many nonclustered indexes, which gives the query optimizer a choice of access paths when
it optimizes different queries.
Clustered Indexes
A table with a clustered index has a predefined order for rows within a page and for pages within the
table. The storage engine orders the data pages according to a key, made up of one or more columns.
This key is commonly called a clustering key.
Because the rows of a table should only be in one physical order, there can only be one clustered index on
a table. SQL Server uses an IAM entry to point to the root page of a clustered index; the remainder of the
pages are stored as a balanced b-tree set of pages linked together from the root page outwards.
The physical order of data stored with a clustered index is only sequential at the data page level. When
data changes, rows might migrate from one page to another, to leave space for other rows to be inserted
in the original data page. This is how fragmentation occurs. It helps to bear in mind that fragmentation is
not automatically bad. For a reporting system, high levels of fragmentation can impede performance, but
for a transactional system, high levels of fragmentation can actually assist the functional requirements of
the business.
Nonclustered Indexes
A nonclustered index does not affect the layout of the data in the table in the way that a clustered index
does. If the underlying table has no clustered index, the data is stored as a heap of pages that the data
storage engine allocates as the object grows. The leaf level of a nonclustered index contains pointers,
either to where the data rows are stored in a heap, or to the root node of the corresponding clustered
index for the object. The pointers include a file number, a page number, and a slot number on the page.
Consider searching for a particular order with the OrderID of 23678 in an index on OrderID. In the root
node (the top level of the index), SQL Server searches for the range of OrderID values that contains 23678.
The entry for that range in the root page points to an index page in the next level. In this level, the range is
divided into smaller ranges, again pointing to pages on the following level of the index. This continues down
to a point where every row has an entry for its location. Leaf-level nodes are the index pages at the final
point of the index page chain. With a clustered index, the leaf-level nodes are the actual data pages of the
table.
Index and data pages are doubly linked to the pages before and after them at the same level of the index
hierarchy. This helps SQL Server to perform partial scan operations across an index, starting from a point
that is located by using a seek operation.
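To make the two structures concrete, the following sketch creates a hypothetical dbo.OrdersDemo table with a clustered index on OrderID and a nonclustered index on CustomerID:

CREATE TABLE dbo.OrdersDemo
(
    OrderID int NOT NULL,
    CustomerID int NOT NULL,
    OrderDate date NOT NULL
);

-- The clustered index defines the order of the data pages; only one is allowed per table.
CREATE CLUSTERED INDEX CIX_OrdersDemo_OrderID ON dbo.OrdersDemo (OrderID);

-- A nonclustered index is a separate structure whose leaf level points back to the clustering key.
CREATE NONCLUSTERED INDEX IX_OrdersDemo_CustomerID ON dbo.OrdersDemo (CustomerID);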
Integrated Full-Text Search (iFTS) uses a special type of index that provides flexible searching of text.
Primary and secondary XML indexes assist when querying XML data.
Large data warehouses or operational data stores can use columnstore indexes.
Dynamic Management Functions (DMFs). You can query DMFs in the same way as you call a table-
valued function. That is to say, you must provide parameters to the DMF in brackets.
DMOs are classified into 26 categories. Some examples of these categories include:
Database-Related DMVs. These DMVs provide information about the health of a specific database.
For example, the sys.dm_db_file_space_usage DMV returns data about how each file in the database
uses disk space.
Server-Related DMOs. These DMVs and DMFs provide information about the SQL Server and the
services that are installed on it. For example, the sys.dm_server_services DMV returns data about the
status and configuration of services such as the Full-Text Service and the SQL Server Agent.
Transaction-Related DMOs. These DMVs and DMFs provide information about the transactions that
are in process on the server. For example, the sys.dm_tran_active_transactions DMV shows data
about all the active transactions on the server.
Security-Related DMOs. These DMVs and DMFs provide information about the status of SQL Server
security features. For example, the sys.dm_database_encryption_keys DMV returns data about the
encryption status of a database and the keys used for that encryption.
For more information and a complete list of SQL Server DMOs, see:
Index-Related DMOs
DMOs can be used to obtain information about all aspects of server health. One category of DMO,
however, relates specifically to indexes and can be useful in their maintenance. Index-related DMVs and
DMFs include the following:
sys.dm_db_index_usage_stats. This DMV returns counts of different types of index operations and
the time that each type of operation was last performed.
sys.dm_db_index_operational_stats. This DMF returns current I/O, locking, latching and access
method activity for each index in the database.
sys.dm_db_index_physical_stats. This DMF returns size and fragmentation information for the
indexes of a specified table or view.
sys.dm_db_missing_index_details. This DMV returns information about missing indexes. You can
use this information to help optimize the indexes you create.
You will learn more about specific index-related DMOs later in this module.
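As a simple sketch, the following query combines sys.dm_db_index_usage_stats with sys.indexes to compare how often each index in the current database is read against how often it is updated:

SELECT OBJECT_NAME(us.object_id) AS table_name,
       i.name AS index_name,
       us.user_seeks, us.user_scans, us.user_lookups, us.user_updates
FROM sys.dm_db_index_usage_stats AS us
JOIN sys.indexes AS i
    ON us.object_id = i.object_id AND us.index_id = i.index_id
WHERE us.database_id = DB_ID();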
Index Fragmentation
Index fragmentation occurs over time, as you
insert and delete data in the table. For operations
that read data, indexes perform best when each
page is as full as possible. However, if your indexes
initially start full (or relatively full), adding data to
the table can result in the index pages needing to
be split. Adding a new index entry to the end of
an index is easy, but the process is more
complicated if the entry needs to be inserted in
the middle of an existing full index page.
Internal Fragmentation. This occurs when pages are not completely full of data and often when a page
is split during an insert or update operation—in a case where the data could not fit back into the space
initially allocated to the row.
External Fragmentation. This occurs when pages become out of sequence during Data Manipulation
Language (DML) operations that split existing pages when modifying data. When the index requires and
allocates a new page within an extent that is different to the one that contains the original page, extra
links are required to point between the pages and extents involved. This means that a process needing to
read the index pages in order must follow pointers to locate the pages. The process then involves
accessing pages that are not sequential within the database.
Detecting Fragmentation
The following code shows two ways to use the sys.dm_db_index_physical_stats dynamic management
function to detect physical fragmentation. The first sample returns all rows, and the second sample returns
only the rows that we are interested in, as defined by the WHERE clause.

SELECT *
FROM sys.dm_db_index_physical_stats(DEFAULT,DEFAULT,DEFAULT,DEFAULT,DEFAULT);

-- Or
SELECT *
FROM sys.dm_db_index_physical_stats(DEFAULT,DEFAULT,DEFAULT,DEFAULT,DEFAULT)
WHERE avg_fragmentation_in_percent > 10
ORDER BY avg_fragmentation_in_percent DESC;

For more information about this dynamic management function, see sys.dm_db_index_physical_stats
(Transact-SQL)
https://aka.ms/C04h6d
SQL Server Management Studio (SSMS) also provides details of index fragmentation in the properties
page for each index.
The default FILLFACTOR value is 0, which actually means "fill the pages 100 percent". Any other value
indicates the percentage of each page to fill when the index is created or rebuilt.
The following example shows how to create an index that is 70 percent full, leaving 30 percent free space
on each page:
Using FILLFACTOR
ALTER TABLE Person.Contact
ADD CONSTRAINT PK_Contact_ContactID
PRIMARY KEY CLUSTERED
(
ContactID ASC
) WITH (PAD_INDEX = OFF, FILLFACTOR = 70);
GO
Note: The fact that the values 0 and 100 have the same effect can seem confusing. While both lead to the
same outcome, 100 indicates that a specific FILLFACTOR value is being set. The value zero
indicates that no FILLFACTOR has been specified and that the default server value should be
applied.
By default, the FILLFACTOR option only applies to leaf-level pages of an index. You can use it in
conjunction with the PAD_INDEX = ON option to cause the same free space to also be allocated in the
nonleaf levels of the index.
Although you can use the fill factor to preallocate space in the index pages to help in some situations, it is
not a panacea for good performance; it might even impede performance where all the changes occur on
the last page anyway. Consider the primary key index shown in the previous example. It is unlikely that
this fill factor will improve performance; because the primary key values are sequential, new rows are
inserted at the end of the index rather than in the middle of existing pages. If you also make sure you
don't use variable-sized columns, you can update rows in place, so there is no requirement for empty
space for updates.
Make sure you consider the costs in addition to the benefits of setting empty space in index pages. Both
memory and disk space are wasted if the space adds no value, and because each page holds fewer rows,
SQL Server must read and cache more pages to obtain the same amount of useful data.
REBUILD
When you rebuild indexes by specifying the ALL option, SQL Server drops all indexes on the table and
rebuilds them in a single operation. If any part of that process fails, SQL Server rolls back the entire
operation.
Because SQL Server performs rebuilds as logged, single operations, a single rebuild operation can use a
large amount of space in the transaction log. To avoid this, you can change the recovery model of the
database to use the BULK_LOGGED or SIMPLE recovery models before performing the rebuild operation,
so that it is a minimally logged operation. A minimally logged rebuild operation uses much less space in
the transaction log and takes less time. However, if you are working on a transactional database,
remember to switch back to full logging mode and back up the database after you have completed the
operation—otherwise, you will not have a robust backup strategy to insure against database corruption.
Rebuilding an Index
ALTER INDEX CL_LogTime ON dbo.LogTime REBUILD;
REORGANIZE
Reorganizing an index uses minimal system resources. It defragments the leaf level of clustered and
nonclustered indexes on tables by physically reordering the leaf-level pages to match the logical, left to
right order of the leaf nodes. Reorganizing an index also compacts the index pages. The compaction uses
the existing fill factor value for the index. You can interrupt a reorganize without losing the work
performed so far. This means that, on a large index, you could configure partial reorganization to occur
each day.
For heavily fragmented indexes (those with fragmentation greater than 30 percent) rebuilding is usually
the most appropriate option to use. SQL Server maintenance plans include options to rebuild or
reorganize indexes. If you do not use maintenance plans, it’s important to create a job that regularly
performs defragmentation of the indexes in your databases.
Reorganizing an Index
ALTER INDEX ALL ON dbo.LogTime REORGANIZE;
Because of the extra work that needs to be performed, online index rebuild operations are typically slower
than offline ones.
The following example shows how to rebuild an index online:
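Rebuilding an Index Online

-- Reusing the index from the earlier REBUILD example; ONLINE = ON allows
-- concurrent access to the underlying table while the index is rebuilt.
ALTER INDEX CL_LogTime ON dbo.LogTime REBUILD WITH (ONLINE = ON);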
Updating Statistics
One of the main tasks that SQL Server performs
when it is optimizing queries is to determine
which indexes to use and which ones not to use.
To make these decisions, the query optimizer uses
statistics about the distribution of the data in the
index and data pages. SQL Server automatically
creates statistics for indexed and nonindexed
columns when you enable the
AUTO_CREATE_STATISTICS database option. You
enable and disable this option by using the ALTER
DATABASE statement.
Switching On Automatic Statistics Creation for the Data Columns in the Current Database.
ALTER DATABASE CURRENT
SET AUTO_CREATE_STATISTICS ON
SQL Server automatically updates statistics when AUTO_UPDATE_STATISTICS is set. This is the default
setting, so it's best that you do not disable this option unless you really have to; for example, when a
packaged solution that you are implementing on top of SQL Server requires it to be off.
For large tables, the AUTO_UPDATE_STATISTICS_ASYNC option instructs SQL Server to update statistics
asynchronously instead of delaying query execution, where it would otherwise update an outdated
statistic before query compilation.
You can also update statistics on demand. Executing the Transact-SQL code UPDATE STATISTICS
<tablename> causes SQL Server to refresh all statistics on the specified table. You can also run the
sp_updatestats system stored procedure to update all statistics in a database. This stored procedure
checks which table statistics are out of date and refreshes them.
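For example, the following statements (using the Sales.SalesOrderHeader table that appears later in the lab) show both approaches:

-- Refresh all statistics on a single table.
UPDATE STATISTICS Sales.SalesOrderHeader;

-- Refresh only the statistics that are out of date, across the whole database.
EXEC sys.sp_updatestats;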
Demonstration Steps
Maintain Indexes
1. If SQL Server Management Studio is not open, start it and connect to the MIA-SQL database engine
instance using Windows authentication.
5. Select the code under the comment -- Check fragmentation and click Execute. In the results, note
the avg_fragmentation_in_percent and avg_page_space_used_in_percent values for each index level.
6. Select the code under the comment -- Modify the data in the table and click Execute. This updates
the table.
7. Select the code under the comment -- Re-check fragmentation and click Execute. In the results,
note that the avg_fragmentation_in_percent and avg_page_space_used_in_percent values for each
index level have changed because the data pages have become fragmented.
8. Select the code under the comment -- Rebuild the table and its indexes and click Execute. This
rebuilds the indexes on the table.
9. Select the code under the comment -- Check the fragmentation again and click Execute. In the
results, note that the avg_fragmentation_in_percent and avg_page_space_used_in_percent values for
each index level indicate less fragmentation.
Lesson 3
Automating Routine Maintenance Tasks
You have seen how you can manually perform some of the common database maintenance tasks that you
will have to execute on a regular basis. SQL Server provides the Maintenance Plan Wizard that you can use
to create SQL Server Agent jobs to perform the most common database maintenance tasks.
Although the Maintenance Plan Wizard makes this process easy to set up, it is important to realize that
you can use the output of the wizard as a starting point for creating your own maintenance plans.
Alternatively, you can create plans from scratch, using the SQL Server DMOs or system stored procedures.
Lesson Objectives
After completing this lesson, you will be able to:
Shrinking databases.
Rebuilding and reorganizing indexes in a database.
You can create maintenance plans using one schedule for all tasks or individual schedules for each task.
You use combinations of these to group and sequence tasks based on custom requirements.
Note: You can use the cleanup tasks available in the maintenance plans to implement a
retention policy for backup files, job history, maintenance plan report files, and msdb database
table entries.
Demonstration Steps
1. If SQL Server Management Studio is not open, start it and connect to the MIA-SQL database engine
instance using Windows authentication.
2. In Object Explorer, under MIA-SQL, expand Management, right-click Maintenance Plans, and click
Maintenance Plan Wizard.
5. On the Select Maintenance Tasks page, select the following tasks, and then click Next:
o Shrink Database
o Rebuild Index
o Update Statistics
6. On the Select Maintenance Task Order page, change the order of the tasks to Rebuild Index,
Shrink Database, then Update Statistics, and then click Next.
7. On the Define Rebuild Index Task page in the Databases list, click AdventureWorks, and then
click OK to close the drop-down list box. Click Next.
8. On the Define Shrink Database Task page, in the Databases list, click AdventureWorks, click OK
to close the drop-down list box, and then click Next.
9. On the Define Update Statistics page, in the Databases list, click AdventureWorks. Click OK to
close the drop-down list box, and then click Next.
10. On the Select Report Options page, review the default settings, and then click Next.
11. On the Complete the Wizard page, click Finish to create the Maintenance Plan. Wait for the
operation to complete, and then click Close.
Categorize Activity
Your manager asks you to implement a maintenance solution that minimizes data loss and cost in
addition to maximizing performance. Sort items by writing the appropriate category number to the right
of each one.
Items
You are a database administrator at Adventure Works Cycles, with responsibility for databases on the
MIA-SQL Server instance. You must perform the ongoing maintenance of the database on this instance—
this includes ensuring database integrity and managing index fragmentation.
Objectives
After completing this lab, you will be able to:
Defragment indexes.
Password: Pa55w.rd
2. Open Ex1.sln from the D:\Labfiles\Lab05\Starter\Ex1 folder and review the Transact-SQL code in
the Ex1-DBCC.sql file.
4. In a new query window using a separate database connection, shut down the database server without
performing checkpoints in every database.
5. In Windows Explorer, rename the CorruptDB_log.ldf file to simulate the file being lost.
7. In SQL Server Management Studio, try to access the CorruptDB database, and then check its status.
8. Use emergency mode to review the data in the database and note that the erroneous transaction was
committed on disk.
9. Set the database offline, rename the log file back to CorruptDB_log.ldf to simulate accessing it from
a mirror copy, and then set the database online again.
10. Put the database in single user mode and run DBCC CHECKDB with the REPAIR_ALLOW_DATA_LOSS
option.
11. Put the database back into multiuser mode and check that the data is restored to its original state.
12. Close the solution without saving changes, but leave SQL Server Management Studio open for the
next exercise.
Results: After this exercise, you should have used DBCC CHECKDB to repair a corrupt database.
4. Query the sys.dm_db_index_physical_stats function again, noting that the values for the same
columns have changed due to the activity.
6. Query the sys.dm_db_index_physical_stats function again, confirming that the index fragmentation
has decreased.
7. Close the solution without saving changes, but leave SQL Server Management Studio open for the
next exercise.
Results: After this exercise, you should have rebuilt indexes on the Sales.SalesOrderHeader table,
resulting in better performance.
o Check the integrity of the database using a short-term lock rather than an internal database
snapshot.
Results: After this exercise, you should have created the required database maintenance plans.
Question: What is the difference between an OLTP database and an OLAP database in terms
of recoverability and the probability that you will have to use Emergency mode?
Best Practice: When planning ongoing database maintenance, consider the following best
practices:
If corruption occurs, consider restoring the database from a backup, and only repair the database as a
last resort.
Defragment your indexes when necessary, but if they get beyond about 30 percent fragmentation,
consider rebuilding instead.
Update statistics on a schedule if you don’t want it to occur during normal operations.
Use maintenance plans to implement regular tasks.
It is also preferable to have multiple files and filegroups for the data and system—and to change the
default from the primary system filegroup to one of the business focused ones. This will ensure that you
can perform partial recoveries rather than having to do a full database recovery, in the case where
damage only occurs to part of the database structures.
Module 6
Database Storage Options
Contents:
Module Overview 6-1
Lesson 1: SQL Server Storage Performance 6-2
Module Overview
One of the most important roles for database administrators working with Microsoft® SQL Server® is the
management of databases and storage. It is important to know how data is stored in databases, how to
create databases, how to manage database files, and how to move them. Other tasks related to storage
include managing the tempdb database and using fast storage devices to extend the SQL Server buffer
pool cache.
Objectives
After completing this module, you will be able to:
Understand how I/O performance affects SQL Server.
Lesson 1
SQL Server Storage Performance
In this lesson, you will learn about the different options available for SQL Server storage and consider how
this might affect performance.
Lesson Objectives
After completing this lesson, you will be able to:
Understand the relationship between the number of spindles (disks) and I/O performance.
Understand the requirements for the storage of data files and transaction log files.
I/O Performance
I/O performance has a direct impact on the overall
performance of SQL Server, making it critical that
the storage layer is designed and implemented in
a way that delivers the best performance for the
server’s anticipated workload.
When designing storage for your SQL Server estate, it is important to consider the likely workload. A read-
heavy database will have different requirements to a write-heavy database; a data warehouse will have
different requirements to a transactional system. Having a good idea of the anticipated read/write split
will mean you can develop a storage solution that performs well.
For a new instance of SQL Server, finding the likely workload can be challenging and will involve a certain
amount of guesswork. However, for an existing SQL Server instance, you can use built-in dynamic
management views, such as sys.dm_io_virtual_file_stats, to gain an insight into the SQL Server workload.
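The following query is a sketch of that approach; it joins sys.dm_io_virtual_file_stats to sys.master_files to show cumulative reads and writes per database file since the instance last started:

SELECT DB_NAME(vfs.database_id) AS database_name,
       mf.name AS logical_file_name,
       vfs.num_of_reads, vfs.num_of_bytes_read,
       vfs.num_of_writes, vfs.num_of_bytes_written
FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
JOIN sys.master_files AS mf
    ON vfs.database_id = mf.database_id AND vfs.file_id = mf.file_id
ORDER BY (vfs.num_of_bytes_read + vfs.num_of_bytes_written) DESC;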
SQL Server Best Practices Article – SQL Server 2005 Waits and Queues
http://aka.ms/nzz69y
sys.dm_os_wait_stats. This DMV displays the total wait time for each wait type.
sys.dm_exec_session_wait_stats. This DMV displays the same data as the previous DMV but filtered
for a single active user session.
When you are investigating the performance of your storage system, the following wait types are often
relevant:
Page IO Latch Waits. These wait types have names that start with “PAGEIOLATCH”. They indicate
that a task is waiting to access a data page that is not in the buffer pool, so it must be read from disk
to memory. A high wait time for PAGEIOLATCH wait types might indicate a disk I/O bottleneck. It
might further indicate that the I/O subsystem is slow to return the pages to SQL Server.
WRITELOG. The WRITELOG wait type indicates that a task is waiting for a transaction log block buffer
to be flushed to disk. If the disks that store the transaction logs perform poorly, such waits may
multiply.
LOGBUFFER. The LOGBUFFER wait type indicates that a task is waiting for a log buffer to become
available when it is flushing the log content.
ASYNC_IO_COMPLETION. The ASYNC_IO_COMPLETION wait type indicates that a task is waiting for
an I/O operation that does not relate to a data file to complete—for example, initializing a
transaction log file or writing to backup media.
IO_COMPLETION. The IO_COMPLETION wait type indicates that a task is waiting for a synchronous
I/O operation that does not relate to a data file to complete; for example, reading a transaction log
for a rollback or for transactional replication.
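As a sketch, the following query returns the cumulative totals for the wait types listed above, so that you can see whether storage-related waits dominate on your instance:

SELECT wait_type, waiting_tasks_count, wait_time_ms, signal_wait_time_ms
FROM sys.dm_os_wait_stats
WHERE wait_type LIKE N'PAGEIOLATCH%'
   OR wait_type IN (N'WRITELOG', N'LOGBUFFER', N'ASYNC_IO_COMPLETION', N'IO_COMPLETION')
ORDER BY wait_time_ms DESC;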
Number of Spindles
All hard disks have physical limitations that affect
their performance. They are mechanical devices
with platters that can only spin at a certain speed,
and heads that can only move at a certain speed,
reading a single piece of data at a time. These
performance-limiting characteristics can be an
issue for SQL Server, which in a production
environment, might need to deal with hundreds or
even thousands of requests per minute.
The key point to remember when designing SQL Server storage is that many small-capacity, high-speed
disks will outperform a single large-capacity, high-speed disk.
RAID Disks
Many storage solutions use RAID hardware to
provide fault tolerance through data redundancy,
and in some cases, to improve performance. You
can also use the Windows Server® operating
system to implement software-controlled RAID 0,
RAID 1, and RAID 5; other levels might be
supported by third-party SANs.
RAID 0, disk striping. A stripe set writes data across two or more disks with no redundancy. This delivers
good read and write performance, but provides no fault tolerance; if a single disk fails, the data on the
whole stripe set is lost.
RAID 1, disk mirroring. A mirror set is a logical storage volume that is based on space from two
disks, with one disk storing a redundant copy of the data on the other. Mirroring can provide good
read performance, but write performance can suffer. RAID 1 is expensive for storage because 50
percent of the available disk space is used to store redundant data.
RAID 5, disk striping with parity. RAID 5 offers fault tolerance using parity data that is written
across all the disks in a striped volume that is comprised of space from three or more disks. RAID 5
typically performs better than RAID 1. However, if a disk in the set fails, performance degrades. In
terms of disk space, RAID 5 is less costly than RAID 1 because parity data only requires the equivalent
of one disk in the set to store it. For example, in an array of five disks, four would be available for data
storage, which represents 80 percent of the total disk space.
RAID 10, mirroring with striping. In RAID 10, a nonfault tolerant RAID 0 stripe set is mirrored. This
arrangement delivers the excellent read/write performance of RAID 0, combined with the fault
tolerance of RAID 1. However, RAID 10 can be expensive to implement because, like RAID 1, 50
percent of the total space is used to store redundant data.
Write operations on RAID 5 can sometimes be relatively slow compared to RAID 1 because of the need
to calculate parity data. If you have a high proportion of write activity, therefore, RAID 5
might not be the best candidate.
Consider the cost per GB. For example, implementing a 500 GB database on a RAID 1 mirror set
would require (at least) two 500 GB disks. Implementing the same database on a RAID 5 array would
require substantially less storage space.
Many databases use a SAN, and the performance characteristics can vary between SAN vendors and
architectures. For this reason, if you use a SAN, you should consult with your vendors to identify the
optimal solution for your requirements. When considering SAN technology for SQL Server, always
look beyond the headline IO figures quoted and consider other characteristics, such as latency, which
can be considerably higher on a SAN, compared to DAS.
Windows storage spaces help you create extensible RAID storage solutions that use commodity disks.
This solution offers many of the benefits of a specialist SAN hardware solution, at a significantly lower
cost.
Storage Spaces
Storage Spaces is a Windows Server 2012
technology that enables administrators to create
highly flexible and recoverable configurations on
hard disks. You use storage spaces to virtualize
storage by grouping industry-standard disks into
storage pools. You can then create virtual disks,
called storage spaces, from the available capacity
of the storage pool.
Simple: stripes data across all disks as a single copy with no parity and is technically similar to a RAID
0 disk set. Simple maximizes storage capacity, giving high performance but offering no resiliency.
Losing a disk will mean data is lost.
Mirror: writes two (or optionally three) copies of the data across the disks in the pool, similar to a RAID 1
disk set. Mirroring reduces usable capacity, but gives good read and write performance and protects
against the loss of a disk.
Parity: writes a single copy of data striped across the disks, along with a parity bit to assist with data
recovery. Parity gives good capacity and read performance, but write performance is generally slower,
due to the need to calculate parity bits. Parity is similar to a RAID 5 disk set.
2. Click the TASKS list, and then click New Storage Pool.
5. Select the group of physical disks to add the new storage pool, and then click Next.
6. Select the individual disks that you want to include in the storage pool, and then click Next.
7. Click Create.
1. In Server Manager, on the Storage Pools page, select the storage pool that you already created.
2. Under VIRTUAL DISKS, click the TASKS list, and then click New Virtual Disk.
3. In the New Virtual Disk wizard, click Next.
5. Enter a name and a description for the new storage space, and then click Next.
8. Specify the size of the storage space and then click Next.
9. Click Create.
To create a volume:
1. In Server Manager, on the Storage Pools page, under VIRTUAL DISKS, right-click the storage space
you just created, and then click New Volume.
3. Select a server and a storage space on which you want to create a volume, and then click Next.
5. Assign a drive letter or folder to the new volume, and then click Next.
6. Select a file system, allocation unit size, and volume label, and then click Next.
7. Click Create.
Data Files
Every database has at least one data file. The first
data file usually has the filename extension .mdf;
in addition to holding data pages, the file holds
pointers to the other files used by the database.
Additional data files usually have the file extension
.ndf, and are useful for spreading data across
multiple physical disks to improve performance.
You should avoid placing transaction log files on RAID 5 disk sets, because the time taken to calculate
parity bits can impact SQL Server performance.
Note: SQL Server also uses the transaction log file for other features, such as transactional
replication, database mirroring, and change data capture. These topics are beyond the scope of
this course.
Extents
Groups of eight contiguous pages are referred to as an extent. SQL Server uses extents to simplify the
management of data pages. There are two types of extents:
Uniform extents. All pages in the extent contain data from only one object.
Mixed extents. The pages of the extent can hold data from different objects.
The first allocation for an object is at the page level, and always comes from a mixed extent. If they are
free, other pages from the same mixed extent will be allocated to the object as needed. After the object
has grown bigger than its first extent, then all future allocations are from uniform extents.
In all data files, a small number of pages are allocated to track the usage of extents within the file.
tempdb Storage
The performance of the tempdb database is
critical to the overall performance of most SQL
Server installations. The tempdb database consists
of the following objects:
Internal objects
Row versions
Transactions that are associated with snapshot-related transaction isolation levels can cause alternate
versions of rows to be briefly maintained in a special row version store in tempdb. Other features can also
produce row versions, such as online index rebuilds, Multiple Active Result Sets (MARS), and triggers.
User objects
Most objects that reside in the tempdb database are user-generated and consist of temporary tables,
table variables, result sets of multistatement table-valued functions, and other temporary row sets.
Because tempdb is used for so many purposes, it is difficult to predict its required size in advance. You
should carefully test and monitor the size of your tempdb database in real-life scenarios for new
installations. Running out of disk space in the tempdb database can cause significant disruptions in the
SQL Server production environment, and prevent applications that are running from completing their
operations. You can use the sys.dm_db_file_space_usage dynamic management view to monitor the disk
space that the files are using. Additionally, to monitor the page allocation or deallocation activity in
tempdb at the session or task level, you can use the sys.dm_db_session_space_usage and
sys.dm_db_task_space_usage dynamic management views.
By default, the tempdb database automatically grows as space is required, because the MAXSIZE of the
files is set to UNLIMITED. Therefore, tempdb can continue growing until it fills all the space on the disk.
Increasing the number of files in tempdb can overcome I/O restrictions and avoid latch contention during
page free space (PFS) scans, when temporary objects are created and dropped, resulting in improved
overall performance. You should not create too many files, because this can degrade the performance. As
a general rule, it is advised to have 0.25-1 file per processing core, with the ratio lower as the number of
cores on the system increases. However, you can only identify the optimal configuration for your system
by doing real live tests.
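The following is a minimal sketch of both changes; the logical file name tempdev is the default name of the primary tempdb data file, and the D:\ paths are examples only. The MODIFY FILE change takes effect only after the SQL Server service is restarted:

ALTER DATABASE tempdb
MODIFY FILE (NAME = tempdev, FILENAME = 'D:\tempdb.mdf');

-- Add a second data file to spread allocation activity across more files.
ALTER DATABASE tempdb
ADD FILE (NAME = tempdev2, FILENAME = 'D:\tempdb2.ndf', SIZE = 64MB, FILEGROWTH = 64MB);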
Demonstration Steps
1. Ensure that the 20765C-MIA-DC and 20765C-MIA-SQL virtual machines are running, and log on to
20765C-MIA-SQL as ADVENTUREWORKS\Student with the password Pa55w.rd.
2. In the D:\Demofiles\Mod06 folder, run Setup.cmd as Administrator. Click Yes when prompted.
3. Start SQL Server Management Studio and connect to the MIA-SQL database engine using Windows
authentication.
4. In Object Explorer, expand Databases, expand System Databases, right-click tempdb, and then click
Properties.
5. In the Database Properties dialog box, on the Files page, note the current files and their location.
Then click Cancel.
7. View the code in the script, and then click Execute. Note the message that is displayed after the code
has run.
8. View the contents of D:\ and note that no files have been created in that location, because the SQL
Server service has not yet been restarted.
9. In Object Explorer, right-click MIA-SQL, and then click Restart. When prompted, click Yes.
10. In the Microsoft SQL Server Management Studio dialog boxes, when prompted to allow changes,
to restart the service, and to stop the dependent SQL Server Agent service, click Yes.
11. View the contents of D:\ and note that the tempdb MDF and LDF files have been moved to this
location.
12. Keep SQL Server Management Studio open for the next demonstration.
Question: You are implementing a SQL Server instance and decide to use RAID disks. Which
RAID levels might you choose for storing the following types of SQL Server files?
Transaction log files
Data files
tempdb
Lesson 2
SMB Fileshare
In this lesson, you will learn about Server Message Block (SMB) Fileshare storage and the benefits of
using it with SQL Server.
Lesson Objectives
After completing this lesson, you will be able to:
The example below shows a CREATE DATABASE statement using an SMB share for storage:
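The following is a sketch only; it assumes the \\MIA-SQL\smbshare share used in the demonstration exists and that the SQL Server service account has full control permissions on it:

CREATE DATABASE SMBDemoDB
ON PRIMARY (NAME = SMBDemoDB_data, FILENAME = '\\MIA-SQL\smbshare\SMBDemoDB.mdf')
LOG ON (NAME = SMBDemoDB_log, FILENAME = '\\MIA-SQL\smbshare\SMBDemoDB.ldf');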
Some SMB shares cannot be used for SQL Server storage. They are:
Demonstration Steps
1. Ensure that the 20765C-MIA-DC and 20765C-MIA-SQL virtual machines are running, and log on to
20765C-MIA-SQL as ADVENTUREWORKS\Student with the password Pa55w.rd.
2. Open File Explorer and navigate to the D:\ drive, right-click the smbshare folder, and then click
Properties.
3. In the smbshare Properties dialog box, on the Sharing tab, in the Network File and Folder
Sharing section, note that this folder is shared with the network path \\MIA-SQL\smbshare, and
then click Cancel.
4. In SQL Server Management Studio, open the file SMBDemo.sql located in the D:\Demofiles\Mod06
folder and execute the code it contains.
5. In File Explorer, navigate to the D:\smbshare folder and note the database files have been created.
Lesson 3
SQL Server Storage in Microsoft Azure
You can use Microsoft Azure to store SQL Server data files in the cloud. In this lesson, you will learn about
the benefits of storing SQL Server data files in the cloud and how to implement this technology.
Lesson Objectives
After completing this lesson, you will be able to:
To use SQL Server data files in Microsoft Azure, you create a storage account, along with a container. You
then create a SQL Server credential containing policies and the shared access signature essential to access
the container. You can store page blobs inside the container, with each having a maximum size of 1 TB.
There is no limit to the number of containers that a storage account can have, but the total size of all
containers must be under 500 TB.
5. Backup Strategy: With SQL Server data files in Azure, you can use Azure snapshots for almost
instantaneous backups of your data.
o File Shares. These are shared folders that virtual machines and on-premises computers can
access through Server Message Block (SMB) like traditional file and print servers.
o Virtual Machine Disks. These are specialized blobs that store virtual hard disk (VHD) files for
virtual machines in Azure.
Blob storage accounts. Within a blob storage account, you can create only block blobs and append
blobs.
In order to place SQL Server data files in Azure, you must create them as page blobs. For this reason, you
must create a general-purpose storage account, with a blob container. Note that a blob storage account
does not support SQL Server data files, because it does not support page blobs.
When you create a general-purpose storage account, you must choose one of two performance tiers:
Standard and Premium. Premium storage accounts currently only support virtual machine disks.
Therefore, to place SQL Server data files in Azure you must choose the Standard tier.
5. Create your database with the data and log files in the Microsoft Azure container. See the following
CREATE DATABASE statement:
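The following is a sketch only; the storage account name (mystorageacct) and container name (sqldata) are placeholders, and it assumes that a credential named after the container URL, containing a shared access signature, has already been created:

CREATE DATABASE AzureDataDB
ON (NAME = AzureDataDB_data,
    FILENAME = 'https://mystorageacct.blob.core.windows.net/sqldata/AzureDataDB.mdf')
LOG ON (NAME = AzureDataDB_log,
    FILENAME = 'https://mystorageacct.blob.core.windows.net/sqldata/AzureDataDB.ldf');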
For more in-depth information on implementing SQL Server data files in Azure, see:
Tutorial: Using the Microsoft Azure Blob storage service with SQL Server 2016 databases
http://aka.ms/jgwswk
Lesson 4
Stretch Database
In this lesson, you will learn about Stretch Database, a feature you can use for the migration of historical,
or cold, data to the cloud.
Lesson Objectives
After completing this lesson, you will be able to:
With Stretch Database, cold historic data remains available for users to query, although there might be a
small amount of additional latency associated with queries.
Note: A user will not normally know where the data they are querying resides. However,
they might notice an increase in latency when querying data hosted in Azure.
You can only configure a table for Stretch Database if it belongs to a Stretch Database enabled database.
Any user with ALTER privilege for the table can configure Stretch Database for it.
There are some limitations on tables that can use Stretch Database. For more details on limitations, see:
Data Security
Only server processes can access the Stretch Database linked server definition. It is not possible for a user
login to query the linked server definition directly.
Enabling a database for Stretch Database does not change the existing database or table permissions. The
local database continues to check access permissions and manage security because all queries are still
routed through the local server. A user’s permissions remain the same, regardless of where the data is
physically stored.
3. Set the Source as SQL Server and the Target as Azure SQL Database.
4. Click the options for Check database compatibility and Check feature parity.
5. Enter the source server name, with the correct authentication, encryption and certificate settings.
6. Click Connect, check the databases you wish to analyze, and then click Add.
7. Click Start Assessment to begin the analysis. You will be asked to log into Microsoft Azure.
After the Stretch Database Advisor has completed its analysis, the results are displayed showing database
size, along with the number of tables analyzed for each database. You can drill into these results to obtain
details of any tables that are not compatible and the reasons for that.
For more details on tables that might not be compatible with Stretch Database, see:
Note: While tables with Primary and/or unique keys are compatible with Stretch Database,
a stretch enabled table will not enforce uniqueness for these constraints.
4. Complete the steps in the Enable Database for Stretch wizard to create a Database Master Key;
identify the appropriate tables and configure the Microsoft Azure deployment.
After implementing Stretch Database, you can monitor it from SQL Server Management Studio. In Object
Explorer, expand Databases, right-click the stretch-enabled database, point to Tasks, point to Stretch,
and then click Monitor to open the Stretch Database Monitor. This monitor shows information about
both the local and Azure SQL instances, along with data migration status.
Note: If, after implementing Stretch Database, you later decide to disable Stretch Database
for a table, you must migrate the data to the local instance before disabling—otherwise you will
lose any data in the stretch table.
You can also implement Stretch Database by using Transact-SQL statements. For more information on
using Transact-SQL to enable Stretch Database, see:
Enable Stretch Database for a database
https://aka.ms/K16o95
Stretch Database
https://aka.ms/nzo6ab
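As an outline of the Transact-SQL route, the following sketch enables the instance-level option, enables Stretch Database for the AdventureWorks database, and then stretches the Sales.OrderTracking table used in the lab. The Azure server name and the AzureStretchCred database-scoped credential are placeholders that you would replace with your own values:

EXEC sp_configure 'remote data archive', 1;
RECONFIGURE;

ALTER DATABASE AdventureWorks
    SET REMOTE_DATA_ARCHIVE = ON
        (SERVER = 'stretchserver.database.windows.net', CREDENTIAL = [AzureStretchCred]);

ALTER TABLE Sales.OrderTracking
    SET (REMOTE_DATA_ARCHIVE = ON (MIGRATION_STATE = OUTBOUND));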
Objectives
After completing this lab, you will be able to:
Password: Pa55w.rd
Task 2: Run the Stretch Database Advisor in the Data Migration Assistant
1. In the Data Migration Assistant, create a new project to run Stretch Database Advisor on the
localhost database instance and analyze only the AdventureWorks database.
2. Examine the results, noting that the Sales.OrderTracking and Sales.SalesOrderDetails tables
can be stretched to minimize costs and optimize storage.
Results: After this exercise, you will know which tables within the Adventure Works database are eligible
for Stretch Database.
2. In SQL Server Management Studio, enable Stretch Database for the MIA-SQL SQL Server instance.
Results: After this exercise, you will have Stretch Database implemented for the OrderTracking table.
Review Question(s)
Question: What are the advantages of SMB Fileshare storage over SAN storage?
Module 7
Planning to Deploy SQL Server on Microsoft Azure
Contents:
Module Overview 7-1
Lesson 1: SQL Server on Virtual Machines and Azure SQL Database 7-2
Module Overview
Microsoft® SQL Server® is integrated into the Microsoft cloud platform, Microsoft Azure®. It is possible to
use SQL Server in Azure in two ways:
Objectives
At the end of this module, you will be able to:
Describe the options that are available to run SQL Server on Azure.
Lesson 1
SQL Server on Virtual Machines and Azure SQL Database
In this lesson, you will explore the differences between the options for using SQL Server on Azure and
learn about the strengths of each model.
Lesson Objectives
At the end of this lesson, you will be able to:
Describe the options when you implement SQL Server on Azure virtual machines.
Choose whether to integrate SQL Server and Azure SQL Database with Azure Active Directory.
Two different licensing models are available to run SQL Server on Azure virtual machines:
Platform-provided SQL Server image. In this model, you use a virtual machine image that Azure
provides, which includes a licensed copy of SQL Server. Licensed virtual machine images are available
with SQL Server versions 2008 R2, 2012, 2014, 2016, and 2017 in Web Edition, Standard Edition, or
Enterprise Edition. The billing rate of the image incorporates license costs for SQL Server and varies,
based on the specification of the virtual machine and the edition of SQL Server. The billing rate does
not vary by SQL Server version.
Bring-your-own SQL Server license. In this model, you use a SQL Server license that you already
own to install SQL Server on an Azure virtual machine. You may install the 64-bit edition of SQL
Server 2008, 2008 R2, 2012, 2014, 2016, or 2017. This license model is only available to Microsoft
Volume Licensing customers who have purchased Software Assurance.
Note: With SQL Server support for Linux, you can now run SQL Server 2017 on Red Hat
Enterprise Linux, SUSE Enterprise Linux Server, or Ubuntu virtual machines.
Regardless of the license model that you select, and in addition to any SQL Server licensing costs, you will
be billed for:
Azure virtual machines have a service level agreement (SLA) of 99.95 percent availability. Note that this
SLA does not extend to services (such as SQL Server) that run on Azure virtual machines.
For more information about licensing requirements for the bring-your-own SQL Server license model, see
License Mobility through Software Assurance on Azure in the Azure documentation:
License Mobility through Software Assurance on Azure
http://aka.ms/mb002x
For current information about pricing for Azure virtual machines and licensing costs for platform-provided
SQL Server images, see Linux Virtual Machines Pricing in the Azure documentation:
By default, when you create resources such as virtual machines and storage accounts in Azure, they are
organized into logical containers called resource groups. You should place all of the Azure resources that
are required to run an application into a single resource group, so that you can easily monitor them,
control access to them, and manage costs. For example, if you want to run a database application on
Azure virtual machines, you should place those virtual machines in a single resource group, along with the
storage accounts where those virtual machines maintain virtual hard disks, and along with other
infrastructure resources such as virtual networks (VNets) and load balancers.
Resources that are created in resource groups are deployed by using the Azure Resource Manager. You
can create templates that use Azure Resource Manager to make it easy to deploy in a single operation all
of the resources that an application needs. This is helpful, for example, when you want to deploy a single
application multiple times, such as in different physical locations or for staging and production.
In addition to the performance tier cost, you will be billed for outbound Internet traffic.
Azure SQL Database does not have a version or edition that is directly comparable to SQL Server
installations that you manage yourself. New features may be added to Azure SQL Database at any time. A
small subset of SQL Server features is not available to (or not applicable for) Azure SQL Database,
including some elements of Transact-SQL.
Azure SQL Database places limits on maximum database size. This size limit varies by performance tier,
with the highest service tier supporting databases that have a maximum size of 1 terabyte at the time of
writing.
The tools that are available to assist in the migration of databases to Azure SQL Database will be covered
later in this module.
For details of unsupported Transact-SQL features in Azure SQL Database, see Resolving Transact-SQL
differences during migration to SQL Database in the Azure documentation:
For current information about pricing for Azure SQL Database, see SQL Database Pricing in the Azure
documentation:
You want to run hybrid applications with some components in the cloud and some components on-
premises.
Azure SQL Database is recommended when:
You need high availability and automatic failover without any application downtime.
For a more detailed comparison of the two approaches, see the Choose a cloud SQL Server option:
Azure SQL (PaaS) Database or SQL Server on Azure VMs (IaaS) article in the Azure documentation:
Choose a cloud SQL Server option: Azure SQL (PaaS) Database or SQL Server on Azure VMs
(IaaS)
https://aka.ms/Pkq1sl
A similar arrangement can be created in Azure by separating web server virtual machines and SQL Server
virtual machines into two VNets. Instead of firewalls, you use Azure network security groups (NSGs) to
filter network traffic and determine which hosts can communicate with the virtual machines on each VNet.
For more information about Azure VNets, see Virtual networks in the Azure documentation:
Virtual networks
https://aka.ms/vuzuv3
Furthermore, you can synchronize an on-premises Active Directory forest with Azure Active Directory. In
this case, each user need only remember a single user name and password to access all resources in the
cloud and on your local network.
Single Database
Single database is the original performance
configuration option that is offered for Azure SQL
Database. In it, each database has its own service
tier.
For both single database and elastic database pool, Azure SQL Database service tiers divide into three
categories, as the following table shows.
Point-in-time restore. The premium tier offers the longest backup retention; the basic tier offers the
shortest.
Disaster recovery strategy. The premium tier offers multiple online replicas, the standard tier offers
a single offline replica, and the basic tier offers database restore.
In-memory tables. Only the premium tier supports in-memory tables.
SQL Database options and performance: Understand what’s available in each service tier
http://aka.ms/dr66jn
Elastic database pools have a pool of available eDTUs and a limit of eDTUs per database, configured by
the service tier. Overall performance of the pool will not degrade until all of the eDTUs that are assigned
to it are actively consumed.
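As an illustrative sketch only (the resource group, server, and pool names are placeholders), an elastic pool with a shared eDTU allowance and a per-database eDTU cap can be created with the AzureRM.Sql cmdlets:

# Create a Standard-tier pool with 100 eDTUs shared across its databases,
# allowing each database to consume between 0 and 20 eDTUs.
New-AzureRmSqlElasticPool -ResourceGroupName "SalesAppRG" -ServerName "sql2017ps-js123" `
    -ElasticPoolName "SalesPool" -Edition "Standard" -Dtu 100 `
    -DatabaseDtuMin 0 -DatabaseDtuMax 20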
For complete details about the benchmark tests that are used to calculate DTUs, see Azure SQL Database
benchmark overview in the Azure documentation:
Demonstration Steps
1. Ensure that the MT17B-WS2016-NAT, 20765C-MIA-DC, and 20765C-MIA-SQL VMs are running and
log on to 20765C-MIA-SQL as ADVENTUREWORKS\Student with the password Pa55w.rd.
3. Sign in to the Azure portal with your Azure Pass or Microsoft Account credentials.
4. In the Azure portal, click New, then in the Search box, type SQL Server 2017.
6. In the SQL Server 2017 Enterprise Windows Server 2016 pane, in the Select a deployment model
box, click Resource Manager, and then click Create.
7. On the Basics blade, in the Name box, type a name for your server of up to 15 characters. This must
be unique throughout the whole Azure service, so cannot be specified here. A suggested format is
sql2017vm-<your initials><one or more digits>. For example, sql2017vm-js123. Keep a note of
the name you have chosen.
11. Change the value of the Location box to a region near your current geographical location, and click
OK.
12. In the Choose a size blade, click View all then click DS11 Standard, and then click Select.
15. On the Summary blade, click Create. Deployment may take some time to complete.
16. When deployment is complete, click Virtual machines, then click the name of the machine you
created in step 7.
17. In the server name blade, click Connect, then click Open.
19. In the Windows Security dialog box, click More choices, then click Use a different account.
20. In the User name box, type \demoAdmin, in the Password box, type Pa55w.rd1234, and then click
OK.
22. Inside the remote desktop session, click Start, click Microsoft SQL Server Tools 17, click SQL
Server Management Studio 17 (SSMS), and then connect to the VM instance of SQL Server. Demonstrate
that it’s a standard SQL Server installation.
23. Close the Remote Desktop connection, and then delete the Azure Virtual Machine.
Verify the correctness of the statement by placing a mark in the column to the right.
Statement Answer
Lesson 2
Azure Storage
In this lesson, you will learn about Azure virtual machines, including the sizes available, virtual disks that
you can add, database page compression, and instant file initialization.
Lesson Objectives
After completing this lesson, you will be able to:
Describe the different sizes of Azure virtual machines that are available.
Describe the different types of virtual disks that are available for Azure virtual machines.
Maximum number of data disks. Data disks can be created as binary large objects (BLOBs) in Azure
storage accounts. Any given virtual machine size includes a maximum number of data disks.
Maximum data disk throughput. This is measured in input/output operations per second.
Maximum network interface cards or network bandwidth. Using more network cards can enable
you to increase the network bandwidth that the virtual machine can utilize.
Note: The ACU should only be used as a guideline. It has been designed as a shortcut to
understanding the relative computing power of virtual machines, to help people make decisions
about which size of virtual machine to use.
Size Series
Azure virtual machines are divided into different size series. This is designed to help you choose the best
machine for your needs. Some examples are described below:
Faster processors.
A high memory-to-core ratio.
Other size series are available, and some virtual machine sizes are periodically retired. For more information about the
sizing of Azure virtual machines, see Sizes for Windows virtual machines in Azure in the Azure
documentation:
Sizes for Windows virtual machines in Azure
https://aka.ms/xecbxs
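To compare the sizes that are available in a particular region before you choose one, you can list them with PowerShell. This is a minimal sketch and the region name is only an example:

# List every virtual machine size offered in the chosen region,
# including core count, memory, and maximum data disk count.
Get-AzureRmVMSize -Location "West Europe" |
    Sort-Object NumberOfCores |
    Format-Table Name, NumberOfCores, MemoryInMB, MaxDataDiskCount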
Temporary Disk
This is not used for storing data; rather, it stores
pagefile.sys. The temporary disk is created automatically and labeled Drive D. The size of the virtual
machine that you have chosen determines the size and type of the temporary disk. You should never store
data on the temporary disk unless you are happy to lose the data. For example, in the event of your
virtual machine being moved to a new host, perhaps because of hardware failure, a new temporary disk
will be created and any data will not be moved.
Any storage space that is used on the temporary disk is not chargeable.
Data Disks
You can add data disks to your virtual machine, and the number of disks that you can add depends on the
size of your virtual machine. You can choose the size of the data disk that you want to add, up to a
maximum of 1,023 GB. Data disks can be either premium or standard. Standard storage uses hard disk
drives (HDDs), and premium storage uses SSDs.
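As a hedged example (the resource group, virtual machine, and disk names are placeholders), you could attach an empty 1,023 GB data disk to an existing virtual machine with the AzureRM.Compute cmdlets:

# Get the virtual machine, add an empty data disk definition, and apply the change.
$vm = Get-AzureRmVM -ResourceGroupName "SalesAppRG" -Name "sql2017vm-js123"
Add-AzureRmVMDataDisk -VM $vm -Name "sqldata1" -Lun 0 -Caching ReadOnly `
    -DiskSizeInGB 1023 -CreateOption Empty
Update-AzureRmVM -ResourceGroupName "SalesAppRG" -VM $vm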
Data Compression
SQL Server supports compressed data for both
rowstore and columnstore tables and indexes.
Compression not only reduces the disk space
required to store data, but can also improve
performance. Because compressed data is stored
in fewer pages, the time required to fetch data
from the disk is reduced, thereby improving
performance. Compression does, however, require
CPU processing to compress and decompress the
data. Over the years, CPU power has increased
substantially, whilst disk seek and transfer times
have improved at a slower rate, so compression
makes good use of CPU power.
Note: The number of rows stored remains the same, despite data being compressed.
However, more rows can be stored on a page with compression.
For rowstore tables and indexes, using compression both reduces the size of the database and for high
I/O workloads, also improves performance. This is a trade-off, however, because compression means that
fewer pages are read, but CPU resources are used to compress and decompress data.
You can compress tables without an index (a heap), tables with a clustered index, nonclustered indexes,
and indexed views. Partitioned tables and indexes can also be compressed, with individual partitions using
row, page, or no compression, as required.
Compressing data will change the way queries perform, because the query plan for compressed data will be
different from the plan for the same query accessing uncompressed data. This is because compressed data is
stored in fewer pages.
There are other restrictions and considerations when compressing data, for more information see the
MSDN article, Data Compression.
For an overview of compression in Azure SQL Database, see Data Compression on MSDN:
Data Compression
https://aka.ms/mn5fcw
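As a brief illustration (the database and table names are hypothetical, and Invoke-Sqlcmd from the SqlServer PowerShell module is assumed), you can estimate the benefit of compression and then enable page compression on a rowstore table:

# Estimate how much space page compression would save for dbo.SalesOrders,
# then rebuild the table with page compression.
Invoke-Sqlcmd -ServerInstance "MIA-SQL" -Database "SalesDB" -Query @"
EXEC sp_estimate_data_compression_savings
    @schema_name = 'dbo', @object_name = 'SalesOrders',
    @index_id = NULL, @partition_number = NULL, @data_compression = 'PAGE';
ALTER TABLE dbo.SalesOrders REBUILD WITH (DATA_COMPRESSION = PAGE);
"@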
Instant file initialization skips the step of zeroing the disk space, and instantly uses the disk space without
first writing zeros. Instant file initialization can only be used for data files, not log files.
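As a hedged check (the column shown is available in SQL Server 2016 SP1 and later), you can confirm whether the Database Engine service account can use instant file initialization by querying sys.dm_server_services:

# Returns 'Y' in instant_file_initialization_enabled when the service account
# holds the Perform Volume Maintenance Tasks privilege.
Invoke-Sqlcmd -ServerInstance "MIA-SQL" -Query `
    "SELECT servicename, instant_file_initialization_enabled FROM sys.dm_server_services;"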
Lesson 3
Azure SQL Server Authentication
In this lesson, you will learn about how security is managed in Azure SQL Database.
Lesson Objectives
After completing this lesson, you will:
Understand the layered security model that Azure SQL Database uses.
o Connection.
o Authenticating users.
Encrypting Data
Data is vulnerable to attack at various points,
including when it is stored in the database and as
it is communicated to users. Data encryption provides the highest level of protection, providing that the
keys are held securely. Azure SQL Database provides options to encrypt data at all points:
During communication, using Transport Layer Security (TLS). Azure SQL Database manages the
encryption keys, including rotation.
While it is stored, using TDE. TDE can be enabled or disabled through the Azure portal or by using
Windows PowerShell®.
Data in use, using Always Encrypted. This is set at the column level, either through the Azure portal
or by using Windows PowerShell.
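For example, a hedged sketch of enabling TDE (the second option above) for a single database with the AzureRM.Sql module, using placeholder resource group, server, and database names, looks like this:

# Turn on transparent data encryption for one database; Azure manages the keys.
Set-AzureRmSqlDatabaseTransparentDataEncryption -ResourceGroupName "SalesAppRG" `
    -ServerName "sql2017ps-js123" -DatabaseName "SalesOrders" -State "Enabled"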
Firewall rules. These allow or disallow access to the Azure server (or Azure SQL Database) based on
IP addresses. Tight firewall rules are a first-line defense against hacking.
Authentication. This occurs either through SQL Server Authentication or through Azure Active
Directory.
Best Practice: Grant users the least permissions necessary. Security is often a trade-off
between convenience and protecting data. Grant the least permissions necessary for users to
complete their work, and grant additional permissions temporarily for exceptional tasks.
Connection Security
Connection security is managed by using firewall
rules. This limits access to the database server only
to those computers that have been identified by
using IP addresses. Many companies use a range
of fixed IP addresses, and these are used to create
firewall rules. Any computer that accesses a server
running Azure SQL Database, or an Azure virtual
machine, must be using an IP address within one
of the firewall rules.
It is possible to set firewall rules at either the
server level or the database level.
If you create firewall rules by using the Azure
portal, you will be prompted to add the current IP address, or add new firewall rules. Alternatively, you
can create firewall rules by using Windows PowerShell.
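As a minimal sketch (the names and the address range are examples only, not course values), a server-level firewall rule for a company's fixed public IP address range can be created with PowerShell:

# Allow connections from the company's fixed public IP address range.
New-AzureRmSqlServerFirewallRule -ResourceGroupName "SalesAppRG" `
    -ServerName "sql2017ps-js123" -FirewallRuleName "HeadOfficeRange" `
    -StartIpAddress "203.0.113.1" -EndIpAddress "203.0.113.254"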
Authentication
Like the on-premises version of SQL Server, Azure
SQL Database supports more than one
authentication method. In addition to SQL Server
Authentication, you can now connect to Azure SQL
Database by using Azure Active Directory
Authentication. This means that each time users
connect to Azure SQL Database, they can be
authenticated by using Azure Active Directory credentials.
When you provision a new server running Azure SQL Database, several steps are completed:
A server running Azure SQL Database is created, unless you have specified an existing server.
A server-level login is created with the credentials that you specified. This is similar to the sa login
that is created for on-premises SQL Server instances. You can then create additional logins as
required.
For more information about Azure Active Directory Authentication, see Use Azure Active Directory
Authentication for authentication with SQL Database or SQL Data Warehouse in the Azure documentation:
Use Azure Active Directory Authentication for authentication with SQL Database or SQL Data
Warehouse
https://aka.ms/cz224r
Authorization
SQL Server grants permissions to users to view or
work with a securable. Securables are things such
as tables, views, and stored procedures. Users must
have the appropriate permissions to work with
various securables.
Access is allowed or denied to logins and users.
Transact-SQL commands and system stored procedures for managing access include:
GRANT
REVOKE
DENY
sp_addrolemember
sp_droprolemember
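As a short illustration of granting least-privilege access (the user, schema, and credentials are hypothetical, and Invoke-Sqlcmd from the SqlServer module is assumed), you might create a contained database user and grant it only the permissions it needs:

Invoke-Sqlcmd -ServerInstance "sql2017ps-js123.database.windows.net" -Database "SalesOrders" `
    -Username "psUser" -Password "Pa55w.rd" -Query @"
CREATE USER ReportingUser WITH PASSWORD = 'Str0ng!Passw0rd';
ALTER ROLE db_datareader ADD MEMBER ReportingUser;   -- read-only access to tables and views
GRANT EXECUTE ON SCHEMA::Reports TO ReportingUser;   -- run only the reporting procedures
"@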
In the Azure portal, use the Dashboard or All Resources to select your server running Azure SQL
Database.
Ensure that Inherit settings from server is selected, and then click View server auditing settings.
In the Auditing & Threat Detection blade, for Auditing, click ON.
Choose Auditing type—either Blob or Table. Audit logs are written to either BLOB storage or table
storage. BLOB storage gives higher performance.
Select an Azure storage account. This is where log files will be saved.
Select the Azure storage account where the log files will be saved, and then in Retention (Days),
select the number of days for which the files will be retained. Log files will be deleted after the
number of days that has been selected.
You can switch Storage key access between Primary and Secondary to allow key rotation. Leave it
at Primary when you are setting up auditing.
Click OK.
Leave Email service and co-administrators selected, and then add Email addresses to enable email
alerts to be sent.
Click Save. Events will be logged for all databases on the server.
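Auditing can also be configured with PowerShell. The following is a hedged sketch only; the exact cmdlet and parameter names have changed across AzureRM.Sql module versions, and the resource group, server, and storage account names are placeholders:

# Enable blob auditing for every database on the server, retaining logs for 90 days.
Set-AzureRmSqlServerAuditing -ResourceGroupName "SalesAppRG" -ServerName "sql2017ps-js123" `
    -State Enabled -StorageAccountName "salesauditstore" -RetentionInDays 90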
Analyzing Logs
You can use Azure Storage Explorer to view the logs in your storage account. This is discussed in a later
module in this course.
If you have chosen BLOB storage, log files are saved in a container called sqldbauditlogs.
In the Auditing & Threat Detection blade, select View audit logs.
For a good explanation about Azure logs, together with links to other articles of interest, see Get started
with SQL database auditing in the Azure documentation:
https://aka.ms/m5gt1b
Question: To what degree is your organization concerned with security when it considers
moving data to the cloud? Do you think the security measures that are available in Azure
SQL Database adequately answer those concerns? If not, what concerns do you feel have not
been addressed?
Lesson 4
Deploying Databases in Azure SQL Database
Before you use Azure SQL Database, it is important to understand the performance and pricing model
that is used for the service.
Azure SQL Database is offered on a platform as a service (PaaS) model, so the performance and, correspondingly, the price of
the service is measured in DTUs. Different performance levels are offered by different service tiers; the
higher the performance of a service tier, the greater the number of DTUs it supports.
For current information about pricing for Azure SQL Database, see SQL Database Pricing in the Azure
documentation:
http://aka.ms/tskc9i
Lesson Objectives
At the end of this lesson, you will be able to:
Describe the performance tiers that are available for Azure SQL Database.
Explain how tier performance is measured by using DTUs.
Demonstration Steps
1. On MIA-SQL, start Windows PowerShell.
Add-AzureRmAccount
When prompted press y, and then press Enter. When the sign-in screen appears, use the same email
and password you use to sign in to the Azure portal. If you have already linked your Azure account
to PowerShell on this VM, use the command:
Login-AzureRMAccount
3. Use the subscription Id returned in the output of the previous step and run the following cmdlet:
(replace <your subscription id> with the GUID value returned by the previous step.)
4. Run the following cmdlet to return the list of Azure data center locations supporting SQL Database:
5. Run the following cmdlet to create a resource group. Substitute a location near your current
geographical location from the result returned by the previous step for <location>:
6. Run the following cmdlet to create a server in the new resource group. Substitute the location used in
the previous step for <location>. Substitute a unique server name for <your server name>; This must
be unique throughout the whole Azure service, so cannot be specified here. A suggested format is
sql2017ps-<your initials><one or more digits>. For example, sql2017ps-js123.
In the credential request dialog box, type the User name psUser and the password Pa55w.rd. This
step may take a few minutes to complete.
7. Run the following cmdlets separately to create a firewall rule to allow your current client to connect
to the server. Substitute the server name created in the previous step for <your server name>.
Substitute your current external IP address for <your external ip>. You can get your current external
IP address from the Azure Portal (see the value returned by the "Add Client IP" button on the firewall
for an existing server), or from third party services such as Google (search for "what is my ip") or
www.whatismyip.com:
8. Run the following cmdlet to create a database on the new server. Substitute the server name created
in a previous step for <your server name>.
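As a hedged sketch of the kind of cmdlets these steps describe (DemoRG is a placeholder resource group name, and the exact parameters used in the course files may differ), the sequence for steps 3 to 8 typically looks like this:

# Step 3: select the subscription to work in.
Set-AzureRmContext -SubscriptionId "<your subscription id>"

# Step 4: list the data center locations that are available.
Get-AzureRmLocation | Select-Object Location, DisplayName

# Steps 5 and 6: create a resource group and a logical server (you are prompted for the psUser credentials).
New-AzureRmResourceGroup -Name "DemoRG" -Location "<location>"
New-AzureRmSqlServer -ResourceGroupName "DemoRG" -ServerName "<your server name>" `
    -Location "<location>" -SqlAdministratorCredentials (Get-Credential)

# Step 7: allow your client machine through the server firewall.
New-AzureRmSqlServerFirewallRule -ResourceGroupName "DemoRG" -ServerName "<your server name>" `
    -FirewallRuleName "ClientIP" -StartIpAddress "<your external ip>" -EndIpAddress "<your external ip>"

# Step 8: create the database used in the next demonstration.
New-AzureRmSqlDatabase -ResourceGroupName "DemoRG" -ServerName "<your server name>" -DatabaseName "TestPSDB"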
Demonstration Steps
1. Open Internet Explorer and go to https://portal.azure.com/.
2. Sign in to the Azure portal with your Azure Pass or Microsoft Account credentials.
3. Click SQL Databases, then click TestPSDB. Note the value of Server name, and then click Show
database connection strings.
5. In the Connect to Server dialog box, type the server name noted in the previous step (it will take the
form <your server name>.database.windows.net).
6. Set Authentication to SQL Server Authentication. In the Login box, type psUser, in the Password:
box, type Pa55w.rd, and then click Connect.
7. In Object Explorer, expand the Databases node to show the TestPSDB database.
11. Select the script below the heading 2. execute the following query, and then click Execute.
12. Open Windows PowerShell, and then type the command below the heading 3. Open Windows PowerShell
and type the following command into the Windows PowerShell window, replacing <your server
name> in the command with the server name used in step 3.
13. At the Password prompt, type Pa55w.rd, and then press Enter.
14. Close PowerShell, close SSMS without saving any changes, and then close Internet Explorer.
Objectives
After completing this lab, you will be able to:
Password: Pa55w.rd
2. Users should only be able to access the SalesOrders database from their IP address.
5. Note down the server level firewall rules and database level firewall rules that you require.
Results: After this exercise, you should have planned the performance levels and firewall settings for your
Azure SQL Database.
2. Create a new empty database in Azure SQL Database. Add the database to a new logical server by
using a name of your choosing.
3. The new logical server should use the admin login name salesorderadmin with the password
Pa55w.rd.
4. The performance level should match the requirements of the previous exercise.
Results: After this exercise, you will have created an empty Azure SQL Database and configured server
firewall rules.
Results: After this exercise, you will have connected to your database and configured server and database
firewall rules.
Review Question(s)
Question: Are Azure database services suitable for your organization?
Is Azure SQL Database or SQL Server on Azure virtual machines more suitable for you?
Module 8
Migrating Databases to Azure SQL Database
Contents:
Module Overview 8-1
Lesson 1: Database Migration Testing Tools 8-2
Module Overview
SQL Server® and Azure SQL Database are closely related products and continue to get closer in terms of
features and supported Transact-SQL keywords. However, because they are designed to work in different
environments, they are not identical. Therefore, when you migrate a database from SQL Server to Azure
SQL Database, you might find some incompatibilities. In this module, you will learn how to detect and
resolve such issues and how to migrate a compatible database into the cloud.
Objectives
At the end of this module, you will be able to:
Choose a tool to use to test a database for compatibility with Azure SQL Database.
Describe reasons why a database hosted in SQL Server might not be compatible with Azure SQL
Database.
Lesson 1
Database Migration Testing Tools
Azure SQL Database does not support all the features and Transact-SQL statements that are supported by
SQL Server. This is because the two products are designed for different environments. The differences can
cause problems when you migrate a database into the cloud from an on-premises server. The first stage
of any migration project is, therefore, to investigate the features of your database that are incompatible
with Azure SQL Database. In this lesson, you will see four different tools that you can use for this
investigation.
Lesson Objectives
At the end of this lesson, you will be able to:
List the main stages in a project to migrate a database from an on-premises SQL Server to Azure SQL
Database.
Use SQL Server Data Tools for Visual Studio® to investigate a database’s compatibility with Azure
SQL Database.
To perform database migrations smoothly, the following project stages are recommended.
SqlPackage.
You will learn about each of these tools in detail later in this lesson.
If you have used SSDT to identify issues, you can also use that tool to resolve those issues. In this method,
you still fix issues manually by altering the Transact-SQL Data Definition Language (DDL) statements that
create the database in Azure SQL Database. However, the tool provides more help for each issue.
In this step, you can create a BACPAC file to import into Azure. Another option is a SQL Server Integration
Services (SSIS) package.
You will learn more about these approaches in Lesson 3.
3. Click the Add SQL Server button and connect to the on-premises database server that hosts the
database you want to test.
5. In the Create New Project dialog, under Import Settings, select Import application-scoped
objects only.
6. Clear Import referenced logins, Import permissions, and Import database settings, and then
click Start. SSDT imports the database and creates a script file for each object in the database.
8. In the Solution Explorer, right-click the database project, and then click Properties.
9. On the Project Settings page, in the Target Platform drop-down list, select Microsoft Azure SQL
Database V12.
10. In the Solution Explorer, right-click the project, and then click Build. If SSDT detects any
incompatibilities, they are listed in the Error List window with a description.
To download the latest version of SSDT and to find more information, see:
SqlPackage
SqlPackage is a command-line utility that you can
use for exporting and importing operations in
both on-premises SQL Server databases and in
cloud databases. SqlPackage supports the
following operations:
Extract. Creates a database snapshot DACPAC
file from a SQL Server database or from Azure
SQL Database.
Publish. Updates the schema in a live
database to match the schema in a DACPAC
file. If the database does not exist on the
destination server, the publish operation
creates it.
Export. Exports both schema and data from a SQL Server database or from Azure SQL Database into
a BACPAC file.
Import. Imports the schema and data from a BACPAC into a new database.
DeployReport. Creates an XML report that describes the changes that would be made by a publish
operation.
DriftReport. Creates an XML report of the changes that have been made to a registered database.
Script. Creates a Transact-SQL script that you can use to update the schema of a target database to
match the schema of a source database.
Use the /Action: or /a parameter to specify which action to execute. For more information about the
SqlPackage utility, see:
SqlPackage.exe
https://aka.ms/tacja7
Before you use SqlPackage to check compatibility with Azure SQL Database, you should ensure that you
have the latest version. You can download the latest version of SqlPackage as part of the Microsoft SQL
Server Data-Tier Application Framework:
To check for Azure SQL Database compatibility, export a single table from the source database and create
a report file. You then check the report file, which contains information about any errors, including
incompatibilities.
The following SqlPackage command exports a table from the AdventureWorks database on the local SQL
Server. It creates a report file called ExportReport.txt. The “2>&1” string ensures that the report file
contains both standard output and standard errors.
A SqlPackage Example
sqlpackage.exe /Action:Export /ssn:localhost\SQLExpress /sdn:AdventureWorks
/tf:ExportedDatabase.bacpac /p:TableData=dbo.errorlog > ExportReport.txt 2>&1
To use the Export Data-tier Application Wizard to list Azure SQL Database compatibility issues:
1. Open SSMS and connect to the SQL Server instance that hosts the database you want to check.
3. Right-click the database you want to test, click Tasks, and then click Export Data-tier Application.
5. On the Export Settings page, on the Settings tab, configure the export to save the BACPAC file in
any location on the local disk or in Azure.
6. Click the Advanced tab, clear the Select All check box to avoid exporting any data, and then click
Next.
7. Click Finish. If there are any compatibility errors, they are listed on the results page. You can obtain
more information by clicking the link in the Results column. If there are no errors, the wizard
generates a BACPAC file and you can continue with the migration.
The Export Data-tier Application Wizard detects and displays compatibility issues but cannot fix them. You
must use a different tool to resolve the issues before you migrate the database.
Demonstration Steps
1. Run Setup.cmd in the D:\Demofiles\Mod08 folder as Administrator.
4. Start a command prompt. Type the following command, and then press Enter:
Notepad D:\Demofiles\Mod08\ExportReport.txt
7. Examine the contents of the text file, and then close Notepad.
Lesson 2
Database Migration Compatibility Issues
The tools described in the previous lesson can identify issues that prevent you from migrating a database
to Azure SQL Database from an on-premises SQL Server. However, it is also helpful for you to be aware of
the issues that might arise and how you can fix them. In this lesson, you will see a range of features and
Transact-SQL operations that are not supported in Azure SQL Database, so that you can ensure migrations
execute as smoothly as possible.
Lesson Objectives
At the end of this lesson, you will be able to:
Describe how the design and operating environment differs in SQL Server on-premises and Azure SQL
Database in the cloud.
Identify the features of SQL Server that are not supported in Azure SQL Database and select
alternatives.
Describe Transact-SQL statements that are not supported in Azure SQL Database.
Describe the features of Microsoft SQL Server 2017, SQL Server 2016, 2014, and 2012 that are
discontinued.
Design Differences
When you run a database on-premises in SQL Server, you are responsible for the entire infrastructure,
including the operating system, hardware, and server configuration. Within SQL Server, you must operate
several system databases, such as the master database and tempdb. As a PaaS offering, Azure SQL
Database is designed to free administrators from concerns about operating systems, hardware, and
system databases, so that they can focus on user databases that directly support business requirements.
For this reason, many server-level features, and the Transact-SQL statements that configure them, are
unavailable in Azure SQL Database.
which becomes the new primary replica. In this way, users can continue to work, unaffected by whatever
problem caused the original failure. Always On is an excellent feature for business-critical databases but
requires significant effort in terms of planning, configuration, and maintenance. Several Transact-SQL
statements, such as CREATE AVAILABILITY GROUP, exist solely to configure Always On.
Azure SQL Database does not support Always On because the server hardware and infrastructure is
maintained by Microsoft and not the tenant’s database administrators. Instead, you can choose to use
active geo-replication for any database. In this case, the database is automatically replicated to a data
center in another physical location for the purposes of resilience and availability. However, this feature is a
simple on/off option. No complex configuration procedure or Transact-SQL script is required to configure
it.
The following list shows SQL Server features that are not available in Azure SQL Database, what each feature is used for, and the recommended alternative:

Attaching and detaching databases. You can use this feature to copy, move, or upgrade a SQL Server database. Alternative: use other methods, such as BACPAC files, to move databases.

Change Data Capture. You can use this feature to capture insert, update, and delete operations. Alternative: no current equivalent.

Common Language Runtime (CLR). Use classes in the .NET Framework to create database objects. Alternative: use Transact-SQL instead.

FILESTREAM. Store Binary Large Objects (BLOBs) on the file system instead of the database. Alternative: store BLOBs within the Azure SQL Database.

Minimal Logging in Bulk Imports. Ensures that transaction logs are not flooded during large data imports. Alternative: transaction logs are maintained automatically.

Modifying System Data. Update system objects by using the SQL-SMO API or system stored procedures. Alternative: not permitted, because the system configuration is maintained by Microsoft.

Service Broker. Native support for messaging and queuing. Alternative: use Azure Service Bus queues and messaging instead.

SQL Server Analysis Services. Create multidimensional or tabular data models for pivot-table analysis. Alternative: use Azure Analysis Services instead.
Azure SQL Database continues to evolve. For an up-to-date and complete comparison of SQL Server and
Azure SQL Database features, see:
Any Transact-SQL statements that relate to database files. These are automatically managed in Azure
SQL Database.
HAS_DBACCESS.
SET REMOTE_PROC_TRANSACTIONS.
SHUTDOWN.
Trace flags.
Transact-SQL debugging.
USE statements. To change databases, you must disconnect, and then reconnect to the new database.
These differences are likely to change as Azure SQL Database evolves. For the latest information, see:
Note: New Azure SQL databases have the READ_COMMITTED_SNAPSHOT option set to
ON. This setting can significantly alter the function of your database code if you use the default
READ COMMITTED transaction isolation level and rely on locking to manage concurrent activity.
For more information about the effects of the READ_COMMITTED_SNAPSHOT setting, see the
topic SET TRANSACTION ISOLATION LEVEL (Transact-SQL) in the SQL Server online
documentation:
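As a quick, hedged check of this setting after migration (the connection details are assumptions, and Invoke-Sqlcmd from the SqlServer module is assumed), you can query sys.databases from within the migrated database:

# is_read_committed_snapshot_on is 1 for new Azure SQL databases.
Invoke-Sqlcmd -ServerInstance "<your server name>.database.windows.net" -Database "<your database>" `
    -Username "psUser" -Password "Pa55w.rd" `
    -Query "SELECT name, is_read_committed_snapshot_on FROM sys.databases WHERE name = DB_NAME();"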
The ActiveX Subsystem. You can replace any custom ActiveX code with command-line or
PowerShell™ scripts.
Compatibility Level 80. Use the ALTER DATABASE command to set the compatibility level to 100 or
higher.
The WITH APPEND clause on triggers. You must recreate the trigger from scratch instead of
appending code.
SQL Mail. For later versions of SQL Server, use database mail. For Azure SQL Database, there is no
support for email generation within the database system.
Transact-SQL COMPUTE and COMPUTE BY statements. Use ROLLUP instead.
Question: You are migrating a database from SQL Server 2012 to Azure SQL Database. The
database is hosted on two database servers and is synchronized by using database mirroring.
What feature should you use to replace database mirroring in Azure SQL Database?
Lesson 3
Migrating an On-Premises Database to an Azure SQL
Database
When you have identified and fixed any features of your database that are not compatible with Azure SQL
Database, you can proceed to migrate the database. The method you choose for this operation will
depend on your requirements. For example, if you cannot support an interruption in service to users, you
must use transactional replication. In this lesson, you will see several different tools and methods for
migrating database schema and contents.
Lesson Objectives
At the end of this lesson, you will be able to:
Choose an appropriate method to migrate a database from SQL Server to Azure SQL Database.
Describe the contents and use of DACPAC and BACPAC files.
Migration Options
When you have identified and resolved any
incompatibilities with Azure SQL Database, you
can proceed to execute the migration. There are
many methods and tools you can use for this
process.
If you can support downtime during the migration, choose from the following migration methods:
SqlPackage. In lesson 1, you saw how to use this command-line tool to check database compatibility.
By using different options, you can publish a database to Azure SQL Database instead.
SSMS. You can use the Export Data-tier Application Wizard and the Import Data-tier Application
Wizard in SSMS to create a BACPAC file and import it into Azure SQL Database.
Data Tools for Visual Studio. You can choose Azure SQL Database as a location to publish a
database project from Visual Studio.
BACPAC files with bcp.exe. The bcp.exe command-line tool can bulk import data into an
Azure SQL Database.
In this method, you configure an Azure SQL Database as a subscriber to the database you wish to migrate
in SQL Server on-premises. The transaction replication distributor automatically synchronizes any changes
to the schema and content of the database to Azure. You can then reconfigure clients to use the Azure
database and disable transaction replication when all transactions have reached Azure.
For the latest information about Azure SQL Database migration, see:
File Structure
A DACPAC file is a compressed file with a .dacpac extension. It contains multiple XML sections that
describe the original server instance, the management objects, and other characteristics. You can use the
DacUnpack.exe utility to examine the contents of a DACPAC file.
A BACPAC file is also a compressed file but has a .bacpac extension. The XML schema is identical to that
for DACPAC files. The difference is that table data from the original database is included in JavaScript
Object Notation (JSON) format.
DACPACs are used to deploy and upgrade the design of a database. For example, if you have a database
in a development environment that contains test data, you can use a DACPAC to deploy to a production
environment without deploying the test data.
In most cases, when you migrate a database from SQL Server to Azure SQL Database, you are migrating
from one production environment to another—so you want to migrate the data, in addition to the
database design. For these operations, use a BACPAC.
To create a BACPAC from a SQL Server database called AdventureWorks on the local computer’s
SQLExpress instance, use the following command:
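As a hedged example that follows the SqlPackage parameters shown in Lesson 1 (the output file name is arbitrary):

sqlpackage.exe /Action:Export /ssn:localhost\SQLExpress /sdn:AdventureWorks /tf:AdventureWorks.bacpac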
To create a BACPAC file by using SSMS, use the Export Data-tier Application Wizard:
1. Open SSMS and connect to the SQL Server instance that hosts the database you want to migrate.
5. On the Export Settings page, on the Settings tab, configure the export to save the BACPAC file in
any location on the local disk, or in Azure, and then click Next.
To create a DACPAC file by using SSMS, use the Extract Data-tier Application Wizard with similar options.
Alternatively, if you connect SSMS to the Azure SQL Database logical server, you can use the Import Data-
tier Application Wizard to create the database in Azure.
If you do not know that the source and destination database schemas are identical, you should create and
use a format file. A format file stores information about the schema of the source database tables,
including the field names, data types, and column orders. During the import operation, bcp.exe can use
this format file to match columns in the text file with the destination database columns.
To export the data in the Employees table, use the following command. The -T option specifies that you
will authenticate against SQL Server using your current Windows account:
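A hedged example of such an export, assuming the AdventureWorks database on the local SQLExpress instance and an arbitrary character-format output file name:

bcp AdventureWorks.dbo.Employees out Employees.dat -S localhost\SQLExpress -T -c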
To import the data into Azure SQL Database, use the following command where myserver is the name of
your Azure SQL Database logical server, username is the name of your Azure user account and password is
your password:
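A hedged example of the corresponding import (SalesDB is a placeholder for the target database name):

bcp SalesDB.dbo.Employees in Employees.dat -S myserver.database.windows.net -U username -P password -c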
Note that the above commands migrate only the Employees table into Azure SQL Database. To migrate
an entire database, you can create a batch file of multiple bcp.exe commands.
bcp Utility
https://aka.ms/Abkcfx
2. Use the Windows Explorer zip utility to open the .zip file and extract all .bcp files from it.
3. Use bcp.exe commands to import the .bcp files for each table into Azure SQL Database tables.
For more information about using bcp.exe with Azure SQL Database, see:
This method focuses on transactional replication for migration to Azure. For more information about
replication, see:
SQL Server Replication
https://aka.ms/Qudq7k
Transactional replication is implemented by setting up SQL Server Agent to execute publishing and
distribution executables. SQL Servers take three roles in the architecture:
Distributors. A distributor is a SQL Server that creates a distribution database and a log file.
Distributors propagate publications from publisher servers to subscriber servers.
Publishers. A publisher is a SQL Server that publishes a database or data objects for replication. A
publication consists of database snapshots that will be replicated to subscribed servers.
When you use replication for migration to Azure SQL Database, the SQL Server that hosts the unmigrated
database is the publisher. The Azure SQL Database logical server is the subscriber. You can choose any
SQL Server as a distributor. Often, the publisher and distributor are the same SQL Server.
For replication to Azure SQL Database, only one-way transactional replication is supported. You cannot
use peer-to-peer replication or merge replication. This is because Azure SQL Database cannot host SQL
Agent or run the replication executables. The distributor cannot be an Azure SQL Database logical server.
The Azure SQL Database logical server must be configured as a push subscriber.
3. Wait until all changes are replicated. When all clients are using Azure, the remaining changes from
on-premises databases will gradually replicate to the cloud database.
4. Uninstall transactional replication. When all clients have been moved to the cloud database and all
transactions have reached Azure, you can uninstall transactional replication. Azure SQL Database now
hosts your production database.
For more information about using transactional replication to migrate a database to Azure, see:
https://aka.ms/jx2m7i
Demonstration Steps
1. Start a command prompt. Type the following to generate an export BACPAC file for the TestPSDB
database:
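As a hedged sketch of such a command, using the SqlPackage parameters from Lesson 1 (the source instance and output path are assumptions, not the course's exact values):

sqlpackage.exe /Action:Export /ssn:MIA-SQL /sdn:TestPSDB /tf:D:\Demofiles\Mod08\TestPSDB.bacpac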
3. Type the following command to import the database to Azure SQL Database. Substitute <your server
name> with the name of the Azure server hosting the target database:
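A hedged sketch of the import (the credentials shown are the psUser account used earlier in the course, and the file path is an assumption):

sqlpackage.exe /Action:Import /tsn:<your server name>.database.windows.net /tdn:TestPSDB /tu:psUser /tp:Pa55w.rd /sf:D:\Demofiles\Mod08\TestPSDB.bacpac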
4. Verify that the import has completed successfully by connecting to the Azure SQL server <your
server name>.database.windows.net using SSMS, then expanding the database TestPSDB and
showing the tables that now exist.
5. Close SSMS, and then close the command prompt.
Verify the correctness of the statement by placing a mark in the column to the right.
Statement Answer
Objectives
After completing this lab, you will be able to:
Password: Pa55w.rd
Task 2: Run the Export Data-tier Application Wizard to Check Database Compatibility
1. Using SQL Server Management Studio, run a verification test on the salesapp1 database to
determine whether it is suitable for migration to Azure SQL Database. Don’t attempt to export any
data at this stage.
2. What is the name of the stored procedure that caused the verification test to fail?
Results: After this exercise, you should have run the tools to check database compatibility with Azure
from SSMS.
3. The new server should use the admin login name salesappadmin with the password Pa55w.rd1.
o To connect to the server, use the fully qualified name of the server you created in the first task in
this exercise (ending .database.windows.net).
2. Use SSMS to connect to the Azure server now hosting the salesapp1 database. Execute a sample
query to verify that the data has been transferred successfully. For example:
Results: After this task, you will have created an Azure SQL Database instance, migrated an on-premises
database to Azure SQL Database, and be able to connect to, and query, the migrated database.
Review Question(s)
Question: Are Azure database services suitable for your organization?
Is Azure SQL Database or SQL Server on Azure VMs more suitable for you?
Module 9
Deploying SQL Server on a Microsoft Azure Virtual Machine
Contents:
Module Overview 9-1
Lesson 1: Deploying SQL Server on Azure Virtual Machines 9-2
Module Overview
Microsoft® Azure SQL Database, which you saw in the previous module, is a platform as a service (PaaS)
product onto which you can deploy a database without taking responsibility for the underlying database
software and operating systems. However, you may need to make changes to a database to deploy it to
Azure SQL Database because some features of Microsoft SQL Server® are incompatible with it. If you want
to migrate a database to the cloud with the minimum number of changes, consider deploying to an
instance of SQL Server running on a Microsoft Azure® virtual machine. This environment is much more
closely analogous to the on-premises environment than Azure SQL Database. However, because it is an
infrastructure as a service (IaaS) product, you must take responsibility for the software and operating
systems. In this module, you will see how to set up SQL Server on Azure virtual machines and migrate
databases to those virtual machines.
Objectives
At the end of this module, you will be able to:
Describe how to license, patch, and protect SQL Server when it runs on one or more Azure virtual
machines.
Migrate a database from an on-premises server that runs SQL Server to a virtual machine in Azure.
Lesson 1
Deploying SQL Server on Azure Virtual Machines
Azure virtual machines can provide an environment that is very closely analogous to servers that are
running Windows Server® in an on-premises data center. The principal advantage of virtual machines is
that they free you from building and maintaining infrastructure such as server racks, uninterruptible
power supplies, network components, and so on. You can move SQL Server instances to Azure virtual
machines with little or no modification to the databases that they host. In this module, you will learn how
to replicate your on-premises SQL Server architecture in Azure by using virtual machines.
Lesson Objectives
At the end of this lesson, you will be able to:
Understand how to create and connect to SQL Server when it is hosted on a virtual machine in Azure.
Choose a highly available architecture for SQL Server hosted on virtual machines in Azure.
Describe how to protect a SQL Server virtual machine installation from disasters.
The disadvantage of migrating a database to a virtual machine is that you remain responsible for the
operating system and SQL Server. You must apply software updates, maintain security, configure
resilience, ensure that backups are taken, and so on. When you choose Azure SQL Database, by contrast,
you do not need to perform these functions.
The SQL Server Agent Extension for SQL Server Virtual Machines
Virtual machine extensions are small software packages that you can install on virtual machines to support
extra functionality. You can install the SQL Server Agent Extension for SQL Server virtual machines on a
virtual machine that runs Windows Server 2012 and SQL Server 2012, 2014, 2016, or 2017. It is included in
SQL Server images in the Azure Virtual Machine Gallery. This extension supports three features:
SQL Server Automated Backup. You can use this feature to back up databases automatically on the
default instance of SQL Server on a virtual machine.
SQL Server Automated Patching. You can use this feature to schedule times when updates can be
installed.
Azure Key Vault Integration. You can use this feature to install and configure Azure Key Vault
automatically on your SQL Server virtual machine. Azure Key Vault can centrally store and manage
encryption keys that are used in SQL Server functions.
If you created a SQL Server virtual machine that does not have the agent installed, you can use the
following Azure PowerShell command to install it. Substitute your own values for the
ResourceGroupName, VMName, and Location parameters.
Installing the SQL Server Agent Extension for SQL Server virtual machines
Set-AzureRmVMSqlServerExtension -ResourceGroupName "myresourcegroup" -VMName "MyVM" `
    -Name "SQLIaasExtension" -Version "1.2" -Location "North Europe"
Connect to SQL Server through the Internet. In this approach, port 1433 is opened on the
Windows® firewall. Clients are reconfigured with a connection string that includes the fully qualified
domain name (FQDN) of the server that runs SQL Server.
Connect to SQL Server through a virtual private network (VPN). In this approach, virtual
machines that are hosting SQL Server are placed in a virtual network within Azure. You can then
establish a VPN that encrypts all traffic between the virtual network and your on-premises network.
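As a minimal sketch of the first option, you might open TCP port 1433 in the Windows firewall inside the virtual machine (the rule name is arbitrary):

# Run inside the Azure virtual machine to allow inbound SQL Server connections.
New-NetFirewallRule -DisplayName "SQL Server TCP 1433" -Direction Inbound `
    -Protocol TCP -LocalPort 1433 -Action Allow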
For more information about client connection options, see Connect to a SQL Server Virtual Machine on
Azure (Resource Manager) in the Azure documentation:
For on-premises architectures, it is common to optimize the resilience and availability of databases by
hosting them on multi-server architectures. By using multiple virtual machines in virtual networks, you can
implement the following availability technologies in Azure:
Log shipping
Database mirroring
You will learn more about highly available architectures that use Azure virtual machines later in this
lesson.
Hybrid Architectures
A hybrid architecture, in the context of Azure, is one in which some servers are hosted on-premises and
some in the cloud. Using SQL Server, you can host a database both on-premises and in the cloud in
several different ways. For example:
Always On Availability Groups. Virtual machines in Azure can act as secondary replicas to a primary
replica that is on-premises. In this case, you must create a Windows Server Failover Clustering (WSFC)
cluster that spans your local network and the Azure virtual network. You must also connect the
networks by using a VPN.
Database mirroring. You can configure virtual machines in Azure to mirror a principal database that
is hosted on-premises. No VPN is required for this configuration unless you need the database servers
to run in the same Active Directory® domain.
Log shipping. Virtual machines in Azure can host databases that are log shipping secondaries to
primaries that are hosted on-premises. A VPN connection is required.
For more information and examples of fault-tolerant and hybrid architectures that can be created by
using SQL Server on Azure virtual machines, see High availability and disaster recovery for SQL Server in
Azure Virtual Machines in the Azure documentation:
High availability and disaster recovery for SQL Server in Azure Virtual Machines
https://aka.ms/eiter5
Demonstration Steps
1. Ensure that the MT17B-WS2016-NAT, 20765C-MIA-DC, and 20765C-MIA-SQL VMs are running and
log on to 20765C-MIA-SQL as AdventureWorks\Student with the password Pa55w.rd.
3. Sign in to the Azure portal with your Azure Pass or Microsoft Account credentials.
5. In the search box, type SQL Server 2017, then click SQL Server 2017.
6. In the results, click SQL Server 2017 Enterprise Windows Server 2016.
7. In the SQL Server 2017 Enterprise Windows Server 2016 blade, in the Select a deployment
model list, click Resource Manager, and then click Create.
8. On the Basics blade, in the Name box, type a name for your server. This must be unique throughout
the whole Azure service, so cannot be specified here. A suggested format is sql2017vm<your
initials><one or more digits>. For example, sql2017vmjs123. Keep a note of the name you have
chosen.
13. Under Resource group, click Use existing and then select resource1.
14. In the Location list, select a location near you, and then click OK.
15. In the Choose a size blade, click View all, click DS11_V2 Standard, and then click Select.
18. On the Summary blade, click Create. Deployment may take some time to complete.
19. When deployment is complete, Azure starts the VM and displays the overview blade. Click Connect,
and then click Open.
21. In the Windows Security dialog box, in the User name box, type demoAdmin, in the Password
box, type Pa55w.rd1234, and then click OK.
23. When the remote desktop session has started, start SQL Server Management Studio and connect to
the local SQL Server instance. Demonstrate the structure of the installation.
24. If the Networks pane appears, click No.
You could use an image from the Azure Virtual Machine Gallery that includes SQL Server. In this case,
the per-minute rate that is shown in the gallery for the image and server size that you select includes
the SQL Server license fee.
If you are a Microsoft customer who has Software Assurance, you can install or upload your own
image that includes SQL Server by using the license mobility benefits that the Software Assurance
scheme includes.
If you are a Microsoft service provider, you can install or upload your own image that includes a SQL
Server Subscriber Access License (SAL) reported through your Services Provider Licensing Agreement
(SPLA).
Note: SQL Server is licensed on a per-CPU core basis. This is equivalent to virtual cores for
virtual machines in Azure.
Some SQL Server images in the Azure Virtual Machine Gallery are marked as bring your own license
(BYOL) images. These do not include the SQL Server license fee in the per-minute rate. You must add your
own license by using Software Assurance or an SPLA.
The license to run Windows Server is also included in the per-minute cost of a Windows Server virtual
machine. A separate Windows Server client access license (CAL) is not required to connect to a virtual
server that is running Windows Server.
For more information about licensing software in Azure virtual machines, see Virtual Machines Licensing
FAQ on the Azure website:
Many of the fault-tolerant technologies that you use to protect SQL Server on-premises are supported in
Azure. The following paragraphs describe how to configure these technologies in a cloud scenario.
High-Availability Solutions
Highly available solutions use multiple servers to provide fault-tolerance to mitigate the impact of failures.
In the event of a server failure, users can connect to a different server and continue to execute queries. If
you want to implement SQL Server entirely on Azure virtual machines, you can consider the following
highly available architectures:
Always On Availability Groups. In an Always On Availability Group, synchronous commit operations
are used to copy database changes from a primary to a secondary replica that is hosted on different
SQL Server instances. In Azure, those instances can be hosted on separate Azure virtual machines with
a third virtual machine acting as a file share witness in the same WSFC cluster. WSFC requires an
Active Directory domain, so you also need an Azure virtual machine to act as a domain controller.
Always On Failover Cluster Instances. In Always On Failover Cluster Instances on-premises, multiple
servers that run SQL Server share disks on which database files are stored. This disk sharing is
achieved by using storage area networks (SANs), iSCSI, or other technologies. In Azure, you can use
Windows Server 2016 Storage Spaces Direct (S2D) to create a virtual SAN, or you can use third-party
clustering software.
For more information about configuring highly available SQL Server architectures in Azure, see High
availability and disaster recovery for SQL Server in Azure Virtual Machines in the Azure documentation:
High availability and disaster recovery for SQL Server in Azure Virtual Machines
https://aka.ms/eiter5
Availability Sets
In an Azure data center, physical and virtual resources are organized to permit two types of virtual
machine grouping:
Fault domains. The virtual machines in a single fault domain share a common power source and
network switch. This makes it more likely that a single hardware failure could affect those virtual
machines.
Update domains. Microsoft plans maintenance tasks for the hardware and software that runs virtual
machines to preserve the highest level of security and stability. The virtual machines in a single
update domain may be taken offline for maintenance at the same time.
It is therefore essential that you ensure that the virtual machines in a highly available SQL Server
architecture are spread across different fault domains and different update domains. You can do this by
using availability sets. When you place virtual machines into a single availability set, Azure automatically
spreads those virtual machines across five different update domains and three different fault domains by
default. Resource Manager deployments can increase these numbers for a single availability set.
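As a hedged sketch (the resource group, name, and location are placeholders), you can create an availability set before the SQL Server virtual machines and then reference it when each virtual machine is built:

# Spread the SQL Server virtual machines across three fault domains and five update domains.
New-AzureRmAvailabilitySet -ResourceGroupName "SqlHARG" -Name "SqlAvailabilitySet" `
    -Location "West Europe" -PlatformFaultDomainCount 3 -PlatformUpdateDomainCount 5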
For more information about availability sets and virtual machines, see Manage the availability of virtual
machines in the Azure documentation:
Always On Availability Groups. If you place the secondary replica on a virtual machine in a different
Azure region from the primary, you ensure that your solution is recoverable even in the event of a
complete site outage. Note that this configuration requires an Azure virtual machine to act as a
domain controller in each region. Virtual machines in each region should be placed in a virtual
network (VNet) and you must configure VNet-to-VNet connectivity.
Database mirroring. By creating a database mirror on a virtual machine in a different region, you
can create a disaster recovery solution that automatically creates and updates a copy of a database in
another site.
Backup and Restore by using the Azure Blob storage service. SQL Server 2014, 2016 and 2017
support Azure Blob storage as a backup destination, whether those servers are running on-premises
or in the cloud. By configuring a virtual server that runs SQL Server in one region to back up to an
Azure storage account in a different region, you can protect against a complete site outage.
For more information about disaster recovery planning for SQL Server in Azure virtual machines, see High
availability and disaster recovery for SQL Server in Azure Virtual Machines in the Azure documentation:
High availability and disaster recovery for SQL Server in Azure Virtual Machines
https://aka.ms/eiter5
A virtual machine that is running Windows Server 2012 R2 or Windows Server 2016.
SQL Server 2016 or later.
A database that uses the full recovery model.
Note: These are the requirements for Automated Backup V2. If you have a virtual machine
that is running Windows Server 2012 and/or SQL Server 2014, you can use Automated Backup
V1. This latter version is similar, but has fewer configuration options, such as those relating to
backup frequency and start times.
You can configure Automated Backup in the Azure portal. For example, to configure a backup schedule
for an existing virtual machine:
1. Double-click the virtual machine.
5. Configure the properties and schedule for backups, and then click OK.
For more information about Automated Backups for SQL Server in Azure virtual machines, see Automated
Backup v2 for SQL Server 2016 Azure Virtual Machines (Resource Manager) in the Azure documentation:
Automated Backup v2 for SQL Server 2016 Azure Virtual Machines (Resource Manager)
https://aka.ms/dc0msi
To configure a patching schedule for an existing virtual machine by using Windows PowerShell
commands, execute the following code, substituting your own values and names:
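The following is a minimal sketch only; the cmdlets shown (New-AzureRmVMSqlServerAutoPatchingConfig and Set-AzureRmVMSqlServerExtension) come from the AzureRM module that was current when this course was written, so verify the cmdlet and parameter names against your installed module version, and substitute your own virtual machine and resource group names:
New-AzureRmVMSqlServerAutoPatchingConfig
$aps = New-AzureRmVMSqlServerAutoPatchingConfig -Enable -DayOfWeek "Sunday" `
    -MaintenanceWindowStartingHour 2 -PatchCategory "Important"
Set-AzureRmVMSqlServerExtension -AutoPatchingSettings $aps `
    -VMName "MySQLVM" -ResourceGroupName "MyResourceGroup"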
Question: You want to implement an instance of SQL Server on a virtual machine in Azure to
host a business-critical database. You want to ensure that you license the server properly, but
you do not have Software Assurance. How can you license SQL Server to run on the virtual
machine?
Lesson 2
Migrating a Database to a Microsoft Azure Virtual
Machine
There are many different methods that you can use to move an existing database, including both its
schema and content, from an on-premises database server to an instance of SQL Server that runs on an
Azure virtual machine. The method that you choose will depend on your circumstances. For example, if
you can sustain a period of downtime, you may choose to use backups to migrate the database. In this
lesson, you will learn about the available migration methods.
Lesson Objectives
At the end of this lesson, you will be able to:
Choose the most suitable method to migrate a database to an Azure virtual machine.
Use backup and restore operations to migrate a database to an Azure virtual machine.
Describe how the Deploy Database to a Microsoft Azure VM Wizard migrates a database.
Back up the database by using compression, manually copy the backup file to the Azure virtual
machine, and then restore the database there. This method is simple and well tested. The use of
compression minimizes the time that is required. Use this method when you can support the
downtime that is required for backup, upload, and restore.
Perform a backup to an Azure Blob storage account, and then restore the database from that backup.
This method removes the necessity to manually copy the backup file. It is only supported when the
source database server runs SQL Server 2012 SP1 or greater.
Detach the database, copy the database files to an Azure Blob storage account, and then attach them
to the SQL Server instance in the virtual machine. Use this method if you plan to store database files
in Azure Blob storage permanently instead of on a virtual hard disk in the virtual machine.
Convert an on-premises server to a virtual hard disk (VHD) image by using Microsoft Virtual Machine
Converter. Upload the VHD to Azure storage, and then create a new Azure virtual machine by using
the upload VHD. Use this method when you want to bring your own SQL Server license to use in
Azure from your Software Assurance scheme.
Ship a hard drive to an Azure physical location by using the Azure Import/Export Service. The Azure
Import/Export Service enables you to send a physical hard disk to an Azure data center. Microsoft can
then add the data on this disk to your Azure Blob storage account. Use this method with large
databases when network bandwidth makes upload operations too slow.
Use the Add Azure Replica Wizard. If you have an Always On deployment in your on-premises SQL
Server system, you can use the Add Azure Replica Wizard to create a new secondary replica on the
Azure virtual machine. After replication has completed, you can fail over to make the Azure virtual
machine the primary replica.
Use SQL Server transactional replication to replicate the database to the Azure virtual
machine. Use this method when you need to minimize downtime, but do not have an Always On
deployment in your on-premises SQL Server system.
For more information about migration methods, see Migrate a SQL Server database to SQL Server in an
Azure virtual machine in the Azure documentation:
This method can migrate any database from an original server that runs SQL Server 2005 or later.
If you choose to compress the backup file during the backup operation, you will be able to upload it
to Azure faster.
It is easiest to perform this kind of migration by taking a full backup. Other types of backup may
require you to restore multiple backup files on the destination virtual machine.
You can use a wizard in SQL Server Management Studio to create backups or use a Transact-SQL script
such as the following one.
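For example, a script similar to the following takes a compressed full backup; the database name and backup path are illustrative, so substitute your own values:
BACKUP DATABASE
BACKUP DATABASE AdventureWorks
TO DISK = 'D:\Backups\AdventureWorks.bak'
WITH COMPRESSION, FORMAT,
MEDIANAME = 'MigrationBackups',
NAME = 'Full Backup of AdventureWorks';
GO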
After the backup operation is complete, manually copy the database file to the Azure virtual machine. You
can use a Remote Desktop connection, Windows Explorer, or the copy command from the command
prompt.
To restore the database from the backup file, log on to the SQL Server virtual machine in Azure, and then
either use the restore wizard in SQL Server Management Studio or use a Transact-SQL RESTORE
DATABASE command.
The following Transact-SQL command restores a database from a full backup file and moves data and
transaction logs to appropriate locations for the new virtual machine.
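The example below is illustrative only; the logical file names and target paths depend on your database and virtual machine, so confirm them with RESTORE FILELISTONLY before you run the restore:
RESTORE DATABASE
RESTORE DATABASE AdventureWorks
FROM DISK = 'C:\Temp\AdventureWorks.bak'
WITH MOVE 'AdventureWorks_Data' TO 'F:\Data\AdventureWorks.mdf',
MOVE 'AdventureWorks_Log' TO 'F:\Log\AdventureWorks.ldf',
RECOVERY;
GO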
Backing Up to a URL
In SQL Server, when you perform a database backup, you can specify a URL as the backup destination instead
of a local folder or a file share on the local network. You can use this feature to back up the database to an
Azure Blob storage account, and then restore from that account on the destination SQL Server virtual
machine in Azure. This approach avoids the need to manually copy the database backup file to Azure. As
with the manual copy approach, you can use backup compression to reduce the size of the backup and
the time that is required for the upload.
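As an illustration, on SQL Server 2016 and later you can create a credential based on a shared access signature (SAS) and then back up directly to a blob container. The storage account, container, and SAS token shown here are placeholders. (On SQL Server 2012 SP1 and SQL Server 2014, you instead create a credential that holds the storage account name and access key, and reference it with the WITH CREDENTIAL option.)
BACKUP DATABASE ... TO URL
CREATE CREDENTIAL [https://mystorageaccount.blob.core.windows.net/backups]
WITH IDENTITY = 'SHARED ACCESS SIGNATURE',
SECRET = '<SAS token>';
GO
BACKUP DATABASE AdventureWorks
TO URL = 'https://mystorageaccount.blob.core.windows.net/backups/AdventureWorks.bak'
WITH COMPRESSION;
GO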
For complete information about backing up to an Azure storage account, see SQL Server Backup to URL in
online documentation:
SQL Server Backup to URL
https://aka.ms/p4m0pt
Note: The Deploy Database to a Microsoft Azure VM Wizard can only work with virtual
machines, cloud services, and storage accounts in the Classic deployment model in Azure. You
must use other migration methods to move databases to virtual machines in the Resource
Manager deployment model. The latter is the recommended model for new deployments.
You can instruct the wizard to create a new virtual machine and cloud service during the migration.
Alternatively, you can choose to migrate the database to an existing virtual machine that has SQL Server
already installed. If you choose the second approach, you must:
Configure the instance of SQL Server on the destination virtual machine to listen on a specific port
number.
Install and configure the Cloud Adapter for SQL Server on the destination virtual machine.
Configure an open endpoint for the Cloud Adapter for SQL Server on the destination virtual machine,
using private port 11435.
If the database that you want to deploy uses FILESTREAM to store binary large objects (BLOBs) outside the
database, you cannot use this wizard to deploy the database to a new virtual machine and cloud service.
Instead, you must create the virtual machine and cloud service manually before deployment.
The destination virtual machine, cloud service, and data disk storage location that the wizard uses must all
be in the same Azure region.
For more information about the Deploy Database to a Microsoft Azure VM Wizard, see Deploy a SQL
Server Database to a Microsoft Azure Virtual Machine in online documentation:
Deploy a SQL Server Database to a Microsoft Azure Virtual Machine
https://aka.ms/k9fjy3
Restore the backup on the Azure virtual machine to complete the migration.
Demonstration Steps
1. Ensure that the 20765C-MIA-DC and 20765C-MIA-SQL virtual machines are running and log on to
20765C-MIA-SQL as AdventureWorks\Student with the password Pa55w.rd.
3. When the script has completed, press any key to close the window.
5. In the Connect to Server dialog box, select MIA-SQL and click Connect.
6. In Object Explorer, expand Databases, expand ExampleDB, and then expand Tables.
7. Right-click HR.Employees, and then click Select Top 1000 Rows. Show the students the results of
the query against the local database. You will compare results against the cloud-hosted database
after the migration.
8. On the File menu, point to New, and then click Query with Current Connection.
USE ExampleDB;
GO
BACKUP DATABASE ExampleDB
TO DISK = 'D:\Demofiles\Mod09\ExampleDB.bak'
WITH COMPRESSION, FORMAT,
MEDIANAME = 'MigrationBackups',
NAME = 'Full Backup of ExampleDB';
GO
10. In File Explorer, browse to D:\Demofiles\Mod09, right-click ExampleDB.bak, and then click Copy.
12. Sign in to the Azure portal with your Azure Pass or Microsoft Account credentials.
15. On the virtual machine blade, click Connect, and then click Open.
17. In the Windows Security dialog box, click More choices, then click Use a different account.
19. In the Password box, type Pa55w.rd1234, and then click OK.
21. When the remote desktop session has started, open File Explorer, and browse to C:\.
22. On the Home menu, click Paste. Explorer pastes the backup file into the VM.
25. In Object Explorer, expand Databases and show that there are no user databases.
26. On the File menu, point to New, and then click Query with Current Connection.
27. Type the following Transact-SQL script, and then click Execute:
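A script similar to the following would work; the logical file names and target paths are illustrative, so confirm them with RESTORE FILELISTONLY and adjust them for your virtual machine:
RESTORE DATABASE ExampleDB
FROM DISK = 'C:\ExampleDB.bak'
WITH MOVE 'ExampleDB' TO 'F:\Data\ExampleDB.mdf',
MOVE 'ExampleDB_log' TO 'F:\Log\ExampleDB_log.ldf',
RECOVERY;
GO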
28. When the query completes, in Object Explorer, right-click Databases, and then click Refresh.
31. Close Internet Explorer, close SSMS without saving any changes, then close the Remote Desktop
connection.
Question: You want to migrate a business-critical production database to an Azure virtual
machine. Users must be able to make changes to the database throughout the migration
process. An Always On Availability Group protects the on-premises system. What method
should you use for the migration?
You have an on-premises database that you want to migrate into the cloud. To minimize the number of
compatibility issues that will need correcting, you have decided to host the migrated database on a
Microsoft® Azure® virtual machine that runs Microsoft SQL Server® 2017. You have been asked to create
the virtual machine and migrate the database by using compressed backups. After the database has been
migrated, you will connect to it and run a query.
Objectives
At the end of this lab, you will be able to:
Password: Pa55w.rd
3. Connect to the Virtual Machine by Using the Microsoft Remote Desktop Connection Client
o Password: Pa55w.rd1234
o Size: DS 11 V2 Standard
Task 3: Connect to the Virtual Machine by Using the Microsoft Remote Desktop
Connection Client
1. After the virtual machine has been created and started, connect to it by using the Remote Desktop
Connection client. Use the following information:
o Password: Pa55w.rd1234
2. In the Remote Desktop Connection client, open SQL Server Management Studio, and then use it to
examine the databases that are present by default in the new SQL Server instance.
3. Close SQL Server Management Studio, and then log off from the Remote Desktop session.
Results: After this exercise, you will have created a virtual machine in your Azure subscription that runs
SQL Server.
2. Use the Remote Desktop client to connect to the SQL Server virtual machine in Azure.
2. Close SQL Server Management Studio, and then log off from the Remote Desktop session.
Results: At the end of this exercise, you will have migrated a database from the on-premises server that
runs SQL Server to the SQL Server instance on the new virtual machine.
o Password: Pa55w.rd1234
Task 2: Connect SQL Server Management Studio to the Azure Virtual Machine
1. Make a note of the public IP address of the virtual machine on the Overview blade.
2. In SQL Server Management Studio, connect to that IP address by using the following information:
o Login: SQLAdmin
o Password: Pa55w.rd1234
3. In the connection to the virtual machine, execute a query against a table in the ExampleDB database.
Results: When you have completed this exercise, you will have connected an instance of SQL Server
Management Studio to the Azure instance of SQL Server.
Question: After you created the backup file, the on-premises server remained available for
clients to connect to and modify data. What would happen to these modifications after you
had moved clients to the new database?
Review Question(s)
Question: You have created an Always On Availability Group that includes three virtual
machines in Azure. You want to ensure that the three virtual machines are in different fault
domains and different update domains. What should you do?
Module 10
Managing Databases in the Cloud
Contents:
Module Overview 10-1
Lesson 1: Managing Security in Azure SQL Database 10-4
Module Overview
*** IMPORTANT ***
There are two things that you need to prepare before the course starts. These take some time to
complete, so start them well before the training starts:
1. Creating an AdventureWorksLT database in Microsoft® Azure® for use with the “Encrypting
Sensitive Data” demonstration.
2. Creating a virtual machine in Azure for use with the “Creating a Storage Pool” demonstration.
You will need your Azure login credentials, and the range of public-facing IP addresses that your training
center uses.
4. Click New > Databases > SQL Database. Enter AdventureWorksLT for the database name.
5. Under Subscription, there is no need to change the subscription unless you have more than one
subscription.
6. Under Resource Group, click Create new and type 20765C + YourInitials. The Resource Group name
must be unique; if the name is valid, a green tick appears.
10. Under server name, type 20765C+YourInitials. A green tick appears if the server name is valid.
11. Type Student for the server admin login, and Pa55w.rd for the Password and Confirm password
fields.
13. Do not change the setting for Allow Azure services to access server.
15. For Want to use SQL elastic pool, select Not now.
16. Under Pricing tier, click the right arrow for Configure required settings. Select S1 Standard 20
DTUs, then click Apply.
19. Select Create to create the SQL Server® instance, including the AdventureWorksLT database. This will
take a few minutes.
20. To configure the firewall, click All resources, and then click the name of the server you created. The
SQL Server properties blade is displayed.
21. In the Settings section, click Firewall/Virtual Networks (Preview), and click Add Client IP. Click
Save before closing.
The Azure SQL Database is now ready for demonstration later in the module.
2. In the D:\Demofiles\Mod10 folder, right-click Setup.cmd, and then click Run as administrator.
3. In the User Account Control dialog box, click Yes, and wait for the script to finish.
4. Click Start, right-click PowerShell ISE, click More, then click Run as Administrator.
7. Use Run Selection to run the script block under #Install latest AzureRM module.
8. In the NuGet provider is required to continue dialog box, click Yes.
11. In the # Initialize block, under # Initialize variables, amend the $subscriptionName and
$TenantID to those relating to your subscription. You will find these in the Azure portal. The
TenantID is the same as DirectoryID in Azure Active Directory, Properties. You can copy the string
and paste it into the PowerShell script.
12. Use Run Selection to run the script block under # Initialize.
13. Use Run Selection to run the script block under #Sign in to Azure and select subscription.
15. Use Run Selection to run the script block under # Prompt for credentials for VM and SQL Server.
16. When prompted, enter the password Pa55w.rd, and then click OK.
17. Use Run Selection to run the script block under # Create resource group.
18. Use Run Selection to run the script block under # Create a virtual network. Note the warning
message.
19. Use Run Selection to run the script block under # Create public IP address and network interface.
Note the error message. This will take a minute or two to run.
20. Use Run Selection to run the script block under # Create virtual machine. This will take several
minutes to complete.
21. Use Run Selection to run the script block under # Add data disks to the VM. Note the warning
message. This will take a minute or so to complete. If prompted to supply the location parameter, type
West US 2.
22. Close the Windows PowerShell ISE, without saving any changes.
A new virtual machine that has additional data disks has been created. This is now ready for the “Creating
a Storage Pool” demonstration.
Objectives
After completing this module, you will be able to:
Lesson 1
Managing Security in Azure SQL Database
This lesson considers several security features that are available in Azure SQL Database. As cybercrime
increases, and more data moves to the cloud, it is more important than ever to ensure that databases are
secure. This lesson considers some of the security features that are built into SQL Server and Azure SQL
Database.
Lesson Objectives
In this lesson, you will learn:
What Always Encrypted is, and how to configure it in Azure SQL Database.
1. Data is encrypted at rest, in motion, and in use. This means that data remains encrypted when it is
stored on disk, when it moves between the client and the server, and while the database engine
processes it; only a client that has access to the encryption keys can decrypt the results.
2. Data owners can store encryption keys on-premises to prevent Microsoft cloud administrators
from seeing sensitive data within their databases. Encryption keys are stored on the client side, so an
administrator cannot decrypt the data. The server does not have access to plaintext keys.
Note: When both the client application and the data are hosted in Azure, such as for web-
based applications, the encryption keys must be stored in a cloud key store such as Azure Key
Vault. In this situation, Always Encrypted does not provide complete protection against a rogue
administrator, but it does significantly reduce the attack area.
Encryption Types
There are two ways of encrypting data by using Always Encrypted:
1. Randomized encryption. This type of encryption produces a different result every time a value is
encrypted.
2. Deterministic encryption. This type of encryption produces the same result every time a value is
encrypted.
Randomized encryption produces different cipher text each time plaintext is entered. This prevents an
eavesdropper from learning anything about the encryption process. Although it is secure, randomized
encryption does not allow operations on data, nor can you index these columns.
Deterministic encryption always produces the same cipher text each time a plaintext value is encrypted.
Columns that are encrypted by using the deterministic option support equality comparisons, which means
that you can use them in equality joins, in GROUP BY clauses, and in equality filters in a WHERE clause;
such columns can also be indexed. Do not use deterministic encryption for columns that contain only a
few distinct values, such as gender or region, because an attacker could potentially infer the plaintext from
the repeating patterns in the cipher text.
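For illustration, a table that stores encrypted columns might be defined as follows. The table name is illustrative, and CEK1 is assumed to be an existing column encryption key:
ENCRYPTED WITH
CREATE TABLE dbo.Patients
(
PatientID int IDENTITY(1,1) PRIMARY KEY,
SSN char(11) COLLATE Latin1_General_BIN2
    ENCRYPTED WITH (COLUMN_ENCRYPTION_KEY = CEK1,
    ENCRYPTION_TYPE = DETERMINISTIC,
    ALGORITHM = 'AEAD_AES_256_CBC_HMAC_SHA_256') NOT NULL,
BirthDate date
    ENCRYPTED WITH (COLUMN_ENCRYPTION_KEY = CEK1,
    ENCRYPTION_TYPE = RANDOMIZED,
    ALGORITHM = 'AEAD_AES_256_CBC_HMAC_SHA_256') NOT NULL
);
In this sketch, equality searches and joins on SSN remain possible because it uses deterministic encryption, whereas BirthDate, which uses randomized encryption, can only be retrieved and decrypted on the client.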
Always Encrypted uses a 256-bit encryption key. The algorithm that is used to encrypt data is
AEAD_AES_256_CBC_HMAC_SHA_256, which uses cipher block chaining (CBC) to conceal patterns. Two
variations of the algorithm are used so that both deterministic and randomized encryption can be
supported.
If you are interested in the cryptographic algorithm that Always Encrypted uses, see Always Encrypted
Cryptography on Microsoft online documentation:
Encryption Keys
Always Encrypted uses encryption keys that are stored on the client side, not on the server. Two kinds of
keys are required:
Column Master Key. This key is used to encrypt column encryption keys.
Column Encryption Key. This key is used to encrypt the data in the columns.
You can create the keys by using SQL Server Management Studio. First, navigate to Security, and then
click Always Encrypted Keys. Create the Column Master Key object first because this will be required
when you create the Column Encryption Key object:
1. Right-click Column Master Keys in the tree, and then click New Column Master Key … to display
the New Column Master Key dialog box.
2. Type a name for the key, and then select a location where the key will be stored: either the Windows
certificate store or Azure Key Vault. You will be prompted to sign in to Azure Key Vault.
3. Click Generate Certificate so that the key is stored in an appropriate certificate. Alternatively,
highlight an existing certificate.
Next, create the Column Encryption Key object:
1. Right-click Column Encryption Keys, and then click New Column Encryption Key … to display the
New Column Encryption Key dialog box.
2. Type a name for the key, and then select a Column Master Key object. This protects your Column
Encryption Key object.
3. Click OK.
Reference Links: Click the Script option, and then select New Query Window to see the
syntax for both Column Master Key and Column Encryption Key. You can also right-click
existing keys, and then select Script … to view the script in a new query window.
Best Practice: You can create Windows PowerShell scripts to create encryption keys. You
can also create Windows PowerShell scripts to encrypt data.
When you have created both keys, you are ready to encrypt columns by using Always Encrypted.
You can now freely download and upgrade SQL Server Management Studio independently of the SQL
Server database engine. You can also install it side by side with previous versions of SQL Server
Management Studio.
For more information and to download the latest version, see Download SQL Server Management Studio
(SSMS) on MSDN:
Encrypting Data
To encrypt the relevant columns:
1. Using Object Explorer, expand the Tables node. Select the table and column that you want to
encrypt.
2. Right-click the column name, and then click Encrypt Column on the context-sensitive menu to
display the Always Encrypted Wizard.
3. An introductory page is displayed. Click Next, and then optionally, click Do not show this page
again.
6. Select the Column Encryption Key that you created earlier. Optionally, the wizard will create a new
Column Encryption Key if you have not already done so.
7. Select either Proceed to finish now or Generate a PowerShell script to run later. If you choose to
generate a Windows PowerShell script, provide a location for the script. Click Next.
8. The wizard displays the summary screen. Check that everything is correct, and then click Next. Either
the column is encrypted, or a Windows PowerShell script is created in the specified location.
To check that the data is encrypted, execute a simple SELECT query to display the column data. Always
Encrypted columns will appear with obfuscated text.
To remove Always Encrypted encryption from a column, right-click the column name, and then click
Encrypt Column to start the Always Encrypted Wizard. Choose plaintext instead of either randomized or
deterministic encryption, and use the same encryption key. Click Next to see a summary, and then click
Next again to decrypt the column.
Note: For deterministic encryption of character columns, the columns must use one of the
binary code point (BIN2) collations. If the column does not have a BIN2 collation, a message
appears to say that the collation will be changed.
There are other restrictions. For example, the following columns cannot be encrypted:
FILESTREAM columns.
Columns that use the following data types: xml, timestamp/rowversion, image, ntext, text,
sql_variant, hierarchyid, geography, geometry, alias types, and user-defined data types.
There are more restrictions. For full details, see the Always Encrypted documentation on Microsoft online
documentation:
Column-Level Encryption
Column-level encryption is also known as cell-level encryption because it applies encryption to individual
columns in a table. Column-level encryption uses symmetric key encryption, and certificates to store the
keys. You can then use Transact-SQL to encrypt columns of data.
1. In on-premises SQL Server, the key encryption hierarchy uses the Service Master Key, which is specific
to the SQL Server instance. In Azure SQL Database, the root encryption uses a certificate that is
managed by Azure SQL Database. This simplifies key management significantly.
2. Azure SQL Database syntax does not support references to files, including backing up the Master Key
and certificates, restoring backups, or importing certificates or asymmetric keys from files. Because of
this restriction, encrypted data that is exported from an Azure SQL Database might not be decryptable
elsewhere. You should therefore use keys and certificates that can be re-created when they are needed.
To encrypt data in a column, the typical steps are:
1. Create a database master key (or use the existing database master key).
2. Create a certificate to protect the symmetric key.
3. Create a symmetric key by using CREATE SYMMETRIC KEY, with the KEY_SOURCE and
IDENTITY_VALUE options set so that the key can be re-created elsewhere if needed.
4. Open the symmetric key, and then use the EncryptByKey function to encrypt the column data, as
shown in the following example.
Note: Symmetric encryption uses the same cryptographic key both to encrypt and decrypt
plaintext. Symmetric keys are, therefore, a shared secret between the party that is encrypting the
data, and one or more parties that are decrypting the data. Symmetric key encryption gives
better database performance compared to asymmetric encryption.
EncryptByKey
--If there is no master key, create one now.
IF NOT EXISTS (SELECT * FROM sys.symmetric_keys WHERE symmetric_key_id = 101)
CREATE MASTER KEY ENCRYPTION BY PASSWORD = '23987hxJKL93QYV43j9#ghf0%lekjg5k3fd117r$$#1946kcj$n44ncjidld'
GO
--Create a certificate and a symmetric key (the object names used here are illustrative).
CREATE CERTIFICATE CreditCardCert WITH SUBJECT = 'Customer credit card numbers';
CREATE SYMMETRIC KEY CreditCardKey WITH ALGORITHM = AES_256,
KEY_SOURCE = 'Pass phrase used to derive the key',
IDENTITY_VALUE = 'Pass phrase used to identify the key'
ENCRYPTION BY CERTIFICATE CreditCardCert;
GO
--Open the key and encrypt a value; CardNumber_Encrypted is assumed to be a varbinary column.
OPEN SYMMETRIC KEY CreditCardKey DECRYPTION BY CERTIFICATE CreditCardCert;
UPDATE SalesLT.Customer
SET CardNumber_Encrypted = EncryptByKey(Key_GUID('CreditCardKey'), '12345678')
WHERE CustomerID = 1;
CLOSE SYMMETRIC KEY CreditCardKey;
GO
For more information about using cell-level encryption in Azure SQL Database, see Recommendations for
using Cell Level Encryption in Azure SQL Database on MSDN:
With dynamic data masking, the underlying data is not encrypted; it is just masked or obfuscated. This
means that when the results are returned to anyone who does not have the necessary permissions, either
all or part of the data is masked. Dynamic data masking has minimal impact on the application layer
because it is configured on the database. The data does not change—only the way in which it is presented
changes. Data masking is defined at the column level—for example, credit card numbers, date of birth,
and so on. There are different masking rules that either completely obscure the data, or just partially
obscure it.
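For illustration, you can also define masks by using Transact-SQL. The table and column names here are from the AdventureWorksLT sample that is used elsewhere in this module, and the user name is illustrative:
ADD MASKED
ALTER TABLE SalesLT.Customer
ALTER COLUMN EmailAddress ADD MASKED WITH (FUNCTION = 'email()');
GO
ALTER TABLE SalesLT.Customer
ALTER COLUMN Phone ADD MASKED WITH (FUNCTION = 'partial(3,"XXXXXX",2)');
GO
-- Users see masked values unless they hold the UNMASK permission.
GRANT UNMASK TO ReportingUser;
GO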
Use dynamic data masking together with other security features as part of your overall security strategy.
For a good overview of dynamic data masking, see Dynamic Data Masking on Microsoft online
documentation:
The following table lists the various data masks that are available for you to choose from.
Default. The complete value is masked with a character that is appropriate to the data type. For example,
characters are shown as xxx, and numeric values with 0. Example: abcdefgh would be masked as xxxx, and
1234 would be masked as 0000. Shorter fields would be masked with fewer than four characters.
Random number. For numeric data types. Replaces the actual value with a random value in a given range.
Example: depends on the data type.
3. The schema, table, and column names of recommended fields to mask are displayed.
4. Select the field that you want to mask, and then click Add Mask.
5. The field is then displayed in the Masking Rules list, together with a Mask Function.
6. Amend the Mask Function by clicking the mask rule. The Edit Masking Rule blade appears.
7. Under Select how to mask, click the drop-down arrow to view the different masking field formats.
8. Select the correct Masking field format, and then click Update to save it.
You can also use Windows PowerShell to set up dynamic data masking.
Note that some types of column cannot be masked, such as FILESTREAM columns.
By default, users see masked data wherever a mask has been defined. To view plaintext data, users must
either have administrative permissions or be added to the SQL users excluded from masking box
(administrators are always excluded from masking).
TDE protects the complete database from the theft of physical data storage media, but provides no
protection for data as it is processed and communicated. For example, TDE permits a database
administrator, or an intruder, to see plaintext in SQL Server Management Studio query results. If you want
to ensure that database administrators do not see sensitive data, use Always Encrypted.
Always Encrypted and TDE are complementary encryption technologies that provide protection in
different ways, which means that you can encrypt the database by using TDE and encrypt sensitive
columns by using Always Encrypted.
1. On the Azure dashboard, select the database that you want to encrypt.
4. Click Save.
When the database has been encrypted, the Encryption status icon will change first to Encryption is in
progress, and then to Encrypted.
Alternatively, you can configure TDE by using Windows PowerShell or the REST application programming
interface (API). You must connect as the SQL Server Manager, Azure Owner, or Contributor.
You can also use Transact-SQL to encrypt a database in Azure SQL Database. Using SQL Server
Management Studio, connect to the relevant database in Azure SQL Database, and then use the ALTER
DATABASE command.
Use the ALTER DATABASE command to encrypt a database in Azure SQL Database. Use SET ENCRYPTION
OFF to decrypt the database.
ALTER DATABASE
ALTER DATABASE [AdventureWorksLT] SET ENCRYPTION ON;
GO
Encryption Keys
The database is encrypted by using the database encryption key, which is a symmetric key that is itself
protected by a server certificate. The encryption keys are automatically created when the encryption
option is selected. Plus, the encryption keys are changed approximately every 90 days for increased
security. This is a real benefit because secure key management is not trivial.
In contrast to the on-premises versions of SQL Server, including SQL Server running in an Azure virtual
machine, Azure SQL Database does not support integration with Azure Key Vault. However, for most
organizations, this is unlikely to be an issue because Microsoft manages encryption keys when TDE is
selected.
For more information about using TDE with Azure SQL Database, see Transparent Data Encryption with
Azure SQL Database on Microsoft online documentation:
1. In the Azure portal, select the server that you want to configure. The server blade appears.
2. In the Settings group, select Firewall. The firewall blade is displayed.
3. The Client IP address that you are currently using is displayed. You can add this IP address by clicking
Add Client IP.
4. Create firewall rules by providing values for Rule Name, Start IP, and End IP, and then click Save.
5. Alternatively, to remove a rule that you no longer want, select the rule, and then click Discard.
Note: Save each firewall rule as you create it. You cannot save more than one firewall rule
at a time.
Best Practice: Before you add the current Client IP address to the firewall rules, check that
the correct IP address is displayed.
For more information about configuring server-level firewall rules, see the Azure documentation:
Create and manage Azure SQL Database server-level firewall rules using the Azure portal
https://aka.ms/kwgknv
Unlike server-level firewall rules, which are set by using the Azure portal or a Windows PowerShell script,
database-level firewall rules are set by using Transact-SQL. The sp_set_database_firewall_rule stored
procedure is used with either the master database or user databases. Note that the name of the firewall
rule must be unique for each database.
Sp_set_database_firewall_rule
EXEC sp_set_database_firewall_rule @name = N'AdventureWorks HR',
@start_ip_address = 'nnn.nnn.nnn.nnn', @end_ip_address = 'nnn.nnn.nnn.nnn';
Note: If you attempt to connect to a database from an IP address that is not allowed
access, a dialog box appears that enables you to add your client IP address. You must, however,
first sign in to your Azure account.
For more information about configuring database-level firewall rules, see the Azure documentation:
Demonstration Steps
1. Start the MT17B-WS2016-NAT, 20765C-MIA-DC, and 20765C-MIA-SQL virtual machines, and log on
to 20765C-MIA-SQL as ADVENTUREWORKS\Student with the password Pa55w.rd.
2. Open SQL Server Management Studio and connect to the server you created earlier, for example
20765CCE.database.windows.net, using SQL Server Authentication.
3. In the Login box, type Student, and in the Password box, type Pa55w.rd, and then click Connect.
3. In Object Explorer, expand Databases, expand AdventureWorksLT, expand Security, and then
expand Always Encrypted Keys.
4. Right-click Column Master Keys and click New Column Master Key.
5. In the New Column Master Key dialog box, in the Name box, type CMK1.
6. In the Key store list, select Windows Certificate Store - Current User.
7. Click Generate Certificate. The certificate appears in the list, and then click OK.
8. In Object Explorer, right-click Column Encryption Keys and click New Column Encryption Key.
9. In the New Column Encryption Key dialog box, in the Name box, type CEK1.
10. In the Column master key list, click CMK1, and then click OK.
3. Click Execute to show that all the columns are displayed in plaintext.
6. On the Column Selection page, select City, and in the Encryption Type column, select
Deterministic, and in the Encryption Key column, select CEK1. Note the collation message, and
then click Next.
8. On the Run Settings page, ensure Proceed to finish now is selected, and then click Next.
9. On the Summary page, click Finish.
12. Click Execute to run the query to show the column appears with obfuscated text. NOTE: Values are
repeated because deterministic encryption has been used, and each City appears more than once.
14. Highlight the query and click Execute to show the number of Cities in each postal code. This is
possible because deterministic encryption was selected.
16. The PostalCode column cannot be encrypted while it is included in an index. Run the first part of the
script to drop the index.
17. In Object Explorer, right-click PostalCode, and then click Encrypt Column.
18. In the Always Encrypted wizard, on the Introduction page, click Next.
19. On the Column Selection page, select PostalCode, and in the Encryption Type column, click
Randomized, and in the Encryption Key column, click CEK1. Note the collation message, and then
click Next.
21. On the Run Settings page, ensure Proceed to finish now is selected, and then click Next.
26. Click Execute to show that the columns appear with obfuscated text.
28. Highlight the query and click Execute to show that the GROUP BY operation fails with randomized
encryption. The error message explains that Deterministic encryption is required for the statement to
succeed.
5. On the Run Settings page, ensure Proceed to finish now is selected, and then click Next.
9. Highlight the query and click Execute to show that the PostalCode column has been decrypted.
10. In Object Explorer, right-click SalesLT.Address, point to Script Table as, point to CREATE To, and
then click New Query Editor Window. Point out the encrypted column, and the encryption
algorithm used.
11. Close SQL Server Management Studio, without saving any changes.
Lesson 2
Configuring Azure Storage
In this lesson, you will learn how to configure Azure storage and manage storage pools.
Lesson Objectives
In this lesson, you will:
Understand what Microsoft Azure Storage Explorer is, and how to use it.
In addition, you can create solid-state drives (SSDs) or hard disks for use with Azure virtual machines.
To use Azure storage, you must first create an Azure storage account. An Azure storage account holds
different types of Azure storage.
2. In the New blade, click Storage. A list of the different storage options is displayed.
5. In Deployment model, select either Resource manager or Classic. Select Resource manager for all
new applications.
6. In Account kind, select either General or Blob. General storage accounts enable you to store
different types of storage in one account including blobs, tables, queues, and files. Blob storage
accounts are optimized for storing BLOB data, and additionally enable you to specify how often you
will access the data.
7. In Performance, select either Standard or Premium. Premium uses SSDs whereas Standard uses
magnetic disks. You cannot change the Performance setting after the storage account has been
created.
9. Select Storage service encryption, and then select either Disabled or Enabled. The data is
encrypted when it is stored on the disk, and is decrypted as it is retrieved from the database. Keys are
managed automatically, and AES 256-bit encryption is used. At the time of writing, this feature is in
preview for file storage. Note that encryption is only available if you select Resource manager as the
Deployment model.
10. In Location, select the data center where you want your storage account to be created.
11. Click Create to create your storage account. A message is displayed when the storage account has
been successfully created.
Note: Although it is possible to change some options after the storage account has been
created, Performance and Replication cannot be changed.
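You can also create a storage account by using Windows PowerShell. The following is a minimal sketch that uses the AzureRM module; the resource group, account name, location, and SKU are illustrative:
New-AzureRmStorageAccount
New-AzureRmResourceGroup -Name "StorageDemoRG" -Location "West Europe"
New-AzureRmStorageAccount -ResourceGroupName "StorageDemoRG" -Name "mystorage20765c" `
    -Location "West Europe" -SkuName "Standard_LRS" -Kind "Storage"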
Blob Storage
Blob storage is used to store data as unstructured BLOBs. You can choose between either a Hot access
tier, for data that you need to access often, or a Cool access tier for data that is accessed infrequently
and is less expensive. If you will only store Blobs, select Blob in Account Kind when you create your
Azure storage account.
For more information about Azure Storage Explorer, and specifically about managing BLOB storage by
using Azure Storage Explorer, see the Azure documentation:
You can configure storage spaces and storage pools as part of an availability group, or on a single Azure
virtual machine. You can use either Server Manager or Windows PowerShell to configure storage pools.
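For example, the following Windows PowerShell sketch creates a storage pool from all of the disks that are available for pooling; the pool name is illustrative:
New-StoragePool
$disks = Get-PhysicalDisk -CanPool $true
New-StoragePool -FriendlyName "MyStoragePool" `
    -StorageSubSystemFriendlyName (Get-StorageSubSystem).FriendlyName `
    -PhysicalDisks $disks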
2. Open Server Manager, navigate to File and Storage Services, navigate to Volumes, and then
navigate to Storage Pools.
3. In Storage Pools, select Primordial. This represents the unassigned data disks.
4. Next to Storage Pools, select Tasks, and then choose New Storage Pool.
5. The New Storage Pool Wizard starts and displays the Before you begin screen. Click Next.
6. On the Specify a storage pool name and subsystem page, type a name, and then click Next.
7. On the Select physical disks for the storage pool page, select the disks that you want to include.
8. On the Confirm Selection page, click Create. When it has completed, click Close.
Storage Tiers
Storage tiers are a feature of storage spaces that enable you to optimize performance at the most
economical price. To use storage tiers, you must create a virtual hard disk that has a mix of hard disk
drive (HDD) storage and SSD storage. Two tiers are created: the SSD tier is used for data that is accessed
frequently, and the HDD tier is used for data that is accessed less frequently.
Note: To configure storage tiers correctly, the operating system must recognize disks as
either HDD or SSD. This sometimes requires running a Windows PowerShell script to ensure that
they are categorized correctly. Storage tiers will not work if they are not correctly identified.
New-StorageTier
New-StorageTier -StoragePoolFriendlyName TieredPool -FriendlyName MySSDTier -MediaType SSD
Demonstration Steps
2. In the D:\Demofiles\Mod10 folder, right-click Setup.cmd, and then click Run as administrator.
3. In the User Account Control dialog box, click Yes, and wait for the script to finish.
4. Open Internet Explorer and navigate to the Azure portal at https://portal.azure.com. Sign in with
your Azure pass credentials.
5. Click Resource Groups, and then click StorageSpacesDemo.
8. Change to the Start screen, type Remote Desktop and then click Remote Desktop Connection.
9. Connect to the Azure VM by pasting the IP address into the Computer box (delete everything after
the IP address), and then click Connect.
10. When prompted, type the password Pa55w.rd, and then click OK.
11. In the Remote Desktop Connection dialog box, click Yes. The Server Manager dashboard is
displayed.
13. In Server Manager, click File and Storage Services, Volumes, and then Storage Pools.
15. Next to Storage Pools, click Tasks, and then click New Storage Pool.
16. In the New Storage Pool Wizard dialog box, on the Before you begin page, click Next.
17. On the Specify a storage pool name and subsystem page, in the Name box, type MyStoragePool,
and click Next.
18. On the Select physical disks for the storage pool, select all four disks, and click Next.
21. Click MyStoragePool to see that it is made up of the four disks you selected.
3. On the Start menu, right-click Windows PowerShell ISE, point to More, and then click Run as
administrator.
7. In the Open dialog box, browse to C:\, click ChangeMediaType, and then click Open.
8. Use Run Selection to run the script under # List the physical disks. Note that the media type is
unspecified.
9. Copy the UniqueID of the disk identified as LUN 1 to the $uniqueIdPremium1 variable value
(between the quotation marks).
10. Copy the UniqueID of the disk identified as LUN 2 to the $uniqueIDStandard1 variable value.
11. Copy the UniqueID of the disk identified as LUN 3 to the $uniqueIDStandard2 variable value.
12. Copy the UniqueID of the disk identified as LUN 4 to the $uniqueIDStandard3 variable value.
13. Once all the variables have a value, use Run Selection to run the script under # Initialize variables.
14. Use Run Selection to run the script under # Set media type.
15. To check the media types have been assigned correctly, run the script under # List the physical
disks. You will see that the media types are now set.
2. Next to Virtual Disks, click Tasks, and click New Virtual Disk.
3. In the Select the storage pool dialog box, click MyStoragePool, and then click OK.
4. In the New Virtual Disk Wizard, on the Before you begin page, click Next.
5. On the Specify the virtual disk name, in the Name box, type MyVirtualDisk.
6. Select the Create Storage tiers on this virtual disk check box, and then click Next.
8. On the Select the storage layout page, click Simple, and then click Next.
9. On the Specify the provisioning type page, click Fixed, and then click Next.
10. On the Specify the size of the virtual disk page, in the Faster Tier box, type 85, and in the Standard
Tier box, type 180, and then click Next.
11. On the Confirm selections page, click Create. The tiered virtual disk will be created.
12. On the View results page, click Close to finish. You can then create a drive letter.
13. In the New Volume Wizard, on the Before you begin page, click Next.
14. On the Select the server and disk page, click MyVirtualDisk, and then click Next.
15. On the Specify the size of the volume page, click Next.
19. On the Completion page, click Close. You can now see the new virtual disk you created.
Queue storage
Hierarchical storage
File storage
BLOB storage
Lesson 3
Azure Automation
In this lesson, you will be introduced to Azure features that enable you to automate tasks. This lesson
includes an introduction to Windows PowerShell and the Azure SQL Database PowerShell cmdlets. You
will be introduced to Azure Automation, including creating an Azure Automation account and using
runbooks to automate tasks.
Lesson Objectives
After completing this lesson, you will be able to:
Understand what Azure Automation is, and how you can use it.
Azure storage
Azure backups
Windows PowerShell. This type of runbook consists of Windows PowerShell text-based scripts.
Windows PowerShell Workflow. This type of runbook consists of text-based workflows in Windows
PowerShell Workflow.
Graphical. This type of runbook is based on Windows PowerShell, but edited by using a graphical
editor.
In the Azure Automation Runbook Gallery, each type of runbook has an appropriate icon. Graphical
runbooks provide a visual representation of the tasks, and must be edited in the Azure portal. Windows
PowerShell and Windows PowerShell Workflow runbooks are both text-based, and can be created and
edited either in the Azure portal, or by using a Windows PowerShell editor and then importing the
runbook.
Registry settings.
4. On the Add Automation Account blade, in the Name box, type a unique name that uses lowercase
characters and numbers. A green tick appears when an acceptable name has been entered.
5. In the Resource group box, either create a new resource group or select an existing one from the
drop-down box.
8. Click Create. The Azure Automation account takes a short while to be created.
When you have created an Azure Automation account, you can view the Runbook Gallery and amend or
create runbooks and jobs.
For information about downloading Windows Management Framework 5.0, go to the Microsoft
Download Center:
To find out more information about an alias, type Get-Help followed by the alias—for example, Get-Help
cls.
Parameters
Cmdlets may have one or more parameters, to affect the way in which the cmdlet behaves. The parameter
may be a direct input, or piped from another cmdlet.
Named or positional. A named parameter is where you type the parameter after the cmdlet—for
example, Get-Command –All—to list all of the installed cmdlets. A positional parameter is defined
by its position. The default parameter type for cmdlets is the named parameter.
Mandatory or optional. By default, Windows PowerShell cmdlets take optional parameters. You can
also write your own Windows PowerShell cmdlets and, if necessary, define a parameter as mandatory.
Switch parameters. Switch parameters are either on or off—that is, if you do not provide the
parameter, its value is automatically set to false. An example of a switch parameter is Get-Help -Full,
as the short example after this list shows.
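The following sketch illustrates the three parameter styles by using the Get-ChildItem cmdlet; the folder path is illustrative:
# Named parameter: the parameter name precedes the value.
Get-ChildItem -Path C:\Windows
# Positional parameter: the value is bound to -Path by its position.
Get-ChildItem C:\Windows
# Switch parameter: including it turns the behavior on; omitting it leaves it off.
Get-ChildItem C:\Windows -Recurse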
Cmdlet names combine a verb and a noun. Verbs that you will commonly use with the Azure SQL
Database cmdlets include:
Get-
New-
Remove-
Set-
Start-
Stop-
Suspend-
Use-
For a list of Azure SQL Database cmdlets, see AzureRM.Sql on Microsoft online documentation:
To create a new server, use the New-AzureRmSqlServer cmdlet. When you create a new server on Azure,
you must specify the data center and the resource group that you want to use. The resource group is a
way of keeping resources together. You must create the resource group in the same location in which you
want to host your database in Azure SQL Database.
When you create a server, you must also provide a valid data center name. You can use the Get-
AzureRmResourceProvider cmdlet to list the available data centers. To return the information that you
need, you must limit the results to only data centers that host Azure SQL Database. This is done by using a
pipe to return only Microsoft.Sql locations.
List the Azure data centers that support Azure SQL Database.
Get-AzureRmResourceProvider
(Get-AzureRmResourceProvider -ListAvailable | Where-Object {$_.ProviderNamespace -eq 'Microsoft.Sql'}).Locations
You must also specify a specific subscription by using the Select-AzureRmSubscription cmdlet. To create
a new resource group, use the New-AzureRmResourceGroup cmdlet.
After you have created the resource group, you can create the server by using the New-AzureRmSqlServer
cmdlet. The server name must be in lowercase characters and must be unique.
New-AzureRmSqlServer
New-AzureRmSqlServer -ResourceGroupName "PowerShellTest" -ServerName "mytest2017ce" -Location "West Europe" -ServerVersion "12.0"
New-AzureRmSqlServerFirewallRule
New-AzureRmSqlServerFirewallRule -ResourceGroupName "PowerShellTest" -ServerName "mytest2017ce" -FirewallRuleName "myFirewallRule" -StartIpAddress "nnn.nnn.nnn.nnn" -EndIpAddress "nnn.nnn.nnn.nnn"
Finally, we can create an Azure SQL Database on the Azure server. Use the New-AzureRmSqlDatabase
cmdlet.
New-AzureRmSqlDatabase
New-AzureRmSqlDatabase -ResourceGroupName "PowerShellTest" -ServerName "mytest2017ce" -DatabaseName "mytestdb" -Edition Standard -RequestedServiceObjectiveName "S1"
This example has shown you the Windows PowerShell cmdlets that are used to create a new Azure server
and a new Azure SQL Database. There are many more cmdlets in the Azure SQL Database PowerShell
module.
Type. Select Windows PowerShell scripts, graphical runbooks, and Windows PowerShell workflows.
Click OK after you have selected the filters that you require.
In the Runbook Gallery, you will find sample runbooks that perform various tasks such as starting Azure
V2 or classic virtual machines, stopping Azure V2 or classic virtual machines, and connecting to an Azure
virtual machine, in addition to several educational runbooks that you can use to familiarize yourself with
Azure Automation. The runbooks that Microsoft has created have gone through a review process with the
Azure Automation product team, so you can be sure that they are good examples of automation.
Note: Runbooks that have been contributed by the community are not necessarily
supported, and there is no requirement for them to be maintained to use the latest cmdlets. They
do, however, share how people have successfully automated tasks.
To see what a runbook does, click the title to see a description and a process flow diagram. Each runbook
is rated and a count of the number of downloads is displayed, together with when the runbook was last
updated. There are three different types of runbooks:
Windows PowerShell runbook
Windows PowerShell Workflow runbook
Graphical runbook
On the Automation Account blade, under Process Automation, click Runbooks. A list of all of the
runbooks is displayed.
Alternatively, on the Azure Automation account dashboard, under Resources, click Runbooks.
Click the runbook that you imported, and then click Edit. The Edit blade appears.
Demonstration Steps
1. Start the MT17B-WS2016-NAT, 20765C-MIA-DC and 20765C-MIA-SQL virtual machines, and log on
to 20765C-MIA-SQL as AdventureWorks\Student with the password Pa55w.rd.
11. Click Create. The Automation Account takes a short while to be created. A message is displayed when
the account has been created.
12. Click All resources to see that the new Automation account has been created.
13. Click on the new Automation account name as named in step 7. The Automation Account blade is
displayed.
14. Click Runbooks Gallery.
15. Point out the options to filter the Runbook Gallery, including Gallery Source, Type, and Publisher,
and then click OK.
21. On the Import blade, click OK. The runbook is imported into your account.
23. On line 33, overtype World with your name, and then click Save.
26. In the Start Runbook blade, click OK, the workflow runs in a test window.
Sequencing Activity
Put the following steps in order by numbering each to indicate the correct order.
Steps
Select a runbook
Objectives
After completing this lab, you will be able to:
Password: Pa55w.rd
6. Create a server admin login with the name Student and a password of Pa55w.rd.
9. Click Create.
11. Leave the Azure portal open for the next lab exercise.
5. Run each section of the script in turn. Note that all the columns are displayed unmasked.
6. Leave SQL Server Management Studio open for the next lab exercise.
2. Select the AdventureWorksLT database that you created in the last task.
b. SalesLT.Customer.EmailAddress—email mask
4. Run the portion of the script marked Test Dynamic Data Masking. Check that the two columns have
been masked correctly.
6. Run the portion of the script marked Test as Admin. Note that the columns appear unmasked.
8. Close SQL Server Management Studio and the Azure portal window.
Results: After this exercise, you will have added data masks to columns that contain sensitive data and
verified that the data is masked to unauthorized users.
3. In the User Account Control dialog box, click Yes, and wait for the script to finish.
8. In the Initialize section of the script, amend the $subscriptionName and $tenantID variables to
those associated with your subscription. (TenantID is the same as the Azure Active Directory
DirectoryID).
9. Run the script titled # Initialize.
11. Run the script # Check name. If the storage account name is not available, change the name in the #
Initialize section and rerun both scripts.
12. Run the script # Prompt for credentials and when prompted, enter Pa55w.rd in both dialog boxes.
18. Run the script under # Create and configure SQL Server.
21. Leave the Azure portal open for the next lab exercise.
2. On the dashboard, create a new Azure Automation account named automate + your initials.
2. Open the Azure Automation account that you have just created.
4. Import the Stop Azure V2 VMs runbook into your Azure Automation account.
5. Publish the Stop Azure V2 VMs runbook.
7. Select Output to view the results. Note the erroneous error message.
8. Verify that the virtual machine has been stopped by viewing the properties of VM1.
3. Import the Start Azure V2 VMs runbook into your Azure Automation account.
6. Configure parameters and run settings with the correct resource group and virtual machine name.
Results: After this exercise, you will have understood how a Windows PowerShell® script is used to create
Azure resources, created an Azure Automation account, and used Azure Automation to stop a virtual
machine.
Question: What are the different types of dynamic data masking? When might you use each
one?
Azure storage
Azure Automation
Best Practice: Whether you hold your data on-premises or in Azure, carry out a threat
analysis. This will help you to identify where data protection is weakest, and which features might
help you to mitigate the risks that you have identified.
Azure offers significant benefits for managing data, including security, geo-replication, anywhere access,
automation, and unlimited storage. This module has introduced three important aspects of moving data
to Azure.
Review Question(s)
Question: What are the main data security concerns in your organization? Which features of
Azure SQL Database are most appropriate to mitigate those concerns?
Course Evaluation
Your evaluation of this course will help Microsoft understand the quality of your learning experience.
Please work with your training provider to access the course evaluation form.
Microsoft will keep your answers to this survey private and confidential and will use your responses to
improve your future learning experience. Your open and honest feedback is valuable and appreciated.
3. In the User Account Control dialog box, click Yes, and then wait for the script to finish.
2. When the tool has run, review the checks that were performed. (If the checks are not visible, click
Show details.)
Results: After this exercise, you should have run the SQL Server setup program and used the tools in the
SQL Server Installation Center to assess the computer’s readiness for SQL Server installation.
2. If the Microsoft Updates or Product Updates pages are displayed, clear any check boxes and click
Next.
3. On the Install Rules page, note that the list of rules has been checked. If the list of checks is not
shown, click Show Details. If a warning about Windows Firewall is displayed, you can continue.
4. On the Install Rules page, click Next.
5. On the Installation Type page, ensure that Perform a new installation of SQL Server is selected,
and then click Next.
6. On the Product Key page, in the Specify a free edition box, select Evaluation, and then click Next.
7. On the License Terms page, note the Microsoft Software License Terms, select I accept the license
terms, and then click Next.
8. On the Feature Selection page, select Database Engine Services, and then click Next.
9. On the Instance Configuration page, ensure that Named instance is selected, in the Named
instance box, type SQLTEST, and then click Next.
10. On the Server Configuration page, on the SQL Server Agent and SQL Server Database Engine
rows, enter the following values:
o Password: Pa55w.rd
11. On the Collation tab, ensure that SQL_Latin1_General_CP1_CI_AS is selected and click Next.
12. On the Database Engine Configuration page, on the Server Configuration tab, in the
Authentication Mode section, select Mixed Mode (SQL Server authentication and Windows
authentication). Enter and confirm the password, Pa55w.rd.
13. Click Add Current User; this will add the user ADVENTUREWORKS\Student (Student) to the list of
Administrators.
14. On the Data Directories tab, change the User database directory to M:\SQLTEST\Data.
15. Change the User database log directory to L:\SQLTEST\Logs.
16. On the TempDB tab, review the default values that have been selected for tempdb data files.
17. On the FILESTREAM tab, ensure that Enable FILESTREAM for Transact-SQL access is not selected,
and then click Next.
18. On the Ready to Install page, review the summary, then click Install and wait for the installation to
complete.
Results: After this exercise, you should have installed an instance of SQL Server.
3. In the left-hand pane of the SQL Server Configuration Manager window, click SQL Server Services.
2. In SQL Server Configuration Manager, expand SQL Native Client 11.0 Configuration (32bit), click
Client Protocols, and verify that the TCP/IP protocol is Enabled for 32-bit client applications.
3. Click Aliases, and note that there are currently no aliases defined for 32-bit clients.
5. In the Alias - New window, in the Alias Name text box, type Test.
8. In SQL Server Configuration Manager, expand SQL Native Client 11.0 Configuration, click Client
Protocols, and verify that the TCP/IP protocol is enabled for 64-bit client applications.
9. Click Aliases, and note that there are currently no aliases defined for 64-bit clients.
11. In the Alias - New window, in the Alias Name text box, type Test.
12. In the Protocol drop-down list box, click TCP/IP.
13. In the Server text box, type MIA-SQL\SQLTEST and click OK.
2. At the command prompt, enter the following command to connect to the MIA-SQL\SQLTEST instance
of SQL Server:
sqlcmd -S MIA-SQL\SQLTEST -E
3. At the sqlcmd prompt, enter the following command to display the SQL Server instance name, and
then press ENTER:
SELECT @@ServerName;
GO
5. Start SQL Server Management Studio, and when prompted, connect to the database engine named
Test using Windows Authentication.
6. In Object Explorer, right-click Test, and then click Properties.
7. Verify that the value of the Name property is MIA-SQL\SQLTEST and click Cancel.
10. When prompted to confirm that you want to stop the MSSQL$SQLTEST service, click Yes.
11. When the service has stopped, close SQL Server Management Studio.
Results: After this exercise, you should have started the SQL Server service and connected using SSMS.
4. Review the content in conjunction with the Install SQL Server From the Command Prompt topic in the
SQL Server online documentation. In particular, note the values of the following properties:
a. INSTANCEID
b. INSTANCENAME
c. ACTION
d. FEATURES
e. TCPENABLED
f. SQLUSERDBDIR
g. SQLUSERDBLOGDIR
h. SQLTEMPDBFILECOUNT
2. Locate the INSTANCENAME parameter in the file. Edit so that its value is SQLDEV. The line should
look like this:
INSTANCENAME="SQLDEV"
3. Locate the INSTANCEID parameter in the file. Edit so that its value is SQLDEV. The line should look
like this:
INSTANCEID="SQLDEV"
4. Locate the TCPENABLED parameter in the file. Edit so that its value is 0. The line should look like this:
TCPENABLED="0"
5. Locate the SQLUSERDBDIR parameter in the file. Edit so that its value is C:\devdb. The line should
look like this:
SQLUSERDBDIR="C:\devdb"
6. Locate the SQLUSERDBLOGDIR parameter in the file. Edit so that its value is C:\devdb. The line
should look like this:
SQLUSERDBLOGDIR="C:\devdb"
7. Locate the SQLTEMPDBFILECOUNT parameter in the file. Edit so that its value is 2. The line should
look like this:
SQLTEMPDBFILECOUNT="2"
Results: After this exercise, you will have reviewed and edited an unattended installation configuration
file.
3. In the User Account Control dialog box, click Yes, wait for the script to finish, and then press any
key.
5. Click SQL Server authentication, then in the Password and Confirm password boxes, type
Pa55w.rd1.
6. Clear User must change password at next login, then click OK.
4. In Solution Explorer, expand Queries, and then double-click the query Lab Exercise 01 - create
login.sql.
5. In the query window, highlight the statement USE master, and click Execute.
6. In the query pane, after the Task 2 description, type the following query:
7. From the task description, copy and paste the password hash value (starting 0x02…) over
<password_hash_value>.
8. From the task description, copy and paste the SID value (starting 0x44…) over <sid_value>.
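The statement in the script therefore takes a form similar to the following sketch (the login name is hypothetical, and the placeholders are the hash and SID values copied from the task description):
-- Recreate a login from an existing password hash and SID.
CREATE LOGIN [MigratedLogin]
WITH PASSWORD = <password_hash_value> HASHED,
     SID = <sid_value>;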
Results: After this exercise, you should be able to create a login using SSMS and the CREATE LOGIN
command.
2. On the General page, in the Source section, click Device, and then click the ellipsis (…) button.
4. In the Locate Backup File - MIA-SQL dialog box, browse to the D:\Labfiles\Lab03\Starter folder,
click TSQL1.bak, and then click OK.
7. On the Options page, in the Recovery state list, click RESTORE WITH NORECOVERY, and then click
OK.
8. In the Microsoft SQL Server Management Studio dialog box, click OK.
3. In the Source for restore section, click From device, and then click the ellipsis (…) button.
4. In the Select backup devices dialog box, click Add.
5. In the Locate Backup File - MIA-SQL dialog box, browse to the D:\Labfiles\Lab03\Starter folder,
click TSQL1_trn1.trn, and then click OK.
7. In the Select the backup sets to restore section, select the check box in the Restore column, and
then click OK.
8. In the Microsoft SQL Server Management Studio dialog box, click OK.
2. In the query window, after the Task 3 heading, type the following:
EXEC TSQL.sys.sp_updatestats;
Restore a database backup taken from one SQL Server instance to another instance.
2. In the query window, highlight the statement USE TSQL; and then click Execute.
2. In the Databases Properties - TSQL window, on the Options page, change the value of the
Compatibility level box to SQL Server 2017 (140), and then click OK.
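The same change can be made in Transact-SQL; a minimal sketch, assuming the TSQL database from this lab:
-- Raise the database compatibility level to SQL Server 2017 (140).
ALTER DATABASE TSQL SET COMPATIBILITY_LEVEL = 140;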
2. In Object Explorer, expand Databases, expand System Databases, right-click tempdb, and click
Properties.
3. On the Files page, view the current file settings, and then click Cancel.
4. On the toolbar, click New Query.
USE master;
GO
ALTER DATABASE tempdb
MODIFY FILE (NAME = tempdev, SIZE = 10MB, FILEGROWTH = 5MB, FILENAME =
'T:\tempdb.mdf');
ALTER DATABASE tempdb
MODIFY FILE (NAME = templog, SIZE=5MB, FILEGROWTH = 1MB, FILENAME =
'T:\templog.ldf');
GO
8. When prompted to allow changes, to restart the service, and to stop any dependent services, click
Yes.
9. View the contents of T:\ and note that the tempdb.mdf and templog.ldf files have been moved to
this location.
10. In SQL Server Management Studio, in Object Explorer, right-click tempdb, and click Properties.
11. On the Files page, verify that the file settings have been modified, and then click Cancel.
12. Save the script file as Configure TempDB.sql in the D:\Labfiles\Lab04\Starter folder.
13. Keep SQL Server Management Studio open for the next exercise.
Results: After this exercise, you should have inspected and configured the tempdb database.
3. In Object Explorer, right-click the Databases folder, and then click Refresh to confirm that the
HumanResources database has been created.
5. Keep SQL Server Management Studio open for the next exercise.
3. Under the existing code, enter the following statements, select the statements you have just added,
and then click Execute:
5. Keep SQL Server Management Studio open for the next exercise.
2. Select the code under the comment View page usage and click Execute. This query retrieves data
about the files in the InternetSales database.
3. Note the UsedPages and TotalPages values for the SalesData filegroup.
4. Select the code under the comment Create a table on the SalesData filegroup and click Execute.
5. Select the code under the comment Insert 10,000 rows and click Execute.
6. Select the code under the comment View page usage again and click Execute.
7. Note the UsedPages value for the SalesData filegroup, and verify that the data in the table is spread
across the files in the filegroup.
8. Keep SQL Server Management Studio open for the next exercise.
Results: After this exercise, you should have created a new HumanResources database and an
InternetSales database that includes multiple filegroups.
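For reference, a minimal sketch of a CREATE DATABASE statement with a secondary filegroup, similar in shape to the InternetSales database used above (the file names, paths, and sizes here are illustrative assumptions, not the lab's values):
-- Two data files in the SalesData filegroup allow table data to be spread across the files.
CREATE DATABASE InternetSalesDemo
ON PRIMARY
    (NAME = 'InternetSalesDemo', FILENAME = 'M:\Data\InternetSalesDemo.mdf', SIZE = 50MB),
FILEGROUP SalesData
    (NAME = 'InternetSalesDemo_data1', FILENAME = 'M:\Data\InternetSalesDemo_data1.ndf', SIZE = 100MB),
    (NAME = 'InternetSalesDemo_data2', FILENAME = 'M:\Data\InternetSalesDemo_data2.ndf', SIZE = 100MB)
LOG ON
    (NAME = 'InternetSalesDemo_log', FILENAME = 'L:\Logs\InternetSalesDemo_log.ldf', SIZE = 20MB);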
2. Using File Explorer, move the following files from the D:\Labfiles\Lab04\Starter folder to the
M:\Data folder:
o AWDataWarehouse.mdf
o AWDataWarehouse_archive.ndf
o AWDataWarehouse_current.ndf
3. In SQL Server Management Studio, in Object Explorer, right-click Databases and click Attach.
5. In the Locate Database Files - MIA-SQL dialog box, in the M:\Data\ folder, select the
AWDataWarehouse.mdf database file, and click OK.
6. In the Attach Databases dialog box, after you have added the master database file, note that all of
the database files are listed, and then click OK.
3. Select the Read-Only check box for the Archive filegroup and click OK.
5. On the Storage page, verify that the dbo.FactInternetSales table is stored in the Current filegroup.
Then click Cancel.
7. On the Storage page, verify that the dbo.FactInternetSalesArchive table is stored in the Archive
filegroup, and then click Cancel.
8. In Object Explorer, right-click the dbo.FactInternetSales table and click Edit Top 200 Rows.
9. In the results output, change the SalesAmount value for the first record to 2500, press Enter to
update the record, and then close the dbo.FactInternetSales results.
10. In Object Explorer, right-click the dbo.FactInternetSalesArchive table and click Edit Top 200 Rows.
11. In the results output, change the SalesAmount value for the first record to 3500 and press Enter to
update the record.
12. View the error message that is displayed, click OK, and then press Esc to cancel the update and close
the dbo.FactInternetSalesArchive results.
13. Close SQL Server Management Studio without saving any changes.
Results: After this exercise, you should have attached the AWDataWarehouse database to MIA-SQL.
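The Read-Only change made for the Archive filegroup in the Database Properties dialog above also has a simple Transact-SQL equivalent; a minimal sketch:
-- Mark the Archive filegroup read-only so that its tables reject data modifications.
ALTER DATABASE AWDataWarehouse MODIFY FILEGROUP Archive READ_ONLY;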
3. In the User Account Control dialog box, click Yes to confirm that you want to run the command file,
and wait for the script to finish.
3. In the Open Project dialog box, navigate to the D:\Labfiles\Lab05\Starter\Ex1 folder, click
Ex1.ssmssln, and then click Open.
4. In Solution Explorer, expand Queries, and double-click Ex1-DBCC.sql.
5. In the query pane, review the code under the comment -- Update the Discontinued column for the
Chai product, select the code, and then click Execute.
6. In the query pane, review the code under the comment -- Commit the transaction, select the code,
and then click Execute.
7. In the query pane, review the code under the comment -- Simulate an automatic checkpoint, select
the code, and then click Execute.
9. In the query pane, review the code under the comment -- Open up a different connection and run
this, select the code, and then click Execute.
12. Type CorruptDB_log_renamed and press ENTER. In the File Access Denied message box, click
Continue. This simulates losing the transaction log file in a disk failure.
13. Click Start, type SQL Server Configuration, and then click SQL Server 2017 Configuration
Manager.
15. In SQL Server Configuration Manager, under SQL Server Configuration Manager (Local), click SQL
Server Services.
16. In the right-hand pane, right-click SQL Server (MSSQLSERVER), and then click Start.
17. Right-click SQL Server Agent (MSSQLSERVER), and then click Start.
19. In SQL Server Management Studio, in the Ex1-DBCC.sql query pane, review the code under the
comment -- Try to access the CorruptDB database, select the code, and then click Execute. Note
that the query causes an error.
20. Repeat Step 19, and note that the query causes an error.
21. In the query pane, review the code under the comment -- Check the status of the database, select
the code, and then click Execute.
22. In the query pane, review the code under the comment -- Confirm that the database is not online,
select the code, and then click Execute.
23. In Object Explorer, right-click MIA-SQL, click Refresh, and then expand Databases. Note that
CorruptDB shows as Recovery Pending.
24. In the query pane, review the code under the comment -- Use Emergency mode to review the data
in the database, select the code, and then click Execute.
25. In the query pane, review the code under the comment -- Review the state of the discontinued
products data, select the code, and then click Execute. Note that the query shows zero products in
stock, because the erroneous transaction was committed on disk and the transaction log file has been
lost.
26. In the query pane, review the code under the comment -- Set CorruptDB offline, select the code,
and then click Execute.
27. In Windows Explorer, navigate to the D:\Labfiles\Lab05\Starter\Data folder, right-click
CorruptDB_log_renamed, and click Rename.
28. Type CorruptDB_log and press ENTER. In the File Access Denied message box, click Continue. This
simulates replacing the lost transaction log file with a mirror copy.
29. In SQL Server Management Studio, in the query pane, review the code under the comment -- After
replacing the transaction log file, set CorruptDB back online, select the code, and then click
Execute.
30. In the query pane, review the code under the comment -- Set the database in single user mode
and use DBCC CHECKDB to repair the database, select the code, and then click Execute.
31. In the query pane, review the code under the comment -- Switch the database back into multi-
user mode, select the code, and then click Execute.
32. In the query pane, review the code under the comment -- Check the data has returned to the pre-
failure state, select the code, and then click Execute. Note that the Discontinued column now
shows that 77 products are in stock—the state that the data was in before the erroneous transaction
was run.
33. Close the solution without saving changes, but leave SQL Server Management Studio open for the
next exercise.
Results: After this exercise, you should have used DBCC CHECKDB to repair a corrupt database.
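A minimal sketch of the repair sequence that the Ex1-DBCC.sql script walks through in steps 30 and 31 (REPAIR_ALLOW_DATA_LOSS is assumed here; it is the repair level that can rebuild a missing transaction log):
-- Take exclusive access, repair the database, then return it to normal access.
ALTER DATABASE CorruptDB SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
DBCC CHECKDB (CorruptDB, REPAIR_ALLOW_DATA_LOSS);
ALTER DATABASE CorruptDB SET MULTI_USER;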
2. In the Open Project dialog box, navigate to the D:\Labfiles\Lab05\Starter\Ex2 folder, click
Ex2.ssmssln, and then click Open.
4. In the query pane, review the code under the comment -- View the statistics for index
fragmentation on the Sales tables in the AdventureWorks database, select the code, and then
click Execute.
5. In the query pane, review the code under the comment -- View the statistics for index
fragmentation on the SalesOrderHeader table, select the code, and then click Execute.
8. In the query pane, review the code under the comment -- View the statistics for index
fragmentation following the data insertion, select the code, and then click Execute.
10. In the query pane, review the code under the comment -- Rebuild the indexes, select the code, and
then click Execute.
11. In the query pane, review the code under the comment -- View the statistics for index
fragmentation following the index rebuild, select the code, and then click Execute.
Results: After this exercise, you should have rebuilt indexes on the Sales.SalesOrderHeader table,
resulting in better performance.
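The queries and the rebuild in this exercise follow a common pattern; a minimal sketch (the column list is simplified from what the lab script returns):
USE AdventureWorks;
GO
-- Check fragmentation of the indexes on Sales.SalesOrderHeader.
SELECT i.name AS index_name, ips.avg_fragmentation_in_percent
FROM sys.dm_db_index_physical_stats(DB_ID(), OBJECT_ID('Sales.SalesOrderHeader'), NULL, NULL, 'LIMITED') AS ips
JOIN sys.indexes AS i
    ON i.object_id = ips.object_id AND i.index_id = ips.index_id;
GO
-- Rebuild all indexes on the table to remove fragmentation.
ALTER INDEX ALL ON Sales.SalesOrderHeader REBUILD;
GO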
3. On the Select Plan Properties page, in the Name box, type Maintenance Plan for Backup of
AdventureWorks Database, select Separate schedules for each task, and then click Next.
4. On the Select Maintenance Tasks page, select the following tasks, and then click Next:
o Back Up Database (Full)
6. On the Define Back Up Database (Full) Task page, in the Database(s) list, click AdventureWorks,
and then click OK to close the drop-down list box.
7. Review the options on the Destination and Options tabs to see further changes possible.
8. On the General tab, in the Schedule section, click Change. Review the default schedule, and then
click OK.
10. On the Define Back Up Database (Differential) Task page, in the Database(s) list, select
AdventureWorks, and then click OK to close the drop-down list box.
12. In the New Job Schedule dialog box, in the Frequency section, in the Occurs drop-down list box,
click Daily, and then click OK.
13. On the Define Back Up Database (Differential) Task page, click Next.
14. On the Define Back Up Database (Transaction Log) Task page, in the Database(s) list, select
AdventureWorks, and then click OK to close the drop-down list box.
16. In the New Job Schedule dialog box, in the Frequency section, in the Occurs drop-down list box,
click Daily, in the Daily frequency section, select Occurs every, and then click OK.
17. On the Define Back Up Database (Transaction Log) Task page, click Next.
18. On the Select Report Options page, accept the default options, and then click Next.
19. On the Complete the Wizard page, click Finish. Wait for the operation to complete, and then click
Close.
3. On the Select Plan Properties page, in the Name box, type Maintenance Plan for Checking
Integrity of the AdventureWorks Database, and then click Next.
4. On the Select Maintenance Tasks page, select Check Database Integrity and click Next.
6. On the Define Database Check Integrity Task page, in the Database(s) list, click AdventureWorks,
and then click OK to close the drop-down list box.
7. On the Define Database Check Integrity Task page, select the Tablock check box to minimize
resource usage and maximize performance of the operations, and then click Next.
9. On the Complete the Wizard page, click Finish to create the Maintenance Plan. Wait for the
operation to complete, and then click Close.
2. In the Execute Maintenance Plan dialog box, wait until the maintenance plan succeeds, and then
click Close.
3. Right-click Maintenance Plan for Checking Integrity of the AdventureWorks Database, and then
click View History.
4. In the Log File Viewer - MIA-SQL dialog box, expand the Date value for the Daily Maintenance
plan to see the individual task.
5. Review the data in the Log file summary section, and then click Close.
Results: After this exercise, you should have created the required database maintenance plans.
2. In the D:\Labfiles\Lab06\Starter folder, right-click Setup.cmd and then click Run as administrator.
3. In the User Account Control dialog box, click Yes, and wait for the script to finish.
5. Click Download.
6. Click Save.
10. In the Privacy Policy page, click I agree to the Privacy Policy, and click Install.
11. In the User Account Control message box, click Yes.
12. Check the box for Launch Microsoft Data Migration Assistant, and click Finish.
Task 2: Run the Stretch Database Advisor in the Data Migration Assistant
1. In Data Migration Assistant, click + to create a new project.
2. In the New Project Type dialog, ensure that Assessment is selected, then enter StretchDatabase as
the project name.
3. Set the Source server type to SQL Server and Target Server type to SQL Server, then click Create.
5. On the Options tab, ensure the New features’ recommendation option is checked, then click Next.
6. On the Connect to a server tab, in the Server Name box, type MIA-SQL, select Windows
Authentication, check the Trust server certificate check box, and then click Connect.
10. Under High Value, click Stretch database to minimize storage costs. Note that this
recommendation applies to the Sales.OrderTracking and Sales.SalesOrderDetail tables.
11. Close Data Migration Assistant without saving the results.
Results: After this exercise, you will know which tables within the Adventure Works database are eligible
for Stretch Database.
3. In the new query pane, type the following Transact-SQL, and then click Execute to enable Stretch
Database for the SQL Server instance.
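Stretch Database is enabled at the instance level through the remote data archive configuration option, so the statement you type here is likely of this form (a minimal sketch):
-- Enable Stretch Database at the instance level.
EXEC sp_configure 'remote data archive', 1;
RECONFIGURE;
GO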
2. In the Enable Database for Stretch wizard, on the Introduction page, click Next.
3. On the Select tables page, in the tables list, select the check box next to the table OrderTracking,
and then click Next.
5. On the Sign in to your account page, enter your Azure pass credentials, and then click Sign in.
6. On the Configure Azure page, in the Microsoft Azure Sign In section, in the Select a subscription
to use box, select Azure Pass, and in the Select Azure region box, choose an appropriate region.
7. In the Select Azure server section, ensure the Create new server option is selected.
8. In the Server admin login box, type Student, in the Password and Confirm Password boxes, type
Pa55w.rd, and then click Next.
9. On the Secure credentials page, in the New Password and Confirm Password boxes, type
Pa55w.rd1234, and then click Next.
10. On the Select IP address page, in the From and To box, type the IP addresses as provided by your
instructor, and then click Next.
11. On the Summary page, review the details shown, and then click Finish. SQL Server will now configure
Stretch Database for the OrderTracking table. This process may take several minutes.
Results: After this exercise, you will have Stretch Database implemented for the OrderTracking table.
2. Normally, you would download one of the tools and run it for an hour but, to save time, we have
pre-created a CSV results file.
3. In Cores, type 4.
4. In File Upload, click Browse and select D:\Labfiles\Lab07\Starter\sql-perfmon-log.csv.
5. Click Calculate.
2. Users should only be able to access the SalesOrders database from their IP address.
3. Users have an internal IP address range of 169.254.1.1 to 169.254.3.128.
5. Note down the server level firewall rules and database level firewall rules that you require.
Results: After this exercise, you should have planned the performance levels and firewall settings for your
Azure SQL Database.
3. Click New, then click Databases, and then click SQL Database.
5. In the Resource group box, click Create new, then type a name for your resource group—this must
be unique, so cannot be specified here. A suggested format is sql2017-<your initials><one or
more digits>. For example, sql2017-js123. Keep a note of the name you have chosen.
7. Click Server.
9. On the New server blade, in the Server name box type a name for your server—this must be unique
throughout the whole Azure service, so cannot be specified here. A suggested format is sql2017-
<your initials><one or more digits>. For example, sql2017-js123. Keep a note of the name you
have chosen.
13. On the SQL Database blade, in Want to use SQL elastic pool?, click Yes.
16. In the Elastic database pool blade, in the Name box, enter ElasticPool1.
17. Click Pricing tier, then click Standard, and click Select.
18. Click Configure pool.
24. It will take some time for the new server and database to be created. The Azure portal will notify you
when this step is finished.
25. Leave Internet Explorer open for the next task.
2. In the All resources pane, click the server name you created in the previous task.
4. In the Firewall settings blade, click Add client IP, and then click Save.
Results: After this exercise, you will have created an empty Azure SQL Database and configured server
firewall rules.
2. Click SQL databases and verify that SalesOrders appears in the list of databases.
5. In the Connect to Server dialog box, in the Server name box, type the fully qualified server name
you created in the first task of this exercise. The server name must end with the suffix
.database.windows.net.
7. In the Login box, type salesordersadmin, in the Password box, type Pa55w.rd, and then click
Connect.
9. Expand SalesOrders.
10. Expand Tables.
11. Note that there are currently no tables as this is a new, empty database.
3. In the query window, type the following code and press Execute:
5. In the query window, type the following code and press Execute:
6. In the query window, type the following code and press Execute:
7. Note that you can now see your database firewall rule.
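The code you type in steps 3 to 6 is not reproduced here, but it will resemble the following sketch (the rule name is hypothetical; the IP range is the internal range from the planning exercise):
-- Create (or update) a database-level firewall rule.
EXECUTE sp_set_database_firewall_rule
    @name = N'InternalUsers',
    @start_ip_address = '169.254.1.1',
    @end_ip_address = '169.254.3.128';
-- List the database-level firewall rules (what step 7 refers to).
SELECT * FROM sys.database_firewall_rules;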
4. Click Delete.
5. Click Yes.
Results: After this exercise, you will have connected to your database and configured server and database
firewall rules.
4. Wait for the script to finish, and then press any key to continue.
Task 2: Run the Export Data-tier Application Wizard to Check Database Compatibility
1. Start SQL Server Management Studio and connect to the MIA-SQL database engine using Windows
authentication.
2. In Object Explorer, expand Databases, right-click salesapp1, point to Tasks, and then click Export
Data-tier Application.
3. In the Export Data-tier Application 'salesapp1' dialog box, on the Introduction page, click Next.
4. On the Export Settings page, on the Settings tab, ensure that Save to local disk is selected, and
then in the text box, type D:\Labfiles\Lab08\salesapp1.bacpac.
5. On the Advanced tab, clear the Select All check box, and then click Next.
6. On the Summary page, verify the options you have selected, and then click Finish. Wait for the test
to complete.
7. On the Results page, examine the output of the process. Notice that the database has failed
verification. Click any of the instances of Error in the Results column of the output to see details of
the error.
The verification test reports a failure because of the syntax of the T-SQL statement in the
dbo.up_CrossDatabaseQuery stored procedure.
USE salesapp1;
GO
DROP PROCEDURE dbo.up_CrossDatabaseQuery;
GO
2. In the Export Data-tier Application 'salesapp1' dialog box, on the Introduction page, click Next.
3. On the Export Settings page, on the Settings tab, ensure that Save to local disk is selected, and
then in the text box, type D:\Labfiles\Lab08\salesapp1.bacpac.
4. On the Advanced tab, clear the Select All check box, and then click Next.
5. On the Summary page, verify the options you have selected, and then click Finish. Wait for the test
to complete.
6. On the Results page, examine the output of the process. Notice that the database has passed
verification, and then click Close.
Results: After this exercise, you should have run the tools to check database compatibility with Azure
from SSMS.
2. Sign in to the Azure portal with your Azure Pass or Microsoft Account credentials.
3. Click New, click Databases, and then click SQL Database.
9. Under Location, select a region nearest your current geographical location, and then click Select.
10. On the SQL Database blade, verify that Select source has the value Blank database, and then click
Create.
11. It will take some time for the new server and database to be created. The Azure portal will notify you
when this step is finished.
2. In the All resources pane, click the server name you created in the previous task (if you followed the
suggested naming convention, the server name will start sql2017).
4. In the Firewall settings blade, click Add client IP, and then click Save.
5. When the firewall changes are complete, click OK.
2. In the Object Explorer pane, expand Databases, right-click salesapp1, point to Tasks, and then click
Deploy Database to Microsoft Azure SQL Database.
3. In the Deploy Database 'salesapp1' dialog box, on the Introduction page, click Next.
10. On the Results page, review the outcome of the wizard, and then click Close.
2. Click SQL databases and verify that salesapp1 appears in the list of databases.
5. On the Query menu, point to Connection, and then click Change Connection.
6. In the Connect to Database Engine dialog box, in the Server name box, type the fully qualified
server name you created in the first task of this exercise (if you followed the suggested naming
convention, the server name will start sql2017). The server name must end with the suffix
.database.windows.net.
7. In the Authentication list, click SQL Server Authentication.
8. In the Login box, type salesappadmin, in the Password box, type Pa55w.rd1, and then click
Connect.
11. Click Execute and observe that 10 rows are returned from the Azure database.
Results: After this task, you will have created an Azure SQL Database instance, migrated an on-premises
database to Azure SQL Database, and be able to connect to, and query, the migrated database.
4. Wait for the script to finish, and then press any key to continue.
2. Sign in to the Azure portal with your Azure Pass or Microsoft account credentials.
3. In the navigation on the left, click New, and then click Compute.
5. In the Search box, type SQL Server, and then press Enter.
6. In the list of results, click SQL Server 2017 Enterprise Windows Server 2016.
7. In the Select a deployment model list, ensure that Resource Manager is selected, and then click
Create.
8. On the Basics blade, in the Name box, type a name for your VM. This must be unique throughout
the whole Azure service, so cannot be specified here. A suggested format is sql2017vm-<your
initials><one or more digits>. For example, sql2017vm-js123. Keep a note of the name you have
chosen.
12. Under Resource group, click Create new, and then type SQLResourceGroup.
13. In the Location list, select a location near you, and then click OK.
14. On the Size blade, click View all, click DS11_V2 Standard, and then click Select.
16. On the SQL Server Settings blade, click OK to accept the default values.
17. On the Summary blade, click Create. Azure creates the new VM. The process may take some time.
Task 3: Connect to the Virtual Machine by Using the Microsoft Remote Desktop
Connection Client
1. After the virtual machine has been created and started, the Azure portal displays the virtual machine’s
blade. Click Connect, and then click Open.
3. In the Windows Security dialog box, click More choices, then click Use a different account.
6. In the Remote Desktop Connection dialog box, click Yes. The Remote Desktop Connection client
connects and displays the desktop for the new server. Server Manager starts.
11. Close SQL Server Management Studio and log out of the remote desktop session.
Results: After this exercise, you will have created a virtual machine in your Azure subscription that runs
SQL Server.
4. In the Back Up Database - ExampleDB dialog box, on the General page, under Destination, click
Remove.
5. Click Add.
6. In the Select Backup Destination dialog box, click the ellipses (…) button.
7. In the Locate Database Files - MIA-SQL dialog box, in the File name box, type
D:\Labfiles\Lab09\Backups.bak, and then click OK.
10. Under Overwrite media, click Back up to a new media set, and erase all existing backup sets.
12. Under Select a page in the top left, click Backup Options.
14. In the Set backup compression list, click Compress backup, and then click OK.
15. In the Microsoft SQL Server Management Studio dialog box, click OK.
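The backup configured in this dialog corresponds roughly to the following Transact-SQL sketch:
-- Full backup to a new media set (FORMAT) with backup compression.
BACKUP DATABASE ExampleDB
TO DISK = 'D:\Labfiles\Lab09\Backups.bak'
WITH FORMAT, INIT, COMPRESSION;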
6. In the Remote Desktop Connection dialog box, click Yes. The Azure VM desktop appears.
7. In the host virtual machine, start File Explorer, and browse to D:\LabFiles\Lab09.
9. Switch back to the Azure VM, start File Explorer, and browse to F:\
10. On the Home menu, click Paste. The remote desktop protocol client copies the backup file to the
VM.
4. In the Restore Database dialog box, on the General page, under Source, click Device, and then click
the ellipses (…) button.
5. In the Select backup devices dialog box, in the Backup media type list, click File, and then click
Add.
6. In the Locate Backup File dialog box, browse to F:\, click Backups.bak, and then click OK.
9. Select the Relocate all files to folder check box, and then click OK. SQL Server restores the database.
10. In the Microsoft SQL Server Management Studio dialog box, click OK.
11. In the Object Explorer, expand Databases. If the ExampleDB database does not appear, right-click
Databases, and then click Refresh.
Results: At the end of this exercise, you will have migrated a database from the on-premises server that
runs SQL Server to the SQL Server instance on the new virtual machine.
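The restore configured in the dialog corresponds roughly to this sketch (the logical file names and the target folder on the VM are assumptions; check the actual names with RESTORE FILELISTONLY):
-- Relocate the data and log files while restoring on the Azure VM.
RESTORE DATABASE ExampleDB
FROM DISK = 'F:\Backups.bak'
WITH MOVE 'ExampleDB' TO 'F:\Data\ExampleDB.mdf',
     MOVE 'ExampleDB_log' TO 'F:\Data\ExampleDB_log.ldf',
     RECOVERY;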
7. In the top right, click the Notifications button. The Notifications pane displays the status of the
configuration task. This may take several minutes.
Task 2: Connect SQL Server Management Studio to the Azure Virtual Machine
1. When the virtual machine has been successfully updated, in the Azure portal, in the virtual machine’s
blade, click Overview.
4. In the Connect to Server dialog box, in the Server name box, type the IP address you just noted.
5. In the Authentication list, click SQL Server Authentication.
Results: When you have completed this exercise, you will have connected an instance of SQL Server
Management Studio to the Azure instance of SQL Server.
2. Sign in to the Azure portal with your Azure Pass or Microsoft Account credentials.
3. Click New, click Databases, and from the Databases blade select SQL Database.
7. In the Resource group box, select Create New and type 20765C + your initials. A green tick
appears if the name is acceptable. Add one or more digits if necessary. Alternatively, use a resource
group created in a previous lab exercise.
9. In Server, create a new server; the New server blade appears. Alternatively, select a server created in
a previous lab exercise.
10. If you are creating a new server, in the Server name box, type 20765C + your initials. This must be
unique so add one or more digits if necessary. A green tick appears if the name is acceptable. Make a
note of the server name.
14. In the Location box, select the region nearest to your current geographical location.
15. Leave Allow Azure services to access server selected, and then click Select.
16. In the Want to use SQL elastic pool box, click Not now.
18. Click Create. If you have created a new server, it will take a little time for the new server and database
to be created. The Azure portal will notify you when this step is finished.
19. Select the server, and in the Settings group, click Firewall/Virtual Networks (Preview).
20. Click Add client IP, click Save, and then click OK.
21. Leave the Azure portal open for the next lab exercise.
2. In the Login box, type Student, and in the password box, type Pa55w.rd, and then click Connect.
4. In the Open File dialog box, navigate to D:\Labfiles\Lab10\Starter, and then double-click Data
Masking.sql.
6. Under the Create Test User comment, highlight the code, and then click Execute.
7. Under the Execute as TestUser comment, highlight the code, and then click Execute.
8. Under the Test Dynamic Data Masking comment, highlight the code, and then click Execute. Note
that all the columns are displayed unmasked.
9. Run the script titled Revert.
10. Leave SQL Server Management Studio open for the next lab exercise.
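The sections of Data Masking.sql referenced above are not reproduced here; a minimal sketch of what such a script typically contains (the user name and query are illustrative assumptions):
-- Create Test User: an unprivileged user for observing the data masks.
CREATE USER TestUser WITHOUT LOGIN;
GRANT SELECT ON SCHEMA::SalesLT TO TestUser;
-- Execute as TestUser / Test Dynamic Data Masking: query the customer data as the test user.
EXECUTE AS USER = 'TestUser';
SELECT FirstName, LastName, EmailAddress FROM SalesLT.Customer;
-- Revert: return to the administrative context.
REVERT;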
2. On the Dashboard, click on the AdventureWorksLT database you created earlier in the exercise.
a. SalesLT.Customer.LastName.
b. SalesLT.Customer.EmailAddress.
5. In the Masking rules section, click the mask function next to SalesLT_Customer_EmailAddress. This
is currently set to Default value.
6. In the Edit Masking Rule blade, in the Masking field format list, click Email ([email protected]),
click Update, and then close the blade.
7. Click Save.
8. A message appears confirming that you have successfully saved the Dynamic Data Masking settings; click OK.
9. In the Masking rules section, click the mask function next to SalesLT.Customer.LastName. This is
currently set to Default value.
10. In the Edit Masking Rule blade, in the Masking field format list, click Custom string (prefix
[padding] suffix).
13. In the Exposed Suffix box, type 0, click Update, and then close the blade.
14. Click Save. When the settings have been saved, click OK.
3. In the Open File dialog box, navigate to D:\Labfiles\Lab10\Starter, and then double-click Check
Masking.sql.
6. Run the script under the heading Test Dynamic Data Masking. Check that the columns have been
masked correctly.
8. Run the script under the heading Test as Admin. Note that the columns are displayed unmasked.
10. Close SQL Server Management Studio and the Azure portal window.
Results: After this exercise, you will have added data masks to columns that contain sensitive data and
verified that the data is masked to unauthorized users.
4. Using Internet Explorer, open the Azure portal (https://portal.azure.com) using your Azure pass
credentials.
5. On the Start Menu, right-click Windows PowerShell, click More, then click Run as Administrator.
8. In the Open File dialog box, navigate to D:\Labfiles\Lab10\Starter, and then double-click
CreateResources.ps1.
10. Amend the $subscriptionName variable to the name of your Azure subscription.
11. Amend the $tenantID variable to the correct TenantID. You will find this in the Azure Active
Directory section, in Properties. Copy the Directory ID value into the $tenantID variable in the script.
12. Highlight the script block titled # Initialize and click Run Selection to execute the script.
13. Run the script under # Sign in to Azure and select subscription. You will be prompted to sign in.
14. Run the script under # Check name. If the storage name is not available, go back to the # Initialize
section, add a number to the storage account variable, run the script to initialize the new name, and
then run the # Check name script again.
15. Run the script under # Prompt for credentials. In both dialog boxes, type the password Pa55w.rd.
16. Run the script under # Create resource groups.
18. Run the script under # Create virtual network. Note the warning.
19. Run the script under # Create public IP. Note the warning.
20. Run the script under # Create virtual machine. This may take a couple of minutes to complete.
21. Run the script under # Create and configure SQL Server. This may take a couple of minutes to
complete. If an error message appears stating that the server name is not available, go back to the
# Initialize section, add a number to the server name variable, run the script to initialize the new
name, and then run the # Create and configure SQL Server script again.
24. Leave the Azure portal open for the next lab exercise.
11. Click All resources to see that the new Automation account has been created.
2. Click All resources, and then click automate (the automation account you have just created).
5. In the Browse Gallery blade, click Stop Azure V2 VMs. This is a graphical runbook.
7. In the Import blade, check that the StopAzureV2Vm runbook is selected, and click OK.
9. In the Edit Graphical Runbook blade, click Publish, and then click Yes. A message confirms that the
runbook has been published.
10. In the StopAzureV2Vm blade, check that the status is Published, and then click Start.
11. In the Start Runbook blade, in the ResourceGroupName box, type ResGroup1, and in the
VMName box, type VM1, and then click OK.
12. In the StopAzureV2Vm job blade, click Output. Although an error message appears, the VM has
been stopped.
13. Verify that the VM has been stopped by closing the Output blade, click All resources, and then click
VM1. Note that the status is Stopped (deallocated).
2. In All resources, click automate (the automation account you created in the previous lab exercise).
3. Under Process Automation, click Runbooks.
5. In the Browse Gallery blade, click Start Azure V2 VMs. This is a graphical runbook.
6. In the Start Azure V2 VMs blade, click Import.
7. In the Import blade, check that the StartAzureV2Vm runbook is selected, and click OK.
10. In the StartAzureV2Vm blade, check that the status is Published, and then click Schedule.
11. In the Schedule Runbook blade, click Link a schedule to your runbook.
13. In the Name box, type MySchedule. Leave the Description box blank.
14. In the Starts box, select a time 6 minutes into the future.
15. In the Time zone box, select your local time zone.
16. In the Recurrence box, click Once, and then click Create.
17. In the Schedule Runbook blade, click Configure parameters and run settings.
19. In the VMName box, type VM1, and then click OK.
21. In the StartAzureV2Vm blade, under Details, click Schedules to view the schedule you created.
Close the Schedules blade.
22. Wait 5 minutes, and when the job is due to run, click Jobs.
23. The job should appear as running at the scheduled time. Wait until the job has completed.
24. In All resources, click VM1. Note that the status is Running.
2. In All resources, click automate (the automation account you created in the previous lab exercise).
4. Click AzureAutomationTutorial.
6. In the Edit Graphical Runbook blade, click Publish, and then click Yes. A message confirms that the
runbook has been published.
7. In the AzureAutomationTutorial Runbook blade, verify that the status is Published, then click
Start, and then click Yes.
8. Wait until the job has been completed, and then click Output. A list of all resources is displayed.
Results: After this exercise, you will have understood how a Windows PowerShell® script is used to create
Azure resources, created an Azure Automation account, and used Azure Automation to stop a virtual
machine.