

Module 9
Designing and Implementing Stored Procedures
Contents:
Module Overview
Lesson 1: Introduction to Stored Procedures
Lesson 2: Working with Stored Procedures
Lesson 3: Implementing Parameterized Stored Procedures
Lesson 4: Controlling Execution Context
Lab: Designing and Implementing Stored Procedures
Module Review and Takeaways

Module Overview
This module describes the design and implementation of stored procedures.

Objectives
After completing this module, you will be able to:

- Understand what stored procedures are, and what benefits they have.
- Design, create, and alter stored procedures.
- Control the execution context of stored procedures.
- Implement stored procedures that use parameters.



Lesson 1
Introduction to Stored Procedures
Microsoft® SQL Server® database management software includes several built-in system stored
procedures, in addition to giving users the ability to create their own. In this lesson, you will learn about
the role of stored procedures and the potential benefits of using them. System stored procedures provide
a large amount of prebuilt functionality that you can use when you are building applications. Not all
Transact-SQL statements are allowed within a stored procedure.

Lesson Objectives
After completing this lesson, you will be able to:

- Describe the role of stored procedures.
- Identify the potential benefits of using stored procedures.
- Work with system stored procedures.
- Identify statements that are not permitted within the body of a stored procedure declaration.

What Is a Stored Procedure?


A stored procedure is a named collection of
Transact-SQL statements that is stored within the
database. Stored procedures offer a way of
encapsulating repetitive tasks; they support user-
declared variables, conditional execution, and
other powerful programming features.
Transact-SQL Code and Logic Reuse

When applications interact with SQL Server, they send commands to the server in one of two ways:
1. The application could send each batch of
Transact-SQL commands to the server to be
executed, and resend the same commands if the same function needs to be executed again later.

2. Alternatively, a stored procedure could be created at the server level to encapsulate all of the
Transact-SQL statements that are required.

Stored procedures are named, and are called by their name. The application can then execute the stored
procedure each time it needs that same functionality, rather than sending all of the individual statements
that would otherwise be required.

Stored Procedures
Stored procedures are similar to procedures, methods, and functions in high-level languages. They can
have input and output parameters, in addition to a return value.
A stored procedure can return rows of data; in fact, multiple rowsets can be returned from a single stored
procedure.

Stored procedures can be created in either Transact-SQL code or managed .NET code, and are run using
the EXECUTE statement.

Benefits of Stored Procedures


Using stored procedures offers several benefits
over issuing Transact-SQL code directly from an
application.

Security Boundary

Stored procedures can be part of a scheme that helps to increase application security. Users are
given permission to execute a stored procedure
without being given permission to access the
objects that the stored procedure accesses. For
example, you can give a user—or set of users via a
role—permission to execute a stored procedure
that updates a table without granting the user any
permissions to the underlying tables.

Modular Programming

Code reuse is important. Stored procedures help modular programming by allowing logic to be created
once and then reused many times, from many applications. Maintenance is easier because, if a change is
required, you often only need to change the procedure, not the application code. Changing a stored
procedure could avoid the need to change the data access logic in a group of applications.

Delayed Binding

You can create a stored procedure that accesses (or references) a database object that does not yet exist.
This can be helpful in simplifying the order in which database objects need to be created. This is known as
deferred name resolution.

Performance
Using stored procedures, rather than many lines of Transact-SQL code, can offer a significant reduction in
the level of network traffic.

Transact-SQL code needs to be compiled before it is executed. In many cases, when a stored procedure is
compiled, SQL Server will retain and reuse the query plan that it previously generated, avoiding the cost
of compiling the code.

Although you can reuse execution plans for ad-hoc Transact-SQL code, SQL Server favors the reuse of
stored procedure execution plans. Query plans for ad-hoc Transact-SQL statements are among the first
items to be removed from memory when necessary.

The rules that govern the reuse of query plans for ad-hoc Transact-SQL code are largely based on exactly
matching the query text. Any difference—for example, white space or casing—will cause a different query
plan to be created. The one exception is when the only difference is the equivalent of a parameter.

Overall, however, stored procedures have a much higher chance of achieving query plan reuse.

Working with System Stored Procedures


SQL Server includes a large amount of built-in
functionality in the form of system stored
procedures and system extended stored
procedures.

Types of System Stored Procedure

There are two types of system stored procedure:

1. System stored procedures.

2. System extended stored procedures.

Both these types of system stored procedure are supplied prebuilt within SQL Server. The core
difference between the two is that the code for system stored procedures is written in Transact-SQL, and
is supplied in the master database that is included in every SQL Server installation. The code for the
system extended stored procedures, however, is written in unmanaged native code, typically C++, and
supplied via a dynamic-link library (DLL). Since SQL Server 2005, the objects that the procedures access
are located in a hidden resource database rather than directly in the master database—but this has no
impact on the way they work.

Originally, there was a distinction in the naming of these stored procedures, where system stored
procedures had an sp_ prefix and system extended stored procedures had an xp_ prefix. Over time, the
need to maintain backward compatibility has caused a mixture of these prefixes to appear in both types
of procedure. Most system stored procedures still have an sp_ prefix, and most system extended stored
procedures still have an xp_ prefix, but there are exceptions to both of these rules.
System Stored Procedures

Unlike normal stored procedures, system stored procedures can be executed from within any database
without needing to specify the master database as part of their name. Typically, they are used for
administrative tasks that relate to configuring servers, databases, and objects, or for retrieving
information about them. System stored procedures are created within the sys schema. Examples of system
stored procedures are sys.sp_configure, sys.sp_addmessage, and sys.sp_executesql.

System Extended Stored Procedures

System extended stored procedures are used to extend the functionality of the server in ways that you
cannot achieve by using Transact-SQL code alone. Examples of system extended stored procedures are
sys.xp_dirtree, sys.xp_cmdshell, and sys.sp_trace_create. (Note how the last example here has an sp_
prefix.)

User Extended Stored Procedures

Creating user-defined extended stored procedures and attaching them to SQL Server is still possible but
the functionality is deprecated, and an alternative should be used where possible.
Extended stored procedures run directly within the memory space of SQL Server—this is not a safe place
for users to be executing code. User-defined extended stored procedures are well known to the SQL
Server product support group as a source of problems that prove difficult to resolve. Where possible, you
should use managed-code stored procedures instead of user-defined extended stored procedures.
Creating stored procedures using managed code is covered in Module 13: Implementing Managed Code
in SQL Server.

Statements Not Permitted in Stored Procedures


Not all Transact-SQL statements can be used
within a stored procedure. The following
statements are not permitted:

- USE databasename
- CREATE AGGREGATE
- CREATE DEFAULT
- CREATE RULE
- CREATE SCHEMA
- CREATE or ALTER FUNCTION
- CREATE or ALTER PROCEDURE
- CREATE or ALTER TRIGGER
- CREATE or ALTER VIEW
- SET PARSEONLY
- SET SHOWPLAN_ALL
- SET SHOWPLAN_TEXT
- SET SHOWPLAN_XML

Despite these restrictions, it is still possible for a stored procedure to access objects in another database.
To access objects in another database, reference them using their three- or four-part name, rather than
trying to switch databases with a USE statement.
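For example, the following sketch shows a procedure that reads from a table in another database by using a three-part name. The database, schema, and table names here are purely illustrative:

Referencing Another Database by Three-Part Name

CREATE PROCEDURE dbo.GetExternalOrderCount
AS
BEGIN
    SET NOCOUNT ON;
    -- Reference the object as database.schema.object instead of
    -- switching databases with USE, which is not permitted here.
    SELECT COUNT(*) AS OrderCount
    FROM SalesArchive.Sales.Orders;
END;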

Demonstration: Working with System Stored Procedures and Extended Stored Procedures
In this demonstration, you will see how to:
- Execute system stored procedures.

Demonstration Steps
1. Ensure that the 20762C-MIA-DC and 20762C-MIA-SQL virtual machines are running, and then log
on to 20762C-MIA-SQL as ADVENTUREWORKS\Student with the password Pa55w.rd.

2. Navigate to the folder D:\Demofiles\Mod09 and execute Setup.cmd as an administrator.

3. In the User Account Control dialog box, click Yes.


4. Start SQL Server Management Studio and connect to the MIA-SQL instance using Windows
authentication.

5. In SQL Server Management Studio, open the file D:\Demofiles\Mod09\Module09.ssmssln.


6. In Solution Explorer, in the Queries folder, double-click the 11 - Demonstration1A.sql script file.

7. Highlight the text under the comment Step 1 - Switch to the AdventureWorks database, and click
Execute.

8. Highlight the text under the comment Step 2 - Execute the sp_configure system stored
procedure, and click Execute.
9. Highlight the text under the comment Step 3 - Execute the xp_dirtree extended system stored
procedure, and click Execute.

10. Keep SQL Server Management Studio open for the next demo.
Question: The system stored procedure prefix (sp_) and the extended stored procedure
prefix (xp_) have become a little muddled over time. What does this say about the use of
prefixes when naming objects like stored procedures?

Lesson 2
Working with Stored Procedures
Now that you know why stored procedures are important, you need to understand the practicalities that
are involved in working with them.

Lesson Objectives
After completing this lesson, you will be able to:

- Create a stored procedure.
- Execute stored procedures.
- Use the WITH RESULT SETS clause.
- Alter a stored procedure.
- Drop a stored procedure.
- Identify stored procedure dependencies.
- Explain guidelines for creating stored procedures.
- Obfuscate stored procedure definitions.

Creating a Stored Procedure


You use the CREATE PROCEDURE Transact-SQL
statement to create a stored procedure.
CREATE PROCEDURE is commonly abbreviated to
CREATE PROC. You cannot replace a procedure
by using the CREATE PROC statement. In versions
of SQL Server before SQL Server 2016 Service Pack
1, you need to alter it explicitly by using an ALTER
PROC statement or by dropping it and then
recreating it. In versions of SQL Server including
and after SQL Server 2016 Service Pack 1, you can
use the CREATE OR ALTER command to create a
stored procedure, or alter it if it already exists.

The CREATE PROC statement must be the only one in the Transact-SQL batch. All statements from the AS
keyword until the end of the script or until the end of the batch (using a batch separator such as GO) will
become part of the body of the stored procedure.

Creating a stored procedure requires both the CREATE PROCEDURE permission in the current database
and the ALTER permission on the schema that the procedure is being created in. It is important to keep
connection settings such as QUOTED_IDENTIFIER and ANSI_NULLS consistent when you are working
with stored procedures. Stored procedure settings are taken from the settings for the session in which it
was created.
Stored procedures are always created in the current database with the single exception of stored
procedures that are created with a number sign (#) prefix in their name. The # prefix on a name indicates
that it is a temporary object—it is therefore created in the tempdb database and removed at the end of
the user's session.
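As a minimal sketch, the following batch creates a simple procedure in the Sales schema; the table that it queries is illustrative:

Creating a Stored Procedure

CREATE PROCEDURE Sales.GetOrderCount
AS
BEGIN
    SET NOCOUNT ON;
    SELECT COUNT(*) AS OrderCount
    FROM Sales.Orders;  -- illustrative table
END;
GO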

For more information about creating stored procedures, see Microsoft Docs:

Create a Stored Procedure


https://aka.ms/Hopurd

Debugging Stored Procedures


When you are working with stored procedures, a good practice is first to write and test the Transact-SQL
statements that you want to include in your stored procedure. Then, if you receive the results that you
expected, wrap the Transact-SQL statements in a CREATE PROCEDURE statement.

Note: Wrapping the body of a stored procedure with a BEGIN…END block is not required
but it is considered good practice. Note also that you can terminate the execution of a stored
procedure by executing a RETURN statement within the stored procedure.

Executing a Stored Procedure


You use the Transact-SQL EXECUTE statement to
execute stored procedures. EXECUTE is commonly
abbreviated to EXEC.

EXECUTE Statement
The EXECUTE statement is most commonly used
to execute stored procedures, but can also be
used to execute other objects such as dynamic
Structured Query Language (SQL) statements.
Using EXEC, you can execute system stored
procedures within the master database without
having to explicitly refer to that database.
Executing user stored procedures in another database requires that you use the three-part naming
convention. Executing user stored procedures in a schema other than your default schema requires that
you use the two-part naming convention.

Two-Part Naming on Referenced Objects


When you are creating stored procedures, use at least two-part names for referenced objects. If you refer
to a table by both its schema name and its table name, you avoid any ambiguity about which table you
are referring to, and you maximize the chance of SQL Server being able to reuse query execution plans.

If you use only the name of a table, SQL Server will first search in your default schema for the table. Then,
if it does not locate a table that has that name, it will search the dbo schema. This minimizes options for
query plan reuse for SQL Server because, until the moment when the stored procedure is executed, SQL
Server cannot tell which objects it needs, because different users can have different default schemas.

Two-Part Naming When Creating Stored Procedures


If you create a stored procedure by only supplying the name of the procedure (and not the schema
name), SQL Server will attempt to create the stored procedure in your default schema. Scripts that create
stored procedures in this way tend to be fragile because the location of the created stored procedure
would depend upon the default schema of the user who was executing the script.

Two-Part Naming When Executing Stored Procedures


When you execute a stored procedure, you should supply the name of both the schema and the stored
procedure. If you supply only the name of the stored procedure, SQL Server has to attempt to find the
stored procedure in several places.

If the stored procedure name starts with sp_ (not recommended for user stored procedures), SQL Server
will search locations in the following order, in an attempt to find the stored procedure:

- The sys schema in the master database.
- The default schema for the user who is executing the stored procedure.
- The dbo schema in the current database for the stored procedure.

Having SQL Server perform unnecessary steps to locate a stored procedure reduces performance.
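For example, assuming the illustrative Sales.GetOrderCount procedure shown earlier in this lesson, executing it with a two-part name avoids these extra lookups:

Executing with a Two-Part Name

EXEC Sales.GetOrderCount;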

For more information about executing a stored procedure, see Microsoft Docs:

Execute a Stored Procedure


https://aka.ms/O1i3nv

Altering a Stored Procedure


The Transact-SQL ALTER PROCEDURE statement
is used to replace an existing procedure. ALTER
PROCEDURE is commonly abbreviated to ALTER
PROC.

ALTER PROC
The main reason for using the ALTER PROC
statement is to retain any existing permissions on
the procedure while it is being changed. Users
might have been granted permission to execute
the procedure. However, if you drop the
procedure, and then recreate it, the permission will
be removed and would need to be granted again.

Note that the type of procedure cannot be changed using ALTER PROC. For example, a Transact-SQL
procedure cannot be changed to a managed-code procedure by using an ALTER PROCEDURE statement
or vice versa.

Connection Settings
The connection settings, such as QUOTED_IDENTIFIER and ANSI_NULLS, that will be associated with the
modified stored procedure will be those taken from the session that makes the change, not from the
original stored procedure—it is important to keep these consistent when you are making changes.

Complete Replacement
Note that, when you alter a stored procedure, you need to resupply any options (such as the WITH
ENCRYPTION clause) that were supplied while creating the procedure. None of these options are retained
and they are replaced by whatever options are supplied in the ALTER PROC statement.
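The following sketch alters the illustrative procedure from earlier in this lesson. Note that the ALTER PROC statement supplies the complete new definition, including any options that should be retained:

ALTER PROCEDURE

ALTER PROCEDURE Sales.GetOrderCount
AS
BEGIN
    SET NOCOUNT ON;
    -- The new body completely replaces the old one; permissions
    -- that were granted on the procedure are retained.
    SELECT COUNT(*) AS OrderCount
    FROM Sales.Orders;  -- illustrative table
END;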

Dropping a Stored Procedure


To remove a stored procedure from the database,
use the DROP PROCEDURE statement, commonly
abbreviated to DROP PROC.

You can obtain a list of procedures in the current database by querying the sys.procedures system view.

Dropping a system extended stored procedure is accomplished by executing the stored procedure sp_dropextendedproc.

Dropping a stored procedure requires either ALTER permission on the schema that the procedure is part of, or CONTROL permission on the procedure itself.
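A minimal sketch, using the illustrative procedure from earlier in this lesson:

Listing and Dropping Procedures

-- List the procedures in the current database.
SELECT SCHEMA_NAME(schema_id) AS SchemaName, name, create_date, modify_date
FROM sys.procedures;

-- Remove the procedure.
DROP PROCEDURE Sales.GetOrderCount;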
For more information about deleting a stored procedure, see Microsoft Docs:

Delete a Stored Procedure


https://aka.ms/Kxgolm

Stored Procedure Error Handling


However well you design a stored procedure,
errors can always occur. When you build error
handling into your stored procedures, you can be
sure that errors will be handled appropriately. The
TRY … CATCH construct is the best way to handle
errors within a stored procedure.

TRY … CATCH constructs


Use TRY … CATCH constructs to handle stored
procedure errors. This is made up of two blocks of
code—a TRY block that contains the code for the
stored procedure, and a CATCH block that
contains error handling code. If the code executes
without error, the code in the CATCH block will not run. However, if the stored procedure does generate
an error, then the code in the CATCH block can handle the error appropriately.

The TRY block starts with the keywords BEGIN TRY, and is finished with the keywords END TRY. The stored
procedure code goes in between BEGIN TRY and END TRY. Similarly, the CATCH block is started with the
keywords BEGIN CATCH, and is finished with the keywords END CATCH.

In this code example, a stored procedure is created to add a new store. It uses the TRY … CATCH construct
to catch any errors. In the event of an error, code within the CATCH block is executed; in this instance, it
returns details about the error.

An example of a stored procedure with error handling code:

The TRY … CATCH Construct


CREATE PROCEDURE Sales.NewStore
@SalesPersonID AS int,
@StoreName AS nvarchar(50)

AS
SET NOCOUNT ON;
BEGIN TRY
INSERT INTO Person.BusinessEntity (rowguid)
VALUES (DEFAULT);

DECLARE @BusinessEntityID int = SCOPE_IDENTITY();

INSERT INTO Sales.Store (BusinessEntityID, [Name], SalesPersonID)
VALUES (@BusinessEntityID, @StoreName, @SalesPersonID);

END TRY

BEGIN CATCH
SELECT ERROR_NUMBER() AS ErrorNumber,
ERROR_SEVERITY() AS ErrorSeverity,
ERROR_STATE() AS ErrorState,
ERROR_PROCEDURE() AS ErrorProcedure,
ERROR_LINE() AS ErrorLine,
ERROR_MESSAGE() AS ErrorMessage;
END CATCH

Note: The error functions shown in the example are only used within CATCH blocks. They
will return NULL if used outside a CATCH block.

Transaction Handling
You might need to manage transactions within a
stored procedure. Perhaps you want to ensure that
if one action fails, then all actions are rolled back.
Explicit transactions are managed using the BEGIN
TRANSACTION and COMMIT TRANSACTION
keywords.

In this code example, we create a table with two columns and a primary key constraint. We then
create a stored procedure to add rows to the
table. We want both inserts to succeed together,
or fail together. We do not want one insert to
succeed, and the other to fail. To achieve this, we
first create a TRY … CATCH block, as discussed in the previous topic. The two insert statements are
enclosed within an explicit transaction. If one fails, both will be rolled back.

Using transactions within a stored procedure:

Transactions
CREATE TABLE MyTable
(Col1 tinyint PRIMARY KEY, Col2 CHAR(3) NOT NULL);
GO

CREATE PROCEDURE MyStoredProcedure @PriKey tinyint, @CharCol char(3)
AS
BEGIN TRY
BEGIN TRANSACTION;
INSERT INTO MyTable VALUES (@PriKey, @CharCol)
INSERT INTO MyTable VALUES (@PriKey -1, @CharCol)
COMMIT TRANSACTION;
END TRY

BEGIN CATCH
ROLLBACK
END CATCH;
GO

If you execute the stored procedure with values that enable two rows to be successfully inserted, it will
execute successfully, and no transactions will be rolled back. For example:

EXEC MyStoredProcedure 1, 'abc'

However, if you execute the stored procedure with a value that causes one row to fail, then the catch
block will be invoked, and the complete transaction will be rolled back. For example:

EXEC MyStoredProcedure 2, 'xyz'

In the second instance, the primary key constraint will be violated, and no records will be inserted.

@@TRANCOUNT
The @@TRANCOUNT function keeps count of the number of transactions by incrementing each time a
BEGIN TRANSACTION is executed. It decrements by one each time a COMMIT TRANSACTION is executed.
Using the table created in the previous code example, this code fragment shows how @@TRANCOUNT
increments and decrements.

@@TRANCOUNT
SET XACT_ABORT ON;

SELECT @@TRANCOUNT;
BEGIN TRANSACTION;
SELECT @@TRANCOUNT;
INSERT MyTable VALUES (40, 'abc');
COMMIT TRANSACTION;
SELECT @@TRANCOUNT;
BEGIN TRANSACTION;
INSERT MyTable VALUES (41, 'xyz');
SELECT @@TRANCOUNT;
COMMIT TRANSACTION;
SELECT @@TRANCOUNT

SELECT * FROM MyTable;



Note: SET XACT_ABORT ON or OFF determines how SQL Server behaves when a statement
fails within a transaction. If SET XACT_ABORT is ON, then the entire transaction is rolled back.

Transactions are discussed in more detail in Module 17.

Stored Procedure Dependencies


Before you drop a stored procedure, you should
check for any other objects that are dependent
upon the stored procedure. SQL Server includes a
number of system stored procedures to help
identify stored procedure dependencies.

The sys.sql_expression_dependencies view replaces the previous use of the sp_depends system stored procedure. The sys.sql_expression_dependencies view provides a “one row per name” dependency on user-defined entities in the current database. sys.dm_sql_referenced_entities and sys.dm_sql_referencing_entities offer more targeted views over the data that the sys.sql_expression_dependencies view provides.
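As a sketch, the following queries list objects that reference the Sales.Store table used earlier in this lesson:

Querying Dependency Information

-- Named references recorded in the current database.
SELECT OBJECT_SCHEMA_NAME(referencing_id) AS ReferencingSchema,
       OBJECT_NAME(referencing_id) AS ReferencingObject
FROM sys.sql_expression_dependencies
WHERE referenced_entity_name = N'Store';

-- Entities that reference the object, via the targeted dynamic management function.
SELECT referencing_schema_name, referencing_entity_name
FROM sys.dm_sql_referencing_entities(N'Sales.Store', N'OBJECT');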

You will see an example of how these dependency views are used in the next demonstration.

Guidelines for Creating Stored Procedures


There are several important guidelines that you
should consider when you are creating stored
procedures.

Qualify Names Inside Stored Procedures


Earlier in this lesson, the importance of using at
least a two- or three-part naming convention was
discussed. This applies both to the creation of
stored procedures and to their execution. If your
stored procedure calls another stored procedure,
use the fully qualified name.

Keeping Consistent SET Options


The Database Engine saves the settings of both SET QUOTED_IDENTIFIER and SET ANSI_NULLS when a
Transact-SQL stored procedure is created or altered. These original settings are used when the stored
procedure is executed.

SET NOCOUNT ON
Ensure the first statement in your stored procedure is SET NOCOUNT ON. This will increase performance
by suppressing messages returned to the client following SELECT, INSERT, UPDATE, MERGE, and DELETE
statements.

Applying Consistent Naming Conventions


Avoid creating stored procedures that use sp_ or xp_ as a prefix. SQL Server uses the sp_ prefix to
designate system stored procedures and any name that you choose may conflict with a current or future
system procedure. An sp_ or xp_ prefix also affects the way SQL Server searches for the stored procedure.

It is important to have a consistent way of naming your stored procedures. There is no right or wrong
naming convention but you should decide on a method for naming objects and apply that method
consistently. You can enforce naming conventions on most objects by using Policy-Based Management or
DDL triggers. These areas are beyond the scope of this course.

Using @@nestlevel to See Current Nesting Level


Stored procedures are nested when one stored procedure calls another or executes managed code by
referencing a common language runtime (CLR) routine, type, or aggregate. You can nest stored
procedures and managed-code references up to 32 levels. You can use @@nestlevel to check the nesting
level of the current stored procedure execution.

Using Return Codes


Return codes are a little like OUTPUT parameters, but return status information about how your stored
procedure executed. Return codes are integers that you define to provide relevant information about
success, failure, or other possible outcomes during execution. Return codes are flexible, and useful in
some circumstances; however, it is good practice to document the meaning of return codes so that they
can be used consistently between systems and developers.
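The following sketch shows one way to define and consume return codes; the table and the meaning of each code are illustrative and should be documented for your system:

Using Return Codes

CREATE PROCEDURE Sales.CheckOrderExists
    @OrderID int
AS
BEGIN
    SET NOCOUNT ON;
    IF @OrderID IS NULL
        RETURN 1;  -- 1 = required parameter missing
    IF NOT EXISTS (SELECT 1 FROM Sales.Orders WHERE OrderID = @OrderID)
        RETURN 2;  -- 2 = order not found
    RETURN 0;      -- 0 = success
END;
GO

DECLARE @ReturnCode int;
EXEC @ReturnCode = Sales.CheckOrderExists @OrderID = 43659;
SELECT @ReturnCode AS ReturnCode;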

Keeping to One Procedure for Each Task


Avoid trying to write one stored procedure to address multiple requirements. Doing so limits the
possibilities for reuse and can hinder performance.

Obfuscating Stored Procedures


You can use SQL Server to obfuscate the definition
of stored procedures by using the WITH
ENCRYPTION clause. However, you must use this
with caution because it can make working with the
application more difficult and may not achieve the
required aims.

The encryption provided by the WITH ENCRYPTION clause is not particularly strong, and
is known to be relatively easy to defeat, because
the encryption keys are stored in known locations
within the encrypted text. There are both direct
methods and third-party tools that can reverse the
encryption provided by the WITH ENCRYPTION clause.
In addition, encrypted code is much harder to work with in terms of diagnosing and tuning performance
issues—so it is important you weigh up the benefits before using the WITH ENCRYPTION clause.
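A minimal sketch of the clause in use; after creation, the module definition is no longer visible through the catalog:

WITH ENCRYPTION

CREATE PROCEDURE dbo.GetConfidentialCalculation
WITH ENCRYPTION
AS
BEGIN
    SET NOCOUNT ON;
    SELECT 1 AS PlaceholderResult;  -- illustrative body
END;
GO

-- Returns NULL for a module that was created WITH ENCRYPTION.
SELECT OBJECT_DEFINITION(OBJECT_ID(N'dbo.GetConfidentialCalculation'));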

Demonstration: Stored Procedures


In this demonstration, you will see how to:

- Create, execute, and alter a stored procedure.

Demonstration Steps
1. In Solution Explorer, in the Queries folder, double-click the 21 - Demonstration2A.sql script file.

2. Highlight the code under the comment Step 1 - Switch to the AdventureWorks database, and
click Execute.

3. Highlight the code under the comment Step 2 - Create the GetBlueProducts stored procedure,
and click Execute.
4. Highlight the code under the comment Step 3 - Execute the GetBlueProducts stored procedure,
and click Execute.

5. Highlight the code under the comment Step 4 - Create the GetBlueProductsAndModels stored
procedure, and click Execute.

6. Highlight the code under the comment Step 5 - Execute the GetBlueProductsAndModels stored
procedure which returns multiple rowsets, and click Execute.

7. Highlight the code under the comment Step 6 - Alter the procedure because the 2nd query does
not show only blue products, and click Execute.

8. Highlight the code under the comment Step 7 - And re-execute the GetBlueProductsAndModels
stored procedure, and click Execute.
9. Highlight the code under the comment Step 8 - Query sys.procedures to see the list of
procedures, and click Execute.

10. Keep SQL Server Management Studio open for the next demo.

Check Your Knowledge

Question
Obfuscating the body of a stored procedure is best avoided, but when might you want to use this functionality?

Select the correct answer.
- When transferring the stored procedure between servers.
- When emailing the stored procedure code to a colleague.
- When the stored procedure takes input parameters that should not be disclosed.
- When the stored procedure contains intellectual property that needs protecting.

Lesson 3
Implementing Parameterized Stored Procedures
The stored procedures that you have seen in this module have not involved parameters. They have
produced their output without needing any input from the user and they have not returned any values,
apart from the rows that they have returned. Stored procedures are more flexible when you include
parameters as part of the procedure definition, because you can create more generic application logic.
Stored procedures can use both input and output parameters, and return values.

Although the reuse of query execution plans is desirable in general, there are situations where this reuse is
detrimental. You will see situations where this can occur and consider options for workarounds to avoid
the detrimental outcomes.

Lesson Objectives
After completing this lesson, you will be able to:
- Parameterize stored procedures.
- Use input parameters.
- Use output parameters.
- Explain the issues that surround parameter sniffing and performance, and describe the potential workarounds.

Working with Parameterized Stored Procedures


Parameterized stored procedures enable a much
higher level of code reuse. They contain three
major components: input parameters, output
parameters, and return values.

Input Parameters
Parameters are used to exchange data between
stored procedures and the application or tool that
called the stored procedure. They enable the caller
to pass a data value to the stored procedure. To
define a stored procedure that accepts input
parameters, you declare one or more variables as
parameters in the CREATE PROCEDURE
statement. You will see an example of this in the next topic.

Output Parameters
Output parameters enable the stored procedure to pass a data value or a cursor variable back to the
caller. To use an output parameter within Transact-SQL, you must specify the OUTPUT keyword in both
the CREATE PROCEDURE statement and the EXECUTE statement.

Return Values
Every stored procedure returns an integer return code to the caller. If the stored procedure does not
explicitly set a value for the return code, the return code is 0 if no error occurs; otherwise, a negative value
is returned.

Return values are commonly used to return a status result or an error code from a procedure and are sent
by the Transact-SQL RETURN statement.
Although you can send a value that is related to business logic via a RETURN statement, in general, you
should use output parameters to generate values rather than the RETURN value.

For more information about using parameters with stored procedures, see Microsoft Docs:
Parameters
https://aka.ms/Ai6kha

Using Input Parameters


Stored procedures can accept input parameters in
a similar way to how parameters are passed to
functions, methods, or subroutines in higher-level
languages.

Stored procedure parameters must be prefixed with the @ symbol and must have a data type specified. The data type will be checked when a call is made.

There are two ways to call a stored procedure using input parameters. One is to pass the parameters as a list in the same order as in the CREATE PROCEDURE statement; the other is to pass a parameter name and value pair. You cannot combine these two options in a single EXEC call.

Default Values
You can provide default values for a parameter where appropriate. If a default is defined, a user can
execute the stored procedure without specifying a value for that parameter.
An example of a default parameter value for a stored procedure parameter:

Default Values
CREATE PROCEDURE Sales.OrdersByDueDateAndStatus
@DueDate datetime,
@Status tinyint = 5
AS

Two parameters have been defined (@DueDate and @Status). The @DueDate parameter has no default
value and must be supplied when the procedure is executed. The @Status parameter has a default value
of 5. If a value for the parameter is not supplied when the stored procedure is executed, a value of 5 will
be used.

Validating Input Parameters


As a best practice, validate all incoming parameter values at the beginning of a stored procedure to trap
missing and invalid values early. This might include such things as checking whether the parameter is
NULL. Validating parameters early avoids doing substantial work in the procedure that then has to be
rolled back due to an invalid parameter value.
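A sketch of early validation; the error message and the return code value are illustrative choices:

Validating Input Parameters

CREATE PROCEDURE Sales.OrdersByDueDate
    @DueDate datetime
AS
BEGIN
    SET NOCOUNT ON;
    IF @DueDate IS NULL
    BEGIN
        RAISERROR (N'@DueDate must be supplied and cannot be NULL.', 16, 1);
        RETURN 1;  -- illustrative code meaning "invalid parameter"
    END;
    SELECT OrderID, DueDate
    FROM Sales.Orders  -- illustrative table
    WHERE DueDate = @DueDate;
END;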

Executing a Stored Procedure with Input Parameters


Executing a stored procedure that has input parameters is simply a case of providing the parameter values
when calling the stored procedure.

This is an example of the previous stored procedure with one input parameter supplied and one parameter using the default value:

Executing a Stored Procedure That Has Input Parameters with a Default Value.
EXEC Sales.OrdersByDueDateAndStatus '20050713';

This execution supplies a value for @DueDate; no value is supplied for @Status, so its default value of 5 is used. Note that the names of the parameters have not been mentioned. SQL Server knows which parameter is which by its position in the parameter list.
This is an example of the stored procedure being executed with values supplied for both parameters, by position:

Executing a Stored Procedure with Input Parameters


EXEC Sales.OrdersByDueDateAndStatus '20050613',8;

In this case, the stored procedure is being called with values for both parameters; SQL Server matches each value to a parameter by its position in the list.
In this example, the results will be the same, even though they are in a different order, because the parameters are defined by name:

Identifying Parameters by Name


EXEC Sales.OrdersByDueDateAndStatus @Status = 5,
@DueDate = '20050713';

Using Output Parameters


Output parameters are declared and used in a
similar way to input parameters, but output
parameters have a few special requirements.

Requirements for Output Parameters


- You must specify the OUTPUT keyword when you are declaring an output parameter in a stored procedure.
- You must specify the OUTPUT keyword in the list of parameters that are passed with the EXEC statement.

The code example shows the declaration of an OUTPUT parameter:

Output Parameter Declaration


CREATE PROC Sales.GetOrderCountByDueDate
@DueDate datetime, @OrderCount int OUTPUT
AS

In this case, the @DueDate parameter is an input parameter and the @OrderCount parameter has been
specified as an output parameter. Note that, in SQL Server, there is no true equivalent of a .NET output
parameter. SQL Server OUTPUT parameters are really input/output parameters.

To execute a stored procedure with output parameters you must first declare variables to hold the
parameter values. You then execute the stored procedure and retrieve the OUTPUT parameter value by
selecting the appropriate variable. The next code example shows this.

This code example shows how to call a stored procedure with input and output parameters:

Stored Procedure with Input and Output Parameters


DECLARE @DueDate datetime = '20050713';
DECLARE @OrderCount int;
EXEC Sales.GetOrderCountByDueDate @DueDate, @OrderCount OUTPUT;
SELECT @OrderCount;

In the EXEC call, note that the @OrderCount parameter is followed by the OUTPUT keyword. If you do not
specify the output parameter in the EXEC statement, the stored procedure would still execute as normal,
including preparing a value to return in the output parameter. However, the output parameter value
would not be copied back into the @OrderCount variable and you would not be able to retrieve the
value. This is a common bug when working with output parameters.

Parameter Sniffing and Performance


When a stored procedure is executed, it is usually
good to be able to reuse query plans.

SQL Server attempts to reuse query execution plans from one execution of a stored procedure to
the next and, although this is mostly helpful, in
some circumstances a stored procedure may
benefit from a completely different execution plan
for different sets of parameters.

This problem is often referred to as a parameter-sniffing problem, and SQL Server provides a
number of ways to deal with it. Note that
parameter sniffing only applies to parameters, not
to variables within the batch. The code for these looks very similar, but variable values are not “sniffed” at
all and this can lead to poor execution plans.

WITH RECOMPILE
You can add a WITH RECOMPILE option when you are declaring a stored procedure. This causes the
procedure to be recompiled each time it is executed.
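A sketch of the option, reusing the Production.Product query that appears in the OPTIMIZE FOR example later in this topic; recompiling on every execution trades CPU time for a plan suited to the current parameter values:

WITH RECOMPILE

CREATE PROCEDURE dbo.GetProductNamesRecompile
    @ProductIDLimit int
WITH RECOMPILE  -- a fresh plan is compiled on every execution and not cached
AS
BEGIN
    SELECT ProductID, Name
    FROM Production.Product
    WHERE ProductID < @ProductIDLimit;
END;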

sp_recompile System Stored Procedure


If you call sp_recompile, any existing plans for the stored procedure that is passed to it will be marked as
invalid and the procedure will be recompiled next time it is executed. You can also pass the name of a
table or view to this procedure. In that case, all existing plans that reference the object will be invalidated
and recompiled the next time they are executed.

EXEC WITH RECOMPILE


If you add WITH RECOMPILE to the EXEC statement, SQL Server will recompile the procedure before
running it and will not store the resulting plan. In this case, the original plan would be preserved and can
be reused later.

OPTIMIZE FOR
You use the OPTIMIZE FOR query hint to specify the value of a parameter that should be assumed when
compiling the procedure, regardless of the actual value of the parameter.

An example of the OPTIMIZE FOR query hint is shown in the following code example:

OPTIMIZE FOR
CREATE PROCEDURE dbo.GetProductNames
    @ProductIDLimit int
AS
BEGIN
    SELECT ProductID, Name
    FROM Production.Product
    WHERE ProductID < @ProductIDLimit
    OPTION (OPTIMIZE FOR (@ProductIDLimit = 1000));
END;

Question: What is the main advantage of creating parameterized stored procedures over
nonparameterized stored procedures?

Lesson 4
Controlling Execution Context
Stored procedures normally execute in the security context of the user who is calling the procedure.
Provided that a chain of ownership extends from the stored procedure to the objects that are referenced, the
user can execute the procedure without the need for permissions on the underlying objects. Ownership-
chaining issues with stored procedures are identical to those for views. Sometimes you need more precise
control over the security context in which the procedure is executing.

Lesson Objectives
After completing this lesson, you will be able to:

- Control execution context.
- Use the EXECUTE AS clause.
- View the execution context.

Controlling Execution Context


The security context in which a stored procedure
executes is referred to as its execution context.
This context is used to establish the identity
against which permissions to execute statements
or perform actions are checked.

Execution Contexts
A login token and a user token represent an
execution context. The tokens identify the primary
and secondary principals against which
permissions are checked, and the source that is
used to authenticate the token. A login that
connects to an instance of SQL Server has one
login token and one or more user tokens, depending on the number of databases to which the account
has access.

User and Login Security Tokens


A security token for a user or login contains the following:
- One server or database principal as the primary identity.
- One or more principals as secondary identities.
- Zero or more authenticators.
- The privileges and permissions of the primary and secondary identities.

A login token is valid across the instance of SQL Server. It contains the primary and secondary identities
against which server-level permissions and any database-level permissions that are associated with these
identities are checked. The primary identity is the login itself. The secondary identity includes permissions
that are inherited from roles and groups.

A user token is valid only for a specific database. It contains the primary and secondary identities against
which database-level permissions are checked. The primary identity is the database user itself. The
secondary identity includes permissions that are inherited from database roles. User tokens do not contain
server-role memberships and do not honor the server-level permissions that are granted to the identities
in the token, including those that are granted to the server-level public role.

Controlling Security Context


Although the default behavior of execution contexts is usually appropriate, there are times when you
need to execute within a different security context. This can be achieved by adding the EXECUTE AS clause
to the stored procedure definition.

The EXECUTE AS Clause


The EXECUTE AS clause sets the execution context
of modules such as stored procedures. It is useful
when you need to override the default security
context.

Explicit Impersonation
SQL Server supports the ability to impersonate
another principal, either explicitly by using the
stand-alone EXECUTE AS statement, or implicitly
by using the EXECUTE AS clause on modules.

You can use the stand-alone EXECUTE AS statement to impersonate server-level principals,
or logins, by using the EXECUTE AS LOGIN statement. You can also use the stand-alone EXECUTE AS
statement to impersonate database-level principals, or users, by using the EXECUTE AS USER statement.

To execute as another user, you must first have IMPERSONATE permission on that user. Any login in the
sysadmin role has IMPERSONATE permission on all users.

Implicit Impersonation
You can perform implicit impersonations by using the WITH EXECUTE AS clause on modules to
impersonate the specified user or login at the database or server level. This impersonation depends on
whether the module is a database-level module, such as a stored procedure or function, or a server-level
module, such as a server-level trigger.

When you impersonate a principal by using the EXECUTE AS LOGIN statement or within a server-scoped
module by using the EXECUTE AS clause, the scope of the impersonation is server-wide. This means that,
after the context switch, you can access any resource within the server on which the impersonated login
has permissions.
When you impersonate a principal by using the EXECUTE AS USER statement or within a database-scoped
module by using the EXECUTE AS clause, the scope of impersonation is restricted to the database. This
means that references to objects that are outside the scope of the database will return an error.
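As a sketch, the following procedure always runs in its owner's context, regardless of the caller; it reuses the Sales.Store table from earlier in this module:

WITH EXECUTE AS

CREATE PROCEDURE Sales.UpdateStoreName
    @BusinessEntityID int,
    @StoreName nvarchar(50)
WITH EXECUTE AS OWNER
AS
BEGIN
    SET NOCOUNT ON;
    -- Callers need EXECUTE permission on the procedure, but no
    -- permissions on Sales.Store itself.
    UPDATE Sales.Store
    SET [Name] = @StoreName
    WHERE BusinessEntityID = @BusinessEntityID;
END;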

Viewing Execution Context


Should you need to programmatically query the
current security context details, you can use the
sys.login_token and sys.user_token system views
to obtain the required information.

sys.login_token System View


The sys.login_token system view shows all tokens
that are associated with the login. This includes
the login itself and the roles of which the user is a
member.

sys.user_token System View


The sys.user_token system view shows all tokens
that are associated with the user within the database.
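A minimal sketch of querying both views, before and after an explicit context switch; the user name is illustrative:

Viewing Tokens

SELECT name, type, usage FROM sys.login_token;
SELECT name, type, usage FROM sys.user_token;

-- Switch context to another database user (requires IMPERSONATE
-- permission), re-examine the tokens, and then revert.
EXECUTE AS USER = N'SecureUser';
SELECT name, type, usage FROM sys.user_token;
REVERT;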

Demonstration: Viewing Execution Context


In this demonstration, you will see how to:
- View and change the execution context.

Demonstration Steps
1. In Solution Explorer, expand the Queries folder and then double-click the 31 -
Demonstration3A.sql script file.

2. Highlight the code under the comment Step 1 - Open a new query window to the tempdb
database, and click Execute.
3. Highlight the code under the comment Step 2 - Create a stored procedure that queries
sys.login_token and sys.user_token, and click Execute.

4. Highlight the code under the comment Step 3 - Execute the stored procedure and review the
rowsets returned, and click Execute.

5. Highlight the code under the comment Step 4 - Use the EXECUTE AS statement to change
context, and click Execute.

6. Highlight the code under the comment Step 5 - Try to execute the procedure. Why does it not work?, click Execute, and note the error message.
7. Highlight the code under the comment Step 6 - Revert to the previous security context, and click
Execute.

8. Highlight the code under the comment Step 7 - Grant permission to SecureUser to execute the
procedure, and click Execute.

9. Highlight the code under the comment Step 8 - Now try again and note the output, and click
Execute.
10. Highlight the code under the comment Step 9 - Alter the procedure to execute as owner, and
click Execute.

11. Highlight the code under the comment Step 10 - Execute as SecureUser again and note the
difference, and click Execute.

12. Highlight the code under the comment Step 11 - Drop the procedure, and click Execute.

13. Close SQL Server Management Studio without saving any changes.

Check Your Knowledge

Question
What permission is needed to EXECUTE AS another login or user?

Select the correct answer.
- sysadmin
- IMPERSONATE
- TAKE OWNERSHIP

Lab: Designing and Implementing Stored Procedures


Scenario
You need to create a set of stored procedures to support a new reporting application. The procedures will
be created within a new Reports schema.

Objectives
After completing this lab, you will be able to:
- Create a stored procedure.
- Change the execution context of a stored procedure.
- Create a parameterized stored procedure.

Estimated Time: 45 minutes


Virtual machine: 20762C-MIA-SQL

User name: ADVENTUREWORKS\Student

Password: Pa55w.rd

Exercise 1: Create Stored Procedures


Scenario
In this exercise, you will create two stored procedures to support one of the new reports.

Supporting Documentation
Stored Procedure: Reports.GetProductColors

Input Parameters: None

Output Parameters: None

Output Columns: Color (from Marketing.Product)

Notes: Colors should not be returned more than once in the output. NULL values
should not be returned.

Stored Procedure: Reports.GetProductsAndModels

Input Parameters: None

Output Parameters: None

Output Columns: ProductID, ProductName, ProductNumber, SellStartDate, SellEndDate and Color (from Marketing.Product), ProductModelID (from Marketing.ProductModel), EnglishDescription, FrenchDescription, ChineseDescription.

Output Order: ProductID, ProductModelID.

Notes: For descriptions, return the Description column from the Marketing.ProductDescription table for the appropriate language. The LanguageID for English is “en”, for French is “fr” and for Chinese is “zh-cht”. If no specific language description is available, return the invariant language description if it is present. The LanguageID for the invariant language is a blank string ''. Where neither the specific language nor invariant language descriptions exist, return the ProductName instead.

The main tasks for this exercise are as follows:

1. Prepare the Lab Environment

2. Review the Reports.GetProductColors Stored Procedure Specification


3. Design, Create and Test the Reports.GetProductColors Stored Procedure

4. Review the Reports.GetProductsAndModels Stored Procedure Specification

5. Design, Create and Test the Reports.GetProductsAndModels Stored Procedure

Task 1: Prepare the Lab Environment


1. Ensure that the 20762C-MIA-DC and 20762C-MIA-SQL virtual machines are both running, and then
log on to 20762C-MIA-SQL as ADVENTUREWORKS\Student with the password Pa55w.rd.
2. In the D:\Labfiles\Lab09\Starter folder, right-click Setup.cmd, and then click Run as
administrator.

3. In the User Account Control dialog box, click Yes, and wait for the script to finish.

Task 2: Review the Reports.GetProductColors Stored Procedure Specification
- Review the design requirements in the Exercise Scenario for Reports.GetProductColors.

Task 3: Design, Create and Test the Reports.GetProductColors Stored Procedure
- Design and implement the stored procedure in accordance with the design specifications.

Task 4: Review the Reports.GetProductsAndModels Stored Procedure Specification
- Review the supplied design requirements in the supporting documentation in the Exercise Scenario for Reports.GetProductsAndModels.

Task 5: Design, Create and Test the Reports.GetProductsAndModels Stored Procedure
- Design and implement the stored procedure in accordance with the design specifications.

Results: After completing this exercise, you will have created and tested two stored procedures, Reports.GetProductColors and Reports.GetProductsAndModels.

Exercise 2: Create Parameterized Stored Procedures


Scenario
In this exercise, you will create a stored procedure to support one of the new reports.

Supporting Documentation
Stored Procedure Reports.GetProductsByColor

Input parameters @Color (same data type as the Color column in the Production.Product
table).

Output parameters None

Output columns ProductID, ProductName, ListPrice (returned as a column named Price), Color, Size and SizeUnitMeasureCode (returned as a column named UnitOfMeasure) (from Production.Product).

Output order ProductName

Notes The procedure should return products that have no Color if the
parameter is NULL.

The main tasks for this exercise are as follows:


1. Review the Reports.GetProductsByColor Stored Procedure Specification

2. Design, Create and Test the Reports.GetProductsByColor Stored Procedure

Task 1: Review the Reports.GetProductsByColor Stored Procedure Specification
- Review the design requirements in the Exercise Scenario for Reports.GetProductsByColor.

Task 2: Design, Create and Test the Reports.GetProductsByColor Stored Procedure
1. Design and implement the stored procedure.
2. Execute the stored procedure.

Note: Ensure that approximately 26 rows are returned for blue products. Ensure that
approximately 248 rows are returned for products that have no color.

Results: After completing this exercise, you will have created the GetProductsByColor parameterized stored procedure.



Exercise 3: Change Stored Procedure Execution Context


Scenario
In this exercise, you will alter the stored procedures to use a different execution context.

The main tasks for this exercise are as follows:

1. Alter the Reports.GetProductColors Stored Procedure to Execute As OWNER

2. Alter the Reports.GetProductsAndModels Stored Procedure to Execute As OWNER.

3. Alter the Reports.GetProductsByColor Stored Procedure to Execute As OWNER

Task 1: Alter the Reports.GetProductColors Stored Procedure to Execute As OWNER
- Alter the stored procedure Reports.GetProductColors so that it executes as owner.

Task 2: Alter the Reports.GetProductsAndModels Stored Procedure to Execute As OWNER
- Alter the stored procedure Reports.GetProductsAndModels so that it executes as owner.

Task 3: Alter the Reports.GetProductsByColor Stored Procedure to Execute As OWNER
- Alter the stored procedure Reports.GetProductsByColor so that it executes as owner.

Results: After completing this exercise, you will have altered the three stored procedures created in earlier
exercises, so that they run as owner.

Module Review and Takeaways


Best Practice:
- Include the SET NOCOUNT ON statement in your stored procedures immediately after the AS keyword. This improves performance.
- While it is not mandatory to enclose Transact-SQL statements within a BEGIN…END block in a stored procedure, it is good practice and can help make stored procedures more readable.
- Reference objects in stored procedures using a two- or three-part naming convention. This reduces the processing that the database engine needs to perform.
- Avoid using SELECT * within a stored procedure even if you need all columns from a table. Specifying the column names explicitly reduces the chance of issues, should columns be added to a source table.

Module 10
Designing and Implementing User-Defined Functions
Contents:
Module Overview
Lesson 1: Overview of Functions
Lesson 2: Designing and Implementing Scalar Functions
Lesson 3: Designing and Implementing Table-Valued Functions
Lesson 4: Considerations for Implementing Functions
Lesson 5: Alternatives to Functions
Lab: Designing and Implementing User-Defined Functions
Module Review and Takeaways

Module Overview
Functions are routines that you use to encapsulate frequently performed logic. Rather than having to
repeat the function logic in many places, code can call the function. This makes code more maintainable,
and easier to debug.
In this module, you will learn to design and implement user-defined functions (UDFs) that enforce
business rules or data consistency. You will also learn how to modify and maintain existing functions.

Objectives
After completing this module, you will be able to:

- Describe different types of functions.
- Design and implement scalar functions.
- Design and implement table-valued functions (TVFs).
- Describe considerations for implementing functions.
- Describe alternatives to functions.



Lesson 1
Overview of Functions
Functions are routines that consist of one or more Transact-SQL statements that you can use to
encapsulate code for reuse. A function takes zero or more input parameters and returns either a scalar
value or a table. Functions do not support output parameters but do return results, either as a single value
or as a table.

This lesson provides an overview of UDFs and system functions.

Lesson Objectives
After completing this lesson, you will be able to:

- Describe different types of functions.
- Use system functions.

Types of Functions
Most high-level programming languages offer
functions as blocks of code that are called by name
and can process input parameters. Microsoft® SQL
Server® offers three types of functions: scalar
functions, TVFs, and system functions.
You can create two types of TVFs: inline TVFs and
multistatement TVFs.

Note: Functions cannot modify data or insert rows into a database.

Scalar Functions
Scalar functions return a single data value of the type that is defined in a RETURNS clause. An example of
a scalar function would be one that extracts the protocol from a URL. From the string
“http://www.microsoft.com”, the function would return the string “http”.

Inline Table-Valued Functions


An inline TVF returns a table that is the result of a single SELECT statement. This is similar to a view, but an
inline TVF is more flexible, in that parameters can be passed to the SELECT statement within the function.

For example, if a table holds details of sales for an entire country, you could create individual views to
return details of sales for particular states. Alternatively, you could write an inline TVF that takes the state
code or ID as a parameter, and returns the sales data that matches the parameter. In this way, you would
only need a single function to provide details for all states, rather than a separate view for each state.
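
The following sketch outlines this pattern. The Sales.Orders table and its StateCode column are assumed
names used only for illustration:

Inline TVF as a Parameterized View
CREATE FUNCTION Sales.GetSalesForState (@StateCode char(2))
RETURNS TABLE
AS
RETURN (SELECT o.OrderID, o.OrderDate, o.TotalDue
        FROM Sales.Orders AS o
        WHERE o.StateCode = @StateCode);
GO

-- One function replaces a separate view for each state:
SELECT * FROM Sales.GetSalesForState('WA');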

Multistatement Table-Valued Functions


A multistatement TVF returns a table derived from one or more Transact-SQL statements. It is similar to a
stored procedure. Multistatement TVFs are created for the same reasons as inline TVFs, but are used when
the logic that the function needs to implement is too complex to be expressed in a single SELECT
statement. You can call them from within a FROM clause.

System Functions
System functions are provided with SQL Server to perform a variety of operations. You cannot modify
them. System functions are described in the next topic.

For more details about the restrictions and usage of UDFs, see Microsoft Docs:

Create User-Defined Functions (Database Engine)


http://aka.ms/fuhvvs

System Functions
SQL Server has a wide variety of system functions
that you can use in queries to return data or to
perform operations on data. System functions are
also known as built-in functions.

 Note: Virtual functions provided in earlier versions of SQL Server had names with a fn_
prefix. These are now located in the sys schema.

Scalar Functions
Most system functions are scalar functions. They
provide the functionality that is commonly provided by functions in other high level languages, such as
operations on data types (including strings and dates and times) and conversions between data types. SQL
Server provides a library of mathematical and cryptographic functions. Other functions provide details of
the configuration of the system, and its security.

Rowset Functions
These return objects that can be used instead of Transact-SQL table reference statements. For example,
OPENJSON is a SQL Server function that can be used to import JSON into SQL Server or transform JSON
to a relational format.
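
For example, the following sketch converts a JSON array into rows by using OPENJSON with a WITH
clause. It assumes SQL Server 2016 or later with a database compatibility level of 130 or higher; the JSON
property names are illustrative:

OPENJSON
DECLARE @json nvarchar(max) =
    N'[{"OrderID":1,"Total":59.99},{"OrderID":2,"Total":120.00}]';

SELECT OrderID, Total
FROM OPENJSON(@json)
WITH (OrderID int '$.OrderID', Total decimal(10,2) '$.Total');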

Aggregate Functions
Aggregates such as MIN, MAX, AVG, SUM, and COUNT perform calculations across groups of rows. Many
of these functions automatically ignore NULL values.
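
The following query illustrates this behavior, assuming the AdventureWorks Production.Product table, in
which the Weight column allows NULLs:

Aggregates and NULL Values
SELECT COUNT(*)      AS AllProducts,        -- counts every row
       COUNT(Weight) AS ProductsWithWeight, -- ignores NULL weights
       AVG(Weight)   AS AvgWeight           -- averages non-NULL weights only
FROM Production.Product;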

Ranking Functions
Functions such as ROW_NUMBER, RANK, DENSE_RANK, and NTILE perform windowing operations on rows
of data.
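
For example, the following query, assuming the AdventureWorks Sales.SalesOrderHeader table, numbers
each customer's orders from most recent to oldest:

ROW_NUMBER
SELECT soh.CustomerID, soh.SalesOrderID,
       ROW_NUMBER() OVER (PARTITION BY soh.CustomerID
                          ORDER BY soh.OrderDate DESC) AS OrderRecency
FROM Sales.SalesOrderHeader AS soh;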

For more information about system functions that return values, settings and objects, see Microsoft Docs:

System Functions (Transact-SQL)


http://aka.ms/ouqysl

For more information about different types of system functions, see Microsoft Docs:
What are the SQL database functions?
http://aka.ms/jw8w5j

Apart from data type and date and time functions, which SQL Server functions have you used?
Responses will vary, based on experience.

Check Your Knowledge


Question

Which of these is a ranking function?

Select the correct answer.

OPENROWSET

ROWCOUNT_BIG

GROUPING_ID

ROW_NUMBER

OPENXML

Lesson 2
Designing and Implementing Scalar Functions
You have seen that functions are routines that consist of one or more Transact-SQL statements that you
can use to encapsulate code for reuse—and that functions can take zero or more input parameters, and
return either scalar values or tables.

This lesson provides an overview of scalar functions and explains why and how you use them, in addition
to explaining the syntax for creating them.

Lesson Objectives
After completing this lesson, you will be able to:

 Describe a scalar function.

 Create scalar functions.

 Explain deterministic and nondeterministic functions.

What Is a Scalar Function?


You use scalar functions to return information from
a database. A scalar function returns a single data
value of the type that is defined in a RETURNS
clause.

Scalar Functions
Scalar functions are created using the CREATE
FUNCTION statement. The body of a function is
defined within a BEGIN…END block. The function
body contains the series of Transact-SQL statements
that return the value.
Consider the function definition in the following
code example:

CREATE FUNCTION
CREATE FUNCTION dbo.ExtractProtocolFromURL
( @URL nvarchar(1000))
RETURNS nvarchar(1000)
AS BEGIN
RETURN CASE WHEN CHARINDEX(N':',@URL,1) >= 1
THEN SUBSTRING(@URL,1,CHARINDEX(N':',@URL,1) - 1)
END;
END;

Note: Note that the body of the function consists of a single RETURN statement that is
wrapped in a BEGIN…END block.

You can use the function in the following code example as an expression, wherever a single value could
be used:

Using a Function As an Expression


SELECT dbo.ExtractProtocolFromURL(N'http://www.microsoft.com');
IF (dbo.ExtractProtocolFromURL(@URL) = N'http')
...

You can also implement scalar functions in managed code. Managed code will be discussed in Module 13:
Implementing Managed Code in SQL Server. The allowable return values for scalar functions differ
between functions that are defined in Transact-SQL and functions that are defined by using managed
code.

Creating Scalar Functions


User-defined functions are created by using the
CREATE FUNCTION statement, modified by using
the ALTER FUNCTION statement, and removed by
using the DROP FUNCTION statement. Even
though you must wrap the body of the function
(apart from inline functions) in a BEGIN … END
block, CREATE FUNCTION must be the only
statement in the batch.
In SQL Server 2016 Service Pack 1 and later, you can
use the CREATE OR ALTER statement to add a
function, or update its definition if the function
already exists.

Note: Altering a function retains any permissions already associated with it.
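
For example, the following sketch creates a simple function if it does not already exist, or replaces its
definition (while retaining its permissions) if it does:

CREATE OR ALTER
CREATE OR ALTER FUNCTION dbo.AddInteger
(@FirstValue int, @SecondValue int)
RETURNS int
AS BEGIN
    RETURN @FirstValue + @SecondValue;
END;
GO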

Scalar User-Defined Functions (UDFs)


You use scalar functions to return information from a database. A scalar function returns a single data
value of the type defined in a RETURNS clause. The body of the function, which is defined in a
BEGIN…END block, contains the series of Transact-SQL statements that return the value.

Guidelines
Consider the following guidelines when you create scalar UDFs:

 Make sure that you use two-part naming for the function and for all database objects that the
function references.

 Avoid Transact-SQL errors that lead to a statement being canceled and the process continuing with
the next statement in the module (such as within triggers or stored procedures) because they are
treated differently inside a function. In functions, such errors cause the execution of the function to
stop.

Side Effects
A function that modifies the underlying database is considered to have side effects. In SQL Server,
functions are not permitted to have side effects. You cannot change data in a database within a function;
you cannot call a stored procedure; and you cannot execute dynamic Structured Query Language
(SQL) code.

For more information about the create function, see Microsoft Docs:

Create Function (Transact-SQL)


http://aka.ms/jh54s1

This is an example of a CREATE FUNCTION statement. This function calculates the area of a rectangle
when four coordinates are entered. Note the use of the ABS function, a built-in mathematical function
that returns the absolute (positive) value of a given input.

Example of the CREATE FUNCTION statement to create a scalar UDF:

CREATE FUNCTION
CREATE FUNCTION dbo.RectangleArea
(@X1 float, @Y1 float, @X2 float, @Y2 float)
RETURNS float
AS BEGIN
RETURN ABS(@X1 - @X2) * ABS(@Y1 - @Y2);
END;

Deterministic and Nondeterministic Functions


Both built-in functions and UDFs fall into one of
two categories: deterministic and nondeterministic.
This distinction is important because it determines
where you can use a function. For example, you
cannot use a nondeterministic function in the
definition of a calculated column.

Deterministic Functions
A deterministic function is one that will always
return the same result when it is provided with the
same set of input values for the same database
state.

Consider the function definition in the following code example:

Deterministic Function
CREATE FUNCTION dbo.AddInteger
(@FirstValue int, @SecondValue int)
RETURNS int
AS BEGIN
RETURN @FirstValue + @SecondValue;
END;
GO

Every time the function is called with the same two integer values, it will return exactly the same result.

Nondeterministic Functions
A nondeterministic function is one that may return different results for the same set of input values each
time it is called, even if the database remains in the same state. Date and time functions are examples of
nondeterministic functions.

Consider the function in the following code example:

Nondeterministic Function
CREATE FUNCTION dbo.CurrentUTCTimeAsString()
RETURNS varchar(40)
AS BEGIN
RETURN CONVERT(varchar(40),SYSUTCDATETIME(),100);
END;

Each time the function is called, it will return a different value, even though no input parameters are
supplied.

You can use the OBJECTPROPERTY() function to determine if a UDF is deterministic.

For more information about deterministic and nondeterministic functions, see Microsoft Docs:

Deterministic and Nondeterministic Functions


http://aka.ms/doi8in

The following code example creates a function, and then uses OBJECTPROPERTY to return whether or not
it is deterministic. Note the use of the OBJECT_ID function to return the ID of the TodayAsString function.

Using OBJECTPROPERTY to show whether or not a function is deterministic:

OBJECTPROPERTY
CREATE FUNCTION dbo.TodayAsString (@Format int = 112)
RETURNS VARCHAR (20)
AS BEGIN
RETURN CONVERT (VARCHAR (20),
CAST(SYSDATETIME() AS date), @Format);
END;
GO

SELECT OBJECTPROPERTY(OBJECT_ID('dbo.TodayAsString'), 'IsDeterministic');

Demonstration: Working with Scalar Functions


In this demonstration, you will see how to:

 Work with scalar functions.

Demonstration Steps
1. Ensure that the 20762C-MIA-DC and 20762C-MIA-SQL virtual machines are running, and then log
on to 20762C-MIA-SQL as ADVENTUREWORKS\Student with the password Pa55w.rd.

2. Run D:\Demofiles\Mod10\Setup.cmd as an administrator.

3. In the User Access Control dialog box, click Yes.

4. Wait for setup.cmd to complete successfully.

5. On the taskbar, click Microsoft SQL Server Management Studio.

6. In the Connect to Server dialog box, in Server name, type MIA-SQL and then click Connect.

7. On the File menu, point to Open, and then click Project/Solution.


8. In the Open Project dialog box, navigate to D:\Demofiles\Mod10\Demo10.ssmssqlproj, and then
click Open.

9. In Solution Explorer, expand the Queries folder, and then double-click 21 - Demonstration 2A.sql.

10. Select the code under Step A to use the tempdb database, and then click Execute.

11. Select the code under Step B to create a function that calculates the end date of the previous month,
and then click Execute.

12. Select the code under Step C to query the function, and then click Execute.

13. Select the code under Step D to establish if the function is deterministic, and then click Execute.

14. Select the code under Step E to drop the function, and then click Execute.

15. Create a function under Step F using the EOMONTH function (the code should resemble the
following), and then click Execute.

CREATE FUNCTION dbo.EndOfPreviousMonth (@DateToTest date)
RETURNS date
AS BEGIN
    RETURN EOMONTH(DATEADD(month, -1, @DateToTest));
END;
GO

16. Select the code under Step G to query the new function, and then click Execute.

17. Select the code under Step H to drop the function, and then click Execute.
18. Keep SQL Server Management Studio open with the Demo10.ssmssqlproj solution loaded for the
next demo.

Check Your Knowledge


Question

Which of these data types can be returned by a scalar function used in managed code?

Select the correct answer.

rowversion

table

cursor

integer

Verify the correctness of the statement by placing a mark in the column to the right.

Statement Answer

True or false? A deterministic function is one that may return different results for the same set of input
values each time it is called, even if the database remains in the same state.

Lesson 3
Designing and Implementing Table-Valued Functions
In this lesson, you will learn how to work with functions that return tables instead of single values. These
functions are known as table-valued functions (TVFs). There are two types of TVFs: inline and
multistatement.

The ability to return a table of data is important because it means a function can be used as a source of
rows in place of a table in a Transact-SQL statement. In many cases, this avoids the need to create
temporary tables.

Lesson Objectives
After completing this lesson, you will be able to:

 Describe TVFs.
 Describe inline TVFs.

 Describe multistatement TVFs.

What Are Table-Valued Functions?


Unlike scalar functions, TVFs return a table that can
contain many rows of data, each with many
columns.

Table-Valued Functions

There are two types of TVFs you can deploy.

Inline TVFs

These return an output table that is defined by a RETURN statement that consists of a single SELECT
statement.

Multistatement TVFs

If the logic of the function is too complex to include in a single SELECT statement, you need to implement
the function as a multistatement TVF. Multistatement TVFs construct a table within the body of the
function, and then return the table. They also need to define the schema of the table to be returned.

Note: You can use both types of TVF as the equivalent of parameterized views.

For more information about TVFs, see Microsoft TechNet:

Table-Valued User-Defined Functions


http://aka.ms/hx8dvv

Inline Table-Valued Functions


You can use inline functions to achieve the
functionality of parameterized views. One of the
limitations of a view is that you cannot include a
user-provided parameter within the view when you
create it.

In the following code example, note that the return type is TABLE. The definition of the columns of the
table is not shown; you do not explicitly define the schema of the returned table. The output table
schema is derived from the SELECT statement that you provide within the RETURN statement. Every
column that the SELECT statement returns should also have a distinct name.

Example of a table-valued function:

Table-Valued Function


USE AdventureWorks;
GO

CREATE FUNCTION Sales.GetLastOrdersForCustomer
(@CustomerID int, @NumberOfOrders int)
RETURNS TABLE
AS
RETURN (SELECT TOP (@NumberOfOrders) soh.SalesOrderID, soh.OrderDate,
soh.PurchaseOrderNumber
FROM Sales.SalesOrderHeader AS soh
WHERE soh.CustomerID = @CustomerID
ORDER BY soh.OrderDate DESC
);

GO

SELECT * FROM Sales.GetLastOrdersForCustomer(17288, 2);

For inline functions, the body of the function is not enclosed in a BEGIN…END block. A syntax error occurs
if you attempt to use this block. The CREATE FUNCTION statement—or CREATE OR ALTER statement, in
versions where it is supported—still needs to be the only statement in the batch.

For more information about inline TVFs, see Microsoft Technet:

Inline User-Defined Functions


http://aka.ms/Abkksx

Multistatement Table-Valued Functions


You use a multistatement TVF to create more
complex queries that determine which rows are
returned in the table. You can use UDFs that return
a table to replace views. This is very useful when the
logic that is required for constructing the return
table is more complex than would be possible
within the definition of a view.

A TVF, like a stored procedure, can use complex logic and multiple Transact-SQL statements to build a
table.

In the example on the slide, a function is created that returns a table of dates. For each row, two
columns are returned: the position of the date within the range of dates, and the calculated date. The
system does not already include a table of dates, so a loop needs to be constructed to calculate the
required range of dates. You cannot implement this in a single SELECT statement unless another object,
such as a table of numbers, is already present in the database. In each iteration of the loop, an INSERT
operation is performed on the table that is later returned.

In the same way that you use a view, you can use a TVF in the FROM clause of a Transact-SQL statement.
The following code is an example of a multistatement TVF:

Multistatement TVF
CREATE FUNCTION dbo.GetDateRange (@StartDate date, @NumberOfDays int)
RETURNS @DateList TABLE
(Position int, DateValue date)
AS BEGIN
    DECLARE @Counter int = 0;
    WHILE (@Counter < @NumberOfDays) BEGIN
        INSERT INTO @DateList
        VALUES (@Counter + 1, DATEADD(day, @Counter, @StartDate));
        SET @Counter += 1;
    END;
    RETURN;
END;
GO

SELECT * FROM dbo.GetDateRange('2009-12-31', 14);
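
Where a suitable helper object already exists, the same result can often be produced by an inline TVF,
which generally performs better. The following sketch assumes a dbo.Numbers helper table containing an
integer column n with the values 1, 2, 3, and so on:

Inline Alternative
CREATE FUNCTION dbo.GetDateRangeInline (@StartDate date, @NumberOfDays int)
RETURNS TABLE
AS
RETURN (SELECT n AS Position,
               DATEADD(day, n - 1, @StartDate) AS DateValue
        FROM dbo.Numbers
        WHERE n <= @NumberOfDays);
GO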

Demonstration: Implementing Table-Valued Functions


In this demonstration, you will see how to:

 Implement TVFs.

Demonstration Steps
1. In Solution Explorer, expand the Queries folder, and then double-click 31 - Demonstration 3A.sql.

2. Select the code under Step A to use the AdventureWorks database, and then click Execute.

3. Select the code under Step B to create a table-valued function, and then click Execute.
4. Select the code under Step C to query the function, and then click Execute.

5. Select the code under Step D to use CROSS APPLY to call the function, and then click Execute.

6. Select the code under Step E to drop the function, and then click Execute.

7. Close SQL Server Management Studio without saving any changes.

Verify the correctness of the statement by placing a mark in the column to the right.

Statement Answer

True or false? Both scalar functions and TVFs can return a table that might contain many rows of data,
each with many columns—TVFs are a simpler way of returning data in tables.

Question: You have learned that TVFs return tables. What are the two types of TVF and how
do they differ?

Lesson 4
Considerations for Implementing Functions
Although creating functions in Transact-SQL is straightforward, there are some considerations you need to
be aware of. In particular, you should avoid the negative performance impacts that inappropriate use of
functions can cause—a common issue. This lesson provides guidelines for the implementation of functions, and
describes how to control their security context.

Lesson Objectives
After completing this lesson, you will be able to:

 Describe the performance impacts of scalar functions.

 Describe the performance impacts of table-valued functions.

 Control the execution context.

 Use the EXECUTE AS clause.

 Explain some guidelines for creating functions.

Performance Impacts of Scalar Functions


The code for views is incorporated directly into the
code for the query that accesses the view. This is
not the case for scalar functions.

Common Performance Problems


The overuse of scalar functions is a common cause of performance problems in SQL Server systems. One
example is a WHERE clause predicate that calls a scalar function for every target row. In many cases,
extracting the code from the function definition and incorporating it directly into the query will
resolve the performance issue. You will see an example of this in the next lab.

Example of code that will not perform well:

Using a Scalar Function Within a SELECT Statement


SELECT soh.CustomerID, soh.SalesOrderID
FROM Sales.SalesOrderHeader AS soh
WHERE soh.SalesOrderID = dbo.GetLatestBulkOrder(soh.CustomerID)
ORDER BY soh.CustomerID, soh.SalesOrderID;

Performance Impacts of Table-Valued Functions


The code for a TVF may or may not be incorporated
into the query that uses the function, depending on
the type of TVF. Inline TVFs are directly
incorporated into the code of the query that uses
them.

Common Performance Problems


Multistatement TVFs are not incorporated into the
code of the query that uses them. The
inappropriate usage of such TVFs is a common
cause of performance issues in SQL Server.

You can use the CROSS APPLY operator to call a TVF for each row in the table on the left within the
query. Designs that require the calling of a TVF for every row in a table can lead to significant
performance overhead. You should examine the design to see if there is a way to avoid the need to call
the function for each row.

Another example of poorly performing code:

Calling a Function for Every Row


SELECT c.CustomerID, c.AccountNumber, glofc.SalesOrderID, glofc.OrderDate
FROM Sales.Customer AS c
CROSS APPLY Sales.GetLastOrdersForCustomer(c.CustomerID, 3) AS glofc
ORDER BY c.CustomerID, glofc.SalesOrderID;

Note: Interleaved execution, introduced as part of the Adaptive Query Processing feature
of SQL Server 2017, is designed to improve the performance of multistatement TVFs by
generating more accurate cardinality estimates, leading to more accurate query execution plans.
Adaptive Query Processing is enabled for all databases with a compatibility level of 140 or higher.

Controlling the Execution Context


Execution context establishes the identity against
which permissions are checked. The user or login
that is connected to the session, or calling a module
(such as a stored procedure or function),
determines the execution context.

When you use the EXECUTE AS clause to change the execution context so that a code module executes
as a user other than the caller, the code is said to “impersonate” the alternative user.

Before you can create a function that executes as another user, you need to have IMPERSONATE
permission on that user, or be part of the dbo role.

The EXECUTE AS Clause


The EXECUTE AS clause sets the execution context
of a session.

You can use the EXECUTE AS clause in a stored procedure or function to set the identity that is used as
the execution context for the stored procedure or function.

EXECUTE AS means you can create procedures that execute code that the user who is executing the
procedure is not permitted to execute. In this way, you do not need to be concerned about broken
ownership chains or dynamic SQL execution.
SQL Server supports the ability to impersonate another principal, either explicitly by using the stand-alone
EXECUTE AS statement, or implicitly by using the EXECUTE AS clause on modules. You can use the
stand-alone EXECUTE AS statement to impersonate server-level principals, or logins, by using the
EXECUTE AS LOGIN statement. You can also use the stand-alone EXECUTE AS statement to impersonate
database-level principals, or users, by using the EXECUTE AS USER statement.
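
The following sketch shows the stand-alone form; it assumes that a database user named Student exists:

Stand-Alone EXECUTE AS
EXECUTE AS USER = 'Student';
SELECT USER_NAME() AS CurrentUser;  -- returns Student
REVERT;                             -- restores the original context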

Implicit impersonations that are performed through the EXECUTE AS clause on modules impersonate the
specified user or login at the database or server level. This impersonation depends on whether the module
is a database-level module, such as a stored procedure or function, or a server-level module, such as a
server-level trigger.
When you are impersonating a principal by using the EXECUTE AS LOGIN statement, or within a server-
scoped module by using the EXECUTE AS clause, the scope of the impersonation is server-wide. This
means that, after the context switch, you can access any resource within the server on which the
impersonated login has permissions.
However, when you are impersonating a principal by using the EXECUTE AS USER statement, or within a
database-scoped module by using the EXECUTE AS clause, the scope of impersonation is restricted to the
database by default. This means that references to objects that are outside the scope of the database will
return an error.

Execute As:

Syntax for Execute As


CREATE FUNCTION Sales.GetOrders ()
RETURNS TABLE
WITH EXECUTE AS { CALLER | SELF | OWNER | 'user_name' }
AS
.....

Guidelines for Creating Functions


Consider the following guidelines when you create
user-defined functions (UDFs):

 UDF Performance. In many cases, inline functions are much faster than multistatement functions.
Wherever possible, try to implement functions as inline functions.

 UDF Size. Avoid building large, general-purpose functions. Keep functions relatively small and
targeted at a specific purpose. This avoids code complexity and increases the opportunities for
reusing the functions.

 UDF Naming. Use two-part naming to qualify the name of any database objects that are referred to
within the function. You should also use two-part naming when you are choosing the name of the
function.
 UDFs and Exception Handling. Avoid statements that will raise Transact-SQL errors, because
exception handling is not permitted within functions.
 UDFs with Indexes. Consider the impact of using functions in combination with indexes. In
particular, note that a WHERE clause that uses a predicate, such as the following code example, is
likely to remove the usefulness of an index on CustomerID:

The following code fragment shows such a predicate:

Functions with Indexes


WHERE Function(CustomerID) = Value
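
The same problem arises with built-in functions. In the following comparison, the first predicate prevents
an index seek on OrderDate because the function must be evaluated for every row; the equivalent range
predicate can use an index:

Sargable and Non-Sargable Predicates
-- Non-sargable: the function is applied to the column.
SELECT SalesOrderID FROM Sales.SalesOrderHeader
WHERE YEAR(OrderDate) = 2011;

-- Sargable: an equivalent range predicate on the bare column.
SELECT SalesOrderID FROM Sales.SalesOrderHeader
WHERE OrderDate >= '20110101' AND OrderDate < '20120101';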

Demonstration: Controlling the Execution Context


In this demonstration, you will see how to:

 Alter the execution context of a function.

Demonstration Steps
Alter the Execution Context of a Function

1. Ensure that the 20762C-MIA-DC and 20762C-MIA-SQL virtual machines are running, and then log
on to 20762C-MIA-SQL as ADVENTUREWORKS\Student with the password Pa55w.rd.

2. Run D:\Demofiles\Mod10\Setup.cmd as an administrator.

3. In the User Access Control dialog box, click Yes.

4. On the taskbar, click Microsoft SQL Server Management Studio.

5. In the Connect to Server dialog box, in Server name box, type MIA-SQL, and then click Connect.

6. On the File menu, point to Open, click Project/Solution.

7. In the Open Project dialog box, navigate to D:\Demofiles\Mod10\Demo10.ssmssqlproj, and then


click Open.

8. In Solution Explorer, expand the Queries folder, and then double-click 41 - Demonstration 4A.sql.

9. Select the code under Step A to use the master database, and then click Execute.

10. Select the code under Step B to create a test login, and then click Execute.

11. Select the code under Step C to use the AdventureWorks database and create a user, and then click
Execute.

12. Select the code under Step D to create a function with default execution context, and then click
Execute.

13. Select the code under Step E to try to add WITH EXECUTE AS, and then click Execute.

14. Select the code under Step F to recreate the function as a multistatement table-valued function, and
then click Execute.

15. Select the code under Step G to select from the function, and then click Execute.

16. Select the code under Step H to drop the objects, and then click Execute.

17. Close SQL Server Management Studio without saving any changes.

Verify the correctness of the statement by placing a mark in the column to the right.

Statement Answer

True or false? The underuse of scalar functions is a common cause of performance problems in SQL
Server systems.

Verify the correctness of the statement by placing a mark in the column to the right.

Statement Answer

True or false? Before you can create a function that executes as another user, you need to have
IMPERSONATE permission on that user, or be part of the dbo role.

Check Your Knowledge


Question

Which of these is not considered when creating UDFs?

Select the correct answer.

Function Naming

Function Code Size

Function Log Size

Function Exception Handling

Function Performance

Lesson 5
Alternatives to Functions
Functions are one option for implementing code. This lesson explores situations where other solutions
may be appropriate and helps you choose which solution to use.

Lesson Objectives
After completing this lesson, you will be able to:

 Compare table-valued functions and stored procedures.

 Compare table-valued functions and views.

Comparing Table-Valued Functions and Stored Procedures


You can often use TVFs and stored procedures to
achieve similar outcomes. However, not all client
applications can call both. This means that you
cannot necessarily use them interchangeably. Each
approach also has its pros and cons.
Although you can access the output rows of a
stored procedure by using an INSERT EXEC
statement, it is easier to consume the output of a
function in code than the output of a stored
procedure.

For example, you cannot execute the following code:

Cannot Select from a Stored Procedure


SELECT * FROM (EXEC dbo.GetCriticalPathNodes);

You could assign the output of a function to a variable in code.
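
The contrast is sketched below. The temporary table columns are hypothetical, because the output shape
of dbo.GetCriticalPathNodes is not defined here:

INSERT EXEC
-- A stored procedure's rows must be captured in a pre-created table:
CREATE TABLE #CriticalPathNodes (NodeID int, NodeName nvarchar(100));

INSERT INTO #CriticalPathNodes (NodeID, NodeName)
EXEC dbo.GetCriticalPathNodes;

SELECT * FROM #CriticalPathNodes;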


Stored procedures can modify data in database tables. Functions cannot modify data in database tables.
Functions that include such “side effects” are not permitted. Functions can have significant performance
impacts when they are called for each row in a query; for example, when a TVF is called by using a CROSS
APPLY or OUTER APPLY statement.

Stored procedures can execute dynamic SQL statements. Functions are not permitted to execute dynamic
SQL statements.

Stored procedures can include detailed exception handling. Functions cannot contain exception handling.

Stored procedures can return multiple resultsets from a single stored procedure call. TVFs can return a
single rowset from a function call. There is no mechanism to permit the return of multiple rowsets from a
single function call.

Comparing Table-Valued Functions and Views


TVFs can provide similar outcomes to views.

Views and TVFs that do not contain parameters can usually be consumed by most client applications
that access tables. Not all such applications can pass parameters to a TVF.

You can update views and inline TVFs. This is not the case for multistatement TVFs.

Views can have INSTEAD OF triggers associated with them. This is mostly used to provide for updatable
views based on multiple base tables.

Views and inline TVFs are incorporated into surrounding queries. Multistatement TVFs are not
incorporated into surrounding queries, and often lead to performance issues when they are used
inappropriately.

Check Your Knowledge


Question

What is wrong with this TVF code fragment?
SELECT * FROM (EXEC dbo.GetCriticalPathNodes);

Select the correct answer.

Incorrect syntax for a TVF.

You cannot select from a stored procedure in a TVF.

dbo.GetCriticalPathNodes does not exist.

The statement needs more parentheses.

None of these—this code is good.

Verify the correctness of the statement by placing a mark in the column to the right.

Statement Answer

True or false? You can update views, inline TVFs, and multistatement TVFs.

Lab: Designing and Implementing User-Defined Functions


Scenario
The existing marketing application includes some functions. Your manager has requested your assistance
in creating a new function for formatting phone numbers—you also need to modify an existing function
to improve its usability.

Objectives
After completing this lab, you will be able to:

 Create a function.

 Modify an existing function.

Estimated Time: 30 minutes


Virtual machines: 20762C-MIA-SQL

User name: ADVENTUREWORKS\Student

Password: Pa55w.rd

Exercise 1: Format Phone Numbers


Scenario
Your manager has noticed that different users tend to format phone numbers that are entered into the
database in different ways. She has asked you to create a function that will be used to format the phone
numbers. You need to design, implement, and test the function.
The main tasks for this exercise are as follows:

1. Prepare the Lab Environment

2. Review the Design Requirements

3. Design and Create the Function

4. Test the Function

 Task 1: Prepare the Lab Environment


1. Ensure that the 20762C-MIA-DC and 20762C-MIA-SQL virtual machines are both running, and then
log on to 20762C-MIA-SQL as ADVENTUREWORKS\Student with the password Pa55w.rd.

2. Run D:\Labfiles\Lab10\Starter\Setup.cmd as an administrator.

 Task 2: Review the Design Requirements


1. Open the Supporting Documentation.docx in the D:\Labfiles\Lab10\Starter folder.

2. Review the Function Specifications: Phone Number section in the supporting documentation.

 Task 3: Design and Create the Function


 Design and create the function for reformatting phone numbers.

 Task 4: Test the Function


 Execute the FormatPhoneNumber function to ensure that the function correctly formats the phone
number.

Results: After this exercise, you should have created a new FormatPhoneNumber function within the
dbo schema.

Exercise 2: Modify an Existing Function


Scenario
An existing function, dbo.StringListToTable, takes a comma-delimited list of strings and returns a table.
In some application code, this causes issues with data types because the list often contains integers rather
than just strings.

The main tasks for this exercise are as follows:

1. Review the Requirements

2. Design and Create the Function

3. Test the Function

4. Test the Function by Using an Alternate Delimiter

 Task 1: Review the Requirements


 In the Supporting Documentation.docx, review the Requirements: Comma Delimited List
Function section in the supporting documentation.

 Task 2: Design and Create the Function


 Design and create the dbo.IntegerListToTable function.

 Task 3: Test the Function


 Execute the dbo.IntegerListToTable function to ensure that it returns the correct results.

 Task 4: Test the Function by Using an Alternate Delimiter


 Test the dbo.IntegerListToTable function, and then pass in an alternate delimiter, such as the pipe
character (|).

Results: After this exercise, you should have created a new IntegerListToTable function within the dbo
schema.

Module Review and Takeaways


In this module, you have learned about designing, creating, deploying, and testing functions.

Best Practice: When working with functions, consider the following best practices:

 Avoid calling multistatement TVFs for each row of a query. In many cases, you can dramatically
improve performance by extracting the code from the function into the surrounding query.

 Use the WITH EXECUTE AS clause to override the security context of code that needs to perform
actions for which the user who is executing the code does not have permissions.

Review Question(s)
Question: When you are using the EXECUTE AS clause, what privileges should you grant to
the login or user that is being impersonated?

Question: When you are using the EXECUTE AS clause, what privileges should you grant to
the login or user who is creating the code?

Module 11
Responding to Data Manipulation Via Triggers
Contents:
Module Overview 11-1
Lesson 1: Designing DML Triggers 11-2

Lesson 2: Implementing DML Triggers 11-9

Lesson 3: Advanced Trigger Concepts 11-15


Lab: Responding to Data Manipulation by Using Triggers 11-23

Module Review and Takeaways 11-26

Module Overview
Data Manipulation Language (DML) triggers are powerful tools that you can use to enforce domain,
entity, and referential data integrity, in addition to business logic. The enforcement of integrity helps you to build
reliable applications. In this module, you will learn what DML triggers are, how they enforce data integrity,
the different types of trigger that are available to you, and how to define them in your database.

Objectives
After completing this module, you will be able to:

 Design DML triggers.

 Implement DML triggers.


 Explain advanced DML trigger concepts, such as nesting and recursion.

Lesson 1
Designing DML Triggers
Before you begin to create DML triggers, you should become familiar with best practice design guidelines,
which help you to avoid making common errors.

Several types of DML trigger are available—this lesson goes through what they do, how they work, and
how they differ from Data Definition Language (DDL) triggers. DML triggers have to be able to work with
both the previous state of the database and its changed state. You will see how the inserted and deleted
virtual tables provide that capability.

DML triggers are often added after applications are built—so it’s important to check that a trigger will not
cause errors in the existing applications. The SET NOCOUNT ON command helps to avoid the side effects
of triggers.

Lesson Objectives
After completing this lesson, you will be able to:

 Describe DML triggers.

 Explain how AFTER triggers differ from INSTEAD OF triggers, and where each should be used.
 Access both the “before” and “after” states of the data by using the inserted and deleted virtual
tables.

 Avoid affecting existing applications by using SET NOCOUNT ON.


 Describe performance-related considerations for triggers.

What Are DML Triggers?


A DML trigger is a special kind of stored
procedure that executes when an INSERT,
UPDATE, or DELETE statement modifies the data in
a specified table or view. This includes any INSERT,
UPDATE, or DELETE statement that forms part of a
MERGE statement. A trigger can query other
tables and include complex Transact-SQL
statements.

DDL triggers are similar to DML triggers, but DDL triggers fire when DDL events occur. DDL events
occur for most CREATE, ALTER, or DROP statements in the Transact-SQL language.

Logon triggers are a special form of trigger that fire when a new session is established. (There is no logoff
trigger.)

Note: Terminology: The word “fire” is used to describe the point at which a trigger is
executed as the result of an event.

Trigger Operation
The trigger and the statement that fires it are treated as a single operation, which you can roll back from
within the trigger. By rolling back an operation, you can undo the effect of a Transact-SQL statement if
the logic in your triggers determines that the statement should not have been executed. If the statement
is part of another transaction, the outer transaction is also rolled back.

Triggers can cascade changes through related tables in the database; however, in many cases, you can
execute these changes more efficiently by using cascading referential integrity constraints.

Complex Logic and Meaningful Error Messages


Triggers can guard against malicious or incorrect INSERT, UPDATE, and DELETE operations and enforce
other restrictions that are more complex than those that are defined by using CHECK constraints. For
example, a trigger could check referential integrity for one column, only when another column holds a
specific value.

Unlike CHECK constraints, triggers can reference columns in other tables. For example, a trigger can use a
SELECT statement from another table to compare to the inserted or updated data, and to perform
additional actions, such as modifying the data or displaying a user-defined error message.

Triggers can evaluate the state of a table before and after a data modification, and take actions based on
that difference. For example, you may want to check that the balance of a customer’s account does not
change by more than a certain amount if the person processing the change is not a manager.

With triggers, you can also create custom error messages for when constraint violations occur. This could
make the messages that are passed to users more meaningful.

Multiple Triggers
With multiple triggers of the same type (INSERT, UPDATE, or DELETE) on a table, you can make multiple
different actions occur in response to the same modification statement. You might create multiple triggers
to separate the logic that each performs, but note that you do not have complete control over the order
in which they fire. You can only specify which triggers should fire first and last.
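
You control the order by using the sp_settriggerorder system stored procedure. The trigger name in the
following sketch is an assumption used for illustration:

sp_settriggerorder
EXEC sp_settriggerorder
    @triggername = N'Sales.TR_Customer_Audit',
    @order = N'First',
    @stmttype = N'INSERT';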

For more information about triggers, see Microsoft Docs:


CREATE TRIGGER (Transact-SQL)
https://aka.ms/Bghmb0

AFTER Triggers vs. INSTEAD OF Triggers


There are two types of DML trigger: AFTER
triggers and INSTEAD OF triggers. The main
difference between them relates to when they fire.
You can implement both types of DML trigger in
either Transact-SQL or managed code. In this
module, you will explore how they are designed
and implemented by using Transact-SQL.
Even if an UPDATE statement (or other data
modification statement) modifies many rows, the
trigger only fires a single time. For that reason,
you must design triggers to handle multiple rows.
This design differs from other database engines

where triggers are written to target single rows and are called multiple times when a statement affects
multiple rows.

AFTER Triggers
AFTER triggers fire after the data modifications that are part of the event to which they relate have
completed. This means that an INSERT, UPDATE, or DELETE statement executes and modifies the data in
the database. After that modification has completed, AFTER triggers associated with that event fire—but
still within the same operation that triggered them.

Common reasons for implementing AFTER triggers are:

 To provide auditing of the changes that were made.

 To implement complex rules involving the relationship between tables.

 To implement default values or calculated values within rows.


In many cases, you can replace trigger-based code with other forms of code. For example, Microsoft®
SQL Server® data management software might provide auditing. Relationships between tables are more
typically implemented by using foreign key constraints. Default values and calculated values are typically
implemented by using DEFAULT constraints and persisted calculated columns. However, in some
situations, the complexity of the logic that is required will make triggers a good solution.
If the trigger executes a ROLLBACK statement, the data modification statement with which it is associated
will be rolled back. If that statement was part of a larger transaction, that outer transaction would be
rolled back, too.

INSTEAD OF Triggers
An INSTEAD OF trigger is a special type of trigger that executes alternate code instead of executing the
statement from which it was fired.
When you use an INSTEAD OF trigger, only the code in the trigger is executed. The original INSERT,
UPDATE, or DELETE operation that caused the trigger to fire does not occur.

INSTEAD OF triggers are most commonly used to make views that are based on multiple base tables
updatable.
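
A minimal sketch of this pattern follows; the view and base table names are hypothetical:

INSTEAD OF INSERT Trigger on a View
CREATE TRIGGER dbo.TR_vCustomerContact_Insert
ON dbo.vCustomerContact
INSTEAD OF INSERT
AS
BEGIN
    SET NOCOUNT ON;
    -- Split each inserted row across the two base tables.
    INSERT INTO dbo.Customer (CustomerID, CustomerName)
    SELECT CustomerID, CustomerName FROM inserted;

    INSERT INTO dbo.Contact (CustomerID, PhoneNumber)
    SELECT CustomerID, PhoneNumber FROM inserted;
END;
GO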

Inserted and Deleted Virtual Tables


When you design a trigger, you must be able to
make decisions based on what changes have been
made to the data. To make effective decisions, you
need to access details of both the unmodified and
modified versions of the data. DML triggers
provide this through a pair of virtual tables called
“inserted” and “deleted”. These virtual tables are
often then joined to the modified table data as
part of the logic within the trigger.

INSERT, UPDATE, and DELETE Operations


The following list describes what happens after each of these operations:

 INSERT: the inserted virtual table holds details of the rows that have just been inserted. The
underlying table also contains those rows.

 UPDATE:

o The inserted virtual table holds details of the modified versions of the rows. The underlying table
also contains those rows in the modified form.

o The deleted virtual table holds details of the rows from before the modification was made. The
underlying table holds the modified versions.

 DELETE: the deleted virtual table holds details of the rows that have just been deleted. The
underlying table no longer contains those rows.

INSTEAD OF Triggers
An INSTEAD OF trigger can be associated with an event on a table. When you attempt an INSERT,
UPDATE, or DELETE statement that triggers the event, the inserted and deleted virtual tables hold details
of the modifications that must be made, but have not yet happened.

Scope of Inserted and Deleted


The inserted and deleted virtual tables are only available during the execution of the trigger code and are
scoped directly to the trigger code. This means that, if the trigger code were to execute a stored
procedure, that stored procedure would not have access to the inserted and deleted virtual tables.

SET NOCOUNT ON
When you are adding a trigger to a table, you
must avoid affecting the behavior of applications
that are accessing the table, unless the intended
purpose of the trigger is to prevent misbehaving
applications from making inappropriate data
changes.

It is common for application programs to issue data modification statements and to check the returned
count of the number of rows that are affected. This process is often performed as part of an optimistic
concurrency check.

Consider the following code example:

UPDATE Statement
UPDATE Customer
SET Customer.FullName = @NewName,
Customer.Address = @NewAddress
WHERE Customer.CustomerID = @CustomerID
AND Customer.Concurrency = @Concurrency;

In this case, the Concurrency column is a rowversion data type column. The application was designed so
that the update only occurs if the Concurrency column has not been altered. With a rowversion column,
every modification to the row causes the value in that column to change.

When the application intends to modify a single row, it issues an UPDATE statement for that row. The
application then checks the count of updated rows that are returned by SQL Server. When the application
sees that only a single row has been modified, the application knows that only the row that it intended to
change was affected. It also knows that no other user had modified the row before the application read
the data.
A common mistake when you are adding triggers is that if the trigger also causes row modifications (for
example, writes an audit row into an audit table), that count is returned in addition to the expected count.
You can avoid this situation by using the SET NOCOUNT ON statement. Most triggers should include this
statement.

Note: Returning Rowsets


Although you can include a SELECT statement within a trigger that returns rows to the caller, creating
this type of side effect is discouraged. The ability to do this is now deprecated and
should not be used in new development work. There is a configuration setting, “disallow results
from triggers”, which, when it is set to 1, disallows this capability.

Considerations for Triggers


For performance reasons, it is generally preferable
to use constraints rather than triggers. Triggers are
complex to debug because the actions that they
perform are not visible directly in the code that
causes them to fire. Triggers also increase how
long data modification transactions take, because
they add extra steps that SQL Server needs to
process during these operations. You should
design triggers to be as short as possible and to be
specific to a given task, rather than being
designed to perform a large number of tasks
within a single trigger.

Note that you can disable and re-enable triggers by using the DISABLE TRIGGER and ENABLE TRIGGER statements.
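
For example (the trigger and table names here are assumptions used for illustration):

DISABLE TRIGGER and ENABLE TRIGGER
DISABLE TRIGGER Sales.TR_Customer_Audit ON Sales.Customer;
-- ... perform the maintenance operation ...
ENABLE TRIGGER Sales.TR_Customer_Audit ON Sales.Customer;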

Constraints vs. Triggers


When an AFTER trigger decides to disallow a data modification, it does so by executing a ROLLBACK
statement. The ROLLBACK statement undoes all of the work that the original statement performed.
However, you can achieve higher performance by avoiding the data modification ever occurring.

Note: Reminder: Constraints are rules that define allowed column or table values.

Constraints are checked before any data modification is attempted, so they often provide much higher
performance than is possible with triggers, particularly in ROLLBACK situations. You can use constraints
for relatively simple checks; triggers make it possible to check more complex logic.

Rowversions and tempdb


Since SQL Server 2005, trigger performance has been improved. In earlier versions of SQL Server, the
inserted and deleted virtual tables were constructed from entries in the transaction log. From SQL Server
2005 onward, a special rowversion table has been provided in the tempdb database. The rowversion table
holds copies of the data in the inserted and deleted virtual tables for the duration of the trigger. This
design has improved overall performance, but means that excessive usage of triggers could cause
performance issues within the tempdb database.

Managing Trigger Security


When a DML or DDL trigger is executed, it is executed in the context of the calling user. Context
refers to the privileges under which the code runs—for example, those of a member of the sysadmin
fixed server role, or of an individual user such as ADVENTUREWORKS\Student.

The default context can be a security issue because it could be used by people who wish to execute
malicious code.

The following example shows how a trigger could be used to escalate the permissions of the Student user:

Change Permissions DDL Example


CREATE TRIGGER DDL_trigStudent
ON DATABASE
FOR ALTER_TABLE
AS
GRANT CONTROL SERVER TO Student;
GO

The Student user now has CONTROL SERVER permissions. Because the trigger runs in the context of
whichever privileged user altered a table, Student has been granted a permission that he or she could not
normally have obtained. The code has granted escalated permissions.

 Best Practice: To prevent triggers from firing under escalated privileges, you should first
understand what triggers you have in the database and server instance. You can use the
sys.triggers and sys.server_triggers views to find out. Secondly, use the DISABLE TRIGGER
statement to disable triggers that use escalated privileges.
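
For example, the following queries list the existing triggers, and the DISABLE TRIGGER statement then
disables the database-level trigger that was created in the earlier example:

Finding and Disabling Triggers
SELECT name, parent_class_desc, is_disabled FROM sys.triggers;
SELECT name, is_disabled FROM sys.server_triggers;

DISABLE TRIGGER DDL_trigStudent ON DATABASE;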

For further information about managing triggers, see Microsoft Docs:

Manage Trigger Security


http://aka.ms/nzsq10

Check Your Knowledge


Question

Which types of statements fire DML triggers?

Select the correct answer.

CREATE, ALTER, DROP

MERGE, ALTER, DELETE

MERGE, CREATE, INSERT

CREATE, UPDATE, DELETE

DELETE, INSERT, UPDATE

Question: What reasons can you think of for deploying AFTER triggers?

Lesson 2
Implementing DML Triggers
The first lesson provided information about designing DML triggers. We now consider how to implement
the designs that have been created.

Lesson Objectives
After completing this lesson, you will be able to:

 Implement AFTER INSERT triggers.

 Implement AFTER DELETE triggers.

 Implement AFTER UPDATE triggers.

AFTER INSERT Triggers


An AFTER INSERT trigger executes whenever an
INSERT statement enters data into a table or view
on which the trigger is configured. The action of
the INSERT statement is completed before the
trigger fires, but the trigger action is logically part
of the INSERT operation.

AFTER INSERT Trigger Actions


When an AFTER INSERT trigger fires, new rows are
added to both the base table and the inserted
virtual table. The inserted virtual table holds a
copy of the rows that have been inserted into the
base table.
The trigger can examine the inserted virtual table to determine what to do in response to the
modification.

Multirow Inserts
In the code example on the slide, insertions for the Sales.Opportunity table are being audited to a table
called Sales.OpportunityAudit. Note that the trigger processes all inserted rows at the same time. A
common error when designing AFTER INSERT triggers is to write them with the assumption that only a
single row is being inserted.

Example code that works with multiple rows:

CREATE TRIGGER
CREATE TRIGGER Sales.InsertCustomer
ON Sales.Customer
AFTER INSERT
AS
BEGIN
    SET NOCOUNT ON;
    -- Copy all inserted rows to an audit table in one set-based statement
    -- (assumes an audit table Sales.CustomerAudit with matching columns).
    INSERT INTO Sales.CustomerAudit (CustomerID, PersonID, StoreID, TerritoryID)
    SELECT CustomerID, PersonID, StoreID, TerritoryID
    FROM inserted;
END;
GO

For more information about DML Trigger, see Microsoft Docs:

DML Triggers
https://aka.ms/xqgne8

Demonstration: Working with AFTER INSERT Triggers


In this demonstration, you will see how to:

 Create an AFTER INSERT trigger.

Demonstration Steps
Create an AFTER INSERT Trigger

1. Ensure that the 20762C-MIA-DC and 20762C-MIA-SQL virtual machines are running, and then log
on to 20762C-MIA-SQL as ADVENTUREWORKS\Student with the password Pa55w.rd.

2. Run D:\Demofiles\Mod11\Setup.cmd as an administrator.

3. In the User Account Control dialog box, click Yes.

4. On the taskbar, click Microsoft SQL Server Management Studio.

5. In the Connect to Server dialog box, in the Server name box, type MIA-SQL, and then click
Connect.

6. On the File menu, point to Open, click Project/Solution.

7. In the Open Project dialog box, navigate to D:\Demofiles\Mod11\Demo11, click


Demo11.ssmssqlproj, and then click Open.
8. In Solution Explorer, expand the Queries folder, and then double-click the 21 - Demonstration
2A.sql script file to open it.

9. Select the code under the Step A comment, and then click Execute.
10. Select the code under the Step B comment, and then click Execute.

11. Select the code under the Step C comment, and then click Execute.

12. Select the code under the Step D comment, and then click Execute.

13. Select the code under the Step E comment, and then click Execute. Note the error message.

14. Select the code under the Step F comment, and then click Execute.

15. Do not close SQL Server Management Studio.



AFTER DELETE Triggers


An AFTER DELETE trigger executes whenever a
DELETE statement removes data from a table or
view on which the trigger is configured. The action
of the DELETE statement is completed before the
trigger fires, but logically within the operation of
the statement that fired the trigger.

AFTER DELETE Trigger Actions


When an AFTER DELETE trigger fires, rows are
removed from the base table. The deleted virtual
table holds a copy of the rows that have been
deleted from the base table. The trigger can
examine the deleted virtual table to determine
what to do in response to the modification.

Multirow Deletes
In the code example, rows in the Production.Product table are being flagged as discontinued if the
product subcategory row with which they are associated in the Production.ProductSubCategory table is deleted.
Note that the trigger processes all deleted rows at the same time. A common error when designing AFTER
DELETE triggers is to write them with the assumption that only a single row is being deleted.

Setting the Discontinued Date with a Trigger if a Subcategory is Deleted


CREATE TRIGGER TR_Category_Delete
ON Production.ProductSubCategory
AFTER DELETE AS
BEGIN
    SET NOCOUNT ON;
    UPDATE p
    SET p.DiscontinuedDate = SYSDATETIME()
    FROM Production.Product AS p
    WHERE EXISTS (SELECT 1 FROM deleted AS d
                  WHERE p.ProductSubcategoryID = d.ProductSubcategoryID);
END;
GO

TRUNCATE TABLE
When rows are deleted from a table by using a DELETE statement, any AFTER DELETE triggers are fired
when the deletion is completed. TRUNCATE TABLE is an administrative option that removes all rows from
a table. It needs additional permissions above those required for deleting rows. It does not fire any AFTER
DELETE triggers that are associated with the table.

Demonstration: Working with AFTER DELETE Triggers


In this demonstration, you will see how to:

 Create and test AFTER DELETE triggers.

Demonstration Steps
Create and Test AFTER DELETE Triggers

1. In Solution Explorer, in the Queries folder, double-click the 22 - Demonstration 2B.sql script file to open it.

2. Select the code under the Step A comment, and then click Execute.

3. Select the code under the Step B comment, and then click Execute.

4. Select the code under the Step C comment, and then click Execute.

5. Select the code under the Step D comment, and then click Execute. Note the error message.

6. Select the code under the Step E comment, and then click Execute.

7. Do not close SQL Server Management Studio.

AFTER UPDATE Triggers


An AFTER UPDATE trigger executes whenever an
UPDATE statement modifies data in a table or
view on which the trigger is configured. The action
of the UPDATE statement is completed before the
trigger fires.

AFTER UPDATE Trigger Actions


When an AFTER UPDATE trigger fires, update actions are treated as a set of deletions (the rows as they were) and insertions (the rows as they are now). Rows that are to be modified in the base table are copied to the deleted virtual table, and the updated versions of the rows are copied to the inserted virtual table. The inserted virtual table therefore holds a copy of the rows in their modified state, exactly as they now appear in the base table.

The trigger can examine both the inserted and deleted virtual tables to determine what to do in response
to the modification.

Multirow Updates
In the code example, the Production.ProductReview table contains a column called ModifiedDate. The trigger is being used to ensure that when changes are made to the Production.ProductReview table, the value in the ModifiedDate column reflects when any changes last happened.

Note that the trigger processes all updated rows at the same time. A common error when designing
AFTER UPDATE triggers is to write them with the assumption that only a single row is being updated.

After Update Trigger


CREATE TRIGGER TR_ProductReview_Update
ON Production.ProductReview
AFTER UPDATE AS
BEGIN
    SET NOCOUNT ON;
    UPDATE pr
    SET pr.ModifiedDate = SYSDATETIME()
    FROM Production.ProductReview AS pr
    INNER JOIN inserted AS i
        ON i.ProductReviewID = pr.ProductReviewID;
END;

Demonstration: Working with AFTER UPDATE Triggers


In this demonstration, you will see how to:

 Create and test AFTER UPDATE triggers.

Demonstration Steps
Create and Test AFTER UPDATE Triggers

1. In Solution Explorer, in the Queries folder, double-click 23 - Demonstration 2C.sql to open it.
2. Select the code under the Step A comment, and then click Execute.

3. Select the code under the Step B comment, and then click Execute.

4. Select the code under the Step C comment, and then click Execute.

5. Select the code under the Step D comment, and then click Execute.

6. Select the code under the Step E comment, and then click Execute.

7. Select the code under the Step F comment, and then click Execute.
8. Select the code under the Step G comment, and then click Execute.

9. Select the code under the Step H comment, and then click Execute.

10. Select the code under the Step I comment, and then click Execute.

11. Select the code under the Step J comment, and then click Execute.

12. Select the code under the Step K comment, and then click Execute. Note that no triggers are
returned.
13. Do not close SQL Server Management Studio.

Verify the correctness of the statement by placing a mark in the column to the right.

Statement Answer

Is the following statement true or false?

“When rows are deleted from a table by using a DELETE statement, any AFTER DELETE triggers are fired when the deletion is completed. DELETE ROWS is an administrative option that removes all rows from a table.”

Question:
Analyze this create trigger code and indicate the four errors. You can assume the table and columns have been created.

Outline some code you could use to test the trigger.

CREATE TRIGGER TR_SellingPrice_InsertUpdate
IN dbo.SellingPrice
AFTER INPUT, UPDATE AS BEGIN
SET NOCOUNT OPEN;
INSERT sp
SET sp.ExtendedAmount = sp.SubTotal
    + sp.TaxAmount
    + sp.FreightAmount
FROM dbo.SellingPrice AS sp
INNER JOIN inserted AS i
    ON sp.SellingPriceID = i.SellingPriceId;
END;
GO

Lesson 3
Advanced Trigger Concepts
In the previous two lessons, you have learned to design and implement DML AFTER triggers. However, to
make effective use of these triggers, you have to know and understand some additional areas of
complexity that are related to them. This lesson considers when to use triggers, and when to consider
alternatives.

Lesson Objectives
After completing this lesson, you will be able to:

 Implement DML INSTEAD OF triggers.

 Explain how nested triggers work and how configurations might affect their operation.

 Explain considerations for recursive triggers.

 Use the UPDATE function to build logic based on the columns being updated.

 Describe the order in which multiple triggers fire when defined on the same object.

 Explain the alternatives to using triggers.

INSTEAD OF Triggers
INSTEAD OF triggers cause the execution of
alternate code instead of executing the statement
that caused them to fire.

INSTEAD OF Triggers vs. BEFORE Triggers

Some other database engines provide BEFORE triggers. In those databases, the action in the BEFORE trigger happens before the data modification statement, which then also executes. INSTEAD OF triggers in SQL Server are different from the BEFORE triggers that you may have encountered in other database engines. With an INSTEAD OF trigger as implemented in SQL Server, only the code in the trigger is executed; the original operation that caused the trigger to fire is not executed.

Updatable Views
You can define INSTEAD OF triggers on views that have one or more base tables, where they can extend
the types of updates that a view can support.

This trigger executes instead of the original triggering action. INSTEAD OF triggers increase the variety of
types of updates that you can perform against a view. Each table or view is limited to one INSTEAD OF
trigger for each triggering action (INSERT, UPDATE, or DELETE).

You can specify an INSTEAD OF trigger on both tables and views. You cannot create an INSTEAD OF
trigger on views that have the WITH CHECK OPTION clause defined. You can perform operations on the
base tables within the trigger. This avoids the trigger being called again. For example, you could perform
a set of checks before inserting data, and then perform the insert on the base table.

INSTEAD OF Trigger
CREATE TRIGGER TR_ProductReview_Delete
ON Production.ProductReview
INSTEAD OF DELETE AS
BEGIN
    SET NOCOUNT ON;
    UPDATE pr SET pr.ModifiedDate = SYSDATETIME()
    FROM Production.ProductReview AS pr
    INNER JOIN deleted AS d
        ON pr.ProductReviewID = d.ProductReviewID;
END;
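A simple way to observe the effect of this trigger (a sketch; run it only against a test copy of the data) is to issue a DELETE and then confirm that the row still exists with an updated ModifiedDate:

Testing the INSTEAD OF DELETE Trigger

-- The INSTEAD OF trigger intercepts this DELETE; no row is removed.
DELETE FROM Production.ProductReview
WHERE ProductReviewID = 1;

-- The row remains, with ModifiedDate set to the time of the attempted delete.
SELECT ProductReviewID, ModifiedDate
FROM Production.ProductReview
WHERE ProductReviewID = 1;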

Demonstration: Working with INSTEAD OF Triggers


In this demonstration, you will see how to:

 Create and test an INSTEAD OF DELETE trigger.

Demonstration Steps
Create and Test an INSTEAD OF DELETE Trigger
1. In Solution Explorer, in the Queries folder, double-click the 31 - Demonstration 3A.sql script file to open it.

2. Select the code under the Step A comment, and then click Execute.

3. Select the code under the Step B comment, and then click Execute.

4. Select the code under the Step C comment, and then click Execute.

5. Select the code under the Step D comment, and then click Execute.

6. Select the code under the Step E comment, and then click Execute.
7. Select the code under the Step F comment, and then click Execute.

8. Select the code under the Step G comment, and then click Execute.

9. Select the code under the Step H comment, and then click Execute.

10. Select the code under the Step I comment, and then click Execute.

11. Select the code under the Step J comment, and then click Execute.

12. Select the code under the Step K comment, and then click Execute.
13. Select the code under the Step L comment, and then click Execute. Note the error message.

14. Select the code under the Step M comment, and then click Execute. Note the error message.

15. Select the code under the Step N comment, and then click Execute.
16. Select the code under the Step O comment, and then click Execute.

17. Select the code under the Step P comment, and then click Execute.

18. Select the code under the Step R comment, and then click Execute.

19. Select the code under the Step S comment, and then click Execute.

20. Select the code under the Step U comment, and then click Execute.

21. Select the code under the Step V comment, and then click Execute.

22. Do not close SQL Server Management Studio.

How Nested Triggers Work


Triggers can contain UPDATE, INSERT, or DELETE
statements. When these statements on one table
cause triggers on another table to fire, the triggers
are considered to be nested.

Triggers are often used for auditing purposes. Nested triggers are essential for full auditing to occur; otherwise, actions would occur on tables without being audited.

You can control whether nested trigger actions are permitted. By default, they are permitted; you can change this by using the nested triggers configuration option at the server level. You can also detect the current nesting level by querying @@NESTLEVEL.

A failure at any level of a set of nested triggers cancels the entire original statement, and all data
modifications are rolled back.

A nested trigger will not fire twice in the same trigger transaction; a trigger does not call itself in response
to a second update to the same table within the trigger.
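As a sketch of the server-level configuration mentioned above, the following statements disable and re-enable nested trigger execution (the setting applies to the whole instance, so change it with care):

Configuring the nested triggers Option

-- Disable nested trigger execution at the instance level.
EXEC sp_configure 'nested triggers', 0;
RECONFIGURE;

-- Re-enable nested trigger execution (the default).
EXEC sp_configure 'nested triggers', 1;
RECONFIGURE;

-- Inside a trigger body, @@NESTLEVEL returns the current nesting depth.
SELECT @@NESTLEVEL;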

Complexity of Debugging
We noted in an earlier lesson that debugging triggers can sometimes be difficult. Nested triggers are
particularly difficult to debug. One common method that is used during debugging is to include PRINT
statements within the body of the trigger code so that you can determine where a failure occurred.
However, you should make sure these statements are only used during debugging phases.

Considerations for Recursive Triggers


A recursive trigger is a trigger that performs an action that causes the same trigger to fire again, either directly or indirectly. Any trigger can contain an UPDATE, INSERT, or DELETE statement that affects the same table or another table. By switching on the RECURSIVE_TRIGGERS option on a database, a trigger that changes data in a table can start itself again, in a recursive execution.

Direct Recursion
Direct recursion occurs when a trigger fires and
performs an action on the same table that causes
the same trigger to fire again.

For example, an application updates table T1, which causes trigger Trig1 to fire. Trigger Trig1 updates
table T1 again, which causes trigger Trig1 to fire again.

Indirect Recursion
Indirect recursion occurs when a trigger fires and performs an action that causes another trigger to fire on
a different table which, in turn, causes an update to occur on the original table which, in turn, causes the
original trigger to fire again.

For example, an application updates table T2, which causes trigger Trig2 to fire. Trig2 updates table T3,
which causes trigger Trig3 to fire. In turn, trigger Trig3 updates table T2, which causes trigger Trig2 to
fire again.

To prevent indirect recursion of this sort, turn off the nested triggers option at the server instance level.

Recursive Triggers Considerations


The following list provides considerations for recursive triggers:

 Careful design and thorough testing are required to ensure that the 32-level nesting limit is not exceeded.

 It can be difficult to control the order of table updates.

 Recursive logic can usually be replaced by nonrecursive logic.

 The RECURSIVE_TRIGGERS option only affects direct recursion.
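The RECURSIVE_TRIGGERS option is set per database, as the following sketch shows (MyDB is a placeholder database name):

Setting the RECURSIVE_TRIGGERS Option

-- Allow a trigger to fire itself again directly (direct recursion).
ALTER DATABASE MyDB SET RECURSIVE_TRIGGERS ON;

-- The default: direct recursion is suppressed.
ALTER DATABASE MyDB SET RECURSIVE_TRIGGERS OFF;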

UPDATE Function
It is a common requirement to build logic that
only takes action if particular columns are being
updated.
You can use the UPDATE function to detect whether a particular column is being updated in the action of an UPDATE statement. For example, you might want to take a particular action only when the size of a product changes. The column is referenced by the name of the column. The UPDATE function can be used in AFTER INSERT and AFTER UPDATE triggers.

Note: Function or Statement? Be careful not to confuse the UPDATE function with the UPDATE statement.

Change of Value
Note that the UPDATE function does not indicate if the value is actually changing. It only indicates if the
column is part of the list of columns in the SET clause of the UPDATE statement. To detect if the value in a
column is actually being changed to a different value, you must interrogate the inserted and deleted
virtual tables.

COLUMNS_UPDATED Function
SQL Server also provides a function called COLUMNS_UPDATED. This function returns a bitmap that
indicates which columns are being updated. The values in the bitmap depend upon the positional
information for the columns. Hard-coding that sort of information in the code within a trigger is generally
not considered good coding practice because it affects the readability—and therefore the
maintainability—of your code. It also reduces the reliability of your code because schema changes to the
table could break it.

In this example, the trigger uses the UPDATE function to identify updates to a particular column:

Update Function
CREATE TRIGGER TR_Product_Update_ListPriceAudit
ON Production.Product AFTER UPDATE AS
BEGIN
    IF UPDATE(ListPrice)
    BEGIN
        INSERT INTO Production.ListPriceAudit (ProductID, ListPrice, ChangedWhen)
        SELECT i.ProductID, i.ListPrice, SYSDATETIME()
        FROM inserted AS i;
    END;
END;

Firing Order for Triggers


You can assign multiple triggers to a single event
on a single object. Only limited control is available
over the firing order of these triggers.

sp_settriggerorder
Developers often want to control the firing order
of multiple triggers that are defined for a single
event on a single object. For example, a developer
might create three AFTER INSERT triggers on the
same table, each implementing different business
rules or administrative tasks.
In general, code within one trigger should not
depend upon the order of execution of other triggers. Limited control of firing order is available through
the sp_settriggerorder system stored procedure. With sp_settriggerorder, you can specify the triggers
that will fire first and last from a set of triggers that all apply to the same event, on the same object.

The possible values for the @order parameter are First, Last, or None. None is the default action. An
error will occur if the First and Last triggers both refer to the same trigger.

For DML triggers, the possible values for the @stmttype parameter are INSERT, UPDATE, or DELETE.

Use sp_settriggerorder to set the order in which a trigger fires.

sp_settriggerorder
EXEC sp_settriggerorder
    @triggername = 'Production.TR_Product_Update_ListPriceAudit',
    @order = 'First',
    @stmttype = 'UPDATE';

Alternatives to Triggers
Triggers are useful in many situations, and are
sometimes necessary to handle complex logic.
However, triggers are sometimes used in situations
where alternatives might be preferable.

Checking Values
You could use triggers to check that values in
columns are valid or within given ranges. However,
in general, you should use CHECK constraints
instead—CHECK constraints perform this
validation before the data modification is
attempted.

If you are using triggers to check the correlation of values across multiple columns within a table, you
should generally create table-level CHECK constraints instead.
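For example, a range check that might otherwise be implemented in a trigger can usually be expressed as a CHECK constraint. The constraint name below is illustrative:

Using a CHECK Constraint Instead of a Trigger

ALTER TABLE Production.Product
ADD CONSTRAINT CK_Product_ListPrice_NonNegative CHECK (ListPrice >= 0);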

Defaults
You can use triggers to provide default values for columns when no values have been provided in INSERT
statements. However, you should generally use DEFAULT constraints for this instead.

Foreign Keys
You can use triggers to check the relationship between tables. However, you should generally use
FOREIGN KEY constraints for this.

Computed Columns
You can use triggers to maintain the value in one column based on the value in other columns. In general,
you should use computed columns or persisted computed columns for this.
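For example, instead of a trigger that maintains a derived total, the total can be declared as a computed column. The dbo.SellingPrice table and its columns below are illustrative (and assume that ExtendedAmount does not already exist as a regular column):

Using a Computed Column Instead of a Trigger

ALTER TABLE dbo.SellingPrice
ADD ExtendedAmount AS (SubTotal + TaxAmount + FreightAmount) PERSISTED;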

Precalculating Aggregates
You can use triggers to maintain precalculated aggregates in one table, based on the values in rows in
another table. In general, you should use indexed views to provide this functionality.

Suitable Situations for Using Triggers


Although general guidelines are provided here, replacing the triggers with these alternatives is not always
possible. For example, the logic that is required when checking values might be too complex for a CHECK
constraint.

As another example, a FOREIGN KEY constraint cannot be enforced conditionally on a column that is also used for other purposes. Consider a column that holds an employee number only if another column holds the value “E”. This typically indicates a poor database design, but you can use triggers to enforce this sort of relationship.

Demonstration: Replacing Triggers with Computed Columns


In this demonstration, you will see:

 How to replace a trigger with a computed column.

Demonstration Steps
Replace a Trigger with a Computed Column

1. In Solution Explorer, in the Queries folder, double-click the 32 - Demonstration 3B.sql script file to open it.

2. Select the code under the Step A comment, and then click Execute.

3. Select the code under the Step B comment, and then click Execute.

4. Select the code under the Step C comment, and then click Execute.

5. Select the code under the Step D comment, and then click Execute.

6. Select the code under the Step E comment, and then click Execute.

7. Select the code under the Step F comment, and then click Execute.
8. Select the code under the Step G comment, and then click Execute.

9. Close SQL Server Management Studio, without saving any changes.



Check Your Knowledge

Question

What are the four missing terms from this statement, indicated by <XXX>?

<XXX>
These triggers are most commonly used to enable views that are based on multiple base tables to be updatable. You can define <XXX> triggers on views that have one or more base tables, where they can extend the types of updates that a view can support.

This trigger executes instead of the original triggering action. <XXX> triggers increase the variety of types of updates that you can perform against a view. Each table or view is limited to one <XXX> trigger for each triggering action (<XXX>).

You can specify an <XXX> trigger on both tables and views. You cannot create an <XXX> trigger on views that have the <XXX> clause defined. You can perform operations on the base tables within the trigger. This avoids the trigger being called again. For example, you could perform a set of checks before inserting data, and then perform the insert on the base table.

Select the correct answer.

How Nested Triggers Work; INSTEAD OF; DELETE; CHECK OPTION

Updatable Views; AFTER INSERT; INSERT, UPDATE, or DELETE; NO ROWCOUNT

Updatable Views; INSTEAD OF; DML or DDL; CHECK OPTION

Updatable Views; RATHER THAN; UPDATE; CHECK OPTION

Updatable Views; INSTEAD OF; INSERT, UPDATE, or DELETE; CHECK OPTION

Lab: Responding to Data Manipulation by Using Triggers


Scenario
You are required to audit any changes to data in a table that contains sensitive balance data. You have
decided to implement this by using DML triggers because the SQL Server Audit mechanism does not
provide directly for the requirements in this case.

Supporting Documentation
The Production.ProductAudit table is used to hold changes to high value products. The data to be
inserted in each column is shown in the following table:

Column Data type Value to insert

AuditID int IDENTITY

ProductID int ProductID

UpdateTime datetime2 SYSDATETIME()

ModifyingUser varchar(30) ORIGINAL_LOGIN()

OriginalListPrice decimal(18,2) ListPrice before update

NewListPrice decimal(18,2) ListPrice after update

Objectives
After completing this lab, you will be able to:
 Create triggers

 Modify triggers

 Test triggers

Estimated Time: 30 minutes

Virtual Machine: 20762C-MIA-SQL

User name: ADVENTUREWORKS\Student

Password: Pa55w.rd

Exercise 1: Create and Test the Audit Trigger


Scenario
The Production.Product table includes a column called ListPrice. Whenever an update is made to the table, if either the existing list price or the new list price is greater than 1,000 US dollars, an entry must be written to the Production.ProductAudit audit table.

Note: Inserts or deletes on the table do not have to be audited. Details of the current user
can be taken from the ORIGINAL_LOGIN() function.

The main tasks for this exercise are as follows:

1. Prepare the Lab Environment

2. Review the Design Requirements

3. Design a Trigger

4. Test the Behavior of the Trigger

 Task 1: Prepare the Lab Environment


1. Ensure that the 20762C-MIA-DC and 20762C-MIA-SQL virtual machines are both running, and then
log on to 20762C-MIA-SQL as ADVENTUREWORKS\Student with the password Pa55w.rd.

2. In the D:\Labfiles\Lab11\Starter folder, right-click Setup.cmd, and then click Run as


administrator.

 Task 2: Review the Design Requirements


1. In the D:\Labfiles\Lab11\Starter folder, open the Supporting Documentation.docx.

2. In SQL Server Management Studio, review the existing structure of the Production.ProductAudit
table and the values required in each column, based on the supporting documentation.
3. Review the existing structure of the Production.Product table in SSMS.

 Task 3: Design a Trigger


 Design and create a trigger that meets the needs of the supporting documentation.

 Task 4: Test the Behavior of the Trigger


 Execute data modification statements that are designed to test whether the trigger is working as
expected.

Results: After this exercise, you should have created a new trigger. Tests should have shown that it is
working as expected.

Exercise 2: Improve the Audit Trigger


Scenario
Now that the trigger created in the first exercise has been deployed to production, the operations team is complaining that too many entries are being audited. Many products have a list price of more than 10,000 US dollars, and minor price changes are causing audit entries. You must modify the trigger so that only changes in the list price of more than 10,000 US dollars are audited instead.

The main tasks for this exercise are as follows:

1. Modify the Trigger

2. Delete all Rows from the Production.ProductAudit Table

3. Test the Modified Trigger

 Task 1: Modify the Trigger


1. Review the design of the existing trigger and decide what modifications are required.
2. Use an ALTER TRIGGER statement to change the existing trigger so that it will meet the updated
requirements.

 Task 2: Delete all Rows from the Production.ProductAudit Table

 Execute a DELETE statement to remove all existing rows from the Production.ProductAudit table.

 Task 3: Test the Modified Trigger


1. Execute data modification statements that are designed to test whether the trigger is working as
expected.

2. Close SQL Server Management Studio without saving anything.

Results: After this exercise, you should have altered the trigger. Tests should show that it is now working
as expected.

Module Review and Takeaways


Best Practice: In many business scenarios, it makes sense to mark records as deleted with a status
column and use a trigger or stored procedure to update an audit trail table. The changes can then be
audited, the data is not lost, and the IT staff can perform purges or archival of the deleted records.

Avoid using triggers in situations where constraints could be used instead.

Review Question(s)
Question: How do constraints and triggers differ regarding timing of execution?

Module 12
Using In-Memory Tables
Contents:
Module Overview 12-1
Lesson 1: Memory-Optimized Tables 12-2

Lesson 2: Natively Compiled Stored Procedures 12-11

Lab: Using In-Memory Database Capabilities 12-16


Module Review and Takeaways 12-19

Module Overview
Microsoft® SQL Server® 2014 data management software introduced in-memory online transaction processing (OLTP) functionality to improve the performance of OLTP workloads. Subsequent versions of SQL Server add several enhancements, such as the ability to alter a memory-optimized table without recreating it. Memory-optimized tables are stored primarily in memory, which improves performance by reducing hard disk access.

Natively compiled stored procedures further improve performance over traditional interpreted Transact-SQL.

Objectives
After completing this module, you will be able to:
 Use memory-optimized tables to improve performance for latch-bound workloads.

 Use natively compiled stored procedures.



Lesson 1
Memory-Optimized Tables
You can use memory-optimized tables as a way to improve the performance of latch-bound OLTP
workloads. Memory-optimized tables are stored in memory, and do not use locks to enforce concurrency
isolation. This dramatically improves performance for many OLTP workloads.

Lesson Objectives
After completing this lesson, you will be able to:

 Describe the key features of memory-optimized tables.

 Describe scenarios for memory-optimized tables.

 Add a filegroup for memory-optimized tables.

 Create memory-optimized tables.

 Use indexes in memory-optimized tables.


 Use the Memory-Optimization Advisor.

 Query memory-optimized tables.

What Are Memory-Optimized Tables?


Memory-optimized tables in SQL Server are
defined as C structs and compiled as dynamic-link
libraries (DLLs) that can be loaded into memory.
The query processor in SQL Server transparently
converts Transact-SQL queries against memory-
optimized tables into the appropriate C calls, so
you use them just like any other table in a SQL
Server database.
Memory-optimized tables:

 Are defined as C structs, compiled into DLLs, and loaded into memory.

 Can persist their data to disk as FILESTREAM data, or they can be nondurable.

 Do not apply any locking semantics during transactional data modifications.

 Contain at least one index.


 Can coexist with disk-based tables in the same database.

 Can be queried by using Transact-SQL through interop services that the SQL Server query processor
provides.

Supported Data Types and Features

Most data types are supported in memory-optimized tables. However, some are not supported, including text, ntext, and image.

For more information about the data types that memory-optimized tables support, see the topic
Supported Data Types for In-Memory OLTP in Microsoft Docs:
Supported Data Types for In-Memory OLTP
http://aka.ms/jf3ob7

Memory-optimized tables support most features that disk-based tables support.

For information about features that are not supported, see the “Memory-Optimized Tables” section of the
topic Transact SQL Constructs Not Supported by In-Memory OLTP in Microsoft Docs:

Transact-SQL Constructs Not Supported by In-Memory OLTP


https://aka.ms/Uckvs9

Scenarios for Memory-Optimized Tables


Memory-optimized tables provide some
performance benefits by storing data in the
memory and reducing disk I/O. However, SQL
Server uses caching to optimize queries that
access commonly used data anyway, so the gains
from in-memory storage may not be significant
for some tables. The primary feature of memory-
optimized tables that improves database
performance is the lack of any locking to manage
transaction isolation. Memory-optimized tables
are therefore likely to be of most benefit when
you need to optimize performance for latch-
bound workloads that support concurrent access to the same tables.

Common Scenarios for Memory-Optimized Tables


Common latch-bound scenarios include OLTP workloads in which:

 Multiple concurrent queries modify large numbers of rows in a transaction.

 A table contains “hot” pages. For example, a table that contains a clustered index on an incrementing
key value will inherently suffer from concurrency issues because all insert transactions occur in the last
page of the index.

Considerations for Memory-Optimized Table Concurrency


When you update data in memory-optimized tables, SQL Server uses an optimistic concurrency row-
versioning mechanism to track changes to rows, so that the values in a row at a specific time are known.
The in-memory nature of memory-optimized tables means that data modifications occur extremely
quickly and conflicts are relatively rare. However, if a conflict error is detected, the transaction in which
the error occurred is terminated. You should therefore design applications to handle concurrency conflict
errors in a similar way to handling deadlock conditions.

Concurrency errors that can occur in memory-optimized tables include:


 Write conflicts. These occur when an attempt is made to update or delete a record that has been
updated since the transaction began.

 Repeatable read validation failures. These occur when a row that the transaction has read has
changed since the transaction began.
 Serializable validation failures. These occur when a new (or phantom) row is inserted into the range
of rows that the transaction accesses while it is still in progress.

 Commit dependency failures. These occur when a transaction has a dependency on another
transaction that has failed to commit.

Creating a Filegroup for Memory-Optimized Data


Databases in which you want to create memory-
optimized tables must contain a filegroup for
memory-optimized data.

You can add a filegroup for memory-optimized data to a database by using the ALTER DATABASE statement, as the following example shows:

Adding a Filegroup for Memory-Optimized Data

ALTER DATABASE MyDB
ADD FILEGROUP mem_data CONTAINS MEMORY_OPTIMIZED_DATA;
GO
ALTER DATABASE MyDB
ADD FILE (NAME = 'MemData', FILENAME = 'D:\Data\MyDB_MemData.ndf')
TO FILEGROUP mem_data;

You can also add a filegroup for memory-optimized data to a database on the Filegroups page of the
Database Properties dialog box in SQL Server Management Studio (SSMS).

Creating Memory-Optimized Tables


Use the CREATE TABLE statement with the
MEMORY_OPTIMIZED option to create a memory-
optimized table.

Durability
When you create a memory-optimized table, you
can specify the durability of the table data.

By default, the durability option is set to SCHEMA_AND_DATA. For SCHEMA_AND_DATA durability, the data in the table is persisted to FILESTREAM data in the memory-optimized filegroup on which the table is created. The data is written to disk as a stream, not in 8-KB pages as used by disk-based tables.

You can also specify a durability of SCHEMA_ONLY so that only the table definition is persisted. Any data
in the table will be lost in the event of the database server shutting down. The ability to set the durability
option to SCHEMA_ONLY is useful when the table is used for transient data, such as a session state table
in a web server farm.

Primary Keys
All tables that have a durability option of SCHEMA_AND_DATA must include a primary key. You can
specify this inline for single-column primary keys, or you can specify it after all of the column definitions.
Memory-optimized tables do not support clustered primary keys. You must specify the word
NONCLUSTERED when declaring the primary key.

To create a memory-optimized table, execute a CREATE TABLE statement that has the
MEMORY_OPTIMIZED option set to ON, as shown in the following example:

Creating a Memory-Optimized Table


CREATE TABLE dbo.MemoryTable
(OrderId INTEGER NOT NULL PRIMARY KEY NONCLUSTERED,
OrderDate DATETIME NOT NULL,
ProductCode INTEGER NULL,
Quantity INTEGER NULL)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);

To create a memory-optimized table that has a composite primary key, you must specify the PRIMARY
KEY constraint after the column definitions, as shown in the following example:

Creating a Memory-Optimized Table That Has a Composite Primary Key


CREATE TABLE dbo.MemoryTable2
(OrderId INTEGER NOT NULL,
LineItem INTEGER NOT NULL,
OrderDate DATETIME NOT NULL,
ProductCode INTEGER NULL,
Quantity INTEGER NOT NULL,
PRIMARY KEY NONCLUSTERED (OrderID, LineItem))
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);
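For transient data, such as the session state scenario mentioned earlier, you can create a nondurable table by setting DURABILITY to SCHEMA_ONLY. The following is a minimal sketch with illustrative names:

Creating a Nondurable Memory-Optimized Table

CREATE TABLE dbo.SessionState
(SessionID INTEGER NOT NULL PRIMARY KEY NONCLUSTERED,
LastAccess DATETIME NOT NULL,
SessionData VARBINARY(8000) NULL)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_ONLY);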

Indexes in Memory-Optimized Tables


Memory-optimized tables support three kinds of indexes: nonclustered hash indexes, nonclustered indexes, and columnstore indexes.

The most relevant use of columnstore indexes on


memory-optimized tables is in real-time
operational analytical processing, which is beyond
the scope of this module.
For more information about columnstore indexes
in real-time operational analytical processing see
the topic Get Started with Columnstore for Real-
time Operational Analytics in Microsoft Docs:

Get started with Columnstore for real time operational analytics


http://aka.ms/n7pu2i

When you create a memory-optimized table that has a primary key, an index will be created for the
primary key. You can create up to seven other indexes in addition to the primary key. All memory-
optimized tables must include at least one index, which can be the index that was created for the primary
key.

Nonclustered Hash Indexes


Nonclustered hash indexes (also known simply as hash indexes) are optimized for equality seeks, but do
not support range scans or ordered scans. Any queries that contain inequality operators such as <, >, and
BETWEEN would not benefit from a hash index.

When you create a hash index for a primary key, or in addition to a primary key index, you must specify
the bucket count. Buckets are storage locations in which rows are stored. You apply an algorithm to the
indexed key values to determine the bucket in which the row is stored. When a bucket contains multiple
rows, a linked list is created by adding a pointer in the first row to the second row, then in the second row
to the third row, and so on.

To create a hash index, you must specify the BUCKET_COUNT value, as shown in the following example:

Creating a Table-Level Primary Key Hash Index in a Memory-Optimized Table


CREATE TABLE dbo.MemoryTable3
(OrderId INTEGER NOT NULL,
LineItem INTEGER NOT NULL,
ProductCode INTEGER NOT NULL,
PRIMARY KEY NONCLUSTERED HASH (OrderID, LineItem) WITH (BUCKET_COUNT = 1000000))
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);

Note: If there is more than one column in the key of the hash index, the WHERE clause of any query that uses the index must include equality tests for all columns in the key. Otherwise, the query plan will have to scan the whole table. When querying the table that was created in the preceding example, the WHERE clause should include equality tests for both the OrderId and LineItem columns.

You can create hash indexes, in addition to the primary key, by specifying the indexes after the column
definitions, as shown in the following example:

Creating a Hash Index in Addition to the Primary Key in a Memory-Optimized Table


CREATE TABLE dbo.IndexedMemoryTable
(OrderId INTEGER NOT NULL,
LineItem INTEGER NOT NULL,
OrderDate DATETIME NOT NULL,
ProductCode INTEGER NULL,
Quantity INTEGER NOT NULL,
PRIMARY KEY NONCLUSTERED HASH (OrderID, LineItem) WITH (BUCKET_COUNT = 1000000),
INDEX idx_MemTab_ProductCode NONCLUSTERED HASH(ProductCode) WITH (BUCKET_COUNT = 1000000)
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);

Nonclustered Indexes
Nonclustered indexes (also known as range indexes) use a latch-free variation of a binary tree (b-tree)
structure, called a “BW-tree,” to organize the rows based on key values. Nonclustered indexes support
equality seeks, range scans, and ordered scans.

You can create nonclustered indexes, in addition to the primary key, by specifying the indexes after the
column definitions, as shown in the following example:

Creating a Nonclustered Index in Addition to the Primary Key in a Memory-Optimized Table


CREATE TABLE dbo.IndexedMemoryTable2
(OrderId INTEGER NOT NULL PRIMARY KEY NONCLUSTERED,
OrderDate DATETIME NOT NULL,
ProductCode INTEGER NULL,
Quantity INTEGER NULL,
INDEX idx_MemTab_ProductCode2 NONCLUSTERED (ProductCode))
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);

Note: SQL Server 2017 has improved the performance of nonclustered indexes on
memory-optimized tables, thereby reducing the time required to recover a database.

Deciding Which Type of Index to Use


Nonclustered indexes benefit from a wider range of query operations. You should use a nonclustered
index if any of the following scenarios apply:
 You might need to query the column or columns in the index key, using predicates with inequality operators such as <, >, or BETWEEN.

 The index key contains more than one column, and you might use queries with predicates that do not
apply to all of the columns.

 The index requires a sort order.

If you are sure that none of the above scenarios apply, you could consider using a hash index to optimize
equality seeks.

Note: You can have hash indexes and nonclustered indexes in the same table. Prior to SQL
Server 2017, there was a limit of eight indexes including the primary key for memory-optimized
tables. This limitation has been removed.

Converting Tables with Memory Optimization Advisor


Memory Optimization Advisor will review your
existing disk-based tables and run through a
checklist to verify whether your environment and
the specific tables are suitable for you to convert
the tables to memory-optimized tables.
Memory Optimization Advisor takes you through
the following steps:

1. Memory Optimization Checklist

This step reports on any features of your disk-based tables that are not supported in memory-optimized tables.

2. Memory Optimization Warnings


Memory optimization warnings do not prevent a disk-based table from being migrated to a memory-
optimized table, or stop the table from functioning after it has been converted—but the warnings will
list any other associated objects, such as stored procedures, that might not function correctly after
migration.
3. Review Optimization Options

You can now specify options such as the filegroup; the new name for the original, unmigrated, disk-
based table; and whether to transfer the data from the original table to the new memory-optimized
table.

4. Review Primary Key Conversion

If you are migrating to a durable table, you must specify a primary key or create a new primary key at
this stage. You can also specify whether the index should be a hash index or not.

5. Review Index Conversion

This step gives you the same options as primary key migration for each of the indexes on the table.

6. Verify Migration Actions

This step lists the options that you have specified in the previous stages and enables you to migrate
the table, or to create a script to migrate the table at a subsequent time.
To start Memory Optimization Advisor, in SQL Server Management Studio, right-click a table in Object
Explorer, and then select Memory Optimization Advisor.

Note: The Memory Optimization Advisor steps depend on the table. Actual pages may vary
from those described above.

Querying Memory-Optimized Tables


When a database contains memory-optimized
tables, there are two methods by which the tables
can be queried. These methods involve using
interpreted Transact-SQL and using natively
compiled stored procedures.

Interpreted Transact-SQL
Transact-SQL in queries and stored procedures
(other than native stored procedures) is referred to
as interpreted Transact-SQL. You can use
interpreted Transact-SQL statements to access
memory-optimized tables in the same way as
traditional disk-based tables. The SQL Server query
engine provides an interop layer that does the necessary interpretation to query the compiled in-memory
table. You can use this technique to create queries that access both memory-optimized tables and disk-
based tables—for example, by using a JOIN clause. When you access memory-optimized tables, you can
use most of the Transact-SQL operations that you use when accessing disk-based tables.

For more information about the Transact-SQL operations that are not possible when you access memory-
optimized tables, see the topic Accessing Memory-Optimized Tables Using Interpreted Transact-SQL in
Microsoft Docs:

Accessing Memory-Optimized Tables Using Interpreted Transact-SQL


http://aka.ms/wycxxr
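Because interpreted Transact-SQL can reference memory-optimized and disk-based tables in the same statement, queries such as the following sketch are possible. Here, dbo.MemoryTable is the memory-optimized table created earlier in this lesson, and dbo.ProductNames is an illustrative disk-based table:

Joining a Memory-Optimized Table to a Disk-Based Table

SELECT m.OrderId, m.Quantity, p.ProductName
FROM dbo.MemoryTable AS m
INNER JOIN dbo.ProductNames AS p
    ON p.ProductCode = m.ProductCode;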

Natively Compiled Stored Procedures


You can increase the performance of workloads that use memory-optimized tables further by creating
natively compiled stored procedures. You can define these by using CREATE PROCEDURE statements that
the SQL Server query engine converts to native C code. The C version of the stored procedure is compiled
into a DLL, which is loaded into memory. You can only use natively compiled stored procedures to access
memory-optimized tables; they cannot reference disk-based tables. You will see how to use natively
compiled stored procedures in the next lesson.

Demonstration: Using Memory-Optimized Tables


In this demonstration, you will see how to:

 Create a database with a filegroup for memory-optimized data.


 Use memory-optimized tables.

Demonstration Steps
Create a Database with a Filegroup for Memory-Optimized Data
1. Ensure that the 20762C-MIA-DC and 20762C-MIA-SQL virtual machines are running, and then log
on to 20762C-MIA-SQL as ADVENTUREWORKS\Student with the password Pa55w.rd

2. In the D:\Demofiles\Mod12 folder, right-click Setup.cmd, and then click Run as administrator.

3. In the User Account Control dialog box, click Yes.

4. Start SQL Server Management Studio, and then connect to the MIA-SQL database engine instance by
using Windows authentication.

5. In Object Explorer, under MIA-SQL, right-click Databases, and then click New Database.

6. In the New Database dialog box, in the Database name box, type MemDemo.

7. On the Filegroups page, in the MEMORY OPTIMIZED DATA section, click Add Filegroup.

8. In the Name box, type MemFG. Note that the filegroups in this section are used to contain
FILESTREAM files because memory-optimized tables are persisted as streams.

9. On the General page, click Add to add a database file. Then add a new file that has the following
properties:

o Logical Name: MemData

o File Type: FILESTREAM Data

o Filegroup: MemFG

10. In the Script drop-down list, click Script Action to New Query Window.

11. In the New Database dialog box, click Cancel to view the script file that has been generated.

12. Review the script, noting the syntax that has been used to create a filegroup for memory-optimized
data. You can use similar syntax to add a filegroup to an existing database.
13. Click Execute to create the database.

Use Memory-Optimized Tables

1. On the File menu, point to Open, and then click Project/Solution.

2. In the Open Project dialog box, navigate to D:\Demofiles\Mod12\Demo12.ssmssln, and then click
Open.

3. In Solution Explorer, expand Queries, and then double-click 11 - Demonstration 1A.sql.

4. Select the code under Step 1 - Create a memory-optimized table, and then click Execute.

5. Select the code under Step 2 - Create a disk-based table, and then click Execute.

6. Select the code under Step 3 - Insert 500,000 rows into DiskTable, and then click Execute.

This code uses a transaction to insert rows into the disk-based table.

7. When code execution is complete, look at the lower right of the query editor status bar, and note
how long it has taken.

8. Select the code under Step 4 - Verify DiskTable contents, and then click Execute.

9. Confirm that the table now contains 500,000 rows.

10. Select the code under Step 5 - Insert 500,000 rows into MemoryTable, and then click Execute.
This code uses a transaction to insert rows into the memory-optimized table.

11. When code execution is complete, look at the lower right of the query editor status bar and note how
long it has taken. It should be significantly lower than the time that it takes to insert data into the
disk-based table.

12. Select the code under Step 6 - Verify MemoryTable contents, and then click Execute.

13. Confirm that the table now contains 500,000 rows.


14. Select the code under Step 7 - Delete rows from DiskTable, and then click Execute.

15. Note how long it has taken for this code to execute.

16. Select the code under Step 8 - Delete rows from MemoryTable, and then click Execute.
17. Note how long it has taken for this code to execute. It should be significantly lower than the time that
it takes to delete rows from the disk-based table.

18. Select the code under Step 9 - View memory-optimized table stats, and then click Execute.

19. Close SQL Server Management Studio, without saving any changes.

Question: You are creating an index for a date column in a memory-optimized table. What
is likely to be the most suitable type of index? Explain your reasons.

Lesson 2
Natively Compiled Stored Procedures
Natively compiled stored procedures are stored procedures that are compiled into native code. They are
written in traditional Transact-SQL code, but are compiled when they are created rather than when they
are executed, which improves performance.

Lesson Objectives
After completing this lesson, you will be able to:

 Describe the key features of natively compiled stored procedures.

 Create natively compiled stored procedures.

What Are Natively Compiled Stored Procedures?


Natively compiled stored procedures are written in
Transact-SQL, but are then compiled into native
code when they are created. This differs from
traditional disk-based stored procedures (also
known as interpreted stored procedures), which
are compiled for the first time that they run.
Compiling at creation time can cause errors that
would not appear in an interpreted stored
procedure until it is executed.
Natively compiled stored procedures access
memory-optimized tables with greater speed and
efficiency than interpreted stored procedures.
Natively compiled stored procedures contain one block of Transact-SQL that is called an atomic block.
This block will either succeed or fail as a single unit. If a statement within a natively compiled stored
procedure fails, the entire block will be rolled back to what it was before the procedure was executed. As
you will see in the next topic, atomic blocks have three possible transaction isolation levels, which you
must specify when creating a native stored procedure. Atomic blocks are not available to interpreted
stored procedures.

For more information about natively compiled stored procedures, see the topic Natively Compiled Stored
Procedures in Microsoft Docs:

Natively Compiled Stored Procedures


http://aka.ms/lzfmaq

When to use Natively Compiled Stored Procedures


Natively compiled stored procedures can give
significant performance benefits, when used in the
right situations. For best results, consider using
natively compiled stored procedures when:

 Performance is critical. Natively compiled stored procedures work best in parts of an application that require fast processing.

 There are large numbers of rows to be processed. Natively compiled stored procedures give less benefit for single rowsets or when a small number of rows are returned.
 The stored procedure is called frequently. Because it is precompiled, you get a significant benefit for
frequently used stored procedures.

 Logic requires aggregation functions, nested-loop joins, multistatement selects, inserts, updates, and
deletes, or other complex expressions. They also work well with procedural logic—for example,
conditional statements and loop constructs.

Finally, for best results, do not use named parameters with natively compiled stored procedures. Instead, use ordinal parameters, where the parameters are referred to by position, as in the sketch below.
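For example, given a hypothetical natively compiled procedure dbo.InsertOrder that takes three parameters, the ordinal call avoids the cost of resolving parameter names:

Calling with Ordinal Rather Than Named Parameters

-- Preferred: parameters passed by position.
EXEC dbo.InsertOrder 10, 20, 30;

-- Works, but incurs extra overhead: parameters passed by name.
EXEC dbo.InsertOrder @OrderId = 10, @ProductCode = 20, @Quantity = 30;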

Note: Natively compiled stored procedures only work with in-memory tables.

For more information about when to use a natively compiled stored procedure, see Microsoft Docs:

Best Practices for Calling Natively Compiled Stored Procedures


https://aka.ms/alfha9

Creating Natively Compiled Stored Procedures


To create a natively compiled stored procedure,
you must use the CREATE PROCEDURE statement
with the following options:
 NATIVE_COMPILATION

 SCHEMABINDING

 EXECUTE AS

In addition to these options, you must initiate a


transaction in your stored procedure by using the
BEGIN ATOMIC clause, specifying the transaction
isolation level and language. You can specify one
of the following transaction isolation levels:

 SNAPSHOT. Using this isolation level, all data that the transaction reads is consistent with the version
that was stored at the start of the transaction. Data modifications that other, concurrent transactions
have made are not visible, and attempts to modify rows that other transactions have modified result
in an error.
 REPEATABLE READ. Using this isolation level, every read is repeatable until the end of the
transaction. If another, concurrent transaction has modified a row that the transaction had read, the
transaction will fail to commit due to a repeatable read validation error.
 SERIALIZABLE. Using this isolation level, all data is consistent with the version that was stored at the
start of the transaction, and repeatable reads are validated. In addition, the insertion of “phantom”
rows by other, concurrent transactions will cause the transaction to fail.
The following code example shows a CREATE PROCEDURE statement that is used to create a natively
compiled stored procedure:

Creating a Natively Compiled Stored Procedure


CREATE PROCEDURE dbo.DeleteCustomer @CustomerID INT
WITH NATIVE_COMPILATION, SCHEMABINDING, EXECUTE AS OWNER
AS
BEGIN ATOMIC WITH
(TRANSACTION ISOLATION LEVEL = SNAPSHOT,
LANGUAGE = 'us_English')
DELETE dbo.Customer WHERE CustomerID = @CustomerID
DELETE dbo.OpenOrders WHERE CustomerID = @CustomerID
END;

Some features are not supported in native stored procedures. For information about features that are not
supported, see the “Natively Compiled Stored Procedures and User-Defined Functions” section in the
topic Transact-SQL Constructs Not Supported by In-Memory OLTP in the SQL Server Technical
Documentation:

Transact-SQL Constructs Not Supported by In-Memory OLTP


https://aka.ms/I7lyio

Execution Statistics
Execution statistics for natively compiled stored procedures are not enabled by default. There is a small performance impact in collecting statistics, so you must explicitly enable the option when you need it. You can enable or disable the collection of statistics by using sys.sp_xtp_control_proc_exec_stats.

Once you have enabled the collection of statistics, you can monitor performance by using sys.dm_exec_procedure_stats.
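A minimal sketch of enabling collection instance-wide (run this as a member of the sysadmin role, and set the value back to 0 when you have finished troubleshooting):

Enabling Statistics Collection for Natively Compiled Stored Procedures

EXEC sys.sp_xtp_control_proc_exec_stats @new_collection_value = 1;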

Use a dynamic management view (DMV) to view the collected statistics after you have enabled statistics collection.

sys.dm_exec_procedure_stats
SELECT *
FROM sys.dm_exec_procedure_stats;

Note: Statistics are not automatically updated for in-memory tables. You must use the
UPDATE STATISTICS command to update specific tables or indexes, or the sp_updatestats to
update all the statistics.
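For example (a sketch that uses the dbo.MemoryTable table from earlier in this module):

Updating Statistics for Memory-Optimized Tables

-- Update statistics for a specific memory-optimized table.
UPDATE STATISTICS dbo.MemoryTable;

-- Or update the statistics for all tables in the current database.
EXEC sp_updatestats;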

Demonstration: Creating a Natively Compiled Stored Procedure


In this demonstration, you will see how to:
 Create a natively compiled stored procedure.

Demonstration Steps
Create a Natively Compiled Stored Procedure
1. Ensure that the 20762C-MIA-DC and 20762C-MIA-SQL virtual machines are running, and then log
on to 20762C-MIA-SQL as ADVENTUREWORKS\Student with the password Pa55w.rd

2. Ensure that you have run the previous demonstration.

3. Start SQL Server Management Studio, and then connect to the MIA-SQL database engine instance by
using Windows authentication.

4. On the File menu, point to Open, and then click Project/Solution.

5. In the Open Project dialog box, navigate to D:\Demofiles\Mod12\Demo12.ssmssln, and then click
Open.

6. In Solution Explorer, expand Queries, and then double-click 21 - Demonstration 2A.sql.


7. Select the code under Step 1 - Use the MemDemo database, and then click Execute.

8. Select the code under Step 2 - Create a native stored proc, and then click Execute.

9. Select the code under Step 3 - Use the native stored proc, and then click Execute.

10. Note how long it has taken for the stored procedure to execute. This should be significantly lower
than the time that it takes to insert data into the memory-optimized table by using a Transact-SQL
INSERT statement.

11. Select the code under Step 4 - Verify MemoryTable contents, and then click Execute.

12. Confirm that the table now contains 500,000 rows.

13. Close SQL Server Management Studio without saving any changes.

Verify the correctness of the statement by placing a mark in the column to the right.

Statement Answer

You are executing a native stored procedure that inserts a row into a memory-optimized table for customer data, and then inserts a row into a memory-optimized table for sales data. The row is inserted into the customer table successfully, but the statement to insert the row into the sales table fails, causing the procedure to return an error.

True or false? When you check the tables by running a SELECT query, the row that was successfully inserted will show in the customer table.

Lab: Using In-Memory Database Capabilities


Scenario
You are planning to optimize some database workloads by using the in-memory database capabilities of
SQL Server. You will create memory-optimized tables and natively compiled stored procedures to
optimize OLTP workloads.

Objectives
After completing this lab, you will be able to:

 Create a memory-optimized table.

 Create a natively compiled stored procedure.

Estimated Time: 45 minutes


Virtual machine: 20762C-MIA-SQL

User name: ADVENTUREWORKS\Student

Password: Pa55w.rd

Exercise 1: Using Memory-Optimized Tables


Scenario
The Adventure Works website, through which customers can order goods, uses the InternetSales
database. The database already includes tables for sales transactions, customers, and payment types. You
need to add a table to support shopping cart functionality. The shopping cart table will experience a high
volume of concurrent transactions, so, to maximize performance, you want to implement it as a memory-
optimized table.

The main tasks for this exercise are as follows:


1. Prepare the Lab Environment

2. Add a Filegroup for Memory-Optimized Data

3. Create a Memory-Optimized Table

 Task 1: Prepare the Lab Environment


1. Ensure that the 20762C-MIA-DC and 20762C-MIA-SQL virtual machines are both running, and then log on to 20762C-MIA-SQL as ADVENTUREWORKS\Student with the password Pa55w.rd.

2. In the D:\Labfiles\Lab12\Starter folder, right-click Setup.cmd, and then click Run as


administrator.

3. When you are prompted, click Yes to confirm that you want to run the command file, and then wait
for the script to finish.

 Task 2: Add a Filegroup for Memory-Optimized Data


1. Add a filegroup for memory-optimized data to the InternetSales database.

2. Add a file for memory-optimized data to the InternetSales database. You should store the file in the
filegroup that you created in the previous step.

 Task 3: Create a Memory-Optimized Table


1. Create a memory-optimized table named ShoppingCart in the InternetSales database, with the
durability option set to SCHEMA_AND_DATA.

2. The table should include the following columns:

o SessionID: integer

o TimeAdded: datetime

o CustomerKey: integer

o ProductKey: integer

o Quantity: integer

3. The table should include a composite primary key nonclustered index on the SessionID and
ProductKey columns.

4. To test the table, insert the following rows, and then write and execute a SELECT statement to return
all of the rows.

SessionID TimeAdded CustomerKey ProductKey Quantity

1 <Time> 2 3 1

1 <Time> 2 4 1

For <Time>, use whatever the current time is.

Results: After completing this exercise, you should have created a memory-optimized table in a database with a filegroup for memory-optimized data.

Exercise 2: Using Natively Compiled Stored Procedures


Scenario
The Adventure Works website now includes a memory-optimized table. You want to create a natively
compiled stored procedure to take full advantage of the performance benefits of in-memory tables.
The main tasks for this exercise are as follows:

1. Create Natively Compiled Stored Procedures

 Task 1: Create Natively Compiled Stored Procedures


1. Create a natively compiled stored procedure named AddItemToCart. The stored procedure should
include a parameter for each column in the ShoppingCart table, and should insert a row into the
ShoppingCart table by using a SNAPSHOT isolation transaction.
2. Create a natively compiled stored procedure named DeleteItemFromCart. The stored procedure
should include SessionID and ProductKey parameters, and should delete matching rows from the
ShoppingCart table by using a SNAPSHOT isolation transaction.
3. Create a natively compiled stored procedure named EmptyCart. The stored procedure should include a SessionID parameter, and should delete matching rows from the ShoppingCart table by using a SNAPSHOT isolation transaction.

4. To test the AddItemToCart procedure, write and execute a Transact-SQL statement that calls
AddItemToCart to add the following items, and then write and execute a SELECT statement to return
all of the rows in the ShoppingCart table.

SessionID TimeAdded CustomerKey ProductKey Quantity

1 <Time> 2 3 1

1 <Time> 2 4 1

3 <Time> 2 3 1

3 <Time> 2 4 1

For <Time>, use whatever the current time is.


5. To test the DeleteItemFromCart procedure, write and execute a Transact-SQL statement that calls
DeleteItemFromCart to delete any items where SessionID is equal to 3 and the product key is equal
to 4, and then write and execute a SELECT statement to return all of the rows in the ShoppingCart
table.

6. To test the EmptyCart procedure, write and execute a Transact-SQL statement that calls EmptyCart
to delete any items where SessionID is equal to 3, and then write and execute a SELECT statement to
return all of the rows in the ShoppingCart table.

7. Close SQL Server Management Studio without saving any changes.

Results: After completing this exercise, you should have created a natively compiled stored procedure.

Module Review and Takeaways


In this module, you have learned how to:

 Use memory-optimized tables to improve performance for latch-bound workloads.

 Use natively compiled stored procedures.

Review Question(s)

Check Your Knowledge

Question

Which of the following statements is true?

Select the correct answer.

Interpreted stored procedures cannot be applied to memory-optimized tables.

Native stored procedures cannot be applied to disk-based tables.

Native stored procedures can be applied to memory-optimized tables and to disk-based tables.

Interpreted stored procedures can contain atomic blocks.

Native stored procedures compile the code the first time the stored procedure is executed.
