Module 9
Designing and Implementing Stored Procedures
Contents:
Module Overview 9-1
Lesson 1: Introduction to Stored Procedures 9-2
Module Overview
This module describes the design and implementation of stored procedures.
Objectives
After completing this module, you will be able to:
Understand what stored procedures are, and what benefits they have.
Design, create, and alter stored procedures.
Lesson 1
Introduction to Stored Procedures
Microsoft® SQL Server® database management software includes several built-in system stored
procedures, in addition to giving users the ability to create their own. In this lesson, you will learn about
the role of stored procedures and the potential benefits of using them. System stored procedures provide
a large amount of prebuilt functionality that you can use when you are building applications. Not all
Transact-SQL statements are allowed within a stored procedure.
Lesson Objectives
After completing this lesson, you will be able to:
Identify statements that are not permitted within the body of a stored procedure declaration.
1. The application could send all of the individual Transact-SQL statements that are required to the server each time they are needed.
2. Alternatively, a stored procedure could be created at the server level to encapsulate all of the Transact-SQL statements that are required.
Stored procedures are named, and are called by their name. The application can then execute the stored
procedure each time it needs that same functionality, rather than sending all of the individual statements
that would otherwise be required.
Stored Procedures
Stored procedures are similar to procedures, methods, and functions in high-level languages. They can have input and output parameters, in addition to a return value.
A stored procedure can return rows of data; in fact, multiple rowsets can be returned from a single stored
procedure.
Stored procedures can be created in either Transact-SQL code or managed .NET code, and are run using
the EXECUTE statement.
Developing SQL Databases 9-3
Security Boundary
Users can be granted permission to execute a stored procedure without being granted permissions on the underlying objects that the procedure accesses. In this way, a stored procedure can act as a security boundary.
Modular Programming
Code reuse is important. Stored procedures help modular programming by allowing logic to be created
once and then reused many times, from many applications. Maintenance is easier because, if a change is
required, you often only need to change the procedure, not the application code. Changing a stored
procedure could avoid the need to change the data access logic in a group of applications.
Delayed Binding
You can create a stored procedure that accesses (or references) a database object that does not yet exist.
This can be helpful in simplifying the order in which database objects need to be created. This is known as
deferred name resolution.
Performance
Using stored procedures, rather than many lines of Transact-SQL code, can offer a significant reduction in
the level of network traffic.
Transact-SQL code needs to be compiled before it is executed. In many cases, when a stored procedure is executed, SQL Server will retain and reuse the query plan that it generated previously, avoiding the cost of recompiling the code.
Although you can reuse execution plans for ad-hoc Transact-SQL code, SQL Server favors the reuse of
stored procedure execution plans. Query plans for ad-hoc Transact-SQL statements are among the first
items to be removed from memory when necessary.
The rules that govern the reuse of query plans for ad-hoc Transact-SQL code are largely based on exactly
matching the query text. Any difference—for example, white space or casing—will cause a different query
plan to be created. The one exception is when the only difference is the equivalent of a parameter.
Overall, however, stored procedures have a much higher chance of achieving query plan reuse.
Originally, there was a distinction in the naming of these stored procedures, where system stored
procedures had an sp_ prefix and system extended stored procedures had an xp_ prefix. Over time, the
need to maintain backward compatibility has caused a mixture of these prefixes to appear in both types
of procedure. Most system stored procedures still have an sp_ prefix, and most system extended stored
procedures still have an xp_ prefix, but there are exceptions to both of these rules.
System Stored Procedures
Unlike normal stored procedures, system stored procedures can be executed from within any database
without needing to specify the master database as part of their name. Typically, they are used for
administrative tasks that relate to configuring servers, databases, and objects, or for retrieving
information about them. System stored procedures are created within the sys schema. Examples of system
stored procedures are sys.sp_configure, sys.sp_addmessage, and sys.sp_executesql.
System extended stored procedures are used to extend the functionality of the server in ways that you
cannot achieve by using Transact-SQL code alone. Examples of system extended stored procedures are
sys.xp_dirtree, sys.xp_cmdshell, and sys.sp_trace_create. (Note how the last example here has an sp_
prefix.)
Creating user-defined extended stored procedures and attaching them to SQL Server is still possible but
the functionality is deprecated, and an alternative should be used where possible.
Extended stored procedures run directly within the memory space of SQL Server—this is not a safe place
for users to be executing code. User-defined extended stored procedures are well known to the SQL
Server product support group as a source of problems that prove difficult to resolve. Where possible, you
should use managed-code stored procedures instead of user-defined extended stored procedures.
Creating stored procedures using managed code is covered in Module 13: Implementing Managed Code
in SQL Server.
USE databasename
CREATE AGGREGATE
CREATE DEFAULT
CREATE RULE
CREATE SCHEMA
SET PARSEONLY
SET SHOWPLAN_ALL
SET SHOWPLAN_TEXT
SET SHOWPLAN_XML
Despite these restrictions, it is still possible for a stored procedure to access objects in another database.
To access objects in another database, reference them using their three- or four-part name, rather than
trying to switch databases with a USE statement.
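For example, a stored procedure can reference a table in another database by using a three-part name. In this sketch, the procedure name, the SalesArchive database, and its dbo.OrderHistory table are all hypothetical:

CREATE PROCEDURE dbo.GetOtherDatabaseRows
AS
BEGIN
    SET NOCOUNT ON;
    -- SalesArchive.dbo.OrderHistory is a hypothetical three-part name;
    -- no USE statement is needed to read from the other database.
    SELECT OrderID, OrderDate
    FROM SalesArchive.dbo.OrderHistory;
END;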
Demonstration Steps
1. Ensure that the 20762C-MIA-DC and 20762C-MIA-SQL virtual machines are running, and then log
on to 20762C-MIA-SQL as ADVENTUREWORKS\Student with the password Pa55w.rd.
7. Highlight the text under the comment Step 1 - Switch to the AdventureWorks database, and click
Execute.
8. Highlight the text under the comment Step 2 - Execute the sp_configure system stored
procedure, and click Execute.
9. Highlight the text under the comment Step 3 - Execute the xp_dirtree extended system stored
procedure, and click Execute.
10. Keep SQL Server Management Studio open for the next demo.
Question: The system stored procedure prefix (sp_) and the extended stored procedure
prefix (xp_) have become a little muddled over time. What does this say about the use of
prefixes when naming objects like stored procedures?
Lesson 2
Working with Stored Procedures
Now that you know why stored procedures are important, you need to understand the practicalities that
are involved in working with them.
Lesson Objectives
After completing this lesson, you will be able to:
The CREATE PROC statement must be the only one in the Transact-SQL batch. All statements from the AS
keyword until the end of the script or until the end of the batch (using a batch separator such as GO) will
become part of the body of the stored procedure.
Creating a stored procedure requires both the CREATE PROCEDURE permission in the current database
and the ALTER permission on the schema that the procedure is being created in. It is important to keep
connection settings such as QUOTED_IDENTIFIER and ANSI_NULLS consistent when you are working with stored procedures. A stored procedure's settings are taken from the session in which it was created.
Stored procedures are always created in the current database with the single exception of stored
procedures that are created with a number sign (#) prefix in their name. The # prefix on a name indicates
that it is a temporary object—it is therefore created in the tempdb database and removed at the end of
the user's session.
For more information about creating stored procedures, see Microsoft Docs:
Note: Wrapping the body of a stored procedure with a BEGIN…END block is not required
but it is considered good practice. Note also that you can terminate the execution of a stored
procedure by executing a RETURN statement within the stored procedure.
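To draw these points together, here is a minimal sketch of a procedure definition; the procedure name and query are assumptions, based on the AdventureWorks sample database used elsewhere in this module:

CREATE PROCEDURE Sales.GetOrderCount
AS
BEGIN
    SET NOCOUNT ON;
    SELECT COUNT(*) AS OrderCount
    FROM Sales.SalesOrderHeader;
    RETURN;    -- optional; execution ends here in any case
END;
GO

Because CREATE PROC must be the only statement in its batch, the GO batch separator marks the end of the definition.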
EXECUTE Statement
The EXECUTE statement is most commonly used
to execute stored procedures, but can also be
used to execute other objects such as dynamic
Structured Query Language (SQL) statements.
Using EXEC, you can execute system stored
procedures within the master database without
having to explicitly refer to that database.
Executing user stored procedures in another database requires that you use the three-part naming
convention. Executing user stored procedures in a schema other than your default schema requires that
you use the two-part naming convention.
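As a sketch, using the Reports.GetProductColors procedure that appears later in this module's lab, and a hypothetical database named SalesArchive:

EXEC Reports.GetProductColors;                -- two-part name, current database
EXEC SalesArchive.Reports.GetProductColors;   -- three-part name, another database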
If you use only a one-part name for a stored procedure, SQL Server will first search your default schema for the procedure. Then, if it does not locate a procedure that has that name, it will search the dbo schema. This minimizes options for query plan reuse because, until the moment when the stored procedure is executed, SQL Server cannot tell which objects it needs, because different users can have different default schemas.
If the stored procedure name starts with sp_ (not recommended for user stored procedures), SQL Server
will search locations in the following order, in an attempt to find the stored procedure:
The sys schema of the master database.
The default schema for the user who is executing the stored procedure, in the current database.
The dbo schema in the current database.
Having SQL Server perform unnecessary steps to locate a stored procedure reduces performance.
For more information about executing a stored procedure, see Microsoft Docs:
ALTER PROC
The main reason for using the ALTER PROC
statement is to retain any existing permissions on
the procedure while it is being changed. Users
might have been granted permission to execute
the procedure. However, if you drop the
procedure, and then recreate it, the permission will
be removed and would need to be granted again.
Note that the type of procedure cannot be changed using ALTER PROC. For example, a Transact-SQL
procedure cannot be changed to a managed-code procedure by using an ALTER PROCEDURE statement
or vice versa.
Connection Settings
The connection settings, such as QUOTED_IDENTIFIER and ANSI_NULLS, that will be associated with the
modified stored procedure will be those taken from the session that makes the change, not from the
original stored procedure—it is important to keep these consistent when you are making changes.
Complete Replacement
Note that, when you alter a stored procedure, you need to resupply any options (such as the WITH
ENCRYPTION clause) that were supplied while creating the procedure. None of these options are retained
and they are replaced by whatever options are supplied in the ALTER PROC statement.
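For example, if a procedure was created WITH ENCRYPTION, the clause must be restated when the procedure is altered, or the procedure will no longer be encrypted. The procedure name and body here are hypothetical:

ALTER PROCEDURE dbo.GetOrderSummary
WITH ENCRYPTION    -- must be resupplied; options are not retained
AS
BEGIN
    SET NOCOUNT ON;
    SELECT COUNT(*) AS OrderCount
    FROM Sales.SalesOrderHeader;
END;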
The TRY block starts with the keywords BEGIN TRY, and is finished with the keywords END TRY. The stored
procedure code goes in between BEGIN TRY and END TRY. Similarly, the CATCH block is started with the
keywords BEGIN CATCH, and is finished with the keywords END CATCH.
In this code example, a stored procedure is created to add a new store. It uses the TRY … CATCH construct
to catch any errors. In the event of an error, code within the CATCH block is executed; in this instance, it
returns details about the error.
CREATE PROCEDURE Sales.AddStore
-- The CREATE PROCEDURE line was missing from the source listing; the
-- procedure name is an assumption based on the surrounding text.
AS
SET NOCOUNT ON;
BEGIN TRY
    INSERT INTO Person.BusinessEntity (rowguid)
    VALUES (DEFAULT);
END TRY
BEGIN CATCH
    SELECT ERROR_NUMBER() AS ErrorNumber,
        ERROR_SEVERITY() AS ErrorSeverity,
        ERROR_STATE() AS ErrorState,
        ERROR_PROCEDURE() AS ErrorProcedure,
        ERROR_LINE() AS ErrorLine,
        ERROR_MESSAGE() AS ErrorMessage;
END CATCH;
Note: The error functions shown in the example are only used within CATCH blocks. They
will return NULL if used outside a CATCH block.
Transaction Handling
You might need to manage transactions within a
stored procedure. Perhaps you want to ensure that
if one action fails, then all actions are rolled back.
Explicit transactions are managed using the BEGIN
TRANSACTION and COMMIT TRANSACTION
keywords.
Transactions
CREATE TABLE MyTable
(Col1 tinyint PRIMARY KEY, Col2 CHAR(3) NOT NULL);
GO
-- The procedure definition was truncated in the source; the name and
-- the TRY block below are a reconstruction.
CREATE PROCEDURE dbo.InsertTwoRows @Val1 tinyint, @Val2 tinyint
AS
BEGIN TRY
    BEGIN TRANSACTION;
    INSERT MyTable VALUES (@Val1, 'aaa');
    INSERT MyTable VALUES (@Val2, 'bbb');
    COMMIT TRANSACTION;
END TRY
BEGIN CATCH
    ROLLBACK;
END CATCH;
GO
If you execute the stored procedure with values that enable two rows to be successfully inserted, it will execute successfully, and no transactions will be rolled back. For example:
EXEC dbo.InsertTwoRows 1, 2;
However, if you execute the stored procedure with a value that causes one row to fail, the CATCH block will be invoked, and the complete transaction will be rolled back. For example:
EXEC dbo.InsertTwoRows 3, 3;
In the second instance, the primary key constraint will be violated, and no records will be inserted.
@@TRANCOUNT
The @@TRANCOUNT function returns the number of currently open transactions. It increments by one each time a BEGIN TRANSACTION is executed, and decrements by one each time a COMMIT TRANSACTION is executed. (A ROLLBACK of the whole transaction returns it to 0.)
Using the table created in the previous code example, this code fragment shows how @@TRANCOUNT
increments and decrements.
@@TRANCOUNT
SET XACT_ABORT ON;
SELECT @@TRANCOUNT;
BEGIN TRANSACTION;
SELECT @@TRANCOUNT;
INSERT MyTable VALUES (40, 'abc');
COMMIT TRANSACTION;
SELECT @@TRANCOUNT;
BEGIN TRANSACTION;
INSERT MyTable VALUES (41, 'xyz');
SELECT @@TRANCOUNT;
COMMIT TRANSACTION;
SELECT @@TRANCOUNT
Note: SET XACT_ABORT ON or OFF determines how SQL Server behaves when a statement
fails within a transaction. If SET XACT_ABORT is ON, the entire transaction is rolled back and the batch is terminated.
SET NOCOUNT ON
Ensure the first statement in your stored procedure is SET NOCOUNT ON. This will increase performance
by suppressing messages returned to the client following SELECT, INSERT, UPDATE, MERGE, and DELETE
statements.
It is important to have a consistent way of naming your stored procedures. There is no right or wrong
naming convention but you should decide on a method for naming objects and apply that method
consistently. You can enforce naming conventions on most objects by using Policy-Based Management or
DDL triggers. These areas are beyond the scope of this course.
Demonstration Steps
1. In Solution Explorer, in the Queries folder, double-click the 21 - Demonstration2A.sql script file.
2. Highlight the code under the comment Step 1 - Switch to the AdventureWorks database, and
click Execute.
3. Highlight the code under the comment Step 2 - Create the GetBlueProducts stored procedure,
and click Execute.
4. Highlight the code under the comment Step 3 - Execute the GetBlueProducts stored procedure,
and click Execute.
5. Highlight the code under the comment Step 4 - Create the GetBlueProductsAndModels stored
procedure, and click Execute.
6. Highlight the code under the comment Step 5 - Execute the GetBlueProductsAndModels stored
procedure which returns multiple rowsets, and click Execute.
7. Highlight the code under the comment Step 6 - Alter the procedure because the 2nd query does
not show only blue products, and click Execute.
8. Highlight the code under the comment Step 7 - And re-execute the GetBlueProductsAndModels
stored procedure, and click Execute.
9. Highlight the code under the comment Step 8 - Query sys.procedures to see the list of
procedures, and click Execute.
10. Keep SQL Server Management Studio open for the next demo.
Lesson 3
Implementing Parameterized Stored Procedures
The stored procedures that you have seen in this module have not involved parameters. They have
produced their output without needing any input from the user and they have not returned any values,
apart from the rows that they have returned. Stored procedures are more flexible when you include
parameters as part of the procedure definition, because you can create more generic application logic.
Stored procedures can use both input and output parameters, and return values.
Although the reuse of query execution plans is desirable in general, there are situations where this reuse is
detrimental. You will see situations where this can occur and consider options for workarounds to avoid
the detrimental outcomes.
Lesson Objectives
After completing this lesson, you will be able to:
Parameterize stored procedures.
Explain the issues that surround parameter sniffing and performance, and describe the potential
workarounds.
Input Parameters
Parameters are used to exchange data between
stored procedures and the application or tool that
called the stored procedure. They enable the caller
to pass a data value to the stored procedure. To
define a stored procedure that accepts input
parameters, you declare one or more variables as
parameters in the CREATE PROCEDURE
statement. You will see an example of this in the next topic.
Output Parameters
Output parameters enable the stored procedure to pass a data value or a cursor variable back to the
caller. To use an output parameter within Transact-SQL, you must specify the OUTPUT keyword in both
the CREATE PROCEDURE statement and the EXECUTE statement.
Return Values
Every stored procedure returns an integer return code to the caller. If the stored procedure does not
explicitly set a value for the return code, the return code is 0 if no error occurs; otherwise, a negative value
is returned.
Return values are commonly used to return a status result or an error code from a procedure and are sent
by the Transact-SQL RETURN statement.
Although you can send a value that is related to business logic via a RETURN statement, in general, you
should use output parameters to generate values rather than the RETURN value.
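A sketch of setting and capturing a return code; the procedure name and status logic are assumptions:

CREATE PROCEDURE dbo.CheckProductExists @ProductID int
AS
BEGIN
    SET NOCOUNT ON;
    IF EXISTS (SELECT 1 FROM Production.Product
               WHERE ProductID = @ProductID)
        RETURN 0;    -- success status code
    RETURN 1;        -- not found; a status, not business data
END;
GO
DECLARE @ReturnCode int;
EXEC @ReturnCode = dbo.CheckProductExists @ProductID = 680;
SELECT @ReturnCode AS ReturnCode;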
For more information about using parameters with stored procedures, see Microsoft Docs:
Parameters
https://aka.ms/Ai6kha
Default Values
You can provide default values for a parameter where appropriate. If a default is defined, a user can
execute the stored procedure without specifying a value for that parameter.
An example of a default parameter value for a stored procedure parameter:
Default Values
CREATE PROCEDURE Sales.OrdersByDueDateAndStatus
@DueDate datetime,
@Status tinyint = 5
AS
-- The procedure body was truncated in the source; this query is an
-- assumption based on the parameter names.
SELECT SalesOrderID, DueDate, Status
FROM Sales.SalesOrderHeader
WHERE DueDate = @DueDate
AND Status = @Status;
Two parameters have been defined (@DueDate and @Status). The @DueDate parameter has no default
value and must be supplied when the procedure is executed. The @Status parameter has a default value
of 5. If a value for the parameter is not supplied when the stored procedure is executed, a value of 5 will
be used.
This is an example of the previous stored procedure with one input parameter supplied and one
parameter using the default value:
Executing a Stored Procedure That Has Input Parameters with a Default Value.
EXEC Sales.OrdersByDueDateAndStatus '20050713';
EXEC Sales.OrdersByDueDateAndStatus '20050713', 5;
This execution supplies a value for both @DueDate and @Status. Note that the names of the parameters
are not mentioned; SQL Server matches each value to a parameter by its position in the parameter list.
This is an example of the stored procedure being executed with both parameters identified by name:
EXEC Sales.OrdersByDueDateAndStatus @DueDate = '20050713', @Status = 5;
In this case, the stored procedure is being called by using both parameters, and they are explicitly
identified by name.
In this example, the results will be the same, even though the parameters are supplied in a different
order, because they are identified by name:
EXEC Sales.OrdersByDueDateAndStatus @Status = 5, @DueDate = '20050713';
In this case, the @DueDate parameter is an input parameter and the @OrderCount parameter has been
specified as an output parameter. Note that, in SQL Server, there is no true equivalent of a .NET output
parameter. SQL Server OUTPUT parameters are really input/output parameters.
To execute a stored procedure with output parameters you must first declare variables to hold the
parameter values. You then execute the stored procedure and retrieve the OUTPUT parameter value by
selecting the appropriate variable. The next code example shows this.
This code example shows how to call a stored procedure with input and output parameters:
In the EXEC call, note that the @OrderCount parameter is followed by the OUTPUT keyword. If you do not
specify the output parameter in the EXEC statement, the stored procedure would still execute as normal,
including preparing a value to return in the output parameter. However, the output parameter value
would not be copied back into the @OrderCount variable and you would not be able to retrieve the
value. This is a common bug when working with output parameters.
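The pattern described above can be sketched as follows. The procedure definition is an assumption based on the @DueDate and @OrderCount parameter names mentioned in this topic:

CREATE PROCEDURE Sales.GetOrderCountByDueDate
    @DueDate datetime,
    @OrderCount int OUTPUT
AS
BEGIN
    SET NOCOUNT ON;
    SELECT @OrderCount = COUNT(*)
    FROM Sales.SalesOrderHeader
    WHERE DueDate = @DueDate;
END;
GO
DECLARE @Result int;
EXEC Sales.GetOrderCountByDueDate @DueDate = '20050713',
     @OrderCount = @Result OUTPUT;    -- OUTPUT is required here too
SELECT @Result AS OrderCount;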
WITH RECOMPILE
You can add a WITH RECOMPILE option when you are declaring a stored procedure. This causes the
procedure to be recompiled each time it is executed.
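For example (a hypothetical sketch against the AdventureWorks Production.Product table):

CREATE PROCEDURE dbo.GetProductsByColorRecompile
    @Color nvarchar(15)
WITH RECOMPILE    -- a new plan is compiled on every execution
AS
SELECT ProductID, Name
FROM Production.Product
WHERE Color = @Color;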
OPTIMIZE FOR
You use the OPTIMIZE FOR query hint to specify the value of a parameter that should be assumed when
compiling the procedure, regardless of the actual value of the parameter.
An example of the OPTIMIZE FOR query hint is shown in the following code example:
OPTIMIZE FOR
CREATE PROCEDURE dbo.GetProductNames
@ProductIDLimit int
AS
BEGIN
SELECT ProductID, Name
FROM Production.Product
WHERE ProductID < @ProductIDLimit
OPTION (OPTIMIZE FOR (@ProductIDLimit = 1000))
END;
Question: What is the main advantage of creating parameterized stored procedures over
nonparameterized stored procedures?
Lesson 4
Controlling Execution Context
Stored procedures normally execute in the security context of the user who is calling the procedure.
Provided that a chain of ownership extends from the stored procedure to the objects that are referenced, the user can execute the procedure without needing permissions on the underlying objects. Ownership-chaining issues with stored procedures are identical to those for views. Sometimes you need more precise control over the security context in which the procedure is executing.
Lesson Objectives
After completing this lesson, you will be able to:
Execution Contexts
A login token and a user token represent an
execution context. The tokens identify the primary
and secondary principals against which
permissions are checked, and the source that is
used to authenticate the token. A login that
connects to an instance of SQL Server has one
login token and one or more user tokens, depending on the number of databases to which the account
has access.
A login token is valid across the instance of SQL Server. It contains the primary and secondary identities
against which server-level permissions and any database-level permissions that are associated with these
identities are checked. The primary identity is the login itself. The secondary identity includes permissions
that are inherited from roles and groups.
A user token is valid only for a specific database. It contains the primary and secondary identities against
which database-level permissions are checked. The primary identity is the database user itself. The
secondary identity includes permissions that are inherited from database roles. User tokens do not contain
server-role memberships and do not honor the server-level permissions that are granted to the identities
in the token, including those that are granted to the server-level public role.
Explicit Impersonation
SQL Server supports the ability to impersonate
another principal, either explicitly by using the
stand-alone EXECUTE AS statement, or implicitly
by using the EXECUTE AS clause on modules.
To execute as another user, you must first have IMPERSONATE permission on that user. Any login in the
sysadmin role has IMPERSONATE permission on all users.
Implicit Impersonation
You can perform implicit impersonations by using the WITH EXECUTE AS clause on modules to
impersonate the specified user or login at the database or server level. This impersonation depends on
whether the module is a database-level module, such as a stored procedure or function, or a server-level
module, such as a server-level trigger.
When you impersonate a principal by using the EXECUTE AS LOGIN statement or within a server-scoped
module by using the EXECUTE AS clause, the scope of the impersonation is server-wide. This means that,
after the context switch, you can access any resource within the server on which the impersonated login
has permissions.
When you impersonate a principal by using the EXECUTE AS USER statement or within a database-scoped
module by using the EXECUTE AS clause, the scope of impersonation is restricted to the database. This
means that references to objects that are outside the scope of the database will return an error.
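Both forms can be sketched as follows; SecureUser matches the user in this module's demonstration, but the procedure shown here is hypothetical:

-- Explicit impersonation at the database level:
EXECUTE AS USER = 'SecureUser';
SELECT USER_NAME();    -- statements now run in SecureUser's context
REVERT;                -- return to the previous context
GO
-- Implicit impersonation declared on a module:
CREATE PROCEDURE dbo.ShowContext
WITH EXECUTE AS OWNER
AS
SELECT USER_NAME() AS CurrentUser;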
Demonstration Steps
1. In Solution Explorer, expand the Queries folder and then double-click the 31 -
Demonstration3A.sql script file.
2. Highlight the code under the comment Step 1 - Open a new query window to the tempdb
database, and click Execute.
3. Highlight the code under the comment Step 2 - Create a stored procedure that queries
sys.login_token and sys.user_token, and click Execute.
4. Highlight the code under the comment Step 3 - Execute the stored procedure and review the
rowsets returned, and click Execute.
5. Highlight the code under the comment Step 4 - Use the EXECUTE AS statement to change
context, and click Execute.
6. Highlight the code under the comment Step 5 - Try to execute the procedure. Why does it not
work?, click Execute and note the error message.
7. Highlight the code under the comment Step 6 - Revert to the previous security context, and click
Execute.
8. Highlight the code under the comment Step 7 - Grant permission to SecureUser to execute the
procedure, and click Execute.
9. Highlight the code under the comment Step 8 - Now try again and note the output, and click
Execute.
10. Highlight the code under the comment Step 9 - Alter the procedure to execute as owner, and
click Execute.
11. Highlight the code under the comment Step 10 - Execute as SecureUser again and note the
difference, and click Execute.
12. Highlight the code under the comment Step 11 - Drop the procedure, and click Execute.
13. Close SQL Server Management Studio without saving any changes.
sysadmin
IMPERSONATE
TAKE OWNERSHIP
Objectives
After completing this lab, you will be able to:
Password: Pa55w.rd
Supporting Documentation
Stored Procedure: Reports.GetProductColors
Notes: Colors should not be returned more than once in the output. NULL values
should not be returned.
3. In the User Account Control dialog box, click Yes, and wait for the script to finish.
Results: After completing this lab, you will have created and tested two stored procedures,
Reports.GetProductColors and Reports.GetProductsAndModels.
Supporting Documentation
Stored Procedure: Marketing.GetProductsByColor
Input parameters: @Color (same data type as the Color column in the Production.Product table).
Notes: The procedure should return products that have no Color if the parameter is NULL.
Note: Ensure that approximately 26 rows are returned for blue products. Ensure that
approximately 248 rows are returned for products that have no color.
Results: After completing this exercise, you will have altered the three stored procedures created in earlier
exercises, so that they run as owner.
Include the SET NOCOUNT ON statement in your stored procedures immediately after the AS
keyword. This improves performance.
While it is not mandatory to enclose Transact-SQL statements within a BEGIN…END block in a stored
procedure, it is good practice and can help make stored procedures more readable.
Reference objects in stored procedures using a two- or three-part naming convention. This reduces
the processing that the database engine needs to perform.
Avoid using SELECT * within a stored procedure even if you need all columns from a table.
Specifying the column names explicitly reduces the chance of issues, should columns be added to
a source table.
Module 10
Designing and Implementing User-Defined Functions
Contents:
Module Overview 10-1
Lesson 1: Overview of Functions 10-2
Module Overview
Functions are routines that you use to encapsulate frequently performed logic. Rather than having to
repeat the function logic in many places, code can call the function. This makes code more maintainable,
and easier to debug.
In this module, you will learn to design and implement user-defined functions (UDFs) that enforce
business rules or data consistency. You will also learn how to modify and maintain existing functions.
Objectives
After completing this module, you will be able to:
Lesson 1
Overview of Functions
Functions are routines that consist of one or more Transact-SQL statements that you can use to
encapsulate code for reuse. A function takes zero or more input parameters and returns either a scalar
value or a table. Functions do not support output parameters but do return results, either as a single value
or as a table.
Lesson Objectives
After completing this lesson, you will be able to:
Types of Functions
Most high-level programming languages offer
functions as blocks of code that are called by name
and can process input parameters. Microsoft® SQL
Server® offers three types of functions: scalar
functions, table-valued functions (TVFs), and system functions.
You can create two types of TVFs: inline TVFs and
multistatement TVFs.
Scalar Functions
Scalar functions return a single data value of the type that is defined in a RETURNS clause. An example of
a scalar function would be one that extracts the protocol from a URL. From the string
“http://www.microsoft.com”, the function would return the string “http”.
For example, if a table holds details of sales for an entire country, you could create individual views to
return details of sales for particular states. You could write an inline TVF that takes the state code or ID as
a parameter, and returns sales data for the state that matches the parameter. In this way, you would only
need a single function to provide details for all states, rather than separate views for each state.
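The state-based example above might be sketched as an inline TVF. The table and column names here are hypothetical:

CREATE FUNCTION Sales.GetSalesByState (@StateCode nchar(3))
RETURNS TABLE
AS
RETURN
(
    SELECT SalesOrderID, StateCode, TotalDue
    FROM Sales.SalesByState    -- hypothetical table
    WHERE StateCode = @StateCode
);
GO
SELECT SalesOrderID, TotalDue FROM Sales.GetSalesByState(N'WA');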
System Functions
System functions are provided with SQL Server to perform a variety of operations. You cannot modify
them. System functions are described in the next topic.
For more details about the restrictions and usage of UDFs, see Microsoft Docs:
System Functions
SQL Server has a wide variety of system functions
that you can use in queries to return data or to
perform operations on data. System functions are
also known as built-in functions.
Scalar Functions
Most system functions are scalar functions. They
provide the functionality that is commonly provided by functions in other high-level languages, such as
operations on data types (including strings and dates and times) and conversions between data types. SQL
Server provides a library of mathematical and cryptographic functions. Other functions provide details of
the configuration of the system, and its security.
Rowset Functions
These return objects that can be used instead of Transact-SQL table reference statements. For example,
OPENJSON is a SQL Server function that can be used to import JSON into SQL Server or transform JSON
to a relational format.
Aggregate Functions
Aggregates such as MIN, MAX, AVG, SUM, and COUNT perform calculations across groups of rows. Many
of these functions automatically ignore NULL rows.
Ranking Functions
Functions such as ROW_NUMBER, RANK, DENSE_RANK, and NTILE perform windowing operations on rows
of data.
For more information about system functions that return values, settings and objects, see Microsoft Docs:
For more information about different types of system functions, see Microsoft Docs:
What are the SQL database functions?
http://aka.ms/jw8w5j
10-4 Designing and Implementing User-Defined Functions
Apart From Data Type and Date and Time Functions, Which SQL Server Functions Have You Used?
Responses will vary, based on experience.
OPENROWSET
ROWCOUNT_BIG
GROUPING_ID
ROW_NUMBER
OPENXML
Developing SQL Databases 10-5
Lesson 2
Designing and Implementing Scalar Functions
You have seen that functions are routines that consist of one or more Transact-SQL statements that you
can use to encapsulate code for reuse—and that functions can take zero or more input parameters, and
return either scalar values or tables.
This lesson provides an overview of scalar functions and explains why and how you use them, in addition
to explaining the syntax for creating them.
Lesson Objectives
After completing this lesson, you will be able to:
Scalar Functions
Scalar functions are created using the CREATE
FUNCTION statement. The body of a function is
defined within a BEGIN…END block. The function
body contains the series of Transact-SQL statements
that return the value.
Consider the function definition in the following
code example:
CREATE FUNCTION
CREATE FUNCTION dbo.ExtractProtocolFromURL
( @URL nvarchar(1000))
RETURNS nvarchar(1000)
AS BEGIN
RETURN CASE WHEN CHARINDEX(N':',@URL,1) >= 1
THEN SUBSTRING(@URL,1,CHARINDEX(N':',@URL,1) - 1)
END;
END;
Note: Note that the body of the function consists of a single RETURN statement that is
wrapped in a BEGIN…END block.
You can use the function in the following code example as an expression, wherever a single value could
be used:
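For instance, a minimal call of the function defined above, using the URL from the earlier description:

```sql
-- Call the scalar function as an expression in a SELECT list.
SELECT dbo.ExtractProtocolFromURL(N'http://www.microsoft.com') AS Protocol;
-- Returns: http
```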
You can also implement scalar functions in managed code. Managed code will be discussed in Module 13:
Implementing Managed Code in SQL Server. The allowable return values for scalar functions differ
between functions that are defined in Transact-SQL and functions that are defined by using managed
code.
Note: Altering a function retains any permissions already associated with it.
Guidelines
Consider the following guidelines when you create scalar UDFs:
Make sure that you use two-part naming for the function and for all database objects that the
function references.
Be aware that errors are treated differently inside a function. In other modules, such as triggers or
stored procedures, some Transact-SQL errors lead to the current statement being canceled while
processing continues with the next statement. In a function, such errors cause the execution of the
function to stop, so avoid statements that can raise them.
Side Effects
A function that modifies the underlying database is considered to have side effects. In SQL Server,
functions are not permitted to have side effects. You cannot change data in a database within a function;
you cannot call a stored procedure (other than an extended stored procedure); and you cannot execute
dynamic Structured Query Language (SQL) code.
For more information about the CREATE FUNCTION statement, see Microsoft Docs:
This is an example of a CREATE FUNCTION statement. This function calculates the area of a rectangle,
when four coordinates are entered. Note the use of the ABS function, a built-in mathematical function
that returns the absolute (positive) value of a given input.
CREATE FUNCTION
CREATE FUNCTION dbo.RectangleArea
(@X1 float, @Y1 float, @X2 float, @Y2 float)
RETURNS float
AS BEGIN
RETURN ABS(@X1 - @X2) * ABS(@Y1 - @Y2);
END;
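As a quick check of the function, ABS(0 - 3) * ABS(0 - 4) gives 12:

```sql
-- Area of the rectangle with opposite corners (0, 0) and (3, 4).
SELECT dbo.RectangleArea(0, 0, 3, 4) AS Area;
-- Returns: 12
```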
Deterministic Functions
A deterministic function is one that will always
return the same result when it is provided with the
same set of input values for the same database
state.
Deterministic Function
CREATE FUNCTION dbo.AddInteger
(@FirstValue int, @SecondValue int)
RETURNS int
AS BEGIN
RETURN @FirstValue + @SecondValue;
END;
GO
Every time the function is called with the same two integer values, it will return exactly the same result.
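Calling the function illustrates this determinism:

```sql
-- The same inputs always produce the same output.
SELECT dbo.AddInteger(2, 3) AS Result;
-- Returns: 5, on every call
```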
Nondeterministic Functions
A nondeterministic function is one that may return different results for the same set of input values each
time it is called, even if the database remains in the same state. Date and time functions are examples of
nondeterministic functions.
Nondeterministic Function
CREATE FUNCTION dbo.CurrentUTCTimeAsString()
RETURNS varchar(40)
AS BEGIN
RETURN CONVERT(varchar(40),SYSUTCDATETIME(),100);
END;
Each time the function is called, it will return a different value, even though no input parameters are
supplied.
For more information about deterministic and nondeterministic functions, see Microsoft Docs:
The following code example creates a function, and then uses OBJECTPROPERTY to return whether or not
it is deterministic. Note the use of the OBJECT_ID function to return the ID of the TodayAsString function.
OBJECTPROPERTY
CREATE FUNCTION dbo.TodayAsString (@Format int = 112)
RETURNS varchar(20)
AS BEGIN
    RETURN CONVERT(varchar(20),
        CAST(SYSDATETIME() AS date), @Format);
END;
GO

SELECT OBJECTPROPERTY(OBJECT_ID(N'dbo.TodayAsString'), 'IsDeterministic') AS IsDeterministic;
Demonstration Steps
1. Ensure that the 20762C-MIA-DC and 20762C-MIA-SQL virtual machines are running, and then log
on to 20762C-MIA-SQL as ADVENTUREWORKS\Student with the password Pa55w.rd.
6. In the Connect to Server dialog box, in Server name, type MIA-SQL, and then click Connect.
9. In Solution Explorer, expand the Queries folder, and then double-click 21 - Demonstration 2A.sql.
10. Select the code under Step A to use the tempdb database, and then click Execute.
11. Select the code under Step B to create a function that calculates the end date of the previous month,
and then click Execute.
12. Select the code under Step C to query the function, and then click Execute.
13. Select the code under Step D to establish if the function is deterministic, and then click Execute.
14. Select the code under Step E to drop the function, and then click Execute.
15. Create a function under Step F using the EOMONTH function (the code should resemble the
following), and then click Execute.
16. Select the code under Step G to query the new function, and then click Execute.
17. Select the code under Step H to drop the function, and then click Execute.
18. Keep SQL Server Management Studio open with the Demo10.ssmssqlpro solution loaded for the
next demo.
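The Step F function based on EOMONTH might resemble the following sketch (the function name and parameter name here are assumptions; EOMONTH with an offset of -1 returns the last day of the previous month):

```sql
-- A sketch of a function that returns the end date of the previous month.
-- The function and parameter names are illustrative.
CREATE FUNCTION dbo.EndOfPreviousMonth (@DateToTest date)
RETURNS date
AS BEGIN
    RETURN EOMONTH(@DateToTest, -1);
END;
GO
```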
Lesson 3
Designing and Implementing Table-Valued Functions
In this lesson, you will learn how to work with functions that return tables instead of single values. These
functions are known as table-valued functions (TVFs). There are two types of TVFs: inline and
multistatement.
The ability to return a table of data is important because it means a function can be used as a source of
rows in place of a table in a Transact-SQL statement. In many cases, this avoids the need to create
temporary tables.
Lesson Objectives
After completing this lesson, you will be able to:
Describe TVFs.
Describe inline TVFs.
Table-Valued Functions
Inline TVFs
Multistatement TVFs
If the logic of the function is too complex to include in a single SELECT statement, you need to implement
the function as a multistatement TVF. Multistatement TVFs construct a table within the body of the
function, and then return the table. They also need to define the schema of the table to be returned.
Note: You can use both types of TVF as the equivalent of parameterized views.
For inline functions, the body of the function is not enclosed in a BEGIN…END block. A syntax error occurs
if you attempt to use this block. The CREATE FUNCTION statement—or CREATE OR ALTER statement, in
versions where it is supported—still needs to be the only statement in the batch.
In the same way that you use a view, you can use a TVF in the FROM clause of a Transact-SQL statement.
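As an illustration of the syntax, the following sketch shows an inline TVF along the lines of the state sales example described earlier (the table and column names are assumptions):

```sql
-- Inline TVF: a single RETURN with one SELECT statement, and no BEGIN...END block.
CREATE FUNCTION dbo.GetSalesByState (@StateID int)
RETURNS TABLE
AS
RETURN
    SELECT s.SalesID, s.StateID, s.SalesAmount
    FROM dbo.Sales AS s
    WHERE s.StateID = @StateID;
GO
```

Like a view, the function can then appear in a FROM clause, for example: SELECT * FROM dbo.GetSalesByState(5);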
The following code is an example of a multistatement TVF:
Multistatement TVF
CREATE FUNCTION dbo.GetDateRange (@StartDate date, @NumberOfDays int)
RETURNS @DateList TABLE
    (Position int, DateValue date)
AS BEGIN
    DECLARE @Counter int = 0;
    WHILE (@Counter < @NumberOfDays) BEGIN
        INSERT INTO @DateList
        VALUES (@Counter + 1, DATEADD(day, @Counter, @StartDate));
        SET @Counter += 1;
    END;
    RETURN;
END;
GO
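Like an inline TVF, the multistatement TVF above can then be used as a row source:

```sql
-- Returns three rows: (1, 2017-01-01), (2, 2017-01-02), (3, 2017-01-03).
SELECT Position, DateValue
FROM dbo.GetDateRange('20170101', 3);
```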
Implement TVFs.
Demonstration Steps
1. In Solution Explorer, expand the Queries folder, and then double-click 31 - Demonstration 3A.sql.
2. Select the code under Step A to use the AdventureWorks database, and then click Execute.
3. Select the code under Step B to create a table-valued function, and then click Execute.
4. Select the code under Step C to query the function, and then click Execute.
5. Select the code under Step D to use CROSS APPLY to call the function, and then click Execute.
6. Select the code under Step E to drop the function, and then click Execute.
Question: You have learned that TVFs return tables. What are the two types of TVF and how
do they differ?
Lesson 4
Considerations for Implementing Functions
Although functions are an important tool in Transact-SQL, there are some considerations you need to be
aware of when implementing them. For example, you should avoid the negative performance impacts of
inappropriate use of functions, which is a common issue. This lesson provides guidelines for the implementation of functions, and
describes how to control their security context.
Lesson Objectives
After completing this lesson, you will be able to:
Note: Interleaved execution, introduced as part of the Adaptive Query Processing feature
of SQL Server 2017, is designed to improve the performance of multistatement TVFs by
generating more accurate cardinality estimates, leading to more accurate query execution plans.
Adaptive Query Processing is enabled for all databases with a compatibility level of 140 or higher.
Implicit impersonations that are performed through the EXECUTE AS clause on modules impersonate the
specified user or login at the database or server level. This impersonation depends on whether the module
is a database-level module, such as a stored procedure or function, or a server-level module, such as a
server-level trigger.
When you are impersonating a principal by using the EXECUTE AS LOGIN statement, or within a server-
scoped module by using the EXECUTE AS clause, the scope of the impersonation is server-wide. This
means that, after the context switch, you can access any resource within the server on which the impersonated
login has permissions.
However, when you are impersonating a principal by using the EXECUTE AS USER statement, or within a
database-scoped module by using the EXECUTE AS clause, the scope of impersonation is restricted to the
database by default. This means that references to objects that are outside the scope of the database will
return an error.
Execute As:
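A minimal sketch of the EXECUTE AS clause on a scalar function (the function, table, and execution context here are assumptions; note that inline TVFs do not accept the EXECUTE AS clause, as the demonstration later in this lesson shows):

```sql
-- A sketch only; dbo.SensitiveData and the OWNER context are illustrative choices.
CREATE FUNCTION dbo.CountSensitiveRows ()
RETURNS int
WITH EXECUTE AS OWNER  -- run the function body under the owner's security context
AS BEGIN
    RETURN (SELECT COUNT(*) FROM dbo.SensitiveData);
END;
GO
```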
UDF Naming. Use two-part naming to qualify the name of any database objects that are referred to
within the function. You should also use two-part naming when you are choosing the name of the
function.
UDFs and Exception Handling. Avoid statements that will raise Transact-SQL errors, because
exception handling is not permitted within functions.
UDFs with Indexes. Consider the impact of using functions in combination with indexes. In
particular, note that a WHERE clause that uses a predicate, such as the following code example, is
likely to remove the usefulness of an index on CustomerID:
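For instance, a predicate of the following form (the function name is hypothetical) prevents SQL Server from seeking on an index on CustomerID, because the function must be evaluated for every row:

```sql
-- Wrapping the indexed column in a function call makes the predicate non-sargable,
-- so an index on CustomerID cannot be used for a seek.
SELECT c.CustomerID, c.AccountNumber
FROM Sales.Customer AS c
WHERE dbo.FormatCustomerID(c.CustomerID) = N'CUST-00042';
```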
For example, consider the function definition in the following code fragment:
Demonstration Steps
Alter the Execution Context of a Function
1. Ensure that the 20762C-MIA-DC and 20762C-MIA-SQL virtual machines are running, and then log
on to 20762C-MIA-SQL as ADVENTUREWORKS\Student with the password Pa55w.rd.
5. In the Connect to Server dialog box, in the Server name box, type MIA-SQL, and then click Connect.
8. In Solution Explorer, expand the Queries folder, and then double-click 41 - Demonstration 4A.sql.
9. Select the code under Step A to use the master database, and then click Execute.
10. Select the code under Step B to create a test login, and then click Execute.
11. Select the code under Step C to use the AdventureWorks database and create a user, and then click
Execute.
12. Select the code under Step D to create a function with default execution context, and then click
Execute.
13. Select the code under Step E to try to add WITH EXECUTE AS, and then click Execute.
14. Select the code under Step F to recreate the function as a multistatement table-valued function, and
then click Execute.
15. Select the code under Step G to select from the function, and then click Execute.
16. Select the code under Step H to drop the objects, and then click Execute.
17. Close SQL Server Management Studio without saving any changes.
Lesson 5
Alternatives to Functions
Functions are one option for implementing code. This lesson explores situations where other solutions
may be appropriate and helps you choose which solution to use.
Lesson Objectives
After completing this lesson, you will be able to:
Stored procedures can execute dynamic SQL statements. Functions are not permitted to execute dynamic
SQL statements.
Stored procedures can include detailed exception handling. Functions cannot contain exception handling.
Stored procedures can return multiple resultsets from a single stored procedure call. TVFs can return a
single rowset from a function call. There is no mechanism to permit the return of multiple rowsets from a
single function call.
Verify the correctness of the statement by placing a mark in the column to the right.
Statement Answer
Objectives
After completing this lab, you will be able to:
Create a function.
Password: Pa55w.rd
2. Review the Function Specifications: Phone Number section in the supporting documentation.
Results: After this exercise, you should have created a new FormatPhoneNumber function within the
dbo schema.
Results: After this exercise, you should have created a new IntegerListToTable function within the dbo
schema.
Best Practice: When working with functions, consider the following best practices:
Avoid calling multistatement TVFs for each row of a query. In many cases, you can dramatically
improve performance by extracting the code from the function into the surrounding query.
Use the WITH EXECUTE AS clause to override the security context of code that needs to perform
actions that the user who is executing the code does not have permission to perform.
Review Question(s)
Question: When you are using the EXECUTE AS clause, what privileges should you grant to
the login or user that is being impersonated?
Question: When you are using the EXECUTE AS clause, what privileges should you grant to
the login or user who is creating the code?
11-1
Module 11
Responding to Data Manipulation Via Triggers
Contents:
Module Overview 11-1
Lesson 1: Designing DML Triggers 11-2
Module Overview
Data Manipulation Language (DML) triggers are powerful tools that you can use to enforce domain,
entity, and referential data integrity, in addition to business logic. The enforcement of integrity helps you to build
reliable applications. In this module, you will learn what DML triggers are, how they enforce data integrity,
the different types of trigger that are available to you, and how to define them in your database.
Objectives
After completing this module, you will be able to:
Lesson 1
Designing DML Triggers
Before you begin to create DML triggers, you should become familiar with best practice design guidelines,
which help you to avoid making common errors.
Several types of DML trigger are available—this lesson goes through what they do, how they work, and
how they differ from Data Definition Language (DDL) triggers. DML triggers have to be able to work with
both the previous state of the database and its changed state. You will see how the inserted and deleted
virtual tables provide that capability.
DML triggers are often added after applications are built—so it’s important to check that a trigger will not
cause errors in the existing applications. The SET NOCOUNT ON command helps to avoid the side effects
of triggers.
Lesson Objectives
After completing this lesson, you will be able to:
Explain how AFTER triggers differ from INSTEAD OF triggers, and where each should be used.
Access both the “before” and “after” states of the data by using the inserted and deleted virtual
tables.
Logon triggers are a special form of trigger that fire when a new session is established. (There is no logoff
trigger.)
Note: Terminology: The word “fire” is used to describe the point at which a trigger is
executed as the result of an event.
Developing SQL Databases 11-3
Trigger Operation
The trigger and the statement that fires it are treated as a single operation, which you can roll back from
within the trigger. By rolling back an operation, you can undo the effect of a Transact-SQL statement if
the logic in your triggers determines that the statement should not have been executed. If the statement
is part of another transaction, the outer transaction is also rolled back.
Triggers can cascade changes through related tables in the database; however, in many cases, you can
execute these changes more efficiently by using cascading referential integrity constraints.
Unlike CHECK constraints, triggers can reference columns in other tables. For example, a trigger can use a
SELECT statement from another table to compare to the inserted or updated data, and to perform
additional actions, such as modifying the data or displaying a user-defined error message.
Triggers can evaluate the state of a table before and after a data modification, and take actions based on
that difference. For example, you may want to check that the balance of a customer’s account does not
change by more than a certain amount if the person processing the change is not a manager.
With triggers, you can also create custom error messages for when constraint violations occur. This could
make the messages that are passed to users more meaningful.
Multiple Triggers
With multiple triggers of the same type (INSERT, UPDATE, or DELETE) on a table, you can make multiple
different actions occur in response to the same modification statement. You might create multiple triggers
to separate the logic that each performs, but note that you do not have complete control over the order
in which they fire. You can only specify which triggers should fire first and last.
SQL Server triggers fire once per statement and must process all of the affected rows as a set. This differs
from some other database management systems, where triggers are written to target single rows and are
called multiple times when a statement affects multiple rows.
AFTER Triggers
AFTER triggers fire after the data modifications, which are part of the event to which they relate,
complete. This means that an INSERT, UPDATE, or DELETE statement executes and modifies the data in
the database. After that modification has completed, AFTER triggers associated with that event fire—but
still within the same operation that triggered them.
INSTEAD OF Triggers
An INSTEAD OF trigger is a special type of trigger that executes alternate code instead of executing the
statement from which it was fired.
When you use an INSTEAD OF trigger, only the code in the trigger is executed. The original INSERT,
UPDATE, or DELETE operation that caused the trigger to fire does not occur.
INSTEAD OF triggers are most commonly used to make views that are based on multiple base tables
updatable.
INSERT: the inserted virtual table holds details of the rows that have just been inserted. The
underlying table also contains those rows.
UPDATE:
o The inserted virtual table holds details of the modified versions of the rows. The underlying table
also contains those rows in the modified form.
o The deleted virtual table holds details of the rows from before the modification was made. The
underlying table holds the modified versions.
DELETE: the deleted virtual table holds details of the rows that have just been deleted. The
underlying table no longer contains those rows.
INSTEAD OF Triggers
An INSTEAD OF trigger can be associated with an event on a table. When you attempt an INSERT,
UPDATE, or DELETE statement that triggers the event, the inserted and deleted virtual tables hold details
of the modifications that must be made, but have not yet happened.
SET NOCOUNT ON
When you are adding a trigger to a table, you
must avoid affecting the behavior of applications
that are accessing the table, unless the intended
purpose of the trigger is to prevent misbehaving
applications from making inappropriate data
changes.
UPDATE Statement
UPDATE Customer
SET Customer.FullName = @NewName,
Customer.Address = @NewAddress
WHERE Customer.CustomerID = @CustomerID
AND Customer.Concurrency = @Concurrency;
11-6 Responding to Data Manipulation Via Triggers
In this case, the Concurrency column is a rowversion data type column. The application was designed so
that the update only occurs if the Concurrency column has not been altered. Because Concurrency is a
rowversion column, every modification to the row causes a change in its value.
When the application intends to modify a single row, it issues an UPDATE statement for that row. The
application then checks the count of updated rows that are returned by SQL Server. When the application
sees that only a single row has been modified, the application knows that only the row that it intended to
change was affected. It also knows that no other user had modified the row before the application read
the data.
A common problem when you add triggers is that, if the trigger also modifies rows (for example, by
writing an audit row into an audit table), the count of those modified rows is returned in addition to the
expected count. You can avoid this situation by using the SET NOCOUNT ON statement. Most triggers
should include this statement.
Note that you can disable and re-enable triggers by using the DISABLE TRIGGER and ENABLE TRIGGER statements, or the ALTER TABLE statement.
Note: Reminder: Constraints are rules that define allowed column or table values.
Constraints are checked before any data modification is attempted, so they often provide much higher
performance than is possible with triggers, particularly in ROLLBACK situations. You can use constraints
for relatively simple checks; triggers make it possible to check more complex logic.
The default context can be a security issue because it could be used by people who wish to execute
malicious code.
The following example shows how Student could be granted permissions equivalent to sysadmin:
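A sketch of the kind of statement involved (this assumes the code runs in a context that already holds permission to grant CONTROL SERVER, and that a Student login exists):

```sql
-- Granting CONTROL SERVER gives Student permissions broadly equivalent to sysadmin.
GRANT CONTROL SERVER TO Student;
```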
The Student user now has CONTROL SERVER permissions. Student has been able to grant permissions
that he or she could not normally grant. The code has granted escalated permissions.
Best Practice: To prevent triggers from firing under escalated privileges, you should first
understand what triggers you have in the database and server instance. You can use the
sys.triggers and sys.server_triggers views to find out. Secondly, use the DISABLE TRIGGER
statement to disable triggers that use escalated privileges.
Question: What reasons can you think of for deploying AFTER triggers?
Lesson 2
Implementing DML Triggers
The first lesson provided information about designing DML triggers. We now consider how to implement
the designs that have been created.
Lesson Objectives
After completing this lesson, you will be able to:
Multirow Inserts
In the code example on the slide, insertions for the Sales.Opportunity table are being audited to a table
called Sales.OpportunityAudit. Note that the trigger processes all inserted rows at the same time. A
common error when designing AFTER INSERT triggers is to write them with the assumption that only a
single row is being inserted.
CREATE TRIGGER
CREATE TRIGGER Sales.InsertCustomer
ON Sales.Customer
AFTER INSERT
AS
BEGIN
    SET NOCOUNT ON;
    -- Copy the newly inserted rows to an audit table as a set.
    -- (Sales.CustomerAudit is an illustrative audit table name.)
    INSERT INTO Sales.CustomerAudit (CustomerID, PersonID, StoreID, TerritoryID)
    SELECT CustomerID, PersonID, StoreID, TerritoryID
    FROM inserted;
END;
GO
DML Triggers
https://aka.ms/xqgne8
Demonstration Steps
Create an AFTER INSERT Trigger
1. Ensure that the 20762C-MIA-DC and 20762C-MIA-SQL virtual machines are running, and then log
on to 20762C-MIA-SQL as ADVENTUREWORKS\Student with the password Pa55w.rd.
5. In the Connect to Server dialog box, in the Server name box, type MIA-SQL, and then click
Connect.
9. Select the code under the Step A comment, and then click Execute.
10. Select the code under the Step B comment, and then click Execute.
11. Select the code under the Step C comment, and then click Execute.
12. Select the code under the Step D comment, and then click Execute.
13. Select the code under the Step E comment, and then click Execute. Note the error message.
14. Select the code under the Step F comment, and then click Execute.
Multirow Deletes
In the code example, rows in the Production.Product table are being flagged as discontinued if the
product subcategory row with which they are associated in the Production.SubCategory table is deleted.
Note that the trigger processes all deleted rows at the same time. A common error when designing AFTER
DELETE triggers is to write them with the assumption that only a single row is being deleted.
TRUNCATE TABLE
When rows are deleted from a table by using a DELETE statement, any AFTER DELETE triggers are fired
when the deletion is completed. TRUNCATE TABLE is an administrative option that removes all rows from
a table. It needs additional permissions above those required for deleting rows. It does not fire any AFTER
DELETE triggers that are associated with the table.
Demonstration Steps
Create and Test AFTER DELETE Triggers
1. In Solution Explorer, in the Queries folder, double-click the 22 - Demonstration 2B.sql script file to
open it.
2. Select the code under the Step A comment, and then click Execute.
3. Select the code under the Step B comment, and then click Execute.
4. Select the code under the Step C comment, and then click Execute.
5. Select the code under the Step D comment, and then click Execute. Note the error message.
6. Select the code under the Step E comment, and then click Execute.
The trigger can examine both the inserted and deleted virtual tables to determine what to do in response
to the modification.
Multirow Updates
In the code example, the Product.ProductReview table contains a column called ModifiedDate. The
trigger is being used to ensure that when changes are made to the Product.ProductReview table, the
value in the ModifiedDate column reflects when any changes last happened.
Note that the trigger processes all updated rows at the same time. A common error when designing
AFTER UPDATE triggers is to write them with the assumption that only a single row is being updated.
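A trigger matching this description might be sketched as follows (the trigger name is an assumption; note that although it updates the same table, it does not re-fire itself unless the RECURSIVE_TRIGGERS database option is ON):

```sql
-- AFTER UPDATE trigger: stamp ModifiedDate on every updated row, as a set.
CREATE TRIGGER TR_ProductReview_Update
ON Production.ProductReview
AFTER UPDATE
AS
BEGIN
    SET NOCOUNT ON;
    UPDATE pr
    SET pr.ModifiedDate = SYSDATETIME()
    FROM Production.ProductReview AS pr
    INNER JOIN inserted AS i
        ON pr.ProductReviewID = i.ProductReviewID;
END;
GO
```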
Demonstration Steps
Create and Test AFTER UPDATE Triggers
1. In Solution Explorer, in the Queries folder, double-click 23 - Demonstration 2C.sql to open it.
2. Select the code under the Step A comment, and then click Execute.
3. Select the code under the Step B comment, and then click Execute.
4. Select the code under the Step C comment, and then click Execute.
5. Select the code under the Step D comment, and then click Execute.
6. Select the code under the Step E comment, and then click Execute.
7. Select the code under the Step F comment, and then click Execute.
8. Select the code under the Step G comment, and then click Execute.
9. Select the code under the Step H comment, and then click Execute.
10. Select the code under the Step I comment, and then click Execute.
11. Select the code under the Step J comment, and then click Execute.
12. Select the code under the Step K comment, and then click Execute. Note that no triggers are
returned.
13. Do not close SQL Server Management Studio.
Question: Analyze this create trigger code and indicate the four errors. You can assume the table and
columns have been created.
IN dbo.SellingPrice
+ sp.TaxAmount
+ sp.FreightAmount
FROM dbo.SellingPrice AS sp
ON sp.SellingPriceID = i.SellingPriceId;
END;
GO
Lesson 3
Advanced Trigger Concepts
In the previous two lessons, you have learned to design and implement DML AFTER triggers. However, to
make effective use of these triggers, you have to know and understand some additional areas of
complexity that are related to them. This lesson considers when to use triggers, and when to consider
alternatives.
Lesson Objectives
After completing this lesson, you will be able to:
Explain how nested triggers work and how configurations might affect their operation.
Use the UPDATE function to build logic based on the columns being updated.
Describe the order in which multiple triggers fire when defined on the same object.
INSTEAD OF Triggers
INSTEAD OF triggers cause the execution of
alternate code instead of executing the statement
that caused them to fire.
Updatable Views
You can define INSTEAD OF triggers on views that have one or more base tables, where they can extend
the types of updates that a view can support.
This trigger executes instead of the original triggering action. INSTEAD OF triggers increase the variety of
types of updates that you can perform against a view. Each table or view is limited to one INSTEAD OF
trigger for each triggering action (INSERT, UPDATE, or DELETE).
You can specify an INSTEAD OF trigger on both tables and views. You cannot create an INSTEAD OF
trigger on views that have the WITH CHECK OPTION clause defined. You can perform operations on the
base tables within the trigger. This avoids the trigger being called again. For example, you could perform
a set of checks before inserting data, and then perform the insert on the base table.
INSTEAD OF Trigger
CREATE TRIGGER TR_ProductReview_Delete
ON Production.ProductReview
INSTEAD OF DELETE AS
BEGIN
SET NOCOUNT ON;
UPDATE pr
SET pr.ModifiedDate = SYSDATETIME()
FROM Production.ProductReview AS pr
INNER JOIN deleted AS d
    ON pr.ProductReviewID = d.ProductReviewID;
END;
Demonstration Steps
Create and Test an INSTEAD OF DELETE Trigger
1. In Solution Explorer, in the Queries folder, double-click the 31 - Demonstration 3A.sql script file to
open it.
2. Select the code under the Step A comment, and then click Execute.
3. Select the code under the Step B comment, and then click Execute.
4. Select the code under the Step C comment, and then click Execute.
5. Select the code under the Step D comment, and then click Execute.
6. Select the code under the Step E comment, and then click Execute.
7. Select the code under the Step F comment, and then click Execute.
8. Select the code under the Step G comment, and then click Execute.
9. Select the code under the Step H comment, and then click Execute.
10. Select the code under the Step I comment, and then click Execute.
11. Select the code under the Step J comment, and then click Execute.
12. Select the code under the Step K comment, and then click Execute.
13. Select the code under the Step L comment, and then click Execute. Note the error message.
14. Select the code under the Step M comment, and then click Execute. Note the error message.
15. Select the code under the Step N comment, and then click Execute.
16. Select the code under the Step O comment, and then click Execute.
17. Select the code under the Step P comment, and then click Execute.
18. Select the code under the Step R comment, and then click Execute.
19. Select the code under the Step S comment, and then click Execute.
20. Select the code under the Step U comment, and then click Execute.
21. Select the code under the Step V comment, and then click Execute.
Developing SQL Databases 11-17
A failure at any level of a set of nested triggers cancels the entire original statement, and all data
modifications are rolled back.
A nested trigger will not fire twice in the same trigger transaction; by default, a trigger does not call itself
in response to a second update to the same table made from within the trigger.
Complexity of Debugging
We noted in an earlier lesson that debugging triggers can sometimes be difficult. Nested triggers are
particularly difficult to debug. One common method that is used during debugging is to include PRINT
statements within the body of the trigger code so that you can determine where a failure occurred.
However, you should make sure these statements are only used during debugging phases.
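Applying this method to the trigger created earlier in this lesson, instrumentation might look like the following sketch (remember to remove the PRINT statements before release):

```sql
-- Debugging sketch: the module's earlier trigger with PRINT markers added.
ALTER TRIGGER TR_ProductReview_Delete
ON Production.ProductReview
INSTEAD OF DELETE AS
BEGIN
    SET NOCOUNT ON;
    PRINT 'TR_ProductReview_Delete: starting';   -- debugging only
    UPDATE pr SET pr.ModifiedDate = SYSDATETIME()
    FROM Production.ProductReview AS pr
    INNER JOIN deleted AS d
    ON pr.ProductReviewID = d.ProductReviewID;
    PRINT 'TR_ProductReview_Delete: completed';  -- debugging only
END;
```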
Direct Recursion
Direct recursion occurs when a trigger fires and
performs an action on the same table that causes
the same trigger to fire again.
For example, an application updates table T1, which causes trigger Trig1 to fire. Trigger Trig1 updates
table T1 again, which causes trigger Trig1 to fire again.
Indirect Recursion
Indirect recursion occurs when a trigger fires and performs an action that causes another trigger to fire on
a different table which, in turn, causes an update to occur on the original table which, in turn, causes the
original trigger to fire again.
For example, an application updates table T2, which causes trigger Trig2 to fire. Trig2 updates table T3,
which causes trigger Trig3 to fire. In turn, trigger Trig3 updates table T2, which causes trigger Trig2 to
fire again.
To prevent indirect recursion of this sort, turn off the nested triggers option at the server instance level.
Careful design and thorough testing are required to ensure that the 32-level nesting limit is not
exceeded.
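The configuration changes described above can be sketched as follows (the database name is illustrative):

```sql
-- Prevent indirect recursion: turn off nested triggers at the instance level.
EXEC sp_configure 'nested triggers', 0;
RECONFIGURE;

-- Direct recursion is controlled per database by the RECURSIVE_TRIGGERS
-- option (OFF by default); the database name here is an example.
ALTER DATABASE AdventureWorks SET RECURSIVE_TRIGGERS OFF;
```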
UPDATE Function
It is a common requirement to build logic that
only takes action if particular columns are being
updated.
You can use the UPDATE function to detect
whether a particular column is being updated in
the action of an UPDATE statement. For example,
you might want to take a particular action only
when the size of a product changes. The column is
referenced by the name of the column. The UPDATE function can be used in AFTER INSERT and
AFTER UPDATE triggers.
Change of Value
Note that the UPDATE function does not indicate if the value is actually changing. It only indicates if the
column is part of the list of columns in the SET clause of the UPDATE statement. To detect if the value in a
column is actually being changed to a different value, you must interrogate the inserted and deleted
virtual tables.
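Interrogating the inserted and deleted virtual tables to detect a genuine change of value can be sketched like this, reusing the Production.ListPriceAudit table from this lesson (the trigger name is illustrative):

```sql
-- Audit only rows where ListPrice actually changed value.
CREATE TRIGGER TR_Product_ListPriceChange
ON Production.Product
AFTER UPDATE AS
BEGIN
    SET NOCOUNT ON;
    INSERT INTO Production.ListPriceAudit (ProductID, ListPrice, ChangedWhen)
    SELECT i.ProductID, i.ListPrice, SYSDATETIME()
    FROM inserted AS i
    INNER JOIN deleted AS d
        ON i.ProductID = d.ProductID
    WHERE i.ListPrice <> d.ListPrice;  -- value changed, not merely assigned
END;
```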
COLUMNS_UPDATED Function
SQL Server also provides a function called COLUMNS_UPDATED. This function returns a bitmap that
indicates which columns are being updated. The values in the bitmap depend upon the positional
information for the columns. Hard-coding that sort of information in the code within a trigger is generally
not considered good coding practice because it affects the readability—and therefore the
maintainability—of your code. It also reduces the reliability of your code because schema changes to the
table could break it.
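For completeness, a minimal sketch of the positional approach follows; the column position is an assumption, and—as noted above—this style is fragile and generally discouraged:

```sql
-- Discouraged positional approach. Assumes ListPrice is, say, the 6th
-- column in the table definition, so its bit is 2^(6-1) = 32.
-- Any schema change to the table can silently break this test.
CREATE TRIGGER TR_Product_ColumnsUpdated
ON Production.Product
AFTER UPDATE AS
BEGIN
    SET NOCOUNT ON;
    IF (COLUMNS_UPDATED() & 32) = 32
        PRINT 'ListPrice was listed in the SET clause.';
END;
```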
In this example, the trigger uses the UPDATE function to identify updates to a particular column:
Update Function
CREATE TRIGGER TR_Product_Update_ListPriceAudit
ON Production.Product AFTER UPDATE AS
BEGIN
IF UPDATE(ListPrice)
BEGIN
INSERT INTO Production.ListPriceAudit (ProductID, ListPrice, ChangedWhen)
SELECT i.ProductID, i.ListPrice, SYSDATETIME()
FROM inserted AS i;
END;
END;
sp_settriggerorder
Developers often want to control the firing order
of multiple triggers that are defined for a single
event on a single object. For example, a developer
might create three AFTER INSERT triggers on the
same table, each implementing different business
rules or administrative tasks.
In general, code within one trigger should not
depend upon the order of execution of other triggers. Limited control of firing order is available through
the sp_settriggerorder system stored procedure. With sp_settriggerorder, you can specify the triggers
that will fire first and last from a set of triggers that all apply to the same event, on the same object.
The possible values for the @order parameter are First, Last, or None; None is the default. An
error will occur if you attempt to designate the same trigger as both First and Last.
For DML triggers, the possible values for the @stmttype parameter are INSERT, UPDATE, or DELETE.
sp_settriggerorder
EXEC sp_settriggerorder
@triggername = 'Production.TR_Product_Update_ListPriceAudit',
@order = 'First',
@stmttype = 'UPDATE';
Alternatives to Triggers
Triggers are useful in many situations, and are
sometimes necessary to handle complex logic.
However, triggers are sometimes used in situations
where alternatives might be preferable.
Checking Values
You could use triggers to check that values in
columns are valid or within given ranges. However,
in general, you should use CHECK constraints
instead—CHECK constraints perform this
validation before the data modification is
attempted.
If you are using triggers to check the correlation of values across multiple columns within a table, you
should generally create table-level CHECK constraints instead.
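A table-level CHECK constraint correlating two columns might be sketched as follows (table and column names are illustrative):

```sql
-- Declarative alternative to a trigger: validate the correlation of two
-- columns before the modification is attempted.
ALTER TABLE Sales.SalesOrderHeader
ADD CONSTRAINT CK_SalesOrderHeader_ShipDate
CHECK (ShipDate IS NULL OR ShipDate >= OrderDate);
```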
Defaults
You can use triggers to provide default values for columns when no values have been provided in INSERT
statements. However, you should generally use DEFAULT constraints for this instead.
Foreign Keys
You can use triggers to check the relationship between tables. However, you should generally use
FOREIGN KEY constraints for this.
Computed Columns
You can use triggers to maintain the value in one column based on the value in other columns. In general,
you should use computed columns or persisted computed columns for this.
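A persisted computed column that replaces such a trigger might look like this (table and column names are assumptions):

```sql
-- The engine maintains LineTotal automatically; no trigger is required.
-- PERSISTED stores the computed value rather than evaluating it per query.
ALTER TABLE Sales.OrderLine
ADD LineTotal AS (UnitPrice * OrderQty) PERSISTED;
```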
Precalculating Aggregates
You can use triggers to maintain precalculated aggregates in one table, based on the values in rows in
another table. In general, you should use indexed views to provide this functionality.
As another example, a FOREIGN KEY constraint cannot be defined on a column that is also used for
other purposes. Consider a column that holds an employee number only if another column holds the
value “E”. This typically indicates a poor database design, but you can use triggers to enforce this sort of
relationship.
Demonstration Steps
Replace a Trigger with a Computed Column
1. In Solution Explorer, in the Queries folder, double-click 32 - Demonstration 3B.sql to open the
script file.
2. Select the code under the Step A comment, and then click Execute.
3. Select the code under the Step B comment, and then click Execute.
4. Select the code under the Step C comment, and then click Execute.
5. Select the code under the Step D comment, and then click Execute.
6. Select the code under the Step E comment, and then click Execute.
7. Select the code under the Step F comment, and then click Execute.
8. Select the code under the Step G comment, and then click Execute.
Supporting Documentation
The Production.ProductAudit table is used to hold changes to high value products. The data to be
inserted in each column is shown in the following table:
Objectives
After completing this lab, you will be able to:
Create triggers
Modify triggers
Test triggers
Password: Pa55w.rd
Note: Inserts or deletes on the table do not have to be audited. Details of the current user
can be taken from the ORIGINAL_LOGIN() function.
3. Design a Trigger
2. In SQL Server Management Studio, review the existing structure of the Production.ProductAudit
table and the values required in each column, based on the supporting documentation.
3. Review the existing structure of the Production.Product table in SSMS.
Results: After this exercise, you should have created a new trigger. Tests should have shown that it is
working as expected.
Results: After this exercise, you should have altered the trigger. Tests should show that it is now working
as expected.
Review Question(s)
Question: How do constraints and triggers differ regarding timing of execution?
Module 12
Using In-Memory Tables
Contents:
Module Overview 12-1
Lesson 1: Memory-Optimized Tables 12-2
Module Overview
Microsoft® SQL Server® 2014 data management software introduced in-memory online transaction
processing (OLTP) features to improve the performance of OLTP workloads. Subsequent versions of SQL
Server add several enhancements, such as the ability to alter a memory-optimized table without
recreating it. Memory-optimized tables are primarily stored in memory, which improves performance by
reducing hard disk access.
Natively compiled stored procedures further improve performance over traditional interpreted Transact-
SQL.
Objectives
After completing this module, you will be able to:
Use memory-optimized tables to improve performance for latch-bound workloads.
Lesson 1
Memory-Optimized Tables
You can use memory-optimized tables as a way to improve the performance of latch-bound OLTP
workloads. Memory-optimized tables are stored in memory, and do not use locks to enforce concurrency
isolation. This dramatically improves performance for many OLTP workloads.
Lesson Objectives
After completing this lesson, you will be able to:
Can persist their data to disk as FILESTREAM data, or they can be nondurable.
Can be queried by using Transact-SQL through interop services that the SQL Server query processor
provides.
Most data types in memory-optimized tables are supported. However, some are not supported, including
text and image.
For more information about the data types that memory-optimized tables support, see the topic
Supported Data Types for In-Memory OLTP in Microsoft Docs:
Supported Data Types for In-Memory OLTP
http://aka.ms/jf3ob7
For information about features that are not supported, see the “Memory-Optimized Tables” section of the
topic Transact-SQL Constructs Not Supported by In-Memory OLTP in Microsoft Docs:
A table contains “hot” pages. For example, a table that contains a clustered index on an incrementing
key value will inherently suffer from concurrency issues because all insert transactions occur in the last
page of the index.
Repeatable read validation failures. These occur when a row that the transaction has read has
changed since the transaction began.
Serializable validation failures. These occur when a new (or phantom) row is inserted into the range
of rows that the transaction accesses while it is still in progress.
Commit dependency failures. These occur when a transaction has a dependency on another
transaction that has failed to commit.
You can also add a filegroup for memory-optimized data to a database on the Filegroups page of the
Database Properties dialog box in SQL Server Management Studio (SSMS).
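The Transact-SQL equivalent of those SSMS steps might be sketched as follows (the database name, filegroup name, and file path are illustrative):

```sql
-- Add a filegroup for memory-optimized data to an existing database,
-- then add a container (stored as a FILESTREAM-style folder) to it.
ALTER DATABASE MemDemo
ADD FILEGROUP MemFG CONTAINS MEMORY_OPTIMIZED_DATA;

ALTER DATABASE MemDemo
ADD FILE (NAME = 'MemDemo_mod', FILENAME = 'D:\Data\MemDemo_mod')
TO FILEGROUP MemFG;
```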
Durability
When you create a memory-optimized table, you
can specify the durability of the table data.
You can also specify a durability of SCHEMA_ONLY so that only the table definition is persisted. Any data
in the table will be lost in the event of the database server shutting down. The ability to set the durability
option to SCHEMA_ONLY is useful when the table is used for transient data, such as a session state table
in a web server farm.
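A nondurable session-state table of the kind described above might be sketched like this (table and column names are assumptions):

```sql
-- SCHEMA_ONLY: the table definition survives a restart, but the data
-- does not, which suits transient state such as web session data.
CREATE TABLE dbo.SessionState
(
    SessionID int NOT NULL PRIMARY KEY NONCLUSTERED,
    Payload varbinary(8000) NULL
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_ONLY);
```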
Primary Keys
All tables that have a durability option of SCHEMA_AND_DATA must include a primary key. You can
specify this inline for single-column primary keys, or you can specify it after all of the column definitions.
Memory-optimized tables do not support clustered primary keys. You must specify the word
NONCLUSTERED when declaring the primary key.
To create a memory-optimized table, execute a CREATE TABLE statement that has the
MEMORY_OPTIMIZED option set to ON, as shown in the following example:
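The example below is a representative sketch (the original listing was not preserved; object names are illustrative), showing an inline nonclustered primary key on a single column:

```sql
-- Durable memory-optimized table with a single-column, inline primary key.
CREATE TABLE dbo.ShoppingCart
(
    ShoppingCartID int NOT NULL PRIMARY KEY NONCLUSTERED,
    CreatedDate datetime2 NOT NULL
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);
```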
To create a memory-optimized table that has a composite primary key, you must specify the PRIMARY
KEY constraint after the column definitions, as shown in the following example:
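A composite primary key specified after the column definitions might look like this sketch (object names are assumptions):

```sql
-- Composite primary keys cannot be declared inline; the constraint
-- follows the column definitions instead.
CREATE TABLE dbo.ShoppingCartItem
(
    ShoppingCartID int NOT NULL,
    ProductID int NOT NULL,
    Quantity int NOT NULL,
    CONSTRAINT PK_ShoppingCartItem
        PRIMARY KEY NONCLUSTERED (ShoppingCartID, ProductID)
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);
```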
When you create a memory-optimized table that has a primary key, an index will be created for the
primary key. You can create up to seven other indexes in addition to the primary key. All memory-
optimized tables must include at least one index, which can be the index that was created for the primary
key.
12-6 Using In-Memory Tables
When you create a hash index for a primary key, or in addition to a primary key index, you must specify
the bucket count. Buckets are storage locations in which rows are stored. You apply an algorithm to the
indexed key values to determine the bucket in which the row is stored. When a bucket contains multiple
rows, a linked list is created by adding a pointer in the first row to the second row, then in the second row
to the third row, and so on.
To create a hash index, you must specify the BUCKET_COUNT value, as shown in the following example:
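Consistent with the note that follows, which refers to the OrderId and LineItem columns, the missing listing might be sketched as follows (the table name and bucket count are assumptions):

```sql
-- Hash index as the primary key; BUCKET_COUNT is mandatory and is
-- typically sized at one to two times the expected number of unique keys.
CREATE TABLE dbo.OrderLine
(
    OrderId int NOT NULL,
    LineItem int NOT NULL,
    Quantity int NOT NULL,
    CONSTRAINT PK_OrderLine PRIMARY KEY NONCLUSTERED
        HASH (OrderId, LineItem) WITH (BUCKET_COUNT = 1000000)
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);
```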
Note: If there is more than one column in the key of the hash index, the WHERE clause of
any query that uses the index must include equality tests for all columns in the key. Otherwise,
the query plan will have to scan the whole table. When querying the table that was created in the
preceding example, the WHERE clause should include equality tests for both the OrderId and
LineItem columns.
You can create hash indexes, in addition to the primary key, by specifying the indexes after the column
definitions, as shown in the following example:
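An additional hash index declared after the column definitions might be sketched like this (object names and bucket count are illustrative):

```sql
-- Secondary hash index alongside the primary key index.
CREATE TABLE dbo.CartItem
(
    CartID int NOT NULL PRIMARY KEY NONCLUSTERED,
    CustomerID int NOT NULL,
    INDEX ix_CartItem_CustomerID
        HASH (CustomerID) WITH (BUCKET_COUNT = 100000)
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);
```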
Nonclustered Indexes
Nonclustered indexes (also known as range indexes) use a latch-free variation of a binary tree (b-tree)
structure, called a “BW-tree,” to organize the rows based on key values. Nonclustered indexes support
equality seeks, range scans, and ordered scans.
You can create nonclustered indexes, in addition to the primary key, by specifying the indexes after the
column definitions, as shown in the following example:
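A nonclustered (range) index declared this way might be sketched as follows (object names are assumptions); unlike a hash index, it supports range and ordered scans on the key:

```sql
-- Nonclustered range index on a date column; no BUCKET_COUNT is needed.
CREATE TABLE dbo.CartEvent
(
    EventID int NOT NULL PRIMARY KEY NONCLUSTERED,
    EventDate datetime2 NOT NULL,
    INDEX ix_CartEvent_EventDate NONCLUSTERED (EventDate)
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);
```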
Note: SQL Server 2017 has improved the performance of nonclustered indexes on
memory-optimized tables, thereby reducing the time required to recover a database.
The index key contains more than one column, and you might use queries with predicates that do not
apply to all of the columns.
If you are sure that none of the above scenarios apply, you could consider using a hash index to optimize
equality seeks.
Note: You can have hash indexes and nonclustered indexes in the same table. Prior to SQL
Server 2017, there was a limit of eight indexes including the primary key for memory-optimized
tables. This limitation has been removed.
You can now specify options such as the filegroup; the new name for the original, unmigrated, disk-
based table; and whether to transfer the data from the original table to the new memory-optimized
table.
If you are migrating to a durable table, you must specify a primary key or create a new primary key at
this stage. You can also specify whether the index should be a hash index or not.
This step gives you the same options as primary key migration for each of the indexes on the table.
This step lists the options that you have specified in the previous stages and enables you to migrate
the table, or to create a script to migrate the table at a subsequent time.
To start Memory Optimization Advisor, in SQL Server Management Studio, right-click a table in Object
Explorer, and then select Memory Optimization Advisor.
Note: The Memory Optimization Advisor steps depend on the table. Actual pages may vary
from those described above.
Interpreted Transact-SQL
Transact-SQL in queries and stored procedures
(other than native stored procedures) is referred to
as interpreted Transact-SQL. You can use
interpreted Transact-SQL statements to access
memory-optimized tables in the same way as
traditional disk-based tables. The SQL Server query
engine provides an interop layer that does the necessary interpretation to query the compiled in-memory
table. You can use this technique to create queries that access both memory-optimized tables and disk-
based tables—for example, by using a JOIN clause. When you access memory-optimized tables, you can
use most of the Transact-SQL operations that you use when accessing disk-based tables.
For more information about the Transact-SQL operations that are not possible when you access memory-
optimized tables, see the topic Accessing Memory-Optimized Tables Using Interpreted Transact-SQL in
Microsoft Docs:
Demonstration Steps
Create a Database with a Filegroup for Memory-Optimized Data
1. Ensure that the 20762C-MIA-DC and 20762C-MIA-SQL virtual machines are running, and then log
on to 20762C-MIA-SQL as ADVENTUREWORKS\Student with the password Pa55w.rd
2. In the D:\Demofiles\Mod12 folder, right-click Setup.cmd, and then click Run as administrator.
4. Start SQL Server Management Studio, and then connect to the MIA-SQL database engine instance by
using Windows authentication.
5. In Object Explorer, under MIA-SQL, right-click Databases, and then click New Database.
6. In the New Database dialog box, in the Database name box, type MemDemo.
7. On the Filegroups page, in the MEMORY OPTIMIZED DATA section, click Add Filegroup.
8. In the Name box, type MemFG. Note that the filegroups in this section are used to contain
FILESTREAM files because memory-optimized tables are persisted as streams.
9. On the General page, click Add to add a database file. Then add a new file that has the following
properties:
o Filegroup: MemFG
10. In the Script drop-down list, click Script Action to New Query Window.
11. In the New Database dialog box, click Cancel to view the script file that has been generated.
12. Review the script, noting the syntax that has been used to create a filegroup for memory-optimized
data. You can use similar syntax to add a filegroup to an existing database.
13. Click Execute to create the database.
2. In the Open Project dialog box, navigate to D:\Demofiles\Mod12\Demo12.ssmssln, and then click
Open.
4. Select the code under Step 1 - Create a memory-optimized table, and then click Execute.
5. Select the code under Step 2 - Create a disk-based table, and then click Execute.
6. Select the code under Step 3 - Insert 500,000 rows into DiskTable, and then click Execute.
This code uses a transaction to insert rows into the disk-based table.
7. When code execution is complete, look at the lower right of the query editor status bar, and note
how long it has taken.
8. Select the code under Step 4 - Verify DiskTable contents, and then click Execute.
10. Select the code under Step 5 - Insert 500,000 rows into MemoryTable, and then click Execute.
This code uses a transaction to insert rows into the memory-optimized table.
11. When code execution is complete, look at the lower right of the query editor status bar and note how
long it has taken. It should be significantly lower than the time that it takes to insert data into the
disk-based table.
12. Select the code under Step 6 - Verify MemoryTable contents, and then click Execute.
15. Note how long it has taken for this code to execute.
16. Select the code under Step 8 - Delete rows from MemoryTable, and then click Execute.
17. Note how long it has taken for this code to execute. It should be significantly lower than the time that
it takes to delete rows from the disk-based table.
18. Select the code under Step 9 - View memory-optimized table stats, and then click Execute.
19. Close SQL Server Management Studio, without saving any changes.
Question: You are creating an index for a date column in a memory-optimized table. What
is likely to be the most suitable type of index? Explain your reasons.
Lesson 2
Natively Compiled Stored Procedures
Natively compiled stored procedures are stored procedures that are compiled into native code. They are
written in traditional Transact-SQL code, but are compiled when they are created rather than when they
are executed, which improves performance.
Lesson Objectives
After completing this lesson, you will be able to:
For more information about natively compiled stored procedures, see the topic Natively Compiled Stored
Procedures in Microsoft Docs:
Natively compiled stored procedures perform best when the logic requires aggregation functions,
nested-loop joins, multistatement SELECT, INSERT, UPDATE, and DELETE operations, or other complex
expressions. They also work well with procedural logic—for example, conditional statements and loop
constructs.
Finally, for best results, do not use named parameters with natively compiled stored procedures. Instead,
use ordinal parameters, which are referred to by position.
Note: Natively compiled stored procedures only work with in-memory tables.
For more information about when to use a natively compiled stored procedure, see Microsoft Docs:
SCHEMABINDING
EXECUTE AS
SNAPSHOT. Using this isolation level, all data that the transaction reads is consistent with the version
that was stored at the start of the transaction. Data modifications that other, concurrent transactions
have made are not visible, and attempts to modify rows that other transactions have modified result
in an error.
REPEATABLE READ. Using this isolation level, every read is repeatable until the end of the
transaction. If another, concurrent transaction has modified a row that the transaction had read, the
transaction will fail to commit due to a repeatable read validation error.
SERIALIZABLE. Using this isolation level, all data is consistent with the version that was stored at the
start of the transaction, and repeatable reads are validated. In addition, the insertion of “phantom”
rows by other, concurrent transactions will cause the transaction to fail.
The following code example shows a CREATE PROCEDURE statement that is used to create a natively
compiled stored procedure:
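The example below is a representative sketch (the original listing was not preserved; object names are assumptions). Note the mandatory NATIVE_COMPILATION and SCHEMABINDING options and the ATOMIC block, which must state an isolation level and language:

```sql
-- Natively compiled procedure inserting into a memory-optimized table.
CREATE PROCEDURE dbo.usp_AddCartItem
    @ShoppingCartID int,
    @ProductID int,
    @Quantity int
WITH NATIVE_COMPILATION, SCHEMABINDING
AS
BEGIN ATOMIC WITH
(
    TRANSACTION ISOLATION LEVEL = SNAPSHOT,
    LANGUAGE = N'us_english'
)
    INSERT INTO dbo.ShoppingCartItem (ShoppingCartID, ProductID, Quantity)
    VALUES (@ShoppingCartID, @ProductID, @Quantity);
END;
```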
Some features are not supported in native stored procedures. For information about features that are not
supported, see the “Natively Compiled Stored Procedures and User-Defined Functions” section in the
topic Transact-SQL Constructs Not Supported by In-Memory OLTP in the SQL Server Technical
Documentation:
Execution Statistics
Execution statistics for natively compiled stored
procedures are not enabled by default. Collecting
statistics has a small performance impact, so you
should enable the option only when you
need it. You can enable or disable the collection of
need it. You can enable or disable the collection of
statistics using sys.sp_xtp_control_proc_exec_stats.
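Enabling and disabling collection can be done as follows:

```sql
-- Enable instance-wide collection of execution statistics for
-- natively compiled stored procedures.
EXEC sys.sp_xtp_control_proc_exec_stats @new_collection_value = 1;

-- Disable collection again when finished, to avoid the overhead.
EXEC sys.sp_xtp_control_proc_exec_stats @new_collection_value = 0;
```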
Use a dynamic management view (DMV) to collect statistics, after you have enabled the collection of
statistics.
sys.dm_exec_procedure_stats
SELECT *
FROM sys.dm_exec_procedure_stats;
Note: Statistics are not automatically updated for in-memory tables. You must use the
UPDATE STATISTICS command to update specific tables or indexes, or sp_updatestats to
update all statistics.
Demonstration Steps
Create a Natively Compiled Stored Procedure
1. Ensure that the 20762C-MIA-DC and 20762C-MIA-SQL virtual machines are running, and then log
on to 20762C-MIA-SQL as ADVENTUREWORKS\Student with the password Pa55w.rd
3. Start SQL Server Management Studio, and then connect to the MIA-SQL database engine instance by
using Windows authentication.
5. In the Open Project dialog box, navigate to D:\Demofiles\Mod12\Demo12.ssmssln, and then click
Open.
8. Select the code under Step 2 - Create a native stored proc, and then click Execute.
9. Select the code under Step 3 - Use the native stored proc, and then click Execute.
10. Note how long it has taken for the stored procedure to execute. This should be significantly lower
than the time that it takes to insert data into the memory-optimized table by using a Transact-SQL
INSERT statement.
11. Select the code under Step 4 - Verify MemoryTable contents, and then click Execute.
13. Close SQL Server Management Studio without saving any changes.
Verify the correctness of the statement by placing a mark in the column to the right.
Statement Answer
Objectives
After completing this lab, you will be able to:
Password: Pa55w.rd
3. When you are prompted, click Yes to confirm that you want to run the command file, and then wait
for the script to finish.
2. Add a file for memory-optimized data to the InternetSales database. You should store the file in the
filegroup that you created in the previous step.
o SessionID: integer
o TimeAdded: datetime
o CustomerKey: integer
o ProductKey: integer
o Quantity: integer
3. The table should include a composite primary key nonclustered index on the SessionID and
ProductKey columns.
4. To test the table, insert the following rows, and then write and execute a SELECT statement to return
all of the rows.
SessionID  TimeAdded  CustomerKey  ProductKey  Quantity
1          <Time>     2            3           1
1          <Time>     2            4           1
Results: After completing this exercise, you should have created a memory-optimized table and a natively
compiled stored procedure in a database with a filegroup for memory-optimized data.
4. To test the AddItemToCart procedure, write and execute a Transact-SQL statement that calls
AddItemToCart to add the following items, and then write and execute a SELECT statement to return
all of the rows in the ShoppingCart table.
SessionID  TimeAdded  CustomerKey  ProductKey  Quantity
1          <Time>     2            3           1
1          <Time>     2            4           1
3          <Time>     2            3           1
3          <Time>     2            4           1
6. To test the EmptyCart procedure, write and execute a Transact-SQL statement that calls EmptyCart
to delete any items where SessionID is equal to 3, and then write and execute a SELECT statement to
return all of the rows in the ShoppingCart table.
Results: After completing this exercise, you should have created a natively compiled stored procedure.
Review Question(s)