SQL Server Materials
RDBMS stands for Relational Database Management System. RDBMS is the basis for SQL,
and for all modern database systems like MS SQL Server, IBM DB2, Oracle, MySQL, and
Microsoft Access.
A database in SQL Server is made up of a collection of tables that stores a specific set of
structured data. A table contains a collection of rows, also referred to as records or tuples, and
columns, also referred to as attributes. Each column in the table is designed to store a certain
type of information, for example, dates, names, dollar amounts, and numbers.
A computer can have one or more instances of SQL Server installed. Each instance
of SQL Server can contain one or many databases. Within a database, there are one or many
object ownership groups called schemas. Within each schema, there are database objects such
as tables, views, and stored procedures. Some objects such as certificates and asymmetric
keys are contained within the database, but are not contained within a schema. SQL Server
databases are stored in the file system in files. Files can be grouped into filegroups. When
people gain access to an instance of SQL Server they are identified as a login. When people
gain access to a database they are identified as a database user. A database user can be based
on a login. If contained databases are enabled, a database user can be created that is not based
on a login.
A user that has access to a database can be given permission to access the objects in the
database. Though permissions can be granted to individual users, we recommend creating
database roles, adding the database users to the roles, and then granting access permissions to the
roles. Granting permissions to roles instead of users makes it easier to keep permissions
consistent and understandable as the number of users grows and continually changes.
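As a quick illustration of this recommendation, the following sketch creates a database role, adds an existing database user to it, and grants the role read access to a schema. The role, user, and schema names here are only examples:
-- Create a database role and add an existing database user to it
CREATE ROLE SalesReaders;
ALTER ROLE SalesReaders ADD MEMBER Maria;        -- Maria is a sample database user

-- Grant permissions to the role rather than to the individual user
GRANT SELECT ON SCHEMA::Sales TO SalesReaders;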
System databases
master: Records all the system-level information for an instance of SQL Server.
msdb: Used by SQL Server Agent for scheduling alerts and jobs.
model: Used as the template for all databases created on the instance of SQL Server. Modifications made to the model database, such as database size, collation, recovery model, and other database options, are applied to any databases created afterward.
Resource: A read-only database that contains system objects that are included with SQL Server. System objects are physically persisted in the Resource database, but they logically appear in the sys schema of every database.
SQL Server does not support users directly updating the information in system objects such
as system tables, system stored procedures, and catalog views. Instead, SQL Server provides
a complete set of administrative tools that let users fully administer their system and manage
all users and objects in a database. You should not code Transact-SQL statements that directly
query the system tables unless that is the only way to obtain the information that is required
by the application. Instead, applications should obtain catalog and system information by
using the following:
System catalog views
SQL-SMO
Windows Management Instrumentation (WMI) interface
Catalog functions, methods, attributes, or properties of the data API used in the
application, such as ADO, OLE DB, or ODBC.
Transact-SQL system stored procedures and built-in functions.
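For example, rather than querying system tables directly, metadata can be read through catalog views such as sys.tables and sys.schemas; a small sketch run against any user database:
-- List user tables and their schemas using catalog views
SELECT s.name AS schema_name, t.name AS table_name, t.create_date
FROM sys.tables AS t
JOIN sys.schemas AS s ON t.schema_id = s.schema_id
ORDER BY s.name, t.name;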
Contained Databases
A contained database is a database that is isolated from other databases and from the instance
of SQL Server that hosts the database. SQL Server 2012 helps users isolate their databases
from the instance in four ways.
Much of the metadata that describes a database is maintained in the database (in
addition to, or instead of, maintaining metadata in the master database).
All metadata is defined using the same collation.
User authentication can be performed by the database, reducing the database's
dependency on the logins of the instance of SQL Server.
The SQL Server environment (DMVs, XEvents, and so on) reports and can act upon
containment information.
Some features of partially contained databases, such as storing metadata in the database,
apply to all SQL Server 2012 databases. Some benefits of partially contained databases, such
as database-level authentication and catalog collation, must be enabled before they are
available. Partial containment is enabled using the CREATE DATABASE and ALTER
DATABASE statements or by using SQL Server Management Studio. This topic contains the
following sections.
Partially Contained Database Concepts
Components of the Partially Contained Database
Containment
Benefits of using Partially Contained Databases
Limitations
Identifying Database Containment
Database boundary: The boundary between a database and the instance of SQL Server, and the
boundary between a database and other databases.
Contained: An entity that exists entirely within the database boundary.
Uncontained: An entity that crosses the database boundary, or that relies on something outside it.
Non-contained database: A database that has not been enabled for containment.
Contained user: A database user that is not based on a login; it is authenticated by the database itself.
Security note
Enabling partially contained databases delegates control over access to the instance of SQL
Server to the owners of the database.
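As noted earlier, partial containment is enabled with sp_configure at the instance level and ALTER DATABASE at the database level. A minimal sketch, assuming an existing database named Sales:
-- Allow contained database authentication at the instance level
EXEC sp_configure 'contained database authentication', 1;
RECONFIGURE;

-- Switch an existing database to partial containment
ALTER DATABASE Sales SET CONTAINMENT = PARTIAL;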
DATABASE BOUNDARY
Because partially contained databases separate the database functionality from that of the
instance, there is a clearly defined line between these two elements called the database
boundary.
Inside of the database boundary is the database model, where the databases are developed and
managed. Examples of entities located inside of the database boundary include system tables
such as sys.tables, contained database users with passwords, and user tables in the current
database referenced by a two-part name.
Outside of the database boundary is the management model, which pertains to instance-level
functions and management. Examples of entities located outside of the database boundary
include system tables such as sys.endpoints, users mapped to logins, and user tables in another
database referenced by a three-part name.
CONTAINMENT
User entities that reside entirely within the database are considered contained. Any entities
that reside outside of the database, or rely on interaction with functions outside of the
database, are considered uncontained.
In general, user entities fall into the following categories of containment:
Fully contained user entities (those that never cross the database boundary), for
example sys.indexes. Any code that uses these features or any object that references
only these entities is also fully contained.
Uncontained user entities (those that cross the database boundary), for example
sys.server_principals or a server principal (login) itself. Any code that uses these
entities or any function that references these entities is uncontained.
There are issues and complications associated with the non-contained databases that can be
resolved by using a partially contained database.
Database Movement
One of the problems that occurs when moving databases is that some important information
can be unavailable when a database is moved from one instance to another.
For example, login information is stored within the instance instead of in the database. When
you move a non-contained database from one instance to another instance of SQL Server, this
information is left behind. You must identify the missing information and move it with your
database to the new instance of SQL Server. This process can be difficult and time-
consuming.
The partially contained database can store important information in the database so the
database still has the information after it is moved.
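For example, in a partially contained database the application's user can be created with its own password, so no server login has to be recreated after the database is moved. The user name, password, and permissions below are only illustrative:
-- Run inside the partially contained database
CREATE USER AppUser WITH PASSWORD = 'Str0ng!Passw0rd';
GRANT SELECT, INSERT, UPDATE ON SCHEMA::dbo TO AppUser;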
note
A partially contained database can provide documentation describing those features that are
used by a database that cannot be separated from the instance. This includes a list of other
interrelated databases, system settings that the database requires but cannot be contained, and
so on.
Database Administration
Maintaining database settings in the database, instead of in the master database, lets each
database owner have more control over their database, without giving the database
owner sysadmin permission.
Limitations
Partially contained databases do not support the following features:
Replication, change data capture, and change tracking
Numbered procedures
Schema-bound objects that depend on built-in functions with collation changes
Binding changes resulting from collation changes, including references to objects,
columns, symbols, or types
caution
Temporary stored procedures are currently permitted. Because temporary stored procedures
breach containment, they are not expected to be supported in future versions of contained
databases.
sys.dm_db_uncontained_entities
This view shows any entities in the database that have the potential to be uncontained, such
as those that cross the database boundary. This includes those user entities that may use
objects outside the database model. However, because the containment of some entities (for
example, those using dynamic SQL) cannot be determined until run time, the view may show
some entities that are not actually uncontained.
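A simple way to review these potentially uncontained items is to query the view and join it back to sys.objects for readable names; a sketch:
-- List potentially uncontained entities in the current database
SELECT ue.*, o.name AS object_name
FROM sys.dm_db_uncontained_entities AS ue
LEFT JOIN sys.objects AS o ON ue.major_id = o.object_id;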
database_uncontained_usage event
This XEvent occurs whenever an uncontained entity is identified at run time. This includes
entities that originate in client code. This XEvent occurs only for actual uncontained entities.
However, the event only occurs at run time. Therefore, any uncontained user entities you
have not run will not be identified by this XEvent.
The PRIMARY KEY constraint uniquely identifies each record in a database table.
Primary keys must contain unique values. A primary key column cannot contain NULL
values. Each table should have a primary key, and each table can have only ONE primary key.
The following types of keys in SQL are used to fetch records from tables and to
create relationships among tables or views.
1. Super Key
A super key is a set of one or more columns that can be used to identify a record uniquely
in a table. Example: primary keys, unique keys, and alternate keys are subsets of super keys.
2. Candidate Key
A candidate key is a set of one or more fields/columns that can identify a record uniquely in
a table. There can be multiple candidate keys in one table. Each candidate key can work as
the primary key.
Example: In the diagram below, ID, RollNo, and EnrollNo are candidate keys, since all
three fields can work as the primary key.
3. Primary Key
A primary key is a set of one or more fields/columns of a table that uniquely identifies a record
in a database table. It cannot accept NULL or duplicate values. Only one candidate key can be
the primary key.
4. Alternate Key
An alternate key is a key that could work as a primary key; it is a candidate key
that is not currently the primary key.
Example: In the diagram below, RollNo and EnrollNo become alternate keys when we define
ID as the primary key.
5. Composite/Compound Key
A composite (or compound) key is a key made up of two or more fields/columns that together
uniquely identify a record in a table.
6. Unique Key
A unique key is a set of one or more fields/columns of a table that uniquely identify a record in
a database table. It is like a primary key, but it can accept one NULL value and it cannot have
duplicate values. For more help, refer to the article Difference between primary key and unique
key.
7. Foreign Key
A foreign key is a field in a database table that is a primary key in another table. It can accept
multiple NULL and duplicate values. For more help, refer to the article Difference between primary
key and foreign key.
Example: We can have a DeptID column in the Employee table which points to the DeptID
column in a Department table, where it is the primary key.
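A minimal sketch of that Department/Employee relationship (table and column names are only illustrative):
CREATE TABLE Department
(DeptID int NOT NULL PRIMARY KEY,
DeptName varchar(100) NOT NULL)

CREATE TABLE Employee
(EmpID int NOT NULL PRIMARY KEY,
EmpName varchar(100) NOT NULL,
DeptID int NULL,                                -- foreign key; NULLs and duplicates allowed
CONSTRAINT fk_Employee_Dept FOREIGN KEY (DeptID)
REFERENCES Department (DeptID))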
Lab Exercise:
Exercise 1: Verify SQL Server Component Installation
A new instance of SQL Server has been installed by the IT department at AdventureWorks. It will be used
by the new direct marketing company. The SQL Server named instance is called MKTG. In the first
exercise, you need to verify that the required SQL Server components have been installed.
1. Check that Database Engine and Reporting Services have been installed for the MKTG instance.
2. Note the services that are installed for the default instance.
3. Ensure that all required services including SQL Server Agent are started and set to autostart for both
instances.
Task 1: Check that Database Engine and Reporting Services have been installed for the MKTG
instance
• Check the installed list of services for the MKTG instance and ensure that the database engine and
Reporting Services have been installed for the MKTG instance.
Task 2: Note the services that are installed for the default instance
• Note the list of services that are installed for the default instance.
Task 3: Ensure that all required services including SQL Server Agent are started and set to
autostart for both instances
• Ensure that all the MKTG services are started and set to autostart. (Ignore the Full Text Filter Daemon at
this time).
• Ensure that all the services for the default instance are set to autostart. (Ignore the Full Text Filter
Daemon at this time).
Task 1: Change the service account for the MKTG database engine
• Change the service account for the MKTG database engine service to AdventureWorks\PWService using
the properties page for the service.
Task 2: Change the service account for the MKTG SQL Server Agent
• Change the service account for the MKTG SQL Server Agent service to AdventureWorks\PWService
using the properties page for the service and then restart the service.
Task 1: Enable the named pipes protocol for the default instance
• If necessary, enable the named pipes protocol for the default database engine instance using the
Protocols window.
Task 2: Enable the named pipes protocol for the MKTG instance
• If necessary, enable the named pipes protocol for the MKTG database engine instance using the
Protocols window.
Database Normalization
Database normalization is the process of efficiently organizing data in a database. There are
two goals of the normalization process:
Eliminating redundant data, for example, storing the same data in more than one
table.
Ensuring data dependencies make sense.
Both of these are worthy goals as they reduce the amount of space a database consumes and
ensure that data is logically stored. Normalization consists of a series of guidelines that help
guide you in creating a good database structure.
Normalization guidelines are divided into normal forms; think of form as the format or the
way a database structure is laid out. The aim of normal forms is to organize the database
structure so that it complies with the rules of first normal form, then second normal form, and
finally third normal form.
It's your choice to take it further and go to fourth normal form, fifth normal form, and so on,
but generally speaking, third normal form is enough.
The database community has developed a series of guidelines for ensuring that databases are
normalized. These are referred to as normal forms and are numbered from one (the lowest
form of normalization, referred to as first normal form or 1NF) through five (fifth normal
form or 5NF). In practical applications, you'll often see 1NF, 2NF, and 3NF along with the
occasional 4NF. Fifth normal form is very rarely seen and won't be discussed in this article.
First normal form (1NF) sets the very basic rules for an organized database:
Eliminate duplicative columns from the same table.
Create separate tables for each group of related data and identify each row with a
unique column or set of columns (the primary key).
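As a small illustration of these 1NF rules, a customer table with repeating phone columns can be split so that the repeating group lives in its own table. All names here are hypothetical:
-- Not in 1NF: duplicative columns holding the same kind of data
CREATE TABLE Customer_Flat
(CustID int NOT NULL PRIMARY KEY,
CustName varchar(100) NOT NULL,
Phone1 varchar(20),
Phone2 varchar(20))

-- 1NF: the repeating group moves to its own table with a proper key
CREATE TABLE Customer
(CustID int NOT NULL PRIMARY KEY,
CustName varchar(100) NOT NULL)

CREATE TABLE CustomerPhone
(CustID int NOT NULL REFERENCES Customer (CustID),
Phone varchar(20) NOT NULL,
CONSTRAINT pk_CustomerPhone PRIMARY KEY (CustID, Phone))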
The Boyce-Codd normal form (BCNF), also referred to as the "third and a half (3.5) normal form",
adds one more requirement: every determinant in the table must be a candidate key.
Lab Setup
For this lab, you will use the available virtual machine environment. Before you begin the lab, you
must complete the following steps:
1. Revert the virtual machines as per the instructions in D:\10775A_Labs\Revert.txt.
2. In the virtual machine, click Start, click All Programs, click Microsoft SQL Server 2012, and
click SQL Server Management Studio.
3. In Connect to Server window, type Proseware in the Server name text box.
4. In the Authentication drop-down list box, select Windows Authentication and click Connect.
5. In the File menu, click Open, and click Project/Solution.
6. In the Open Project window, open the project
D:\10775A_Labs\10775A_02_PRJ\10775A_02_PRJ.ssmssln.
7. From the View menu, click Solution Explorer. In Solution Explorer, double-click the query 00-
Setup.sql. When the query window opens, click Execute on the toolbar.
Lab Scenario
You have reviewed the additional instance of SQL Server. A system administrator at
AdventureWorks has expressed some concerns that the existing server may not have enough
memory or I/O capacity to support this new SQL Server instance and is reviewing a new I/O
subsystem. As the database administrator, you need to review the available server memory and
the memory allocated to each of the existing SQL Server instances. You need to ensure that the
I/O subsystem of the new server is capable of running SQL Server and the required workload
correctly.
Supporting Documentation
Required Memory Configuration
• 1.5GB reserved for operating system.
• 60% of the remaining memory as the maximum value for the AdventureWorks server instance.
• 40% of the remaining memory as the maximum value for the Proseware server instance.
• Configure minimum memory as zero for both instances.
Required SQLIOSIM Configuration
• Drive D with a 100MB data file that grows by 20MB increments to a 200MB maximum size.
• Drive L with a 50MB log file that grows by 10MB increments to a 100MB maximum size.
• Cycle Duration (sec) set to 60 seconds.
• Delete Files at Shutdown should be selected.
Required SQLIO Tests
• Drive to be tested is D.
• Test 64KB sequential reads for 60 seconds.
• Test 8KB random writes for 60 seconds.
Exercise 1: Adjust Memory Configuration
Scenario
The Adventure Works Marketing server has an existing default Microsoft SQL Server 2012
instance installed and the new MKTG instance. You need to check the total memory available on
the server and how much memory has been allocated to each of the two existing SQL Server
instances. You should then decide if the memory allocation is appropriate. If not, make the
required changes to the memory configuration.
The main tasks for this exercise are as follows:
1. Check total server memory.
2. Check the memory allocated to the default instance.
3. Check the memory allocated to the MKTG instance.
4. Decide if the memory allocation is appropriate. If not, make the required changes to the memory
configuration.
Task 1: Check total server memory
• Retrieve the installed memory (RAM) value from the properties of the computer.
Task 2: Check the memory allocated to the default instance
• Using the properties of the AdventureWorks server instance in SSMS, retrieve the minimum and
maximum server memory settings.
Task 3: Check the memory allocated to the MKTG instance
• Using the properties of the Proseware server instance in SSMS, retrieve the minimum and
maximum server memory settings.
Task 4: Decide if the memory allocation is appropriate. If not, make the required
changes to the memory configuration
• Review the Required Memory Configuration from the Supporting Documentation.
• Alter the Memory Configuration for both SQL Server instances as per the requirements. You will
need to work out how much memory should be used for all SQL Server instances and apportion
the memory based on the requirements in the Supporting Documentation.
Note While reducing the max server memory might require restarting SQL Server, there is no need
to restart the servers at this point in the exercise.
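If you prefer to script the change rather than use the instance properties pages, maximum and minimum server memory can be set with sp_configure. The sketch below uses placeholder MB values that you would derive from the installed RAM and the 60/40 split in the Supporting Documentation:
-- Run against each instance; the MB values below are placeholders
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max server memory (MB)', 3072;   -- e.g. 60% of remaining memory
EXEC sp_configure 'min server memory (MB)', 0;
RECONFIGURE;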
Exercise 2: Pre-installation Stress Testing
Scenario
After you have reviewed allocated memory on the server, you need to test whether the new I/O
subsystem is capable of running SQL Server successfully. In this exercise, you need to use the
SQLIOSIM utility for stress testing to ensure the stability of SQL Server performance.
The main tasks for this exercise are as follows:
1. Configure SQLIOSIM.
2. Execute SQLIOSIM.
3. Review the results from executing SQLIOSIM.
Task 1: Configure SQLIOSIM
• Install SQLIOSIM from the file D:\10775A_Labs\10775A_02_PRJ\sqliosimx64.exe (Make sure
that you use the Run as administrator option).
• Configure SQLIOSIM as per requirements in the Supporting Documentation.
Task 2: Execute SQLIOSIM
• Execute SQLIOSIM based upon the configured parameters.
Task 3: Review the results from executing SQLIOSIM
• If any errors are returned in red, review the errors.
• Locate the final summary for each of the drives and note the average I/O duration in milliseconds.
Module 3:
Introduction to SQL
DQL- Data Query Language commands are used to get data from the database and
impose ordering upon it.
DML- Data Manipulation Language commands allow users to move data into and
out of a database and to modify the data in the database.
DDL- Data Definition Language is used to create, alter, and delete database objects.
DCL- Data Control Language consists of commands that control user access to
database objects.
TCL- Transaction Control Language commands allow users to control
transactions.
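One representative statement per category, using a hypothetical Persons table, might look like this:
CREATE TABLE Persons (P_Id int, LastName varchar(255))        -- DDL
INSERT INTO Persons (P_Id, LastName) VALUES (1, 'Hansen')     -- DML
SELECT LastName FROM Persons ORDER BY LastName                -- DQL
GRANT SELECT ON Persons TO SomeUser                           -- DCL
BEGIN TRANSACTION                                             -- TCL
COMMIT TRANSACTION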
The data type specifies what type of data the column can hold. For a complete reference of all
the data types available in MS Access, MySQL, and SQL Server, go to our complete Data
Types reference.
Now we want to create a table called "Persons" that contains five columns: P_Id, LastName,
FirstName, Address, and City. We use the following CREATE TABLE statement:
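CREATE TABLE Persons
(P_Id int,
LastName varchar(255),
FirstName varchar(255),
Address varchar(255),
City varchar(255))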
The P_Id column is of type int and will hold a number. The LastName, FirstName, Address,
and City columns are of type varchar with a maximum length of 255 characters.
The empty table can be filled with data with the INSERT INTO statement.
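For example (the values shown are purely illustrative):
INSERT INTO Persons (P_Id, LastName, FirstName, Address, City)
VALUES (1, 'Hansen', 'Ola', 'Timoteivn 10', 'Sandnes')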
SQL Constraints
Constraints are used to limit the type of data that can go into a table. Constraints can be
specified when a table is created (with the CREATE TABLE statement) or after the table is
created (with the ALTER TABLE statement). We will focus on the following constraints:
NOT NULL
UNIQUE
PRIMARY KEY
FOREIGN KEY
CHECK
DEFAULT
The next chapters will describe each constraint in detail. By default, a table column can hold
NULL values.
The NOT NULL constraint enforces a column to NOT accept NULL values. The NOT NULL
constraint enforces a field to always contain a value. This means that you cannot insert a new
record, or update a record without adding a value to this field.
The following SQL enforces the "P_Id" column and the "LastName" column to not accept
NULL values:
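CREATE TABLE Persons
(P_Id int NOT NULL,
LastName varchar(255) NOT NULL,
FirstName varchar(255),
Address varchar(255),
City varchar(255))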
The UNIQUE constraint uniquely identifies each record in a database table. The UNIQUE
and PRIMARY KEY constraints both provide a guarantee for uniqueness for a column or set
of columns. A PRIMARY KEY constraint automatically has a UNIQUE constraint defined on
it.
Note that you can have many UNIQUE constraints per table, but only one PRIMARY KEY
constraint per table.
CREATE TABLE Persons
(P_Id int NOT NULL UNIQUE,
LastName varchar(255) NOT NULL,
FirstName varchar(255),
Address varchar(255),
City varchar(255))
To allow naming of a UNIQUE constraint, and for defining a UNIQUE constraint on
multiple columns, use the following SQL syntax:
CREATE TABLE Persons
(P_Id int NOT NULL,
LastName varchar(255) NOT NULL,
FirstName varchar(255),
Address varchar(255),
City varchar(255),
CONSTRAINT uc_PersonID UNIQUE (P_Id, LastName))
SQL UNIQUE Constraint on ALTER TABLE
To create a UNIQUE constraint on the "P_Id" column when the table is already created, use
the following SQL:
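ALTER TABLE Persons
ADD UNIQUE (P_Id)

To name the constraint, or to define it on multiple columns, the ALTER TABLE form mirrors the CREATE TABLE example above:
ALTER TABLE Persons
ADD CONSTRAINT uc_PersonID UNIQUE (P_Id, LastName)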
When planning your SQL Server backup strategy, select the backup method that is the best
tradeoff between backup and restore time, depending on the size of your database. For
example, full backups take the longest to perform, but are the fastest to restore. Differential
backups are overall faster than full backups, but take longer to restore. Incremental
(transaction log) backups are the fastest, but are generally the slowest to restore.
Generally, it is faster to restore a full backup plus a differential backup (with few, if any,
transaction logs) than it is to restore a full backup followed by many transaction logs. This is
because the restoration of transaction logs has to play back each transaction when restored.
If a transaction took 5 minutes to run, it will also take about 5 minutes to run again when the
transaction log is restored.
For fastest backups, we recommend performing a disk backup to a local drive/drive array,
then moving the backup file(s) remotely.
The selected database recovery model should also play a role in your choice of backup
method and frequency. If you are using the Simple Recovery model, transaction log backups
are not available, so your strategy is limited to full and differential backups.
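For reference, the three backup types mentioned above look like this in Transact-SQL (the database name and paths are placeholders):
-- Full backup
BACKUP DATABASE AdventureWorks
TO DISK = 'D:\Backups\AdventureWorks_full.bak'

-- Differential backup (changes since the last full backup)
BACKUP DATABASE AdventureWorks
TO DISK = 'D:\Backups\AdventureWorks_diff.bak'
WITH DIFFERENTIAL

-- Transaction log backup (not available under the Simple Recovery model)
BACKUP LOG AdventureWorks
TO DISK = 'D:\Backups\AdventureWorks_log.trn'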
The development group within the company has ordered a new server for the work they need to do on the
Proseware system. Unfortunately, the new server will not arrive for a few weeks and the development
group cannot wait that long to start work.
The new server that was provisioned by the IT Support department already has two instances of SQL
Server installed. The support team has determined that the new server will be able to support an additional
instance of SQL Server on a temporary basis, until the server for the development group arrives.
You need to install the new instance of SQL Server and if you have time, you should configure the memory
of all three instances to balance their memory demands, and you should create a new alias for the instance
that you install.
Supporting Documentation
Required Memory Configuration (Used in Exercise 4 only)
• 40% of the remaining memory as the maximum value for the AdventureWorks server instance.
• 30% of the remaining memory as the maximum value for the Proseware server instance.
• 30% of the remaining memory as the maximum value for the PWDev server instance.
You will review the supporting documentation that describes the required configuration for the new
instance. You will also create the required folders to hold the data and log files for the instance.
Task 2: Create the folders that are required for the data and log files
• Based on the supplied requirements, create the folders that are required for the data and log files of the
new SQL Server instance.
Task 1: Based on the requirements reviewed in Exercise 1, install another instance of SQL
Server
• Install another instance of SQL Server based on the requirements in Exercise 1.
Note On the Server Configuration page, you should configure the service account name and password, the
startup type for SQL Server Agent, and the collation. On the Database Engine Configuration page, you
should configure Mixed Mode, the sa password, Add Current User, Data Directories tab, and the Filestream
tab.
Task 1: Check that the services for the new SQL Server instance are running
• Using SQL Server Configuration Manager, make sure that the newly installed services are running.
• Make sure that the named pipes protocol is enabled for the new instance.
Task 2: Configure both 32 bit and 64 bit aliases for the new instance
• Configure a 32 bit alias called PWDev for the new instance using named pipes.
• Configure a 64 bit alias called PWDev for the new instance using named pipes.
SQL Server 2012 is Microsoft’s latest cloud-ready information platform. Organizations
can use SQL Server 2012 to efficiently protect, unlock, and scale the power of their data
across the desktop, mobile device, datacenter, and a private or a public cloud. Building on the
success of the SQL Server 2008 R2 release, SQL Server 2012 has made a strong impact on
organizations worldwide with its significant capabilities. It provides organizations with
mission-critical performance and availability, as well as the potential to unlock breakthrough
insights with pervasive data discovery across the organization. Finally, SQL Server 2012
delivers a variety of hybrid solutions you can choose from. For example, an organization can
develop and deploy applications and database solutions on traditional non virtualized
environments, on appliances, and in on-premises private clouds or off-premises public
clouds. Moreover, these solutions can easily integrate with one another, offering a fully
integrated hybrid solution. Figure 1-1 illustrates the Cloud Ready Information Platform
ecosystem.
To prepare readers for SQL Server 2012, this chapter examines the new SQL Server 2012
features, capabilities, and editions from a database administrator’s perspective. It also
discusses SQL Server 2012 hardware and software requirements and installation strategies.
Now more than ever, organizations require a trusted, cost-effective, and scalable database
platform that offers mission-critical confidence, breakthrough insights, and flexible cloud-
based offerings. These organizations face ever-changing business conditions in the global
economy and challenges such as IT budget constraints, the need to stay competitive by
obtaining business insights, and the ability to use the right information at the right time. In
addition, organizations must always be adjusting because new and important trends are
regularly changing the way software is developed and deployed. Some of these new trends
include data explosion (enormous increases in data usage), the consumerization of IT, big data
(large data sets), and private and public cloud deployments.
Microsoft has made major investments in the SQL Server 2012 product as a whole; however,
the new features and breakthrough capabilities that should interest database administrators
(DBAs) are divided in the chapter into the following categories: Availability, Manageability,
Programmability, Scalability and Performance, and Security. The upcoming sections
introduce some of the new features and capabilities; however, other chapters in this book
conduct a deeper explanation of the major technology investments.
Availability Enhancements
A tremendous amount of high-availability enhancements were added to SQL Server 2012,
which is sure to increase both the confidence organizations have in their databases and the
maximum uptime for those databases. SQL Server 2012 continues to deliver database
mirroring, log shipping, and replication.
However, it now also offers a new brand of technologies for achieving both high availability
and disaster recovery, known as AlwaysOn. Let us quickly review the new high-availability
enhancements.
AlwaysOn:
Figure 1-2 illustrates an organization with a global presence achieving both high availability
and disaster recovery for mission-critical databases using AlwaysOn Availability Groups. In
addition, the secondary replicas are being used to offload reporting and backups.
FIGURE 1-2 AlwaysOn Availability Groups for an organization with a global presence
However, with SQL Server 2012 there are a tremendous number of enhancements to
improve availability and reliability. First, FCI now provides support for multi-subnet
failover clusters. These subnets, where the FCI nodes reside, can be located in the
same datacenter or in geographically dispersed sites. Second, local storage can be
leveraged for the TempDB database.
Third, faster startup and recovery times are achieved after a failover transpires.
Finally, improved cluster health-detection policies can be leveraged, offering a
stronger and more flexible failover.
Support for Windows Server Core: Installing SQL Server 2012 on Windows Server
Core is now supported. Windows Server Core is a scaled-down edition of the
Windows operating system and requires approximately 50 to 60 percent fewer
reboots when patching servers. This translates to greater SQL Server uptime and
increased security.
Recovery Advisor: A new visual timeline has been introduced in SQL Server
Management Studio to simplify the database restore process. As illustrated in Figure
1-3, the scroll bar beneath the timeline can be used to specify backups to restore a
database to a point in time.
The SQL Server product group has made sizable investments in improving scalability and
performance associated with the SQL Server Database Engine. Some of the main
enhancements that allow organizations to improve their SQL Server workloads include the
following:
Online Index Create, Rebuild, and Drop: Many organizations running mission-
critical workloads use online indexing to ensure their business environment does not
experience downtime during routine index maintenance. With SQL Server 2012,
indexes containing varchar(max), nvarchar(max), and varbinary(max) columns can
now be created, rebuilt, and dropped as an online operation. This is vital for
organizations that require maximum uptime and concurrent user activity during index
operations (see the sketch after this list).
Achieve Maximum Scalability with Windows Server 2008 R2: Windows Server
2008 R2 is built to achieve unprecedented workload size, dynamic scalability, and
across-the-board availability and reliability. As a result, SQL Server 2012 can achieve
maximum scalability when running on Windows Server 2008 R2 because it supports
up to 256 logical processors and 2 terabytes of memory in a single operating system
instance.
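As referenced in the first bullet above, a minimal sketch of an online index rebuild; the table and index names are hypothetical:
-- Rebuild an index without taking it offline; in SQL Server 2012 this now works
-- even when the index contains varchar(max)/nvarchar(max)/varbinary(max) columns
ALTER INDEX IX_Documents_Title ON dbo.Documents
REBUILD WITH (ONLINE = ON)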
Manageability Enhancements
SQL Server deployments are growing more numerous and more common in organizations.
This fact demands that all database administrators be prepared by having the appropriate
tools to successfully manage their SQL Server infrastructure. Recall that the previous releases
of SQL Server included many new features tailored toward manageability. For example,
database administrators could easily leverage Policy Based Management, Resource
Governor, Data Collector, Data-tier applications, and Utility Control Point. Note that the
product group responsible for manageability never stopped investing in manageability. With
SQL Server 2012, they unveiled additional investments in SQL Server tools and monitoring
features. The following list articulates the manageability enhancements in SQL
Server 2012:
SQL Server Management Studio: With SQL Server 2012, IntelliSense and
Transact-SQL debugging have been enhanced to bolster the development experience
in SQL Server Management Studio.
A new Insert Snippet menu: This new feature is illustrated in Figure 1-4. It offers
developers a categorized list of snippets to choose from to streamline code. The
snippet picker can be launched by pressing CTRL+K followed by CTRL+X, or by
selecting it from the Edit menu.
The SQL Server product group responsible for the Resource Governor feature introduced
new capabilities to address the requests of its customers and the SQL Server community. To
begin, support for larger scale multitenancy can now be achieved on a single instance of SQL
Server because the number of resource pools Resource Governor supports increased from 20
to 64. In addition, a maximum cap for CPU usage has been introduced to enable predictable
chargeback and isolation on the CPU. Finally, resource pools can be affinitized to an
individual scheduler or a group of schedulers for vertical isolation of machine resources.
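A brief sketch of what those new options look like when defining a resource pool; the pool name, percentages, and scheduler range are only examples:
-- Cap CPU usage for a tenant's pool and pin it to a subset of schedulers
CREATE RESOURCE POOL PoolTenantA
WITH (MIN_CPU_PERCENT = 0,
      MAX_CPU_PERCENT = 60,
      CAP_CPU_PERCENT = 40,          -- new hard cap in SQL Server 2012
      AFFINITY SCHEDULER = (0 TO 3)) -- new scheduler affinity option

ALTER RESOURCE GOVERNOR RECONFIGURE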
Tight Integration with SQL Azure: A new Deploy Database To SQL Azure wizard,
pictured in Figure 1-5, is integrated in the SQL Server Database Engine to help
organizations deploy an on-premises database to SQL Azure. Furthermore, new
scenarios can be enabled with SQL Azure Data Sync, which is a cloud service that
provides bidirectional data synchronization between databases across the datacenter
and cloud.
FIGURE 1-5: Deploying a database to SQL Azure with the Deploy Database Wizard
SQL Server 2012 introduces a few enhancements to DAC. With the new SQL Server, DAC
upgrades are performed in an in-place fashion compared to the previous side-by-side upgrade
process we’ve all grown accustomed to over the years. Moreover, DACs can be deployed,
imported and exported more easily across premises and public cloud environments, such as
SQL Azure. Finally, data-tier applications now support many more objects compared to
the previous SQL Server release.
Security Enhancements
It has been approximately 10 years since Microsoft initiated its Trustworthy Computing
initiative. Since then, SQL Server has had the best track record, with the fewest
vulnerabilities and exposures among the major database players in the industry.
The graph shown in Figure 1-6 is from the National Institute of Standards and Technology
(Source: ITIC 2011: SQL Server Delivers Industry-Leading Security).
It shows common vulnerabilities and exposures reported from January 2002 to June 2010.
FIGURE 1-6 Common vulnerabilities and exposures reported to NIST from January 2002 to
January 2010.
With SQL Server 2012, the product continues to expand on this solid foundation to deliver
enhanced security and compliance within the database platform.
For now, here is a snapshot of some of the new enterprise-ready security capabilities and
controls that enable organizations to meet strict compliance policies and regulations:
User-defined server roles for easier separation of duties
Audit enhancements to improve compliance and resiliency
Simplified security management, with a default schema for groups
Contained Database Authentication, which provides database authentication that uses
self- contained access information without the need for server logins
SharePoint and Active Directory security models for higher data security in end-user
reports.
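To give a flavor of the first item in this list, here is a sketch of a user-defined server role used to separate auditing duties; the role, permission, and login names are illustrative:
-- Create a server role for audit administrators and grant it only what it needs
CREATE SERVER ROLE audit_admins
GRANT ALTER ANY SERVER AUDIT TO audit_admins
ALTER SERVER ROLE audit_admins ADD MEMBER [ADVENTUREWORKS\AuditTeam]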
Programmability Enhancements
There has also been a tremendous investment in SQL Server 2012 regarding
programmability.
Specifically, there is support for “beyond relational” elements such as XML, Spatial,
Documents, Digital
Media, Scientific Records, factoids, and other unstructured data types. Why such
investments?
Organizations have demanded they be given a way to reduce the costs associated with
managing both structured and unstructured data. They wanted to simplify the development
of applications over all data, and they wanted the management and search capabilities for all
data improved. Take a minute to review some of the SQL Server 2012 investments that
positively affect programmability.
Full-Text Search Enhancements: Full-text search in SQL Server 2012 offers better
query performance and scale. It also introduces property-scoped searching
functionality, which allows organizations the ability to search properties such as
Author and Title without the need for developers to maintain file properties in a
separate database. Developers can now also benefit by customizing proximity search
by using the new NEAR operator that allows them to specify the maximum number of
non-search terms that separate the first and last search terms in a match.
Extended Events Enhancements: This new user interface was introduced to help
simplify the management associated with extended events. New extended events for
functional and performance troubleshooting were introduced in SQL Server 2012.
SQL Server 2012 is obtainable in three main editions. All three editions have tighter
alignment than their predecessors and were designed to meet the needs of almost any
customer, with an increased investment in business intelligence. Each edition comes in a 32-bit
and 64-bit version. The main editions, as shown in Figure 1-7, are the following:
Standard edition
Business Intelligence edition
Enterprise edition
FIGURE 1-7 The main editions of SQL Server 2012
Enterprise Edition
The Enterprise edition of SQL Server 2012 is the uppermost SKU; it is meant to meet the
highest demands
of large-scale datacenters and data warehouse solutions by providing mission-critical
performance and availability for Tier 1 applications, the ability to deploy private-cloud,
highly virtualized environments, and large centralized or external-facing business-
intelligence solutions.
Standard Edition
Policy-based management
For the first time in the history of SQL Server, a Business Intelligence edition is offered. The
Business Intelligence edition offers organizations the full suite of powerful BI capabilities
such as scalable reporting and analytics, Power View, and Power Pivot. It is tailored toward
organizations trying to achieve corporate business intelligence and self-service capabilities,
but that do not require the full online transactional processing (OLTP) performance and
scalability found in the Enterprise edition of SQL Server 2012. Here is a high-level list of
what the new Business Intelligence edition includes:
Up to a maximum of 16 cores for the Database Engine
Maximum number of cores for business intelligence processing
All of the features found in the Standard edition
Corporate business intelligence
Reporting
Analytics
Multidimensional BI semantic model
Self-service capabilities
Alerting
Power View
Power Pivot for SharePoint Server
Enterprise data management
Data quality services
Master data services
In-memory tabular BI semantic model
Basic high availability can be achieved with AlwaysOn 2-Node Failover Clustering
Specialized Editions
Beyond the three main editions discussed earlier, SQL Server 2012 continues to deliver
specialized editions for organizations that have a unique set of requirements. Some examples
include the following:
Developer: The Developer edition includes all of the features and functionality found
in the Enterprise edition; however, it is meant strictly for the purpose of development,
testing, and demonstration. Note that you can transition a SQL Server Developer
installation directly into production by upgrading it to SQL Server 2012 Enterprise
without reinstallation.
Web: Available at a much more affordable price than the Enterprise and Standard
editions, SQL Server 2012 Web is focused on service providers hosting Internet-
facing web services environments. Unlike the Express edition, this edition doesn’t
have database size restrictions, it supports four processors, and supports up to 64 GB
of memory. SQL Server 2012 Web does not offer the same premium features found in
Enterprise and Standard editions, but it still remains the ideal platform for hosting
websites and web applications.
Express: This free edition is the best entry-level alternative for independent software
vendors, nonprofessional developers, and hobbyists building client applications.
Individuals learning about databases or learning how to build client applications will
find that this edition meets all their needs. In short, this edition is limited to one
processor and 1 GB of memory, and it can have a maximum database size of 10 GB.
In addition, Express is integrated with Microsoft Visual Studio.
The recommended hardware and software requirements for SQL Server 2012 vary depending
on the component being installed, the database workload, and the type of processor class that
will be used.
Let us turn our attention to Tables 1-1 and 1-2 to understand the hardware and software
requirements
for SQL Server 2012. Because SQL Server 2012 supports many processor types and
operating systems, Table 1-1 covers only the hardware requirements for a typical SQL Server
2012 installation. Typical installations include SQL Server 2012 Standard and Enterprise
running on Windows Server 2008 R2 operating systems.
TABLE 1-1 Hardware Requirements
Like its predecessors, SQL Server 2012 is available in both 32-bit and 64-bit editions. Both
can be installed
with either the SQL Server Installation Wizard, through the command prompt, or with Sysprep
for automated deployments with minimal administrator intervention. As mentioned earlier in
the chapter, SQL Server 2012 can now be installed on the Server Core, which is an
installation option of Windows Server 2008 R2 SP1 or later. Finally, database administrators
also have the option to upgrade an existing installation of SQL Server or conduct a side-by-
side migration when installing SQL Server 2012.
The following sections elaborate on the different strategies:
NOTE:
SQL Server 2005 with SP4, SQL Server 2008 with SP2, and SQL Server 2008 R2 with SP1
are all supported for an in-place upgrade to SQL Server 2012. Unfortunately, earlier versions
such as SQL Server 2000, SQL Server 7.0, and SQL Server 6.5 cannot be upgraded to SQL
Server 2012.
FIGURE 1-8 An in-place upgrade from SQL Server 2008 to SQL Server 2012
The in-place upgrade strategy is usually easier and considered less risky than the side-by-side
migration strategy. Upgrading is fairly fast, and additional hardware is not required. Because
the names of the server and instances do not change during an upgrade process, applications
still point to the old instances. As a result, this strategy is less time consuming because there
is no need to make changes to application connection strings.
The disadvantage of an in-place upgrade is there is less granular control over the upgrade
process.
For example, when running multiple databases or components, a database administrator does
not have the flexibility to choose individual items for upgrade. Instead, all databases and
components are upgraded to SQL Server 2012 at the same time. In addition, the instance
remains offline during the in-place upgrade. This means if a mission-critical database or
application or an important line-of-business application is running, a planned outage is
required. Furthermore, if a disaster transpires during the upgrade, the rollback strategy can be
a complex and time-consuming affair. A database administrator might have to install the
operating system from scratch, install SQL Server, and restore all of the SQL Server data.
Side-by-Side Migration
The term “side-by-side migration” describes the deployment of a brand-new SQL Server
2012 Instance alongside a legacy SQL Server instance. When the SQL Server 2012
installation is complete, a database administrator migrates data from the legacy SQL Server
database platform to the new SQL Server 2012 database platform. Side-by-side migration is
depicted in Figure 1-9. Note You can conduct a side-by-side migration to SQL Server 2012
by using the same server. The side-by-side method can also be used to upgrade to SQL
Server 2012 on a single server.
FIGURE 1-9 Side-by-side migration from SQL Server 2008 to SQL Server 2012
The greatest benefit of a side-by-side migration over an in-place upgrade is the opportunity to
build out a new database infrastructure on SQL Server 2012 and avoid potential migration
issues that can occur with an in-place upgrade. The side-by-side migration also provides more
granular control over the upgrade process because you can migrate databases and
components independent of one another. In addition, the legacy instance remains online
during the migration process. Taken together, these advantages make for a lower-risk upgrade path.
Moreover, when two instances are running in parallel, additional testing and verification can
be conducted. Performing a rollback is also easy if a problem arises during the migration.
However, there are disadvantages to the side-by-side strategy. Additional hardware might
need to be purchased. Applications might also need to be directed to the new SQL Server
2012 instance, and it might not be a best practice for very large databases because of the
duplicate amount of storage required during the migration process.
SQL Server 2012 High-Level, Side-by-Side Strategy
The high-level, side-by-side migration strategy for upgrading to SQL Server 2012 consists of
the following
steps:
1. Ensure the instance of SQL Server you plan to migrate meets the hardware and
software requirements for SQL Server 2012.
2. Although a legacy instance will not be upgraded to SQL Server 2012, it is still
beneficial to run the SQL Server 2012 Upgrade Advisor to ensure the data being
migrated to the new SQL Server 2012 is supported and there is no possibility of a
break occurring after migration.
3. Procure the hardware, and install your operating system of choice. Windows Server
2012 is recommended.
4. Install the SQL Server 2012 prerequisites and desired components.
5. Migrate objects from the legacy SQL Server to the new SQL Server 2012 database
platform.
6. Point applications to the new SQL Server 2012 database platform.
7. Decommission legacy servers after the migration is complete.
In this lesson we will cover some of the more frequently used tools: SQL Server Management
Studio, SQL Profiler, SQL Server Agent, SQL Server Configuration Manager, SQL Server
Integration Services .Let us take a brief look at each:
SSMS is the main administrative console for SQL Server installations. It provides you with a
graphical "bird's-eye" view of all of the SQL Server installations on your network. You can
perform high-level administrative functions that affect one or more servers, schedule
common maintenance tasks or create and modify the structure of individual databases. You
may also use SSMS to issue quick and dirty queries directly against any of your SQL Server
databases. Users of earlier versions of SQL Server will recognize that SSMS incorporates the
functions previously found in Query Analyzer, Enterprise Manager, and Analysis Manager.
Here are some examples of tasks you can perform with SSMS:
SQL Server databases rely upon tables to store data. We will explore the process of designing
and implementing a database table in Microsoft SQL Server.
The first step of implementing a SQL Server table is decidedly non-technical. Sit down with a
pencil and paper and sketch out the design of your database. You will want to ensure that you
include appropriate fields for your business needs and select the correct data types to hold
your data.
You may wish to consult our Database Normalization Basics article for some tips on proper
database design.
Do not forget to save your table! When you click the save icon for the first time, you will be
asked to provide a unique name for your table.
SQL Server 2012’s Database Engine Tuning Advisor (DETA) provides database
administrators with a powerful way to tune a database for a specific environment.
DETA allows you to design a customized workload that mimics your operational
environment and then analyze that workload to determine the optimal configuration settings
for your SQL Server instance. DETA’s recommendations include adding nonclustered and
clustered indexes to your database, partitioning tables, and creating indexed views, where
appropriate.
To run DETA for the first time, you must be a member of the sysadmin fixed server role, as
DETA must create several system-wide tables in the msdb database. Once this first-time
initialization is complete, any user who is a member of the db_owner database role for a
particular database may use DETA to tune that database.
You can run DETA several different ways, depending upon your current location in the
server environment:
From the Start menu choose Microsoft SQL Server 2012, then select Performance
Tools, and finally click on Database Engine Tuning Advisor.
If you are within SQL Server Management Studio, select Database Engine Tuning
Advisor from the SSMS Tools menu.
If you are in the SQL Server Management Studio Query Editor, open a Transact-SQL
script and select the text that you wish to analyze. Right-click on that text and choose
Analyze Query in Database Engine Tuning Advisor to get started in DETA with the
selected workload.
If you are running SQL Server Profiler, choose Database Engine Tuning Advisor from
the Tools menu.
SQL Server Profiler Traces -- One of the best ways to design a SQL Server workload
is to create one using SQL Server Profiler. With this approach, you use the Profiler tool to
capture database activity during a time when you can either capture a representative
workload or simulate the expected workload. SQL Profiler will capture this activity in a
trace file that may be used as an input to DETA.
Custom-designed workload -- Alternatively, you may write your own script that
contains the queries you want to optimize your database to perform. You can create this
script using Query Editor or the text editor of your choice.
Use the SQL Server plan cache -- SQL Server maintains a cache of query execution
plans to improve the operational efficiency of the database server. This cache contains
information on recently executed SQL queries and may be used to create a DETA
workload. This is the easiest way to get started with DETA, as it does not require any
special administrator configuration. However, it should only be used if you are
relatively certain that recent database activity reflects the conditions you wish to
optimize your database against.
Running DETA
Once you have selected your workload and started DETA using one of the methods described
earlier, follow this process to run DETA:
1. Provide the connection details for your database server in the Connect to Server
window. Click Connect to open the database connection.
2. Select the appropriate radio button in the Workload section of the window and, if
applicable, browse to the file you wish to use for your workload. This file may be a
SQL Trace, XML file, or SQL file. It is essential to choose a workload representative
of normal activity for your database, as DETA will optimize the database's
performance based upon this information.
3. Confirm the database that DETA should use for workload analysis using the drop-
down box.
4. Using the checkboxes in the grid at the bottom section of the screen, select the
database(s) and/or table(s) that you wish to tune.
5. Click the Start Analysis button and wait for the database tuning analysis to complete.
6. Review the recommendations presented by DETA and implement those that you
deem appropriate.
SQL Server's auditing capabilities were significantly upgraded with the release of SQL
Server 2008, to the great relief of database administrators and security professionals alike.
The new auditing capabilities allow you to track user and system activity in a manner that is
compliant with recently enacted security regulations, including the Health Insurance
Portability and Accountability Act (HIPAA) and the Payment Card Industry Data Security
Standard (PCI DSS).
1. Create an audit object: The audit object contains the logistic details about your
audit, such as the location where SQL Server will store your audit results, how much
space it may use for the audit and what should happen if the server experiences an
auditing failure.
2. Create an audit specification: The audit specification contains the technical details
of the audit: WHAT you will audit. You may create either a server audit specification,
which audits server-level activities, or a database audit specification, which audits
database-level activities.
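Both steps can also be scripted. A hedged Transact-SQL sketch of an audit object plus a server audit specification that tracks logins follows; the object names and file path are placeholders:
USE master

-- Step 1: the audit object (where results go, and what happens on failure)
CREATE SERVER AUDIT LoginAudit
TO FILE (FILEPATH = 'D:\Audits\', MAXSIZE = 100 MB)
WITH (ON_FAILURE = CONTINUE)

-- Step 2: the server audit specification (what is audited)
CREATE SERVER AUDIT SPECIFICATION LoginAuditSpec
FOR SERVER AUDIT LoginAudit
ADD (SUCCESSFUL_LOGIN_GROUP),
ADD (FAILED_LOGIN_GROUP)
WITH (STATE = ON)

-- Enable the audit itself
ALTER SERVER AUDIT LoginAudit WITH (STATE = ON)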
Creating an audit object in SQL Server 2008 is very straightforward and takes only a few
minutes. Here's the process:
1. Open SQL Server Management Studio and connect to the applicable instance of
Microsoft SQL Server
2. Expand the Security folder
3. Right-click on the Audits folder and select New Audit
4. Name your audit object by filling in the Audit Name field
5. Select a destination for your audit results. You may send them to a file, the Windows
Application log or the Windows Security log.
6. Click the OK button to create the audit object
If you wish to audit server-level activities, such as successful and failed logins, you will need
to create a server audit specification and link it to the audit object you just created. Here is the
process:
If you wish to audit database-level activity in addition to (or in place of) server-level activity,
you will need to create a database audit specification. Here is how to do that:
Here is How:
1. Open SQL Server Management Studio and connect to the database server that you
wish to serve as the distributor.
2. Right-click on Replication and choose Configure Distribution from the pop-up menu.
4. Select " will act as its own Distributor; SQL Server will create a distribution database
and log", then click the Next button to continue.
5. Click the Next button to accept the default setting that SQL Server Agent should start
automatically.
6. Provide a location where SQL Server should store the snapshot replication files by
providing either a local folder path or a UNC share name, then click Next to continue.
7. Accept the default name and paths for the distribution database by clicking the Next
button.
8. If servers other than the distribution server will publish to this distribution server,
provide their information and then click the Next button to continue.
10. Review the choices presented in the Complete the Wizard screen and click Finish to
configure your distributor.
SQL Profiler provides a window into the inner workings of your database. You can monitor
many different event types and observe database performance in real time. SQL Profiler
allows you to capture and replay system "traces" that log various activities. It is a great tool
for optimizing databases with performance issues or troubleshooting particular problems. As
with many SQL Server functions, you can access SQL Profiler through SQL Server
Management Studio.
SQL Profiler is a diagnostic tool included with Microsoft SQL Server 2012. It allows you to
create SQL traces that track the specific actions performed against a SQL Server database.
SQL traces provide valuable information for troubleshooting database issues and tuning
database engine performance. For example, administrators might use a trace to identify a
bottleneck in a query and develop optimizations that might improve database performance.
Creating a Trace
In this section, we walk through the process of creating a SQL Server Trace with SQL Server
Profiler, step-by-step. Here is how you can create your own SQL Server traces. Note that
these instructions are for SQL Server 2012. Users of earlier versions of SQL Server should
instead read Creating Traces with SQL Profiler.
1. Open SQL Server Management Studio and connect to the SQL Server instance of
your choice. You will need to provide the server name and appropriate logon
credentials (unless you are using Windows Authentication).
2. Once you have opened SQL Server Management Studio, choose SQL Server Profiler
from the Tools menu. Note that if you do not plan to use other SQL Server tools in
this administrative session, you may choose to launch SQL Profiler directly, rather
than go through Management Studio.
3. Provide login credentials again, if you are prompted.
4. SQL Server Profiler will then assume you wish to start a new trace and open a blank Trace Properties window, where you can specify the details of your trace.
5. Create a descriptive name for your trace and type it into the "Trace Name" text box.
6. Select a template for your trace from the "Use the Template" drop-down menu. This allows you to start your trace using one of the predefined templates stored in SQL Server's library. See the section "Choosing a Template" below for descriptions of three of the most commonly used SQL Server trace templates.
7. Choose a location to save the results of your trace. You have two options here:
Select Save to File to save your trace to a file on the local hard drive. Provide a file name and location in the Save As window that appears when you select this option. You may also set a maximum file size (in MB) to limit the impact your trace might have on disk use. (Saved trace files can later be queried with fn_trace_gettable, as shown in the sketch after these steps.)
Select Save to Table to save your trace to a table within your SQL Server
database. If you select this option, you will be prompted to connect to the
database where you wish to store your trace results. You may also set a
maximum trace size (in thousands of table rows) to limit the impact your trace
might have on your database.
8. Click on the Events Selection tab to review the events you may monitor with your
trace. Some events will automatically be selected based upon the template you chose.
You may modify those default selections at this time. You may view additional
options by clicking the Show All Events and Show All Columns checkboxes.
9. Click the Run button to begin your trace. SQL Server will begin creating the trace.
When you are finished, select Stop Trace from the File menu.
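If you saved the trace to a file, you can later query it directly with the fn_trace_gettable function. A small sketch, assuming a hypothetical trace file path:
-- Read a saved trace file and list the slowest statements first (path is illustrative).
SELECT TextData, Duration, CPU, Reads, Writes, StartTime
FROM sys.fn_trace_gettable(N'C:\Traces\MyTrace.trc', DEFAULT)
ORDER BY Duration DESC;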
Choosing a Template
When you begin your trace, you may choose to base it on any of the templates found in SQL
Server's trace library. Three of the most commonly used trace templates are:
The Standard template collects a variety of information about SQL Server connections,
stored procedures and Transact-SQL statements.
The Tuning template collects information that may be used with the Database Engine
Tuning Advisor to tune your SQL Server's performance.
The TSQL_Replay template gathers enough information about each Transact-SQL
statement to recreate the activity in the future.
SQL Server Agent allows you to automate many of the routine administrative tasks that
consume database administrator time. You can use SQL Server agent to create jobs that run
on a periodic basis, jobs that are triggered by alerts and jobs that are initiated by stored
procedures. These jobs may include steps that perform almost any administrative function,
including backing up databases, executing operating system commands, running SSIS packages, and more.
SQL Server Agent allows you to automate a variety of administrative tasks. In this tutorial,
we walk through the process of using SQL Server Agent to create and schedule a job that
automates database administration.
Open Microsoft SQL Server Configuration Manager and locate the SQL Server Agent service. If the status of that service is "Running", you do not need to do anything. Otherwise, right-click on the SQL Server Agent service and select Start from the pop-up menu; the Starting Service window appears while the service starts.
Open SQL Server Management Studio and Expand the SQL Server Agent Folder
Close SQL Server Configuration Manager and open SQL Server Management Studio. Within SSMS, expand the SQL Server Agent folder.
Next, right-click on the Jobs folder and select New Job from the pop-up menu to open the New Job window. Fill in the Name field with a unique name for your job (being descriptive will help you manage jobs better down the road!). Specify the account that you wish to own the job in the Owner text box. The job will run with the permissions of this account and may only be modified by the owner or by members of the sysadmin role.
Once you have specified a name and owner choose one of the predefined job categories from
the drop-down list. For example, you might choose the "Database Maintenance" category for
routine maintenance jobs.
Use the large Description text field to provide a detailed description of the purpose of your
job. Write it in such a way that someone (yourself included!) would be able to look at it
several years from now and understand the purpose of the job.
On the left side of the New Job window, you'll see a Steps icon under the "Select a page" heading. Click this icon to open the (initially empty) Job Step List.
Next, you will need to add the individual steps for your job. Click the New button to create a new job step; the New Job Step window opens.
Use the Step Name textbox to provide a descriptive name for the Step.
Use the Database drop-down box to select the database that the job will act upon.
Finally, use the Command textbox to provide the Transact-SQL syntax corresponding to the
desired action for this job step. Once you have completed entering the command, click the
Parse button to verify the syntax.
After successfully validating the syntax, click OK to create the step. Repeat this process as
many times as necessary to define your desired SQL Server Agent job.
Finally, you will want to set a schedule for the job by clicking the Schedule icon in the Select a Page portion of the New Job window; the New Job Schedule window opens.
Provide a name for the schedule in the Name text box and choose a schedule type (One-time,
Recurring, Start when SQL Server Agent starts or Start When CPUs Become Idle) from the
drop-down box. Then use the frequency and duration sections of the window to specify the
job's parameters. When you are finished, click OK to close the Schedule window, and then click OK again to create the job.
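Behind the scenes, SQL Server Agent jobs are stored in the msdb database, and a comparable job can be created with the Agent system stored procedures. A hedged sketch; the job name, schedule, target database, and command are illustrative:
USE msdb;
GO
EXEC dbo.sp_add_job
    @job_name = N'Nightly Index Maintenance',
    @description = N'Rebuilds an index on the AdventureWorks2012 sample database.';
EXEC dbo.sp_add_jobstep
    @job_name = N'Nightly Index Maintenance',
    @step_name = N'Rebuild indexes',
    @subsystem = N'TSQL',
    @database_name = N'AdventureWorks2012',
    @command = N'ALTER INDEX ALL ON Person.Person REBUILD;';
EXEC dbo.sp_add_schedule
    @schedule_name = N'Nightly at 2 AM',
    @freq_type = 4,                -- daily
    @freq_interval = 1,
    @active_start_time = 020000;   -- HHMMSS
EXEC dbo.sp_attach_schedule
    @job_name = N'Nightly Index Maintenance',
    @schedule_name = N'Nightly at 2 AM';
EXEC dbo.sp_add_jobserver
    @job_name = N'Nightly Index Maintenance';  -- target the local server
GO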
SQL Server Configuration Manager is a snap-in for the Microsoft Management Console
(MMC) that allows you to manage the SQL Server services running on your servers. The
functions of SQL Server Configuration Manager include starting and stopping services,
editing service properties and configuring database network connectivity options. Some
examples of SQL Server Configuration Manager Tasks include:
Starting the SQL Server Agent Service (or other SQL Server services) with
Configuration Manager
Encrypting SQL Server database connections with SQL Server Configuration Manager.
Encryption helps you prevent unauthorized access to information when individuals try to
bypass your database security controls. For example, someone with a computer on your
network might try to use a packet sniffer to monitor connections made by users to a database.
Fortunately, database encryption allows you to protect the data sent over those connections
from prying eyes.
Here is how:
1. Open SQL Server Configuration Manager.
2. Expand SQL Server Network Configuration.
3. Right-click on the Protocols folder for the SQL Server database instance and choose Properties from the pop-up menu.
4. On the Flags tab, set the Force Encryption value to Yes. This will require all database users to have encrypted connections.
5. Switch to the Certificate tab.
6. Select the appropriate certificate that was installed by your system administrator.
Tips:
When you complete this process, the server will reject any requests for unencrypted
communications. This feature will protect your database contents from eavesdropping
while in transit between the client and server.
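You can verify which sessions are actually using encrypted connections by querying the sys.dm_exec_connections dynamic management view, for example:
-- List current connections and whether each one is encrypted.
SELECT session_id, client_net_address, encrypt_option, auth_scheme
FROM sys.dm_exec_connections;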
SQL Server Integration Services (SSIS) provides an extremely flexible method for
importing and exporting data between a Microsoft SQL Server installation and a large variety
of other formats. It replaces the Data Transformation Services (DTS) found in earlier versions
of SQL Server.
The SQL Server Import and Export Wizard allows you to quickly and easily import
information into a SQL Server 2012 database from any of the following data sources:
Microsoft Excel
Microsoft Access
Flat Files
Another SQL Server Database
The wizard builds SQL Server Integration Services (SSIS) packages through a user-friendly
graphical interface.
You may start the SQL Server Import and Export Wizard directly from the Start menu on a system that has SQL Server 2012 already installed. Alternatively, if you are already running SQL Server Management Studio, you can launch the wizard by right-clicking a database in Object Explorer, pointing to Tasks, and then clicking Import Data or Export Data.
The SQL Server Import and Export Wizard provides a guided process for importing data from any of your existing data sources into a SQL Server database. In this example, we will walk through the process of importing contact information from a sample Microsoft Excel contacts file into a new table of a SQL Server database.
Here is how to get started:
The SQL Server Import and Export Wizard provides you with a guided process to export data
from your SQL Server database to any supported format. This example will walk you through
the process of taking the contact information you imported into a SQL Server database in the
previous example and export it to a flat file.
SQL Server Configuration Manager is a tool to manage the services associated with SQL
Server, to configure the network protocols used by SQL Server, and to manage the network
connectivity configuration from SQL Server client computers. SQL Server Configuration
Manager is a Microsoft Management Console snap-in that is available from the Start menu,
or can be added to any other Microsoft Management Console display. Microsoft Management
Console (mmc.exe) uses the SQLServerManager10.msc file in the Windows System32 folder
to open SQL Server Configuration Manager.
SQL Server Configuration Manager and SQL Server Management Studio use Windows Management Instrumentation (WMI) to view and change some server settings. WMI provides a unified way of interfacing with the API calls that manage the registry operations requested by the SQL Server tools, and it provides enhanced control and manipulation over the SQL Server services selected in the SQL Server Configuration Manager snap-in.
Managing Services
Use SQL Server Configuration Manager to start, pause, resume, or stop the services,
to view service properties, or to change service properties.
Use SQL Server Configuration Manager to start the Database Engine using startup
parameters.
SQL Server Configuration Manager allows you to configure server and client network
protocols, and connectivity options. After the correct protocols are enabled, you usually do
not need to change the server network connections. However, you can use SQL Server
Configuration Manager if you need to reconfigure the server connections so SQL Server
listens on a particular network protocol, port, or pipe.
SQL Server Configuration Manager allows you to manage server and client network
protocols, including the ability to force protocol encryption, view alias properties, or
enable/disable a protocol.
SQL Server Configuration Manager allows you to create or remove an alias, change the order in which protocols are used, or view properties for a server alias, including:
Server Alias – The server alias used for the computer to which the client is connecting.
Protocol – The network protocol used for the configuration entry.
Connection Parameters – The parameters associated with the connection address for the network protocol configuration.
The SQL Server Configuration Manager also allows you to view information about failover
cluster instances, though Cluster Administrator should be used for some actions such as
starting and stopping the services.
SQL Server supports Shared Memory, TCP/IP, and Named Pipes protocols. For information about choosing a network protocol, see Configure Client Protocols. SQL Server does not
support the VIA, Banyan VINES Sequenced Packet Protocol (SPP), Multiprotocol,
AppleTalk, or NWLink IPX/SPX network protocols. Clients previously connecting with
these protocols must select a different protocol to connect to SQL Server. You cannot use
SQL Server Configuration Manager to configure the WinSock proxy. To configure the
WinSock proxy, see your ISA Server documentation.
This section describes how to configure WMI to show the server status in SQL Server tools
in SQL Server 2012. When connecting to servers, both the Registered Servers and Object
Explorer components of SQL Server Management Studio, as well as SQL Server
Configuration Manager, use Windows Management Instrumentation (WMI) to obtain the
status of the SQL Server (MSSQLSERVER) and SQL Server Agent (MSSQLSERVER)
services. To display the status of the service, the user must have rights to remotely access the
WMI object. The server must have WMI installed to configure this permission.
This section describes how to configure startup options that will be used every time the
Database Engine starts in SQL Server 2012 by using SQL Server Configuration Manager.
SQL Server Configuration Manager writes startup parameters to the registry. They
take effect upon the next startup of the Database Engine.
On a cluster, changes must be made on the active server when SQL Server is online,
and will take effect when the Database Engine is restarted. The registry update of the
startup options on the other node will occur upon the next failover.
Security Permissions
Configuring server startup options is restricted to users who can change the related entries in
the registry. This includes the following users.
This section describes how to connect to another computer in SQL Server 2012. Follow the
first procedure to open the Windows Computer Management Microsoft Management Console
(mmc), connect to the computer, and expand the Services and Applications tree. Follow the
second procedure to create a file with a link to the SQL Server Configuration Manager on a
remote computer.
To start, stop, pause, or resume services on another computer, you can also connect to the
server with SQL Server Management Studio, right-click the server or SQL Server Agent and
then click the desired action.
All network protocols are installed by SQL Server Setup, but may or may not be enabled.
This topic describes how to enable or disable a server network protocol in SQL Server 2012
by using SQL Server Configuration Manager or PowerShell. The Database Engine must be
stopped and restarted for the change to take effect.
How to: Pause and Resume an Instance of SQL Server (SQL Server Configuration Manager)
Microsoft SQL Server can be paused and resumed from SQL Server Configuration Manager.
On the Start menu, point to All Programs, point to Microsoft SQL Server 2008 R2,
point to Configuration Tools, and then click SQL Server Configuration Manager.
In SQL Server Configuration Manager, expand Services, and then click SQL Server.
In the details pane, right-click the named instance of SQL Server, and then
click Pause.
A pair of vertical blue bars on the icon next to the server name and on the toolbar indicates that the server paused successfully.
To resume the server, right-click the named instance of SQL Server, and then
click Resume.
A green arrow on the icon next to the server name and on the toolbar indicates that the
server resumed successfully.
Click OK to close SQL Server Configuration Manager.
Lab Scenario
The development group within the company has ordered a new server for the work they need to do on the
Proseware system. Unfortunately, the new server will not arrive for a few weeks and the development
group cannot wait that long to start work.
The new server that was provisioned by the IT Support department already has two instances of SQL
Server installed. The support team has determined that the new server will be able to support an additional
instance of SQL Server on a temporary basis, until the server for the development group arrives.
You need to install the new instance of SQL Server and if you have time, you should configure the memory
of all three instances to balance their memory demands, and you should create a new alias for the instance
that you install.
Supporting Documentation
Required Memory Configuration (Used in Exercise 4 only)
• 40% of the remaining memory as the maximum value for the AdventureWorks server instance.
• 30% of the remaining memory as the maximum value for the Proseware server instance.
• 30% of the remaining memory as the maximum value for the PWDev server instance.
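As a rough sketch of how such a cap might be applied to each instance, the max server memory option can be set with sp_configure; the value below is illustrative and would be calculated from the percentages above:
-- Enable advanced options, then cap this instance's memory (value in MB is illustrative).
EXEC sys.sp_configure N'show advanced options', 1;
RECONFIGURE;
EXEC sys.sp_configure N'max server memory (MB)', 4096;
RECONFIGURE;
GO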
You will review the supporting documentation that describes the required configuration for the new
instance. You will also create the required folders to hold the data and log files for the instance.
Task 2: Create the folders that are required for the data and log files
• Based on the supplied requirements, create the folders that are required for the data and log files of the
new SQL Server instance.
Task 1: Based on the requirements reviewed in Exercise 1, install another instance of SQL
Server
• Install another instance of SQL Server based on the requirements in Exercise 1.
Note On the Server Configuration page, you should configure the service account name and password, the
startup type for SQL Server Agent, and the collation. On the Database Engine Configuration page, you
should configure Mixed Mode, the sa password, Add Current User, Data Directories tab, and the Filestream
tab.
Task 1: Check that the services for the new SQL Server instance are running
• Using SQL Server Configuration Manager, make sure that the newly installed services are running.
• Make sure that the named pipes protocol is enabled for the new instance.
Task 2: Configure both 32-bit and 64-bit aliases for the new instance
• Configure a 32-bit alias called PWDev for the new instance using named pipes.
• Configure a 64-bit alias called PWDev for the new instance using named pipes.
Storage Engine Layer
This layer manages how data is stored on disk and in memory. The main sub-categories are:
The Access Methods component – works with how you access data
The Page Cache – stores cached copies of data pages to minimize data retrieval time.
The Locking and Transaction Management components – maintain the consistency and integrity of data, with the help of the database log file.
SQL OS Layer
This provides operating system functionality (i.e. interfaces) to SQL Server components. The most important functions provided by this layer are memory management and scheduling. The most important resources SQL Server utilizes from the server platform are CPU, memory and I/O.
CPU and I/O
When a SQL Server component needs to execute code, it creates a task, which represents the unit of work to be done. Tasks are scheduled by the SQL OS layer. SQL Server tasks spend much of their time waiting for something external to happen, usually I/O. When a task needs to wait for a resource, it is placed on a waiting list until the resource is available. When it is told the resource is available, it then waits for a share of CPU time, allocated by SQL OS.
SQL Server keeps records of how long tasks spend waiting and the types of resources they wait for. You can query the following dynamic management views:
sys.dm_os_waiting_tasks
sys.dm_os_wait_stats
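For example, the following simple sketch lists the wait types the instance has spent the most time on since the last restart:
-- Top waits since the last restart (or since the statistics were cleared).
SELECT TOP (10) wait_type, waiting_tasks_count, wait_time_ms, signal_wait_time_ms
FROM sys.dm_os_wait_stats
ORDER BY wait_time_ms DESC;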
Memory
The Buffer Pool is the main memory object of the SQL Server, and is divided into 8KB
pages – the same size as database data pages. It has three sections:
Free Pages – not yet used
Stolen Pages – pages taken from the buffer pool for other uses, such as sorting, the plan cache, and other internal structures.
Data Cache – used for caching database data pages. Data modifications happen here; modified pages are marked dirty, and the changes are later written to the database files by the checkpoint process.
The Data Cache uses a least recently used (LRU) algorithm to determine which pages
to drop from memory when cache space is needed.
You can monitor the overall physical I/O operations by querying the
sys.dm_io_virtual_file_stats system function. The values are cumulative from the last system
restart.
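For example, a simple sketch that reports cumulative reads, writes, and I/O stalls per database file:
-- Cumulative file-level I/O statistics since the last restart.
SELECT DB_NAME(vfs.database_id) AS database_name,
       mf.physical_name,
       vfs.num_of_reads, vfs.num_of_writes,
       vfs.io_stall_read_ms, vfs.io_stall_write_ms
FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
JOIN sys.master_files AS mf
  ON vfs.database_id = mf.database_id
 AND vfs.file_id = mf.file_id;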
Parallelism
The Query Optimizer decides whether a query should be executed in parallel, and it only does so if the query is expensive enough to make the extra coordination worthwhile. Parallelism involves spreading the work of a query across multiple CPUs, rather than running it sequentially.
PRE INSTALLATION TESTING FOR SQL SERVER
SQL Server 2012 introduces the following changes to SQL Server Setup:
Datacenter Edition: Datacenter edition which was introduced in SQL Server 2008 R2 is no
longer available as a SQL Server 2012 edition.
Business Intelligence edition: SQL Server 2012 includes a new edition of SQL Server –
SQL Server Business Intelligence.
SQL Server 2012 Business Intelligence edition delivers a comprehensive platform that empowers organizations to build and deploy secure, scalable, and manageable BI solutions. It offers functionality such as browser-based data exploration and visualization, powerful data mash-up capabilities, and enhanced integration management.
Enterprise Editions: Starting with SQL Server 2012, there are two Enterprise editions, differentiated by licensing model:
Enterprise Edition: Server/Client Access License (CAL) based licensing
Enterprise Edition: Core-based Licensing
Changes to Operating System Requirements: Starting with SQL Server 2012, Service
Pack 1 is the minimum requirement for Windows 7 and Windows Server 2008 R2 operating
systems.
Data Quality Services: You can now install Data Quality Services (DQS) using the SQL
Server Setup.
Product Update: Product Update is a new feature in SQL Server 2012 Setup. It integrates
the latest product updates with the main product installation so that the main product and its
applicable updates are installed at the same time.
Server Core Installation: Starting with SQL Server 2012, we can install SQL Server on
Windows Server 2008 R2 Server Core SP1.
SQL Server Data Tools (Formerly called Business Intelligence Development Studio):
Starting with SQL Server 2012, you can install SQL Server Data Tools (SSDT) which
provides an IDE for building solutions for the Business Intelligence components:
Analysis Services, Reporting Services, and Integration Services.
SSDT also includes “Database Projects”, which provides an integrated environment
for database developers to carry out all their database design work for any SQL Server
platform (both on and off premise) within Visual Studio. Database developers can use
the enhanced Server Object Explorer in Visual Studio to easily create or edit database
objects and data, or execute queries.
SQL Server multi-subnet clustering: You can now configure a SQL Server failover
cluster using clustered nodes on different subnets.
SMB file share is a supported storage option: System databases (Master, Model, MSDB,
and TempDB), and Database Engine user databases can be installed on a file share on an
SMB file server. This applies to both SQL Server stand-alone and SQL Server failover
cluster installations.
Local Disk is now a supported storage option for tempdb for SQL Server failover cluster
installations.
Setup now offers default accounts for the SQL Server services whenever possible.
The following sections list the minimum hardware and software requirements to install and run SQL Server 2012. (Requirements for Analysis Services in SharePoint integrated mode are documented separately.)
For both 32-bit and 64-bit editions of SQL Server 2012, the following considerations apply:
We recommend that you run SQL Server 2012 on computers with the NTFS file
format. Installing SQL Server 2012 on a computer with FAT32 file system is
supported but not recommended as it is less secure than the NTFS file system.
SQL Server Setup will block installations on read-only, mapped, or compressed
drives.
To make sure that the Visual Studio component can be installed correctly, SQL Server
requires you to install an update. SQL Server Setup checks for the presence of this
update and then requires you to download and install the update before you can
continue with the SQL Server installation. To avoid the interruption during SQL
Server Setup, you can download and install the update before running SQL Server
Setup as described below (or install all the updates for .NET 3.5 SP1 available on
Windows Update):
If you install SQL Server 2012 on a computer with the Windows Vista SP2 or Windows Server 2008 SP2 operating system, you must download and install the required update first.
If you install SQL Server 2012 on a computer with the Windows 7 SP1 or Windows
server 2008 R2 SP1 or Windows Server 2012 or Windows 8 operating system, this
update is included.
The installation of SQL Server 2012 fails if you launch the setup through Terminal
Services Client. Launching SQL Server Setup through Terminal Services Client is not
supported.
SQL Server Setup installs the following software components required by the
product:
SQL Server Native Client
SQL Server Setup support files
.NET Framework:
.NET 3.5 SP1 is a requirement for SQL Server 2012 when you select Database Engine or Reporting Services features.
.NET 4.0 is a requirement for SQL Server 2012. SQL Server installs .NET 4.0 during the feature installation step.
If you are installing the SQL Server Express editions, ensure that an Internet
connection is available on the computer. SQL Server Setup downloads and installs the
.NET Framework 4 because it is not included in the SQL Server Express media.
SQL Server Express does not install .NET 4.0 on the Server Core mode of Windows
Server 2008 R2 SP1 or Windows Server 2012. You must install .NET 4.0 before you
install SQL Server Express on a Server Core installation of Windows Server 2008 R2
SP1 or Windows Server 2012.
Network Software: Supported operating systems for SQL Server 2012 have built-in network software. Named and default instances of a stand-alone installation support the following network protocols: Shared Memory, Named Pipes, TCP/IP, and VIA.
Virtualization: SQL Server 2012 is supported in virtual machine environments running on the Hyper-V role in:
Windows Server 2008 SP2 Standard, Enterprise, and Datacenter editions
Windows Server 2008 R2 SP1 Standard, Enterprise, and Datacenter editions
Windows Server 2012 Datacenter and Standard editions
The following memory and processor requirements apply to all editions of SQL Server 2012:
Memory[1] – Minimum: Express editions: 512 MB; all other editions: 1 GB. Recommended: Express editions: 1 GB; all other editions: at least 4 GB, increased as database size increases to ensure optimal performance.
Processor Speed – Minimum: x86 processor: 1.0 GHz; x64 processor: 1.4 GHz.
[1] The minimum memory required for installing the Data Quality Server component in Data Quality Services (DQS) is 2 GB of RAM, which is different from the SQL Server 2012 minimum memory requirement.
WOW64 Support: WOW64 (Windows 32-bit on Windows 64-bit) allows 32-bit applications to run natively on 64-bit editions of Windows; 32-bit editions of SQL Server 2012 are supported in WOW64 mode.
Hard Disk Space:
During installation of SQL Server 2012, Windows Installer creates temporary files on the
system drive. Before you run Setup to install or upgrade SQL Server, verify that you have at
least 6.0 GB of available disk space on the system drive for these files. This requirement
applies even if you install SQL Server components to a non-default drive.
Actual hard disk space requirements depend on your system configuration and the features that you decide to install; the disk space requirement varies by feature. For a list of features that are supported by the editions of SQL Server, see the SQL Server documentation.
SMB storage can be hosted by a Windows File Server or a third party SMB storage device. If
Windows File Server is used, the Windows File Server version should be 2008 or later.
Security is important for every product and every business. By following simple best
practices, you can avoid many security vulnerabilities. This section discusses some security
best practices that you should consider both before you install SQL Server and after you
install SQL Server.
Follow these best practices when you set up the server environment:
Enhance physical security
Use firewalls
Isolate services
Configure a secure file system
Disable NetBIOS and server message block
Installing SQL Server on a domain controller
Enhance Physical Security
Physical and logical isolation make up the foundation of SQL Server security. To enhance the
physical security of the SQL Server installation, do the following tasks:
Place the server in a room accessible only to authorized persons.
Place computers that host a database in a physically protected location, ideally a
locked computer room with monitored flood detection and fire detection or
suppression systems.
Install databases in the secure zone of the corporate intranet and do not connect your
SQL Servers directly to the Internet.
Back up all data regularly and secure the backups in an off-site location.
Use Firewalls
Firewalls are important to help secure the SQL Server installation. Firewalls will be most
effective if you follow these guidelines:
Put a firewall between the server and the Internet. Enable your firewall. If your
firewall is turned off, turn it on. If your firewall is turned on, do not turn it off.
Divide the network into security zones separated by firewalls. Block all traffic, and
then selectively admit only what is required.
In a multi-tier environment, use multiple firewalls to create screened subnets.
When you are installing the server inside a Windows domain, configure interior
firewalls to allow Windows Authentication.
If your application uses distributed transactions, you might have to configure the
firewall to allow Microsoft Distributed Transaction Coordinator (MS DTC) traffic to
flow between separate MS DTC instances. You will also have to configure the
firewall to allow traffic to flow between the MS DTC and resource managers such as
SQL Server.
Isolate Services
Isolating services reduces the risk that one compromised service could be used to
compromise others. To isolate services, consider the following guideline:
Run separate SQL Server services under separate Windows accounts. Whenever
possible, use separate, low-rights Windows or Local user accounts for each SQL
Server service.
Configure a Secure File System
Using the correct file system increases security. For SQL Server installations, you should do
the following tasks:
Use the NTFS file system (NTFS). NTFS is the preferred file system for installations
of SQL Server because it is more stable and recoverable than FAT file systems. NTFS
also enables security options like file and directory access control lists (ACLs) and Encrypting File System (EFS) file encryption. During installation, SQL Server will
set appropriate ACLs on registry keys and files if it detects NTFS. These permissions
should not be changed. Future releases of SQL Server might not support installation
on computers with FAT file systems.
Use a redundant array of independent disks (RAID) for critical data files.
Servers in the perimeter network should have all unnecessary protocols disabled, including
NetBIOS and server message block (SMB).
Web servers and Domain Name System (DNS) servers do not require NetBIOS or SMB. On
these servers, disable both protocols to reduce the threat of user enumeration.
For security reasons, we recommend that you do not install SQL Server 2012 on a domain
controller. SQL Server Setup will not block installation on a computer that is a domain
controller, but the following limitations apply:
You cannot run SQL Server services on a domain controller under a local service
account.
After SQL Server is installed on a computer, you cannot change the computer from a
domain member to a domain controller. You must uninstall SQL Server before you
change the host computer to a domain controller.
After SQL Server is installed on a computer, you cannot change the computer from a
domain controller to a domain member. You must uninstall SQL Server before you
change the host computer to a domain member.
SQL Server failover cluster instances are not supported where cluster nodes are
domain controllers.
SQL Server Setup cannot create security groups or provision SQL Server service
accounts on a read-only domain controller. In this scenario, Setup will fail.
MODULE 4
A database in SQL Server is made up of a collection of tables that stores a specific set of
structured data. A table contains a collection of rows, also referred to as records or tuples, and
columns, also referred to as attributes. Each column in the table is designed to store a certain
type of information, for example, dates, names, dollar amounts, and numbers.
A computer can have one or more than one instance of SQL Server installed. Each instance
of SQL Server can contain one or many databases. Within a database, there are one or many
object ownership groups called schemas. Within each schema, there are database objects such
as tables, views, and stored procedures. Some objects such as certificates and asymmetric
keys are contained within the database, but are not contained within a schema. SQL Server
databases are stored in the file system in files. Files can be grouped into filegroups. When
people gain access to an instance of SQL Server they are identified as a login. When people
gain access to a database they are identified as a database user. A database user can be based
on a login. If contained databases are enabled, a database user can be created that is not based
on a login.
A user that has access to a database can be given permission to access the objects in the
database. Though permissions can be granted to individual users, we recommend creating
database roles, adding the database users to the roles, and then grant access permission to the
roles. Granting permissions to roles instead of users makes it easier to keep permissions
consistent and understandable as the number of users grow and continually change.
.
system databases
master Records all the system-level information for an instance of SQL Server.
Database
msdb Is used by SQL Server Agent for scheduling alerts and jobs.
Database
model Is used as the template for all databases created on the instance of SQL Server.
Database Modifications made to the model database, such as database size, collation, recovery
model, and other database options, are applied to any databases created afterward.
Resource Is a read-only database that contains system objects that are included with SQL Server.
Database System objects are physically persisted in the Resource database, but they logically
appear in the sys schema of every database.
SQL Server does not support users directly updating the information in system objects such
as system tables, system stored procedures, and catalog views. Instead, SQL Server provides
a complete set of administrative tools that let users fully administer their system and manage
all users and objects in a database. These include the following:
You should not code Transact-SQL statements that directly query the system tables, unless
that is the only way to obtain the information that is required by the application. Instead,
applications should obtain catalog and system information by using the following:
System catalog views
SQL-SMO
Windows Management Instrumentation (WMI) interface
Catalog functions, methods, attributes, or properties of the data API used in the
application, such as ADO, OLE DB, or ODBC.
Transact-SQL system stored procedures and built-in functions.
Contained Databases
A contained database is a database that is isolated from other databases and from the instance
of SQL Server that hosts the database. SQL Server 2012 helps user to isolate their database
from the instance in 4 ways.
Much of the metadata that describes a database is maintained in the database. (In
addition to, or instead of, maintaining metadata in the master database.)
All metadata are defined using the same collation.
User authentication can be performed by the database, reducing the database's dependency on the logins of the instance of SQL Server.
The SQL Server environment (DMV's, XEvents, etc.) reports and can act upon
containment information.
Some features of partially contained databases, such as storing metadata in the database,
apply to all SQL Server 2012 databases. Some benefits of partially contained databases, such
as database level authentication and catalog collation, must be enabled before they are
available. Partial containment is enabled using the CREATE DATABASE and ALTER
DATABASE statements or by using SQL Server Management Studio. This topic contains the
following sections.
Partially Contained Database Concepts
Components of the Partially Contained Database
Containment
Benefits of using Partially Contained Databases
Limitations
Identifying Database Containment
A fully contained database includes all the settings and metadata required to define the
database and has no configuration dependencies on the instance of the SQL Server Database
Engine where the database is installed. In previous versions of SQL Server, separating a
database from the instance of SQL Server could be time consuming and required detailed
knowledge of the relationship between the database and the instance of SQL Server. Partially
contained databases in SQL Server 2012 make it easier to separate a database from the
instance of SQL Server and other databases.
The contained database model classifies features and entities with regard to containment. Any user-defined
entity that relies only on functions that reside in the database is considered fully contained.
Any user-defined entity that relies on functions that reside outside the database is considered
uncontained. The following terms apply to the contained database model.
Database boundary
The boundary between a database and the instance of SQL Server. The
boundary between a database and other databases.
Contained – An entity that exists entirely within the database boundary.
Uncontained – An entity that crosses the database boundary.
Non-contained database – A database with a containment setting of NONE.
Contained user – A user that is authenticated by the database itself (for example, a database user with a password) rather than through an instance-level login.
Enabling partially contained databases delegates control over access to the instance of SQL Server to the owners of the database.
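A minimal sketch of enabling partial containment and creating a contained user, assuming sysadmin rights; the database name, user name, and password are illustrative:
-- Allow contained databases on the instance.
EXEC sys.sp_configure N'contained database authentication', 1;
RECONFIGURE;
GO
-- Create a partially contained database.
CREATE DATABASE ContainedSalesDemo
    CONTAINMENT = PARTIAL;
GO
-- Create a user that authenticates at the database level, with no instance login.
USE ContainedSalesDemo;
GO
CREATE USER ContainedAppUser WITH PASSWORD = N'Str0ng!Passw0rd';
GO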
DATABASE BOUNDARY
Because partially contained databases separate the database functionality from those of the
instance, there is a clearly defined line between these two elements called the database
boundary.
Inside of the database boundary is the database model, where the databases are developed and
managed. Examples of entities located inside of the database boundary include system tables like sys.tables, contained database users with passwords, and user tables in the current database referenced by a two-part name.
Outside of the database boundary is the management model, which pertains to instance-level
functions and management. Examples of entities located outside of the database boundary include system tables like sys.endpoints, users mapped to logins, and user tables in another database referenced by a three-part name.
CONTAINMENT
User entities that reside entirely within the database are considered contained. Any entities that reside outside of the database, or rely on interaction with functions outside of the database, are considered uncontained.
In general, user entities fall into the following categories of containment:
Fully contained user entities (those that never cross the database boundary), for
example sys.indexes. Any code that uses these features or any object that references
only these entities is also fully contained.
Uncontained user entities (those that cross the database boundary), for example sys.server_principals or a server principal (login) itself. Any code that uses these entities, or any function that references these entities, is uncontained.
The behavior of partially contained databases differs most distinctly from that of non-
contained databases with regard to collation. For more information about collation issues,
see Contained Database Collations.
Benefits of using Partially Contained Databases
There are issues and complications associated with the non-contained databases that can be
resolved by using a partially contained database.
Database Movement
One of the problems that occurs when moving databases is that some important information can be unavailable when a database is moved from one instance to another.
For example, login information is stored within the instance instead of in the database. When
you move a non-contained database from one instance to another instance of SQL Server, this
information is left behind. You must identify the missing information and move it with your
database to the new instance of SQL Server. This process can be difficult and time-
consuming.
The partially contained database can store important information in the database so the
database still has the information after it is moved.
note
A partially contained database can provide documentation describing those features that are
used by a database that cannot be separated from the instance. This includes a list of other
interrelated databases, system settings that the database requires but cannot be contained, and
so on.
Limitations
Partially contained databases do not allow the following features.
Partially contained databases cannot use replication, change data capture, or change
tracking.
Numbered procedures
Schema-bound objects that depend on built-in functions with collation changes
Binding change resulting from collation changes, including references to objects,
columns, symbols, or types.
caution
Temporary stored procedures are currently permitted. Because temporary stored procedures
breach containment, they are not expected to be supported in future versions of contained
database.
sys.dm_db_uncontained_entities
This view shows any entities in the database that have the potential to be uncontained, such as those that cross the database boundary. This includes those user entities that may use
objects outside the database model. However, because the containment of some entities (for
example, those using dynamic SQL) cannot be determined until run time, the view may show
some entities that are not actually uncontained.
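For example, the following sketch lists potentially uncontained entities in the current database alongside the object names where they appear:
-- Entities that may cross the database boundary, joined to object names where applicable.
SELECT so.name AS object_name, ue.*
FROM sys.dm_db_uncontained_entities AS ue
LEFT JOIN sys.objects AS so
    ON ue.major_id = so.object_id;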
database_uncontained_usage event
This XEvent occurs whenever an uncontained entity is identified at run time, including entities originating in client code. This XEvent will occur only for actual uncontained entities. However, the event only occurs at run time; therefore, any uncontained user entities you have not run will not be identified by this XEvent.
Database Files
SQL Server databases have three types of files, as shown in the following table.
File – Description
Primary – The primary data file contains the startup information for the database and points to the other files in the database. User data and objects can be stored in this file or in secondary data files. Every database has one primary data file. The recommended file name extension for primary data files is .mdf.
Secondary – Secondary data files are optional, are user-defined, and store user data. Secondary files can be used to spread data across multiple disks by putting each file on a different disk drive. Additionally, if a database exceeds the maximum size for a single Windows file, you can use secondary data files so the database can continue to grow. The recommended file name extension for secondary data files is .ndf.
Transaction Log – The transaction log files hold the log information that is used to recover the database. There must be at least one log file for each database. The recommended file name extension for transaction logs is .ldf.
For example, a simple database named Sales can be created that includes one primary file
that contains all data and objects and a log file that contains the transaction log information.
Alternatively, a more complex database named Orders can be created that includes one primary file and five secondary files. The data and objects within the database spread across all six files, and four additional log files contain the transaction log information.
By default, the data and transaction logs are put on the same drive and path. This is done to
handle single-disk systems. However, this may not be optimal for production environments.
We recommend that you put data and log files on separate disks.
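As an illustration of that recommendation, a database like the Sales example could be created with its data file and log file on separate drives; the file names, sizes, and paths below are illustrative:
-- Data file on one drive, transaction log on another (paths are illustrative).
CREATE DATABASE Sales
ON PRIMARY
    ( NAME = Sales_Data,
      FILENAME = 'D:\SQLData\Sales.mdf',
      SIZE = 100MB,
      FILEGROWTH = 10MB )
LOG ON
    ( NAME = Sales_Log,
      FILENAME = 'E:\SQLLogs\Sales_Log.ldf',
      SIZE = 50MB,
      FILEGROWTH = 10MB );
GO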
Filegroups
Every database has a primary filegroup. This filegroup contains the primary data file and any
secondary files that are not put into other filegroups. User-defined filegroups can be created
to group data files together for administrative, data allocation, and placement purposes.
For example, three files, Data1.ndf, Data2.ndf, and Data3.ndf, can be created on three disk
drives, respectively, and assigned to the filegroup fgroup1. A table can then be created
specifically on the filegroup fgroup1. Queries for data from the table will be spread across
the three disks; this will improve performance. The same performance improvement can be
accomplished by using a single file created on a RAID (redundant array of independent disks)
stripe set. However, files and filegroups let you easily add new files to new disks.
All data files are stored in the filegroups listed in the following table.
Filegroup – Description
Primary – The filegroup that contains the primary file. All system tables are allocated to the primary filegroup.
User-defined – Any filegroup that is specifically created by the user when the user first creates or later modifies the database.
Default Filegroup
When objects are created in the database without specifying which filegroup they belong to,
they are assigned to the default filegroup. At any time, exactly one filegroup is designated as
the default filegroup. The files in the default filegroup must be large enough to hold any new
objects not allocated to other filegroups.
The PRIMARY filegroup is the default filegroup unless it is changed by using the ALTER
DATABASE statement. Allocation for the system objects and tables remains within the
PRIMARY filegroup, not the new default filegroup.
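A short sketch of creating a user-defined filegroup, adding a file to it, and making it the default; the paths and sizes are illustrative, and the names match the Sales and fgroup1 examples above:
-- Add a user-defined filegroup and a secondary data file, then make it the default.
ALTER DATABASE Sales ADD FILEGROUP fgroup1;
ALTER DATABASE Sales ADD FILE
    ( NAME = Data1,
      FILENAME = 'E:\SQLData\Data1.ndf',
      SIZE = 100MB )
TO FILEGROUP fgroup1;
ALTER DATABASE Sales MODIFY FILEGROUP fgroup1 DEFAULT;
GO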
In SQL Server, you can move the data, log, and full-text catalog files of a user database to a
new location by specifying the new file location in the FILENAME clause of the ALTER
DATABASE statement. This method applies to moving database files within the same instance of SQL Server. To move a database to another instance of SQL Server or to another server, use backup and restore or detach and attach operations.
Considerations
When you move a database onto another server instance, to provide a consistent experience
to users and applications, you might have to re-create some or all the metadata for the
database. Some features of the SQL Server Database Engine change the way that the
Database Engine stores information in the database files. These features are restricted to
specific editions of SQL Server. A database that contains these features cannot be moved to
an edition of SQL Server that does not support them. Use the
sys.dm_db_persisted_sku_features dynamic management view to list all edition-specific
features that are enabled in the current database.
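For example:
-- List edition-specific features (such as compression or partitioning) in use by the current database.
SELECT feature_name, feature_id
FROM sys.dm_db_persisted_sku_features;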
The procedures in this topic require the logical name of the database files. To obtain the
name, query the name column in the sys.master_files catalog view.
Starting with SQL Server 2008 R2, full-text catalogs are integrated into the database rather
than being stored in the file system. The full-text catalogs now move automatically when you
move a database.
important
If the database cannot be started, that is, it is in suspect mode or in an unrecovered state, only members of the sysadmin fixed server role can move the file.
For more information about how to use the sqlcmd utility, see Use the sqlcmd Utility.
1. Exit the sqlcmd utility or SQL Server Management Studio.
2. Stop the instance of SQL Server.
3. Move the file or files to the new location.
4. Start the instance of SQL Server. For example, run: NET START MSSQLSERVER.
5. Verify the file change by running the following query.
SELECT name, physical_name AS CurrentLocation, state_desc
FROM sys.master_files
WHERE database_id = DB_ID(N'<database_name>');
example
The following example moves the AdventureWorks2012 log file to a new location as part of
a planned relocation.
USE master;
GO
-- Return the logical file name.
SELECT name, physical_name AS CurrentLocation, state_desc
FROM sys.master_files
WHERE database_id = DB_ID(N'AdventureWorks2012')
AND type_desc = N'LOG';
GO
ALTER DATABASE AdventureWorks2012 SET OFFLINE;
GO
-- Physically move the file to a new location.
-- In the following statement, modify the path specified in FILENAME to
-- the new location of the file on your server.
ALTER DATABASE AdventureWorks2012
MODIFY FILE ( NAME = AdventureWorks2012_Log,
FILENAME = 'C:\NewLoc\AdventureWorks2012_Log.ldf');
GO
ALTER DATABASE AdventureWorks2012 SET ONLINE;
GO
--Verify the new location.
SELECT name, physical_name AS CurrentLocation, state_desc
FROM sys.master_files
WHERE database_id = DB_ID(N'AdventureWorks2012')
AND type_desc = N'LOG';
important
If you move a system database and later rebuild the master database, you must move the
system database again because the rebuild operation installs all system databases to their
default location.
To move a system database data or log file as part of a planned relocation or scheduled
maintenance operation, follow these steps. This procedure applies to all system databases
except the master and Resource databases.
1. For each file to be moved, run the following statement.
ALTER DATABASE database_name MODIFY FILE ( NAME = logical_name ,
FILENAME = 'new_path\os_file_name' )
2. Stop the instance of SQL Server or shut down the system to perform maintenance.
Move the file or files to the new location.
3. Restart the instance of SQL Server or the server. Verify the file change by running the
following query.
SELECT name, physical_name AS CurrentLocation, state_desc
FROM sys.master_files
WHERE database_id = DB_ID(N'<database_name>');
If the msdb database is moved and the instance of SQL Server is configured for Database
Mail, complete these additional steps.
1. Verify that Service Broker is enabled for the msdb database by running the following
query.
SELECT is_broker_enabled
FROM sys.databases
WHERE name = N'msdb';
2. Verify that Database Mail is working by sending a test mail.
If a file must be moved because of a hardware failure, follow these steps to relocate the file
to a new location. This procedure applies to all system databases except the master and
Resource databases.
important
If the database cannot be started, that is, it is in suspect mode or in an unrecovered state, only members of the sysadmin fixed server role can move the file.
note
Because tempdb is re-created each time the instance of SQL Server is started, you do not
have to physically move the data and log files. The files are created in the new location when
the service is restarted in step 3. Until the service is restarted, tempdb continues to use the data and log files in the existing location.
1. Determine the logical file names of the tempdb database and their current location on
the disk.
SELECT name, physical_name AS CurrentLocation
FROM sys.master_files
WHERE database_id = DB_ID(N'tempdb');
GO
2. Change the location of each file by using ALTER DATABASE. For example (the target paths below are illustrative):
USE master;
GO
ALTER DATABASE tempdb
MODIFY FILE (NAME = tempdev, FILENAME = 'E:\SQLData\tempdb.mdf');
-- Repeat MODIFY FILE for each tempdb file, including the log (logical name templog).
GO
3. Restart the instance of SQL Server.
Lab Scenario
Now that the Proseware instance of SQL Server has been installed and configured on the server, a number
of additional database configurations need to be performed. As the database administrator, you need to
perform these configuration changes.
You need to create a new database on the server, based on requirements from an application vendor’s
specifications. A client has sent you a database that needs to be installed on the Proseware instance.
Instead of sending you a backup, they have sent a detached database and log file. You need to attach the
database to the Proseware instance.
A consultant has also provided recommendations regarding tempdb configuration that you need to review
and implement if appropriate.
Supporting Documentation
Exercise 1: Adjust tempdb Configuration
Scenario
You will adjust the current configuration of the tempdb database.
The main tasks for this exercise are as follows:
1. Adjust the size of tempdb.
2. Check that the tempdb size is still correct after a restart.
Task 2: Check that the tempdb size is still correct after a restart
• Restart the Proseware server using SQL Server Configuration Manager.
• Check that tempdb is still the correct size.
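A small sketch of a query that can be used to confirm the tempdb file sizes after the restart:
-- Current size of each tempdb file, in MB (size is reported in 8 KB pages).
SELECT name, physical_name, size / 128.0 AS size_mb
FROM tempdb.sys.database_files;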