
Database Administration
www.imenemami.com/adminBD/cours.pdf
Plan
1. Introduction
2. Backup and Restore
3. Manage logins and server roles
4. Implement and maintain indexes
5. Import and export data
6. Manage SQL Server Agent
7. Manage and configure databases
8. Identify and resolve concurrency problems
9. Collect and analyse troubleshooting data
10. Audit SQL Server Instances
11. Additional SQL server components
Introduction
Downloading SQL Server Developer

Download a free specialized edition


 Basic, Custom or Download Media
 Language: English
 Package ISO or CAB
Introduction
Installing SQL Server Developer
 SQL Server Installation Center – Installation
 New SQL Server stand-alone installation or add features to an existing installation
 Free edition: Developer
Features selection: Database Engine Services
 Instance Configuration: Default instance
 Database Engine Configuration: Add current user
Introduction
Installing SQL Server Management Studio (SSMS)
https://docs.microsoft.com/fr-fr/sql/ssms/download-sql-server-management-studio-ssms?redirectedfrom=MSDN&view=sql-server-ver15
Introduction
Downloading Demo Database

AdventureWorks (AdventureWorks.bak)

https://learn.microsoft.com/en-us/sql/samples/adventureworks-install-configure?view=sql-server-ver16&tabs=ssms
Backup and Restore
Restoring AdventureWorks with SSMS
T-SQL (Transact-SQL)

RESTORE DATABASE [AdventureWorks2014]
FROM DISK = N'E:\adminBD\Adventure+Works+2014+Full+Database+Backup\AdventureWorks2014.bak'
WITH FILE = 1,
MOVE N'AdventureWorks2014_Data' TO N'C:\Program Files\Microsoft SQL Server\MSSQL15.MSSQLSERVER\MSSQL\DATA\AdventureWorks2014_Data.mdf',
MOVE N'AdventureWorks2014_Log' TO N'C:\Program Files\Microsoft SQL Server\MSSQL15.MSSQLSERVER\MSSQL\DATA\AdventureWorks2014_Log.ldf',
NOUNLOAD, STATS = 5
Backup and Restore
The backup and restore component provides an essential safeguard for protecting critical data
stored in your databases.
Why backup?
 Backing up is the only way to protect your data
 Recover your data from many failures, such as:
 User errors, for example, dropping a table by mistake.
 Media and Hardware failures, for example, a damaged disk drive or permanent loss of a server.
 Natural disasters.

Restoring a backup creates a database containing exactly the data that was captured in the backup.
T-SQL (Transact-SQL)
Backing up Database
BACKUP DATABASE [AdventureWorks2014] TO DISK = N'E:\adminBD\DBA\AdventureWorksBackup'
WITH NOFORMAT, NOINIT,
NAME = N'AdventureWorks2014-Full-Database-Backup',
SKIP, NOREWIND, NOUNLOAD, STATS = 1
Recovery Model
 SIMPLE

 FULL

 BULK LOGGED
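A minimal sketch of how the recovery model is switched (this statement is not shown on the slides; the database name is just the course's sample database):
-- FULL keeps the complete log chain so log backups and point-in-time restore are possible
ALTER DATABASE [AdventureWorks2014] SET RECOVERY FULL
-- the other two models are set the same way:
-- ALTER DATABASE [AdventureWorks2014] SET RECOVERY SIMPLE
-- ALTER DATABASE [AdventureWorks2014] SET RECOVERY BULK_LOGGED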
Different backup models
 FULL BACKUP

DIFFERENTIAL BACKUP

TRANSACTION LOG BACKUP


Different backup models in SSMS
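The three backup types can also be scripted; a hedged sketch reusing the backup path from the earlier example (the DIFFERENTIAL and LOG statements below are illustrative, not taken from the slides):
-- differential backup: only the extents changed since the last full backup
BACKUP DATABASE [AdventureWorks2014] TO DISK = N'E:\adminBD\DBA\AdventureWorksBackup'
WITH DIFFERENTIAL, NOINIT, NAME = N'AdventureWorks2014-Differential-Backup', STATS = 10

-- transaction log backup: requires the FULL or BULK_LOGGED recovery model
BACKUP LOG [AdventureWorks2014] TO DISK = N'E:\adminBD\DBA\AdventureWorksLogBackup'
WITH NOINIT, NAME = N'AdventureWorks2014-Log-Backup', STATS = 10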
Point in time recovery
Quiz
Using NORECOVERY and RECOVERY
BACKUP LOG [AdventureWorks2014] TO DISK = N'C:\Program Files\Microsoft SQL Server\MSSQL15.MSSQLSERVER\MSSQL\Backup\AdventureWorks2014_LogBackup_2022-02-11_07-49-49.bak'
WITH NOFORMAT, NOINIT, NAME = N'AdventureWorks2014_LogBackup_2022-02-11_07-49-49', NOSKIP, NOREWIND, NOUNLOAD, NORECOVERY, STATS = 5

RESTORE DATABASE [AdventureWorks2014Backup4] FROM DISK = N'C:\Program Files\Microsoft SQL Server\MSSQL15.MSSQLSERVER\MSSQL\Backup\AdventureWorks2014_LogBackup_2022-02-04_08-42-27.bak'
WITH FILE = 2,
MOVE N'AdventureWorks2014_Data' TO N'C:\Program Files\Microsoft SQL Server\MSSQL15.MSSQLSERVER\MSSQL\DATA\AdventureWorks2014Backup4_Data.mdf',
MOVE N'AdventureWorks2014_Log' TO N'C:\Program Files\Microsoft SQL Server\MSSQL15.MSSQLSERVER\MSSQL\DATA\AdventureWorks2014Backup4_Log.ldf',
NORECOVERY, NOUNLOAD, STATS = 5

RESTORE DATABASE [AdventureWorks2014Backup4] FROM DISK = N'C:\Program Files\Microsoft SQL Server\MSSQL15.MSSQLSERVER\MSSQL\Backup\AdventureWorks2014_LogBackup_2022-02-04_08-42-27.bak'
WITH FILE = 3, NORECOVERY, NOUNLOAD, STATS = 5

RESTORE LOG [AdventureWorks2014Backup4] FROM DISK = N'C:\Program Files\Microsoft SQL Server\MSSQL15.MSSQLSERVER\MSSQL\Backup\AdventureWorks2014_LogBackup_2022-02-04_08-42-27.bak'
WITH FILE = 4, NORECOVERY, NOUNLOAD, STATS = 5

RESTORE LOG [AdventureWorks2014Backup4] FROM DISK = N'C:\Program Files\Microsoft SQL Server\MSSQL15.MSSQLSERVER\MSSQL\Backup\AdventureWorks2014_LogBackup_2022-02-11_07-49-49.bak'
WITH NOUNLOAD, STATS = 5, STOPAT = N'2022-02-11T07:49:58'
Using NORECOVERY and RECOVERY
use [master]
Go
restore database [AdventureWorks2014Backup4] with recovery
Back up the SQL Server environment and system databases
 Master
 Model
 Msdb
 TempDB
 Resource
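As an illustrative sketch (the target paths are hypothetical), the system databases that can be backed up are scripted like any user database:
BACKUP DATABASE [master] TO DISK = N'E:\adminBD\DBA\master.bak' WITH INIT, STATS = 10
BACKUP DATABASE [model]  TO DISK = N'E:\adminBD\DBA\model.bak'  WITH INIT, STATS = 10
BACKUP DATABASE [msdb]   TO DISK = N'E:\adminBD\DBA\msdb.bak'   WITH INIT, STATS = 10
-- tempdb is rebuilt at every restart and cannot be backed up;
-- the Resource database is copied at the file level rather than with BACKUP DATABASE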
Perform backup/restore based on strategies
 What you are backing up and how often the data is updated
 How much data can you afford to lose
 How much space will a full database use ?
 Backup redundancy
 Reliability
 Expiration of the backup
 Compression
 Encryption
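A hedged sketch of the last few options (expiration, compression, encryption); the certificate [BackupCert] is a hypothetical certificate that would have to exist in master beforehand:
BACKUP DATABASE [AdventureWorks2014] TO DISK = N'E:\adminBD\DBA\AW_compressed.bak'
WITH COMPRESSION,
     ENCRYPTION (ALGORITHM = AES_256, SERVER CERTIFICATE = BackupCert),
     EXPIREDATE = N'2025-12-31',
     STATS = 10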
Recover from a corrupted drive
 What happens if the backup media is damaged ?

 Mirrored Media

 Allow the restore to continue despite the errors


 Database consistency check
 Repair data loss (single user mode and rollback immediate)
Recover from a corrupted drive
RESTORE DATABASE [AdventureWorks2014backup]
FROM DISK = N'C:\Program Files\Microsoft SQL Server\MSSQL15.MSSQLSERVER\MSSQL\Backup\AdventureWorks2014_LogBackup_2022-01-27_13-18-47.bak'
WITH CONTINUE_AFTER_ERROR, NORECOVERY, FILE = 6

ALTER DATABASE [AdventureWorks2014backup] SET SINGLE_USER WITH ROLLBACK IMMEDIATE

DBCC CHECKDB ([AdventureWorks2014backup], REPAIR_ALLOW_DATA_LOSS)

ALTER DATABASE [AdventureWorks2014backup] SET MULTI_USER
Quiz
Manage logins and server roles
Create login accounts
 Logins
 Create a login
 Windows authentication / SQL Server authentication

CREATE LOGIN [DESKTOP-ICEQ7B9\SQLTest]
FROM WINDOWS
WITH DEFAULT_DATABASE=[AdventureWorks2014]
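The slide shows the Windows-authentication form; for comparison, a SQL Server-authentication login is created with a password (the login name and password below are illustrative only):
CREATE LOGIN [SQLTestSqlAuth] WITH PASSWORD = N'Str0ng!Passw0rd',
    DEFAULT_DATABASE = [AdventureWorks2014],
    CHECK_POLICY = ON, CHECK_EXPIRATION = OFF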
Manage access to the server
 Serveradmin – alter settings, shutdown, alter any endpoint, create any endpoint
 Securityadmin – alter any login
 Processadmin – alter any connection
 Setupadmin – alter any linked server
 Bulkadmin – administer bulk operations
Manage access to the server
 Diskadmin – alter resources
 Dbcreator – alter any database, create any database
 Sysadmin – can perform any activity on the server
 Public – no server-level permission (except VIEW ANY DATABASE and CONNECT)
ALTER SERVER ROLE [sysadmin] ADD MEMBER [DESKTOP-ICEQ7B9\SQLTest]


ALTER SERVER ROLE [sysadmin] DROP MEMBER [DESKTOP-ICEQ7B9\SQLTest]
Create and maintain user-defined server roles
 Create my own server role
 Name
 Owner: AUTHORIZATION
 Securables (endpoints, logins, servers, availability groups, server roles)

CREATE SERVER ROLE [myServerRole1]
ALTER SERVER ROLE [myServerRole1] ADD MEMBER [DESKTOP-ICEQ7B9\SQLTest]
GRANT ALTER ANY LOGIN TO [myServerRole1]
Create database user accounts
 Add user account

CREATE USER [SQLTest] FOR LOGIN [DESKTOP-ICEQ7B9\SQLTest] WITH DEFAULT_SCHEMA=[dbo]


Fixed database-level roles
 db_owner – has all permissions in the database
 db_securityadmin – alter any role, create role, view definition
 db_accessadmin – alter any user, connect
 db_backupoperator – backup database, backup log, checkpoint
 db_ddladmin – Data Definition Language commands in the database
Fixed database-level roles
 db_datawriter – grant INSERT, UPDATE, DELETE on the database
 db_denydatawriter – deny INSERT, UPDATE, DELETE on the database
 db_datareader – grant SELECT on the database
 db_denydatareader – deny SELECT on the database
 public – no database-level permissions (except a few, for example VIEW ANY COLUMN MASTER KEY DEFINITION and SELECT permission on many individual system tables)

ALTER ROLE [db_datareader] ADD MEMBER [SQLTest]
User database-level roles
 New database role
 Name
 Owner
 Members
 Securables
 Permissions (Alter, Control, Select, Insert, Update, References, Execute, Take Ownership…)

CREATE ROLE [myNewDBRole]
ALTER ROLE [myNewDBRole] ADD MEMBER [SQLTest]
GRANT SELECT ON [HumanResources].[Department] TO [myNewDBRole]
DENY SELECT ON [HumanResources].[Employee] TO [myNewDBRole]
Creating and using schemas
USE [AdventureWorks2014]
GO
CREATE SCHEMA [myNewSchema] AUTHORIZATION [SQLTest]
GO

 Schema owner and permissions


Creating access to server/database with
least privilege
 Principle of least privilege
 Use fixed server roles
 Restrict use of sysadmin
 Assign permissions to role
 Use stored procedures and functions
Permission statements
 Grant, Deny, Revoke (REVOKE SELECT ON HumanResources.Department TO myNewDBRole)

 Ownership chains (see the sketch below)
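A small sketch of how an ownership chain works (the procedure below is hypothetical, not part of the course material): because dbo owns both the procedure and the table, granting EXECUTE on the procedure is enough, with no direct SELECT permission on the table.
CREATE PROCEDURE dbo.usp_GetDepartments
AS
    SELECT Name FROM HumanResources.Department;
GO
-- myNewDBRole can run the procedure even without SELECT permission on the table,
-- because dbo owns both objects (an unbroken ownership chain)
GRANT EXECUTE ON dbo.usp_GetDepartments TO [myNewDBRole];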
Protect objects from being modified
use [AdventureWorks2014]
DENY ALTER ON [Production].[Culture] TO [SQLTest]
DENY CONTROL ON [Production].[Culture] TO [SQLTest]
DENY DELETE ON [Production].[Culture] TO [SQLTest]
DENY INSERT ON [Production].[Culture] TO [SQLTest]
DENY TAKE OWNERSHIP ON [Production].[Culture] TO [SQLTest]
DENY UPDATE ON [Production].[Culture] TO [SQLTest]
Quiz
Implement and maintain indexes
What are indexes ?
 Clustered indexes
 Unique column properties

 Non-clustered indexes
 Non-unique
Implement indexes
CREATE CLUSTERED INDEX [IX_NewTable_ID] ON [dbo].[NewTable]
(
[ID] ASC
)

CREATE NONCLUSTERED INDEX [IX_NewTable_Color] ON [dbo].[NewTable]
(
[ColorName] ASC
)
INCLUDE([ObjectName])

DROP INDEX [IX_NewTable_ID] ON [dbo].[NewTable]
Fragmentation
[Slide diagrams: a sample table (ObjectName, ColorName, ID) whose rows are stored on pages A–E. Successive inserts and deletes force page splits, so the logical index order no longer matches the physical page order and the index becomes fragmented; a final diagram shows how REORGANIZE re-links the pages back into logical order.]
Fragmentation
 Rebuild
 Drops the index and starts from scratch
 Sort out (Book, Bookcase, Computer,…)

Only use rebuild when it is absolutely necessary, i.e. when fragmentation has reached the point where you have to drop the index and start again.
Fragmentation
 Reorganize
ALTER INDEX [IX_NewTable_ID] ON [dbo].[NewTable] REORGANIZE

 Rebuild
ALTER INDEX [IX_NewTable_ID] ON [dbo].[NewTable] REBUILD PARTITION = ALL WITH (ONLINE = ON)
How fragmented are the indexes
select * from
sys.dm_db_index_physical_stats(DB_ID('AdventureWorks2014'), OBJECT_ID('[Person].[Address]'), null, null, null) as stats
join sys.indexes as si
on stats.object_id=si.object_id and stats.index_id=si.index_id
Fill Factor
 When you create a page in the first place, don’t make it 100 % full.
 Reorganize less
 Rebuild less

ALTER INDEX [IX_NewTable_Color] ON [dbo].[NewTable] REBUILD PARTITION = ALL WITH (FILLFACTOR = 80)
Optimise indexes
CREATE NONCLUSTERED INDEX [NCDemo] ON [Person].[Address]
(
[AddressID] ASC
) INCLUDE([PostalCode])
WHERE city='London'
Identify unused indexes
 user_seeks
 user_scans
 user_lookups
 user_updates

select * from sys.dm_db_index_usage_stats as stats
join sys.indexes as si
on stats.object_id=si.object_id and stats.index_id=si.index_id
Disable or drop unused indexes
ALTER INDEX [IX_NewTable_ID] ON [dbo].[NewTable] DISABLE

DROP INDEX [IX_NewTable_Color] ON [dbo].[NewTable]


Quiz
Import and export data
Transfer data
 Detach and attach a database

EXEC master.dbo.sp_detach_db @dbname = N'AdventureWorks2014backup'

CREATE DATABASE [AdventureWorks2014backup] ON
( FILENAME = N'C:\Program Files\Microsoft SQL Server\MSSQL15.MSSQLSERVER\MSSQL\DATA\AdventureWorks2014backup_Data.mdf' ),
( FILENAME = N'C:\Program Files\Microsoft SQL Server\MSSQL15.MSSQLSERVER\MSSQL\DATA\AdventureWorks2014backup_Log.ldf' )
FOR ATTACH
Transfer data
 Import flat file

 Import data

Export data

Copy database (SQL Server Agent)


Bulk Insert
create table dbo.flatFile (Heading1 varchar(50), Heading2 varchar(50))

bulk insert [dbo].[flatFile] from 'E:\newFiletest.txt'
with
(FIELDTERMINATOR=',',
ROWTERMINATOR='\n',
FIRSTROW=2
)
Quiz
Manage SQL Server Agent
Create, maintain and monitor jobs
 Automation

 Enable/Start SQL Server Agent

Job: Full backup every day at midnight


Create, maintain and monitor jobs
 Job steps
Create, maintain and monitor jobs
Create, maintain and monitor jobs
 Job schedule
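A hedged sketch of what the New Job wizard generates behind the scenes (job, schedule and backup path names are illustrative, not from the slides):
USE msdb
GO
EXEC sp_add_job        @job_name = N'NightlyFullBackup';
EXEC sp_add_jobstep    @job_name = N'NightlyFullBackup', @step_name = N'Full backup',
                       @subsystem = N'TSQL',
                       @command = N'BACKUP DATABASE [AdventureWorks2014] TO DISK = N''E:\adminBD\DBA\AW_nightly.bak'' WITH INIT';
EXEC sp_add_schedule   @schedule_name = N'Midnight', @freq_type = 4, @freq_interval = 1, @active_start_time = 000000;   -- daily at 00:00
EXEC sp_attach_schedule @job_name = N'NightlyFullBackup', @schedule_name = N'Midnight';
EXEC sp_add_jobserver  @job_name = N'NightlyFullBackup';   -- register the job on the local server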
Administer jobs
USE msdb
Go
select * from sysjobs
select * from sysjobsteps
select * from syssessions
select * from sysjobactivity
select * from sysjobhistory
select * from sysschedules
RAISERROR
 User-defined error
 RAISERROR (id, severity, state) – user-defined message ids start at 50000; severity <= 10 is informational, severity >= 19 is a fatal error; state is between 0 and 255

select * from sys.messages
exec sp_addmessage 50001, 16, 'I am raising an alert'
RAISERROR (50001, 16, 1)
Alerts
 SQL Server event alert

 Server performance condition alert

 WMI (Windows Management Instrumentation) event alert


Create Event Alerts
Create Event Alerts
RAISERROR and event Alerts
RAISERROR (50001, 16, 1) WITH LOG
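For illustration (the alert and operator names are hypothetical), the event alert that fires on the user-defined error above can also be scripted in msdb:
USE msdb
GO
EXEC sp_add_alert
     @name = N'Error 50001 raised',
     @message_id = 50001,
     @severity = 0,            -- either @message_id or @severity is used, not both
     @enabled = 1;
-- optionally notify an operator by e-mail (the operator must already exist):
-- EXEC sp_add_notification @alert_name = N'Error 50001 raised', @operator_name = N'DBA Team', @notification_method = 1;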
Create alerts on critical server condition
Operators
 Database Mail configuration
 Set up Database Mail
 E-mail profile
 SMTP accounts
Database Mail configuration
 SQL Server Agent – enable the mail profile
Adding operators to jobs and alerts
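A minimal sketch of adding an operator and testing Database Mail (operator name, e-mail address and profile name are hypothetical):
USE msdb
GO
EXEC sp_add_operator
     @name = N'DBA Team',
     @enabled = 1,
     @email_address = N'dba-team@example.com';

-- quick test that the configured mail profile can send
EXEC sp_send_dbmail
     @profile_name = N'DBAMailProfile',
     @recipients = N'dba-team@example.com',
     @subject = N'Database Mail test',
     @body = N'Database Mail is configured correctly.';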
Quiz

Manage and configure databases
Autoclose and Autoshrink
 Autoclose
AUTO_CLOSE: when set to ON, SQL Server automatically closes the database when the last user connection terminates and frees all its resources.
ALTER DATABASE [AdventureWorks2014] SET AUTO_CLOSE ON
 Autoshrink
AUTO_SHRINK: when set to ON, SQL Server automatically shrinks database files when more than 25 % of the file contains unused space.
Why you might use it:
 To automatically reclaim disk space when large amounts of data are deleted
 In environments where disk space is extremely limited
ALTER DATABASE [AdventureWorks2014] SET AUTO_SHRINK ON
Design multiple file groups
Filegroups are logical containers that group database files together for administration and placement purposes.
They help with:
 Performance optimization (distribute/spread I/O and database load across multiple disks; parallel operations can work with different filegroups simultaneously)
 Backup strategies (backup/restore individual filegroups)
 Storage management
 Maintenance operations
 Big database: one filegroup ?

Key components
 Primary data file (.mdf)
 Primary filegroup: contains the primary data file (.mdf); contains system tables by default; created automatically with every database
 Secondary data file (.ndf)
 Secondary filegroups: contain secondary data files (.ndf); can be created for specific purposes; can be set as the default filegroup
 Filegroups
 Data files on different filegroups
 Log files: stored in .ldf files; not part of any filegroup (transaction logs are managed separately)

Primary data file (.mdf)
 Mandatory: every SQL Server database must have exactly one primary data file
 Stores database schema information (system tables)
 Contains the "primary filegroup" by default
 Can store user data if no other filegroups exist
 Location: typically stored on the primary storage device

Secondary data file (.ndf)
 Optional: a database can have zero or more secondary data files
 Stores only user data (never system objects)
 Belongs to a specific filegroup (primary or user-created)
 Location: often placed on separate physical drives for performance
Creating database with multiple filegroups
CREATE DATABASE [DBAdatabase]
CONTAINMENT = NONE
ON PRIMARY
( NAME = N'DBAdatabase', FILENAME = N'C:\Program Files\Microsoft SQL Server\MSSQL15.MSSQLSERVER\MSSQL\DATA\DBAdatabase.mdf' ,
  SIZE = 102400KB , FILEGROWTH = 65536KB ),
FILEGROUP [Secondary]
( NAME = N'DBAdatabase2', FILENAME = N'C:\Program Files\Microsoft SQL Server\MSSQL15.MSSQLSERVER\MSSQL\DATA\DBAdatabase2.ndf' ,
  SIZE = 102400KB , FILEGROWTH = 65536KB )
LOG ON
( NAME = N'DBAdatabase_log', FILENAME = N'C:\Program Files\Microsoft SQL Server\MSSQL15.MSSQLSERVER\MSSQL\DATA\DBAdatabase_log.ldf' ,
  SIZE = 8192KB , FILEGROWTH = 65536KB )
GO
IF NOT EXISTS (SELECT name FROM sys.filegroups WHERE is_default = 1 AND name = N'Secondary')
ALTER DATABASE [DBAdatabase] MODIFY FILEGROUP [Secondary] DEFAULT
Manage file space including adding new filegroups
-- create a new (empty) filegroup
ALTER DATABASE [DBAdatabase] ADD FILEGROUP [third]

-- add a data file to it
ALTER DATABASE [DBAdatabase] ADD FILE
( NAME = N'DBAdatabase3',
  FILENAME = N'C:\Program Files\Microsoft SQL Server\MSSQL15.MSSQLSERVER\MSSQL\DATA\DBAdatabase3.ndf' ,
  SIZE = 8192KB , FILEGROWTH = 65536KB )
TO FILEGROUP [third]
 Adds a new physical .ndf (secondary data file) to the "third" filegroup
 Initial size: 8 MB (8192 KB)
 Autogrowth: 64 MB (65536 KB) when more space is needed

Manage file space - moving objects
This moves the table NewTable to the "third" filegroup by creating a new clustered index on the desired filegroup: in SQL Server, the clustered index determines the physical storage location of the table data.
CREATE CLUSTERED INDEX [ClusteredIndex-20220407-111813] ON [dbo].[NewTable]
(
[heading1] ASC, [heading2] ASC
)
ON [third]

Note: to move an existing table without a clustered index:
-- Create a new clustered index to move the table
CREATE CLUSTERED INDEX CI_NewTable ON dbo.ExistingTable(ColumnName)
ON [third];
Manage file space - moving objects
Creating a new table on a specific filegroup: CREATE TABLE ... ON [third] directly creates the new table in the specified filegroup ("third"); all data for this table will be stored in the DBAdatabase3.ndf file.
CREATE TABLE [dbo].[NewTable2]
(
Heading1 int, heading2 int
)
ON [third]
Partitioning
Partitioning is a database process that divides large tables into smaller, more manageable pieces while maintaining a single logical view of the data.
 Create filegroups
 Create a partition function
 Create a partition scheme (uses the partition function and the filegroups)
 Create/modify tables/indexes using the partition scheme
Partitioning
Maps each partition to a physical filegroup:
 Partition 1 : PRIMARY filegroup
 Partition 2 : Secondary filegroup
 Partition 3 : third filegroup
(Before this, you should first have created your filegroups.)

BEGIN TRANSACTION

-- defines how to split the data by date ranges
CREATE PARTITION FUNCTION [PartitionFunctionPartition](date) AS RANGE RIGHT FOR VALUES (N'2018-01-01', N'2022-01-01')

CREATE PARTITION SCHEME [PartitionSchemeParttition] AS PARTITION [PartitionFunctionPartition] TO ([PRIMARY], [Secondary], [third])

-- physically reorganizes the table according to the partition scheme
CREATE CLUSTERED INDEX [ClusteredIndex_on_PartitionSchemeParttition_637849306408154072] ON [dbo].[partitionTable]
(
[dateOfEntry]
) ON [PartitionSchemeParttition]([dateOfEntry])

DROP INDEX [ClusteredIndex_on_PartitionSchemeParttition_637849306408154072] ON [dbo].[partitionTable]

COMMIT TRANSACTION
Partitioning
Querying partitioned data (viewing the partition distribution):
select *, $PARTITION.PartitionFunctionPartition(dateOfEntry) as PartitionNumber
from [dbo].[partitionTable]
 The $PARTITION function shows which partition each row belongs to

Testing partition mapping:
select $PARTITION.PartitionFunctionPartition('2018-01-01')
 Returns which partition a specific value would go into (returns 2 in this case)
Filegroup backup
 How can you manage a VERY BIG database ?
 How do you back that up ? It is huge (hours, days)
 Back up individual filegroups instead of the entire database
BACKUP DATABASE [DBAdatabase]
FILEGROUP = N'Secondary'
TO DISK = N'C:\Program Files\Microsoft SQL Server\MSSQL15.MSSQLSERVER\MSSQL\Backup\DBAdatabase.bak'
Filegroup and Page restore
 File/filegroup restore: physical corruption in specific files/filegroups
 Page restore: isolated page corruption (e.g., DBCC CHECKDB errors)
 Transaction log restore: recover from logical errors (e.g., DROP TABLE)
 The database must be in ONLINE or RESTORING state

1. Restoring a specific data file
RESTORE DATABASE [DBAdatabase] FILE = N'DBAdatabase2' FROM DISK = N'C:\Program Files\Microsoft SQL Server\MSSQL15.MSSQLSERVER\MSSQL\Backup\DBAdatabase.bak'
WITH FILE = 1, NOUNLOAD, STATS = 10
 FILE = N'DBAdatabase2': specifies which logical file to restore
 WITH FILE = 1: restores from the first backup set in the backup file
 NOUNLOAD: keeps the tape mounted after the restore (for tape backups)
 STATS = 10: shows progress every 10 % completion

2. Restoring a specific page (requires the FULL or BULK_LOGGED recovery model)
RESTORE DATABASE [AdventureWorks2014] PAGE='1:12'
FROM DISK = N'C:\Program Files\Microsoft SQL Server\MSSQL15.MSSQLSERVER\MSSQL\Backup\AdventureWorks2014_LogBackup_2022-02-28_07-58-53.bak'
 Restores only page 12 from file ID 1 in the database
 Used for repairing specific corrupted pages
 You must know the exact page numbers (from error logs or DBCC CHECKDB)
Manage log file growth
select * from sys.dm_db_log_space_usage
 This DMV shows: total log size (in bytes), used space (in bytes), percentage of space used, and the status of the log
 Key metrics to watch: when log space used exceeds 70-80 %, consider expanding; if the log reuse wait is not "NOTHING", investigate why the log isn't truncating

 Add additional log files – only if you need to distribute I/O across multiple disks
 Do a transaction log backup (most important)
BACKUP LOG [DBAdatabase] TO DISK = 'C:\Backups\DBAdatabase_Log.trn'
 Shrinking the file
dbcc shrinkfile(DBAdatabase_log,4)
 Enable Autogrowth

What is log truncation?
 Log truncation is the process of marking space in the transaction log as reusable.
 It does not physically shrink the log file, it just marks inactive portions as available.
 Occurs automatically after:
  A transaction log backup (FULL/BULK_LOGGED recovery model)
  A checkpoint (SIMPLE recovery model)

Shrinking refers to the process of reducing the physical size of database files (data files or log files) by releasing unused space back to the operating system.
Best practices:
 Only shrink after a log backup
 Avoid shrinking to a very small size (it causes growth overhead)
 Better to set a proper initial size instead of shrinking frequently

DBCC
dbcc shrinkdatabase(DBAdatabase,20)
 Operates on all database files at once (data and log); 20 = target percentage of free space to leave after the shrink

select * from sys.database_files
 Shows all database files with their file_id, name, physical_name (OS file path), type (0 = rows, 1 = log, 2 = FILESTREAM, etc.) and current size/used space

dbcc shrinkfile(3,truncateonly)
 Operates on a single specified file (data or log); the target can also be an exact size in MB
 TRUNCATEONLY releases only the unused space at the end of the file, doesn't reorganize data pages, and is the least impactful option

dbcc shrinkfile(DBAdatabase3,emptyfile)
Implement and configure contained databases and logins
A contained database is a database that isolates all its metadata and dependencies from the SQL Server instance, making it more portable and easier to move between instances. Key characteristics:
 Self-contained: includes all database settings and metadata
 Portable: easier to move between SQL Server instances
 Independent authentication: users can authenticate directly at the database level

 Moving a database from one particular instance of SQL Server to another
 A contained database: a database that is isolated from the instance of SQL Server
 Contained database users:
 Configured directly at the database level and don’t require an associated login
 Authenticate users by passwords

1. Enable the Contained Database Authentication feature (at server level)
EXEC sys.sp_configure N'contained database authentication', N'1'
GO
RECONFIGURE WITH OVERRIDE

2. Set database containment to PARTIAL (database level)
-- Convert an existing database to partially contained
ALTER DATABASE [DBAdatabase] SET CONTAINMENT = PARTIAL WITH NO_WAIT

 SQL users with password
CREATE USER [ContainedUser] WITH PASSWORD=N'ContainedUser'
Key benefits of contained users:
 No need for server-level logins
 Credentials stored in the database itself
 Easier to move the database between servers
Quiz
Your partition function defines:

3 boundary values (1, 100, 1000)


This creates:

4 partitions total (n boundary values + 1)

Partition ranges will be:

Values <= 1

Values > 1 and <= 100

Values > 100 and <= 1000

Values > 1000


This is because RANGE LEFT includes the boundary value (100)
in the left partition (Partition 2)
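For reference, a minimal sketch of the partition function this quiz describes (the function name is illustrative):
CREATE PARTITION FUNCTION [PF_QuizExample](int)
AS RANGE LEFT FOR VALUES (1, 100, 1000)
-- 3 boundary values => 4 partitions:
--   partition 1: values <= 1
--   partition 2: values > 1   and <= 100
--   partition 3: values > 100 and <= 1000
--   partition 4: values > 1000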
 'contained database authentication' is a binary configuration option:
 0 = Disabled (default)
 1 = Enabled ('enable' is not a valid value)
 Requires RECONFIGURE to take effect:
RECONFIGURE;
Data Compression – data compression and storage optimization techniques
Data compression options:
1. Row compression
 Eliminates unused space in fixed-length data types
 Uses variable-length storage for numeric-based types
 Uses variable-length character strings
 Reduces metadata overhead
 Best for: tables with many fixed-length columns (int, char, etc.)
 Compression ratio: typically 20-40 % space savings (moderate)
2. Page compression (includes row compression plus additional techniques)
 Provides higher compression than row compression alone
 Best for: read-heavy tables with repetitive data
 Compression ratio: typically 40-60 % space savings
 A. Prefix compression: stores commonly used/repeating prefixes separately in the page header; prefix values are replaced by a reference to the prefix
 B. Dictionary compression: identifies repeated values across columns, stores them in a dictionary, and replaces commonly used values with references

Type                 Storage savings   Best use cases
Row compression      20-40 %           OLTP with fixed-length columns
Page compression     40-60 %           Read-heavy tables with repeats
Columnstore          90 %+             Analytical workloads
Backup compression   50-80 %           All backup operations

Data Compression
ALTER TABLE [HumanResources].[Employee] REBUILD PARTITION = ALL
WITH (DATA_COMPRESSION = PAGE)

ALTER INDEX [IX_Employee_OrganizationNode] ON [HumanResources].[Employee] REBUILD PARTITION = ALL
WITH (DATA_COMPRESSION = PAGE)

exec sp_estimate_data_compression_savings [HumanResources], [Employee], 1, null, 'PAGE'
Sparse columns
A storage optimization feature designed to significantly reduce the space required for NULL values in your database tables.
 Optimized space for NULL values
 Reduces the space requirements for NULL values
 Sparse columns require more storage space for non-NULL values than the space required for identical data that is not marked SPARSE
 Best when there is a high percentage of NULLs (typically > 60 %)
 What percentage of the data must be NULL for a net space saving? Estimated space savings by data type:
https://docs.microsoft.com/en-us/sql/relational-databases/tables/use-sparse-columns?view=sql-server-ver15

Sparse columns
create table sparsetable (heading1 nvarchar(10) sparse null)
Columnstore Indexes
Columnstore indexes are a specialized type of index that stores data column-wise (vertically) rather than row-wise (horizontally), offering significant performance benefits for analytical queries and data warehousing workloads.

Types of columnstore indexes
1. Clustered columnstore index: the fundamental structure of the entire table (it replaces traditional row storage).
2. Nonclustered columnstore index: a secondary index on a traditional rowstore table. Best for: operational analytics (combining OLTP and analytics).

 Clustered and nonclustered columnstore index
CREATE NONCLUSTERED COLUMNSTORE INDEX [NonClusteredColumnStoreIndex-20220414-122043] ON [dbo].[NewTable]
(
[newcolumn]
)

CREATE CLUSTERED COLUMNSTORE INDEX [ClusteredColumnStoreIndex-20220414-122043] ON [dbo].[NewTable]

Key benefits
 High compression (typically 10x reduction): values are compressed within each column segment; similar values compress extremely well
 Batch mode processing: processes rows in batches (typically 900-1000 rows at a time); dramatically improves query performance for analytical workloads
 Eliminates index tuning: a single columnstore index often replaces multiple traditional indexes and automatically benefits all queries accessing the table
Quiz
A sparse column is a column optimized for storing NULL values efficiently. It reduces storage space for NULLs but increases storage space for non-NULL values.
Thus, it is best used for columns that contain a high percentage of NULLs.
Identify and resolve concurrency problems
Diagnose blocking, live locking and deadlocking – types of concurrency problems

 Blocking – the second connection is blocked
 One session holds a lock and another is waiting for it to be released.
 Example: a long-running transaction is updating a row; another session cannot read/update it.
 Detection:
-- Check blocking sessions
SELECT * FROM sys.dm_os_waiting_tasks WHERE blocking_session_id <> 0;
-- Or use sp_who2
EXEC sp_who2;

 Live locking – shared locks prevent another process from acquiring exclusive locks (but one process wins, then the next process wins)
 Transactions are not blocked, but keep interfering with each other and no progress is made; transactions keep retrying but none succeed.
 Example: two processes keep yielding to each other in a loop.
 Detection: monitor sys.dm_os_waiting_tasks for constantly changing wait types.

 Deadlocking – two (or more) processes compete for the same resources
 Each process is waiting for the other to release resources, and none can proceed.
 SQL Server picks one as the deadlock victim. Symptoms: "deadlock victim" errors in the logs.
 Detection:
-- Check recent deadlocks
SELECT * FROM sys.dm_exec_requests WHERE status = 'suspended';
Diagnose deadlocking - practice
Transaction 1 (session 58):
begin transaction
-- (1) acquires a lock on Table1
update [dbo].[Table1] set column1=column1+1 where ColorName='Brown'
-- (4) tries to acquire a lock on Table2
select * from [dbo].[Table2]

Transaction 2 (session 59):
begin transaction
-- (2) acquires a lock on Table2
update [dbo].[Table2] set ColorName='Brown2'
-- (3) tries to acquire a lock on Table1
select * from [dbo].[Table1]

Deadlock occurs because:
 Each transaction holds an exclusive (X) lock the other needs
 Neither can proceed until the other releases its lock
 SQL Server chooses a victim to break the cycle
Diagnose deadlocking - practice
exec sp_who2

Diagnose deadlocking - practice
 Activity Monitor – graphical interface showing blocking chains

Monitor via DMV (Dynamic Management View)
select resource_type, request_status, request_mode, request_session_id from sys.dm_tran_locks

Monitor via DMV (Dynamic Management View)
Transaction (Process ID 58) was deadlocked on lock resources with another process and has been chosen as the deadlock victim
Monitor via DMV (Dynamic Management View)
-- Current locks
SELECT resource_type, request_status, request_mode, request_session_id FROM sys.dm_tran_locks;

-- Waiting tasks
select * from sys.dm_os_waiting_tasks where session_id in (58, 59)

-- Active requests
select * from sys.dm_exec_requests
Examine deadlocking issues using the SQL Server logs
 Trace flags
 Flag 1222 returns deadlock information (a detailed deadlock graph)
 Flag 1204 provides information about the nodes involved in the deadlock

-- Enable deadlock logging
dbcc traceon(1204,-1)   -- basic deadlock info
dbcc traceon(1222,-1)   -- more detailed deadlock graph
-- (-1 makes the trace flag global)

DBCC (Database Console Commands) are special administrative commands in SQL Server used for database maintenance, validation, and troubleshooting.
Commonly used DBCC commands:
-- Check database integrity
DBCC CHECKDB('AdventureWorks') WITH NO_INFOMSGS;
-- Shrink a database file
DBCC SHRINKFILE('AdventureWorks_Log', 1024);   -- target size in MB
-- Enable a trace flag globally
DBCC TRACEON(1222, -1);   -- -1 makes it global
-- Display index fragmentation
DBCC SHOWCONTIG('HumanResources.Employee') WITH ALL_INDEXES;

Examine deadlocking issues using the SQL Server logs
Quiz
 The deadlock victim is chosen by the transaction’s cost (SQL Server picks the least expensive to roll back), or by SET DEADLOCK_PRIORITY (if specified).
 SQL Server automatically rolls back (aborts) the victim transaction with error 1205.
 The non-victim transaction(s) involved in the deadlock continue executing normally.
 The SQL Server instance does not shut down; only the victim transaction fails.
 For DBCC TRACEON, -1 makes the trace flag global.
Collect and analyse troubleshooting data
Collect trace data by using SQL Server Profiler
 SQL Server Profiler – used to capture and diagnose a trace
 Finding problem slow queries
 Capturing a series of T-SQL statements that lead to a problem
 Analyzing the performance of SQL Server
 Correlating performance counters to diagnose problems
Use XEvents (Extended Events)
Example: quick deadlock monitoring
-- Create session
CREATE EVENT SESSION [Deadlocks] ON SERVER
ADD EVENT sqlserver.xml_deadlock_report
ADD TARGET package0.event_file(SET filename=N'DeadlockLogs');

-- Start it
ALTER EVENT SESSION [Deadlocks] ON SERVER STATE = START;

-- Query results
SELECT CONVERT(xml, event_data) AS DeadlockGraph
FROM sys.fn_xe_file_target_read_file('DeadlockLogs*.xel', NULL, NULL, NULL);
Know what affects performance
 Blocking, deadlocking and locking

 DMVs

 Performance monitor
Diagnose performance problems with DMVs
 CPU usage
select current_workers_count,    -- number of active workers
       work_queue_count,         -- tasks waiting for worker threads
       pending_disk_io_count     -- pending I/O operations
from sys.dm_os_schedulers
where scheduler_id <= 255

 Buffer pool / data cache
select count(database_id)*8/1024.0 as [cache in MB], database_id
from sys.dm_os_buffer_descriptors
group by database_id
 What to look for: databases consuming disproportionate memory, unexpected memory allocation patterns, memory pressure indicators
Diagnose performance problems with DMVs
select * from sys.sysperfinfo
where object_name like 'SQLServer:Buffer Manager%'
order by counter_name

Key counters to monitor:
 Buffer cache hit ratio (should be > 90 %)
 Page life expectancy (should be > 300 seconds)
 Page reads/sec (high values indicate disk pressure)

Collect performance data by using Performance Monitor
CPU-related counters
 Processor: % Privileged Time
 The amount of time the processor spends processing input/output requests from SQL Server (threshold < 30 %)
 Processor: % User Time
 The percentage of time the processor spends executing user processes such as SQL Server (threshold < 30 %)
 System: Processor Queue Length
 The number of threads waiting for processing time, i.e. where the bottleneck is (threshold < 2 per core)
Collect performance data by using Performance Monitor
 Data collector sets

IO, Memory and CPU bottlenecks
 Memory-related counters
 Memory: Available Bytes
 Memory: Pages/sec
 Process: Working Set
 SQL Server: Buffer Manager: Buffer Cache Hit Ratio
 SQL Server: Buffer Manager: Database Pages
 SQL Server: Memory Manager: Total Server Memory
 Processor
 Processor: % Privileged Time
 Processor: % User Time
 System: Processor Queue Length
IO, Memory and CPU bottlenecks
 IO primary
 PhysicalDisk: Avg. Disk sec/Write
 PhysicalDisk: Avg. Disk sec/Read
 IO secondary
 PhysicalDisk: Avg. Disk Queue Length
 PhysicalDisk: Disk Bytes/sec
 PhysicalDisk: Disk Transfers/sec
Quiz
Audit SQL Server Instances
Implement a security strategy for
auditing and controlling the instance
Core audit components
 Server-level audits
 Database-level audits

Component                      Description                            Example
Audit                          Container for audit specifications     CREATE SERVER AUDIT
Server audit specification     What to audit at instance level        Logins, role changes
Database audit specification   What to audit within a database        Table access, schema changes
Target                         Where audit data is stored             File, Windows Event Log, Security Log
Components of audits
 Audit itself – the audit container (the "recorder"): the master configuration that defines WHERE audit data gets stored
 Server audit specification
 Database audit specification
 Target – the storage target
Configure server audits
Configure server audits – monitor the attempts to connect
Configure server audits – log file
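A hedged sketch of what these slides configure through SSMS (the audit name and file path are illustrative):
USE master
GO
-- the audit container: where the audit records are written
CREATE SERVER AUDIT [DBA_Audit]
TO FILE (FILEPATH = N'C:\SQLAudit\');
GO
-- what to audit at the instance level: failed and successful connection attempts
CREATE SERVER AUDIT SPECIFICATION [DBA_Audit_Logins]
FOR SERVER AUDIT [DBA_Audit]
ADD (FAILED_LOGIN_GROUP),
ADD (SUCCESSFUL_LOGIN_GROUP)
WITH (STATE = ON);
GO
ALTER SERVER AUDIT [DBA_Audit] WITH (STATE = ON);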
Monitor elevated privileges
 View current privileges – a fixed list of what the privileges currently are:
select princ.name, perm.permission_name
from sys.server_permissions as perm
join sys.server_principals as princ
  on perm.grantee_principal_id = princ.principal_id
 This gives you who has what server-level privileges (e.g., ALTER ANY LOGIN, VIEW SERVER STATE, etc.)

 Audit action types (examples for privileges)
 Database object permission change
 Database object access group
 Database role member change group
 Database change group
Configure database-level audit – track
who modified an object
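Again as an illustrative sketch (the specification name is hypothetical; it reuses the [DBA_Audit] container from the previous example):
USE [AdventureWorks2014]
GO
CREATE DATABASE AUDIT SPECIFICATION [DBA_Audit_ObjectChanges]
FOR SERVER AUDIT [DBA_Audit]
ADD (SCHEMA_OBJECT_CHANGE_GROUP),    -- CREATE/ALTER/DROP: who modified an object
ADD (SELECT, INSERT, UPDATE, DELETE ON SCHEMA::[HumanResources] BY [public])
WITH (STATE = ON);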
Additional SQL Server components
Full-text indexing
 Allows you to perform advanced text searches on string data using language-aware searching.
 Additional feature: Full-Text and Semantic Extractions for Search
 Define a full-text index on a column or columns
 Requirements:
 A full-text index must be created on the column(s).
 A full-text catalog is needed (can be automatic or manual).

Full-text indexing
-- 1. Enable the Full-Text Search feature
-- 2. Create a catalog:
CREATE FULLTEXT CATALOG MyFullTextCatalog AS DEFAULT;
-- 3. Create the index:
CREATE FULLTEXT INDEX ON [Person].[Address]
(
AddressLine1 LANGUAGE 1033    -- 1033 = English
)
KEY INDEX PK_Address_AddressID    -- name of the unique index (usually the primary key)
ON MyFullTextCatalog;

select *
from [Person].[Address]
where CONTAINS(AddressLine1, 'Drive')

select *
from [Person].[Address]
where CONTAINS(AddressLine1, 'Drive NEAR Glaze')
Filestream
 Stores unstructured data (e.g., PDFs, Word documents, images) directly in the NTFS file system, while maintaining transactional consistency in SQL Server.
 Allows you to store unstructured data like documents and images on the file system.
 Filestream is not automatically enabled.

Filestream enabling (other method)
exec sp_configure filestream_access_level, 2
reconfigure
 Restart the SQL Server service
 Create a filestream database

Filestream – creation of a filestream-enabled database
CREATE DATABASE [filestramdatabase]
CONTAINMENT = NONE
ON PRIMARY
( NAME = N'filestramdatabase', FILENAME = N'C:\Program Files\Microsoft SQL Server\MSSQL15.MSSQLSERVER\MSSQL\DATA\filestramdatabase.mdf' , SIZE = 8192KB , FILEGROWTH = 65536KB ),
FILEGROUP [filestreamdata] CONTAINS FILESTREAM
( NAME = N'filestreamdata', FILENAME = N'C:\Program Files\Microsoft SQL Server\MSSQL15.MSSQLSERVER\MSSQL\DATA\filestreamdata' )
LOG ON
( NAME = N'filestramdatabase_log', FILENAME = N'C:\Program Files\Microsoft SQL Server\MSSQL15.MSSQLSERVER\MSSQL\DATA\filestramdatabase_log.ldf' , SIZE = 8192KB , FILEGROWTH = 65536KB )
GO
FileTable
 Built on top of FILESTREAM – a FileTable lets you access files in SQL Server through a Windows file share, with SQL metadata.
 FileTables remove a significant barrier to the use of SQL Server for the storage and management of unstructured data.
 Store files and documents in special tables in SQL Server called FileTables.
 Every row in a FileTable represents a file or a directory.
 Requires non-transactional access.

FileTable
-- enable non-transactional FILESTREAM access on the database
ALTER DATABASE [filestramdatabase]
SET FILESTREAM(NON_TRANSACTED_ACCESS = FULL, DIRECTORY_NAME = 'MyFiles')

-- create a FileTable called dbo.myfiles in the dbo schema
create table dbo.myfiles as FILETABLE
WITH(FileTable_Directory='MyFiles', FileTable_Collate_Filename=database_default);

-- insert a file: x.* (all the columns from alias x) is the file content read as a single BLOB
INSERT INTO dbo.myfiles(name,file_stream)
SELECT 'mytxtfile.txt', x.* from OPENROWSET (BULK 'c:\mytxtfile.txt', SINGLE_BLOB) AS x
