SQL Server DBA Notes (Legacy)
DBA TOPICS
FUNDAMENTALS
1. INTRODUCTION
2. EDITIONS
3. VERSIONS
4. INSTALLATION PRE-REQUIREMENTS
DIFFERENCE B/W DELETE AND TRUNCATE
5. INSTALLATIONS , SILENT INSTALLATIONS, UNINSTALLATIONS
6. SERVICE PACKS
7. FILE & FILE GROUPS
8. PAGES & EXTENTS
9. DATA FILE & LOG FILE ARCHITECTURE
10. SECURITY
ROLES & PERMISSIONS
1. INSTANCE LEVEL
2. DATABASE LEVEL
3. OBJECT LEVEL
HIGH AVAILABILITY
PERFORMANCE TUNING
1. LOCKS
2. BLOCKINGS
3. DEAD LOCKS
4. INDEXES
5. FRAGMENTATION
6. ISOLATION LEVELS
7. SWITCHES
8. LOCK PAGES IN MEMORY
9. DAC [DEDICATED ADMIN CONNECTION]
10. CDC [CHANGE DATA CAPTURE]
11. WAIT TYPES
12. SQL ARCHITECTURE
13. PROFILER & PERFMON TOOLS
14. DTA[DATABASE TUNING ADVISOR]
15. ACID PROPERTIES
16. WINDOWS TASK SCHEDULER
17. QUERY TUNING
18. HIGH CPU ISSUE & MEMORY ISSUE
19. UPDATE STATISTICS
20. ACTIVITY MONITOR
21. EXECUTION PLAN
22. RAID LEVELS
23. TEMPDB ARCHITECTURE
24. SQL SERVER AUDITING
25. NEW FEATURES LIST
26. DMVs, SPs & DBCC
27. ALWAYS ON HIGH AVAILABILITY
28. REAL TIME CLASSES
INTRODUCTION
>MS SQL Server is a Relational database server used on Web Servers to access information. Microsoft SQL Server is a database
platform for large-scale online transaction processing (OLTP), data warehousing, and e-commerce applications; it is also a
business platform for data integration, analysis, and reporting solutions.
• RDBMS stands for Relational Database Management System. RDBMS data is structured in database tables, fields and
records. Each RDBMS table consists of database table rows. Each database table row consists of one or more database
table fields.
Size of database
Security of data
Reliability
Having said that, if we check the real world scenarios and implementations we will find these facts:
● Size of database: According to the 2005 Winter Corporation survey, the largest SQL Server DW database was
19.5 terabytes.
● Security of data: "Microsoft beats Oracle in security showdown." The website clearly says "Microsoft patched 59
vulnerabilities in its SQL Server 7, 2000 and 2005 databases during the period, while Oracle issued 233 patches for
software flaws in its Oracle 8, 9 and 10g databases."
● Speed and concurrency: a SQL Server 2005 system can handle 5,000 transactions per second and 100,000 queries a
day, and can scale up to 8 million new rows of data per day. Is more performance still required? No.
● Last but not least, search any job site for "Oracle" and "SQL Server" and you will find more jobs for SQL Server than
for Oracle. Why? Because more companies implement SQL Server than Oracle. For example, I searched job sites for
postings over the last 7 days and found 2143 jobs for SQL Server and 1867 jobs for Oracle.
Multiples of bytes:
– Kilobyte (KB)
– Megabyte (MB)
– Gigabyte (GB)
– Terabyte (TB)
– Petabyte (PB)
– Exabyte (EB)
– Zettabyte (ZB)
– Yottabyte (YB)
EDITIONS
Types of editions:
● Developer Edition
● Enterprise Edition
● Standard Edition
● Workgroup Edition
● Express Edition
● Evaluation Edition
● Embedded Edition
● Azure Edition
• OLTP deployments that are not expected to grow rapidly in the future
Developer Edition:
• It is especially used for R&D. It is a fully featured edition with no limit on processors and RAM, but it cannot be used for
business purposes.
2. From SQL Server 2012, licensing is core-based rather than processor-based.
3. SQL Server 2012 Standard Edition supports both the Server+CAL and the Per Core license models. In these models an
unlimited number of users can connect to the database.
4. For Enterprise Edition, Microsoft offers only the "per core" model, not the "per server + CAL" model.
VERSIONS
http://sqlserverbuilds.blogspot.in
INSTALLATION PRE-REQUIREMENTS
Installation Pre-Requirements:
x86 (32-bit):
O\S: Windows Server 2000 / Windows Server 2003
Windows Installer 2.0
Processor type: Pentium III compatible processor or higher [speed 600 MHz]
x64 (64-bit):
Processor type: AMD Opteron, AMD Athlon 64, Intel Xeon with Intel EM64T support, Intel Pentium IV with EM64T support
Types of queries:
What are the difference between DDL, DML and DCL commands?
DDL
Data Definition Language (DDL) statements are used to define the database structure or schema. Some examples:
DML
Data Manipulation Language (DML) statements are used for managing data within schema objects. Some examples:
DCL
Data Control Language (DCL) statements. Some examples:
TCL
Transaction Control (TCL) statements are used to manage the changes made by DML statements. It allows statements to be
grouped together into logical transactions.
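A few short examples of each category (table, user and column names are placeholders):
-- DDL: define structure
CREATE TABLE dbo.Employee (EmpID INT PRIMARY KEY, EmpName VARCHAR(50));
ALTER TABLE dbo.Employee ADD Salary MONEY;
-- DML: manage data
INSERT INTO dbo.Employee (EmpID, EmpName) VALUES (1, 'Ravi');
UPDATE dbo.Employee SET EmpName = 'Ravi K' WHERE EmpID = 1;
DELETE FROM dbo.Employee WHERE EmpID = 1;
-- DCL: control access
GRANT SELECT ON dbo.Employee TO SomeUser;
REVOKE SELECT ON dbo.Employee FROM SomeUser;
-- TCL: group DML into one logical transaction
BEGIN TRANSACTION;
UPDATE dbo.Employee SET Salary = 1000 WHERE EmpID = 1;
COMMIT TRANSACTION;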
DELETE vs TRUNCATE:
- DELETE removes rows one by one; TRUNCATE removes the entire data from the table in one shot.
- DELETE can have a WHERE condition; with TRUNCATE we cannot apply a WHERE condition.
- DELETE can be rolled back; TRUNCATE cannot be rolled back once committed (inside an explicit transaction, SQL Server can still roll back a TRUNCATE).
- DELETE is a slow operation; TRUNCATE is a very fast operation.
- DELETE is a DML operation; TRUNCATE is a DDL operation.
- Both remove only the data; the table structure remains the same.
- DELETE does not reset the seed value of an identity column; TRUNCATE resets the seed.
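A quick illustration (the table name is a placeholder):
DELETE FROM dbo.Employee WHERE EmpID > 100;  -- row by row, WHERE allowed, keeps identity seed
TRUNCATE TABLE dbo.Employee;                 -- all rows at once, resets identity seed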
INSTALLATION
Ex:
.Net Framework:
Note: As the SQL Server version increases, the required .NET Framework, Windows Installer, O\S, memory and hard disk also change.
1. In x86 only 32 bits are transferred on the bus each time, whereas in 64-bit 64 bits are transferred each time.
2. x86 has limitations when trying to consume hardware resources like memory.
Ex: On a Windows Server O\S [x86], SQL Server can utilize a maximum of only 2 GB of memory by default. If we want to use more than 2
GB, switches need to be enabled at the O\S and SQL level.
In x64 there is no need to enable any switches and no limitation while consuming the resources.
-------------------------------------------------------------------------------------------------------------------------------
Method 1: Just run the setup and you can get this info without installing.
Method 2: Go to the image folder in the software media and you can get the same info.
Windows Installer: An operating system component that helps deploy the SQL Server services onto the O\S.
Firewall: It provides endpoints for communication purposes and validates your connection.
Note: Installing it once is enough for many SQL Server installations on the same server.
4. Registration information
5. Component selection
3 Types:
2. OLAP: SSIS [SQL Server Integration Services], SSRS [SQL Server Reporting Services], SSAS [SQL Server Analysis Services]
Note:
When you run the setup file from the SERVERS folder, you have the option to install the tools as well. There is no need to install
the tools multiple times on a single server.
2. Named Instance: You can specify a customized name as per the requirement.
Note:
1. If the SQL Server edition is Enterprise: a total of 50 instances can be installed per server.
2. If the SQL Server edition is other than EE: a total of 16 instances can be installed per server.
Limitations:
Note: In real time, always select "Mixed mode", which requires you to provide the password for the "sa" account.
8. Service accounts: the account under which the SQL Server services need to run.
Note: In real time, always use domain-level service accounts when installing SQL Server.
Note: Ensure you confirm the collation name with the requestor or application team. Collation needs to be specified on the basis of your
application.
11. Install
-------------------------------------------------------------------------------------------------------------------------------
SQL Server Configuration Manager: This tool shows only SQL Server related services.
Note: Every version of SQL Server has its own Configuration Manager tool. In a higher version's Configuration Manager I can see the
services of lower versions of SQL as well.
Start
Stop
Pause
>Reporting Services Configuration: a tool that helps to configure Reporting Services.
Folder 80: Created when you install the SQL Server 2005 version; used for backward compatibility.
COM [Component Object Model]: SQL Server exe files and DLL files related to SQL Server components.
SETUP BOOTSTRAP: Key folder; contains [1033, bin, log, resource, arpwrapper, setup, dll files].
TOOLS: exe files related to the tools and sample databases.
Binn: Critical folder for the instance; contains instance-related .DLL and .EXE files.
Logs: Very important folder that stores event tracking inside SQL Server; used as a DBA troubleshooting area.
-------------------------------------------------------------------------------------------------------------------------------
Build Numbers:
Connectivity Procedure:
[Hostname or local]
[HOSTNAME\instance name]
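For example, with the sqlcmd tool (host, instance and password values are placeholders):
sqlcmd -S HOSTNAME -E                         (default instance, Windows authentication)
sqlcmd -S HOSTNAME\INSTANCE1 -U sa -P <pwd>   (named instance, SQL authentication)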
-------------------------------------------------------------------------------------------------------------------------------
Shortcuts:
-------------------------------------------------------------------------------------------------------------------------------
The Resource database was newly introduced in SQL Server 2005 for security purposes.
>SSAS service
>SSRS service
Common services:
Integration service
Browser
Notification services
-------------------------------------------------------------------------------------------------------------------------------
2000: 8.0.194
2005: 9.0.1399
2008: 10.0.1600
2012: 11.0.2100
-------------------------------------------------------------------------------------------------------------------------------
32767
Database IDs:
Master - 1
Tempdb - 2
Model - 3
Msdb - 4
Resource - 32767
Note: When you create any user database, the IDs start from [5]. The database engine always uses and communicates with the DBID.
If we delete a database in between, the same ID is assigned to a new database in a sequential manner.
The SQL Server error log tracks instance-level events, whereas the SQL Server Agent error log contains only SQL Server Agent related
event tracking.
-------------------------------------------------------------------------------------------------------------------------------
Note: The SQL Server 2008\R2 Configuration Manager shows all SQL Server services of the same and lower versions of SQL as well,
whereas the SQL Server 2005 Configuration Manager does not show 2008\R2 or higher version services.
2. Uninstallation steps:
>Verify whether the SQL Server services are removed from Configuration Manager for the specific instance.
> Remove the folders related to the instance that needs to be uninstalled.
>Remove the entries from the registry editor [regedit] for the particular instance if they were not removed at the time of uninstallation.
Note: ARPWrapper is only available in SQL Server 2005, not in 2008, 2008 R2 or 2012.
Note: Before and after uninstallation of SQL, please send an email notification to the requestor.
-------------------------------------------------------------------------------------------------------------------------------
2. Default\named instance
3. Collation settings
4. Service account
5. Authentication mode.
9. Components selection.
>Installation Phase:
>Post-installation phase:
> Not installing the right .NET Framework and Windows Installer versions.
Steps:
1. From the command prompt, change to the path where the "setup.exe" file is placed:
Drive:
cd (paste path)
-------------------------------------------------------------------------------------------------------------------------------
2. EULA
4. Service account
5. Collation
6. Authentication mode
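The items above map to command-line parameters. A hedged example of a silent install using the SQL Server 2008 R2+ setup parameters (instance name, accounts and password values are placeholders):
setup.exe /Q /ACTION=Install /FEATURES=SQLENGINE /INSTANCENAME=MSSQLSERVER /SQLSVCACCOUNT="DOMAIN\sqlsvc" /SQLSVCPASSWORD="****" /SQLSYSADMINACCOUNTS="DOMAIN\DBAGroup" /SECURITYMODE=SQL /SAPWD="****" /IACCEPTSQLSERVERLICENSETERMS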
File stream:
>FILESTREAM was introduced in SQL Server 2008 for the storage and management of unstructured data. The FILESTREAM
feature allows storing BLOB data (example: word documents, image files, music and videos etc) in the NT file system and
ensures transactional consistency between the unstructured data stored in the NT file system and the structured data stored in
the table.
Note: In SQL 2005, folders are created as MSSQL.number, which makes it very difficult to understand which instance a folder belongs to.
Whereas from SQL Server 2008 onwards the folder naming (e.g. MSSQL10.INSTANCENAME or MSSQL10_50.INSTANCENAME) helps to
understand the instance details easily.
-------------------------------------------------------------------------------------------------------------------------------
> Never keep all the system database files, user database files, backup files, binary files and instance-related files on a single disk.
Note: EMC SAN [Storage Area Network] disks act like network disks that can be connected to another system like an external hard
disk. Data recovery is very easy.
-------------------------------------------------------------------------------------------------------------------------------
Process: Inform to windows team to restart the server. Once server up, DBA team check SQL Server.
-------------------------------------------------------------------------------------------------------------------------------
Points on versions:
Point 1:
Note: What if we install SQL Server 2005 and SQL 2008 or a higher version on the same server?
>The SQL 2005 Configuration Manager shows only the 2005 services, whereas the SQL Server 2008 or higher Configuration Manager
shows the services of "SQL Server 2008 or higher version + lower versions". Vice versa is not possible.
> Each version of SQL Server has its own Configuration Manager tool.
Point 2:
>A SQL 2005 instance can be connected from SQL Server 2008\higher version SSMS, but a higher version instance cannot be
connected from a lower version SSMS.
Point 3:
>For SQL Server 2008 and 2008 R2 the summary.txt file is common and can be accessed from the 100 folder.
SERVICE PACKS
>Microsoft designs service packs, hotfixes and cumulative updates to fix errors, bugs and code-related issues.
>Enhance performance
RTM (no SP) -> SP1 -> SP2 -> SP3 -> SP4
Duration: In the second week of every month, MS releases hotfixes for multiple issues.
>Cumulative update: multiple hotfixed bugs are included and released together as a CU.
Note: SPs, CUs and hotfixes are designed at the version level, not the edition level.
-------------------------------------------------------------------------------------------------------------------------------
1. Pre-installation steps:
Select @@version
Note: Take a copy of the BINN folder and the resource database [MDF and LDF] files.
The SQL Server services automatically go to the offline state, and once done the services come back online.
When you apply a service pack, the BINN and resource database files are overwritten/updated with the service pack files.
> Run the moksha tool to find whether any .msi or .msp files are missing.
2. Installation steps:
> Verify the summary.txt file to confirm whether the SP or CU or hotfix was applied for all components or not
Or
xp_msver
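To confirm the build after patching, any of these standard queries can be used:
SELECT @@VERSION;
SELECT SERVERPROPERTY('ProductVersion') AS BuildNumber,
       SERVERPROPERTY('ProductLevel')   AS SPLevel,
       SERVERPROPERTY('Edition')        AS Edition;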
-------------------------------------------------------------------------------------------------------------------------------
How to bypass the restart computer policy in SQL Server when applying any SP?
SQL 2005:
Points:
> If a service pack was applied on RTM and you uninstall it, the build number goes back to RTM. Whereas if we apply SP2 on RTM+SP1,
uninstalling SP2 takes the build number back to SP1.
2. Incompatibility issue
3. Permission issue.
A MSP (Microsoft Software Patch) is basically a set of transforms (modifications) against a baseline MSI (Microsoft Software
Installer) file.
Solution:
Process:
Extraction process: The SQL service pack 3 file, for example, is of the format 'Sql server sp3-90000-setup.exe' in the E:\sqlbits directory;
navigate to this directory in the command prompt and execute the command with the /X switch:
/X means extract.
A dialog box will open requesting the location to extract the bits. Provide the location and press OK. The SP3 bits will be extracted to
the specified location.
Or
> Download the missing .msi or .msp file and place into C:\WINDOWS\INSTALLER Or
FILES
Files:
Types of files:
.MDF [Master data file, main data file, primary data file]
.NDF [Next data file, new data file, secondary data file]
Purpose:
1. .MDF:
>This file is the startup file of the database in SQL Server, i.e., whenever you start a SQL Server database the first file that comes
online is the "MDF".
> There is only 1 MDF file per database and it cannot be deleted.
2. .NDF:
> When the primary data file is full\depending on the requirement, we can add multiple secondary files per database.
3. .LDF:
> The main purpose of this file is to recover the data whenever a database crashes.
> Every transaction is written first into the LDF file; this concept is called "WAL [WRITE-AHEAD LOGGING]".
Note:
1. Per database, a maximum total of 32,767 files (.MDF, .NDF, .LDF) can be created.
ALTER DATABASE [dbname]
ADD FILE (NAME = 'logicalname', FILENAME = 'D:\path\file.ndf')
TO FILEGROUP [FGNAME]
Note:
2. Every database should contain a logical name as well as a physical file name with a specific extension.
3. Every file contains a file ID, used to communicate with the database along with the database ID.
6. The auto-growth option is always set depending on the file type; giving a larger file-growth value improves database performance
by making transactions faster.
Note:
In SQL Server we can add files [NDF (or) LDF] while the database is online, i.e., the files can be added without downtime.
1. MASTER: We cannot add any data (.NDF) or log (.LDF) files to the master database.
2. MODEL: We cannot add any data (.NDF) or log (.LDF) files to the model database.
3. MSDB: We can add files (.NDF or .LDF) to the MSDB database.
4. TEMPDB: We can add files (.NDF or .LDF) to the Tempdb database.
5. RESOURCE DB: We cannot add any files, because the resource database is in read_only mode and hidden.
FILE GROUPS
File Group:
Advantages:
>Ease of Administration
>Filegroup Backups
>Performance Benefit
Note:
Note:
Tables are always created at the filegroup level, not on specific files. If a filegroup contains multiple files then the table's data is
spread across all the files.
Ex:
CREATE TABLE dbo.t ( ... ) ON [FILEGROUP NAME]
Note: We cannot read data directly from the physical NDF, MDF or LDF files.
Reason: The data inside the files is in an internal binary format that only the SQL Server engine can understand.
Note:
In real time, for MDF or NDF files the file restriction should always be "UNRESTRICTED GROWTH" and the file growth value should be
in "MB".
For the LDF file, the file restriction should always be "RESTRICTED GROWTH" and the file growth value should be in "MB".
-------------------------------------------------------------------------------------------------------------------------------
Note:
1. The primary filegroup can never be set to read-only, and it is the default filegroup by default.
2. If you set any filegroup as read-only then no DML\ALTER operations can be performed on its tables.
3. A specific filegroup can be set as both READ_ONLY and DEFAULT, but READ_ONLY always takes preference.
PAGES
Pages:
● Page is the fundamental unit of data storage in SQL Server (data blocks in Oracle).
● Eight (8) physically contiguous pages => One (1) EXTENT.
● Disk space allocated to a data file (.mdf or .ndf) in a database is logically divided into pages numbered from 0 to n.
● Disk I/O operations are performed at the page level. (SQL Server reads/writes WHOLE pages).
● Page size: 8kb. ==> 128 pages = 1Mb.
Page architecture:
>Page header: type of the page, page number, object details, how much free space is inside the page.
>Data rows: where the actual data is stored inside the page; the size is 8060 bytes.
>Row offset array: an index of the data rows in the page; as the data rows increase, the row offset array also grows.
Types of pages:
>Data: data of all data types is stored in this page, except text, ntext, and image.
Note: All 5 page types below are designed for SQL Server engine usage, called maintenance pages.
>IAM [Index Allocation Map]: stores information about the extents allocated to a table or index.
>DCM [Differential Change Map]: stores information about extents modified since the last full backup (used by differential backups).
GAM [Global Allocation Map] vs SGAM [Shared Global Allocation Map]:
- GAM records which extents have been allocated; SGAM records which extents are currently used as mixed extents and have at
least one unused page.
- Each has 1 bit per extent in the interval it covers. In the GAM, bit 1 means the extent is free; in the SGAM, bit 1 means the extent
is a mixed extent with at least one free page.
- GAM deals with general (uniform) extents; SGAM deals exclusively with mixed extents.
For a database:
Page 0 is header
Page 1 is PFS
Page 2 is GAM
Page 3 is SGAM
Page 6 is DCM
Page 7 is BCM
Objects in a database
Note: If we want to open the log file, we need a third-party tool such as Log Explorer, ApexSQL Log, or SQL Log Rescue.
> The last run of DBCC CHECKDB is written in the boot page, which contains the most critical information about the database. In
every database the boot page is page 9.
> When the instance is restarted, information read from the boot page is written into the error log.
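The boot page can be inspected with the undocumented DBCC PAGE command (for inspection/troubleshooting only; 'dbname' is a placeholder):
DBCC TRACEON (3604);           -- send DBCC output to the client
DBCC PAGE ('dbname', 1, 9, 3); -- file 1, page 9 = boot page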
Error 5047: raised when you try to remove a filegroup that still contains a table.
> Moving a file from one filegroup to another filegroup is not possible. The alternate way is to first move the data in the FG\file to the
other filegroup through scripting/export-import; once the data is moved successfully, delete the file from the first filegroup and
create the file in filegroup 2.
EXTENTS
Extents:
> 16 Extents = 1 Mb
Types of extents:
1. Uniform extent: when all 8 pages store the same type of data, it is called a uniform extent.
Note: By default the SQL Server engine allocates pages to a mixed extent. Once filled, if the engine identifies that all 8 pages
belong to the same type, the pages are carried to a "uniform extent".
2. Page allocation: pages are always allocated in a sequential manner starting with 0, numbered per file as file:page, e.g.:
01:0000, 01:0001, ...
03:0000, 03:0001, ...
● XP_fixeddrives
3. To know (dbname, size, owner, date created, status, compatibility level) of a particular database
● sp_helpfile
Note: System databases [master, model, msdb, Tempdb] cannot be taken offline.
9.Rename of database:
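A minimal sketch (old/new names are placeholders):
ALTER DATABASE [OldName] MODIFY NAME = [NewName];
-- or the older syntax:
EXEC sp_renamedb 'OldName', 'NewName';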
>The transaction log is used to guarantee the data integrity of the database and for data recovery.
> The transaction log file contains a series of log records. Physically, the sequence of log records is stored efficiently in the set of
physical files that implement the transaction log.
Explicit Transaction:
> A transaction is started with BEGIN TRAN and finished with COMMIT/ROLLBACK TRAN.
Statement1
Statement2
Statement3
Implicit Transaction:
> SQL Server internally adds a begin and end transaction around each individual statement.
Syntax: Statement1
Statement2
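A minimal sketch of the difference (table and column names are placeholders):
-- Explicit: the user controls the transaction boundary; both statements
-- commit or roll back together.
BEGIN TRAN
UPDATE dbo.Accounts SET Balance = Balance - 100 WHERE AccID = 1;
UPDATE dbo.Accounts SET Balance = Balance + 100 WHERE AccID = 2;
COMMIT TRAN
-- Implicit: run either UPDATE on its own and SQL Server wraps it in its
-- own begin/commit automatically.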
Log Record:
Log Records are entries that are made into the Transaction Log files.
1) LSN Number
2) Transaction ID
3) Timestamp
5) Statement (Query)
7) Previous LSN, log record type, abort flag (whether the transaction committed or not), redo LSN and undo LSN
>Whenever any query is processed, the data is eventually written to the data file. Below is the process of how a query is processed in
SQL Server / the importance of the log file architecture:
Commit Process:
As soon as the commit statement is written to the log file, a token is returned to the user saying the commit completed successfully
(e.g. "1 row affected"); this process is called the Commit Process.
WAL [Write-Ahead Logging]: writing every transaction into the transaction log first is called "WAL".
3. VLF creation depends on the file AUTO_GROWTH option; setting a larger growth value causes fewer VLFs to be created, which
gives better performance.
5. With a maximum of around 50 VLFs you get very good performance; if the count increases, performance goes down.
6. The SQL Server Database Engine divides each physical log file internally into a number of virtual log files. Virtual log files have
no fixed size, and there is no fixed number of virtual log files for a physical log file. The Database Engine chooses the size of the
virtual log files dynamically while it is creating or extending log files. The Database Engine tries to maintain a small number of
virtual files. The size of the virtual files after a log file has been extended is the sum of the size of the existing log and the size of
the new file increment. The size or number of virtual log files cannot be configured or set by administrators.
DBCC LOGINFO
Size of VLF
File id
File sequence no
Note:
For log files, Microsoft recommends setting FILE GROWTH in "MB" rather than a percentage, so that growth stays predictable.
For data files as well, Microsoft recommends setting FILE GROWTH in "MB".
select * from sys.databases ::: verify the column "log_reuse_wait_desc"; if it shows an active wait, the log file is in use; if nothing is
waiting it shows "NOTHING".
Definitions:
Dirty Page: a page currently being modified in the buffer pool is called a "dirty page".
Column LOG_REUSE_WAIT_DESC:
NOTHING: the LDF file is not in use / no transactions are currently holding the log (per the WAL concept).
-------------------------------------------------------------------------------------------------------------------------------
Columns:
Transaction id
CHECK POINT
Check Point
1. A checkpoint writes the current in-memory modified pages (known as dirty pages) and transaction log information from
memory to disk
1. Direct checkpoint: the default checkpoint, triggered automatically by the SQL Server engine.
2. Manual checkpoint, Syntax: CHECKPOINT
3. Indirect checkpoint: this checkpoint is raised by SQL Server technical activities like:
Backup operation
attach\detach operation
SQLServer restart
Cluster failover
Note: How do we capture checkpoint and lazy writer information in the SQL Server logs?
Check point:
Lazy writer:
> This is a background process that works only on the BUFFER cache.
> The lazy writer works only when the system is running under memory pressure or running out of memory.
> A user cannot see or manually create a lazy writer operation in SQL Server.
> We cannot capture lazy writer information in the SQL Server logs.
Note: The prime job of the lazy writer is to flush pages from the buffer to disk.
Question: How does the lazy writer know which pages to flush from buffer to disk?
Ans: The lazy writer checks each page header and verifies whether all transactions on the page are committed; committed pages are
flushed to disk. [The page header contains information on whether the transactions are committed\uncommitted.]
2. If any user starts a transaction with BEGIN TRAN, the user must fire COMMIT manually, otherwise the transaction never completes.
Dirty Page:
Pages currently being modified in the buffer pool are called "dirty pages".
SP_WHO vs SP_WHO2:
SP_WHO returns: Spid, Status, Login name, Db name, Command, Blocked by, Host name.
SP_WHO2 returns the same columns plus CPU Time and Disk I\O.
Checkpoint vs Lazy Writer:
- Checkpoint information can be tracked in the SQL Server logs by enabling trace flag 3502; lazy writer information cannot be
tracked in the logs.
- Checkpoint does not maintain the in-memory free-page list; the lazy writer maintains the free-page list.
- Checkpoint information can be queried with: Select * from ::fn_dblog (null, null) WHERE [Operation] like '%CKPT'; there is no
query to see lazy writer information.
SECURITY
Security:
2. Once an account is added at the domain level, accounts automatically replicate to all the servers added to the domain.
3. At the O\S level, user names and passwords are stored as NTLM or Kerberos keys.
Check 2: The O\S verifies the user name and password you entered against the Kerberos keys.
Authentication: To access any application\server\database...etc. provide valid authentication details like user name and
password
1. Windows authentication
1. Windows authentication:
> To connect to SQL Server, the account should first be a member at the O\S level; only then can the same account be used to log in
to SQL Server.
> When you connect to SQL Server with Windows authentication, the validation (user name and password) is checked against the
Kerberos key. Once valid, the connection is passed on to SQL Server as well.
1. If the account is a member of the O\S admin group, by default you get access\connect to SQL Server because of the
"BUILTIN\Administrators" group in SQL Server.
2. If this group is deleted, then no admin-group accounts can log in until they are added manually.
2. If this account is deleted then no admin group accounts can login until adding them manually.
>A user name and password are provided while logging in to a SQL Server instance with SQL authentication.
>We can connect by installing the tools on the local machine and connecting to the server by providing a user name and password.
>Passwords travel over the network for authentication, which makes SQL authentication less secure than Windows authentication.
Note: Cases where SQL authentication is used: third-party data sources like MS Access, Oracle, DB2, etc.
Note: Windows authentication is always more secure compared to SQL Server authentication, because with SQL authentication
passwords travel over the network.
-------------------------------------------------------------------------------------------------------------------------------
Error: 18452
Error reason: the instance is configured with Windows authentication only, and users are trying to connect with SQL Server
authentication.
Note: In real time, when you perform any SQL Server installation, always configure mixed mode to avoid such login failures.
ROLE OR PERMISSIONS
INSTANCE LEVEL:
B D 2P S4
Bulkadmin
Dbcreator
Diskadmin
Processadmin
Public
Securityadmin
Serveradmin
Setupadmin
Sysadmin
Exec sp_helpsrvrole
1. Bulk admin:
The user can perform bulk insert operations during data-loading activity from the application side.
Ex:
http://blog.sqlauthority.com/2008/02/06/sql-server-import-csv-file-into-sql-server-using-bulk-insert-load-comma-delimited-file-
into-sql-server/
2. DB Creator:
Create, alter and drop databases; this is an instance-level permission.
Note: A login can perform only the specific activities defined by the role assigned to the account.
3. Disk admin:
The user can manage disk-level files like MDF, NDF and LDF files.
Ex: A member of the diskadmin role can move files from one drive to another.
4. Process admin:
1. The user can see the list of processes running in SQL Server, including system and user-defined processes.
5. Security admin:
6. Server admin:
Processor changes\allocations
7. Setup admin:
8. Sysadmin:
The user has full control over the SQL Server instance; this is the administrator role on SQL Server.
A B D 4O S
DB_Accessadmin
DB_Backup operator
DB_Datareader
DB_Datawriter
DB_Ddladmin
DB_Denydatareader
DB_Denydatawriter
DB_Owner
DB_Securityadmin
User creation:
Note: When we map any login to a database, by default a user with that login name is created in the specific database.
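A typical sequence (login, password and database names are placeholders):
CREATE LOGIN app_login WITH PASSWORD = 'StrongP@ssw0rd1';
USE [dbname];
CREATE USER app_login FOR LOGIN app_login;           -- user created with the login's name
EXEC sp_addrolemember 'db_datareader', 'app_login';  -- grant a database role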
1. DB_Accessadmin:
2. DB_Backupoperator: the user can perform backup operations in SQL Server.
3. DB_Datareader: the user can only read data from all tables within the specific database.
4. DB_Datawriter: the user can only write data into all tables within the specific database.
I. Create
II. Alter
III. Drop
Note:
1. If we have db_owner and DB_Denydatareader or -writer permission, then we can perform all operations on the database
except reading\writing... preference goes to DENY.
2. If we have sysadmin and DB_Denydatareader\writer permission, then we can perform all operations on the
database... preference goes to SYSADMIN.
3. If reader and deny-datareader permissions are given to the same account, the user cannot read the data.
8. DB_OWNER:
9. DB_Security admin:
The user gets permission to grant roles to other users on specific tables\stored procedures\views\functions, etc.
Note:
10. Public:
Limitations:
Note:
1. Always keep the database owner as the SA account, which is the inbuilt SQL Server account that we never delete.
2. If an account is the owner of a specific database, you cannot delete it until you change the owner of the database.
3. You cannot delete a login while it has active sessions running.
Schema:
> If no schema is mentioned for the user, the default schema (dbo) is used.
sp_setapprole 'approlename', 'password'
sp_unsetapprole (with the cookie returned by sp_setapprole)
>Insert
>Alter
>Control
>Delete
>Select
>References
>Take ownership
>Update
>View definition
Note:
>If I grant a database-level role, permissions apply to all tables inside the database.
>If I grant a table-level role, permissions apply only to the specific table; other tables cannot be accessed due to the object-level
restriction.
PRINCIPALS:
> Principals are entities that can request SQL Server resources. Like other components of the SQL Server authorization model,
principals can be arranged in a hierarchy.
> Every principal has a security identifier (SID), presented in hexadecimal.
> If we want to give permissions to 100 users, we need not grant them manually 100 times; we can simply put the users in a
group and grant permissions to that group at one time.
> The principals on our server can be seen using the queries below.
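The standard catalog views for this (the type letters are explained below):
SELECT name, type, type_desc, sid FROM sys.server_principals;
SELECT name, type, type_desc FROM sys.database_principals;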
U—Windows user
G—Windows Group
S—SQL User
A—Application role
R—Database role
SECURABLES:
> Securables are the resources to which the SQL Server Database Engine authorization system regulates access.
Default logins:
1. BUILTIN\Administrators: the O\S administrators group; any account that is a member of the O\S-level admin group gets direct
access to SQL Server with SYSADMIN permissions, without being added as a login in SQL Server.
Note: From SQL Server 2008 onwards the BUILTIN\Administrators group is removed; even a member of the O\S administrators
group must be added as a login under the SQL Server instance.
2. NT AUTHORITY\SYSTEM: the local system account; a Windows authentication login with sysadmin permissions.
6. SA: the default SQL authentication account, with sysadmin permissions.
3. Add current user is one of the option added while installing SQL Server.
Hardening is nothing but protecting SQL Server from threats and end users.
> Always try to create Windows authentication logins instead of SQL Server authentication logins.
>Never give sysadmin permissions to teams other than the DBA team unless there is a strong reason.
>For service accounts, the "password never expires" option should be enabled and the "user must change password" option should
be disabled or unchecked.
Note: Changing the account for a service requires a restart of only that specific SQL Server service.
Note: From SQL Server 2012 onwards, user-defined server roles can be created.
Syntax:
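A minimal sketch (role and login names are placeholders):
CREATE SERVER ROLE [jr_dba] AUTHORIZATION [sa];
ALTER SERVER ROLE [jr_dba] ADD MEMBER [some_login];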
Role Creation:
USE [dbname]
GO
CREATE ROLE [rolename] AUTHORIZATION [dbo]
Drop Role:
USE [dbname]
GO
DROP ROLE [rolename]
Note: In real time apps team define this roles and request DBA team to execute the script.
Steps:
Note: When there is a requirement to give the highest level of permissions to a member of a group, add that particular account
as a login and grant the permissions to it directly.
-------------------------------------------------------------------------------------------------------------------------------
>Password
>User name
Default Users:
SQL Server ships with ten pre-defined schemas that have the same names as the built-in database users and roles. These exist
mainly for backward compatibility
1. SYS
2. INFORMATION_SCHEMA
3. GUEST
4. DBO
The SYS & INFORMATION_SCHEMA schemas are reserved for system objects; you cannot create objects in them. Permissions
granted to the guest user are inherited by users who have access to the database but who do not have a user account in the
database.
GUEST:
Each database includes a guest. Permissions granted to the guest user are inherited by users who have access to the database.
DBO:
The dbo schema is the default schema for a newly created database. The dbo schema is owned by the dbo user account. By
default, users created with the CREATE USER Transact-SQL command have dbo as their default schema.
Any member granted the sysadmin role can access all the databases, mapped as the dbo user (db_owner role).
Server principals with names enclosed by double hash marks (##) are for internal system use only.
RECOVERY MODELS
Recovery Models:
Full
Bulk Logged
Simple.
FULL RM:
● In FULL recovery model every transaction is logged into the transaction log file (as per WAL).
● This recovery model is generally used in Production databases (i.e. OLTP based systems)
Advantages:
2) Point-in-time Recovery
Disadvantages:
1) Performance overhead; large transactions can at times take more time.
● In Bulk-logged recovery model every transaction is logged into the transaction log file, but bulk insert operations are
minimally logged.
● Bulk Insert operations are SELECT INTO, BULK INSERT, TEXT/IMAGE, Online INDEXING
Advantages:
2) Disk space utilization can be reduced when compared to Full Recovery model during bulk insert operations
Disadvantages:
3) Bulk logged recovery model is used in special requirement cases where bulk insert operations have to be performed with a
time constraint and generally data loss is compromised to BULK LOGGED.
● In simple recovery model every transaction is logged into the transaction log file, but at regular intervals the
transaction log file is TRUNCATED whenever a CHECKPOINT operation occurs. Simple recovery model is generally used
in Development environment where Database Priority/Point-in-time priority is less
Advantages:
1) Transaction log file growth can be controlled with regular truncation that occurs
Disadvantage:
Use Master
Master- Simple
MSDB- Simple
TEMPDB- Simple
MODEL–Full
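Each database's current recovery model can be verified with:
SELECT name, recovery_model_desc FROM sys.databases;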
Full: transactions are fully logged. Bulk-logged: bulk operations are minimally logged, others fully logged. Simple: transactions are
fully logged, but the log is truncated at each checkpoint.
3. Restricted mode: only users with sysadmin or db_owner permissions can access the database.
BACKUPS
Backups
>A copy of data that is used to restore and recover the data after a system failure.
Use: A backup is a safeguard for databases, because data may be lost due to many failures, such as media failures, user errors,
hardware failures and natural disasters, etc. With good backups, we can recover a database from failures.
Backups are lightweight threads in SQL Server, which means they never consume many resources like CPU, memory, etc.
Types:
Types of backup:-
1. FULL BACKUP, 2. DIFFERENTIAL BACKUP, 3. TRANSACTION LOG BACKUP, 4. FILE AND FILE GROUP BACKUP, 5. MIRROR
BACKUP, 6. SPLIT OR STRIPE BACKUP, 7. COPY ONLY BACKUP, 8. TAIL LOG BACKUP, 9. PARTIAL BACKUP, 10. PARTIAL DIFF
BACKUP
>This backs up the whole database. In order to have further differential or transaction log backups you have to create the full
database backup first.
Syntax:
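(Database name and path below are placeholders.)
BACKUP DATABASE [dbname] TO DISK = 'D:\Backups\dbname_full.bak'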
Note:
>When we perform a full database backup, the backup file includes both committed and uncommitted transactions.
> If there are active transactions while the full backup runs, the backup file includes those transactions up to the point of the
backup.
2. Differential backup:
>Differential database backups are cumulative. This means that each differential database backup backs up the all the changes
from the last Full database backup and NOT last Differential backup.
Note: Which pages were modified? That information is stored in the DCM pages after a full backup. Before triggering the diff
backup, SQL Server reads the DCM pages to perform the differential backup.
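A minimal differential backup statement (names and paths are placeholders):
BACKUP DATABASE [dbname] TO DISK = 'D:\Backups\dbname_diff.bak' WITH DIFFERENTIAL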
>It takes complete log file (.LDF) and never takes backup of any .MDF OR .NDF files. Full backup is the base.
Syntax:
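(Names and paths below are placeholders.)
BACKUP LOG [dbname] TO DISK = 'D:\Backups\dbname_log.trn'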
Points:
>Log backups are not possible under the simple recovery model due to the truncate operation after each checkpoint.
> The first log backup depends on the most recent full backup; after that, each subsequent log backup depends on the previous log
backup.
Note: Extents modified by minimally logged (bulk) operations are tracked in the BULK CHANGED MAP [BCM] pages; the next log
backup reads these pages to include those extents.
DCM - Differential Change Map: from this page the engine reads when the last full backup was.
2. Used to create a full database or transaction log backup without breaking the log chain
3. Copy only backup is used in production servers when any high availability concepts are configured.
Point: A copy-only full backup can't be used as the base for a differential backup, nor can you create a copy-only differential
backup (COPY_ONLY combined with DIFFERENTIAL is ignored).
Note: Copy-only transaction log backups can be taken; they preserve the existing log backup chain.
Backup database sbidb to disk='path\sbidb_diff_copyonly.bak'
with copy_only, differential   (COPY_ONLY is ignored for differential backups)
Backup log sbidb to disk='path\sbidb_tlog_copyonly.trn'
with copy_only
5. Mirror backup:
2. Mirrored backups simply write the backup to more than one destination.
3. You can write up to four mirrors per media set. This increases the possibility of a successful restore if a backup media gets
corrupted
Syntax:
BACKUP DATABASE [dbname] TO DISK='D:\Backups\dbname.bak'
MIRROR TO DISK='E:\Backups\dbname_mirror.bak' WITH FORMAT
Note: Cost and disk space utilization are high in this type of backup.
>Partial backups were introduced in SQL Server 2005. They are designed for use under the simple recovery model to improve
flexibility when backing up very large databases that contain one or more READ-ONLY filegroups.
>It backs up only the file\filegroups in read_write mode and skips the file\filegroups in read_only mode.
Syntax:
The modified pages of READ_WRITE_FILEGROUPS will be backed up during partial differential backup.
Syntax:
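Hedged examples of both partial variants (names and paths are placeholders):
BACKUP DATABASE [dbname] READ_WRITE_FILEGROUPS TO DISK = 'D:\Backups\dbname_partial.bak'
BACKUP DATABASE [dbname] READ_WRITE_FILEGROUPS TO DISK = 'D:\Backups\dbname_partial_diff.bak' WITH DIFFERENTIAL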
This type of backup mainly used the case where there is a disk space issue in the server.
[Split] Backup: A striped backup splits the backup into parts and can be very useful under tight space constraints. A striped
backup writes the backup to different locations (i.e. parts of a backup, not a mirror).
Syntax:
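(Names and paths below are placeholders.)
BACKUP DATABASE [dbname]
TO DISK = 'D:\Backups\dbname_part1.bak',
   DISK = 'E:\Backups\dbname_part2.bak'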
Even this case we can take a backup by splitting into 2 drives 50 GB + 50 GB.
Note: To restore split backup files, all parts are mandatory. If you lose one split backup file, you cannot recover the data using the
other backup file alone.
>In SQL Server 2005, when a backup starts, the SQL engine directly starts writing to the backup file without checking the disk space.
>From SQL 2008 onwards, the SQL Server engine always checks the disk space first and starts writing the data only if sufficient
space is available; if not, it fails at the beginning.
Crash cases: MDF file corrupt\missing, LDF file corrupt\missing, page corruption, header corruption, internal db errors.
Note:
2. A tail log backup cannot be taken when the database is in the simple recovery model.
File Backup:-
Syntax:
Syntax:
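Hedged examples for a file and a filegroup backup (logical names and paths are placeholders):
BACKUP DATABASE [dbname] FILE = 'logical_file_name' TO DISK = 'D:\Backups\dbname_file.bak'
BACKUP DATABASE [dbname] FILEGROUP = 'FG1' TO DISK = 'D:\Backups\dbname_fg.bak'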
Note: By default the LDF file gets backed up when you trigger any file\filegroup backup.
Note: A file or filegroup backup does not require a full backup as its base.
If you want to restore a file\filegroup backup, the file containing the primary data (MDF) must be restored first.
>>I have a backup file. How do I find whether the backup file is valid or not? Use: RESTORE VERIFYONLY FROM DISK = 'path\file.bak'
Output:
>>I have a backup file. How do I find how many files are inside the .bak\.trn file? Use: RESTORE FILELISTONLY FROM DISK = 'path\file.bak'
Output:
Number OF FILES
Path
>>I have a backup file. How do I find the version of SQL Server by using the backup file? Use: RESTORE HEADERONLY FROM DISK = 'path\file.bak'
Output:
DBNAME
Compatibility
Version
Server name
Login name
Db size
Collation
Append (NOINIT): new backup sets are added to the existing backup file.
Overwrite (INIT): the existing backup sets in the file are overwritten.
With a full backup the entire data is written into the backup file, and due to this it takes more time.
Backup compression: a new inbuilt feature from SQL Server 2008 onwards. The main purpose is to save disk space and keep
more days' worth of backup files.
1---10%
2--20%
3--30%
4--40%
...10
Backup Set:
A backup set contains the backup from a single, successful backup operation. It can be a FULL/Diff/Tlog backup.
A media set is an ordered collection of backup media, tapes or disk files, to which one or more backup operations have written
using a fixed type and number of backup devices.
When a backup is split into two files, the backup files are said to belong to the same media family.
>Checksum:
Checksum is verified to ensure the data is not corrupted; it performs mathematical calculations over the data.
If any errors are reported in the database then backups generally fail. If you still want to take a backup you can, by selecting
"Continue backup on error", but the backup file may then contain the corruption.
FOR LARGE DATABASES, NEVER USE THE RESTORE VERIFYONLY OPTION CASUALLY; IT TAKES A HUGE AMOUNT OF TIME TO SCAN EACH PAGE.
Note: Backup compression is new feature in SQL server 2008 onwards and only possible in
ENTERPRISE EDITION.
Up to SQL 2005 and lower versions, projects used third-party tools to compress backup files.
RESTORE METHODS
Restore methods:
Syntax: RESTORE DATABASE [dbname] FROM DISK = 'path\dbname_full.bak'
● Restoring is the process of copying data from a backup and applying logged transactions to the data to roll it forward to
the target recovery point.
● A restore is a multiphase process. The possible phases of a restore include:
>Data copy: copying all the data, log, and index pages from the backup media of a database to the database files.
>Redo: committed transactions from the LDF are rolled forward into the MDF file.
>Undo: uncommitted transactions are rolled back, so that only committed data remains in the MDF.
When you have multiple backup files you should follow a standard restore sequence.
RECOVERY METHODS
Recovery Methods:
WITH RECOVERY:
No additional backup files are allowed; the database is brought online immediately.
WITH NORECOVERY:
Additional backup files are allowed; the database is left in the restoring state. Further diff or log backups can be restored.
WITH STANDBY:
Additional backup files are allowed; the database is brought into a standby\read_only state. Further diff or log backups can still
be restored.
Note: In this mode users can read the data in the middle of the restoration process, but no writes are possible.
Example: I have 2 backup files, full and diff. To restore, follow the script below:
RESTORE DATABASE [dbname] FROM DISK = 'path\full.bak' WITH NORECOVERY
RESTORE DATABASE [dbname] FROM DISK = 'path\diff.bak' WITH RECOVERY
1. Whenever database crashes attempt tail log backup to recover active transactions
Note: Tail log always need to restore last and tail log need to apply first before starting recovery process.
Backup strategy 1:
Recovery process:
1. Whenever the database crashes, always attempt a tail log backup to recover the active transactions. Once the tail log backup is
taken, start the recovery process using the existing backup files.
>Restore each transaction log backup taken after Wed 11:00 PM with NORECOVERY, up to Thursday 10:00 AM.
> Last, restore the tail log backup WITH RECOVERY to recover the final 3 minutes of data.
Note:
> If we restored the last backup with NORECOVERY instead of RECOVERY, use RESTORE DATABASE [dbname] WITH RECOVERY to bring the db online.
> If the LDF file is corrupted then the DBA cannot recover the active transactions from the LDF file.
Backup strategy 2:
Recovery Process:
1. Whenever the database crashes, attempt a tail log backup to recover the active transactions in the log file.
2. Take the recent full backup, which happened last Sunday 10:00 AM, and restore it with NORECOVERY.
4. Restore each t-log backup file taken after the differential backup of Saturday 10:15 PM, since t-logs are incremental. Restore
with NORECOVERY [if the tail log backup succeeded].
[If the tail log backup did not work, restore the last t-log backup WITH RECOVERY.]
5. Last, restore the tail log backup WITH RECOVERY to bring the database online up to the point of failure.
Note: If the LDF file is corrupted it is 100% impossible to attempt a TAIL LOG backup; it fails with a "file activation failure" error.
Backup strategy 3:
Full backup-10:00 AM
DIFF: 11:00 AM
TLOG: 12:00 PM
DIFF: 1:00 PM
Recovery process:
2. Take the last-but-one full backup (since the recent full backup @ 12:30 PM is corrupted) and restore it with NORECOVERY.
Note: If a full backup is corrupted, the diff or log backups taken after it cannot be used.
Recovery process:
A copy-only backup never disturbs the backup sequence, so you cannot use DIFF 2 with a copy-only full backup as its base.
Permission issue
Network issue
Wrong syntax
Db is unavailable
LSN mismatch
Permission issue
Network issue
Wrong syntax
LSN mismatch
Without restoring a full backup first, the user is trying to restore a diff or log backup.
If you need to restore one database's backup file as another database on the same instance, use the "WITH MOVE" option.
Syntax:
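A minimal sketch (the logical names come from RESTORE FILELISTONLY; all names and paths are placeholders):
RESTORE DATABASE [newdb] FROM DISK = 'D:\Backups\olddb_full.bak'
WITH MOVE 'olddb_data' TO 'D:\Data\newdb.mdf',
     MOVE 'olddb_log'  TO 'E:\Log\newdb_log.ldf'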
1. Dbo.backupfile: stores: database file, number of pages backed up for each file (backed_up_page_count)
2. Dbo.backupfilegroup:
File group name, File group id, Backup set, Is default, Is read_only
3. Dbo.backupmediafamily
Physical_device_name [physical path of your backup file], Media set, Media count
4. Dbo.backupmediaset
5. Dbo.backupset
Backup start date, Backup finished date, Backup lsn, Check point lsn
TYPE OF BACKUP, [D: FULL, I: DIFFERENTIAL, L: Log Backup, f: file group backup]
DB Name, Backup triggered login name, Size, Password protected, Collation setting and Compatibility
-------------------------------------------------------------------------------------------------------------------------------
1. Dbo.restorefile
File number
2. Dbo.restorefilegroup
Restore History
3. Dbo.restorehistory
Restore date
Note:
2nd method to see history related to backup or restore events: DATABASE > REPORTS > STANDARD REPORTS > BACKUP AND
RESTORE EVENTS
Case 1:
If you need to keep the same physical files on the same instance under a different database name, use the "With move" option.
PIECEMEAL RESTORE
● Piecemeal restore, introduced in SQL Server 2005, allows databases that contain multiple filegroups to be restored and
recovered in stages.
● Piecemeal restore involves a series of restore sequences, starting with the primary filegroup and, in some cases,
one or more secondary filegroups.
● Piecemeal restore maintains checks to ensure that the database will be consistent in the end.
● Piecemeal restore works with all recovery models, but is more flexible for the full and bulk-logged models.
● Every piecemeal restore starts with an initial restore sequence called the partial-restore sequence. Minimally, the
partial-restore sequence restores and recovers the primary file group.
● During the initial piecemeal-restore sequence, the whole database must be offline. Thereafter, the database is online and the
restored filegroups are available.
Purpose: we restore the database piece by piece in SQL Server by restoring the database partially.
Note: Real time: if users want to access a specific table\tables residing in one filegroup, you have the option to restore that
specific filegroup of the database instead of the entire database by using piecemeal restore.
Advantages: saves time, saves storage cost, allows quick testing, and user access can be controlled at a specific level.
Note: Piecemeal restore is not possible in the Standard edition of SQL Server.
ABC - Database
A - FG - A1
B - FG - B1
C - FG - C1
WITH NORECOVERY
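A minimal piecemeal sequence for the scenario above (a sketch assuming the simple recovery model; filegroup names and paths are placeholders):
RESTORE DATABASE ABC FILEGROUP = 'PRIMARY'
FROM DISK = 'D:\Backups\ABC_full.bak' WITH PARTIAL, NORECOVERY;
RESTORE DATABASE ABC WITH RECOVERY;                -- primary comes online first
RESTORE DATABASE ABC FILEGROUP = 'A'
FROM DISK = 'D:\Backups\ABC_full.bak' WITH RECOVERY;  -- remaining FGs in stages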
-------------------------------------------------------------------------------------------------------------------------------
4) Veritas NetBackup
6) Acronis
7) EMC Networker
9) ZAMANDA (AMANDA)
DATABASE REFRESH
Taking a backup of a production SQL Server database and restoring it into a development SQL Server is called a "DATABASE
REFRESH".
Steps:
2. Always use copy method to copy the backup file from prod to test\dev server.
If case: I don't have space on the dev\test server, then what do I do?
Step 1: Check the drive for any unnecessary\old backup files and delete those to claim some additional space. Try to
copy the backup afterwards.
OR
Step 2: If there are no old backup files, inform the storage team to expand the disk or add a new disk. Meanwhile send an
email to the requestor.
4. Move backup file from source to destination server by mapping the drive from either of the servers.
Path: Go to Run > type \\10.10.10.1\d$
5. Restore in destination server by changing the root path with destination instance.
7. Issue after database refresh: whenever we perform a backup and restore between different instances we usually get the
ORPHAN user issue.
What is an orphan user? A database user without a corresponding login is called an "ORPHAN USER".
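A common way to detect and fix orphan users (this procedure is deprecated in later versions but standard in this era; the user name is a placeholder):
EXEC sp_change_users_login 'Report';                 -- list orphaned users
EXEC sp_change_users_login 'Auto_Fix', 'username';   -- remap the user to the same-named login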
This method (point-in-time restore) helps you recover the data up to a point from the backup files, always by using the
syntax below:
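A point-in-time restore sketch using STOPAT (names, paths and the timestamp are placeholders):
RESTORE DATABASE [dbname] FROM DISK = 'D:\Backups\dbname_full.bak' WITH NORECOVERY;
RESTORE LOG [dbname] FROM DISK = 'D:\Backups\dbname_log.trn'
WITH STOPAT = '2015-01-01 10:30:00', RECOVERY;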
------------------------------------------------------------------------------------------------------------------------------
Note:
1. When we change the recovery model from full or bulk-logged to simple, or from simple to full\bulk-logged, we should trigger a
fresh full backup to establish the LSN chain; otherwise the LSNs mismatch and backups may fail.
2. Higher-version backup files cannot be restored on a lower version, but vice versa is possible.
3. Does a full or differential backup clear the log? NO. But indirectly, when these kinds of backups are triggered, a checkpoint
operation is raised.
4. Do backups read data through the buffer pool? NO, a backup is never taken from the buffer pool.
Reason: fetching every page into the buffer pool is not feasible and would impact performance.
No...Not sure
>If one SPID triggers an ADD FILE and another SPID triggers a backup, the ADD FILE waits until the backup completes.
GUI PATH:
Command:
JOBS
Using jobs, a DBA can automate maintenance tasks by defining a specific schedule.
Minimum requirements:
>Job schedule
-------------------------------------------------------------------------------------------------------------------------------
dbo.sysjobactivity
dbo.sysjobhistory
dbo.sysjobs
dbo.sysjobservers
dbo.sysjobschedules
dbo.sysjobsteps
dbo.sysjobstepslogs
Disadvantage:
When we have multiple steps in a job, we cannot define a separate schedule for each step; the schedule applies to all the steps each
time.
-------------------------------------------------------------------------------------------------------------------------------
Job disabled
Agent stopped
T-SQL issue
Note: The job owner should always be kept as the SQL Server Agent service account [Local\Domain].
Note: Multiple jobs can be deleted from "OBJECT EXPLORER DETAILS [F7]"
MAINTENANCE PLANS
Maintenance Plans:
>Newly introduced in SQL Server 2005 (with bugs); fully working from SQL 2005 + SP1.
1. Wizard
2. Flow chart (design surface)
Note:
>From SQL Server 2008 onwards the "Ignore offline databases" option was introduced, which skips any offline db and lets the
maintenance plan complete.
> In SQL 2005, if multiple tasks are selected in a maintenance plan, multiple dependent jobs are created; whereas from SQL 2008
onwards only 1 job is created for multiple tasks or steps.
-------------------------------------------------------------------------------------------------------------------------------
Total 11 MP types:
Source: Unknown
-------------------------------------------------------------------------------------------------------------------------------
Daily:
Backup-differential
Weekly Maintenance :
Backup- Full
Moving the backup file from disk to tape according to the retention period.
Note: From the Windows side, we can also configure automated tasks by using the "WINDOWS TASK SCHEDULER".
Note: A DBA can use the Windows Task Scheduler to automate SQL-related tasks on SQL Server Express edition, which does not have
the "SQL Server Agent service". Without the Agent service, jobs or maintenance plans cannot run; in this case we use the Windows
Task Scheduler.
Note: You cannot delete the job without deleting the maintenance plan. First delete the maintenance plan, which automatically
deletes the dependent job.
dbo.sysmaintplan_log
dbo.sysmaintplan_subplans
dbo.sysmaintplan_logdetail
>This method mainly helps to move a database faster from one instance to another, or to move the files [MDF, LDF or NDF]
between drives within the same instance.
Note: A very fast method that saves a lot of time compared to backup and restore.
Pre-install Steps:
1. Perform full backup of the database depending on the db size [if you have more down time]
Install steps:
Method-1 GUI:
> Copy the MDF and LDF from the source server to the destination server.
Attach\Detach (T-SQL; database name and paths are placeholders):
USE [master]
GO
EXEC master.dbo.sp_detach_db @dbname = N'dbname'
GO
-- copy the files, then on the destination instance:
USE [master]
GO
CREATE DATABASE [dbname] ON
(FILENAME = N'D:\Data\dbname.mdf'),
(FILENAME = N'E:\Log\dbname_log.ldf')
FOR ATTACH
GO
> Change the compatibility level if you attach into a higher version of SQL Server.
Note:
Limitations:
> The attach\detach method does not work from a higher version of SQL to a lower version of SQL Server.
> This method helps to create a copy of the database on the same instance or on a different destination instance.
Note:
Users
Logins
Database
Tables
Views
Store procedures...etc
Limitations:
>System databases
> In SQL Server, system object data is stored logically in the master database and physically in the resource database.
>Whenever SQL Server restarts, the SQL Server engine checks for the master MDF and LDF path from Configuration
Manager > Advanced > Startup parameters > MDF and LDF file location.
> At the same time it writes database recovery information into the SQL Server error log.
Note: In the SQL 2005 version, the master and resource database MDF and LDF files are stored in the DATA folder,
whereas from SQL 2008 onwards the master files are in the DATA folder and the resource files are in the BINN folder.
Linked server
Endpoint
Instance configuration
Login [SYSXLOGIN]
> Every user database you create gets the properties below from the model database.
Ex:
File
File size
Recovery model
Collation
Root path....etc
3. MSDB[DBID: 4]
Information stores:
Jobs
Maintenance plans
Alerts
Operator
Database mail
Log shipping
Backup
Restore
SSIS\DTS packages
> Whenever the SQL Server services restart, the Tempdb MDF and LDF are reset to their original size, flushing all the temp information.
> Tempdb always inherits its properties from the MODEL database, except the Tempdb MDF size (8 MB) and recovery model (Simple).
> By finding the Tempdb creation date (using SP_HELPDB, or db > right click > Properties, or the Tempdb mdf/ldf file creation date)
we can conclude when the SQL Server services were last restarted.
Tempdb Stores:
Cursors
Triggers
Functions
Joins
Indexes
>Hidden database
> Only SELECT queries work against it. To find when the resource database was last updated and its current version:
SELECT SERVERPROPERTY('ResourceVersion'), SERVERPROPERTY('ResourceLastUpdateDateTime')
GO
-------------------------------------------------------------------------------------------------------------------------------
>Backups should be taken for master, model and msdb. The backup statement does not work for Tempdb and the resource database.
>Adding a file is possible only for msdb and Tempdb. File groups are possible only for the msdb database.
> Files [mdf and ndf] can be moved from one drive to another; this is called "FILE MOVEMENT".
> Never add files or file groups to the master and model databases.
SUSPECT DATABASE
[User db corruption]
Reasons:
Steps to Resolve:
3) Set EMERGENCY mode on for the database for further troubleshooting. Emergency mode is a READ_ONLY state and gives some basis for identifying the cause of the issue.
5) Run DBCC CHECKDB on the database to identify whether the issue is with the data files or the log files.
Running CHECKDB finds any consistency and allocation errors; if no errors are found then the data file is considered clean and the issue might exist with the log file.
6) Detach the database and delete log file from the path.
sp_detach_db @dbname='dbname'
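A minimal sketch of the emergency-mode repair sequence (database name is a placeholder; REPAIR_ALLOW_DATA_LOSS is the last resort and needs approvals, as noted below):
ALTER DATABASE SalesDB SET EMERGENCY;
DBCC CHECKDB (N'SalesDB');                            -- identify the errors first
ALTER DATABASE SalesDB SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
DBCC CHECKDB (N'SalesDB', REPAIR_ALLOW_DATA_LOSS);    -- may lose data
ALTER DATABASE SalesDB SET MULTI_USER;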
Column: "Estimated_completion_time"
------------------------------------------------------------------------------------------------------------------------------
-------------------------------------------------------------------------------------------------------------------------------
RESTORE DATABASE dbname PAGE = 'fileid:pageid' FROM DISK = 'backup file path'
Note: Page level restore is possible from SQL Server 2005 by using T-SQL, whereas from SQL 2012 page restore is also included in the GUI.
-------------------------------------------------------------------------------------------------------------------------------
Note:
If a backup is not available and issues are found with the data file/log file, use DBCC CHECKDB with the REPAIR_ALLOW_DATA_LOSS option.
This command repairs the database but has the risk of data loss. Take proper approvals before performing this step.
If you run CHECKDB, which internal checks will execute?
Checks the logical and physical integrity of all the objects in the specified database by performing the following operations:
>>Runs DBCC CHECKALLOC on the database:: Checks the consistency of disk space allocation structures for a specified database.
>>Runs DBCC CHECKTABLE on every table and view in the database::Checks the integrity of all the pages and structures that
make up the table or indexed view.
>>Runs DBCC CHECKCATALOG on the database::Checks for catalog consistency within the specified database. The database
must be online.
MASTER CORRUPT:
>Master is the most crucial database in an instance, if it is corrupt entire instance gets affected.
Master Corrupt
Partially corrupt:
1. If the master database is corrupt, it is either completely corrupt or partially corrupt. If partially corrupt, the instance will start with -m;-T3608; if it is completely corrupt, the instance wouldn't start.
Completely Corrupt:
1) The master database doesn't start even with /m /T3608, hence we need to rebuild the master database.
Use the command prompt and start rebuilding the master database by entering the path of setup.exe [the path of the SQL Server software].
2) Rebuild master:
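A typical rebuild command for SQL Server 2008 and later (instance name, accounts and password are placeholders):
setup.exe /QUIET /ACTION=REBUILDDATABASE /INSTANCENAME=MSSQLSERVER /SQLSYSADMINACCOUNTS=DOMAIN\DBAGroup /SAPWD=StrongPassword1
After the rebuild, start the instance in single-user mode (-m) and restore master from the last good backup to recover logins and configuration:
RESTORE DATABASE master FROM DISK = N'\\backupshare\master.bak' WITH REPLACE;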
MODEL CORRUPTION:
Error message:
The header for file '\model.mdf' is not a valid database file header.
Solution:
Model database being one of the crucial database for new database creations and also for Tempdb recreation on every restart.
Steps:
1) Verify if Model is corrupt or not in Event viewer and SQL Server Error Logs.
2) Confirm if a valid database backup exists or not using restore verify only/header only.
3) Start instance with Master database only by enabling the trace 3608.
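A sketch of the restore approach from steps 1-3 above (backup path is a placeholder):
-- from the Binn folder, start the instance with master only, in single-user mode
sqlservr.exe -c -m -T3608
-- then, from sqlcmd/SSMS:
RESTORE DATABASE model FROM DISK = N'\\backupshare\model.bak' WITH REPLACE;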
OTHER METHOD: Copy and paste the model .mdf and .ldf files from another instance of the same version [requires taking the instance offline].
Note:
From SQL Server 2008 onwards MS introduced a TEMPLATE folder which contains fresh "MASTER, MODEL and MSDB" MDF and LDF files.
If any system database is corrupt, use this template to bring the instance up quickly, then restore from backup to get the updated data.
MSDB CORRUPT:
1) Verify the reason of failure in the error logs and troubleshoot accordingly. If database is really corrupt then look out for an
available valid backup. If backup is available restore MSDB as a normal user database and it would be restored.
2) If backup is not available, then stop the instance and start the instance in /m and /t3608 startup parameters.
3) Connect to the Query window and detach MSDB database and delete the OS level files.
SP_detach_db 'MSDB'
ISSUE: After an msdb rebuild, SQL Server Agent is not able to start. How to resolve?
Solution: Re-enable the Agent extended stored procedures:
sp_configure 'show advanced options', 1;
RECONFIGURE;
sp_configure 'Agent XPs', 1;
RECONFIGURE;
http://techydave.blogspot.in/2013/02/sql-server-agent-agent-xps-disabled.html
TEMPDB CORRUPTION:
1. If Tempdb is corrupt the instance wouldn't respond; it would be in a hung state, equal to a crash.
Note:
1. If the Tempdb log file is full, the SQL Server instance does not allow any new connections.
2. If the model database is corrupted at the same time Tempdb is corrupt, then Tempdb cannot even be recreated after a restart (Tempdb is built from model).
Trace Flags:
-s: starts the named instance of sqlservr.exe
-n: starts the instance without logging SQL Server events to the Windows event viewer
-------------------------------------------------------------------------------------------------------------------------------
FILE MOVEMENTS
1. File movement is applicable for both user and system databases.
Pre-Activity: Take full database backup before performing any changes on your SQL Server instance.
1. Collect logical file names for your databases [User and system]
sp_helpfile
2. Update the logical name for user database by using alter command with new path.
Alter database dbname modify file (name='logical file name', filename='newpath') -----Data file
Alter database dbname modify file (name='logical file name', filename='newpath') -----log file
Note: Verify whether logical name is updated with new path by using sp_helpfile again...
3. Take database offline. [Kill the session if any active sessions are running on the database before taking your user database
offline]
4. Move physical files to new path and bring the user database online.
Note: While moving physical .mdf, .ldf, .ndf files use always copy & paste method but not cut and paste.
5. Sp_helpfile
Note: When performing user database file movement only required to take database offline but not SQL Server services offline.
Note: Ask application team to check the connectivity of the database and once received confirmation from application team
then only delete the older files from physical location.
Note: After completing the file movement, clear the older files from the directory by using SHIFT+DELETE rather than a plain delete, since deleted files go to the Recycle Bin; if space then runs out on the C:\ drive the server can become hung. More risk is involved.
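A minimal end-to-end sketch for a user database (database name, logical names and paths are placeholders):
USE master;
GO
ALTER DATABASE SalesDB MODIFY FILE (NAME = SalesDB_Data, FILENAME = 'E:\NewData\SalesDB.mdf');
ALTER DATABASE SalesDB MODIFY FILE (NAME = SalesDB_Log,  FILENAME = 'F:\NewLog\SalesDB_log.ldf');
ALTER DATABASE SalesDB SET OFFLINE WITH ROLLBACK IMMEDIATE;  -- kills active sessions
-- copy (do not cut) the physical files to the new paths, then:
ALTER DATABASE SalesDB SET ONLINE;
EXEC SalesDB..sp_helpfile;  -- verify the new paths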
-------------------------------------------------------------------------------------------------------------------------------
Note: For system databases you have to restart the complete SQL Server instance [instance level downtime is required].
2. Update logical name for Tempdb database by using alter command with new path
Default Instance:
Named Instance:
5. New Tempdb files [.mdf, .ldf] will automatically be created at the new path.
6. NOTE: After verification, if your instance is OK, remove the old Tempdb files.
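A sketch for Tempdb, assuming the default logical file names tempdev and templog (paths are placeholders); the files are recreated at the new path on the next service restart:
ALTER DATABASE tempdb MODIFY FILE (NAME = tempdev, FILENAME = 'T:\TempDB\tempdb.mdf');
ALTER DATABASE tempdb MODIFY FILE (NAME = templog, FILENAME = 'T:\TempDB\templog.ldf');
-- restart the SQL Server service, verify, then delete the old tempdb files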
Note: For system databases you cannot perform offline, attach, detach, drop, delete
Steps:
2. Go to configuration manager-->advanced tab-->startup parameters--> specify new path for your .mdf and .ldf files
4. Copy physical .mdf and .ldf files to new location or new path.
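For master the paths come from the startup parameters rather than ALTER DATABASE; a typical parameter set (paths are placeholders) looks like:
-dE:\SystemDB\master.mdf     (master data file)
-eE:\Logs\ERRORLOG           (error log)
-lE:\SystemDB\mastlog.ldf    (master log file)
Update these in Configuration Manager > Advanced > Startup Parameters, copy the files to the new location, then restart the instance.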
-------------------------------------------------------------------------------------------------------------------------------
-------------------------------------------------------------------------------------------------------------------------------
>The main purpose is to load your data from SQL Server into third-party targets like Notepad, Excel, or another RDBMS (Oracle, etc.) by using the export technique. (Or)
>If your data is in third-party files like Notepad, Excel, or another RDBMS (Oracle, etc.), load it into a SQL Server database by using the import technique.
http://searchsqlserver.techtarget.com/feature/The-SQL-Server-Import-and-Export-Wizard-how-to-guide
http://sqlage.blogspot.in/2014/03/how-to-use-importexport-wizard-in-sql.html
>The SQL Server Import and Export Wizard is based on SQL Server Integration Services (SSIS). You can use SSIS to build
extraction, transformation and load (ETL) packages and to quickly create packages for moving data between Microsoft Excel
worksheets and SQL Server databases.
>Launch SQL Server Import and Export Wizard by one of the following methods:
Method 1: On the Start menu, roll the cursor over All Programs, scroll down to Microsoft SQL Server and then click Import and
Export Data.
Method 2: In SQL Server Data Tools (SSDT), right-click the SSIS Packages folder and then click SSIS Import and Export Wizard.
Method 3: In SQL Server Data Tools, go to the Project menu and click SSIS Import and Export Wizard.
Method 4: In SQL Server 2014 Management Studio, connect to the Database Engine server type, expand Databases, right-click a
database, point to Tasks and then click Import Data or Export data.
We have received SourceFile.xlsx file and we have to load that to SQL server Table. We can either create SSIS Package in BIDS or
we can use Import/Export Wizard to load this file in SQL Server Table. In this post, we will use Import/Export Wizard.
Step 1:
Right Click on Database in which your table exists or you want to create it and load Excel data as shown below
Choose the data source which you want to use as source, as we are loading data from Excel, Choose Excel file as shown below
Choose a Destination: Choose the destination where you want to load the data from source. In our case we are loading our data
to SQL Server Table. Configure as shown below
You can directly choose the table from which you want to load the data, or you can write a query if you are using a database as your source. As we are using Excel as the source, we will choose Table (Sheet).
In this part of the wizard, we have to select the tables or views we want to use from the source and load data to the destination. As we are loading data from Excel, the Excel tabs are shown. Choose the sheet (tab) you want to load. Under Destination, it will show the same name as the source. I have changed that to Customer Data. You can choose any table name you want. You can choose multiple sheets or tables from the source.
Column Mappings:
Click on Edit Mappings and then you can map the source columns to destination columns, also if you need to choose correct
Data type, you can change here.
By Default, Run immediately is checked. I have changed the option to Save SSIS Package and provided the location where I want
to save the SSIS Package. Also there is no sensitive information that I want to save in Package such as Password so I have
selected Do not save sensitive data.
Provide the name of SSIS Package and File Location as shown below
Summary of all the steps will be shown to you in this step. You can see the source and destination etc.
Once you hit Finish button, The Wizard will execute all below steps and finally save the SSIS Package.
To execute this package, double click on it and below window will open. If you need to change the name of File or SQL Server,
you can go to Connection Managers and change it. In my case, I do not want to make any changes. Press Execute Button
Once you hit Execute, Package Execute Progress window will appear and you will be able to see the progress of execution of
your SSIS Package.
Import/Export Wizard is a way to quickly load data between different sources and destinations. You can create your SSIS
Package quickly by using Import/Export Wizard and then add to SSIS Project and make changes if required.
If we need to export data from SQL Server then we need to Right Click on Database-->Tasks-->Export Data and Import/Export
Wizard will start
INTRODUCTION
> This is an enterprise solution for sending mails from the SQL Server database engine to SMTP servers. SQL Server database
applications can communicate with users through an email system. It provides features like scalability, security, and reliability.
> It uses an SMTP server to send mail. SQL Server 2000 supports SQL Mail, which supports MAPI profiles to send email instead
of an SMTP server. SQL Mail requires a MAPI-compliant mail server (Microsoft Exchange Server) and a MAPI client (Microsoft
Outlook).
>We can send a text message, query result, file as attachment. The database mail can be used to notify users or administrators
regarding events raised in SQL Server. For example, if an automation process like replication, database mirroring fails or there
are latency related problems then SQL Server can use this feature to notify the administrators or operators.
Points to Remember
● Unlike SQL Mail, Database Mail doesn't require a MAPI-compliant mail server or client like Outlook Express, or the extended messaging programming interface.
● Better performance. Impact of sending mails to SMTP servers by SQL Server is reduced as this task is implemented by an
external process initiated by the DatabaseMail.exe file.
● Works fine in a cluster based environment.
● 64-bit support.
● Database mail configuration information is maintained in an MSDB database.
● Only members of Sysadmin and DatabaseMailUserRole database role of MSDB can send mails by default.
● Allows sending messages in different formats like text and HTML.
● Supports logging and auditing features through different system tables of MSDB.
● Sp_send_dbmail
This is a system defined stored procedure which is used by SQL Server to send email using the database mail feature. This stored
procedure is present in the MSDB database.
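A minimal sending sketch (the profile name and recipients are hypothetical; the profile must already be configured):
EXEC msdb.dbo.sp_send_dbmail
    @profile_name = N'DBA_Profile',               -- hypothetical profile
    @recipients   = N'dba-team@example.com',
    @subject      = N'Backup job status',
    @body         = N'Nightly backups completed.',
    @query        = N'SELECT name FROM sys.databases';  -- optional: include query results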
● MSDB Database
Consists of all stored procedures, system tables, and database roles related to database mail.
● Service Broker
To establish communication between the SQL Server engine and the database mail engine we need a service broker. It submits
the messages to the mail engine.
● DatabaseMail.exe
This file is present in the Binn folder of the respective instance. It is the database mail engine.
How it works?
When a run-time error occurs in any automated task like backups, replication etc., the database engine raises the error and the same information is submitted to the Database Mail engine; the Database Mail engine then submits the mail to the SMTP server using the email ID and password mentioned in the profile. Finally, the SMTP server sends the mail to the recipients.
Error --> DB Engine --> DB Mail Engine --> SMTP Server --> Recipients
USE [master]
GO
ALTER DATABASE [MSDB] SET ENABLE_BROKER WITH NO_WAIT
GO
Steps to configure
1) Database mail is newly introduced concept in SQL Server 2005 and it is replacement of SQLMail.
2) Database Mail is based on SMTP (Simple Mail Transfer Protocol) and also very fast and reliable whereas SQLMail is based on
MAPI (Messaging Application Programming Interface).
3) SQL Mail needs an email client (like Outlook/Express) to send and receive emails, whereas Database mail works without any
Email client.
4) SQL Mail works with only 32-bit version of SQL Server, whereas Database Mail works with both 32-bit and 64-bit.
1. blog.sqlauthority.com/2008/08/23/sql-server-2008-configure-database-mail-send-email-from-sql-database/
2. www.codeproject.com/Articles/485124/Configuring-Database-Mail-in-SQL-Server
LITE SPEED
• Lite Speed for SQL Server is a revolutionary, patented development in database backup technology, encompassing the
latest encryption and compression algorithms to deliver a complete solution for your archiving needs.
• Lite Speed maximizes disk space and process efficiency while greatly reducing the overhead costs associated with
maintaining a state-of-the-art database facility.
Key Benefits:
• Reduces backup times (up to 75% faster than native SQL Server)
• The ability to create your backups with varying types of industry-standard encryption
• The ability to do object-level restores (i.e., tables, views, and stored procedures; this feature is available only in the
enterprise version)
• An enterprise console allowing you to control the backup and restores of all your MS SQL Servers in one location
Lite Speed provides consistent compression and encryption for SQL Server and Oracle, along with efficiency
Compression:
Lite Speed uses the same compression engine on both the Oracle and SQL Server platform, so the compression ratio for
like data is reliably similar on both databases.
Encryption:
Lite Speed provides encryption for both SQL Server and Oracle.
Efficiency:
Lite Speed performs backup compression on the database server, in memory, before the backup is written to disk or
shipped across the network to a tape system. All other methods require more storage and/or greater network
utilization because the compression occurs after the backup is created.
• http://www.techrepublic.com/blog/howdoi/how-do-i-install-configure-and-use-litespeed-for-database-backups/167
• http://kotako.wordpress.com/2010/03/02/convert-litespeed-backups-to-sql-server-backups-and-restore-them/
• http://easymssql.blogspot.co.uk/2010/01/need-to-convert-sql-lite-speed-backup.html
• https://support.quest.com/SolutionDetail.aspx?id=SOL22045
SHRINKING
> Shrinking releases unused space held by a database back to the disk.
> The only way to release space from a database in SQL Server is to shrink it.
Note: Shrink only when the database\file contains available free space. If there is none, even if you try to shrink, no space is released to disk.
Cases:
> The log file can be shrunk at any time, based on the available free space.
> Check whether any open transactions are running on the database before shrinking the data file. If there are, do not perform any shrink operation.
>Request for down time for any application that is trying to access SQL Server on the server so that resource utilization on the
server is very minimal.
> Shrink a maximum of 5 or 10 GB from a database at a time, not more than that; larger shrinks cause a performance impact.
>Do not shrink the data file to its maximum capacity. Always leave minimum 10-20% free space on the data file.
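A minimal shrink sketch (database, logical file names and target sizes in MB are placeholders; get the logical names from sp_helpfile first):
USE SalesDB;
GO
DBCC SHRINKFILE (N'SalesDB_Log', 1024);    -- shrink the log file to ~1 GB
DBCC SHRINKFILE (N'SalesDB_Data', 20480);  -- shrink the data file in steps, leaving 10-20% free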
1. When an application tries to connect, it uses a config file (configuration file, residing on the application server). The config file should contain the SQL Server instance name + user name [the service account name, which the DBA team should add into SQL Server under Security] + a strong password.
The application uses these details from the config file to point its connection to the SQL Server database.
> Any business maintains 2 data centers, called the PRODUCTION and DR data centers.
> The distance between the data centers should not be more than 50 KM.
UPGRADATION
Upgradation Steps:
>Upgradation involves overwriting the existing SQL Server instance and upgrading from one version to another.
Steps:
Pre-upgrade steps
Upgrade steps,
Pre-upgrade steps:
Upgrade steps:
3. Check db, login, jobs and Maintenance plans are still remain same in SQL Server after upgrade
Note: If any SSIS or DTS or cubes packages are there then BI team will take care...
Note: If instance is configured with high availability then break the HA and install higher version then try to reconfigure HA one
more time.....
Reasons of Upgradation:
Database Engine
Analysis Services
Reporting Services
Integration Services
1. Issues that can prevent an upgrade from a lower version to a higher version.
Advisor report:
The Advisor gives a report which contains the issues found during analysis, along with a list of associated tasks to manage them.
The analysis examines objects that can be accessed, such as scripts, stored procedures, triggers, and trace files.
Note: Upgrade Advisor cannot analyze desktop applications or encrypted stored procedures.
MIGRATION
Migration Steps:
>It involves moving data or databases from one instance to another, within SQL Server or to another DBMS like Oracle, Sybase, etc.
>For example, an OS upgrade, moving data from one drive to another drive, or moving to another DBMS.
https://sqlschoolhouse.wordpress.com/category/sql-server-database-migration/
Pre-Migration steps:
2) Disable all HA options and ensure all jobs are also disabled.
Migration steps:
3) Take backup of Source (2000/2005) and restore at destination (2005/2008) (or) follow Detach/Attach. ONLY user databases
should be backed up.
4) Change compatibility level from 80 to 90 for all the database(s) that have been migrated
sp_dbcmptlevel
sp_help_revlogin
7) DBCC UPDATEUSAGE
>The table or index row counts and the page counts for data, leaf, and reserved pages can become out of sync over time in the database system tables. The DBCC UPDATEUSAGE command corrects these inaccuracies and ensures the counts are updated.
8) DBCC CHECKDB
>DBCC CHECKDB is a commonly used command that can check the allocation, structural and logical integrity of a database and
its objects.
9) Update Statistics
>Statistics of SQL Server 2005 can be outdated and hence it is very important to update the statistics after moving the database
to 2008.
>Update statistics updates the header information (Page), count information, histograms, and metadata allocations.
sp_updatestats
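A consolidated sketch of the post-migration commands from steps 4-9 (database name is a placeholder; on SQL 2008+ you can also use ALTER DATABASE ... SET COMPATIBILITY_LEVEL instead of sp_dbcmptlevel):
EXEC sp_dbcmptlevel 'SalesDB', 90;   -- step 4: raise the compatibility level
DBCC UPDATEUSAGE ('SalesDB');        -- step 7: correct row/page count inaccuracies
DBCC CHECKDB ('SalesDB');            -- step 8: integrity check
USE SalesDB;
GO
EXEC sp_updatestats;                 -- step 9: refresh statistics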
Torn Page Detection: Torn Page detection is a method that calculates Bit values for every 512 byte sector and stores the final
values in the page header. While retrieving the page it checks if the page is Corrupted or not.
Checksum: Checksum generates a checksum number and stores in the page header and when retrieving the page checks if the
checksum is correct or not and hence this is how it validates the page is 'not corrupted'/'corrupted'.
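The torn page/checksum discussion above corresponds to the database PAGE_VERIFY option; a sketch of switching a migrated database to checksum verification:
ALTER DATABASE SalesDB SET PAGE_VERIFY CHECKSUM;  -- database name is a placeholder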
11) Transfer of Jobs (/Logins) can also be done using DTS Packages by DTS/SSIS team. If there are any DTS Packages in SQL Server
2000 they can be migrated to SSIS Packages in 2005.
If jobs have to be transferred by the DBA, then he/she has to manually script all the jobs and execute those commands at the destination.
12) Finally after migration is completed ask Application team to perform App checks and validate if SQL Server 2005 is
compatible with the respective application or not.
Once final Go is given by App team that confirms that migration is a success.
Migration from SQL Server 2008 to SQL Server 2014 Check list
TEMPDB FULL
Tempdb Full:
3) Find the transaction that is occupying the most space in Tempdb and troubleshoot or kill that transaction, based on approval.
4) Shrink the Data File of Tempdb (if no OPEN transactions are in progress)
5) DBCC FREEPROCCACHE
>There is a risk involved with this: it can cause subsequent procedures to be re-parsed and compiled. Also it might not give accurate results for DMVs.
6) Restart the instance; this resets the Tempdb size to the last stated value in sysdatabases.
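To find which sessions are consuming Tempdb (step 3), a sketch using the Tempdb space DMVs:
SELECT session_id,
       user_objects_alloc_page_count,
       internal_objects_alloc_page_count
FROM tempdb.sys.dm_db_session_space_usage
ORDER BY user_objects_alloc_page_count + internal_objects_alloc_page_count DESC;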
LOGFILE FULL
Error: 9002
Steps:
1. Use SELECT name, log_reuse_wait_desc FROM sys.databases to find whether any active transactions are holding the log.
2. If the log file growth is set to restricted, increase the log file growth setting.
3. Verify whether log backups are running or not if not manually run one T-log backup.
5. Check the disk space availability on the drive where the log file is kept.
6. If disk space is not available, add an overflow log file on a disk where space is available.
7. After approval, perform a file movement to a disk that has enough free space.
8. Verify whether log backups are running; if not, manually run one T-log backup.
Last option:
Note: VLFs are deleted from the log file by the truncate operation. You may get free space in the log file, but there are chances of data loss.
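A quick diagnostic sketch for error 9002 (database name and backup path are placeholders):
SELECT name, log_reuse_wait_desc FROM sys.databases;            -- why can't the log be reused?
DBCC SQLPERF (LOGSPACE);                                        -- per-database log size and % used
BACKUP LOG SalesDB TO DISK = N'\\backupshare\SalesDB_log.trn';  -- free up VLFs in full recovery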
>Resource Governor is a new technology in SQL Server 2008 that enables you to manage SQL Server workload and resources by
specifying limits on resource consumption by incoming requests.
> The following three concepts are fundamental to understanding and using Resource Governor.
Resource pools:Two resource pools (internal and default) are created when SQL Server 2008 is installed. Resource Governor
also supports user-defined resource pools.
Workload Groups: Two workload groups (internal and default) are created and mapped to their corresponding resource pools when SQL Server 2008 is installed. Resource Governor also supports user-defined workload groups.
Classification:There are internal rules that classify incoming requests and route them to a workload group. Resource Governor
also supports a classifier user-defined function for implementing classification rules.
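A minimal Resource Governor sketch (pool/group names, the 20% cap and the login name are all hypothetical):
USE master;
GO
CREATE RESOURCE POOL ReportPool WITH (MAX_CPU_PERCENT = 20);
GO
CREATE WORKLOAD GROUP ReportGroup USING ReportPool;
GO
-- classifier: route a reporting login to the group, everyone else to default
CREATE FUNCTION dbo.rg_classifier() RETURNS SYSNAME WITH SCHEMABINDING
AS
BEGIN
    DECLARE @grp SYSNAME = N'default';
    IF SUSER_SNAME() = N'report_login' SET @grp = N'ReportGroup';  -- hypothetical login
    RETURN @grp;
END;
GO
ALTER RESOURCE GOVERNOR WITH (CLASSIFIER_FUNCTION = dbo.rg_classifier);
ALTER RESOURCE GOVERNOR RECONFIGURE;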
The Policy Based Management feature was introduced in SQL Server 2008. The purpose of the feature is to assist SQL Server administrators in creating and enforcing policies tied to SQL Server instances and their objects. The policies can be configured on one SQL Server and re-used on other SQL Server instances to provide a SQL Server security model for the instance.
Policy Based Management allows DBAs to define the preferred state of the SQL Server system components (e.g.
instances and objects) and compare the compliance status with the preferred state. Properly declared policies ensure
enforcing company rules in the SQL Server environment, and are commonly a part of the SQL Server security model.
The Policy Based Management feature is built on top of the SQL Server Management Objects collection (objects that
are designed for programming all aspects of managing Microsoft SQL Server) which supports SQL Server 2000 and later
versions. Therefore Policy Based Management can be utilized on versions prior to SQL Server 2008, for instance via the
PowerShell subsystem and SQL Server Agent.
Policy Management allows creating policies for various facets with a specified condition.
● Facets: Facets is the property of SQL Server which the policy will consider managing. There are several facets on which policies
could be implemented. For example, we will use the “Database Option” facet to implement a policy which will ensure that the
AutoShrink option should be TRUE for all hosted databases on the server. Similarly, we will be creating policies on the Stored
Procedure facet.
● Conditions: It is the criteria upon which the facet is evaluated. While designing a policy for the server, the first step is to create a
condition which is to be evaluated by the policy for the facet.
● Policies: A SQL Server policy is a set of basic principles and associated guidelines, formulated and enforced by the Policy Manager of a server, for the desired server facets to conform with; in the long run this keeps the server consistent and helps the DBA achieve organizational-level IT norms.
Example 1
Scenario: We will create an on demand policy to ensure that all the databases have the Auto Shrink option set to True. By
default, a database that is created has Auto Shrink set to False, as shown in the figure below.
Next, provide a name to the Condition: “Check Auto Shrink”, and select the facet from the Facets drop down as “database
option”. In the Expression Editor, choose the field from the drop down “@AutoShrink”, select operator as “=”, and value as
“True”.
The condition will check all databases for their auto shrink properties to be true.
Click OK.
Provide a name as “AutoShrinkPolicy”; from the Check condition drop down, select the Condition we just created. And from
Targets, check every database as we want every database to conform to this policy.
Next is the evaluation mode. Let’s keep it “On demand” for this example. On demand means we will evaluate the policy at our
will instead of at a predefined schedule.
Click OK.
Step 3: Evaluation
We have been able to create the policy; now we will let the Policy Manager evaluate the policy. To evaluate, right click the Policy
“AutoShrinkPolicy” and click Evaluate. SQL Server evaluates and lists the result as shown in the screenshot below. Since for none
of my databases Auto Shrink is True, there are non-conformations for each one of the hosted databases on my server.
For conforming the results as per the Policy, check against the database and click on the Apply button.
This will set the Auto Shrink property for TestDB to True and a green sign will denote its conformance.
HIGH AVAILABILITY
● High Availability: SQL Server provides several options for creating high availability for a server or database.
● HA is to continue operations when a component fails. This is usually a hardware component like a CPU, Power supply,
Disk failure, Memory failure or the complete server.
● With HA there is usually no loss of service when a component fails.
LOG SHIPPING
• Log Shipping is used to synchronize distributed databases. It synchronizes a database by backing up transaction logs, copying them, and restoring the data. SQL Server uses SQL Server Agent jobs to automate these processes.
Or
• It automatically sends transaction log backups from one database (Known as the primary database) to a database
(Known as the Secondary database) on another server. An optional third server, known as the monitor server, records
the history and status of backup and restore operations. The monitor server can raise alerts if these operations fail to
occur as scheduled.
Or
• Shipping the transaction log backups from one server to another server called "Logshipping"
6. Create a copy share in secondary server and provide minimum read permissions.
7. SQL Service account should have the permissions on backup and copy share.
9. Any edition works for log shipping except Express edition [it has no Agent service].
For implementing Log Shipping, we need the following components - Primary Database Server, Secondary Database
Server, and Monitor Server.
• Primary Database Server: Primary Sever is the Main Database Server or SQL Server Database Engine, which is being
accessed by the application. Primary Server contains the Primary Database
• Secondary Database Server: Secondary Database Server is a SQL Server Database Engine or a different Server that
contains the backup of primary database. We can have multiple secondary severs based on business requirements.
• Monitor Server: Monitor Server is a SQL Server Database Engine instance that tracks the log shipping process.
Note: If a monitor server is included in the LS configuration then the alert job gets created on the monitor server itself; if a monitor server is not included then the alert job is created on both the primary and secondary servers.
Backup job:
Copy job:
• A SQL Server Agent job that copies the backup files from the primary server to the secondary server
Restore job:
• A SQL Server Agent job that restores the copied backup files to the secondary database
Alert job:
• A SQL Server Agent job that raises alerts for primary and secondary databases when a backup or restore operation
does not complete successfully within a specified threshold.
Architecture Points:
3. Backup job take the transactional log backup of primary database into backup share.
4. From backup share copy job picks the t-log backups and copy to secondary server > copy folder.
5. From copy folder restore job picks the backup file and restore in secondary database
In real time, always use a network path to place the backup files.
Note: Taking a backup to the local server is very fast compared to a network path.
3. Connect to secondary instance > then provide copy share path and restore database mode.
2. Standby: The database is in read-only mode and users are able to read the data.
Note: In log shipping, the secondary server database can be used for reporting purposes.
Advantages:
• The recovery model supports Full and Bulk-logged; changing the database to Simple recovery will cause log shipping to stop functioning.
• Edition is not necessary to be same for both primary and secondary servers.
Disadvantages:
• Failover: Manual
7) Backup/Copy/Restore Job owner change can cause permission issues and break LS.
9) WITH RECOVERY statement fired at Standby server can bring secondary database ONLINE breaking LS.
12) A changed backup schedule can cause a lot of delay, which might raise an alert.
1. Monitoring in Log-shipping
2. What are the log shipping jobs?
3. Failover
4. Switchover
5. What are the reasons log shipping fails? If it fails, what happens?
6. What are the common error numbers we get in log shipping?
7. What is .tuf [Transaction undo file] purpose? Overview
8. If we delete .tuf file what happen to LS?
9. If we do insert, update, delete in primary server database, changes replicate to secondary?
10. If I shrink primary database what happen to log shipping in secondary db?
11. If I delete some records in primary database what happen to log shipping in secondary db?
12. If I do truncate in primary database what happen to log shipping in secondary db?
13. If I take a manual log backup for LS configured database will it impact secondary?
14. Adding multiple secondary server to the existing configuration?
15. Adding file to log shipping database?
16. Patch management process in LS instance?
17. Reasons a backup job, copy and restore job fails?
18. How do we change recovery models in log shipping? We can change between Full and Bulk-logged; the Simple recovery model is not supported for LS.
19. If the primary database log file is full, what happens to the secondary? How will we resolve it?
1. Go to the job view history and check the status to monitor log shipping. If all backup, copy and restore jobs are running, then we can say log shipping is in sync and working fine.
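Besides job history, the msdb log shipping tables (listed below) can be queried directly; a quick status sketch:
SELECT primary_server, primary_database, last_backup_file, last_backup_date
FROM msdb.dbo.log_shipping_monitor_primary;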
MSDB Tables:
1.dbo.log_shipping_primary_databases:
Monitor server ID
2.dbo.log_shipping_primaries:
primary_id
primary_server_name
primary_database_name
maintenance_plan_id
backup_threshold
threshold_alert
threshold_alert_enabled
last_backup_filename
last_updated
planned_outage_start_time
planned_outage_end_time
planned_outage_weekday_mask
source_directory
3.dbo.log_shipping_primary_secondaries:
1.dbo.log_shipping_secondary_databases
Secondary ls databases
2.dbo.log_shipping_secondary:
Monitor server
2.dbo.log_shipping_secondaries
primary_id
secondary_server_name
secondary_database_name
last_copied_filename
last_loaded_filename
last_copied_last_updated
last_loaded_last_updated
secondary_plan_id
copy_enabled
load_enabled
out_of_sync_threshold
threshold_alert
threshold_alert_enabled
planned_outage_start_time
planned_outage_end_time
planned_outage_weekday_mask
allow_role_change
1.dbo.log_shipping_monitor_primary:
Primary server
Primary database
Backup threshold
2.dbo.log_shipping_monitor_Secondary:
Primary server
Secondary server
Threshold
3.dbo.log_shipping_monitor_history_detail:
4.dbo.log_shipping_monitor_error_detail:
5.dbo.log_shipping_monitor_alert:
>This is done when the primary database is no longer available. It is not pre-planned.
>When the primary database goes down, the application cannot connect and log shipping no longer exists. Bringing the secondary database online manually is called "failover".
Steps:
1. Disable all backup, copy and restore jobs and the alert job. Inform the application team to stop the app to avoid any user connections.
3. Compare [BACKUP SHARE AND COPY SHARE] and move .trn files from primary to secondary copy folder.
==Manually copy and paste from the backup share to the copy share by checking the time stamp and LSN number,
== or just run the copy job, which automatically copies to the copy share. Copy manually only the TAIL LOG BACKUP.
Note: Find what was the last backup file restored on the secondary.
From the msdb database's log shipping secondary tables we can get the last backup file restored:
Go to the secondary server > msdb > SELECT * FROM msdb.dbo.log_shipping_secondary_databases -- this table gives the last restored backup file.
4. Restore the pending .trn backup files on the secondary database WITH NORECOVERY to recover the transactions.
Note: Restore or copy by running manually, or run the copy\restore jobs.
5. Restore the last .trn backup file WITH RECOVERY to get the secondary database up & running. If a tail log backup exists, restore the TAIL log backup file last, WITH RECOVERY.
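A sketch of steps 4-5 (database name and paths are placeholders):
RESTORE LOG SalesDB FROM DISK = N'D:\Copy\SalesDB_0930.trn' WITH NORECOVERY;
RESTORE LOG SalesDB FROM DISK = N'D:\Copy\SalesDB_tail.trn' WITH RECOVERY;  -- tail log last
-- or, if no further log backups remain to restore:
RESTORE DATABASE SalesDB WITH RECOVERY;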
7. Inform to application team with new server instance name and database to start transactions.
Note: Log shipping supports manual failover method and does not support automatic failover.
3. HOW TO MOVE LOGINS IN LOG SHIPPING FROM PRIMARY TO SECONDARY SERVER DATABASE?
1. Whenever you create a login on the primary server, take the CREATE LOGIN script on the primary server and execute it on the secondary server.
3. But moving a user from the primary server to the secondary server database is not possible manually, because the secondary server database is in a read-only\restoring state.
4. Wait for the backup\copy\restore jobs to run after the login creation, or run them manually.
5. Once done, the user gets added under the secondary database automatically.
4. SWITCHOVER:
>This is done when both the primary and standby databases are available. It is pre-planned.
> Switchover means swapping the roles, i.e. your primary becomes the secondary [moving from ONLINE to restoring or standby mode].
DISASTER RECOVERY [DR process]: This concept is used with high availability solutions to test that both primary and secondary are working fine, instead of waiting for a disaster. It can be called a proactive check.
Every 6 months or 1 year a DR test happens. In real time, downtime is required.
Pre-step:
After the primary becomes the secondary server, its role changes from ONLINE to RESTORING mode.
3. As in the failover operation, compare your backup and copy folders and move the files.
4. Restore the secondary database using the t-log backup WITH RECOVERY to change the role. Now the secondary database will be in the online state (roles swapped).
5. Start configuring log shipping in reverse, from the secondary to the primary server [switchover].
Post Activities:
>Inform the application team to point connections to the current primary (the previous secondary) to test.
>If apps and users are able to work, then the DR test is successful.
Note: If a monitor server is not included, the alert job is created on both the primary and secondary.
SCENARIO: 6 WHAT ARE THE REASONS LOG SHIPPING FAILS? IF FAIL WHAT HAPPENED ?
Reasons:
SCENARIO: 7 WHAT ARE THE COMMON ERRORS NUMBERS WILL GET IN LOG SHIPPING?
Error: 14420 [Primary server –backup job related] & 14421[Secondary server copy and restore jobs]
Contains only uncommitted transactions and is created on the secondary server, not on the primary; it exists ONLY when the secondary DB is in STANDBY mode.
The TUF file basically contains information about any modifications that were made as part of incomplete transactions at the time the backup was performed.
> This file always resides on the secondary server, in the copy folder.
Is there any way to re-create the .tuf file? I have configured log shipping but unfortunately the .tuf file has been removed; log shipping has stopped and we are unable to bring it up.
Ans: The impact is only that the restore job fails on the secondary. We have to reconfigure log shipping one more time to re-create the .TUF file.
Note: If I delete the .TUF file, the restore job is impacted, but not the copy and backup jobs.
The .tuf file is updated dynamically whenever any log backup file is restored.
The .TUF file is created only on the secondary server, in the copy folder.
No .tuf file is created in "NORECOVERY" mode.
>.WRK [work file]: The work file is created on the secondary server; it contains copy-job information and is used only by the log shipping copy job.
WORK FILE [.WRK]: To manage the file copy process from the primary server to the secondary server, .WRK files are temporarily generated.
That is, a .wrk file is generated while a transaction log backup file is being copied from the backup location (commonly at the primary server end) to the secondary server by the agent job named LS-Copy on the secondary; when the file is completely copied to the secondary server, it is renamed to the .trn extension.
The temporary .wrk extension indicates/ensures that the file will not be picked up by the restore job until it is successfully copied.
SCENARIO: 9 IF I DELETE SOME RECORDS IN THE PRIMARY DATABASE, WHAT HAPPENS TO LOG SHIPPING IN THE SECONDARY DB?
SCENARIO: 10 IF I TRUNCATE SOME RECORDS IN THE PRIMARY DATABASE, WHAT HAPPENS TO LOG SHIPPING IN THE SECONDARY DB?
SCENARIO: 11 IF I SHRINK THE PRIMARY DATABASE, WHAT HAPPENS TO LOG SHIPPING IN THE SECONDARY DB?
If we perform a shrink on the primary database, the shrink automatically replicates to the secondary database as well. There is no impact to the log shipping configuration.
SCENARIO: 12 IF I TAKE A MANUAL FULL BACKUP FOR AN LS CONFIGURED DATABASE, WILL IT IMPACT THE SECONDARY DB?
Due to LSN mismatch, log shipping will not work (the restore job fails).
Note: In LS it is recommended to take a COPY_ONLY backup if a full backup is required.
SCENARIO: 13 IF I TAKE A MANUAL LOG BACKUP FOR AN LS CONFIGURED DATABASE, WILL IT IMPACT THE SECONDARY DB?
Due to LSN mismatch, log shipping will not work (the restore job fails).
Note: For this reason, in log shipping, if any user needs to take an ad-hoc full backup, we always use a COPY ONLY FULL BACKUP so as not to disturb the LSN chain, letting log shipping work as usual.
2. Go to primary server > database > properties > Log Shipping > Add > secondary instance details + copy share location + db recovery state.
3. On the 2nd secondary server, again create additional copy and restore jobs.
Note: With multiple secondary servers, each secondary server should contain at least 1 copy and 1 restore job.
Impact: After adding a file to a log-shipped db there is no impact to the backup job and copy job, but the restore job fails.
3. Identify which backup happened most recently after the file was added, then confirm.
Note: All backup files are moved to the copy share and restored, except the backup file taken after the file was added.
4. Go to the secondary and try to restore the log backup file with the MOVE option:
RESTORE LOG dbname FROM DISK = '[path of log backup file in copy folder]' WITH MOVE ...
5. Verify by running the restore job whether log shipping is working or not. If it works, log shipping is in sync.
Note: Always keep the same version of SQL Server between the primary and secondary.
Otherwise you can perform all operations except the SWITCHOVER scenario in log shipping.
Note:-
1. In log shipping, on SQL Server 2005, if any changes performed at the primary server fail:
2. To roll back the data we need to restore a fresh full backup; after restoring the full backup, if I enable the log shipping jobs, my restore job is going to fail due to LSN mismatch.
In SQL Server 2008, just take a fresh log backup from the primary and restore it on the secondary; log shipping will automatically start again.
If we remove a file on the primary, it is automatically removed from the secondary after the backup is restored.
When an instance of SQL Server is configured as part of log shipping, it is important to install service packs in a correct sequence; otherwise we may get unexpected issues.
There is no strictly required sequence to apply a service pack to the primary, secondary and monitor servers in a log shipping environment, but the following steps describe how to apply a service pack:
Agent failure
Incorrect path
Network failure
Domain issue
Agent down
>In log shipping we can change the recovery model from Full to Bulk-logged or Bulk-logged to Full; there will not be any impact to LS.
> But if we change from Full to Simple or Simple to Full, then we must trigger one full backup as mandatory.
Note: System databases [master, model, msdb, Tempdb] log shipping configuration is not possible.
No. Currently it is not possible to script log shipping; the only supported means of setting up log shipping is through the wizard.
In SQL 2005 version: Not possible to configure log shipping between 2 different domains.
In SQL 2008 version onwards: Possible to configure log shipping between 2 different domains.
Use pass-through security. Configure Windows NT accounts with the same name and passwords on the primary,
secondary and monitor servers. Configure SQL Server related services to start under these accounts on all servers and
use SQL authentication while setting up log shipping to connect to the monitor server. Or
Use conventional Windows NT security. You must configure the domains with two-way trusts. SQL Server related
services can be started under domain accounts. Either SQL authentication or Windows authentication can be used by
jobs on the primary and secondary servers to connect to the monitor server
Note: In SQL Server 2000, only the Enterprise and Developer editions support log shipping; later versions also support it in Standard and higher editions.
• Re-establish log shipping only on standby server (i.e. remove the Copy/Restore/Alert jobs and delete the standby
database) and add the Standby instance as a new standby database.
• You can set up log shipping between versions, however things aren't that simple. If you use it in this way and you need
to fail over to the standby server, you now have no way to swap the log shipping roles, as you can't then log ship back
from the 2008 server to the 2005 server.
http://www.databk.com/walkthrough5.htm
http://support.microsoft.com/kb/314515
5. Failover Process
10. If I perform shrinking on the primary LS database, what will happen? Does it affect the secondary server database?
13. If I change the recovery model, is there any impact to the log shipping database?
>If you change the recovery mode of an existing secondary database, for example, from No recovery mode to Standby
mode, the change takes effect only after the next log backup is restored to the database.
14. Scenarios:
DB MIRRORING
> Database mirroring is a primarily software solution for increasing database availability.
> It maintains two copies of a single database that must reside on different server instances of SQL Server Database Engine.
(Or)
> Database mirroring transfers transaction log records directly from one server to another and can quickly fail over to the
standby server.
Mirroring pre-requisites:
> Make sure that the two partners, the principal server and the mirror server, are running the same edition and version of SQL Server.
> A minimum of SQL Server 2005 SP1 is required to configure mirroring; on RTM you have to enable a trace flag:
DBCC TRACEON (1400) -- only for SQL 2005 RTM; from SQL 2008 onwards mirroring can be configured even on RTM, with no SP needed
• Principal– the server that holds a copy of database that is accessible to client applications at any given time.
• Mirror – the server that holds the copy of the database that is always in a restoring state, i.e. not accessible to the applications. (Or) Where the db is restoring, with no access for applications\users.
• Witness – the optional server that is useful to provide an automatic failover mechanism in case of any failure on principal
server. (Or) Used mainly for failover\monitoring.
• Endpoint: A SQL Server object that enables Principal, Mirror & Witness servers to communicate over the network.
Snapshot on mirror is possible for read-only purpose that is ideal for reporting requirements.
Transaction safety level – determines whether the changes on the principal database are applied to the mirror database synchronously or asynchronously. There are two safety levels: OFF and FULL.
1. Synchronous:
> In this mode, when any log record is sent from the principal to the mirror, an acknowledgement has to be sent back from the mirror to the principal. Moreover, the transaction should first commit at the mirror server.
2. Asynchronous:
> In this mode, when any log record is sent from the principal to the mirror, no acknowledgement needs to be sent back from the mirror to the principal. The transaction commits first at the principal server itself.
>Another possible configuration is asynchronous mode with no witness server; although this is possible to set up, it is not recommended because it combines the risk of data loss with a split-brain scenario.
Mirroring Architecture:
Architecture Points:
1. In mirroring, transaction log records are written into the log file (log buffer).
2. The same log records are sent from the principal to the mirror server database.
Endpoint: Mainly helps the communication and data transfer from the principal to the mirror server database.
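A typical mirroring endpoint statement, run on each partner (the port number is a placeholder; a witness instance would use ROLE = WITNESS):
CREATE ENDPOINT Mirroring
    STATE = STARTED
    AS TCP (LISTENER_PORT = 5022)
    FOR DATABASE_MIRRORING (ROLE = PARTNER);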
• Database mirroring architecture is more robust and efficient than Database Log Shipping.
• It can be configured to replicate the changes synchronously to minimized data loss.
• It has automatic server failover and client failover mechanism.
• Configuration is simpler than log shipping and replication, and has built-in network encryption support (AES algorithm).
• Does not require special hardware (such as shared storage, heart-beat connection) and cluster ware, thus potentially
has lower infrastructure cost
2. High performance
3. High protection
> The data transfer type in this mode is SYNCHRONOUS, i.e. an acknowledgement must be sent from the mirror to the principal.
> A transaction should first commit in the mirror server database and then commit in the principal server database.
>If a transaction is not committed in the mirror for any reason, the same transaction is sent again from the principal to the mirror for further processing.
> This mode supports both automatic and manual failover methods.
> The data transfer type in this mode is SYNCHRONOUS, i.e. an acknowledgement must be sent from the mirror to the principal.
> A transaction should first commit in the mirror server database and then commit in the principal server database.
>If a transaction is not committed in the mirror for any reason, the same transaction is sent again from the principal to the mirror for further processing.
> The data transfer type in this mode is ASYNCHRONOUS, i.e. no acknowledgement is sent from the mirror to the principal after a transaction reaches it.
> A transaction first commits in the principal server database and then commits in the mirror server database.
>The principal db does not know whether transactions are reaching the mirror or not.
1. MONITORING IN MIRRORING:
Note: In SQL Server database mirroring, an inbuilt monitoring tool is available, called the "LAUNCH DATABASE MIRRORING MONITOR".
> DB > right click > Launch Database Mirroring Monitor > check the columns
CURRENT ROLE
MIRRORING STATE
>Unsent Log: When the mirror database\server is not online, whatever transactions are running on the principal are stored in the "UNSENT LOG".
> The unsent log size increases if the mirror database is absent for a longer time.
>Unrestored log: Once the mirror server\database is back, the data pending in the principal's unsent log is sent to the mirror's unrestored log.
> Mirror commit overhead: how much time it takes to commit any transaction on the mirror server.
T-SQL Method:
Use msdb
3. Manually fail over a database mirroring session in high-availability mode by issuing the following command at the principal:
ALTER DATABASE dbname SET PARTNER FAILOVER
4. Select * from sys.database_mirroring ---> provides information about principal and mirror
Whenever the principal database\server goes down, the witness comes into the picture to perform a failover to the mirror server and bring the mirror database online, i.e. accessible to end users\applications.
Whenever the principal goes down and there is no witness, the DBA should perform a manual failover.
Note: This type of failover supports in “HIGH AVALIBILITY WITH AUTOMATIC FAILOVER + High protection” modes
Post failover:
>Applications\end users auto-redirect their connections to the mirror server; the app team does not need to involve the DBA specifically.
>Logins need to be created on the mirror server whenever you create them on the principal instance.
3. SWITCHOVER PROCESS:
> In mirroring there is no separate switchover process, because if we do a failover the roles automatically change, so no separate switchover concept is required.
Note: In high performance mode, we cannot perform a manual failover. If we need to fail over, first change the data transfer type from asynchronous to synchronous, i.e. change the mirroring mode from high performance to high protection.
> High performance mode only supports the FORCED FAILOVER mode [forcing service, with possible data loss].
3. Take one transaction log backup of the principal server database and copy the log backup to the mirror server.
4. Restore the t-log backup file on the mirror database with no recovery, using WITH MOVE for the new file:
RESTORE LOG dbname FROM DISK = 'path of log backup' WITH NORECOVERY, MOVE 'new logical file name' TO 'new physical path'
5. Reconfigure mirroring for the database and check the status in the mirroring monitor.
Method 2: Instead of breaking the mirroring we can perform "PAUSE Mirroring” from database properties.
After 4th step we should resume the mirroring and check whether mirroring is working or not.
http://www.mssqltips.com/sqlservertip/2834/how-to-add-a-database-file-to-a-mirrored-sql-server-database/
1. Collect logical file names for your databases [User and system]
sp_helpfile
alter database dbname modify file (name='logical file name', filename='newpath') -----Data file
alter database dbname modify file (name='logical file name', filename='newpath') -----log file
Note: Verify whether logical name is updated with new path by using sp_helpfile again...
>If a witness is included, then an automatic failover occurs and the mirror becomes the principal.
4. Move the physical files to the new path and bring the instance online.
Note: While moving physical .mdf, .ldf, .ndf files, always use copy & paste, not cut and paste.
Note: When you perform this at the mirror server, an automatic failover has already happened and the principal database is online at the principal instance.
Note: If we don't have a witness, the DBA needs to perform a manual failover before taking the instance offline, on both the principal and mirror instances.
If we delete, insert, update, truncate or shrink records on the principal server, then the same deletes, inserts, updates, truncates and shrinks automatically happen in the mirror database as well, without affecting the mirroring configuration.
7. JOBS IN MIRRORING?
Only 1 job, the "Database Mirroring Monitor Job", gets created on both the principal and mirror servers. The witness does not have the job.
• Mirror db offline
>Take a login script on the principal server and execute the same script on the mirror server to recreate the login on the mirror instance.
>In mirroring, users move automatically from the principal to the mirror server database after synchronization.
(Or)
> When you create any login at the principal, create the same login at the mirror server as well.
> If you map the login at the principal to any mirror-configured database, then the same login is created as a user in the mirror database.
> The same user account automatically replicates from the principal db to the mirror db.
Once fixed, a mapping automatically happens between the login and the user, along with the permissions.
Note: After creating the login at the mirror server, the login is in DISABLED status. Ensure you enable the login.
Note: In real time, whenever you create any login on the principal, ensure you create the same login on the mirror immediately.
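After a failover, SIDs can mismatch and leave orphaned users; a common fix sketch (database, user and login names are placeholders):
USE SalesDB;
GO
EXEC sp_change_users_login 'Report';          -- list orphaned users
ALTER USER app_user WITH LOGIN = app_user;    -- re-map the user to its login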
>If we take a manual full or t-log backup of the principal database, there is no impact to the mirror server database and mirroring still works.
Process steps:
1. Apply the patch on the witness server instance first; there is no impact to mirroring [the only impact is that failover becomes manual].
2. Once done, patch the mirror server instance, only after the witness is back online.
3. Once the mirror patch is completed, perform a manual FAILOVER to the mirror instance before starting the patch on the principal.
4. Apply the patch on the principal instance and fail back if required after the service pack.
http://blog.sqlauthority.com/2010/04/05/sql-server-2008-introduction-to-snapshot-database-restore-from-snapshot/
Main purpose:
1. We generate a snapshot on the mirror database (restoring status) and can read data from the snapshot on the mirror server, i.e. we cannot update\delete\insert into snapshot database tables because it is read-only.
3. If we want to read recent\concurrent changes from a snapshot, we must always generate a fresh snapshot to read the recent updates.
CREATE DATABASE dbname_snapshot ON
(NAME = logical_data_file_name, FILENAME = 'path\dbname_snapshot.ss')
AS SNAPSHOT OF [DBNAME]
No. You cannot delete the database until you first delete the snapshot database.
Yes possible.
3. Network issue
Advantages:
Automatic failover
Disadvantages:
>There is no direct reporting support from the mirror server, due to the restoring state, unless you use a database SNAPSHOT.
>Multiple mirror servers are not supported. Only 1 mirror server is possible.
For a mirror-configured database, you cannot change the recovery model from Full to Bulk-logged\Simple.
Note: The reason: mirroring supports only the Full recovery model. If you change the recovery model, mirroring is going to fail.
If you need to change it, break the mirroring and then perform the recovery model change. But then you can't configure mirroring again until the model is set back to Full.
16. WHAT ARE THE NEW FEATURES IN SQL 2008 MIRRORING ONWARDS?
> In SQL 2005, if any page is corrupted in the mirror server database then mirroring goes out of sync, and the DBA must fix the issue manually by performing the page recovery process.
>Whereas from SQL 2008 onwards AUTO PAGE RECOVERY is possible: if any page is corrupted on the mirror server, the same copy of the page is fetched from the principal server.
>I/O errors on the principal server may be fixed during the mirroring session.
>I/O errors on the mirror server require the mirroring session to be suspended.
QUORUM:
1. The quorum contains the information about which instance is currently acting as the principal (online) and which instance is acting as the mirror server.
Note: When a witness is included in the mirroring configuration, before performing an automatic failover the witness always reads the QUORUM to know which instance to fail over from.
DYNAMIC MANAGEMENT VIEWS: This concept was introduced in SQL Server 2005.
The main purpose: we can monitor SQL Server without consuming as many hardware resources as DBCC queries do.
SELECT name, type, type_desc FROM sys.system_objects WHERE name LIKE 'dm_%' ORDER BY name
Output:
List of DMV'S
1. sys.dm_db_mirroring_auto_page_repair:
1. Returns a row for every automatic page-repair attempt on any mirrored database on the server instance.
2. This view contains rows for the latest automatic page-repair attempts on a given mirrored database, with a maximum of 100 rows per database.
Columns: database_id, file_id, page_id, error_type, page_status, modification_time
2. sys.dm_db_mirroring_connections:
Columns: connection_id, transport_stream_id, state, state_desc, connect_time, login_time, authentication_method, principal_name, remote_user_name, last_activity_time, is_accept, login_state, login_state_desc, peer_certificate_id, encryption_algorithm, encryption_algorithm_desc, receives_posted, is_receive_flow_controlled, sends_posted, is_send_flow_controlled, total_bytes_sent, total_bytes_received, total_fragments_sent, total_fragments_received, total_sends, total_receives, peer_arbitration_id
Log shipping vs. mirroring:
>Log shipping: if the primary server is down, users cannot connect automatically to the secondary server, as there is no auto client redirect. Mirroring: supports the auto client redirect method.
>Log shipping: monitoring is via jobs or reports. Mirroring: the Database Mirroring Monitor tool is in place for monitoring.
Note: The database mirroring endpoints on both partners must be created with STATE=STARTED.
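A minimal sketch of creating such an endpoint (the endpoint name and port number are assumptions; a witness would use ROLE = WITNESS):
CREATE ENDPOINT Mirroring
    STATE = STARTED
    AS TCP (LISTENER_PORT = 5022)
    FOR DATABASE_MIRRORING (ROLE = PARTNER);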
REPLICATION
What is Replication?
>It is a set of technologies for copying and distributing data and database objects from one database to another and then
synchronizing between databases to maintain consistency. Using replication, you can distribute data to different locations and to
remote or mobile users over local and wide area networks, dial-up connections, wireless connections, and the Internet.
>Merge data from several source databases into one destination database
Replication Terminologies:
1. ARTICLE: a table, schema, column, rows, indexes, views, stored procedures, triggers, etc. can be called an "ARTICLE" in replication.
2. PUBLISHER: where the data comes in from the application and the article is online.
3. SUBSCRIBER: the server\database that receives the replicated article data.
4. DISTRIBUTOR: acts as the intermediary between publisher and subscriber, i.e. it distributes the article data from the publisher to the subscriber(s).
5. LOCAL PUBLICATION: the set of articles included as part of the publisher server is called the "local publication".
6. LOCAL SUBSCRIPTION: the set of articles included as part of the subscriber server is called the "local subscription".
>Data transfer in replication: replication always uses "replication agents" to transfer data.
Replication pre-requisites:
>Verify that there are no differences in system collation settings between the servers.
>Verify that the local windows groups and SQL Server Login definitions are the same on both servers.
>Verify that CLR assemblies deployed on the publisher are also deployed on the subscriber.
>Verify that SQL agent jobs and alerts are present on the subscriber server, if these are required.
>Verify that for the certificates and keys used to access external resources, authentication and encryption match on the
publisher and subscriber server.
TYPES OF REPLICATIONS:
1. SNAPSHOT REPLICATION
2. TRANSACTIONAL REPLICATION
3. TRANSACTIONAL WITH UPDATABLE SUBSCRIPTION [BI-DIRECTIONAL] ---Removed from SQL SERVER 2012
4. MERGE REPLICATION
5. PEER-TO-PEER REPLICATION
>Snapshot replication refers to a replication method between databases. During this process, data is infrequently updated at
specified times by copying data changes from the original database (publisher) to a receiving database (subscriber).
MINIMUM REQUIREMENTS:
ARCHITECTURE:
> A snapshot of the article is taken from the publisher and stored in the snapshot folder on the distributor server by the SNAPSHOT AGENT.
> The distribution agent then collects the snapshot from the snapshot folder and distributes the data to the subscriber, storing it on the subscriber server.
CONFIGURATION STEPS:
1. Configure the distributor: provide the password after adding the publisher instance and specify the snapshot folder path to store snapshots.
2. Configure the publisher instance and select the article table to publish as part of replication.
3. Configure the subscriber.
Key points:
SNAPSHOT background: when you generate a snapshot, the initial schema + data (if any) is taken from the publisher and stored in the snapshot folder.
> The same snapshot is replicated to the subscriber and applied to the subscription article.
1. Snapshot agent [snapshot.exe]:
It prepares the schema and initial data files of the published tables and other objects, and stores the snapshot files.
2. Distribution agent [distrib.exe]:
1. The distribution agent takes the snapshot files from the snapshot folder and applies them to the subscriber.
2. Depending on whether the subscription is push or pull, the distribution agent runs either at the distributor or at the subscriber.
1. .SCH [Schema Script]: contains the script (schema) of the published tables.
2. .BCP [Bulk Copy Program]: contains the data that needs to move to the subscriber.
4. .IDX [Index File]: contains the indexes of the publisher tables, which are applied on the subscriber after creation.
PULL SUBSCRIPTION: in this mode the distribution agent runs at the subscriber and pulls the data from the distributor's snapshot folder.
PUSH SUBSCRIPTION: in this mode the distribution agent runs at the distributor and pushes the data to the subscriber article.
> Publisher and subscriber database names can be different; this is still supported.
Disadvantages:
>Concurrent changes at the publisher are not replicated to the subscriber immediately, only after a new snapshot is generated.
Note: When you configure snapshot replication with multiple subscribers, multiple distribution agents are created.
>Transactional replication is the automated periodic distribution of changes between databases. Data is copied in (or near)
real-time from the primary server (publisher) to the receiving database (subscriber). Thus, transactional replication offers an
excellent backup for frequent, daily database changes.
Transactional replication typically starts with a snapshot of the publication database objects and data. As soon as the initial
snapshot is taken, subsequent data changes and schema modifications made at the Publisher are usually delivered to the
Subscriber as they occur (in near real time)
ARCHITECTURE:
> The initial schema and data are sent from publisher to subscriber using the snapshot agent method.
> Afterwards, concurrent changes are replicated from publisher to subscriber by a new agent created in transactional replication: the log reader agent [logread.exe].
1. Snapshot agent: takes the initial schema and data from the publisher and stores them in the snapshot folder.
CONFIGURATION STEPS:
1. Configure distributor
2. Configure publisher
3. Configure subscriber
2. From there, the distribution agent takes the same snapshot and applies it to the subscriber.
3. After that, any concurrent changes [DML operations etc.] at the publisher table are replicated immediately to the subscriber by the LOG READER AGENT via the distribution agent.
4. The log reader agent collects data from the transaction log file at the publisher and stores it in the distribution database on the distributor. From there the distribution agent picks it up and sends it to the subscriber.
5. Transactional replication is always suitable for critical transactions that happen every second.
> But if you insert data at the subscriber, that data does not flow back to the publisher (it is one-way).
>Generally, in the simple recovery model, the transactions in the VLFs are truncated at checkpoint.
>But with replication, every transaction is first written to the transaction log (per write-ahead logging); replication then MARKS THOSE TRANSACTIONS AS replication transactions, and they are not deleted from the log file on the publisher until they have moved to the distributor, even when a truncate/checkpoint occurs.
Replication Monitor:
In replication, to monitor the synchronization status we use the "REPLICATION MONITOR" tool.
Transactional Replication:
Latency: how much time it takes to send a transaction from publisher to subscriber.
TRACER TOKEN: measures the latency from publisher to distributor, from distributor to subscriber, and the overall total latency.
> Check the "PERFORMANCE" column to know whether replication is working fine or not.
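A tracer token can also be posted from T-SQL, run at the publisher in the publication database; a minimal sketch (the publication name is an assumption):
EXEC sys.sp_posttracertoken @publication = N'Pub_dbAmericasCitrixFarm';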
REPLICATION JOBS:
1. Agent history clean up: distribution: removes replication agent history from the distribution database.
2. Distribution clean up: distribution: removes replicated transactions from the distribution database to control its size.
3. Expired subscription clean up: detects and removes expired subscriptions from the publication database.
4. Reinitialize subscriptions having data validation failures: detects data validation errors and, if there are any, reinitializes the subscriber with a fresh snapshot from the publisher.
Schedule: no schedule.
Note: This job is not recommended to run in business hours, as it causes a performance impact.
5. Replication agents checkup: checks whether the replication agents are running or not.
>Updatable subscriptions for transactional replication allow Subscribers to replicate changes to the Publisher. Triggers are added
to the published tables in the subscription database, and when a change is made at the Subscriber, the trigger fires:
1. When a DML operation is performed at the publisher, the same data is sent to the subscriber using the SNAPSHOT and log reader agents.
2. If you perform any DML operations at the subscriber, the data is sent to the publisher by one of 2 methods:
• For immediate updating subscriptions, the change is propagated directly to the Publisher and applied using Microsoft
Distributed Transaction Coordinator (MSDTC).
• For queued updating subscriptions, the change is first propagated to a queue and then applied to the Publisher by the
Queue Reader Agent.
1. SNAPSHOT AGENT
2. LOG READER AGENT
3. QUEUE READER AGENT
4. DISTRIBUTOR AGENT
CONFIGURATION STEPS
1. Configure distribution
2. Configure publisher.
Changes: - When you configure, the article gets added with new extra column "MSREPL_TRAN_VERSION"
Note: - "MSrepl_tran_version" >this column is used for change tracking and conflict detection.
3. Configure Subscriber:
Changes:-
1. At configuration time you are required to configure a "LINKED SERVER" for connectivity from the subscriber back to the publisher articles.
2. You have the option to select how data is sent from subscriber to publisher: either "MSDTC [IMMEDIATE SYNCHRONISATION]" or "QUEUED READER CHANGES".
Changes @subscriber:
Trigger @ subscriber:-
1. trg_MSsync_del_tab
2. trg_MSsync_ins_tab
3. trg_MSsync_upd_tab
1. Dbo.sp_MSsync_del_tab_1
2. Dbo.sp_MSsync_ins_tab_1
3. Dbo.sp_MSsync_upd_tab_1
Case 1: IF MSDTC (immediate updating)
1. Updates made at the Subscriber are captured by triggers on the subscribing tables.
2. The trigger calls through MSDTC to the appropriate stored procedure at the Publisher.
3. The stored procedure performs the insert, update, or delete unless there is a conflict. If there is a conflict, the change is rolled
back at the Publisher and the Subscriber.
1. Updates made at the Subscriber are captured by triggers on the subscribing tables. The triggers store these updates in
MSreplication_queue.
2. The Queue Reader Agent reads from MSreplication_queue, and then applies queued transactions to the appropriate
publication using replication stored procedures.
3. While applying the queued transactions, conflicts (if any) are detected and resolved according to a conflict resolution policy
that is set when the publication is created.
4. Changes made at the Publisher as a result of changes replicated from a Subscriber are propagated to all other Subscribers
according to the Distribution Agent schedule.
Note: With MSDTC (immediate updating), the transfer is ALWAYS FASTER and the COMMIT happens immediately.
Whereas with queued updating, the data is not sent to the publisher directly; the transaction is first stored in the "MSREPLICATION_QUEUE" table, and from there it is sent to the publisher by the "QUEUE READER AGENT".
1. In merge replication, data can be read or written at any server site, i.e. at the publisher or at any subscriber.
2. In merge replication, data transfer between multiple servers or sites is done by an agent called the "MERGE AGENT", which plays the major role.
Architecture Points:
1. Bi-directional replication is possible [update data at the publisher or at a subscriber, and merge replication keeps the same data everywhere to maintain sync].
2. The rowguid column [uniqueidentifier column] helps with change tracking and conflict detection at both the publication and the subscription.
4. A primary key is not required. The merge agent collects all changes at the publisher and subscribers from all locations, processes them, and then distributes them to all sites to maintain the same data everywhere.
Merge replication, like transactional replication, typically starts with a snapshot of the publication database objects and data. Subsequent data changes and schema modifications made at the Publisher and Subscribers are tracked with triggers.
The Subscriber synchronizes with the Publisher when connected to the network and exchanges all rows that have changed between the Publisher and Subscriber since the last time synchronization occurred.
Merge replication is typically used in server-to-client environments. Merge replication is appropriate in any of the following
situations:
1. Multiple Subscribers might update the same data at various times and propagate those changes to the Publisher and to
other Subscribers.
2. Subscribers need to receive data, make changes offline, and later synchronize changes with the Publisher and other
Subscribers.
3. Each Subscriber requires a different partition of data.
4. Conflicts might occur and, when they do, you need the ability to detect and resolve them.
5. The application requires net data change rather than access to intermediate data states. For example, if a row changes five times at a Subscriber before it synchronizes with the Publisher, the row will change only once at the Publisher to reflect the net data change (that is, the fifth value).
CONFIGURATION STEPS:
Note:
1. In merge replication, the distribution agent/server job is mostly idle, except for storing snapshot history in the distribution database.
2. The merge agent collects all data changes from the different sites and stores them in the distribution database for processing. Once done, it removes them from the distribution database and resends them to the other servers.
2. Configure publisher:
Note: - ROW GUID COLUMN gets added to publisher tables and same replicate to multiple subscribers.
Changes:-
Note: - Below triggers and Store procedures gets created in all servers included as a part of merge replication.
Store Procedures:
dbo.MSmerge_del_sp_80FDDC450D814A6F4A371E97BB9A4ECC
dbo.MSmerge_ins_sp_80FDDC450D814A6F4A371E97BB9A4ECC
dbo.MSmerge_upd_sp_80FDDC450D814A6F4A371E97BB9A4ECC
dbo.MSmerge_sel_sp_80FDDC450D814A6F4A371E97BB9A4ECC
Triggers:
MSmerge_del_EDE22551187F43A2A520C328D819CF51
MSmerge_ins_EDE22551187F43A2A520C328D819CF51
MSmerge_upd_EDE22551187F43A2A520C328D819CF51
Tables:-From these tables merge agent read and gets\collect the data changes.
dbo.MSmerge_genhistory: contains one row for each generation. A generation is a collection of changes that is delivered to a publisher or subscriber. Generations are closed each time the Merge Agent runs; subsequent changes are added to one or more open generations.
The Merge Agent detects conflicts by using the lineage column of the MSmerge_contents system table.
>If column-level tracking is enabled for an article, the COLV1 column is also used. These columns contain metadata about when
a row or column is inserted or updated, and about which nodes in a merge replication topology made changes to the row or
column.
sp_showrowreplicainfo (Transact-SQL):
Displays information about a row in a table that is being used as an article in merge replication
>As the Merge Agent enumerates changes to be applied during synchronization, it compares the metadata for each row at the
Publisher and Subscriber.
>Peer-to-peer replication provides a scale-out and high-availability solution by maintaining copies of data across multiple server
instances, also referred to as nodes. Built on the foundation of transactional replication, peer-to-peer replication propagates
transactionally consistent changes in near real-time.
>This enables applications that require scale-out of read operations to distribute the reads from clients across multiple nodes.
Because data is maintained across the nodes in near real-time, peer-to-peer replication provides data redundancy, which
increases the availability of data.
MINIMUM REQUIREMENTS:
> Enterprise edition of SQL Server is required, minimum version 2005.
> Table schema and table name should be identical between all the nodes.
CONFIGURATION STEPS:
2. Take database full backup and restore into all other nodes with recovery
For 2005\8\8R2:
At the time of configuring the local publication you need to select "TRANSACTIONAL REPLICATION", because PEER TO PEER was only added to the GUI configuration in 2012.
4. Right click > Properties on the local publication > set "PEER TO PEER REPLICATION" to TRUE.
6. Once done, automatically publication and subscriptions gets created in all the nodes.
Feature Restrictions
Advantages:
Disadvantages:
1. Troubleshooting is difficult.
5. End users must also repoint their connections to the subscriber instance manually, i.e. there is no auto client redirect support like in mirroring.
REPLICATION SCENARIOS:
Transaction with updatable subscription: Snapshot agent, distributor agent, log reader, queue reader agent.
Peer to Peer Replication: Snapshot agent, distributor agent, and log reader agent [per each peer or node]
2. HOW TO ADD AND DELETE ARTICLE IN EXISTING REPLICATION? WHAT IS DIFF BETWEEN 2000, 2005 AND 2008 ADDING
ARTICLE PROCESS?
Adding article
2. Take the create and drop scripts at the publisher and subscriber [purpose: like a backup of the existing replication configuration].
4. Reinitialize the snapshot from replication monitor, to replicate the newly added article from publisher to subscriber.
Through query:
Example (a minimal form of sp_addarticle; @source_object is assumed to match the article name):
EXEC sp_addarticle
@publication = N'Pub_dbAmericasCitrixFarm',
@article = N'Table_2',
@source_object = N'Table_2';
Removal of article:
2. Take the scripts in publisher and subscriber for create and drop scripts.
Example:
EXEC sp_dropsubscription
@publication = N'Pub_dbAmericasCitrixFarm',
@article = N'Table_2',
@subscriber = 'FTDCCWPCTRXSQL';
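After the subscription on the article is dropped, the article itself can be removed with sp_droparticle; a sketch, with the same assumed names:
EXEC sp_droparticle
@publication = N'Pub_dbAmericasCitrixFarm',
@article = N'Table_2',
@force_invalidate_snapshot = 1;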
If we perform any schema changes to an existing replicated article, we need to reinitialize the snapshot from the publisher one more time.
4. REMOVAL OF REPLICATION?
Method 1:
Go to replication > "Disable publishing and distribution" option > click next and finish.
This automatically deletes the distribution database. Then go to local publications and delete the publication, and delete the subscription from the local subscriber.
Method 2:
Go to the distribution server > right click > disable publishing and distribution > remove the distribution database.
1. Sys.dm_repl_articles: Returns information about database objects published as articles in a replication topology.
Undistributed commands: displays how many commands or queries are waiting to move from the distribution database to the subscriber, and also how much time it will take to replicate those commands from distributor to subscriber.
Note: Verify\check the history to see whether data is moving from "publisher to distributor" or "distributor to subscriber", and check the "undistributed commands".
Note: Perform regular index maintenance (update stats, reindex) on the replication system tables, just like you do for user tables.
MSmerge_contents
MSmerge_genhistory
MSmerge_tombstone
MSmerge_current_partition_mappings
MSmerge_past_partition_mappings
Merge Replication:
Store Procedures:
dbo.MSmerge_del_sp_80FDDC450D814A6F4A371E97BB9A4ECC
dbo.MSmerge_ins_sp_80FDDC450D814A6F4A371E97BB9A4ECC
dbo.MSmerge_upd_sp_80FDDC450D814A6F4A371E97BB9A4ECC
dbo.MSmerge_sel_sp_80FDDC450D814A6F4A371E97BB9A4ECC
Triggers:
MSmerge_del_EDE22551187F43A2A520C328D819CF51
MSmerge_ins_EDE22551187F43A2A520C328D819CF51
MSmerge_upd_EDE22551187F43A2A520C328D819CF51
Trigger @ subscriber:-
1. Trg_MSsync_del_tab
2. Trg_MSsync_ins_tab
3. Trg_MSsync_upd_tab
1. Dbo.sp_MSsync_del_tab_1
2. Dbo.sp_MSsync_ins_tab_1
3. Dbo.sp_MSsync_upd_tab_1
>There is no impact if we change recovery models, because replication supports all recovery models.
>Yes, we can modify the distribution database, but it affects replication; not recommended.
>TRUNCATE is a minimally logged operation and is not replicated from publisher to subscriber (SQL Server in fact blocks TRUNCATE on published tables).
>At the subscriber you would need to perform the truncate operation again to keep publisher and subscriber in sync.
Error messages:
Violation of PRIMARY KEY constraint 'PK_tablename'. Cannot insert duplicate key in object 'dbo.tablename'. (Source:
MSSQLServer, Error number: 2627)
Replication synchronization fails when this error is reported: the data\transaction is sent from publisher to distributor, but from distributor to subscriber it fails, and the error is reported in the transactional replication monitor.
Where to find:
>Go to replication monitor> All subscriptions column > Status column> double click> check the information from publisher to
distributor and dist> sub
>You can see error of primary key violation in dist to sub history.
Transaction sequence no
Command ID:
Solution:
1. First find which transaction being applied at the subscriber is causing this error, using the command below, executed only against the distribution database (the procedure name was missing in the original notes; these parameters belong to sp_browsereplcmds):
EXEC sp_browsereplcmds
@xact_seqno_start = '0x00000018000000A1000300000000',
@xact_seqno_end = '0x00000018000000A1000300000000';
2. COMMAND with transaction: {CALL [sp_MSins_dboetab] (N'3', N'c', N'ap')}
Yes. Scripting the replication configuration is a key part of any disaster recovery plan for a replication topology
16. WHY DOES REPLICATION ADD A COLUMN TO REPLICATED TABLES; WILL IT BE REMOVED IF THE TABLE ISN'T PUBLISHED?
>Merge replication adds the column rowguid to every table, unless the table already has a column of data type uniqueidentifier with the ROWGUIDCOL property set (in which case this column is used). If the table is dropped from the publication, the rowguid column is removed; if an existing column was used for tracking, the column is not removed.
Rowguid column:added in merge
MSrepl_tran_version: transactional updatable subscription
>Yes. There are no restrictions on the number or types of publications that can use the same distribution database. All
publications from a given Publisher must use the same Distributor and distribution database.
>No. Replication does not encrypt data that is stored in the database or transferred over the network
>In SQL Server replication data transfer happen by using method of “BCP [BULK COPY PROGRAM] - for data, sch –for schema,
Ind-for indexes move”
>Yes, we can modify the distribution database tables, but it is not recommended; it will create impact.
Steps:
>After schema changes to a publisher article, we need to rerun the snapshot agent to generate a new snapshot.
>Then go to the subscriber and reinitialize, to apply the new snapshot to the subscriber article.
Note1: In peer-to-peer replication, schema changes replicate to the subscriber automatically, without a snapshot agent run.
Note2: When we rerun the snapshot, only the modified data changes are replicated to the subscriber.
Comparison of replication types:

Snapshot Replication:
- Agents: snapshot and distribution agents
- One-way replication
- Only one publisher is allowed; multiple subscribers are allowed
- Conflict detection not required
- To add an article, generating a snapshot is mandatory

Transactional Replication:
- Agents: snapshot, distribution and log reader agents
- One-way replication
- Only one publisher is allowed; multiple subscribers are allowed
- Conflict detection not required
- To add an article, generating a snapshot is mandatory

Transactional replication with updatable subscription:
- Agents: snapshot, distribution, queue reader and log reader agents
- Bi-directional
- Only one publisher is allowed; multiple subscribers are allowed
- Updatable replication has rich conflict detection and handling capabilities
- To add an article, generating a snapshot is mandatory

Merge Replication:
- Agents: snapshot, distribution and merge agents
- Bi-directional
- Only one publisher is allowed; multiple subscribers are allowed
- Merge replication has rich conflict detection and handling capabilities
- To add an article, generating a snapshot is mandatory
- It can be set to track column changes instead of row changes; it creates at least one procedure call per row affected

Peer-2-Peer replication:
- Agents: snapshot and distribution agents and log reader agent (each per node)
- Bi-directional
- Multiple publishers are allowed; multiple subscribers are allowed
- Peer-2-peer replication has rich conflict detection and handling capabilities as well, from 2008
- To add an article, generating a snapshot is mandatory
Advantages of Replication
CLUSTERING
Clustering:
>Windows clustering: 2 or more independent systems (referred to as nodes) working for the same purpose, which is to provide continuous high availability for the instance.
Note:
2. Cluster feature is enabled for most of the O\S like win servers 2000, 2003, 2008, 2008 R2, 2012, 2014
Note: IN WINDOWS 2008 ONWARDS THE NAMING CONVENTION CHANGED TO "FAILOVER CLUSTER"
Types of clusters:
1. NETWORK LOAD BALANCING (NLB)
2. COMPONENT LOAD BALANCING (CLB)
3. FAILOVER CLUSTER (MSCS)
4. GEO CLUSTER
5. VERITAS CLUSTER
1. Network Load Balancing acts as a front-end cluster, distributing incoming IP traffic across a cluster of servers.
2. Up to 32 computers running a member of the Windows Server 2003 family can be connected to share a single virtual IP
address.
3. NLB enhances scalability by distributing its client requests across multiple servers within the cluster.
4. As traffic increases, additional servers can be added to the cluster; up to 32 servers are possible in any one cluster.
5. NLB also provides high availability by automatically detecting the failure of a server and repartitioning client traffic among the
remaining servers within 10 seconds, while it provides users with continuous service.
1. Component Load Balancing distributes workload across multiple servers running a site's business logic.
2. It provides for dynamic balancing of COM+ components across a set of up to 8 identical servers
Failover Clustering:
Cluster Service acts as a back-end cluster; it provides high availability for applications such as databases, messaging and file and
print services.
MSCS attempts to minimize the effect of failure on the system as any node (a server in the cluster) fails or is taken offline.
MSCS failover capability is achieved through redundancy across the multiple connected machines in the cluster, each with
independent failure states.
Hardware Information:
Required Software:
2) Windows Server 2012 R2 (with trial license free for 180 days)
4) Three virtual machines: one setup as Domain Controller and DNS Server (DC), the other two as clustered nodes (WIN1 and
WIN2).
Cluster Configuration:
DC
Win1
Win2
DC Server configuration:
Dism /online /enable-feature /featurename:NetFX3 /all /Source:d:\sources\sxs /LimitAccess
testcluster.com
5. Add an extra NIC card for internal node1 and node2 communication.
1. Cluster configuration
2. Inventory
3. Storage
4. Network
5. System configuration
1. Check all ip's are working and pinging between all nodes.
OVERALL WINDOWS CLUSTER IP ADDRESSES FOR A 2-NODE CLUSTER: 7 in total just for Windows.
The Windows team hands the cluster over to the DBA team after the Windows cluster installation, for the SQL Server installation.
• Aside from speed, another great advantage is that a SCSI card can connect 15 or more devices in a daisy chain.
QUORUM Overview:
The database resides in a file named \MSCS\quolog.log. The quorum is sometimes also referred to as the quorum log.
Types of QUORUM:
1. Standard Quorum
2. Majority Node Set (MNS) Quorum
1. Standard Quorum:
>This type is generally used on the WINDOWS SERVER 2003 O\S to configure the Windows cluster.
>If the standard quorum is down, the entire Windows cluster stops working until you reconfigure the quorum. Why? Because the cluster service always reads the node information from \MSCS\quolog.log on the quorum.
>If the quorum is not available, failover is not going to happen.
Note: If the quorum disk goes down on Windows 2008\R2\2012, the cluster still works, because each node maintains a local copy of "quolog.log" (MNS quorum); the cluster service can read from the local copy on each node. This keeps the cluster running even when the quorum is down, from Windows 2008 onwards.
Heart Beat: a very important cluster component that sends UDP packets between nodes every 1.2 seconds.
If the packets fail 5 consecutive times, the cluster service will initiate failover of your applications.
Cluster groups:
ACTIVE-PASSIVE
ACTIVE-ACTIVE
SINGLE NODE CLUSTER [Feature may add another node in same cluster]
>Single instance is new terminology that started from SQL 2008 onwards. A single-instance cluster has only one instance of SQL Server, owned by one node; all other nodes are in a wait state until failover occurs.
>N+1: a multiple-instance cluster where, for example, with 4 nodes, each of three nodes hosts one instance (i.e. 3 instances hosted on three nodes) and the 4th node is maintained as a standby waiting for failover. N+1 means N nodes host SQL Server instances and 1 node is in a waiting state.
>N+M: also a multiple-instance cluster, where N nodes host SQL Server instances and M nodes are in a waiting state.
Cluster Service:
• The cluster service manages all the activity that is specific to the cluster. One instance of the cluster service runs on
each node in the cluster. The cluster service does the following
Resource:
RESOURCES IN CLUSTER:
Disks
QUORUM
MSDTC
PUBLIC IP
PRIVATE IP
MSDTC: used by SQL Server and other applications when they want to make a distributed transaction across more than one machine. A distributed transaction is simply a transaction that spans two or more machines. The basic concept: machine 1 starts a transaction and does some work; it then connects to machine 2 and does some work; if the work on machine 2 fails and is cancelled, the work on machine 1 must then be rolled back.
RESOURCE STATE:
Offline
Offline_Pending
Online
Online_Pending
Failed
CLUSTERING TERMS:
Cluster Nodes:
• A cluster node is a server within a cluster group. A cluster node can be Active or it can be Passive as per SQL Server
Instance installation.
Heartbeat:
• Heartbeats are single User Datagram Protocol (UDP) packets exchanged between nodes once every 1.2 seconds to
confirm that each node is still available. If a node is absent for five consecutive heartbeats, the node that detected the
absence initiates a regroup event to make sure that all nodes reach agreement on the list of nodes that remain
available.
Private Network:
• The Private Network is available among cluster nodes only. Every node has a Private Network IP address, which can be pinged from one node to another. This is used to check the heartbeat between the two nodes.
Public Network:
• The Public Network is available for external connections. Every node will have a Public Network IP address, which can
be connected from any client within the network.
• A shared disk array is a collection of storage disks that is being accessed by the cluster. This could be SAN or SCSI RAIDs.
• Windows Clustering supports shared nothing disk arrays. Any one node can own a disk resource at any given time. All
other nodes will not be allowed to access it until they own the resource (Ownership change occurs during failover).
Quorum Drive
• This is a physical drive assigned on the shared disk array specifically for Windows Clustering. Clustering services write
constantly on this drive about the state of the cluster. Corruption or failure of this drive can fail the entire cluster setup.
Cluster Name
• This name refers to Virtual Cluster Name, not the physical node names or the Virtual SQL Server names. It is assigned
to the cluster as a whole.
Cluster IP Address
• This IP address refers to the address which all external connections use to reach to the active cluster node.
• This account must be configured at the domain level, with administrator privileges on all nodes within the cluster
group. This account is used to administer the failover cluster.
• This includes any services, software, or hardware that can be configured within a cluster. Ex: Generic Application,
Service, Internet Protocol, Network Name, Physical Disk.
Cluster Group
• Conceptually, a cluster group is a collection of logically grouped cluster resources. It may contain cluster-aware
application services, such as SQL Server 2000, 2005, 2008.
• This is the SQL Server Instance name that all client applications will use to connect to the SQL Server.
• This refers to the TCP/IP address that all client applications will use to connect to SQL Server; the Virtual Server IP address.
• Certain SQL Server Components require MS DTC to be up and running. MS DTC is shared for all named / default
instances in cluster group.
• This is the SQL Server service account, and it must follow all the rules that apply to SQL Service user accounts in a
non-clustered environment.
4. Component selection
5. VIRTUAL SERVER NAME [extra screen in cluster setup] -- application and end-user connectivity to SQL Server
6. VIRTUAL SERVER IP ADDRESS [extra screen in cluster setup] -- application and end-user connectivity to SQL Server
9. Domain user and group account should require for SQL Server services.
2. Virtual name
3. Virtual ip
8. 29 configuration checks
> Network ip
DIFFERENCES BETWEEN SQL 2005 CLUSTER AND SQL 2008\8 R2\12\14 CLUSTER:
SQL 2005:
2. Start the servers+tools installation on the active node; only the SERVER components replicate automatically to the passive node [DATABASE ENGINE SERVICE + ANALYSIS SERVICE + SQL AGENT + SQL FULL TEXT] - the cluster-aware services.
3. The SQL installation first completes on the passive node and then completes on the active node, for the cluster-aware services/components only.
Note: The installation replicates automatically using the domain-level admin account and password.
Extra screens:
Virtual name
Virtual ip
2. Start server and tools on the active node; no changes are replicated to the passive node.
3. Once the active node installation is completed, go to the passive node manually > re-run the setup > "ADD NODE" option > then install servers+tools again.
One more set of binary files gets created on node 2 as well.
Extra screens:
Network name
Networkip
Note:
Tools:
Reporting services
Integration services
Notification services.
Note: We can only see cluster-aware services in cluadmin, not cluster-unaware services.
CLUSTER SCENARIOS:
Moving the SQL Server resource from the active node to the passive node is called "FAILOVER".
2 Ways we have
Method 1:
Go to cluadmin.msc > Services and Applications > SQL Server > right click > move application or services from NODE AAA TO NODE BBB (or best possible node / select node).
>Then all the SQL Server resources automatically fail over to the other node.
The cluster resources are automatically brought online on the other node, as follows.
2. Automatic failover: if a node is down, the SQL and disk resources automatically move to another node.
Note: Any kind of failover requires instance-level downtime,
i.e. the SQL Server services are taken offline on the current node and then come online on the other node.
Active node:
Passive node:
3. CLUSTER MONITORING?
Ex: checking SQL Server SERVICES, FAILOVER, FAILBACK, OFFLINE, ONLINE of the SQL Server resource... etc.
Cluster Monitoring:
LooksAlive check (called the basic resource health check): verifies that SQL Server is running on the current node.
If the LooksAlive check fails, the Windows cluster performs an IsAlive check.
IsAlive check (called the thorough resource health check): runs every 60 seconds and verifies the instance is up and running, using the command SELECT @@SERVERNAME from the resource DLL.
Quorum
MSDTC
IS\LOOKS ALIVE checks run > use the administrator account > write into the quorum > the cluster service reads from the quorum > triggers automatic failover of the applications if required.
3. Once the passive node is up, go to the active node and start applying the service pack for server + tools. The same service pack automatically replicates to the node 2 binaries (passive node).
Note: Until the service pack completes, the SQL Server databases are not accessible to the application, so downtime is longer.
> If the service pack fails on the active node, it automatically fails on the passive node too.
> The SP always applies to the node 1 binary files and node 2 binary files automatically.
1. Always apply the service pack on the passive node first, for the server + tools binary files. Meanwhile the application can still access the active node database and run the business.
3. Once the node is up, go to the active node > perform a manual failover of the SQL resource to the passive node.
4. Then apply the service pack on the previous active node for the servers + tools binary files.
Note:
> Very little downtime is required for maintenance activities.
> Risk is very low: if the SP fails on the passive node, there is no impact on the other node, and the application can still run the business.
Note: FROM Active NODE DO the failover of SQL Server resources to passive node.
Note: From Sql server 2008 with minimal down time (only for restart) can apply the patch.
ACTIVE-ACTIVE:
> If it is an active-active cluster, first perform a manual failover so that one node becomes passive. Then apply the patch on that passive node, fail back, and patch the other (now passive) node.
1. Failover multiple instance to node1 and now node 2 become passive node.
Go to cluadmin> Services and Applications> Right click > check nodes> apply
>Preferred nodes are an attribute of a resource group which is outside of SQL Server and is a Windows Resource.
>You can always find out which node owns the resource by running:
SELECT SERVERPROPERTY('ComputerNamePhysicalNetBIOS');
Example:
Then node2 has to bear the load of both the NODE 1 SBI and NODE 2 ICICI instances.
End users may face a performance impact. How do we avoid this case?
The goal: the application still gets good performance even when 2 instances run on a single node; see the sketch below.
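A common way to do that is to cap "max server memory" on each instance so that both instances fit within one node's RAM; a sketch (the 16384 MB value is an assumption, size it to the node):
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max server memory (MB)', 16384;
RECONFIGURE;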
Go to registry > HKEY_LOCAL_MACHINE > SYSTEM > CurrentControlSet > Control > Session Manager > "PendingFileRenameOperations" > open the value and clear the data.
Note: Registry can be backup and restore by using IMPORT \EXPORT methods at windows level
C:\windows\cluster\clusterlog.txt
>After the Windows/SAN team adds the disk in disk management, the DBA team checks whether the disk is visible in My Computer.
> The DBA team then needs to make the added disk a dependency of the SQL Server service.
> Add the storage to the SQL Server group and then perform the step below.
Note (SQL 2005): adding a dependency disk is a completely offline operation, i.e. the SQL Server services must be taken offline; downtime is required.
Go to cluster administrator > SQL Server resource > Properties > Dependencies tab > Modify > add disk... > click OK.
Note (SQL 2008 onwards): adding a dependency disk is a completely online operation, i.e. there is no need to take the SQL Server service offline; no downtime is required.
Go to cluster administrator> go to Sql server main server> go to properties>go to dependency tab> modify> add disk...> click ok
10. DIFFERENCES BETWEEN WINDOWS SERVER 2003\SQL 2005, WINDOWS SERVER 2008\R2\SQL 2008\R2, AND WINDOWS SERVER 2012\SQL SERVER 2012

Number of nodes:
- Windows Server 2003\SQL 2005: 8
- Windows Server 2008\2008 R2\SQL 2008\R2: 16
- Windows Server 2012: 64

Dependency disk:
- Windows Server 2003\SQL 2005: downtime required; need to restart the SQL Server services.
- Windows Server 2008\R2 and Windows Server 2012: online operation; no downtime required.

Policies:
- In a SQL Server 2005 cluster, if any resource went offline, we had to bring it online manually.
- From SQL Server 2008 onwards, if any resource goes offline, the cluster tries to bring the resource back online within a default 15-minute interval.
LOCKS
Locks:
>Locking is used to hold a specific object (table, database, page, row, instance, extent, key, etc.) in SQL Server.
> It provides consistent (correct) data to the end user.
Note: Locks are managed internally by the lock manager, which decides, depending on the transaction, which lock to apply.
LOCK RESOURCES:
ROW LEVEL: a row identifier (RID) is used to lock a single row within a table.
DATABASE LEVEL: the whole database.
KEY LEVEL: a row lock within an index, used to protect key ranges in serializable transactions.
SP_LOCK
OR
Output:
Request type
Request Session id
SP_WHO2
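From SQL 2005 onwards, the same lock information is also available through a DMV; a minimal sketch:
SELECT request_session_id, resource_type, resource_database_id, request_mode, request_status
FROM sys.dm_tran_locks;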
TYPES OF LOCKS:
1. Shared lock [S]: multiple users can read the data on a specific resource; no transaction or query needs to wait.
>When the read starts, the lock manager internally applies a shared lock, and once reading completes the lock is released automatically.
2. Exclusive lock [X]: when we perform insert, update or delete operations, an exclusive lock (X) is placed on the resource.
>The lock manager always gives priority to DML operations over select queries.
3. Update lock [U]: placed whenever we perform update operations.
>An update lock is usually converted to an exclusive lock (X) by the lock manager.
4. Schema lock (SCH-L): raised by the lock manager when schema-level operations run on a table.
5. Bulk update [BU]: generally placed by the lock manager when there are bulk transactions.
Lock escalation (2 types):
>Instead of many row-level locks, a single table-level lock is better: escalating the lock reduces the number of locks held and improves performance.
>SQL Server supports escalating locks to the table level. Locks can only be escalated from rows to the table, or from pages to the table level.
Note: Locking in SQL Server is maintained by the lock manager; users and DBAs do not directly control the locking system.
BLOCKINGS
Blockings:
>Blocking occurs when one SPID holds a lock on a specific resource and a second SPID attempts to acquire a conflicting lock type
on the same resource.
Method 1: SP_WHO2
Method 4: Go to the database > right click > Reports > Standard Reports > All Blocking Sessions
Solution:
1. Check which SPID is causing the blocking in the "BLKBY" column and find the related queries.
2. Share the SPID and query information with the application team and ask for their confirmation before killing one of the SPIDs to resolve the blocking.
3. Once the apps team confirms, proceed with killing the SPID. Make sure the apps team confirms by email, not just verbally.
KILL <SPID>   -- e.g. KILL 54
5. Ask the apps team to check the query status, and keep monitoring from the DBA side.
- The application runs slowly and application-side queries take longer.
- If blocking runs for a long time it can bring SQL Server down. Inform the apps team ASAP after monitoring, and kill after confirmation.
Long-running queries.
Note: Blocking information is never stored in the error log or event viewer; we have to capture it while the blocking is running.
Blocking demo (sketch):
Create table A.
Session 54: open a transaction that modifies table A and leave it open.
Session 56: query table A - it gets blocked.
Session 58: query table A - it also gets blocked.
DBCC INPUTBUFFER(54)   -- shows the statement the blocking session is running
sp_lock                -- shows the locks held
Then inform the app team which query is the important one, and based on that kill the right session.
DEADLOCKS
Deadlock: A deadlock occurs when two or more tasks permanently block each other by each task having a lock on a resource
which the other tasks are trying to lock.
Error message: Transaction (Process ID 55) was deadlocked on lock resources with another process and has been chosen as the deadlock victim. Rerun the transaction.
Impact: a deadlock affects user queries\connections and application responsiveness, and SQL Server may go down if deadlocks run for a long time.
How to find?
Note: By default, deadlock information is never captured in the SQL Server error log or event viewer, and you cannot find it using any query or DMVs.
We have to enable trace or run the profiler to capture the victim transactions for deadlocks
Method: 1
Enable DBCC TRACEON (1222, 1204, -1); SQL Server then automatically captures the deadlock transactions in the SQL Server error logs.
-1: global trace; the flags apply to all sessions and databases.
DBCC TRACESTATUS
Method 2:
Run a Profiler trace with the deadlock graph event (requires sysadmin).
Once running, the deadlock transactions are automatically captured in the Profiler tool in a graph representation.
Solution: Deadlocks are always resolved automatically by the deadlock monitor; the "DEADLOCK_PRIORITY" setting determines which SPID is chosen to be killed.
The final information is passed to the deadlock monitor, which kills the victim transaction to resolve the deadlock.
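A session can volunteer itself as the victim by lowering its priority; a minimal sketch:
SET DEADLOCK_PRIORITY LOW;  -- this session is now preferred as the deadlock victim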
3. Avoid cursors.
INDEXES
Index
> An index is a collection of pages associated with a table (or view) used to speed retrieval of rows from the table or enforce
uniqueness
Types of indexes:
1. HEAP TABLE
>An index can be an unnecessary storage and maintenance overhead. (Eg: Audit Log)
>Using a table scan to find data can be quicker than maintaining and using an index. (Eg: less rows)
>Frequently rows are added, deleted, and updated. The overhead of index maintenance can be more costly than the benefits.
>Contain predominantly duplicate data rows. A table scan can be quicker than an index lookup.
2. CLUSTER INDEX:
> A poorly-chosen clustered index can lead to high execution times and wasted storage space.
> One clustered per Table. As only one physical way to store data.
Note:
>In cluster index data always stores in databases which are at leaf level.
>Index data always stores at root and intermediate levels where it contains only index pages.
Note:
>A primary key by default creates a clustered index, but creating a clustered index never creates a primary key on the table.
Columns:
Ind id
Index name
Syntax: CREATE CLUSTERED INDEX index_name ON table_name ([COLUMN NAME] ASC);
Note: Only one clustered index can be created per table, because the data can be physically arranged in only one order; multiple physical orders are not possible.
2. 6-byte page pointer [address reference at the bottom level of the page, pointing to the next page]
Limitations:
>To return a range of values by using operators (BETWEEN, >, >=, <, and <=). Once first row is found subsequent indexed values
will be physically adjacent.
>The data values in the rows are already sorted. This improves query performance.
>Changes to a clustered index mean that the entire row of data must be moved to keep the data values of a row in physical
order.
>In a non-clustered index, the data pages stay in their storage location instead of at the index leaf level.
> Only pointer addresses are maintained at the leaf level; index pages are created at the root, intermediate and leaf levels.
> With this index type, performance is not as fast as a clustered index, due to the multiple levels of search.
Per table:
Syntax:
CREATE NONCLUSTERED INDEX index_name ON table_name ([COLUMN NAME] ASC);
1. Only one clustered index per table, where as you can have more than one non cluster index
2. A clustered index is faster than a non-clustered index, because a non-clustered index has to refer back to the table if the selected column is not present in the index.
3. Clustered index determines the storage order of rows in the table, and hence doesn't require additional disk space, but
whereas a Non Clustered index is stored separately from the table, additional storage space is required.
3. UNIQUE INDEX
>This is the type of index that does not allow duplicate values in the indexed column.
>This type of index allows a single NULL value, but not multiple NULLs.
Syntax: CREATE UNIQUE NONCLUSTERED INDEX index_name ON table_name ([COLUMN NAME] ASC);
Note: A unique index can be created as a clustered index as well.
4. COMPOSITE INDEX
When you include more than a column at the time of index creation called as COMPOSITE index
Or
Can configure single index more than one column (Multiple) called as composite index
Note: Reduces number of index creation and storage space. Improve performance as well.
5. COVERING INDEX
> If an index includes all the columns that a query references, it is called a "COVERING index": the query can be answered from the index alone.
> All the referenced columns must be part of the index; if one column is missing, it is merely a composite index, not a covering one.
> It reduces disk I\O contention and improves query performance (see the sketch below).
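A sketch of a covering index using INCLUDE columns (the table and column names are hypothetical):
CREATE NONCLUSTERED INDEX IX_Orders_Customer_Cover
ON dbo.Orders (CustomerID)
INCLUDE (OrderDate, Amount);  -- a query selecting these columns by CustomerID is answered from the index alone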
>Partitioning can improve performance by distributing data across files, reduce contention, and increase parallel IO operation.
> This type of index can use when any column data type as a "XML" type.
> This type of index improve query performance 50 times faster when retrieving any XML data from the tables.
> A filtered index applies to a subset of the rows in a table, defined by a filter predicate.
> It reduces storage cost, improves query performance, and reduces maintenance cost; maintenance is only needed when the index is altered or rebuilt.
> NULL values are allowed in the filter; only NON-CLUSTERED FILTERED indexes can be created.
> Up to 249 spatial indexes can be created on a table that has a spatial column.
> Location-aware devices and services like GPS/online mapping need spatial data support to manage location-aware data.
2. Vertipaq – a New Microsoft tech – different way to store columns and compress data in index.
3. In regular index, all indexed data from each row kept in a single page. And columns of different rows spread across.
4. In column store index, data from each column are kept together in same page.
> The SQL Server in-memory columnstore index stores and manages data by using column-based data storage and column-based query processing.
> Column store indexes work well for data warehousing workloads that primarily perform bulk loads and read-only queries.
> Use the column store index to achieve up to 10x query performance gains over traditional row-oriented storage, and up to 7x
data compression over the uncompressed data size.
Index Pub:
FILL FACTOR:
>Fill factor tells what percentage of each leaf-level index page is filled with index data.
> The rest is reserved free space on each leaf-level page, used for future growth of the data or index in the table; this reduces page splits.
Assign fill factor: right click on the server > Properties > Database Settings > default index fill factor > provide the value.
Note: A common recommendation is to set the fill factor to 80%, with the remaining 20% kept free for index growth/maintenance.
PAD INDEX:
>PAD_INDEX depends on the fill factor percentage: it applies the same fill percentage to the intermediate-level index pages.
>By default there is no value; it always depends on the fill factor parameter.
Note: Without setting a fill factor, PAD_INDEX cannot be enabled (it has no percentage to apply).
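A sketch of applying both settings when rebuilding an index (the index and table names are hypothetical):
ALTER INDEX IX_Orders_Customer_Cover ON dbo.Orders
REBUILD WITH (FILLFACTOR = 80, PAD_INDEX = ON);  -- 80% full leaf pages, same padding on intermediate pages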
INDEX SCAN: scans all the rows in the table, and can be considered a table scan; it generally takes a long time to retrieve data.
INDEX SEEK: searches only the qualifying rows; it is always faster than an index scan.
Pages get fragmented due to heavy insert\update\delete activity.
FRAGMENTATION
SELECT * FROM sys.dm_db_index_physical_stats (DB_ID('DB NAME'), OBJECT_ID('tablename'), NULL, NULL, 'DETAILED')
avg_fragmentation_in_percent:
>0 to <5: indexes are good, no fragmentation; no action required.
>5 to 30: moderate fragmentation; REORGANIZE the index.
>30 and above: heavy fragmentation; REBUILD the index.
Fragmented Index:
FRAGMENTATION TYPES
1. Internal fragmentation: index pages take more pages than needed; data is stored in the pages in an improper (partially empty) order.
2. External (logical) fragmentation: the logical order of the pages does not match their physical order.
REORGANIZE: the data is internally rearranged inside the data pages; existing indexes are not affected.
REBUILD: the existing indexes are dropped and recreated with the same name on the same columns; during recreation the data automatically gets rearranged.
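Minimal sketches of both operations (the table name is hypothetical; ONLINE rebuild needs Enterprise edition):
ALTER INDEX ALL ON dbo.Orders REORGANIZE;
ALTER INDEX ALL ON dbo.Orders REBUILD WITH (ONLINE = ON);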
Note: When you create\rebuild\reorganize an index, the sort work can be placed in the TEMPDB system database; if the tempdb disk does not have sufficient space, the index operation may fail.
Note: Up to SQL Server 2000, rebuilding an index required downtime, because the table could not be accessed by end users [OFFLINE INDEXING].
From later versions onwards, the table can still be accessed by end users while the rebuild index is running [ONLINE INDEXING].
Note: Rebuild/reorganize sort work is often performed in tempdb; up to ~120% of the table size may be required as free space, depending on the table.
How to find:
UNUSED INDEXES: if indexes are not used they should be dropped, as indexes slow down INSERT/UPDATE statements; indexes are only useful for SELECT statements.
Whether there are unused indexes in SQL Server can be determined with a DMV (see the query below).
Note: Unused indexes cost performance without giving anything back; drop the unused indexes, and recreate an index only if it turns out to be needed.
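A sketch of checking index usage in the current database with sys.dm_db_index_usage_stats (indexes with no seeks/scans/lookups but many updates are drop candidates):
SELECT OBJECT_NAME(s.object_id) AS table_name, i.name AS index_name,
       s.user_seeks, s.user_scans, s.user_lookups, s.user_updates
FROM sys.dm_db_index_usage_stats AS s
JOIN sys.indexes AS i ON i.object_id = s.object_id AND i.index_id = s.index_id
WHERE s.database_id = DB_ID();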
ISOLATION LEVELS
>Isolation levels in SQL Server control the way locking works between transactions.
1. READ UNCOMMITTED
2. READ COMMITTED [THE DEFAULT]
3. REPEATABLE READ
4. SERIALIZABLE
5. SNAPSHOT [SQL 2005 NEW FEATURE]
>Before I run through each of these in detail you may want to create a new database to run the examples, run the following
script on the new database to create the sample data.
Note: You'll also want to drop the IsolationTests table and re-run this script before each example to reset the data.
CREATE TABLE IsolationTests (Id INT IDENTITY, Col1 INT, Col2 INT, Col3 INT)
INSERT INTO IsolationTests (Col1, Col2, Col3)
SELECT 1,2,3  -- run the insert several times to load a few sample rows (column names assumed from the sample values)
1.Dirty Reads – This is when you read uncommitted data, when doing this there is no guarantee that data read will ever be
committed meaning the data could well be bad.
2. Phantom Reads – This is when data that you are working with has been changed by another transaction since you first read it
in. This means subsequent reads of this data in the same transaction could well be different.
1. READ UNCOMMITTED
>This is the lowest isolation level there is. Read uncommitted causes no shared locks to be requested which allows you to read
data that is currently being modified in other transactions. It also allows other transactions to modify data that you are reading.
>As you can probably imagine this can cause some unexpected results in a variety of different ways. For example data returned
by the select could be in a half way state if an update was running in another transaction causing some of your rows to come
back with the updated values and some not to.
>To see read uncommitted in action let’s run Query1 in one tab of Management Studio and then quickly run Query2 in another
tab before Query1 completes.
Query1
BEGIN TRAN
UPDATE IsolationTests SET Col1 = 2  -- sample modification; the exact statement is illustrative
WAITFOR DELAY '00:00:10'            -- leaves time to run Query2 before the rollback
ROLLBACK
Query2
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED
SELECT * FROM IsolationTests
>Notice that Query2 will not wait for Query1 to finish, also more importantly Query2 returns dirty data. Remember Query1 rolls
back all its changes however Query2 has returned the data anyway, this is because it didn’t wait for all the other transactions
with exclusive locks on this data it just returned what was there at the time.
>There is a syntactic shortcut for querying data using the read uncommitted isolation level by using the NOLOCK table hint. You
could change the above Query2 to look like this and it would do the exact same thing.
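A sketch of that NOLOCK shortcut:
SELECT * FROM IsolationTests WITH (NOLOCK)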
>This is the default isolation level and means selects will only return committed data. Select statements will issue shared lock
requests against data you’re querying this causes you to wait if another transaction already has an exclusive lock on that data.
Once you have your shared lock any other transactions trying to modify that data will request an exclusive lock and be made to
wait until your Read Committed transaction finishes.
>You can see an example of a read transaction waiting for a modify transaction to complete before returning the data by
running the following Queries in separate tabs as you did with Read Uncommitted.
Query1
BEGIN TRAN
UPDATE IsolationTests SET Col1 = 2  -- sample modification; the exact statement is illustrative
WAITFOR DELAY '00:00:10'
ROLLBACK
Query2
SELECT * FROM IsolationTests
>Notice how Query2 waited for the first transaction to complete before returning and also how the data returned is the data we
started off with as Query1 did a rollback. The reason no isolation level was specified is because Read Committed is the default
isolation level for SQL Server. If you want to check what isolation level you are running under you can run “DBCC useroptions”.
Remember isolation levels are Connection/Transaction specific so different queries on the same database are often run under
different isolation levels.
3. REPEATABLE READ
> This is similar to Read Committed but with the additional guarantee that if you issue the same select twice in a transaction you
will get the same results both times. It does this by holding on to the shared locks it obtains on the records it reads until the end
of the transaction, this means any transactions that try to modify these records are force to wait for the read transaction to
complete.
> As before run Query1 then while its running run Query2
Query1
SET TRANSACTION ISOLATION LEVEL REPEATABLE READ
BEGIN TRAN
SELECT * FROM IsolationTests
WAITFOR DELAY '00:00:10'  -- run Query2 during this delay
SELECT * FROM IsolationTests
ROLLBACK
Query2
UPDATE IsolationTests SET Col1 = -1  -- sample modification; the value is illustrative
> Notice that Query1 returns the same data for both selects, even though you ran a query to modify the data before the second select ran. This is because the update query was forced to wait for Query1 to finish, due to the shared locks that were held because you specified Repeatable Read.
> If you rerun the above Queries but change Query1 to Read Committed you will notice the two selects return different data
and that Query2 does not wait for Query1 to finish.
Note:
> One last thing to know about Repeatable Read is that the data can change between 2 queries if more records are added.
Repeatable Read guarantees records queried by a previous select will not be changed or deleted, it does not stop new records
being inserted so it is still very possible to get Phantom Reads at this isolation level.
4. SERIALIZABLE
> This isolation level takes Repeatable Read and adds the guarantee that no new data will be added, removing the chance of getting phantom reads. It does this by placing range locks on the queried data. This causes any other transaction trying to modify or insert data touched by this transaction to wait until it has finished.
> You know the drill by now; run these queries side by side…
Query1
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE
BEGIN TRAN
SELECT * FROM IsolationTests
WAITFOR DELAY '00:00:10'
SELECT * FROM IsolationTests
ROLLBACK
Query2
INSERT INTO IsolationTests (Col1, Col2, Col3)
VALUES (100,100,100)
> You'll see that the insert in Query2 waits for Query1 to complete before it runs, removing the chance of a phantom read. If you change the isolation level in Query1 to Repeatable Read, you'll see the insert is no longer blocked and the two select statements in Query1 return a different number of rows.
Serializable: phantom reads are not possible, because transactions effectively execute serially.
5. SNAPSHOT
>The difference is more in the way it works: using snapshot doesn't block other queries from inserting or updating the data touched by the snapshot transaction. Instead, it creates its own snapshot of the data being read at that time; if you then read that data again in the same transaction, it reads it from its snapshot. This means that even if another transaction has made changes, you will always get the same results as you did the first time you read the data. To use the snapshot isolation level you need to enable it on the database by running the following command:
ALTER DATABASE <DatabaseName> SET ALLOW_SNAPSHOT_ISOLATION ON
Note:
Advantages of Snapshot:
No blockings
No deadlocks
>After the isolation level is set to SNAPSHOT, the row versions are kept in the version store in TEMPDB; SELECT queries read the versioned rows from tempdb, while inserts/updates/deletes are performed on the actual table in the user database.
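A minimal sketch of a snapshot transaction once the database option is enabled, using the same hypothetical IsolationTests table:
SET TRANSACTION ISOLATION LEVEL SNAPSHOT
BEGIN TRAN
SELECT * FROM IsolationTests -- rows are versioned in tempdb from here on
-- another session can update the table here without blocking this one
SELECT * FROM IsolationTests -- returns the same rows as the first select
ROLLBACK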
SWITCHES
/3GB SWITCH:
>By default SQL Server can utilize a maximum of 2 GB of memory on a 32-bit O/S.
>If SQL Server needs more than 2 GB [max 3 GB], it is recommended to enable the /3GB switch in the BOOT.INI file.
C:\BOOT.INI
>Notepad will open; append /3GB at the end of the boot entry and save the file. After that, SQL Server can consume a maximum of 3 GB.
If RAM <= 4 GB: /3GB
If RAM is between 4 GB and 16 GB: /3GB /PAE
If RAM > 16 GB: /PAE only
Note: After enabling the PAE switch at the O/S level, SQL Server can go and utilize the entire memory from the O/S. Because of this there is a chance of running the server out of memory, which can lead to a server hang. It is therefore required to control, restrict, or cap memory at the SQL Server level by using the SQL Server switch "AWE".
Note: As there is a risk of SQL Server consuming memory needed by the OS, it is always recommended to set a "MAX SERVER MEMORY" restriction.
Note: In 64-bit O/S no need to enable any switches to use memory by SQL Server applications.
Note:
>On 32-bit Windows and SQL Server, you need to enable the AWE and PAE switches for SQL Server to use more memory.
>On a 64-bit O/S there is no need to enable the AWE or PAE switch; we can directly set the MIN and MAX memory values at the SQL Server level.
SQL 2005 or LOWER VERSIONS: After setting AWE with min and max memory, only the SQL Server services need to be restarted.
SQL 2008 ONWARDS: No SQL Server service restart is needed after setting min and max memory.
32-bit vs 64-bit:
– Data transfer: 32-bit transfers up to 32 bits at a time; 64-bit gives faster (double) performance, up to 64 bits at a time.
– Maximum RAM supported: 4 GB on 32-bit; 7–8 TB on 64-bit.
– Memory registers: fewer allocated on 32-bit; more allocated on 64-bit.
– OS reserved portions: 32-bit reserves less space compared to 64-bit; 64-bit reserves more.
– Memory mapped files: more difficult to implement on 32-bit; easy on 64-bit.
– Cost: 32-bit is less expensive; 64-bit is more expensive.
– Security: 64-bit offers special enhanced security features like Kernel Patch Protection and support for hardware-backed data execution protection.
– Device drivers: 32-bit has more compatible device drivers; some drivers are incompatible on 64-bit.
LOCK PAGES IN MEMORY
> It is a Windows policy that determines which accounts can use a process to keep data in physical memory, preventing the system from paging the data to virtual memory on disk.
Note: SQL Server always gives better performance when it reads or writes from memory instead of disk. To control I/O against the disk, it is better to add the SQL Server service account under the following group policy.
GPEDIT.MSC -> Computer Configuration -> Windows Settings -> Security Settings -> Local Policies -> User Rights Assignment -> Lock Pages in Memory -> add the service account to allow it to use physical memory.
> On 32-bit it is used to extend memory access beyond 4 GB together with the AWE feature. On 64-bit systems it is used to possibly gain performance and to "lock pages" for the buffer pool.
DAC [DEDICATED ADMIN CONNECTION]
• The DAC allows an administrator to access a running instance of the SQL Server Database Engine during an emergency, to troubleshoot problems on the server, even when the server is unresponsive to other client connections.
• The DAC is available through the sqlcmd utility and SQL Server Management Studio [query window].
• By default DAC is enabled to connect through local connection. That means you can connect to the local instance of
SQL server using DAC without making any changes.
• To connect the SQL server using DAC from remote machine using TCP/IP, we have to enable the 'remote admin
connections’ using sp_configure. We can enable the remote DAC connection using the below query.
sp_configure: This procedure allows you to change internal configurations of SQL Server, e.g. memory, DAC, query timeout, etc.
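A sketch of the query referred to above, using standard sp_configure syntax:
EXEC sp_configure 'remote admin connections', 1
RECONFIGURE
GO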
Note1:
• By default SQL Server listens for DAC connections on port number 1434. If port 1434 is not available, SQL Server dynamically assigns a port number during startup, and this can be found in the SQL Server error log.
• DAC only allows you to connect through a query window, not the SSMS Object Explorer.
>> By SQLCMD (the -A flag requests a dedicated admin connection):
sqlcmd -S <servername> -U <login> -P <password> -A
-S: Server name
-U: Login
-P: Password
-A: Admin (DAC) connection
Note:If the DAC is already in use, the connection will fail with an error indicating it cannot connect.
DAC Limitations:
• Only one DAC connection is allowed per instance. If a DAC connection is already open, new connection request will be
denied with error 10053.
• You can’t connect SSMS object explorer using the DAC connection, but you can connect a query analyser window.
• SQL server prohibits running parallel queries or commands on DAC connection. Error 3637 is generated if you try to
perform a backup or restore operation.
• Only login with Sysadmin rights can establish the DAC connections.
• Cannot run parallel queries in SQL Server instance as only 1 DAC connection is allowed.
• SQL Server services are not possible to restart by using DAC query analyzer.
By default DAC always uses port number 1434, but if 1434 is not enabled on the server then SQL uses a dynamic port at the time of SQL Server startup.
• Another method is to make a dedicated admin connection to the O/S by using mstsc /admin.
At most, any server allows only 2 RDP connections. If you need to connect for an emergency, you can use the above method without disconnecting any users.
CDC [CHANGE DATA CAPTURE]
1. Microsoft SQL Server 2008 introduced a very useful feature for logging DML changes.
2. Change data capture provides information about DML changes on a table and a database.
3. Change data capture records the insert, update, and delete activity that is applied to a SQL Server table and makes a record available of what changed, where, and when, in simple relational 'change tables'.
4. CDC also stores historical data and COLUMN-level changes in SQL Server.
5. Change data capture is available only in the Enterprise, Developer, and Evaluation editions of SQL Server.
How it works:
1. The source of change data for change data capture is the SQL Server transaction log.
2. As inserts, updates, and deletes are applied to tracked source tables, entries that describe those changes are added to the
log.
3. The log serves as input to the change data capture process. This reads the log and adds information about changes to the
tracked table’s associated change table.
EXEC sys.sp_cdc_enable_db
After enabling, the following system tables are created:
[cdc].[captured_columns]
[cdc].[change_tables]
[cdc].[ddl_history]
[cdc].[index_columns]
[cdc].[lsn_time_mapping]
[dbo].[systranschemas]
EXEC sys.sp_cdc_enable_table
@source_schema = N'dbo',
@source_name = N'MyTable',
@role_name = NULL
Note: A few CDC system tables and 2 CDC jobs are created automatically inside the SQL Server database.
cdc.change_tables: returns the list of all the tables which are enabled for capture.
cdc.ddl_history: contains the history of all the DDL changes since change data capture was enabled.
cdc.lsn_time_mapping: maps LSN numbers (which we will learn about later) to time.
dbo.systranschemas:
Ex: CDC.DBO_STAB_CT (the change table created for the tracked table dbo.STAB)
The 2 jobs:
cdc.DBNAME_capture
cdc.DBNAME_cleanup
Findings: the __$operation column in the change table indicates the type of change:
1: Delete operation
2: Insert operation
3: Before update
4: After update
Along with this, the changed data gets captured into the CDC-defined change table; a sketch of querying it follows.
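A hedged sketch of reading the captured changes, assuming the dbo.MyTable capture instance enabled above (the default capture instance name would be dbo_MyTable):
DECLARE @from_lsn binary(10), @to_lsn binary(10)
SET @from_lsn = sys.fn_cdc_get_min_lsn('dbo_MyTable')
SET @to_lsn = sys.fn_cdc_get_max_lsn()
SELECT * FROM cdc.fn_cdc_get_all_changes_dbo_MyTable(@from_lsn, @to_lsn, N'all')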
Note: Enable CDC only with confirmation from the apps team or client. If you enable it, it consumes more hardware resources, and additional storage is required.
WAITING TYPES
> When any query is waiting for resources, the relevant wait type comes into the picture; long waits cause a high performance impact.
1. LCK_M_S: Occurs when a task is waiting to acquire a shared lock. [Occurs mostly in blockings]
2. ASYNC_IO_COMPLETION: Occurs when a task is waiting for I/Os to finish.
3. ASYNC_NETWORK_IO:Occurs on network writes when the task is blocked behind the network. Verify that the client is
processing data from the server.
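For example, the heaviest waits on an instance can be listed from the aggregated wait statistics DMV (a minimal sketch):
SELECT TOP 10 wait_type, waiting_tasks_count, wait_time_ms
FROM sys.dm_os_wait_stats
ORDER BY wait_time_ms DESC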
SQL ARCHITECTURE
>SQL Server architecture is mainly divided into different components, i.e. the SNI protocol layer, the Relational Engine, the Storage Engine, and the Buffer Pool. It is majorly classified into two main engines: the Relational Engine and the Storage Engine.
>The SQL Server Network Interface (SNI) is a protocol layer that establishes the network connection between the client and the server. It consists of a set of APIs that are used by both the database engine and the SQL Server Native Client (SNAC).
➤Shared memory: Simple and fast, shared memory is the default protocol used to connect from a client running on the same computer as SQL Server. It can only be used locally, has no configurable properties, and is always tried first when connecting from the local machine.
➤TCP/IP: TCP/IP is the most commonly used access protocol for SQL Server. It enables you to connect to SQL Server by specifying an IP address and a port number. Typically, this happens automatically when you specify an instance to connect to.
Your internal name resolution system resolves the hostname part of the instance name to an IP address, and either you connect
to the default TCP port number 1433 for default instances or the SQL Browser service will find the right port for a named
instance using UDP port 1434.
➤Named Pipes: TCP/IP and Named Pipes are comparable protocols in the architectures in which they can be used. Named
Pipes was developed for local area networks (LANs).
➤VIA: Virtual Interface Adapter is a protocol that enables high-performance communications between two systems. It requires
specialized hardware at both ends and a dedicated connection.
TDS is a Microsoft-proprietary protocol originally designed by Sybase that is used to interact with a database server. Once a connection has been made using a network protocol such as TCP/IP, a link is established to the relevant TDS endpoint that then acts as the communication point between the client and the server.
The Relational Engine is also sometimes called the query processor because its primary function is query optimization and
execution.
1. Command Parser:
The Command Parser’s role is to handle T-SQL language events. It first checks the syntax and returns any errors back to the
protocol layer to send to the client. If the syntax is valid, then the next step is to generate a query plan or find an existing plan. A
Query plan contains the details about how SQL Server is going to execute a piece of code. It is commonly referred to as an
execution plan.
Plan Cache: Creating execution plans can be time consuming and resource intensive, so The Plan Cache, part of SQL Server’s
buffer pool, is used to store execution plans in case they are needed later.
2. Query Optimizer:
The Query Optimizer is one of the most complex and secretive parts of the product. It is what’s known as a “cost-based”
optimizer, which means that it evaluates multiple ways to execute a query and then picks the method that it deems will have
the lowest cost to execute. This “method” of executing is implemented as a query plan and is the output from the optimizer.
3. Query Executor:
The Query Executor’s job is self-explanatory; it executes the query. To be more specific, it executes the query plan by working
through each step it contains and interacting with the Storage Engine to retrieve or modify data.
>The Storage engine is responsible for managing all I/O to the data, and contains the Access Methods code, which handles I/O
requests for rows, indexes, pages, allocations and a Buffer Manager, which deals with SQL Server’s main memory consumer, the
buffer pool. It also contains a Transaction Manager, which handles the locking of data to maintain Isolation (ACID properties)
and manages the transaction log.
1. Access Methods:
>Access Methods is a collection of code that provides the storage structures for data and indexes as well as the interface
through which data is retrieved and modified. It contains all the code to retrieve data but it doesn’t actually perform the
operation itself; it passes the request to the Buffer Manager.
2. Buffer Manager:
>The Buffer Manager manages the buffer pool, which represents the majority of SQL Server’s memory usage. If you need to
read some rows from a page the Buffer Manager will check the data cache in the buffer pool to see if it already has the page
cached in memory. If the page is already cached, then the results are passed back to the Access Methods. If the page isn’t
already in cache, then the Buffer Manager will get the page from the database on disk, put it in the data cache, and pass the
results to the Access Methods.
Data Cache: The data cache is usually the largest part of the buffer pool; therefore, it’s the largest memory consumer within SQL
Server. It is here that every data page that is read from disk is written to before being used.
3. Transaction Manager:
>The Transaction Manager has two components that are of interest here: a Lock Manager and a Log Manager.
>The Lock Manager is responsible for providing concurrency to the data. The Access Methods code requests that the changes it
wants to make are logged, and the Log Manager writes the changes to the transaction log. This is called Write-Ahead Logging.
Checkpoint Process:
>A checkpoint is a point in time created by the checkpoint process at which SQL Server can be sure that any committed transactions have had all their changes written to disk. This checkpoint then becomes the marker from which database recovery can start.
>The checkpoint process ensures that any dirty pages associated with a committed transaction will be flushed to disk. Unlike the lazy writer, however, a checkpoint does not remove the page from cache; it makes sure the dirty page is written to disk and then marks the cached page as clean in the page header.
Lazy writer:
>The lazy writer is a thread that periodically checks the size of the free buffer list. When it's low, it scans the whole data cache to age out any pages that haven't been used for a while. If it finds any dirty pages that haven't been used for a while, they are flushed to disk before being marked as free in memory.
Checkpoint occurs when:
– A cluster failover happens
– A database snapshot is created
– A CHECKPOINT command is issued, or the recovery interval is reached
Dirty pages / dirty data: pages that were modified in the buffer pool but have not yet been written to disk; both the lazy writer and the checkpoint process flush them.
Performance:
• Disk Issues
• DB Design
PROFILER & PERFMON TOOLS
• SQL Profiler is a tool to monitor the performance of various stored procedures. It is used to debug queries and procedures. Based on performance, it identifies slow-executing queries. It captures problems by capturing the events on the production environment so that they can be solved.
SQL Profiler captures SQL Server events from a server. The events are saved in a trace file that can later be used to analyze and diagnose problems.
• It is used to find the cause of the problem by stepping through problem queries.
• SQL Profiler also supports auditing the actions performed on instances of SQL Server.
• Monitor the performance of an instance of the SQL Server Database Engine, Analysis Server, or Integration Services
(after they have occurred).
• Collect a sample of events for tuning the physical database design by using database engine tuning advisor
• Aggregate trace results to allow similar event classes to be grouped and analyzed. These results provide counts based
on a single column grouping.
Profiler is tool that monitors the events and activity running on a SQL Server. Using profiler, this monitoring can be viewed,
saved, and replayed.
Errors and Warnings events:
ErrorLog
EventLog
Exception
Hash Warning
Execution Warnings
Sort Warnings
Bitmap Warning
3. Lock and Timeout Events:
Lock: Acquired
Lock: Cancel
Lock: Deadlock
Lock: Deadlock Chain
Lock: Timeout
5. Security Audit:
Audit Login
Audit Logout
7. TSQL events:
SQL: BatchStarting
SQL: StmtStarting
SP: Starting
RPC: Starting
To start, stop, and delete a trace you use the following commands.
To find the traceid:
SELECT * FROM sys.traces
This will give you a list of all of the traces that are running on the server.
To start a trace:
EXEC sp_trace_setstatus traceid, 1
To stop a trace:
EXEC sp_trace_setstatus traceid, 0
To delete a trace (you need to stop the trace first; this closes out the trace file that is written):
EXEC sp_trace_setstatus traceid, 2
1. DBCC OPENTRAN -- to check what current open transactions are running
2. Check blockings
3. Check deadlocks
4. Check indexes
Check joins
Check the number of columns (e.g. only 6 columns are needed to get the data, but if SELECT * is used then all the columns are read)
PERFORMANCE TOOL
Performance Monitor:
This is a built-in Windows tool; its main purpose is to capture the resource utilization of SQL Server in terms of CPU, memory, disk I/O, and network.
1. MEMORY COUNTERS:
Page Life Expectancy: this counter tells how much time a page resides in the buffer pool.
2. Lazy Writes/Sec:
This counter tracks how many times a second that the Lazy Writer process is moving dirty pages from the buffer to disk in order
to free up buffer space.
Note: Generally speaking, this should not be a high value, say more than 20 per second or so
3. Checkpoint Pages/Sec: When a checkpoint occurs, all dirty pages are written to disk. This is a normal procedure and will
cause this counter to rise during the checkpoint process.
4. Page reads/sec: Number of physical database page reads issued. 80–90 per second is normal; anything more than that indicates memory pressure.
5. Page writes/sec: Number of physical database page writes issued. 80–90 per second is normal; anything more than that indicates memory pressure.
7. Stolen pages: Number of pages used for miscellaneous server purposes (including procedure cache).
8. Buffer Cache hit ratio: Percentage of pages that were found in the buffer pool without having to incur a read from disk.
9. Target Server Memory (KB): Total amount of dynamic memory the server can consume.
10. Total Server Memory (KB): Total amount of dynamic memory (in kilobytes) that the server is using currently
2. DISK COUNTERS:
1. Avg. Disk Sec/Read: Measure of disk latency. Avg. Disk sec/Read is the average time to read data from disk
2. Avg. Disk sec/Write: Measure of disk latency. Avg. Disk sec/Write is the average time, in seconds, of a write of data to the
disk.
3. Avg. Disk Read Queue Length: the average number of read requests that were queued for the selected disk during the sample interval.
4. Avg. Disk Write Queue Length: the average number of write requests that were queued for the selected disk during the sample interval.
5. Avg. disk reads/sec: how many reads per second are performed against the disk.
6. Avg. disk writes/sec: how many writes per second are performed against the disk.
3. CPU COUNTERS:
% Processor Time
% Privileged Time
% User Time
% Interrupt Time
4. NETWORK COUNTERS:
TCP: Segments/sec.
DTA [DATABASE TUNING ADVISOR]
>The main purpose of this tool is to provide recommendations to improve query performance by analyzing the query [workload file] against a table in the database.
> Database Tuning Advisor can analyze the performance effects of workloads run against one or more databases or a SQL Profiler trace (they may contain T-SQL batches or remote procedure calls). After analyzing, it recommends adding, removing, or modifying physical design structures such as clustered and non-clustered indexes, indexed views, and partitioning.
Workload: a workload is a set of Transact-SQL statements that executes against the databases you want to tune.
Steps:
Note: After analysis, DTA provides recommendations; always implement them on a test server first, check the performance, and only then implement them on the prod system.
From DTA reports > statement cost report, check the value to know by what percentage the query became faster.
Example (a reconstructed sketch: create a table and bulk-populate it to produce a workload):
CREATE TABLE [Table] (userID varchar(55), FirstName nvarchar(55), Lastname nvarchar(55))
;WITH sequence AS (
SELECT TOP (100000) ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) AS n
FROM sys.all_columns c1, sys.all_columns c2
)
INSERT INTO [Table] (userID, FirstName, Lastname)
SELECT n, 'First' + CAST(n AS nvarchar(20)), 'Last' + CAST(n AS nvarchar(20))
FROM sequence
DBCC FREEPROCCACHE
GO
DBCC DROPCLEANBUFFERS
GO
DBCC FREESYSTEMCACHE('ALL')
GO
DBCC FREESESSIONCACHE
Note: these DBCC commands clear SQL Server caches; get approval from the application team before performing a cache clear on a production system.
ACID PROPERTIES
What is Transaction?
A transaction is a logical unit of work that contains one or more database operations. A valid transaction must meet the (ACID) Atomicity, Consistency, Isolation, Durability properties.
Atomicity: A transaction must be an atomic unit of work; either all of its data modifications are performed, or none of them are performed.
Consistency: When completed, a transaction must leave all data in a consistent state. In a relational database, all rules must be applied to the transaction's modifications to maintain data integrity. All internal data structures, such as B-tree indexes or doubly linked lists, must be correct at the end of the transaction.
Isolation: Modifications made by concurrent transactions must be isolated from the modifications made by other concurrent transactions. A transaction either sees data in the state it was in before another concurrent transaction modified it, or sees the data after the second transaction has completed.
Durability: After a transaction has completed, its effects are permanently in place in the system. The modifications persist even in the event of a system failure.
WINDOWS TASK SCHEDULER
Note: In SQL Server Express edition we do not have the SQL Server Agent service, so we cannot configure jobs or maintenance plans to automate backups; Windows Task Scheduler is used instead.
Steps:
1. Note the instance name, backup path, type of backup, and retention period.
2. Create a batch file containing the backup command and the backup path.
3. Create a scheduled task with a scheduler name and a schedule.
4. The Task Scheduler calls the batch file and triggers the backups via a stored procedure (a sketch of the batch file line follows).
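A minimal sketch of what the scheduled batch file can contain (instance, database, and path names are placeholders):
sqlcmd -S .\SQLEXPRESS -E -Q "BACKUP DATABASE MyDB TO DISK = 'D:\Backup\MyDB.bak'"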
QUERY TUNING
DBCC OPENTRAN -- check whether any open transactions are causing blockings
If yes, then according to your project process, find the blockings and speak to the apps team to confirm which SPID needs to be killed, with approvals via email:
KILL <SPID>
If there are no blockings, then check for deadlocks.
Note: By default the SQL Server DB engine will not capture any deadlock information; the DBA team needs to enable trace flags (a sketch follows).
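A sketch of enabling the deadlock trace flags globally (1204 and 1222 write deadlock details to the SQL Server error log):
DBCC TRACEON (1204, 1222, -1)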
In the execution plan, also check: logical operation, cache size, rebinds, and rewinds.
Actual Execution Plan - (CTRL + M) - is created after execution of the query and contains the steps that were performed
Estimated Execution Plan - (CTRL + L) - is created without executing the query and contains an approximate execution plan
6. Check whether any indexes are created on the table or not. If not, inform your dev/apps team and suggest creating an index, which improves the performance of the query.
Select * from sys.dm_db_index_physical_stats (DB_ID('DB NAME'), Object_id('tablename'), NULL, NULL, 'Detailed')
-----(From SQL Server 2005)
If there is fragmentation (high avg_fragmentation_in_percent), rebuild or reorganize the index as sketched below.
If there is no fragmentation, move on to the next checks.
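A hedged sketch of the two defragmentation options, assuming hypothetical table and index names:
ALTER INDEX IX_MyIndex ON dbo.MyTable REORGANIZE -- fragmentation roughly between 5% and 30%
ALTER INDEX IX_MyIndex ON dbo.MyTable REBUILD -- fragmentation above roughly 30%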
10. Check CPU and memory utilization. Get the top 10 queries that are consuming more CPU and memory in SQL Server and share the details with the apps team.
11. Run profiler or perfmon tool to capture events or counters depending on type of parameters.
13. Run the DTA [Database tuning advisor] to get any suggestions to improve the performance
Note: How to find which SQL Server instance is consuming more CPU or memory when there are multiple instances on a single server: go to Task Manager > View > Select Columns > select PID [Process ID]. Take the PID for each SQL Server process and compare it with Configuration Manager's Process ID for the specific instance. By matching the PIDs between Task Manager and Configuration Manager you can conclude which instance takes more memory or CPU.
HIGH CPU ISSUE
1. Go to Task Manager > check whether the SQL Server process (sqlservr.exe) is using more CPU or not.
If it is, investigate further from inside SQL Server; if any other resource's .exe is using the CPU, inform the concerned team.
Check whether any long-running or open transactions in SQL Server are causing high CPU.
2. If it’s SQL server then open perfmon and add the counter for
% Processor time
%user time
If you see the %privileged time to be high like covering around 40% and above then it could be some driver issue.
3. First, and very important: know the architecture, i.e. 64-bit (x64 or IA64) or 32-bit (x86).
Note: An improper, missing, or fragmented index leads queries to run longer, which leads to high CPU.
DBCC TRACESTATUS (-1); it's always good to check the reason for enabling the trace flags, if any are enabled.
sp_configure 'show advanced options', 1
go
reconfigure
go
sp_configure
go
Priority Boost: - it’s not recommended to be enabled as it can cause high CPU utilization. If it’s enabled (value =1) please disable
(set value to 0) it - it’s a static parameter which needs SQL restart
Max degree of parallelism: - Although this parameter doesn't make much of a difference, sometimes I have seen SPIDs stuck in ACCESS_METHODS_SCAN_RANGE_GENERATOR, CXPACKET, or other parallel wait types. Check whether you see any such waits when you run this query:
Note: - If you have more than 8 processors, please make sure the max degree of parallelism parameter is 8 or below 8.
8. DBCC FREEPROCCACHE: the query optimizer prepares new plans and recompilation starts again, which may improve performance and reduce CPU utilization.
9. Check with developer on designing of query or Application design or any further bugs...
>The Max Degree of Parallelism, or MAXDOP, is a configuration indicating how the SQL Server optimizer will use the CPUs. It is a server-wide configuration that by default uses all of the CPUs to have the available portions of the query executed in parallel. MAXDOP is very beneficial in a number of circumstances, but what if you have a reporting-like query running in an OLTP system that monopolizes much of the CPU and adversely affects typical OLTP transactions?
Or
>When SQL Server runs on a computer with more than one microprocessor or CPU, it detects the best degree of parallelism,
that is, the number of processors employed to run a single statement, for each parallel plan execution. You can use the max
degree of parallelism option to limit the number of processors to use in parallel plan execution. To enable the server to
determine the maximum degree of parallelism, set this option to 0, the default value. Setting maximum degree of parallelism to
0 allows SQL Server to use all the available processors up to 64 processors. To suppress parallel plan generation, set max degree
of parallelism to 1. Set the value to a number greater than 1 to restrict the maximum number of processors used by a single
query execution. The maximum value for thedegree of parallelism setting is controlled by the edition of SQL Server, CPU type,
and operating system. If a value greater than the number of available processors is specified, the actual number of available
processors is used. If the computer has only one processor, the max degree of parallelism value is ignored.
Complex/Long running queries – During query optimization, SQL Server looks for queries that might benefit from parallel
execution. It distinguishes between queries that benefit from parallelism and those that do not benefit, by comparing the cost
of an execution plan using a single processor versus the cost of an execution plan using more than one processor and uses the
cost threshold for parallelism value as a boundary point to determine short or long query. In a parallel query execution plan, the
INSERT, UPDATE, and DELETE operators are executed serially. However, the WHERE clause of an UPDATE or a DELETE statement,
or the SELECT part of an INSERT statement may be executed in parallel. The actual data changes are then serially applied to the
database.
Index data definition language (DDL) – Index operations that create or rebuild an index (REBUILD only, not applicable to
REORGANIZE), or drop a clustered index and queries that use CPU cycles heavily are the best candidates for a parallel plan. For
example, joins of large tables, large aggregations, and sorting of large result sets are good candidates.
Other Operations – Apart from the above, this option also controls the parallelism of DBCC CHECKTABLE, DBCC
CHECKDB, and DBCC CHECKFILEGROUP.
The degree of parallelism value is set at the SQL server instance level and can be modified by using the sp_configure system
stored procedure (command shown below). You can override this value for individual query or index statements by specifying
the MAXDOP query hint or MAXDOP index option. Note that this can be set differently for each instance of SQL Server. So if you
have multiple SQL Server instances in the same server, it is possible to specify a different Maximum DOP value for each one.
--MAXDOP can be changed only when 'show advanced options' is set to 1
EXEC sp_configure 'show advanced options', 1;
GO
RECONFIGURE WITH OVERRIDE;
GO
EXEC sp_configure 'max degree of parallelism', 8; -- 8 is an example value
GO
RECONFIGURE WITH OVERRIDE;
GO
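An individual statement can override the instance-wide setting with the MAXDOP query hint; a minimal sketch with a placeholder table name:
SELECT COUNT(*) FROM dbo.BigTable OPTION (MAXDOP 1)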
By default, when SQL is installed on a server, the parallelism setting is set to 0 meaning that the optimizer can utilize all the
available processors to execute an individual query. This is not necessarily the most optimal setting for the application and the
types of queries it is designed to support.
Scenario 1 – For an OLTP application, a typical setting of 1 helps. The reason is that in an OLTP environment most of the queries are expected to be point queries that address one or a relatively small number of records. Such queries do not need parallelized processing for efficient execution. If there are specific queries which need a setting greater than 1, the source code should be examined to see if a MAXDOP hint can be applied.
Scenario 2 – For an OLAP application, the setting should typically be the default 0 (up to 8 processors) or be greater than 1, because the queries such an application uses will typically target thousands or millions of records, and there may also be a scenario where you drop the index before an ETL operation and re-create it once refreshed data is uploaded, as is typical in data warehousing applications. There will definitely be performance advantages in using multiple processors to do this work in parallel.
Note: Using a setting of 0 in these applications is not recommended, especially when there are more than 8 processors in order
to keep the coordination costs, context switching down to manageable levels. It is typical to start with a value of 4 and
experiment with the reporting queries to see if this needs to be adjusted upwards.
MEMORY ISSUE
Steps: 1. Go to Task Manager > check how much memory sqlservr.exe (or SSMS.exe) is utilizing.
2. If any other .exe is using more memory, inform the relevant team. If SQL Server is consuming it, go to SSMS and try to check whether any SPID is consuming more memory.
3. Example: if SQL Server is consuming 20 GB out of 24 GB total physical memory, how do we find the actual utilization from the SQL end?
Find the STOLEN POTENTIAL value [shows free memory from the SQL level].
Convert the value: VALUE * 8 / 1024 / 1024 = actual free memory (in GB) out of the total memory SQL Server has consumed (the value is a count of 8 KB pages).
4. Please check whether memory capping or restriction is done at SQL Server level.
5. If not, please inform the apps team to raise a change record, and provide a suggestion for how much memory capping should be set dedicatedly for SQL Server.
Out of the total physical memory, always keep about 80% for SQL Server and leave the remaining 20% for the O/S (a sketch follows).
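A sketch of that capping, where 19660 MB is only an illustration of the 80% rule on a 24 GB server:
EXEC sp_configure 'show advanced options', 1
RECONFIGURE
EXEC sp_configure 'max server memory (MB)', 19660
RECONFIGURE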
6. If there is still a memory issue, enable the memory-related counters and compare against the standard values.
7. Verify the execution plans; sometimes the query optimizer may prepare wrong plans, which leads to high CPU and high memory.
Note: please get approval from the application team before performing a cache clear.
UPDATE STATISTICS
> By default, the query optimizer already updates statistics as necessary to improve the query plan; in some cases you can improve query performance by using UPDATE STATISTICS.
> Updating statistics ensures that queries compile with up-to-date statistics. However, updating statistics causes queries to recompile. We recommend not updating statistics too frequently, because there is a performance tradeoff between improving query plans and the time it takes to recompile queries.
Note:
1. Update statistics is one of the DBA maintenance activities and runs every day after business hours.
2. After an index rebuild there is no need to run update statistics again, because whenever an index rebuild runs, the statistics are automatically updated as well.
3. Whenever you create any index, always create the STATISTICS first and then create the indexes. (A sketch of refreshing statistics follows.)
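A sketch of both ways to refresh statistics, with placeholder names:
UPDATE STATISTICS dbo.MyTable WITH FULLSCAN -- refresh statistics on one table
EXEC sp_updatestats -- refresh out-of-date statistics across the whole database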
ACTIVITY MONITOR
> Activity Monitor is used to get information about user connections to the database engine and the locks that they hold. Activity Monitor is used to troubleshoot database locking issues and to terminate deadlocked or unresponsive processes.
>To use Activity Monitor: VIEW SERVER STATE permission on the server, and SELECT permission on the sysprocesses & syslocks tables in the master database.
>To kill a process: membership in the sysadmin or processadmin role is required to KILL a process.
This snapshot is very helpful to get a quick performance snapshot without the need to use other monitoring tool for the same
purpose.
% Processor Time – is the percentage of time the processors spend to execute threads that are not idle
Waiting Tasks – is the number of tasks that are waiting for processor, I/O, or memory to be released so the tasks can be
processed
Database I/O – is the data transfer rate in MB/s from memory to disk, disk to memory, or disk to disk
Batch Requests/sec – is the number of SQL Server batches received by the instance in a second
>The Processes pane shows the information about the currently running processes on the SQL databases, who runs them, and
from which application
Session ID – is a unique value assigned by Database Engine to every user connection. This is the spid value returned by
the sp_who procedure
User Process – 1 for user processes, 0 for system processes. The default filter is set to 1, so only user processes are shown
Task State – the task state, blank for tasks in the runnable and sleeping state. The value can also be obtained using the
sys.dm_os_tasks view, as the task_state column. The states returned can be:
“PENDING: Waiting for a worker thread.
RUNNABLE: Runnable, but waiting to receive a quantum.
RUNNING: Currently running on the scheduler.
SUSPENDED: Has a worker, but is waiting for an event.
DONE: Completed.
SPINLOOP: Stuck in a spinlock.” [2]
Command – the current command type. The value can also be obtained using the sys.dm_exec_requests view, as
the command column
Wait Time (ms) – how long in milliseconds the task is waiting for a resource. The value can also be obtained using
the sys.dm_os_waiting_tasks view, as the wait_duration_ms column
Wait Type – the last/current wait type. The value can also be obtained using the sys.dm_os_waiting_tasks view, as
the wait_type column. The waits can be resource, queue and external waits
Wait Resource – the resource the connection is waiting for. The value can also be obtained using
the sys.dm_os_waiting_tasks view, as the resource_description column
Blocked By – the ID of the session that is blocking the task. The value can also be obtained using
the sys.dm_os_waiting_tasks view, as the blocking_session_id column
Head Blocker – the session that causes the first blocking condition in a blocking chain
Memory Use (KB) – the memory used by the task. The value can also be obtained using the sys.dm_exec_sessions view, as
the memory_usage column
Host Name – the name of the computer where the current connection is made. The value can also be obtained using
the sys.dm_exec_sessions view, as the host_name column
Workload Group – the name of the Resource Governor Workloadgroup [3]. The value can also be obtained using
the sys.dm_resource_governor_workload_groups view, as the name column
Wait Category – the categories are created combining closely related wait types. The wait types are shown in the Wait
Type column of the Processes pane
Wait Time (ms/sec) – the time all waiting tasks are waiting for one or more resources
Recent Wait Time (ms/sec) – the average time all waiting tasks are waiting for one or more resources
Average Waiter Count – is calculated for a typical point in time in the last sample interval and represents the number of tasks
waiting for one or more resources
Cumulative Wait Time (sec) – the total time waiting tasks have waited for one or more resources since the last SQL Server
restart, or DBCC SQLPERF last execution
Shows information about the database files on the SQL Server instance. For each database, all database files are listed – MDF,
LDF and NDF, their paths, and names
MB/sec Read – shows recent read activity for the database file
MB/sec Written – shows recent write activity for the database file
Response Time (ms) – average response time for recent read-and-write activity
Expensive queries are queries that use a lot of resources – memory, disk, and network. The pane shows expensive queries
executed in the last 30 seconds. The information is obtained from
the sys.dm_exec_requests and sys.dm_exec_query_stats views. A double-click on the query opens the monitored statement
The context menu for the specific query provides options to open the query in Query Editor, and show the execution plan
Executions/min – the number of executions per minute, since the last recompilation. The value can also be obtained using
the sys.dm_exec_query_stats view, as the execution_count column
CPU (ms/sec) – the CPU rate used, since the last recompilation. The value can also be obtained using
the sys.dm_exec_query_stats view, as the total_worker_time column
Physical Reads/sec, Logical Writes/sec, and Logical Reads/sec – the rate of physical reads/logical writes/logical reads per
second. The value can also be obtained using the sys.dm_exec_query_stats view, as the total_physical_reads/
total_logical_writes/ total_logical_reads columns
Average Duration (ms) – average time that the query runs. Calculated based on
the total_elapsed_time and execution_count columns in the sys.dm_exec_query_stats view
Plan Count – the number of duplicate query plans. A large number requires investigation and potential explicit query
parameterization
EXECUTION PLAN
Execution plan graphically displays the data retrieval methods chosen by SQL Server. It represents the execution cost of specific statements and queries in SQL Server. This graphical approach is very useful for understanding the performance characteristics of a query.
What is an execution plan? When would you use it? How would you view the execution plan?
An execution plan is basically a road map that graphically or textually shows the data retrieval methods chosen by the SQL Server query optimizer for a stored procedure or ad-hoc query, and it is a very useful tool for a developer to understand the performance characteristics of a query or stored procedure, since the plan is the one that SQL Server will place in its cache and use to execute the stored procedure or query. Within Query Analyzer there is an option called "Show Execution Plan" (located on the Query drop-down menu). If this option is turned on, it will display the query execution plan in a separate window when the query is run again.
1. Physical operations
2. Logical operations
3. Actual Number rows
4. Estimated I/O cost
5. Estimated CPU cost
6. Number of Executions
7. Estimated Number of Executions
8. Estimated Operator cost
9. Estimated Subtree cost
10. Estimated Number of Rows
11. Estimated Row Size
12. Actual rebinds
13. Actual rewinds
14. Key lookup
15. Nested loops
16. Index seek
17. Index scan
RAID LEVELS
RAID Levels: [RAID stands for Redundant Array of Inexpensive or Independent Disks]
>RAID is a disk system that contains multiple disk drives, called an array, to provide greater performance, fault tolerance, storage
capacity, at a moderate cost. While configuring your server system, you typically have to make a choice between hardware RAID
and software RAID for the server’s internal disk drives
TYPES OF RAIDS:
1. RAID-0 (striping):
1. Minimum 2 disks. Best raw performance, but no redundancy.
2. RAID-1 (mirroring):
1. Minimum 2 disks.
3. RAID-5 (striping with parity):
1. Minimum 3 disks.
2. Best cost-effective option providing both performance and redundancy. Use this for a DB that is heavily read oriented; write operations will be slow.
4. RAID-10 (mirroring + striping):
1. Minimum 4 disks.
2. If you can afford the dollars, this is the BEST option for any mission-critical applications (especially databases).
Note: From Disk Management you can find whether RAID is configured or not: if 2 or 3 disks show the same drive letter, that means RAID is implemented.
TEMPDB ARCHITECTURE
– User Objects
– Internal Objects
– Version Stores
User Objects:
• Temporary tables (local and global)
• Table variables
Internal Objects:
• Work files needed for many GROUP BY, ORDER BY, UNION, and SELECT DISTINCT operations.
• Work files for sorts that result from creating or rebuilding indexes (SORT_IN_TEMPDB).
• Storing temporary large objects (LOBs) as variables or parameters (if they won’t fit into memory).
Version Stores:
• The version store is a collection of pages used to store row-level versioning of data.
– When DML is executed against a database using snapshot transactions or read-committed row versioning isolation levels.
2. Online-Index-Build Version Store: Used for online index builds or rebuilds. EE edition only.
• Generally, there are three major problems you run into with Tempdb:
2. Tempdb is experiencing DDL and/or allocation contention on various global allocation structures (metadata pages) as
temporary objects are being created, populated, and dropped. E.G. Any space-changing operation (such as INSERT) acquires a
latch on PFS, SGAM or GAM pages to update space allocation metadata. A large number of such operations can cause excessive
waits while latches are acquired, creating a bottleneck, and hurting performance.
SOLUTION: add multiple equally sized tempdb data files (a common starting point is one per CPU core, up to 8) so that allocation metadata contention is spread across files; a sketch follows.
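A hedged sketch of adding one more equally sized tempdb data file (file name, path, and sizes are placeholders):
ALTER DATABASE tempdb
ADD FILE (NAME = tempdev2, FILENAME = 'T:\TempDB\tempdb2.ndf', SIZE = 4GB, FILEGROWTH = 512MB)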
• sys.dm_db_file_space_usage: Returns one row for each data file in Tempdb showing space usage.
• sys.dm_db_task_space_usage: Returns one row for each active task and shows the space allocated and deallocated by the
task.
• sys.dm_db_session_space_usage: Returns one row for each session, with cumulative values for space allocated and
deallocated by the session.
Performance Counters / DMVs:
• sys.dm_db_file_space_usage
Errors that indicate tempdb is out of space:
– 1101 or 1105: A session has to allocate more space in tempdb in order to continue
– 3967: The version store has been forced to shrink because tempdb is full.
Note:Be sure auto growth is turned on for tempdb, andensure that you have enough available free diskspace.
> Restrictions on tempdb include: removing the primary filegroup, primary data file, or log file.
SQL SERVER AUDITING
Auditing: Auditing an instance of SQL Server or a SQL Server database involves tracking and logging events that occur on the system.
When working with SQL Server 2008 auditing, you need to keep four things in mind:
• Auditing is a new feature in SQL Server 2008 which enables database administrators to capture events. This feature is expected to be lightweight compared to other third-party audit event collectors.
• The audit object provides a manageable auditing framework that makes it easy to define the events that should be logged and the locations where the log should be stored.
• SQL Server helps you implement a comprehensive auditing solution to secure your database and meet regulatory compliance requirements.
• Assign all users to meaningful logical groups.
Creating an audit, and reviewing audit results using SSMS, is a four-step process, as outlined in the previous section:
The first step is to create a new audit object. To create a new audit object using SSMS, go to the SQL Server instance you
want to audit, open up “Security,” and you will see the “Audits” folder, as shown in Figure 2:
Right-click on the “Audits” folder and select “New Audit,” and the “Create Audit” dialog box appears, as shown in Figure 3:
Figure 3: To create an audit, you have to assign it a name and specify where the audit data will reside.
The first thing you need to do is to decide if you want to use the name that is automatically generated for you as the audit
object name, or to assign your own name. Since numbers don’t mean much to me, I assigned it my own name.
Next, you have to provide a “Queue Delay” number. This refers to the amount of time after an audit event has occurred
before it is forced to be processed and written to the log. The default value is 1000 milliseconds, or 1 second. While I am
going to accept the default for this demo, you might want to consider increasing this value if you have a very busy server.
The next option on the screen is called “Shut down server on audit log failure”. If you select this option, and later SQL
Server is restarted, and for whatever reason the audit data can’t be logged, then SQL Server will not start, unless you
manually start it at the command line using a special parameter. This option should only be used in environments where
very tight auditing standards are followed and you have 24/7 staff available to deal with the problem, should it occur.
Next, beside “Audit,” in the dialog box, there is a drop-down box with “File” selected by default. This option is used to tell
SQL Server where you want the audit logs to be stored.
Figure 4: There are three options for where you can store audit data.
SQL Server Audit allows you to store audit data in a file, in the Security Log, or the Application Log. If you choose “File”, then
you must also specify the location of the file, along with additional information, such as how much data it can collect, and
so on. If you choose Security Log or Application Log, then the audit results are stored in these Windows Operating System
Event Logs. I am going to choose “Application Log”. Once this is done, the dialog box should look as shown in Figure 5:
Figure 5: Once all the data has been provided, click “OK” to create the audit.
Now that the audit has been configured, click on “OK” to save it. It should then appear in the SSMS Object Browser, as
shown in Figure 6:
Figure 6: Notice the red arrow next to the newly created audit.
The red arrow next to the audit object means that it is not currently enabled. That’s OK for now, we can enable it later.
Now that we have created the audit, we need to create the matching audit specification. If we wanted to do an
instance-wide audit, we would create a server audit specification. But for this example, where the goal is to audit the
SELECT activity on a single table in a single database, a database audit specification is created.
To create a database audit specification using SSMS, open up the database to be audited, then open up the security folder
under it. Next, right-click on “Database Audit Specifications” and select “New Database Audit Specification”, as shown in
Figure 7:
Figure 7: To create a database audit specification, you must do so from within the database you want to audit.
The “Create Database Audit Specification” dialog box appears, as shown in Figure 8:
Figure 8: The “Create Database Audit Specification” dialog box has many options to complete.
You can either choose to accept the default name assigned to this database specification, or you can enter your own. Next,
select the appropriate audit object from the Audit dropdown box, as shown in Figure 9:
Figure 9: The “Create Database Audit Specification” dialog box has many options to complete.
In this case there is only one audit object, the “EmployeePayHistory”, as this is a newly installed SQL Server and doesn’t
have any other audit objects on it.
Next, you must specify the kind of audit activity you want to capture by selecting from the “Audit Action Type” drop-down
box, as shown in Figure 10:
Figure 10: You can select from many pre-defined audit actions.
For this example, I want to choose the “SELECT” “Audit Action Type,” as the goal is to record all SELECT activity for the
payroll table. Of course, you can choose any audit action type you want, but you can only choose from those that are listed.
You can’t create your own.
Now that the audit action type has been chosen, the “Object Class” must be chosen – see Figure 11:
Figure 11: In this case, you can choose from three object classes.
The object class allows us to narrow down the scope of what we want to audit. For this example, because we want to
monitor activity on a table, “Object” is selected.
The next step is to specify the object, or the table name, that is to be audited. To do this, click on the browse button under
“Object Name,” and the “Select Objects” dialog box appears, as shown in Figure 12:
Figure 12: The “Select Objects” dialog box allows you to select which object to audit.
Having clicked on the “Browse” button, the list of available objects will appear, as shown in Figure 13:
Browse through the “Browse for Object” dialog box until you find the object or objects you want to audit, then select them.
Above, I have selected a single table: HumanResources.EmployeePayHistory.
Once the objects have been selected, click “OK,” and the “Select Object” dialog box reappears, as shown in Figure 14:
Now that the object to be audited has been selected, click “OK,” and you are returned to the original “Create Database
Audit Specification” dialog box, as shown in Figure 15:
There is one last step, and that is to specify what security principals (user accounts) that we want to monitor. To do this,
click on the browse button under “Principal Name,” and another “Select Object” dialog box appears.
I am going to spare you seeing this screen again, and skip immediately to the “Browse for Object” dialog box, where you
can see what principals you can choose from, as shown in Figure 16:
In this case, public is chosen, because the goal of this audit is to identify anyone who runs a SELECT against the payroll
table. Optionally, you can select on specific users or roles. Click on “OK” for this dialog box, then click on “OK” for the
“Select Objects” dialog box, and we reach the final screen, seen on Figure 17:
Figure 17: We are finally done creating the database audit specification.
Since we are only interested in auditing this one table for a single action, we will stop now. If you wanted to, you could
continue to add addition actions and objects to this audit specification. Click on “OK,” and the database Audit Specification
will be saved, and you can view it in object explorer, as shown in Figure 18:
Figure 18: Notice the red arrow next to the specification, which tells us that it is turned off.
Once the new database audit specification has been created, it has a red arrow next to it, indicating that it is turned off. We
will turn it on in the next step.
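The same four steps can also be scripted instead of clicked through; a hedged T-SQL sketch of the walkthrough above (the audit and specification names are illustrative, and AdventureWorks is assumed as the audited database):
USE master
GO
CREATE SERVER AUDIT [PayHistoryAudit]
TO APPLICATION_LOG
WITH (QUEUE_DELAY = 1000, ON_FAILURE = CONTINUE)
GO
USE AdventureWorks
GO
CREATE DATABASE AUDIT SPECIFICATION [PayHistorySelectAudit]
FOR SERVER AUDIT [PayHistoryAudit]
ADD (SELECT ON OBJECT::HumanResources.EmployeePayHistory BY public)
GO
ALTER DATABASE AUDIT SPECIFICATION [PayHistorySelectAudit] WITH (STATE = ON)
GO
ALTER SERVER AUDIT [PayHistoryAudit] WITH (STATE = ON)
GO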
http://bradmcgehee.com/2010/03/30/an-introduction-to-sql-server-2008-audit/
NEW FEATURES LIST
Piecemeal restoration
IntelliSense for Query Editing
Maintenance plans
Filestream
Resource database
Patch uninstallation - Rolling upgrade
Reports
Slipstream
PowerView
Reference links:
http://www.sqlusa.com/articles2005/top10operation/
http://sqlcat.com/sqlcat/b/top10lists/archive/2009/01/30/top-10-sql-server-2008-features-for-the-database-administrator-dba.aspx
http://www.databasejournal.com/features/mssql/article.php/3857466/Top-10-Features-of-SQL-2008-R2.htm
http://mcpmag.com/articles/2012/03/14/top-12-features-of-sql-server-2012.aspx
http://sqlmag.com/sql-server-2014/sql-server-2014-important-new-features
DMV'S & SP & DBCC
>The main purpose: DMVs let us monitor SQL Server without consuming hardware resources the way DBCC queries do.
>The DMVs, newly introduced in SQL Server 2005, give the database administrator information about the current state of the SQL Server machine.
>These values help the administrator to diagnose problems and tune the server for optimal performance.
> The DMVs in SQL Server are designed to give you a window into what's going on inside SQL Server.
> They can provide information on what's currently happening inside the server as well as the objects it's storing. They are designed to be used instead of system tables and various functions.
> DMVs are stored in the sys schema and they start with dm_ in the name.
SELECT name, type, type_desc FROM sys.system_objects WHERE name LIKE 'dm_%' ORDER BY name
Output:
List of DMV'S
Server-scoped dynamic management views and functions: These require VIEW SERVER STATE permission on the server.
Database-scoped dynamic management views and functions: These require VIEW DATABASE STATE permission on the
database.
There are multiple categories, close to 17, in which these views and functions have been organized.
We have 85 of these views and functions; to give a further split, 76 of these are views and 9 of them are functions. Below are the 4 types of DMV which can be used frequently.
>This section contains the dynamic management views that are associated with the SQL Server Operating System (SQLOS). The
SQLOS is responsible for managing operating system resources that are specific to SQL Server.
Locks: Sys.dm_tran_locks
Blockings: Sys.dm_os_waiting_tasks:
>Returns information about the wait queue of tasks that are waiting on some resource.
Sys.dm_os_wait_stats:
>Returns information about all the waits encountered by threads that executed. You can use this aggregated view to diagnose
performance issues with SQL Server and also with specific queries and batches.
Mirroring:
1. Sys.dm_db_mirroring_auto_page_repair:
1. Returns a row for every automatic page-repair attempt on any mirrored database on the server instance.
2. This view contains rows for the latest automatic page-repair attempts on a given mirrored database, with a maximum of 100
rows per database.
2. Sys.dm_db_mirroring_connections:
avg_fragmentation_in_percent: the column of sys.dm_db_index_physical_stats that shows index fragmentation.
Missing Index: sys.dm_db_missing_index_details
sys.dm_db_index_usage_stats:
Returns counts of different types of index operations and the time each type of operation was last performed in SQL Server.
Sys.dm_exec_cached_plans:
>Returns a row for each query plan that is cached by SQL Server for faster query execution. You can use this dynamic
management view to find cached query plans, cached query text, the amount of memory taken by cached plans, and the reuse
count of the cached plans.
Sys.dm_exec_connections:
>Returns information about the connections established to this instance of SQL Server and the details of each connection.
Sys.dm_exec_sessions:
>shows information about all active user connections and internal tasks. This information includes client version, client program
name, client login time, login user, current session setting, and more.
Sys.dm_exec_cursors:
>Returns information about the cursors that are open in various databases.
Sys.dm_repl_articles:
Sys.dm_repl_tranhash:
Sys.dm_repl_schemas:
Sys.dm_repl_traninfo:
SP [STORED PROCEDURES]
>A stored procedure is a group of SQL statements that has been created and stored in the database. A stored procedure will accept input parameters, so that a single procedure can be used over the network by several clients using different input data. Stored procedures reduce network traffic and increase performance. If we modify a stored procedure, all the clients get the updated stored procedure.
System stored procedures are stored in the master database and start with the sp_ prefix. These procedures can be used to perform a variety of tasks to support SQL Server functions, for external application calls into the system tables, and to perform many administrative and informational activities.
User-defined stored procedures are usually stored in a user database and are typically designed to complete tasks in the user database. When coding these procedures, don't use the sp_ prefix, because if we use the sp_ prefix, SQL Server first checks the master database and only then comes to the user-defined database.
Stored procedures are modules or routines that encapsulate code for reuse. A stored procedure can take input parameters and
return tabular or scalar results and messages to the client.
Extended stored procedures call functions from DLL files that an instance of Microsoft SQL Server can dynamically load and run.
Extended stored procedures are deprecated, so it is better to avoid using them.
The following example shows why the sp_ prefix should be avoided for user procedures:
USE AdventureWorks2008R2;
GO
-- Deliberately create a user procedure with the reserved sp_ prefix
CREATE PROCEDURE dbo.sp_who
AS
SELECT FirstName, LastName FROM Person.Person;
GO
EXEC sp_who;      -- resolves to the system sp_who in master, not our procedure
EXEC dbo.sp_who;  -- the schema-qualified name runs our local procedure
GO
DROP PROCEDURE dbo.sp_who;
GO
You can create the procedure once, store it in the database, and call it any number of times in your program.
If an operation is performed repetitively and requires a large amount of SQL code, stored procedures can be faster. They are
parsed and optimized when they are first executed, and a compiled version of the stored procedure remains in the plan cache for
later use. This means the stored procedure does not need to be reparsed and reoptimized with each use, resulting in much faster
execution times.
An operation requiring hundreds of lines of Transact-SQL code can be performed through a single statement that executes the
code in a procedure, rather than by sending hundreds of lines of code over the network.
Users can be granted permission to execute a stored procedure even if they do not have permission to execute the procedure's
statements directly.
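A minimal sketch of this (the procedure and user names are hypothetical):
-- Assume dbo.usp_GetNames selects from Person.Person.
CREATE USER ReportUser WITHOUT LOGIN;             -- hypothetical test user
GRANT EXECUTE ON dbo.usp_GetNames TO ReportUser;  -- no SELECT on the table is granted
-- ReportUser can EXEC dbo.usp_GetNames but cannot query Person.Person directly,
-- because ownership chaining covers objects owned by the same schema owner.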
DBCC (DATABASE CONSOLE COMMANDS) COMMANDS
> DBCC commands are used to check the consistency of the database or database objects. While executing DBCC commands the
DB engine creates a database snapshot and then runs the checks against this snapshot. After the DBCC command is completed,
this snapshot is dropped.
>Because the checks run against a snapshot, DBCC commands consume additional hardware resources while they run. They are
most useful for maintenance, performance, and troubleshooting exercises, and fall into four categories:
● Maintenance
● Informational
● Validation
● Miscellaneous
MAINTENANCE COMMANDS:
CLEANTABLE:
> Reclaims space from dropped variable-length columns in tables or indexed views.
DBREINDEX:
> Rebuilds one or more indexes for a table. (Deprecated; use ALTER INDEX ... REBUILD instead.)
DROPCLEANBUFFERS:
> Removes all clean buffers from the buffer pool.
DBCC DROPCLEANBUFFERS
FREEPROCCACHE:
> Removes all elements from the plan cache, removes a specific plan from the plan cache by specifying a plan handle or SQL
handle, or removes all cache entries associated with a specified resource pool.
DBCC FREEPROCCACHE
INDEXDEFRAG:
> Defragments the leaf level of indexes on the specified table or view. (Deprecated; use ALTER INDEX ... REORGANIZE instead.)
SHRINKDATABASE:
>Shrinks the size of the data and log files in the specified database.
SHRINKFILE:
>Shrinks the size of the specified data or log file for the current database, or empties a file by moving the data from the
specified file to other files in the same filegroup, allowing the file to be removed from the database. You can shrink a file to a
size that is less than the size specified when it was created. This resets the minimum file size to the new value.
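A typical call (the logical file name and target size are placeholders; logical names can be found in sys.database_files):
USE AdventureWorks2008R2;
GO
-- Shrink the log file to 1 GB (target size is specified in MB)
DBCC SHRINKFILE (N'AdventureWorks2008R2_Log', 1024);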
INFORMATIONAL COMMANDS:
CONCURRENCYVIOLATION:
DBCC CONCURRENCYVIOLATION
UPDATEUSAGE:
>Reports and corrects pages and row count inaccuracies in the catalog views. These inaccuracies may cause incorrect space
usage reports returned by the sp_spaceused system stored procedure.
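For example, to correct the counters for the current database (0 means the current database):
DBCC UPDATEUSAGE (0);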
INPUTBUFFER:
>Displays the last statement sent from a client to an instance of Microsoft SQL Server 2005.
OPENTRAN:
> Displays information about the oldest active transaction and the oldest distributed and nondistributed replicated transactions.
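For example (the database name is a placeholder):
DBCC OPENTRAN ('AdventureWorks2008R2');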
OUTPUTBUFFER:
> Returns the current output buffer in hexadecimal and ASCII format for the specified session_id.
DBCC OUTPUTBUFFER (session_id)
PROCCACHE:
> Displays information in a table format about the procedure cache.
DBCC PROCCACHE
SHOW_STATISTICS:
> Displays the current distribution statistics for the specified target on the specified table.
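For example, against an AdventureWorks table (the table and index names are assumptions):
DBCC SHOW_STATISTICS ('Person.Person', 'PK_Person_BusinessEntityID');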
SHOWCONTIG:
> Displays fragmentation information for the data and indexes of the specified table or view. (Deprecated; use
sys.dm_db_index_physical_stats instead.)
SQLPERF:
> Provides the transaction log space usage statistics for all databases. It can also be used to reset wait and latch statistics.
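For example (clearing wait statistics discards the accumulated history, so use it with care):
DBCC SQLPERF (LOGSPACE);                        -- log space usage for all databases
DBCC SQLPERF ('sys.dm_os_wait_stats', CLEAR);   -- reset wait statistics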
TRACESTATUS:
> Displays the status of trace flags.
USEROPTIONS:
> Returns the SET options active (set) for the current connection.
DBCC USEROPTIONS
VALIDATION COMMANDS:
> Performs validation operations on a database, table, index, catalog, filegroup, or allocation of database pages.
CHECKALLOC:
> Checks the consistency of disk space allocation structures for a specified database.
CHECKCATALOG:
> Checks for catalog consistency within the specified database.
CHECKCONSTRAINTS:
> Checks the integrity of a specified constraint or all constraints on a specified table in the current database.
CHECKDB:
> Checks the logical and physical integrity of all the objects in the specified database.
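A typical call (the database name is a placeholder):
DBCC CHECKDB (N'AdventureWorks2008R2') WITH NO_INFOMSGS, ALL_ERRORMSGS;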
CHECKFILEGROUP:
> Checks the allocation and structural integrity of all tables and indexed views in the specified filegroup of the current database.
CHECKIDENT:
> Checks the current identity value for the specified table and, if it is needed, changes the identity value.
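For example (the table name and seed value are placeholders):
DBCC CHECKIDENT ('dbo.Orders', NORESEED);      -- report the current identity value only
DBCC CHECKIDENT ('dbo.Orders', RESEED, 1000);  -- next inserted row gets identity 1001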
CHECKTABLE:
> Checks the integrity of all the pages and structures that make up the table or indexed view.
PROCESS:
2 DB Maintenance Activities (Daily\Weekly\Monthly)
8 Capacity planning\management
9 Interview Handling
10 Day-to-Day Activities
11 General Responsibility
13 Escalation Matrix
14 KT Questions
Service operations (response / resolution SLA by priority):
P1: 15 min response, 4 hrs resolution [Platinum]
P5: 1 day response, 7 days resolution [Plastic]
CITRIX
Servers:
Note:
1. Every DBA has a user name and password to log in to the BMC Remedy ticketing tool.
2. Alert calls are assigned to the DBA queue after the L1 team distributes them based on resource availability.
Alerts are of two types:
System raised
User raised
System raised: these alerts are raised by a monitoring tool, e.g. BMC Patrol or SCOM (System Center Operations Manager).
How it works:
When an error is written to the SQL Server logs, the event viewer, or the agent error logs, the Patrol agent on each server
detects it and raises an alert to BMC Remedy with the appropriate priority.
User raised: a user raises a call based on their requirement and selects a priority according to the impact.
Note: always work on incidents based on their priority and SLAs.
Go to MSTSC [Microsoft Terminal Services Client] or Remote Desktop Connection >> provide the FQDN [fully qualified
domain name] (or) the server IP.
Provide the user name, password, and domain name, then connect to log in to the server.
Change Management:
BMC Remedy
1. Get all the requirements, including downtime details, from the apps team.
2. Raise a change as per the plan.
3. Once the change is raised, document all technical steps, including pre-installation, implementation, verification, and the
backout plan.
4. Once completed, get technical-side approval.
5. Once approved, get SDM approval [service delivery manager\client approval].
6. Implement the change on the scheduled implementation date and finish within the agreed timelines.
7. If the implementation succeeds, the change is marked successful; otherwise it is marked failed.
Change States:
Raise change > Impact assessment > File the technical steps > Technical approval > SDM approval > Implement once approved.
Problem Management:
Problem Management handles recurring incidents. The Problem Management naming convention is 'PRB'.
Identify the causes and resolve the issue at its root.
Generally a Sr. L3/L4 engineer performs problem management.
Every resolved problem requires an RCA (root cause analysis).
Note: PRB tickets are handled by problem management coordinators, who set up calls and collect updates from all teams; all
members should provide correct updates on time.
Capacity planning\management:
Rules:
RACI Matrix: defines the roles and responsibilities of each team and how they relate to one another.
❖ R - Responsible
❖ A - Accountable
❖ C - Consulted
❖ I - Informed
In practice, a RACI matrix is a simple Excel sheet.
Daily Maintenance : Differential backup, transaction log backup, blocking jobs, CPU utilization
Weekly Maintenance : Full backup, rebuild indexes, reorganize indexes, DBCC CHECKDB, purging jobs
Escalation levels: Level 1 (L1) > Level 2 (L2) > Project manager.
When you raise a complaint against a person, ensure you follow the process as per the reporting structure.
SLA [Service Level Agreement]: the agreement between the client and the company.
>It defines a business requirement that services run continuously, without any service interruption.
Team size: 12 (24*7 support)
L1: 3
L2: 4
L3: 3
Team lead: 1
Manager: 1
Shift Management:
Shift handover:
How many prod and dev\UAT\pre-prod servers? 350+ prod; 150+ UAT\pre-prod\dev.
Backup strategy:
Logshipping: 100 +
Mirroring: 200+
Replication: 2000+
Retention on disk: 7 days
Retention on tape: 2 months
Old backups are automatically deleted from disk or tape by a cleanup job.
A database growth report is produced by a scheduled monthly job on the 1st of every month.
Database refresh
Do we have a client interaction?
How many changes are implemented today or last week or last month?
4. In technical roles, performing SQL Server installations of various versions (2005, 2008, 2008 R2, and 2012) along with
their configuration.
5. Applying service packs, hotfixes, and cumulative updates for various versions, and troubleshooting any failures
accordingly.
6. Resolving file and filegroup issues, for example by adding overflow files to the database.
7. As a part of security, creating users and logins and granting the relevant permissions to requestors.
8. Implementing best security hardening rules on production servers.
9. Configuring different types of backups for various versions, and handling recovery scenarios when a database crashes or
as the requirement demands.
10. Defining the best backup strategy, especially for production servers.
11. Advising application teams on database recovery models to resolve log-file-related issues, especially on
production servers.
12. Performing system and user database file movements whenever there is a disk space crunch.
13. Recovering system and user databases after corruption, including rebuilding master and repairing model,
msdb, and tempdb.
14. Performing attach\detach, import\export, and copy database techniques.
15. Configuring jobs, maintenance plans, and Database Mail to send notifications to end users on production servers.
16. As a part of high availability and disaster recovery solutions, configuring log shipping, database mirroring, replication,
and clustering.
17. Performing upgrade and migration activities and resolving any orphan user issues (see the sketch after this list).
18. As a part of performance tuning:
- Resolving blockings
- Resolving deadlocks
- Maintaining indexes
- Query tuning
- Running SQL Server Profiler to capture event classes
- Running the Performance Monitor tool to capture counters
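For the orphan user issues mentioned in point 17, a minimal sketch (sp_change_users_login is the era-appropriate tool; the user
and login names are placeholders):
-- List orphaned users in the current database
EXEC sp_change_users_login 'Report';
-- Re-map an orphaned database user to an existing server login
EXEC sp_change_users_login 'Update_One', 'AppUser', 'AppUser';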
Process roles:
(To be documented.)
Daily Responsibilities:
6) Perform any Change Requests during your shift (if any scheduled).
Health Checks:
4) Checking Backups
6) HA Synchronization/failure checks
7) Event Viewer
9) Database settings
12) Storage
These are the tasks that must be performed by a DBA on a daily basis.
1) Check System Event Logs and SQL Server Error Logs for unusual events.
3) Confirm that backups have been made successfully and moved to the backup storage or secure location.
4) Monitor disk space to make sure SQL Servers will not run out of disk space (see the sketch after this list).
5) Periodically monitor performance of database and system using both System Monitor and SQL Server Profiler.
7) Keep a log of any changes you make to servers, instance settings, database settings, and monitoring scripts.
8) Create SQL Server alerts to notify you of problems by e-mail, and take action as needed.
9) Regularly restore backups to a test server in order to verify that you can restore them.
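For the disk space check in point 4, a quick T-SQL sketch (sys.dm_os_volume_stats requires SQL Server 2008 R2 SP1 or later):
SELECT DISTINCT
    vs.volume_mount_point,
    vs.total_bytes / 1048576 AS total_mb,
    vs.available_bytes / 1048576 AS free_mb
FROM sys.master_files AS mf
CROSS APPLY sys.dm_os_volume_stats(mf.database_id, mf.file_id) AS vs;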
P1 Tickets:
2) Corruptions (Disk)
3) Hardware Failures
P2 Tickets:
3) Blockings
4) Deadlocks
7) Space Issues
9) Permission issues