SQL Server Notes
Information:
Data:
Ex:
Database:
SQL Server was originally developed by Sybase Corporation in partnership with Microsoft. It is a vendor product and is not an ANSI-standardized product; its T-SQL dialect extends the ANSI SQL standard.
Versions:
SQL Server 1.0, SQL Server 1.1, SQL Server 4.2, SQL Server 6.0, MS SQL Server 6.5, MS SQL Server 7.0,
MS SQL Server 2000, MS SQL Server 2005, MS SQL Server 2008, MS SQL Server 2008 R2,
MS SQL Server 2012, MS SQL Server 2014, MS SQL Server 2016, MS SQL Server 2017, MS SQL Server 2019.
Starting from SQL Server 2017, Service Packs are no longer released.
Example (SQL Server 2016 build numbers):
RTM: 13.0.1601.5
SP1: 13.0.4001.0 (13.1.4001.0) + CU9
SP2: 13.0.5026.0 (13.2.5026.0) + CU15 / CU17
Mainstream support end date: 2021-07-13
Extended support end date: 2026-07-14
In real-world environments it is recommended to work on the n-1 version, i.e., one version earlier than the latest
version.
A number such as 15.0.2000.5 (the RTM build of SQL Server 2019) is called a 'build number'.
RTM (Release to Manufacturing) is the actual base version released to the market for general use.
Before the RTM is released, Beta versions/Community Technology Previews (CTP) are released to the
market; these show what has been updated in this release compared to the previous version.
In between the Beta release and the RTM we have a Release Candidate (RC) which
means the version is ready and can be released into the market anytime.
Cumulative Updates (CUs) contain the bug fixes and enhancements released up to that point in
time. CUs are not fully regression tested. CUs are released in between one Service Pack and the
next.
A Service Pack (SP) contains a larger collection of bug fixes and is released over a longer
period of time (say once in 6 months, 9 months, or a year). SPs are fully regression
tested. SPs are discontinued starting from SQL Server 2017.
Enterprise:
Top edition with all the enterprise features of SQL Server. It has to be purchased. Can be
used for production.
Standard:
The cut-down edition; some enterprise features are not available. It has to be
purchased. The cost is much less compared to the Enterprise edition. Can be used for production.
Developer:
A fully functional Enterprise edition of SQL Server licensed for use as a development
and test database in a non-production environment. Can be downloaded for free. In simple terms,
this edition is for learning purposes.
Express:
Basic edition of SQL Server. Can be used for production with limitations. Can be
downloaded for free.
Free editions
Express: a free, lightweight edition of SQL Server with some limitations that can be used in a
production environment.
Main limitations:
• Limited to the lesser of 1 physical CPU or 4 cores.
• No single database (.mdf file) can be over 10 GB.
Developer: a fully functional Enterprise edition of SQL Server, licensed for use as a development
and test database in a non-production environment.
Evaluation: a fully functional trial Enterprise edition of SQL Server, limited to 180 days, after
which the tools will continue to run but the server services will stop.
Paid editions
Enterprise: the top-end edition with the full feature set.
Standard: has fewer features than Enterprise; used when there is no requirement for advanced
features.
Cost:
Authentication Modes:
At the time of installation, after selecting the instance type, we need to set the
authentication mode.
We have 2 types of authentication modes. They are “Windows” and “Mixed mode”.
Mixed authentication mode consists of both ‘Windows’ and ‘SQL’ mode.
If we configure the SQL Server in ‘Windows’ authentication then we can grant access
only to the windows user accounts.
If we configure the SQL Server in ‘Mixed’ authentication mode then we can grant access
to windows and non-windows user accounts/SQL accounts as well.
If we configure the SQL Server in ‘Mixed’ authentication mode then by default SQL
admin account is created by name “sa” for which we need to set the password.
If the installation is done in "Windows" authentication mode and we want to change to "Mixed"
mode post installation, then we do have an option to do so.
We can change the authentication mode after installation, but not the instance type (Default or
Named).
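As a quick check, the current authentication mode can be verified from T-SQL; the registry-based change below is a commonly used sketch (the usual way is SSMS: right click the instance → Properties → Security) and requires a service restart afterwards. Treat the registry path as an assumption and verify it on your build.
-- returns 1 = Windows authentication only, 0 = Mixed mode
select serverproperty('IsIntegratedSecurityOnly') as windows_auth_only;

-- sketch: switch to Mixed mode by setting LoginMode = 2 (1 = Windows only), then restart SQL Server
exec xp_instance_regwrite
     N'HKEY_LOCAL_MACHINE',
     N'Software\Microsoft\MSSQLServer\MSSQLServer',
     N'LoginMode', REG_DWORD, 2;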
Login Options:
SQL Server Management Studio (SSMS) is the tool used to connect the SQL Server after
installation and helps to perform any database administration and development
activities.
SQL Server Configuration Manager (SSCM) is the tool used to find out the list of SQL
Server services installed on a particular machine/server, along with their status.
SSCM can be opened through command prompt by using the command
“sqlservermanager15.msc”.
In SSCM the default SQL Server instance will be shown by the name “MSSQLSERVER”
and the named SQL Server instance will be shown by the instance name.
Windows Authentication Mode:
To log in to SQL Server using Windows authentication mode, we can give the server
name as the machine name, "." or "(local)".
We can log in to any named SQL Server instance installed on the machine by giving the server
name as "machine name\instance name", ".\instance name" or "(local)\instance name".
To log in to SQL Server using SQL Server authentication mode, we can give the server
name as the machine name, the login as "sa" and the password as the one that we set at
the time of installation.
We can log in to any named SQL Server instance installed on the machine by giving the server
name as "machine name\instance name", the login as "sa" and
the password as the one that we set at the time of installation.
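The same server-name formats work from SQLCMD as well; a small hedged example (the instance name SQL2019 is a placeholder, and the machine name is the one used later in these notes):
SQLCMD -S .                          (default instance, Windows authentication)
SQLCMD -S .\SQL2019 -E               (named instance, Windows authentication; -E = trusted connection)
SQLCMD -S DESKTOP-IA0FMAF -U sa      (SQL Server authentication; it prompts for the sa password)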
1. Configuration Manager:
Run sqlservermanager15.msc → SQL Server Services → Right click on the SQL Server
service → Start/Stop/Restart
2. Services.msc:
4. Command Prompt:
Named Instance:
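The commands themselves are not listed above; a hedged sketch using the standard Windows "net" command (INSTANCENAME is a placeholder for the named instance):
net start MSSQLSERVER           (start the default instance)
net stop MSSQLSERVER            (stop the default instance)
net start MSSQL$INSTANCENAME    (start a named instance)
net stop SQLSERVERAGENT         (stop the Agent service of the default instance)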
5. Power Shell:
Named Instance:
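A hedged PowerShell sketch using the standard service cmdlets (the instance name SQL2019 is a placeholder):
Start-Service -Name 'MSSQLSERVER'         # default instance
Stop-Service -Name 'MSSQLSERVER' -Force   # -Force also stops dependent services such as the Agent
Restart-Service -Name 'MSSQL$SQL2019'     # named instance
Get-Service -Name 'MSSQL*'                # list SQL Server related services and their status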
Whenever any changes are made to SQL Server configuration, to apply and incorporate
those changes the SQL Server has to be restarted.
Option 2: From System Registry
To check the version that is installed – select @@version. This command shows the
build number.
To check the server – select @@servername.
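A few equivalent checks using standard SERVERPROPERTY values (the output will depend on the installed build):
select @@version;       -- full version/build string
select @@servername;    -- server/instance name
select serverproperty('ProductVersion') as build_number,
       serverproperty('ProductLevel')   as service_pack_or_rtm,
       serverproperty('Edition')        as edition;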
Database:
There are 2 types of databases. They are “System database” and “User database”.
System database again have 4 types of databases by names “Master DB”, “Model DB”,
“Msdb DB”, “Temp DB”.
System database gets created at the time of installation of SQL Server.
User database is created as per the business requirements.
Any database consists of at least 2 files, namely a "Data file" and a "Log file".
The extension of a data file is ".mdf" (or ".ndf" for secondary data files) and that of a log file is ".ldf".
Right click on the instance name → Properties → Database Settings → Change it to the new path →
Restart the SQL Server service
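The current default data and log file locations can also be checked from T-SQL (these SERVERPROPERTY values are available in recent versions):
select serverproperty('InstanceDefaultDataPath') as default_data_path,
       serverproperty('InstanceDefaultLogPath')  as default_log_path;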
Development Basics:
SQL:
SQL Server is an RDBMS product with which we can store, retrieve, manipulate and
delete data.
To perform these activities we need a language, which is SQL (Structured Query
Language).
Data Types:
The data type of a column defines what value the column can hold.
Each column in a database table is required to have a name and a data type.
Various data types are as follows: INT, Char, Varchar, Money, Date, Datetime, binary,
Float etc.,
Constraints:
SQL Languages:
Creation of Database:
Using GUI:
Using Query:
Note: Whenever the database name includes special characters such as "-" or a space, place the
database name in square brackets ([]).
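A minimal sketch of creating a database from a query; the file names, sizes and paths below are hypothetical:
create database [DEV-BASICS]
on primary
( name = 'DEV-BASICS_data', filename = 'C:\sql\data\DEV-BASICS_data.mdf', size = 100MB, filegrowth = 64MB )
log on
( name = 'DEV-BASICS_log', filename = 'C:\sql\log\DEV-BASICS_log.ldf', size = 50MB, filegrowth = 64MB );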
ADD FILE
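A hedged example of adding a data file to an existing database (the logical name and path are placeholders):
alter database [DEV-BASICS]
add file ( name = 'DEV-BASICS_data2', filename = 'C:\sql\data\DEV-BASICS_data2.ndf', size = 50MB );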
To Delete a Datafile:
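A sketch of removing a data file; the file must be empty first, so DBCC SHRINKFILE with EMPTYFILE can move its data out (logical name as in the example above):
-- move the data out of the file, then remove it
dbcc shrinkfile ('DEV-BASICS_data2', emptyfile);
alter database [DEV-BASICS] remove file [DEV-BASICS_data2];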
To Drop a Database:
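A sketch of dropping a database; setting it to single user first closes existing connections (database name as used above):
use master;
go
alter database [DEV-BASICS] set single_user with rollback immediate;
drop database [DEV-BASICS];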
Creating a Table:
Using GUI:
Tables → Right Click → New Table → Column Name → Data Type → Save → Table Name
Using Query:
OR
INSERT INTO TABLE_NAME (COL1, COL2, COL3, ...) VALUES ('AAA', 'BBB', 'CCC', ...)
Demo:
CREATE DATABASE [DEV-BASICS]
use [DEV-BASICS]
go
create table emp1
(ID int, name varchar(50))
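Continuing the demo, a couple of insert statements and a check on the emp1 table created above (the sample rows are made up):
insert into emp1 values (1, 'John');               -- positional form
insert into emp1 (ID, name) values (2, 'Sarah');   -- column-list form
select * from emp1;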
Databases → Right Click → Restore Database → Devices → Browse the file location → OK
-- backup table data
select * into
[dbo].[Persons_bkp030521]
from [dbo].[Persons]
select * from
[Sales].[SalesOrderDetail]
select SalesOrderID,OrderQty
from
[Sales].[SalesOrderDetail]
select * from
[Sales].[SalesOrderDetail]
where SalesOrderID=43659
select * from
[Sales].[SalesOrderDetail]
where SalesOrderID<>43659
select * from
[Sales].[SalesOrderDetail]
where SalesOrderID in
(43659,43660,43661)
select * from
[Sales].[SalesOrderDetail]
where SalesOrderID not in
(43659,43660,43661)
select * from
[Sales].[SalesOrderDetail]
where SalesOrderID in
(43659,43660,43661) and OrderQty>5
select * from
[Sales].[SalesOrderDetail]
where SalesOrderID in
(43659,43660,43661) or OrderQty>5
select * from
[Sales].[SalesOrderDetail]
order by OrderQty asc
select * from
[Sales].[SalesOrderDetail]
order by UnitPrice asc
select SalesOrderID,
sum(orderqty)as totalqty
from
[Sales].[SalesOrderDetail]
group by SalesOrderID
having sum(orderqty) >30
select * from
[HumanResources].[Department]
where name like '%ces'
select * from
[HumanResources].[Department]
where name like 's%'
select * from
[HumanResources].[Department]
where name like '%sa%'
select * from
[HumanResources].[Department]
where name like '_a%'
select * from
[Sales].[SalesOrderDetail]
where OrderQty between 5 and 10
Local Server:
1. Using SSMS
2. Using configuration Manager
3. Using SQLCMD
SQLCMD:
C:\WINDOWS\system32>hostname
DESKTOP-IA0FMAF
C:\WINDOWS\system32>SQLCMD -S DESKTOP-IA0FMAF
1> use
2> [dev-basics]
3> go
Changed database context to 'DEV-BASICS'.
1> select * from [dbo].[dept]
2> go
DeptID      Dname          Dhead
----------- -------------- --------------
101         IT             Mike
102         ac             elizabeth
103         finance        nancy
104         hr             chris

(4 rows affected)
1>
How to set up SQL Server connectivity from the application:
DAC (Dedicated Admin Connection):
This is used to connect to SQL Server in admin mode to troubleshoot connectivity
issues.
The default port for the DAC is 1434.
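A hedged example of opening a DAC session with SQLCMD; the "admin:" prefix requests the dedicated admin connection (for remote DAC, the "remote admin connections" option must be enabled with sp_configure):
SQLCMD -S admin:DESKTOP-IA0FMAF -E
(or, in SSMS, connect using the server name ADMIN:DESKTOP-IA0FMAF)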
Ping:
C:\WINDOWS\system32>ping 192.168.100.9
Telnet:
If the firewall is blocking the incoming connection, we need to open a firewall rule to allow
inbound connections to port 1433.
Telnet successful:
C:\WINDOWS\system32>telnet 192.168.100.9 1433
UDL Test:
sp_configure
go
Databases:
1. System Database
2. User Database
System Database:
These are the databases that get created along with the SQL Server installation.
System databases maintain all the system-level information.
These databases should not be used for storing user/business data.
Master
Model
Msdb
Tempdb
Resource
Distribution
Master Database:
Linked Servers:
To check all the databases information the following query can be used.
SELECT * FROM [sys].[databases]
To know the information or data about the files (mdf/ndf/ldf) then the following query
can be used.
SELECT * FROM [sys].[master_files]
To know the information of all the databases and their data files at a time, we can use
the following query using joins.
select
sd.name,
sd.create_date,
smf.name,
smf.physical_name
from sys.databases sd
join sys.master_files smf
on sd.database_id=smf.database_id
Open Sessions, Connections… etc.,:
To check the information about the open sessions at that point of time.
Session IDs 1 to 50 are reserved for system processes; whenever we open a new
query tab, the new session ID is shown on the query tab.
To check the information about the connections at that point of time.
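The DMVs commonly used for these two checks are sys.dm_exec_sessions and sys.dm_exec_connections; a minimal sketch:
select session_id, login_name, host_name, program_name, status
from sys.dm_exec_sessions;          -- open sessions

select session_id, client_net_address, connect_time, protocol_type
from sys.dm_exec_connections;       -- physical connections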
MSDB Database:
For every SQL Server database engine that we install, a corresponding agent service will
be installed.
Purpose of Agent:
To automate any activities like maintenance, backups, or any application jobs SQL Server
agent service is used.
By using the agent we can create jobs and automate them as per schedule.
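A quick way to list the jobs that have been created (the job metadata lives in the msdb database):
select name, enabled, date_created
from msdb.dbo.sysjobs
order by name;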
TEMP Database:
Table: dept
select * into #tempdept
from dept where deptname = 'ac'
-- loop through #tempdept, update the salary, and then drop #tempdept
A = 10
B = 15
C = A + B = 25
D = A + C = 35
Total = A + B + C + D = 85
1. GUI
3. Sys.master_files, Sys.sysaltfiles
Startup Parameters:
At startup, SQL Server reads the startup parameters, goes to the corresponding locations and checks
whether the corresponding files (master data file, master log file, error log) are available or not.
As master maintains the locations of all other databases, SQL Server will then bring all other databases
online one by one.
Error log:
This is the file where SQL Server logs/writes all the messages like errors, warnings,
informational messages, etc.
This is our first focus point to troubleshoot any SQL Server related issues.
The location of the error logs can be found from the '-e' value available in the startup parameters.
SSMS:
Under the SQL instance → Management → SQL Server Logs
The default error log count is 1 current and 6 archives (old files).
The default error log count can be changed in between 6 to 99.
The default error log count can be changed as follows:
Right click on SQL Server Logs → Configure → change the default error log count
Once the error log count is changed, new error log archives are generated each time
when we restart the SQL Server.
Error Log Recycling Process:
In order to generate a new error log on demand (recycle the error log) without restarting SQL Server,
we can execute the following stored procedure.
sp_cycle_errorlog
2. SSMS- script:
To check and read the data from the error log the following stored procedure can be
used.
sp_readerrorlog
To check and read the data from a particular error log the stored procedure can be used
along with 2 parameters as follows.
1st parameter is file number
2nd parameter is file type
There are two types of error log file types. They are ‘SQL Server Error Logs’ and ‘SQL
Server Agent Error Logs’.
The SQL Server Error Log file type is denoted by ‘1’ and the SQL Server Agent Error Log
by ‘2’.
Hence to read a particular error log, the stored procedure can be used as follows.
sp_readerrorlog 0,1
‘0’ represents current error log and ‘1’ represents SQL Server Error Logs.
When we want to search or read an error log for a specific pattern, the above
stored procedure can be used as follows.
sp_readerrorlog 0, 1, 'error', 'been'
For the above stored procedure, the result will display only the lines which
contain both the words 'error' and 'been'.
sp_readerrorlog accepts up to 4 parameters, whereas the underlying extended stored procedure
xp_readerrorlog accepts at least 7 parameters.
If this extended stored procedure is called directly the parameters are as follows:
Value of error log file you want to read: 0 = current, 1 = Archive #1, 2 = Archive #2, etc...
Log file type: 1 or NULL = error log, 2 = SQL Agent log
Search string 1: String one you want to search for
Search string 2: String two you want to search for to further refine the results
Search from start time
Search to end time
Sort order for results: N'asc' = ascending, N'desc' = descending
Logging path
Ports information
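A hedged example of calling the extended stored procedure directly with the parameters listed above (the search strings and sort order are just illustrations):
exec xp_readerrorlog 0, 1, N'error', N'been', NULL, NULL, N'desc';
-- 0 = current error log, 1 = SQL Server error log, search for 'error' and 'been',
-- no start/end time filter, newest entries first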
When we install SQL Server, the system databases are created by default on the C drive.
The C drive is the OS drive and is dedicated to the OS.
If we keep any of our databases (system/user) on it and something goes wrong with the
operating system, then we may lose our SQL Server databases as well.
Considering that, it is not a best practice to keep any application data on the OS drive.
Model:
use model
go
sp_helpfile
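If the model database files need to be moved off the C drive, a hedged sketch follows ('modeldev'/'modellog' are the default logical names reported by sp_helpfile above; the target paths are placeholders). After running it, stop SQL Server, physically move the .mdf/.ldf files to the new path and start the service again; the second sp_helpfile below then verifies the new location.
alter database model modify file (name = modeldev, filename = 'D:\SQLData\model.mdf');
alter database model modify file (name = modellog, filename = 'D:\SQLLogs\modellog.ldf');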
use model
go
sp_helpfile
MSDB:
use msdb
go
sp_helpfile
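Similarly for msdb ('MSDBData'/'MSDBLog' are the default logical names; the paths are placeholders). Stop the service, move the files, restart, and then verify with the second sp_helpfile below.
alter database msdb modify file (name = MSDBData, filename = 'D:\SQLData\MSDBData.mdf');
alter database msdb modify file (name = MSDBLog, filename = 'D:\SQLLogs\MSDBLog.ldf');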
use msdb
go
sp_helpfile
Tempdb:
--step1: identify the current location of tempdb database
use tempdb
go
sp_helpfile
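Step 2 is not shown above; a hedged sketch ('tempdev'/'templog' are the default logical names, the paths are placeholders). Tempdb files are recreated at startup, so after a restart the second sp_helpfile below should show the new location.
--step2: point the tempdb files to the new location, then restart the SQL Server service
alter database tempdb modify file (name = tempdev, filename = 'D:\TempDB\tempdb.mdf');
alter database tempdb modify file (name = templog, filename = 'D:\TempDB\templog.ldf');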
use tempdb
go
sp_helpfile
Master:
use master
go
sp_helpfile
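Master is handled differently: its location is changed through the startup parameters in SQL Server Configuration Manager rather than with ALTER DATABASE. A hedged example of the edited -d and -l parameter values (the paths are placeholders); after changing them, stop the service, move the files and start it again, then verify with the second sp_helpfile below.
-dD:\SQLData\master.mdf
-lD:\SQLLogs\mastlog.ldf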
use master
go
sp_helpfile
Architectures:
Here we discuss about the “Database Architecture” and “SQL Server Architecture”.
Database Architecture:
Database consists of two files. They are “Data File” and “Log File”.
Here we discuss about the architecture of both Data file and Log file.
Data File Architecture:
At the time of creation of the database, the data file will be created with a defined initial size.
Once that size is filled up, it is not possible to insert more data into the data file.
Hence, in order to avoid that, we have an option called "Auto growth" where we specify a file size
increment which will be added automatically to the data file once it gets filled up.
The auto growth option can be given one of 2 limits: either "unlimited" growth or growth limited
to a prescribed maximum size.
The "unlimited" value is effectively limited by the size of the disk (D/E/F drive) on which the
data is being stored.
The data file is divided into a number of small storage/data blocks.
Each block is called a page.
The size of each page is 8 KB.
8 pages combine to form a group called an "Extent", and the size of an "Extent" is 64
KB.
Extents can again be divided into 2 types. They are “Uniform Extent” and “Mixed
Extent”.
In the uniform extent all the 8 pages contains the data from single table (Say T1).
In the mixed extent the 8 pages contains data from a maximum of 8 tables/different
objects and a minimum of 2 tables.
The pages are again divided into 2 types. They are GAM (Global allocation map) and
SGAM (Shared Global allocation map).
GAM and SGAM tracks the status of uniform and mixed extents respectively.
These pages are called as bit map pages as the data is stored in the form of 0’s and 1’s.
If the bit value is 1 in the GAM, it indicates that the extent is free and available for
allocation.
If the bit value is 0 in the GAM, it indicates that the extent is already allocated (in use).
If the bit value is 1 in the SGAM, it indicates that the extent is being used as a mixed extent
and has at least one free page.
If the bit value is 0 in the SGAM, it indicates that the extent is either not used as a mixed
extent, or it is a mixed extent whose pages are all in use.
Every time a table is created, it is initially allocated pages from a mixed extent.
Once the table grows enough to fill 8 pages, it will then be allocated uniform
extents.
Each GAM and SGAM tracks status of 4 Gb worth of uniform and mixed extents.
Log File Architecture:
All the transactions that we make against the database, except "Select" statements, get
recorded in the log file first.
Every transaction that is recorded into the log file will have a LSN (Log Sequence
Number).
LSN is a hexadecimal number.
Once the transaction gets committed in the log file then it will apply the corresponding
changes to the data file.
Once the transaction is updated into the data file, the corresponding transactions in the
log file will get truncated periodically.
The truncation of the log file occurs either when the check pointer occurs or the log
backup occurs.
The log file is usually divided into number of small files called as VLF (Virtual Log Files).
The VLFs are of 3 types: "Active", "Recoverable" and "Reusable".
Initially all the VLFs will be in an inactive (unused) state.
Once a VLF is occupied with active transactions, that VLF is called an Active
VLF.
Active VLFs cannot be truncated.
A VLF whose transactions are completed and which is waiting for a log backup to be performed is
called a "Recoverable VLF".
VLFs whose backup is completed are truncated and become
"Reusable VLFs".
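On recent versions, the VLF layout of a database log can be inspected with a DMF (older versions use DBCC LOGINFO); a hedged sketch:
select file_id, vlf_begin_offset, vlf_size_mb, vlf_active, vlf_status
from sys.dm_db_log_info(db_id());   -- one row per VLF of the current database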
The various layers of SQL Architecture are Client machine, Protocol Layer, Relational
Engine and Storage Engine.
Any client machine has network libraries which convert the given SQL query into
a TDS (Tabular Data Stream) packet.
Whenever we want to transmit the data over a network it should be in the form of
network packets.
Whatever the SQL query that we are executing, it will be converted into a TDS packet
and will be sent over the network to the database server.
Once the protocol layer receives the TDS packets, it unwraps the TDS packet and
extracts the original SQL statement.
SQL Server supports 3 different connection protocols. They are TCP/IP, Named Pipes
and Shared Memory.
When a remote connection is to be made, we use TCP/IP.
When we have 1 server and multiple clients/systems need to be connected to that
server, then we use Named Pipes.
When the SQL Server is installed on a machine and we are trying to connect the SQL
Server from the same machine, then shared memory is used.
The protocol layer converts the query written in English language into the machine
understandable binary language.
The query converted into binary format is sent to the command parser in the relational
engine.
Once the query is executed in the query executor in the relational engine, the result will
be sent in binary format to the protocol layer which converts it into the English
language.
Relational engine has 3 main parts which are “CMD Parser”, “Optimizer”, and “Query
Executor”.
The CMD parser receives the query from protocol layer and checks whether the query is
correct or not with regard to the syntax.
If the syntax is wrong then the CMD Parser returns the query back to the protocol layer
which again sends it back to the user.
The CMD Parser generates a query tree.
Query tree is a bunch of plans with which the SQL query can be executed.
The query tree gives a bunch of plans to the optimizer which selects an optimal plan to
execute the query.
Once the optimal plan is selected by the optimizer, it is sent to the query executor,
which executes the plan; the control then goes to the storage engine, where the data-related
work happens.
The storage engine contains the Access Methods component, which decides whether the query is a
transaction (a data modification) or a select operation.
If it is a select operation, then the control goes to the Buffer Manager directly.
A select operation does not go to the transaction log file.
The role of the buffer manager is to check whether the pages required for the select
query are available in the buffer pool or not.
The buffer pool is the RAM/memory that we have assigned to our SQL Server.
If the required pages are available in the buffer pool, then it gives the response back to
buffer manager, then to access methods, then to query executor and then to the
protocol layer which converts the binary format into the user understandable language
and then to the TDS packet and then to the user in the form of result.
If the corresponding/required pages are not available in the buffer pool, then the buffer
manager goes to data file.
Data file is the place where we have our pages. The data file brings the corresponding
pages to the buffer pool.
If the operation is other than select, then from access methods control goes to
transaction manager which makes the entry into the transaction log, once committed
gives the control back to transaction manager and then to access methods and then to
buffer pool and then so on.
In order to prevent the buffer pool from filling up, we have 2 internal processes running,
named "Check Pointer" (checkpoint) and "Lazy Writer".
Check pointer is the process which runs at regular intervals and identifies all the dirty
pages in the buffer pool and sends them back to the data file.
Dirty page is a page which is updated/modified and is residing in buffer pool.
Lazy writer is a process which doesn’t have any fixed interval and runs randomly and
checks if there is any memory pressure in the buffer pool.
If there is any memory pressure, then it identifies all the inactive pages and sends them
back to data file.
Inactive page is the page which is in the buffer pool for some time and there is no
activity on that page.
Lazy writer uses an algorithm called LRU (Least Recently Used) to find the inactive pages.
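As an illustration, the dirty pages currently sitting in the buffer pool can be counted per database with the buffer descriptor DMV (the is_modified column marks dirty pages):
select db_name(database_id) as database_name,
       count(*)             as dirty_pages
from sys.dm_os_buffer_descriptors
where is_modified = 1
group by database_id;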
Backup: Maintaining a copy of database in different location and it can be used to recover the
database in case of failure.
Backup Architecture:
While recovering/restoring the databases the .mdf and .ldf can be restored to the same
old previous location or can be restored to a new required location also.
Types of Backups:
Full backups
Differential Backups
Log backups or T-log backups or transactional log backups
File and File group backups
Split backups
Mirror backups
Copy only backup
Tail-log backup
Full Backup:
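A full backup takes a copy of the complete database up to that point in time. A hedged T-SQL example, reusing the [BACKUP-DEMO] database name and the C:\sql\backups path used later in these notes:
backup database [BACKUP-DEMO] to disk = 'C:\sql\backups\backupdemo_full.bak' with init;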
Differential Backup:
It takes a backup of all the changes (pages and extents) that happened from the last full
backup till now.
For example, if the database is 500 GB in size and we took a full backup at 10 AM, and
from 10 AM to 11 AM about 10 GB of data was added, then if we take a differential backup
now, its size will be only about 10 GB.
Differential backups are cumulative.
The extension is ".bak".
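A hedged example (same placeholder database and path as above):
backup database [BACKUP-DEMO] to disk = 'C:\sql\backups\backupdemo_diff.bak' with differential;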
Log Backup:
It takes a backup of all the transactions available in the log file, and once the backup is
done it truncates those transactions from the log file.
The extension is ".trn".
Log backups are sequential.
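A hedged example (the database must be in the full or bulk-logged recovery model for log backups):
backup log [BACKUP-DEMO] to disk = 'C:\sql\backups\backupdemo_log.trn';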
Case study (Differential Backup):
SSMS – GUI
SSMS – TSQL
SQL Server agent job
Maintenance Plans
Power shell
Third Party tools
SSMS – GUI:
The backup of the database can be done using the SSMS as follows:
Right click on the required database → Tasks → Back Up → General tab → Select the
backup type (Full/Differential) → Select the backup destination → OK.
Database restoring can be done as follows.
Right click on the Databases folder → Restore Database → Select Device and browse the
backup location → OK
Restore Options:
Restore with No Recovery:
If there are any further backups to be restored, then we need to use restore with no
recovery.
The backup will be restored, but the database goes into the restoring state and we can't
access the database.
Restore with Recovery:
If there are no further backups to be restored, then we can choose the restore with
recovery option.
This will restore the backup, bring the database online and make it available for use.
Restore with Standby:
It will restore the backup and bring the database online, available to users in "Read
only" (standby) mode.
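A hedged T-SQL sketch of the restore options described above (the file names are placeholders):
restore database [BACKUP-DEMO] from disk = 'C:\sql\backups\backupdemo_full.bak' with norecovery;   -- more backups to follow
restore database [BACKUP-DEMO] from disk = 'C:\sql\backups\backupdemo_diff.bak' with norecovery;
restore log [BACKUP-DEMO] from disk = 'C:\sql\backups\backupdemo_log.trn' with recovery;            -- last backup, bring online
-- standby: readable while further log backups can still be applied
-- restore log [BACKUP-DEMO] from disk = 'C:\sql\backups\backupdemo_log.trn' with standby = 'C:\sql\backups\backupdemo_undo.dat';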
Backup Strategy:
Let us assume that our database crashed on Thursday at 4.30 PM; in such a case we employ the
following process to recover our database.
Recovery Process:
Failure Point:
When a database is recovered exactly up to the failure point, we say that PIR (Point-In-Time
Recovery) is achieved.
Recovery Models:
In the simple recovery model, the transaction log file (.ldf) gets truncated automatically at
regular intervals (when a checkpoint occurs).
As the transaction log is truncated automatically, it won't allow us to perform log
backups (the limitation is that log backups can't be performed).
PIR can’t be achieved in Simple recovery model.
In the bulk-logged recovery model, transactions are minimally logged for bulk
operations like BCP, BULK INSERT, SELECT INTO and index rebuilds.
So we can save the transaction log space of the database.
For normal operations it works similarly to the full recovery model.
Case 1: If the database is in bulk-logged and bulk operations have been performed, then we can't
achieve PIR.
Case 2: If the database is in bulk-logged and there are no bulk operations, it behaves like the full
recovery model and we can achieve PIR.
In Full recovery model, all the transactions will be fully logged into the log file and there
won’t be any automatic truncation.
Hence we can take log backups and we can achieve PIR.
It is a best practice to keep all the production databases in ‘Full’ recovery model.
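The recovery model can be checked and changed with a couple of standard statements:
select name, recovery_model_desc from sys.databases;   -- check
alter database [BACKUP-DEMO] set recovery full;        -- or simple / bulk_logged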
Option 1:
SSMS – TSQL:
If we don't mention any option, then the restore will be done with the "restore with
recovery" option.
Configuring/running the backups using SQL Server agent can be done as follows.
SQL Server Agent → Right click on Jobs → New Job → Name → Steps → New →
Step Name → Add the full backup command in the open space → OK → Schedules →
New → Name → Schedule type: Recurring → Recurs every: Daily → Occurs every: 1
min → OK.
We can avoid taking manual backup and can automate the process of taking the
backups using the SQL Server agent job.
File and File Group Backups:
When we take the backup from SSMS, the backup consists of the entire database.
When we want to take the backup of a particular data file or file group then we can do
the same using SSMS as follows.
Right click on the database → Tasks → Back Up → Files and filegroups → Add → Backup
location → OK
Using a SQL query the backup can be taken as follows.
backup database [BACKUP-DEMO] file='BACKUP-DEMO2' to disk='c:\sql\backups\backupdemo_file2.bak'
backup database [BACKUP-DEMO] filegroup='second' to disk='c:\sql\backups\backupdemo_fg2.bak'
Split Backup:
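A split backup writes one backup striped across multiple files, which is useful for large databases; a hedged sketch with placeholder paths (all the parts are needed together to restore):
backup database [BACKUP-DEMO]
to disk = 'C:\sql\backups\backupdemo_part1.bak',
   disk = 'D:\sql\backups\backupdemo_part2.bak';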
Mirror Backups:
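A mirrored backup writes identical copies of the backup to more than one destination (an Enterprise edition feature); a hedged sketch with placeholder paths:
backup database [BACKUP-DEMO]
to disk = 'C:\sql\backups\backupdemo_m1.bak'
mirror to disk = 'D:\sql\backups\backupdemo_m2.bak'
with format;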
Restore Commands:
Verify only
Header only
Filelist only
Label only
Verify Only:
We can verify whether the database backup is in valid restorable format or not.
Restore verifyonly from disk='\\DESKTOP-IA0FMAF\backups\backupdemo1.bak'
Header Only:
This will provide all the metadata of a backup, such as the database name, database creation
date, backup size, backup start and finish time, server name, version, etc.
It also provides the backup set information.
Restore headeronly from disk='\\DESKTOP-IA0FMAF\backups\samantha.bak'
Restore headeronly from disk='\\DESKTOP-IA0FMAF\backups\backupdemo1.bak'
Filelistonly:
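This lists the data and log files contained in the backup (logical name, physical path, type, size); for example:
Restore filelistonly from disk='\\DESKTOP-IA0FMAF\backups\backupdemo1.bak'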
Label only:
This will provide the backup media information, such as the media type (disk, tape, URL,
etc.).
Restore labelonly from disk='\\DESKTOP-IA0FMAF\backups\modeldemo.bak'
Tail log backup:
Case Study:
Recovery process:
With the above backups, we were able to recover the database only up to 4 PM.
But the database crashed at 4.30 PM, so there is about 30 minutes of transaction loss.
To recover the remaining 30 minutes of transactions, we need to use a tail log backup.
Recovery process:
USE [TailLogDemo]
GO
use master
Go
USE MASTER
GO
BACKUP LOG TailLogDemo TO DISK
= 'C:\sql\backups\tailogdemo\TAIL.TRN'
with no_truncate
GO
On Thursday 4.30 PM, A developer raised a ticket to refresh the Dev server with the
production database.
Khan:
He just took a full backup on Thursday and restored into dev server, request completed.
Ticket closed.
Requester is happy.
Friday – everything went well, no issues.
Saturday at 4.30 PM database is crashed.
Harinath (on-call):
Step 1:
USE master
GO
create database CopyOnlyDemo
alter database CopyOnlyDemo set recovery full
C:\sql\backups\Copy_ONly\Copy_ONLY
/**If we try to restore our original full backup, our latest full backup and any transaction log
backup after the differential we get this error. **/
USE master
GO
BACKUP DATABASE CopyOnlyDemo TO DISK='C:\sql\backups\Copy_ONly\
CopyOnlyDemo_full.BAK' WITH INIT
--step 2
The backup and restore history will be stored in the MSDB database.
The tables in which the backup and restore history is stored are backupset and
backupmediafamily.
select
bs.database_name,
bs.type,
bs.is_copy_only,
bs.differential_base_lsn,
bs.backup_finish_date,
bmf.physical_device_name
from backupset bs
join backupmediafamily bmf
on bs.media_set_id=bmf.media_set_id
where database_name='CopyOnlyDemo'
order by backup_finish_date desc
On Thursday 4.30 PM, A developer accidentally deleted some key table from the
database.
Recovery Process:
GO
GO
Backup Options:
select
command,
percent_complete,
estimated_completion_time
from sys.dm_exec_requests
where command like '%backup%'
select
command,
percent_complete,
estimated_completion_time
from sys.dm_exec_requests
where command like '%restore%'
RPO = 0: if the database crashes at 4.30 PM, then we should be able to recover the database up to
4.30 PM (no data loss).
RTO = 2 hours: if the DB crashes at 4.30 PM, then we should be able to bring the database back by
6.30 PM.
HA Architecture:
Log Shipping:
Log shipping is a basic database-level high availability solution which transfers the transaction
log backups from one server to another server and restores them there.
Demo:
Adding a file to log shipping primary database:
--add file
alter database [LsDemo]
add file (name='lsdemo1',filename='c:\sql\backups\lsdemo1.ndf')
1. Take a tail log backup to capture the transactions that happened after the last log backup.
2. Restore all the pending log backups, if there are any, using WITH NORECOVERY.
3. Restore the tail log backup WITH RECOVERY.
4. Work with the application team to point the application connections to the secondary database.
5. Validate all the logins, orphan users, jobs, etc.
Database Mirroring:
Database Mirroring is also a database-level high availability solution which transfers transaction
log records from one server to another server.
Pre Requisites:
Mirroring Modes:
1. Synchronous
i. High safety or High protection
ii. High availability
2. Asynchronous Mirroring
i. High Performance mode