
Part 4

Locking and Blocking


Objectives
• Lock
  o Lock Granularity
  o Locking Behaviors for Online Maintenance
  o Lock Compatibility
  o Lock Partitioning
  o Deadlocks
• Transactional Properties
  o Atomic
  o Consistent
  o Isolation
  o Transactional anomalies
  o Pessimistic Isolation Levels
  o Optimistic Isolation Levels
  o Durable
What is a lock?

Locking allows concurrent users to access the same data without the risk of conflicting updates causing data integrity issues.
Lock Granularity
• Processes can take out locks at many different
levels of granularity, depending on the nature of
the operation requesting the lock.
• In general, take out a lock at the lowest possible level of
granularity.
• However, if an operation would require acquiring millions of
locks at the lowest level of granularity, this is highly
inefficient, and locking at a higher level is a more suitable
choice.
Lock Granularity
• When SQL Server locks a resource within a table, it
takes out an intent lock on the resource directly above it
in the hierarchy.
For example, locking a RID or KEY takes an intent lock on the page containing the row.
• If the Lock Manager decides that it is more efficient to
lock at a higher level of the hierarchy, it escalates
the lock to a higher level.
• The thresholds SQL Server uses for lock escalation are as follows:
o An operation requires more than 5,000 locks on a
table, or on a partition, if the table is partitioned.
o The number of locks acquired within the instance
causes memory thresholds to be exceeded.
Lock Granularity
Use the LOCK_ESCALATION option of ALTER TABLE
to change lock escalation behavior for specific
tables.
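As a sketch, changing escalation behavior might look like this (the table name dbo.Orders is a placeholder):

```sql
-- LOCK_ESCALATION accepts TABLE (the default), AUTO, or DISABLE.
-- AUTO allows escalation to the partition level on partitioned tables;
-- DISABLE prevents escalation in most cases.
ALTER TABLE dbo.Orders SET (LOCK_ESCALATION = AUTO);
```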
Locking Behaviors for Online Maintenance
You can control the behavior of locking for online index
rebuilds and partition SWITCH operations
( ALTER INDEX … REBUILD ).
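For example, an online rebuild can be told how to behave when it cannot immediately acquire the locks it needs (the index and table names here are hypothetical):

```sql
ALTER INDEX IX_Orders_OrderDate ON dbo.Orders
REBUILD WITH (
    ONLINE = ON (
        WAIT_AT_LOW_PRIORITY (
            MAX_DURATION = 5 MINUTES,  -- wait at low priority for up to 5 minutes
            ABORT_AFTER_WAIT = SELF    -- then abort the rebuild itself (or BLOCKERS / NONE)
        )
    )
);
```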
Locking Behaviors for Online Maintenance
Lock Compatibility
A process can acquire different types of locks
Lock Compatibility
Intent locks improve performance, because they are only
examined at the table level, which negates the need to
examine every row or page before another operation
acquires a lock.
Lock Compatibility
• Lock compatibility controls whether multiple
transactions can acquire locks on the same resource at
the same time.
• If a resource is already locked by another transaction, a
new lock request can be granted only if the mode of the
requested lock is compatible with the mode of the
existing lock.
• If the mode of the requested lock is not compatible with
the existing lock, the transaction requesting the new lock
waits for the existing lock to be released or for the lock
timeout interval to expire.
Lock Compatibility
Lock Partitioning

• It is possible for locks on frequently accessed
resources to become a bottleneck.
• SQL Server automatically applies a feature called
lock partitioning for any instance that has affinity
with more than 16 cores.
• Lock partitioning reduces contention by dividing a
single lock resource into multiple resources.
Deadlocks
A deadlock occurs when two or more tasks permanently
block each other, each task holding a lock on a resource
that the other tasks are trying to lock.

In the sequence described here, neither Process A nor Process B can
continue, which means a deadlock has occurred.
Deadlocks
When the deadlock monitor encounters a deadlock:
• If the processes have different deadlock
priorities, it kills the process with the lowest
priority.
• If they have the same priority, then it kills the
least expensive process in terms of resource
utilization.
• If both processes have the same cost, it picks a
process at random and kills it.
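A session can influence which process is chosen as the victim by lowering its own deadlock priority; a minimal sketch (the table name is hypothetical):

```sql
-- Mark this session as the preferred deadlock victim.
-- Valid values are LOW, NORMAL, HIGH, or an integer from -10 to 10.
SET DEADLOCK_PRIORITY LOW;

BEGIN TRANSACTION;
    UPDATE dbo.Orders SET Status = 'Shipped' WHERE OrderID = 1;
COMMIT TRANSACTION;
```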
Minimizing Deadlocks
When reviewing code, prior to code release, you should
look to ensure that the following guidelines are being
followed:
• Optimistic isolation levels are being used where
appropriate.
• There is no user interaction within transactions.
• Transactions are as short as possible and contained
within a single batch.
• All programmable objects access objects in the same
order.
Understanding Transactions
Three types of transaction:
• Autocommit : the default behavior.
• Explicit : transactions start with a BEGIN
TRANSACTION statement and end with either a COMMIT
statement or a ROLLBACK statement.
• Implicit : transactions are started automatically,
and then committed manually, using a COMMIT
statement.
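A minimal sketch of an explicit transaction with error handling (table and column names are hypothetical):

```sql
BEGIN TRY
    BEGIN TRANSACTION;
        UPDATE dbo.Accounts SET Balance -= 100 WHERE AccountID = 1;
        UPDATE dbo.Accounts SET Balance += 100 WHERE AccountID = 2;
    COMMIT TRANSACTION;  -- both updates succeed together
END TRY
BEGIN CATCH
    IF @@TRANCOUNT > 0 ROLLBACK TRANSACTION;  -- or neither is applied
    THROW;
END CATCH;
```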
Transactional Properties
Atomic : all actions within a transaction must either
commit together or roll back together.

A savepoint is a marker within a transaction. In the
event of a rollback to the savepoint, everything after
the savepoint is undone, while the work before the
savepoint remains part of the open transaction and can
still be either committed or rolled back.
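A sketch of savepoint behavior (the table name is hypothetical):

```sql
BEGIN TRANSACTION;
    INSERT INTO dbo.AuditLog (Message) VALUES ('step 1');
    SAVE TRANSACTION AfterStep1;      -- mark a savepoint
    INSERT INTO dbo.AuditLog (Message) VALUES ('step 2');
    ROLLBACK TRANSACTION AfterStep1;  -- undoes only 'step 2'
COMMIT TRANSACTION;                   -- 'step 1' is committed
```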
Transactional Properties

The Consistent property means that the transaction
moves the database from one consistent state to
another: at the end of the transaction, all data must
conform to all data rules, which are enforced with
constraints, data types, and so on.
Transactional Properties

• Isolation refers to a concurrent transaction's
ability to see data modifications made by another
transaction before they are committed.
• Isolating transactions avoids transactional
anomalies and is enforced by either acquiring
locks or maintaining multiple versions of rows.
• Each transaction runs with a defined isolation
level.
Transactional Anomalies

Transactional anomalies can cause queries to return
unpredictable results.

There are three types of transactional anomalies: dirty reads,
nonrepeatable reads, and phantom reads.
Transactional Anomalies
A dirty read occurs when a transaction reads uncommitted
data that is subsequently rolled back, so the data it read
never logically existed in the database.

Transaction1                    Transaction2
Inserts row1 into Table1
                                Reads row1 from Table1
Rolls back

This anomaly can occur if shared locks are not acquired for
reads, since there is no shared lock to conflict with the
exclusive lock taken out by Transaction1.
Transactional Anomalies

A nonrepeatable read occurs when a transaction
reads the same row twice but receives different
results each time.

Transaction1                    Transaction2
Reads row1 from Table1
                                Updates row1 in Table1
                                Commits
Reads row1 from Table1

This anomaly can occur if Transaction1 takes out shared locks
but does not hold them for the duration of the transaction.
Transactional Anomalies

A phantom read occurs when a transaction reads a
range of rows twice but receives a different number
of rows the second time it reads the range.

Transaction1                    Transaction2
Reads all rows from Table1
                                Inserts ten rows into Table1
                                Commits
Reads all rows from Table1

This anomaly can occur when Transaction1 does not acquire a
key-range lock and hold it for the duration of the transaction.
Pessimistic Isolation Levels

Read Uncommitted
• The least restrictive isolation level.
• Acquires locks for write operations but does not
acquire any locks for read operations.
• Read operations do not block other readers or
writers.
=> All transactional anomalies are possible.
Pessimistic Isolation Levels

Read Committed
• The default isolation level.
• Acquires shared locks for read operations as
well as locks for write operations.
• Shared locks are only held during the read phase
of a specific row, and the lock is released as soon
as the record has been read.
=> Protection against dirty reads, but nonrepeatable
reads and phantom reads are still possible.
Pessimistic Isolation Levels

Repeatable Read
Acquires shared locks on all rows that it touches
and then holds these locks until the end of the
transaction.
=> Dirty reads and nonrepeatable reads are not
possible, although phantom reads can still occur.
Pessimistic Isolation Levels

Serializable
• The most restrictive isolation level.
• Works not only by acquiring locks for write
operations but also by acquiring key-range locks
for read operations and then holding them for the
duration of the transaction.
=> No transactional anomalies are possible,
including phantom reads.
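The pessimistic levels above are selected per session; for example, a sketch that blocks phantom inserts into a queried range (the table name is hypothetical):

```sql
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;

BEGIN TRANSACTION;
    -- Key-range locks taken here are held until COMMIT, so no other
    -- transaction can insert rows into this range in the meantime.
    SELECT COUNT(*) FROM dbo.Orders WHERE OrderDate = '20240101';
COMMIT TRANSACTION;
```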
Optimistic Isolation Levels
• Read operations do not acquire shared locks; instead,
SQL Server uses a technique called row versioning.
• A new copy of a row is maintained in TempDB for
uncommitted transactions every time the row is updated.
=> This can dramatically reduce contention on highly
concurrent systems.
=> The trade-off is that you need to scale TempDB
appropriately, in terms of both size and throughput capacity.
You need to turn on optimistic isolation levels at the database
level.
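Enabling them at the database level might look like this sketch (note that switching READ_COMMITTED_SNAPSHOT requires the database to have no other active connections):

```sql
-- Makes SNAPSHOT isolation available to sessions that request it.
ALTER DATABASE CURRENT SET ALLOW_SNAPSHOT_ISOLATION ON;

-- Replaces the behavior of READ COMMITTED with its optimistic equivalent.
ALTER DATABASE CURRENT SET READ_COMMITTED_SNAPSHOT ON;
```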
Optimistic Isolation Levels
Snapshot Isolation
• Uses optimistic concurrency for both read and write
operations.
• Assigns each transaction a transaction sequence
number at the point the transaction begins.
• When the transaction reads a row, it retrieves the
row version from TempDB whose sequence number is
closest to, and lower than, the transaction sequence
number.
=> Dirty reads, nonrepeatable reads, and phantom reads
are not possible.
Optimistic Isolation Levels

Read Committed Snapshot
• Uses pessimistic concurrency for write operations and
optimistic concurrency for read operations.
• For read operations, it uses the version of the row that is
current at the beginning of each statement within the
transaction => it achieves the same level of isolation as the
pessimistic Read Committed isolation level.
When you turn on Read Committed Snapshot, it replaces
the functionality of Read Committed.
Transactional Properties
Durable :
• For a transaction to be durable, the change must be
written to disk.
• SQL Server achieves this by using a process called
write-ahead logging (WAL).
Delayed durability : this feature works by delaying the
flush of the log cache to disk until one of the following
events occurs:
o The log cache becomes full and automatically flushes to disk.
o A fully durable transaction in the same database commits.
o The sp_flush_log system stored procedure is run against the
database.
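A sketch of delayed durability in use (the table name is hypothetical):

```sql
-- Allow individual transactions to opt in to delayed durability.
ALTER DATABASE CURRENT SET DELAYED_DURABILITY = ALLOWED;

BEGIN TRANSACTION;
    INSERT INTO dbo.EventLog (Message) VALUES ('low-value event');
COMMIT TRANSACTION WITH (DELAYED_DURABILITY = ON);  -- returns before the log flush

-- Force any pending log records to disk, e.g. before a planned failover.
EXEC sys.sp_flush_log;
```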
Transaction with In-Memory OLTP
• Memory-optimized tables do not use locks; concurrency
is instead maintained through row versioning.
• Pessimistic concurrency is therefore no longer an option.

Isolation Levels
Transaction with In-Memory OLTP
Read Committed isolation level is supported against
memory-optimized tables only for autocommit
transactions.
It is also not possible to use Read Committed in the
ATOMIC block of a natively compiled stored procedure.

When the option below is set to ON, access to a memory-optimized table
under a lower isolation level is automatically elevated to
SNAPSHOT isolation:
ALTER DATABASE CURRENT SET
MEMORY_OPTIMIZED_ELEVATE_TO_SNAPSHOT = ON;
Transaction with In-Memory OLTP
Read Committed Snapshot is supported for memory-
optimized tables, but only when you are using autocommit
transactions.
This isolation level is not supported when the transaction
accesses disk-based tables.

Snapshot
Snapshot isolation is only supported against memory-
optimized tables in interpreted SQL if it is
specified as a table hint, as opposed to at the transaction
level.
It is fully supported in the ATOMIC block of natively
compiled stored procedures.
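With interpreted SQL, the hint is applied per table; a sketch (the table name is hypothetical):

```sql
-- SNAPSHOT specified as a table hint on the memory-optimized table.
SELECT c.CustomerName
FROM dbo.Customers AS c WITH (SNAPSHOT)
WHERE c.CustomerID = 1;
```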
Transaction with In-Memory OLTP
Repeatable Read
The guarantee provided by REPEATABLE READ
isolation is that, at commit time, no concurrent
transaction has updated any of the rows read by
this transaction.
Serializable
It additionally guarantees that no rows have been inserted
within the range of rows being accessed by queries
within the transaction.
Transaction with In-Memory OLTP
Cross-Container Transactions
When a transaction accesses both memory-
optimized tables and disk-based tables, you may
need to specify a combination of isolation levels
and query hints.
Retry Logic
Whether you are using interpreted SQL or a natively compiled stored
procedure, always ensure that you use retry logic when you run
transactions against memory-optimized tables.
Because of the optimistic concurrency model, the conflict detection
mechanism rolls transactions back, as opposed to managing
concurrency with locking.
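A sketch of such retry logic in interpreted T-SQL, assuming a hypothetical natively compiled procedure dbo.usp_UpdateBasket and the documented In-Memory OLTP conflict error numbers:

```sql
DECLARE @retries int = 0;

WHILE @retries < 5
BEGIN
    BEGIN TRY
        EXEC dbo.usp_UpdateBasket @BasketID = 1;
        BREAK;  -- success: leave the retry loop
    END TRY
    BEGIN CATCH
        -- 41301 dependency failure, 41302 update conflict,
        -- 41305 repeatable read validation, 41325 serializable validation
        IF ERROR_NUMBER() IN (41301, 41302, 41305, 41325)
            SET @retries += 1;   -- transient conflict: try again
        ELSE
            THROW;               -- non-retryable error
    END CATCH;
END;
```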
Observing Transactions, Locks, and Deadlocks
