Understanding SQL Server Memory Internals
Buffer Cache: This is the pool of memory pages into which data pages are read. An
important indicator of buffer cache performance is the Buffer Cache Hit Ratio
performance counter. It indicates the percentage of data pages found in the buffer
cache as opposed to read from disk. A value of 95% indicates that pages were found in
memory 95% of the time; the other 5% required physical disk access. A value consistently
below 90% indicates that more physical memory is needed on the server. (A query for
reading this counter appears after this list.)
Procedure Cache: This is the pool of memory pages containing the execution plans for
all Transact-SQL statements currently executing in the instance. An important indicator
of the performance of the procedure cache is the Procedure Cache Hit Ratio performance
counter. It indicates the percentage of execution plan pages found in memory as opposed
to disk.
Log Caches: This is the pool of memory used to read and write log pages. Each log has a
set of cache pages. The log caches are managed separately from the buffer cache to
reduce the synchronization between log and data buffers.
Connection Context: Each connection has a set of data structures that record the current
state of the connection. These data structures hold items such as parameter values for
stored procedures, cursor positioning information, and tables currently being referenced.
System-level Data Structures: These are data structures that hold data global to the
instance, such as database descriptors and the lock table.
The buffer cache, procedure cache, and log caches are the only memory elements whose size is
controlled by SQL Server.
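To put a number on the Buffer Cache Hit Ratio discussed above, it can be read from inside SQL Server via sys.dm_os_performance_counters; the ratio counter must be divided by its companion base counter. A minimal sketch:

    -- Buffer Cache Hit Ratio, computed from the raw counter and its base
    SELECT 100.0 * a.cntr_value / b.cntr_value AS buffer_cache_hit_ratio_pct
    FROM sys.dm_os_performance_counters a
    JOIN sys.dm_os_performance_counters b
        ON a.[object_name] = b.[object_name]
    WHERE a.counter_name = 'Buffer cache hit ratio'
      AND b.counter_name = 'Buffer cache hit ratio base'
      AND a.[object_name] LIKE '%Buffer Manager%';

The LIKE filter is used because named instances expose the counter object as MSSQL$<instance>:Buffer Manager rather than SQLServer:Buffer Manager.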
A very important aspect to watch is whether SQL Server is using the maximum memory
available on the system (assuming the system is dedicated to SQL Server). A system with fully
utilized memory may be prone to performance bottlenecks when competition for resources
increases. Prepared Transact-SQL statements, for example, may suffer when the procedure cache
is unable to expand because the buffer cache is fully utilized.
This post takes you through how to monitor SQL Server memory usage. Most of the clients I
have worked with had a common question: "What is my SQL Server memory usage?" There are
a lot of ways to monitor it. Also, if you are a senior database professional, whether an
administrator, developer, or architect, you might be asked: "How do you know SQL Server's
memory usage?" One can design the best scalable and optimized database system only when
one understands the RDBMS architecture. Here we'll be discussing memory-related parameters
and how SQL Server uses memory. Before going through monitoring SQL Server memory
usage, we should understand the following.
Memory Types:
Page File: When available memory can't serve incoming requests, the system starts swapping
pages from memory to the page file on disk. The current page file size can be found via
sysdm.cpl > Advanced > Performance Settings > Advanced.
Cached Memory: Holds data or program code that has been fetched into memory during the
current session but is not currently in use.
Free memory: It represents RAM that does not contain any data or program code and is free for
use immediately.
Working Set: The amount of memory currently in use by a process. Peak Working Set is the
highest value recorded for the current instance of that process. A Working Set consistently
below Min Server Memory and Max Server Memory means SQL Server is configured with more
memory than it actually uses.
Private Working Set: Amount of memory that is dedicated to that process and will not be given
up for other programs to use.
Shareable Working Set: The portion of the working set that can be surrendered if physical
RAM begins to run low.
Commit Size: The total amount of virtual memory that a program has touched (committed) in
the current session. The limit is the maximum virtual memory, that is, physical RAM + page
file + kernel cache.
Hard Faults / Page Faults: Pages fetched from the page file on the hard disk instead of from
physical memory. A consistently high number of hard faults per second indicates memory
pressure.
Memory: Available Bytes: Indicates how many bytes of memory are currently available for
use by processes.
Memory: Pages/sec: Indicates the number of pages that either were retrieved from disk due to
hard page faults or written to disk to free space in the working set.
Buffer Cache Hit Ratio: Low (< 90%) means more requests are getting data from the physical
disk instead of the data cache.
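Several of the OS-level numbers above (available memory and the system memory state) are also visible from inside SQL Server through sys.dm_os_sys_memory. A minimal sketch:

    -- Physical memory visible to the OS plus its memory-state signal
    SELECT total_physical_memory_kb / 1024      AS total_physical_mb,
           available_physical_memory_kb / 1024  AS available_physical_mb,
           system_memory_state_desc
    FROM sys.dm_os_sys_memory;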
Minimum Server Memory: The minimum amount of memory SQL Server holds on to once its
memory usage has grown to that level; memory is not released back below this value. The
default value is 0.
Maximum Server Memory: The maximum amount of memory SQL Server can use. Make sure
you have proper statistics and a future plan before making any changes. The default value is
2,147,483,647 MB (about 2 petabytes). To determine this value, first work out the memory
required for the OS and for any other applications/services: Maximum Server Memory = Total
Physical Memory - (Memory Required for OS + Memory Required for Other Applications).
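Both settings are changed with sp_configure. A sketch; the 4096 MB and 55296 MB values are purely illustrative (a 64 GB host reserving roughly 10 GB for the OS and other services), not recommendations:

    EXEC sp_configure 'show advanced options', 1;
    RECONFIGURE;
    EXEC sp_configure 'min server memory (MB)', 4096;   -- illustrative value
    EXEC sp_configure 'max server memory (MB)', 55296;  -- illustrative: 54 GB
    RECONFIGURE;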
Total Server Memory: Shows how much memory SQL Server is currently using. The primary
use of SQL Server's memory is the buffer pool, but some memory is also used for storing query
plans and keeping track of user process information.
Target Server Memory: Shows how much memory SQL Server attempts to acquire. If you
haven't configured a max server memory value for SQL Server, the target amount will be about
5 MB less than the total available system memory.
Total Server Memory is almost the same as Target Server Memory: good health.
Total Server Memory is much smaller than Target Server Memory: there is memory pressure,
or Max Server Memory is set too low.
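The two counters can be compared directly from sys.dm_os_performance_counters; per the rule above, values staying close together indicate good health:

    -- Total vs. Target Server Memory (both counters are reported in KB)
    SELECT counter_name, cntr_value / 1024 AS memory_mb
    FROM sys.dm_os_performance_counters
    WHERE counter_name IN ('Total Server Memory (KB)', 'Target Server Memory (KB)');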
Page Life Expectancy (PLE): The number of seconds a page stays in the buffer cache. We
usually calculate a benchmark based on the memory allocated to the SQL Server instance; for
4 GB of RAM, PLE is expected to be around 300 seconds (5 minutes).
This is only an estimated health benchmark for PLE; one can follow one's own formula based
on environment and experience.
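PLE can be read from the same counters DMV; the overall value lives under the Buffer Manager object (per-NUMA-node values are reported separately under Buffer Node):

    -- Page Life Expectancy in seconds for the instance as a whole
    SELECT cntr_value AS page_life_expectancy_sec
    FROM sys.dm_os_performance_counters
    WHERE counter_name = 'Page life expectancy'
      AND [object_name] LIKE '%Buffer Manager%';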
The size of the virtual address space (VAS) is determined largely by the CPU architecture:
a 32-bit process can have at most 2^32 = 4 GB of VAS, while a 64-bit process can theoretically
address 2^64 bytes (about 16 exabytes), although Windows caps user-mode VAS at 8 TB.
32 Bit:
Initially reserves memory for MemToLeave (MTL), also known as the VAS
Reservation (384 MB by default). This MTL value can be modified using the -g
startup parameter.
Then allocates memory for the Buffer Pool = User VAS - MTL (Reserved VAS) =
Available VAS
64 Bit:
Ex: a Windows server has 64 GB of physical memory, SQL Server Max Server
Memory = 54 GB, and the OS and other apps are using 6 GB; the remaining
64 - (54 + 6) = 4 GB is the memory available for non-buffer-pool allocations,
such as:
Connections with Network Packet Size higher than 8KB (8192 bytes)
Memory allocated by Linked Server OLEDB Providers and third party DLLs
loaded in SQL Server process
XML Documents
As you may know from how memory allocation works on the Windows operating
system, SQL Server occupies as much memory as it can, based on its
configuration and the available memory.
When Lock Pages in Memory is enabled for the SQL Server service account,
SQL Server can lock its pages and does not have to release memory when
Windows forces a trim.
Q. Can you technically explain how memory allocation and Lock Pages in Memory work?
Ans:
The Windows OS runs every process in its own virtual memory, known as the Virtual Address
Space (VAS), and this VAS is divided into kernel (system) and user (application) modes.
When locked pages are in use, SQL Server memory allocations are made using calls to the
AllocateUserPhysicalPages() function of the AWE API.
Q. Is Lock Pages in Memory (LPIM) enabled by default for SQL Server
2008 R2 / 2012 / 2014?
Ans:
No! As per Microsoft documentation it is not; we need to enable it. Consider enabling it:
When using old Windows servers, e.g., SQL Server 2005 on Windows Server 2003
When using Windows 2008 R2 / 2012 / 2014 and above and still seeing hard trims
happening to SQL Server process memory
Note: If you are using the latest Windows systems and have configured the SQL Server memory
settings per the business requirement, you need not worry about Lock Pages in Memory.
To grant the privilege, open the Local Group Policy / Local Security Policy editor and navigate
to Computer Configuration > Windows Settings > Security Settings > Local Policies > User
Rights Assignment, then add the SQL Server service account to the Lock pages in memory
policy.
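On SQL Server 2016 SP1 and later you can confirm from T-SQL whether locked pages are actually in effect; on older versions, look for the startup message "Using locked pages in the memory manager" in the error log. A sketch:

    -- Returns CONVENTIONAL, LOCK_PAGES (LPIM in effect), or LARGE_PAGES
    SELECT sql_memory_model_desc
    FROM sys.dm_os_sys_info;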
Q. On what basis would you determine the Max Server Memory setting for
SQL Server?
Ans:
Max Server Memory is the maximum amount of memory reserved for SQL Server. There are a
few things that need to be considered:
Applications sharing the host with SQL Server, and the average memory required
for those apps
Remember that on a 64-bit machine we need to account for the non-buffer-pool memory region
while leaving memory for the OS. This is just a baseline that we followed while setting up new
environments; you are strictly supposed to monitor memory usage during peak hours and adjust
the settings.
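As a rough sketch of the baseline above, the 10240 MB reserved for the OS and other applications below is an assumed placeholder, not a recommendation; substitute numbers from your own environment:

    -- Candidate Max Server Memory = total physical memory - OS/app reservation
    SELECT (total_physical_memory_kb / 1024) - 10240 AS suggested_max_server_memory_mb
    FROM sys.dm_os_sys_memory;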
Q. Here is a scenario: while monitoring memory usage on one of our SQL Server
instances, we saw, surprisingly, that SQL Server was using more memory than the
Max Server Memory configuration. Any idea why this happens?
Ans:
The buffer pool (BPool) is controlled by the Max Server Memory setting, but
non-BPool memory is not.
Also, Lock Pages in Memory can keep BPool memory resident, but non-BPool
pages can still be paged to disk.
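To see the full process-level picture, including allocations outside the buffer pool, sys.dm_os_process_memory is the usual starting point. A minimal sketch:

    -- Memory in use by the whole SQL Server process, not just the BPool
    SELECT physical_memory_in_use_kb / 1024   AS physical_memory_in_use_mb,
           locked_page_allocations_kb / 1024  AS locked_pages_mb,
           memory_utilization_percentage
    FROM sys.dm_os_process_memory;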
When SQL Server creates or grows a file, Windows zero-initializes the newly allocated space,
and the operation that triggered the allocation must wait until the zeroing process completes.
If Instant File Initialization is enabled for SQL Server, it skips the zeroing process for data files
and reduces the wait time.
Q. What are the database activities that benefit from Instant File
Initialization?
Ans:
Creating a database, adding data files to an existing database, growing data files (manually or
through autogrowth), and restoring a database.
Note: Remember, growing a log file still uses the zeroing process.
Q. How do you check whether Instant File Initialization is enabled for a SQL Server
instance?
Ans:
Enabling trace flags 3004 and 3605 writes zeroing-process information to the SQL Server
error log. If Instant File Initialization is enabled for SQL Server, you will not see zeroing
messages for data files, but you will still see zeroing messages for log files, something like
"Zeroing completed on ..._log.ldf".
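A test sketch of that technique; IFI_Test is a hypothetical throwaway database, and sp_readerrorlog is an undocumented but widely used procedure for searching the error log:

    DBCC TRACEON(3004, 3605, -1);  -- log zeroing activity to the error log
    GO
    CREATE DATABASE IFI_Test;      -- hypothetical test database
    GO
    EXEC sp_readerrorlog 0, 1, 'Zeroing';  -- data-file zeroing messages => IFI is OFF
    GO
    DROP DATABASE IFI_Test;
    DBCC TRACEOFF(3004, 3605, -1);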
To enable Instant File Initialization, grant the SQL Server service account the Perform Volume
Maintenance Tasks right:
Run lusrmgr.msc on the server to find the appropriate group name for each
instance of SQL Server.
Under Security Settings on the left of the Local Security Policy editor, go to Local
Policies and, under that, to User Rights Assignment.
In the Perform Volume Maintenance Tasks policy, add the SQL Server group created
by SQL setup (standalone) or the cluster domain group (for clusters).
Note: If your SQL Server service account is already part of the Windows Local Administrators
group, you need not add it to the Perform Volume Maintenance Tasks policy. Also, IFI does not
work if Transparent Data Encryption (TDE) is enabled.
I have clearly seen the performance gain from enabling Instant File Initialization. As an example,
after enabling Instant File Initialization the same operation took 2 hr 8 min.
Q. Is Instant File Initialization enabled for SQL Server by default?
Ans:
No! By default IFI is not enabled for SQL Server, as there is a slight security risk.
As you may know, when data is deleted from disk by the operating system, it really is not
physically deleted; the space holding the data is just marked as being available. At some point,
the older data will be overwritten with new data.
When Instant File Initialization is not enabled: data is zeroed out before
anything is written to a page.
When Instant File Initialization is enabled: there is a slight security risk.
When a new database is created, its new pages are not zeroed out, so there
is a chance that newly allocated pages contain previously deleted data, and
one could read that data using a recovery tool.
Q. Can you tell me the difference between Logical Reads and Physical Reads?
Ans:
Logical Reads:
A logical read indicates a data page read from the data cache (memory). It is
very possible that a logical read accesses the same data pages many times,
so the logical read count may be higher than the actual number of pages in
a table.
Usually the best way to reduce logical reads is to apply the correct index or
to rewrite the query.
Physical Reads:
A physical read indicates the total number of data pages read from disk.
When the pages required by a query are not found in cache memory, SQL Server
picks them up from the hard disk and keeps them in cache memory for further
use. This is known as a physical read.
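The simplest way to watch logical versus physical reads for a specific query is SET STATISTICS IO; sys.objects below is only a stand-in for whatever query you want to measure:

    SET STATISTICS IO ON;
    SELECT * FROM sys.objects;  -- stand-in query
    SET STATISTICS IO OFF;
    -- The Messages tab then reports: logical reads N, physical reads M, read-ahead reads ...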
Q. When I was checking buffer usage, one of the stored procedures was using
a large number of logical reads. Does this impact performance? What are the
most likely reasons for a large number of logical reads?
Ans:
Yes! A large number of logical reads leads to memory pressure. There are a few common
reasons that cause more logical reads:
Unused Indexes: Indexes should be built on the basis of the data-retrieval
process. Indexes defined on columns that are not used in queries lead to a
huge number of logical reads.
Wide Indexes: Indexing a large number of columns leads to high logical reads.
Poor Fill Factor / Page Density: When a low fill factor is specified, a large
number of pages is needed to satisfy even a simple query, which leads to high
logical reads.
Poor Query Design: Queries that lead to index scans, use a hash join where a
merge join is possible, fail to use indexes, etc., cause more logical reads.
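To find the statements responsible for the most logical reads instance-wide, the cached-plan statistics in sys.dm_exec_query_stats can be sorted by total_logical_reads. A sketch:

    -- Top 10 cached statements by cumulative logical reads
    SELECT TOP (10)
           qs.total_logical_reads,
           qs.execution_count,
           qs.total_logical_reads / qs.execution_count AS avg_logical_reads,
           SUBSTRING(st.text, (qs.statement_start_offset / 2) + 1,
                     ((CASE qs.statement_end_offset
                         WHEN -1 THEN DATALENGTH(st.text)
                         ELSE qs.statement_end_offset
                       END - qs.statement_start_offset) / 2) + 1) AS statement_text
    FROM sys.dm_exec_query_stats qs
    CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
    ORDER BY qs.total_logical_reads DESC;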