Ultimate SQL Server and Azure SQL for Data Management and Modernization
By Amit Khandelwal and Sumit Sarabhai
About this ebook
An Encyclopedic Guide to Data Management and Modernization with SQL Server
Key Features
● Detailed exploration of deployments on Linux, containers, and Kubernetes.
● Advanced techniques for securing, optimizing, and ensuring high availability in SQL Server.
● Strategies for SQL Ser
Book preview
Ultimate SQL Server and Azure SQL for Data Management and Modernization - Amit Khandelwal
CHAPTER 1
SQL Server – The Fundamentals
Introduction
This introductory chapter covers the basics of SQL Server, because knowing the core SQL Server database engine gives you a grasp of the fundamentals. The same SQL Server engine runs all the SQL products, such as Azure SQL Database, Azure SQL Managed Instance, SQL Server on Azure Virtual Machines, Azure SQL Edge, and SQL Server on Linux, as shown in Figure 1.1. Hence, understanding the fundamentals of the core SQL Server database engine assists you in your journey to learn the rest of the SQL suite of products as well.
Figure 1.1: SQL Server Database engine runs all the SQL products
Structure
In this chapter, the following topics will be covered:
Understanding SQL Server
SQL Server Internals
Databases and tables
Database data and log files
Indexes
Backup and Restore
Introduction to SQL Server 2022
Understanding SQL Server
Microsoft SQL Server is a relational database management system (RDBMS), and like every RDBMS, it stores data in a tabular format. Each SQL Server instance runs as its own process, which you can see in Task Manager. You can have more than one SQL Server instance on a Windows machine, and each one has its own process. When you install SQL Server, you can choose between a default instance and a named instance. The default instance is addressed by the hostname alone, while a named instance has a name that you specify and is addressed as hostname\InstanceName. For example, in Figure 1.2, we have installed two named instances of SQL Server, mypc\SQLINSTANCE1 and mypc\INSTANCEB, where 'mypc' is the hostname on which these two instances are installed.
Figure 1.2: SQL Server process listing in Windows
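Once connected, you can confirm which instance you have reached with a quick query. The following is a minimal sketch; SERVERPROPERTY('InstanceName') returns NULL when you are connected to a default instance:
SELECT @@SERVERNAME AS server_name,
       SERVERPROPERTY('MachineName') AS host_name,
       SERVERPROPERTY('InstanceName') AS instance_name
GO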
On a Windows-based machine, you can have more than one instance of SQL Server, while on Linux you can install only one instance of SQL Server, which is the default instance. To install more than one instance of SQL Server on Linux, you must use containers. We will learn more about SQL Server on Linux in later chapters.
You can create many databases, logins, credentials, linked servers, endpoints, jobs, and other instance-level objects under each SQL Server instance. Likewise, each database is a collection of schemas, tables, users, and other database objects. The hierarchy is shown in Figure 1.3 for your reference.
Figure 1.3: SQL Server hierarchy
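To explore this hierarchy from T-SQL, you can list a few instance-level and database-level objects with standard catalog views. The following is a minimal sketch; the type filter on sys.server_principals is just an example that limits the output to SQL logins, Windows logins, and Windows groups:
-- Instance-level objects: logins and endpoints
SELECT name, type_desc FROM sys.server_principals WHERE type IN ('S', 'U', 'G')
SELECT name, protocol_desc FROM sys.endpoints
GO
-- Database-level objects: tables and their schemas in the current database
SELECT s.name AS schema_name, t.name AS table_name
FROM sys.tables AS t
JOIN sys.schemas AS s ON s.schema_id = t.schema_id
GO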
Using client tools such as SQL Server Management Studio (SSMS) (https://learn.microsoft.com/en-us/sql/ssms/sql-server-management-studio-ssms?view=sql-server-ver16) or Azure Data Studio (ADS) (https://learn.microsoft.com/en-us/azure-data-studio/what-is-azure-data-studio), you can connect to SQL Server instances running on Windows or Linux to manage and administer them.
A tux icon next to the instance icon in SSMS indicates a Linux-based SQL Server instance, while a plain database icon indicates a Windows-based one. This helps you quickly identify the operating system of the SQL Server instance you are connected to. See Figure 1.4 for an example.
Figure 1.4: Connecting to SQL Server using SSMS
Azure Data Studio is a data management client tool that works on Windows, macOS, or Linux. It shows the OS and edition details of the connected database, as in Figure 1.5. It also has many extensions to support other databases, such as MySQL, PostgreSQL, and others on cloud or on-premises.
Figure 1.5: Connecting to SQL Server using ADS
SSMS is the preferred tool for SQL Server management and administration, but you may use ADS when you need to connect to different kinds of databases, including SQL Server.
To familiarize yourself with SQL Server and these tools, it is recommended to install the free SQL Server 2022 Developer edition on Windows by following the instructions provided here: SQL Server installation guide - SQL Server | Microsoft Learn (https://learn.microsoft.com/en-us/sql/database-engine/install-windows/install-sql-server?view=sql-server-ver16). If you prefer Linux, you can complete the installation of SQL Server 2022 on Linux in less than two minutes on your choice of distribution by following the instructions documented here: Installation guidance for SQL Server on Linux - SQL Server | Microsoft Learn.
SQL Server Internals
Now that you understand SQL Server installation and how to connect to instances using the client tools, let's learn more about the internals of SQL Server, starting with SQL Server databases. In this section, we will cover the following core concepts:
SQL Server Databases - Files and Filegroups
Transaction log architecture
System and User objects
Indexes
Backup and Restore
SQL Server Databases - Files and Filegroups
To understand the internal workings of a SQL Server database, let's start with the basics. When you create a SQL Server database through the UI in SSMS, you will notice that a minimum of two files are created on the operating system: a data file and a log file, as shown in Figure 1.6. The data file(s) store the data, and the log file is used for logging to help with the recovery of the database.
Figure 1.6: SQL Server database properties
Figure 1.7 shows what the database structure looks like when connected to the instance via SSMS.
Figure 1.7: SQL Server database seen in the object explorer in SSMS
You can always create more than one data file and log file for a single database. Though it is recommended not to create more than one log file for a database, you can, and often should, have multiple data files. Every data file is associated with a filegroup. A filegroup is a logical way of grouping data files; filegroups help you spread your data across multiple disks by partitioning your tables and indexes across multiple data files, which assists with query performance and with administrative tasks such as backup and restore. Let's see this in action to understand it further:
In the CREATE DATABASE sample script that follows this list:
We are creating a new database called sampledb, with three data files: sampledb.mdf, sampledb1.ndf, and sampledb2.ndf.
The data files are organized into two filegroups: the built-in primary filegroup and a new filegroup named secondary.
The primary filegroup has the sampledb.mdf datafile and the secondary filegroup has the other two sampledb1.ndf and sampledb2.ndf files.
We are also making the secondary filegroup the default filegroup, so every time an object is created inside the sampledb database, it gets created on the files that are part of the secondary filegroup.
All the data files that belong to the secondary filegroup are on the D drive, a different physical disk on the machine, and the data files that belong to the primary filegroup are on the default C drive, ensuring the database I/O is distributed across multiple disks.
The log file is placed on the E drive and it is not part of any filegroup. The E drive is a separate physical disk. This ensures that the Data I/O is separate from Log I/O.
Every file that you create for a database has a logical name and a physical name. The logical name is what you use in T-SQL commands to refer to the physical file on the operating system. In the following sample script, 'sampledb1' is the logical file name for sampledb1.ndf, which is the physical file on the operating system.
CREATE DATABASE [sampledb]
CONTAINMENT = NONE
ON PRIMARY
( NAME = N'sampledb', FILENAME = N'C:\Program Files\Microsoft SQL Server\MSSQL16.SQLINSTANCE1\MSSQL\DATA\sampledb.mdf', SIZE = 8192KB, FILEGROWTH = 65536KB ),
FILEGROUP [secondary]
( NAME = N'sampledb1', FILENAME = N'D:\sqldata\sampledb1.ndf', SIZE = 8192KB, FILEGROWTH = 65536KB ),
( NAME = N'sampledb2', FILENAME = N'D:\sqldata\sampledb2.ndf', SIZE = 8192KB, FILEGROWTH = 65536KB )
LOG ON
( NAME = N'sampledb_log', FILENAME = N'E:\sqllogs\sampledb_log.ldf', SIZE = 8192KB, FILEGROWTH = 65536KB )
WITH LEDGER = OFF
GO
USE [sampledb]
GO
IF NOT EXISTS (SELECT name FROM sys.filegroups WHERE is_default=1 AND name = N'secondary') ALTER DATABASE [sampledb] MODIFY FILEGROUP [secondary] DEFAULT
GO
This configuration ensures that the system tables and objects are all in the files that belong to the primary filegroup, and the user tables and objects are in the secondary filegroup. Note that the initial size of the files in the filegroup is the same; this ensures that data is written proportionately across the data files that belong to the filegroup. This is a critical aspect of database design: if you use files of unequal size, the data will not be balanced, and some files will be used more than others, throttling I/O throughput by causing disk contention. SQL Server uses a proportional fill strategy across all the files in a filegroup, so when the files are of equal size, they are all used evenly.
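To check that new user objects really do land in the secondary (default) filegroup, you can run a quick test. The following is a minimal sketch that assumes the sampledb database created by the preceding script; the dbo.demo table is a hypothetical example:
USE [sampledb]
GO
-- Create a small test table without specifying a filegroup
CREATE TABLE dbo.demo (id INT IDENTITY(1,1) PRIMARY KEY, payload CHAR(100) NOT NULL)
GO
-- Show which filegroup the table's allocations belong to (expected: secondary)
SELECT o.name AS table_name, fg.name AS filegroup_name
FROM sys.indexes AS i
JOIN sys.objects AS o ON o.object_id = i.object_id
JOIN sys.filegroups AS fg ON fg.data_space_id = i.data_space_id
WHERE o.name = 'demo'
GO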
A data file can be part of only one filegroup and one database. Also, a file or a filegroup can never be shared among databases; they are exclusively part of one database only.
Let us now understand the nuances of data I/O. SQL Server mostly uses random I/O for data files and sequential I/O for log files. Data I/O refers to the process of reading and writing data pages from and to the disk, specifically from and to the data files. A page is the fundamental unit of data storage in SQL Server. The size of a page is 8 KB, and a page holds user data and metadata. Eight physically contiguous pages are called an extent; thus, the size of an extent is 64 KB (8 pages × 8 KB/page), as depicted in Figure 1.8.
Figure 1.8: A pictorial representation of an Extent and page
An extent is used as the basic unit for space management by SQL Server. There are two types of extents:
Mixed Extent: The pages in this extent can belong to different objects; in fact, each page could belong to a different object, so one mixed extent can be shared by up to eight objects.
Uniform Extent: All eight contiguous pages in this extent belong to the same object.
SQL Server uses allocation maps, also referred to as system pages, to record allocation of extents to data objects. There are two types of allocation maps:
Global Allocation Map (GAM): GAM pages record which extents have been allocated. Each GAM covers 64,000 extents. If the bit for an extent is 1, that extent is free; otherwise, it is allocated.
Shared Global Allocation Map (SGAM): SGAM pages record extents that are currently used as mixed extents and have at least one unused page. Each SGAM also covers 64,000 extents. For each extent there is a bit: if it is set to 1, the extent is a mixed extent with at least one free page; if the bit is 0, the extent is either not a mixed extent, or it is a mixed extent with no free pages.
Using the GAM and SGAM allocation maps makes it simple for SQL Server to identify mixed extents that have a free page, or to identify a uniform extent to allocate to an object. While this helps at the extent level, there are also metadata pages called Page Free Space (PFS) pages. After an extent is allocated to an object, the database engine uses the PFS pages to record which pages in the extent are allocated or free. This helps the SQL engine allocate a new page when it needs to insert a new row or index key value.
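If you want to peek at these allocation pages directly, SQL Server 2019 and later expose the sys.dm_db_page_info function. The following is a minimal sketch that assumes the current database and its first data file; in every data file, page 1 is the PFS page, page 2 the GAM, and page 3 the SGAM:
-- Inspect the PFS page (page 1) of file 1 in the current database
SELECT database_id, file_id, page_id, page_type_desc
FROM sys.dm_db_page_info(DB_ID(), 1, 1, 'DETAILED')
GO
-- Change the page_id argument to 2 for the GAM page or 3 for the SGAM page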
Dynamic management views (DMVs) are system-created objects that you can query to monitor, manage, or tune SQL Server performance. A few DMVs that you can use to learn more about pages and extents are:
sys.database_files
sys.dm_db_file_space_usage
Here’s an example showing you how to use them:
select db_name(database_id) as db_name, file_name(file_id) as File_id, total_page_count,
allocated_extent_page_count as pages_in_allocated_extent, unallocated_extent_page_count as pages_in_unallocated_extents,
mixed_extent_page_count as pages_in_mixed_extent, modified_extent_page_count as pages_modified_in_allocation_extent_since_last_full_backup
from sys.dm_db_file_space_usage
Figure 1.9: SQL Server page details
select file_name(file_id) as filename, size as size_in_pages, (size*8)/1024 as size_in_MB from sys.database_files where type = 0
Figure 1.10: SQL Server database Size in MB
To learn more about the pages and extent architecture, it is recommended that you refer to the official Microsoft documentation: Pages and Extents Architecture Guide - SQL Server | Microsoft Learn (https://learn.microsoft.com/en-us/sql/relational-databases/pages-and-extents-architecture-guide?view=sql-server-ver16).
Transaction Log Architecture
The SQL Server log file is an essential component required for databases to be recovered to a consistent state. The transaction log records all the transactions and the modifications made by every transaction. A transaction log is a string of log records, and each log record is identified by a unique log sequence number (LSN). The LSN of the current log record is always greater than that of the previous log record; thus, LSNs form a sequential series. General recommendations for the transaction log are:
You should try to have only one transaction log file per database, unlike data files, where it is recommended to have multiple files.
For better I/O throughput, always create the log files of a database on drives separate from the data files, as the I/O pattern for log files is completely different from that of data files.
Pre-size your log file to the maximum size that you think it will grow to; this avoids unnecessary log file growth during production hours (see the sketch after this list).
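For example, here is a minimal sketch of pre-sizing an existing log file with ALTER DATABASE; it assumes the sampledb database and the sampledb_log logical file name from the earlier script, and the target size and growth increment are only illustrative:
-- Grow the log file once, up front, instead of relying on repeated auto-growth
ALTER DATABASE [sampledb]
MODIFY FILE (NAME = N'sampledb_log', SIZE = 8192MB, FILEGROWTH = 1024MB)
GO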
After reading the following section, you will have a better idea of the reasoning behind these recommendations.
Microsoft SQL Server uses the write-ahead logging (WAL) technique to ensure the durability and consistency of the database after a restart. The WAL protocol guarantees that no data modifications are written to the disk before the associated log records are written to the disk. When you make changes to the database, such as creating a table or an index, or inserting or updating a row, the page that contains the row is fetched from the disk into a part of memory (cache) called the buffer cache (also known as the buffer pool), if it is not already there. Once in the buffer pool, the page is latched (not locked) and modified; this page is now referred to as a dirty page.
The page also carries information about the transaction log record that modified it. The activity of writing from memory to disk is called a flush. For every modification, a transaction log record is inserted into the log cache. First, the log cache is flushed, and then the corresponding dirty pages in the buffer cache are flushed to the disk. The operation that flushes data pages from the buffer cache to the disk is called a checkpoint.
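As a small illustration, you can watch dirty pages accumulate and then disappear after a checkpoint by querying sys.dm_os_buffer_descriptors. The following is a hedged sketch that assumes the sampledb database:
-- Count dirty (modified-in-memory) pages that sampledb currently has in the buffer pool
SELECT COUNT(*) AS dirty_page_count
FROM sys.dm_os_buffer_descriptors
WHERE database_id = DB_ID('sampledb') AND is_modified = 1
GO
-- After a CHECKPOINT is issued in sampledb, this count should drop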
Imagine a scenario where many modifications are being performed on a database, marking multiple pages as dirty. The log records for those modifications have already been flushed to the log file, and the data pages are still being flushed, when the system goes down before the data buffers are flushed completely. When the database restarts and performs recovery, it refers to the log file to identify the active transactions; it then redoes (rolls forward) the transactions that were committed, to ensure their changes are persisted, and undoes (rolls back) the transactions that were not committed. This ensures the database is consistent before and after the restart.
Let us look at the transaction log and its physical architecture in detail. The physical SQL Server log file is divided into several virtual log files (VLFs), as shown in Figure 1.11, which is an example from the sampledb database created with the T-SQL script shared in the previous section. The sizes of the virtual log files are decided dynamically by SQL Server while the log file is created or extended, based on the auto-growth setting. The SQL Server database engine tries to keep the number of VLFs as small as possible. SQL Server uses the VLFs as a mechanism to manage and reuse the physical log file of the database.
Figure 1.11: Transactional Log Logical Architecture
You can also query this using the DMV sys.dm_db_log_info, as shown here:
select db_name(database_id) as db_name, file_name(file_id) as file_name, vlf_size_mb, vlf_active, vlf_status from sys.dm_db_log_info (8)
Figure 1.12: Query SQL Server Transactional log
There are a total of four VLFs, and the size of each VLF is shown in the output. As you can also see, only the first VLF is active; the rest of the VLFs are unused or inactive, as the logical log ends within the first virtual log file.
Now, as modifications happen in the database, you will notice that the logical log starts to grow and the VLFs change state from inactive (unused) to active, as seen in the next output from running the same query:
select db_name(database_id) as db_name, file_name(file_id) as file_name, vlf_size_mb, vlf_active, vlf_status from sys.dm_db_log_info (8)
Figure 1.13: Query SQL Server Transactional log
As you can see, all four VLFs are now active, meaning the entire log file is in use. We will now run a checkpoint, which flushes the log cache and the corresponding dirty pages in the buffer cache and truncates the log so that it can be reused without having to grow. Here is the output of sys.dm_db_log_info; as you can see, VLFs 1, 2, and 3 are now inactive and the last VLF is the only active one.
Figure 1.14: Query SQL Server Transactional log
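For reference, the commands behind this step might look like the following sketch, assuming the sampledb database; the checkpoint can truncate the log here because the database is either in the simple recovery model or, under the full recovery model, has not yet had a full backup:
USE [sampledb]
GO
CHECKPOINT
GO
-- Re-check the VLF states after the checkpoint
SELECT file_name(file_id) AS file_name, vlf_size_mb, vlf_active, vlf_status
FROM sys.dm_db_log_info(DB_ID())
GO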
The log file architecture after the truncation, as shown in Figure 1.15, marks the first three VLFs as inactive. Note that the transaction log file did not grow; it wrapped around, as shown by the green line, and marked the first VLF as inactive, meaning it is ready to be reused. Also, for easy understanding, the figure shows the last checkpoint, the start of the VLF, and the min LSN at the same point, but in a real environment the min LSN could be behind the checkpoint LSN, because that transaction might not have completed when the checkpoint occurred.
Figure 1.15: Transactional log logical architecture
Truncation of the log is the process that marks a VLF as inactive, signalling to SQL Server that the VLF can be reused for logging, because all the changes previously recorded in that VLF have been successfully captured in the database and can be recovered consistently. Log truncation ensures that log files do not eventually fill all the disk space and provides the ability to reuse the virtual log files. Also, remember that a VLF cannot be partially active or partially inactive: if even a single log record in it is still active, the entire VLF is marked as active.
The log truncation occurs automatically after the following events:
When the database is configured under the simple recovery model, log truncation occurs after a checkpoint.
When the database is configured under the full recovery model, truncation occurs after a log backup completes, provided a checkpoint has occurred since the previous backup (see the sketch after this list).
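As an illustration of the full recovery model case, the following is a minimal sketch; the database name comes from the earlier script, and the backup paths are hypothetical:
-- A log backup requires that a full database backup exists first
BACKUP DATABASE [sampledb] TO DISK = N'E:\sqlbackups\sampledb_full.bak'
GO
-- Backing up the log allows the inactive portion of the log to be truncated
BACKUP LOG [sampledb] TO DISK = N'E:\sqlbackups\sampledb_log.trn'
GO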
If you see many VLFs being created and the log file size increasing, it means that truncation of the log file is not happening. To find the reason why log truncation is not happening for a database, you can refer to sys.databases and use a query like:
select name as db_name, log_reuse_wait, log_reuse_wait_desc from sys.databases sd where database_id= db_id(sd.name)
This query should tell you the reason why log reuse is not happening. A few common reasons are: log backup, active transaction, and replication. To see the entire list, refer to the log_reuse_wait_desc column in the official Microsoft documentation: sys.databases (Transact-SQL) - SQL Server | Microsoft Learn (https://learn.microsoft.com/en-us/sql/relational-databases/system-catalog-views/sys-databases-transact-sql?view=sql-server-ver16)
Finally, when talking about data files, we spoke about data pages and extents: data pages are 8 KB in size, and an extent is a collection of 8 pages, so the size of an extent is 64 KB. Similarly, the log file is written in log blocks, and a log block consists of log records. The log records themselves can vary in size, while log blocks are written in integer multiples of 512 bytes, which is the minimum sector size that SQL Server supports; the maximum size of a log block is 60 KB. So, as you can understand, the log flushes are small in size and mostly sequential I/O. To learn more about the transaction log and its architecture, read the official Microsoft documentation available here: SQL Server transaction log architecture and management guide - SQL Server | Microsoft Learn (https://learn.microsoft.com/en-us/sql/relational-databases/sql-server-transaction-log-architecture-and-management-guide?view=sql-server-ver16)
System Objects and User Objects
When you connect to a SQL Server instance, you will notice that there are a few system objects, such as the system databases, tables, views, logins, and other objects that are pre-created. You can list all the system objects contained in the schemas named sys or INFORMATION_SCHEMA using the query:
select * from sys.system_objects
System objects store metadata, that is, data about data, which is essential for SQL Server functionality. Most of these objects are physically stored in the resource database, one of the system databases. All these system objects logically appear in the sys schema of every database, including user databases. These system objects help in the functioning, monitoring, and administration of SQL Server.
Let’s first look at the system databases that are by default created as soon as you install SQL Server on Windows or Linux. There are 5 system databases, and you can view 4 out of the 5 system databases when you connect to the SQL Server instance as shown in Figure 1.16. For Azure SQL Database and elastic pools, you will see only the master and tempdb database. For the Azure SQL Managed instance, Azure SQL Server VMs all the system databases apply.
Figure 1.16: System databases as seen from SSMS
Master Database: This database records instance-level metadata objects, such as login accounts, endpoints, linked servers, the service master key (SMK), and other system configurations, specifically those related to sp_configure.
Model Database: This database is used as the template to create other databases on the instance. Most of the database options, such as the recovery model, and the other contents of the model database are copied to each new database that is created.
Msdb Database: This system database is used by the SQL Server Agent for scheduling, tracking, and monitoring the various jobs and alerts that are configured. It contains system objects that can be used to track backup and restore history, agent job run history, Database Mail, and other features provided by SQL Server.
Tempdb Database: This is a global resource available to all users connected to an instance of SQL Server. This database is used only to store temporary objects, and hence tempdb is recreated every time SQL Server is restarted. All previous data in tempdb is lost after a restart, so you should not use this database to create objects that you intend to persist after a SQL Server restart.
Often, this database is also used internally by the SQL Server engine to create temporary objects for storing intermediate results of a query, creating work files for joins, or storing intermediate sort results or version data. You can track the usage of tempdb using dynamic management views such as:
sys.dm_db_file_space_usage
sys.dm_db_session_space_usage
For sample queries, please refer to: tempdb database - SQL Server | Microsoft Learn (https://learn.microsoft.com/en-us/sql/relational-databases/databases/tempdb-database?view=sql-server-ver16#monitoring-tempdb-use)
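As one hedged example of the kind of query those pages describe, the following sketch summarizes current tempdb space usage by category, converting page counts to MB at 8 KB per page:
SELECT SUM(user_object_reserved_page_count) * 8 / 1024.0 AS user_objects_mb,
       SUM(internal_object_reserved_page_count) * 8 / 1024.0 AS internal_objects_mb,
       SUM(version_store_reserved_page_count) * 8 / 1024.0 AS version_store_mb,
       SUM(unallocated_extent_page_count) * 8 / 1024.0 AS free_space_mb
FROM tempdb.sys.dm_db_file_space_usage
GO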
Since this is a single global database used across the entire SQL Server instance, there are published recommendations and guidelines for capacity planning and optimizing tempdb performance to avoid performance throttling on this database. In fact, there have been multiple tempdb performance improvements across SQL Server releases. Some of the major features introduced were configuring tempdb during SQL Server installation and memory-optimized tempdb metadata, and SQL Server 2022 adds further improvements that achieve better concurrency on the system pages. For details, please see: tempdb database - SQL Server | Microsoft Learn (https://learn.microsoft.com/en-us/sql/relational-databases/databases/tempdb-database?view=sql-server-ver16#monitoring-tempdb-use)
Resource Database: This is a read-only database that contains all the system objects included with SQL Server. You cannot view this database like the others when you connect to the SQL Server instance, but all the system objects are physically persisted in this database, and they logically appear in the sys schema of every database that is created.
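Although the resource database is hidden, you can still confirm its presence and version. A minimal sketch:
-- Version and last-update time of the hidden resource database
SELECT SERVERPROPERTY('ResourceVersion') AS resource_version,
       SERVERPROPERTY('ResourceLastUpdateDateTime') AS resource_last_updated
GO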
You can also verify this in the error log, which shows the message: Starting up database 'mssqlsystemresource'. The physical files of the database for SQL Server on Windows are located at: "