BP 2015 Microsoft SQL Server
BEST PRACTICES
Contents
1. Executive Summary
8. Conclusion
9. References
    Supporting SQL Server on Nutanix
    Nutanix Networking
About Nutanix
List of Figures
Microsoft SQL Server on Nutanix
1. Executive Summary
This document makes recommendations for designing, optimizing, and scaling Microsoft
SQL Server deployments on the Nutanix Cloud Platform. Historically, it was challenging
to virtualize SQL Server because of the high cost of traditional virtualization stacks and
the impact that a SAN-based architecture can have on performance. Businesses and
their IT departments have struggled to balance cost, operational simplicity, and
consistent, predictable performance.
Nutanix removes many of these challenges and makes virtualizing a business-critical
application such as SQL Server much easier. Nutanix storage is a software-defined
solution that provides all the features one typically expects in an enterprise SAN, without
a SAN's physical limitations and bottlenecks. SQL Server particularly benefits from the
following storage features:
• Localized I/O and the use of flash for index and key database files to reduce operation
latency
• A highly distributed approach that can handle both random and sequential workloads
• Ability to add new nodes and scale the infrastructure without system downtime or
performance impact
• Nutanix data protection and disaster recovery workflows that simplify backup
operations and business continuity processes
Nutanix lets you run both Microsoft SQL Server and other VM workloads simultaneously
on the same platform. Density for SQL Server deployments is driven by the database's
CPU and storage requirements. To take full advantage of the system's performance
and capabilities, validated testing shows that it's better to scale out and increase the
number of SQL Server VMs on the Nutanix platform than to scale up individual SQL
Server instances. The Nutanix platform handles SQL Server's demanding throughput and
transaction requirements with localized I/O, server-attached flash, and distributed data
protection capabilities.
This best practice guide addresses key items for each role, enabling a successful design,
implementation, and transition to operation. Most of the recommendations apply equally
to all currently supported versions of Microsoft SQL Server. We call out differences
between versions as needed.
This document covers the following topics:
• Overview of the Nutanix solution
• Benefits of running Microsoft SQL Server on Nutanix
• Overview of high-level SQL Server best practices for Nutanix
• Design and configuration considerations when architecting a SQL Server solution on
Nutanix
• Virtualization optimizations for VMware ESXi, Microsoft Hyper-V, and Nutanix AHV
Table: Document Version History
Version Number | Published      | Notes
1.0            | April 2015     | Original publication.
2.0            | September 2016 | Updated for SQL Server 2016 and added discussion of Nutanix Volumes.
2.1            | May 2017       | Updated for all-flash platform availability.
3.0            | August 2019    | Major updates throughout.
3.1            | November 2019  | Updated the Understanding RAM Configuration section.
3.2            | June 2020      | Updated the Nutanix overview and the Use of Jumbo Frames section.
3.3            | October 2020   | Updated the Best Practices Checklist section.
3.4            | December 2020  | Updated the SQL Server Storage Best Practice, High Availability Design, and SQL Server Maintenance Scripts sections.
For more information on Advanced Micro Devices–based clusters, see Advanced Micro
Devices NUMA Topology for AMD EPYC Naples Family Processors.
VMware ESXi, Microsoft Hyper-V, and Nutanix AHV are all NUMA compatible, so they
can present a virtual NUMA (vNUMA) topology to the VM for NUMA-aware applications
such as SQL Server to consume. For SQL Server VMs on AHV that don’t fit within a
NUMA boundary (number of cores + memory), enable vNUMA to give SQL Server
visibility into the underlying NUMA topology.
ESXi uses a NUMA scheduler to dynamically balance processor load and memory
locality. For more details, see the Virtual Machine CPU Configuration documentation.
AHV hosts support virtual NUMA (vNUMA) on VMs. For more details, see the section
Enabling vNUMA on Virtual Machines in the AHV Administration Guide.
The size of the CVM varies depending on the deployment; however, based on
experience in the field, for database deployments such as storage-heavy Microsoft SQL
Server databases, we recommend 16 vCPU (if the server hardware contains CPUs
with 16 or more cores) and 64 GB of memory. If you must customize your deployment,
contact your Nutanix partner and Nutanix Support to ensure that the configuration is
supported.
During initial cluster deployment, Nutanix Foundation automatically configures the vCPU
and memory assigned to the CVM. For more information on how Nutanix Foundation
allocates vCPUs and memory, see Controller VM (CVM) Field Specifications.
vCPUs are assigned to the CVM and not necessarily consumed. If the CVM has a
low-end workload, it uses the CPU cycles it needs. If the workload becomes more
demanding, the CVM uses more CPU cycles. Think of the number of CPUs you
assign as the maximum number of CPUs the CVM can use, not the number of CPUs
immediately or constantly consumed.
When you size any Nutanix cluster, ensure that the cluster has enough resources
(especially failover capacity) to handle situations where a node fails or goes down for
maintenance. Sufficient resources are even more important when you’re sizing a solution
to host a critical workload like Microsoft SQL Server.
For Nutanix AOS 5.11 and later versions, AHV clusters can include a new type of node
that doesn’t run a local CVM. These compute-only nodes are for very specific use cases,
so engage with your Nutanix partner or Nutanix account manager to see if compute-only
nodes are advantageous for your workload. Because standard nodes (running CVMs)
provide storage from AOS storage, compute-only nodes can’t benefit from data locality.
Note: Compute-only nodes are available when you use Nutanix AHV or VMware ESXi (as long as the HCI
nodes run at least AOS 6.7) as the hypervisor.
operating systems can run its own programs, as the hypervisor presents to it the
host hardware’s processor, memory, and resources.
vSocket
Each VM running on the hypervisor has a vSocket construct. vCPUs virtually plug
in to a vSocket, which helps present the vNUMA topology to the guest OS. You can
have many vSockets per guest VM or just one.
vCPU
When determining how many vCPUs to assign to a SQL Server, always size
assuming 1 vCPU = 1 physical core. For example, if the physical host for a
production environment contains a single 10-core CPU package, we recommend
assigning 10 vCPUs to the SQL Server VM. vCPU oversubscription amounts depend on
the workload and environment. For test and development workloads, use ratios of two
or three vCPUs per physical core.
Hyperthreading Technology
Hyperthreading is Intel’s proprietary simultaneous multithreading (SMT) implementation
used to improve computing parallelization. Hyperthreading’s main function is to deliver
two distinct instruction pipelines on a single physical core. From the perspective
of the hypervisor running on the physical host, enabling hyperthreading makes twice as
many logical processors available.
In theory, it’s beneficial to use hyperthreading all the time so that you use all available
processor cycles. However, in practice, using this technology requires additional
consideration. Two key issues are worth considering when assuming that hyperthreading
simply doubles the amount of CPU on a system:
• How the hypervisor schedules threads to the pCPU
• The length of time a unit of work stays on a processor without interruption
Because a processor is still a single entity, appropriately scheduling processes from
guest VMs is critical. For example, scheduling two threads from the same process on
the same physical core results in each thread taking turns stopping the other to change
the core’s architectural state, with a negative impact on performance. SQL Server
complicates this constraint because it schedules its own threads; depending on the
combination of SQL Server and Windows OS versions in use, SQL Server might not
distinguish between physical and logical (hyperthreaded) cores or know how to handle
the distinction properly. Be mindful of the hypervisor CPU scheduling function, which is abstracted from
the guest OS altogether.
Because of this complexity, you must size for physical cores and not hyperthreaded
cores when sizing SQL Server in a virtual environment. For mission-critical deployments,
start with no vCPU oversubscription. SQL Server deployments—especially OLTP
workloads—benefit from this approach.
CPU Hot-Add
The effect of hot-adding vCPUs is particularly relevant to database VMs, which can be
NUMA-wide. Because we generally size databases using vCPUs capable of handling
peak workloads with an additional buffer, hot-add might not be an urgent use case.
However, if you need to hot-add vCPUs beyond a NUMA boundary, what you lose in
vNUMA benefits depends on the workload and on NUMA optimization algorithms specific
to the database vendor and the version of VMware vSphere you're using. To determine
whether the performance tradeoff is warranted in your specific circumstances, VMware
recommends determining the NUMA optimization benefits based on your own workload
before setting the hot-add vCPU function.
Consider NUMA boundaries before hot-adding vCPUs. Hot-adding vCPUs can disable
vNUMA in VMware vSphere. For more information, see VMware KB 2040375.
As a rule, having CPU ready time over 5 percent for a VM is cause for concern. Always
ensure that the size of the physical CPU and the guest VM is in line with SQL Server
requirements to avoid this problem.
Memory Configuration
The following sections cover the different memory configurations for SQL Server on
Nutanix deployments.
Memory Reservations
Because memory is the most important resource for SQL Server, don’t oversubscribe
memory for SQL Server databases (or for any business-critical applications). Starting in
AOS 6.0, Nutanix AHV supports memory overcommit.
For more information, see Memory Overcommit.
VMs are still created as fixed size by default, so you must enable memory overcommit
on a VM to use this feature. When you enable memory overcommit on a VM, AHV
dynamically limits physical memory usage for the VM based on recent virtual memory
usage. Additional memory allocated in the VM guest OS initially swaps to disk until the
host updates the physical memory limit for the VM.
Because overcommitted VMs give up part of their memory during idle periods, that
memory might not be available immediately when a VM suddenly requires a large
amount of memory again. In such situations, the VM might run out of memory before
AHV can react to the increased requirement, causing the guest OS to terminate applications. Ensure that
you have sufficient swap space or paging file space to prevent unexpected application
shutdown. Swap space available inside the VM provides a temporary memory area for
overflow while memory is reassigned.
We recommend sizing the swap space or paging file space to always exceed the amount
of allocated memory for the VM. For more information, see Microsoft’s troubleshooting
article Determine the Appropriate Page File Size.
Sudden changes in memory demand can put pressure on the SQL Server VM, which
rapidly tries to increase memory usage beyond what's available in the reserved buffer
maintained by the hypervisor. This scenario is likely to occur when multiple VMs request
memory increases simultaneously.
When you deploy SQL Server or any critical application in an environment where
oversubscription might occur, Nutanix recommends reserving 100 percent of all SQL
Server VM memory to maintain consistent performance. This reservation prevents the
SQL Server VMs from swapping or paging virtual memory to disk.
Replace vm-name with the name of the VM and new_memory_size with the memory
size.
It’s best to fit a VM within the memory footprint of one NUMA node. This arrangement
provides optimal memory performance across the entire address space. Also, be aware
of the total memory allocated to all VMs so that you don’t oversubscribe memory on the
host and potentially start swapping to disk.
Once you increase the memory on VMs where hot-plug is enabled, remember to adjust
the SQL Server maximum memory setting (outlined in the section SQL Server Memory
Configuration in this document). Adjusting this value doesn’t require you to restart the
SQL Server service.
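For example, you can adjust the maximum memory setting with sp_configure. This is a sketch: the 114688 MB (112 GB) value is an assumed example for a 128 GB VM, leaving headroom for the Windows OS.

```sql
-- Enable advanced options so max server memory is configurable.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;

-- Raise the cap after hot-adding memory; the value is in MB and must
-- leave enough memory for the Windows OS (112 GB on a 128 GB VM here).
EXEC sp_configure 'max server memory (MB)', 114688;
RECONFIGURE;
```

The change takes effect immediately without restarting the SQL Server service (except in the large pages scenario noted above).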
Network Configuration
The following sections describe network configuration factors.
Networking Overview
Along with compute resources such as CPU and memory, the network is also a critical
piece of any successful SQL Server deployment. Without a robust, fully redundant
network, access to databases and supporting applications can be slow, intermittent, or
even unavailable.
Each hypervisor has port groups that the CVMs and VMs connect to, a virtual switch,
and uplinks that connect to the physical network switches. Although each hypervisor
implements virtual networking constructs slightly differently, the basic concepts remain
the same.
Storage Configuration
The following sections describe storage configuration options.
Nutanix Volumes
Nutanix Volumes is a native scale-out block storage solution that provides direct
block-level access through the iSCSI protocol to AOS Storage. It enables enterprise
applications running on external servers to benefit from the hyperconverged Nutanix
architecture. Nutanix Volumes also allows guest VMs on any hypervisor to support
clustered file systems.
Nutanix Volumes supports SCSI-3 persistent reservations for the shared storage used
by SQL Server failover cluster instances (FCIs). Nutanix manages storage allocation
and assignment for Nutanix Volumes through a construct called a volume group (VG).
A VG is a collection of virtual disks (vDisks) presented as available block devices using
the iSCSI protocol. All Nutanix CVMs in a cluster participate in presenting these VGs,
creating a highly redundant and scalable block storage solution. When you create a
Windows failover cluster for SQL Server, you need Nutanix Volumes because Windows
failover clusters require shared storage, but when you deploy SQL Always On availability
groups, you don’t need Nutanix Volumes because Always On availability groups don’t
require shared storage.
Nutanix Volumes seamlessly manages failure events and automatically balances iSCSI
clients to take advantage of all cluster resources. iSCSI redirection controls target path
management for vDisk load balancing and path resiliency. Instead of configuring host
iSCSI client sessions to connect directly to all CVMs in the Nutanix cluster, Nutanix
Volumes uses the Data Services IP address. This design allows administrators to add
Nutanix nodes to the cluster in a nondisruptive manner without needing to update every
iSCSI initiator.
For more information, see the Nutanix Volumes best practice guide.
You might find instances where volume group load balancing (VGLB) can improve
performance, especially with large workloads where the size of the data set is greater
than the size of the underlying host or where throughput is higher than what a single
node can provide, as with OLAP and DSS workloads. VGLB is also a good solution when
physical systems with a small number of drives host VMs and volume groups with a high
number of vDisks. This feature isn’t enabled on VGs by default, so you must enable it on
a per-VM basis.
Enabling VGLB removes the bottleneck of the local Nutanix CVM so that all CVMs
can participate in presenting primary read and write requests. While this approach can
provide great performance benefits overall, using VGLB adds a small amount of I/O
latency because some data locality is lost when the vDisks shard across the CVMs on
the cluster. In the right use case, this cost generally is justified, because VGLB gives the
system a far greater amount of available bandwidth and CPU resources to serve I/O.
• Use one drive for database log files and another drive for the TempDB log file, with
each drive on its own virtual storage controller (for example, controller 3). Read more
in the section titled SQL Server Log Files Best Practices.
• Separate drives for user databases and system databases.
without worrying about space requirements for other files (within certain limits—filling the
disk to 100 percent capacity still causes issues for TempDB). The more separation you
can build into the solution, the easier it is to correlate potential disk performance issues
with specific database files.
In a VMware environment, Nutanix recommends using the VMware paravirtual SCSI
(PVSCSI) controller for all drives. It's standard practice to load the PVSCSI driver during
image creation so that the template's boot disk can also use PVSCSI. For more
information, see Configuring Disks to Use VMware Paravirtual SCSI (PVSCSI) Controllers.
Workloads with intensive I/O requirements might require a greater queue depth with the
PVSCSI adapter. For more information on how to increase the PVSCSI device queue
depth, see VMware KB article 2053145. Test and validate the solution before changing
the environment.
In a Hyper-V environment, if you are using Generation 1 VMs, use SCSI disks for all
drives except the OS. For Generation 2 VMs, use SCSI disks for all drives, including the
OS.
For versions before SQL Server 2016, apply trace flag 1117 at SQL Server startup:
1. In SQL Server Configuration Manager, in the left menu, click SQL Server Services.
2. Right-click the SQL Server instance and click Properties.
3. In the SQL Server (MSSQLSERVER) Properties dialog, click the Startup Parameters
tab.
4. In the Startup Parameters tab, in the Specify a startup parameter box, enter -T1117
and click Add.
5. Click OK.
To avoid unnecessary complexity, add files to databases only in the event of database
page I/O latch contention. To look for contention, monitor the PAGEIOLATCH_XX values
and spread the data across more files as needed. Several other factors, such as memory
pressure, can also cause PAGEIOLATCH_XX latency, so investigate the situation
thoroughly before adding files.
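One way to monitor for this contention is to query the sys.dm_os_wait_stats dynamic management view. This is a sketch; note that the statistics are cumulative since the last service restart or manual clear, so compare snapshots over time rather than reading a single result in isolation.

```sql
-- Cumulative page I/O latch waits; high average waits on PAGEIOLATCH_XX
-- can indicate data file contention (or other causes such as memory pressure).
SELECT wait_type,
       waiting_tasks_count,
       wait_time_ms,
       wait_time_ms / NULLIF(waiting_tasks_count, 0) AS avg_wait_ms
FROM sys.dm_os_wait_stats
WHERE wait_type LIKE 'PAGEIOLATCH%'
ORDER BY wait_time_ms DESC;
```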
Unlike transaction log files, SQL Server accesses data files in parallel, and access
patterns can be highly random. Spreading large databases across several vDisks can
improve performance. Create multiple data files before writing data to the database so
that SQL Server’s proportional fill algorithm spreads data evenly across all data files from
the beginning.
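The approach above can be sketched as follows. The database name, drive letters, and sizes are hypothetical examples; the point is that the data files are created up front, equally sized, and spread across separate vDisks.

```sql
-- Four equally sized data files on four vDisks so that the proportional
-- fill algorithm distributes writes evenly from the start.
CREATE DATABASE ntnxdb
ON PRIMARY
    (NAME = ntnxdb_data1, FILENAME = 'E:\SQLData1\ntnxdb_data1.mdf',
     SIZE = 16GB, FILEGROWTH = 512MB),
    (NAME = ntnxdb_data2, FILENAME = 'F:\SQLData2\ntnxdb_data2.ndf',
     SIZE = 16GB, FILEGROWTH = 512MB),
    (NAME = ntnxdb_data3, FILENAME = 'G:\SQLData3\ntnxdb_data3.ndf',
     SIZE = 16GB, FILEGROWTH = 512MB),
    (NAME = ntnxdb_data4, FILENAME = 'H:\SQLData4\ntnxdb_data4.ndf',
     SIZE = 16GB, FILEGROWTH = 512MB)
LOG ON
    -- A single log file is sufficient because log writes are sequential.
    (NAME = ntnxdb_log, FILENAME = 'L:\SQLLog\ntnxdb_log.ldf',
     SIZE = 8GB, FILEGROWTH = 8GB);
```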
Only use volume managers (dynamic disks, storage spaces) with the OS as a last resort
after careful testing. We recommend using multiple data files across multiple disks as a
best practice. Also, don’t use autoshrink on database files, as the resulting fragmentation
can reduce performance.
Note: Nutanix recommendations and our tested configuration rely on using multiple vDisks to host a
database’s data files. Nutanix recommends against using any in-guest OS volume manager. Using a volume
manager might negatively impact performance or cause issues with third-party applications (for example,
backup tools).
It isn’t uncommon for SQL Server to host several small databases, none of which are
particularly I/O intensive. In this case, a database can have one or two data files, which
can easily fit on one or two disks and not spread over multiple disks. The goal of disk
and file design is to keep the solution as simple as possible while meeting business and
performance requirements.
If the databases have an unusually high number of activities that influence the size of the
log file (inserts, updates, deletes, changes in the recovery model), it can be beneficial to
perform regular backups more often than every 24 hours. Increasing backup frequency
on Nutanix doesn’t require a change in the existing backup strategy or design.
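For example, transaction log backups (hypothetical database name and path shown) can run on a more frequent schedule than the full database backup:

```sql
-- Frequent log backups truncate the log and keep log file growth in check
-- for high-activity databases in the full recovery model.
BACKUP LOG [ntnxdb]
TO DISK = N'R:\SQLBACK1\ntnxdb-log.trn';
```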
Physical transaction logs are made up of several smaller units called virtual log
files (VLFs). VLFs play a critical part in performing database backups and recovery
operations. The number of VLFs is determined at log creation and during file growth. If
the VLFs are too small, database recovery and other operations can be very slow. If the
VLFs are too large, backing up and clearing logs can be slow.
Optimal VLF size is between 256 MB and 512 MB. Even if the logs reach 2 TB in size, it’s
best to limit them to no more than 10,000 VLFs. To preserve this limit, preset the log file
to either 4 GB or 8 GB and grow it by increments of the same starting amount to reach
the preferred log size. If you need a 128 GB log file, initially create an 8 GB log file and
grow it 15 times in 8 GB increments. Configure the autogrow value to the same base (4
GB or 8 GB) to maintain the same VLF size. In SQL Server Management Studio, issue
the DBCC LOGINFO ('dbName') command to return the number of VLFs in a database.
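As a sketch (the database and file names are hypothetical), growing a 128 GB log from an 8 GB starting size in 8 GB steps looks like the following, with a VLF count check afterward:

```sql
-- Grow the 8 GB log in 8 GB increments until it reaches the 128 GB target.
ALTER DATABASE ntnxdb MODIFY FILE (NAME = ntnxdb_log, SIZE = 16GB);
ALTER DATABASE ntnxdb MODIFY FILE (NAME = ntnxdb_log, SIZE = 24GB);
-- ...continue in 8 GB increments...
ALTER DATABASE ntnxdb MODIFY FILE (NAME = ntnxdb_log, SIZE = 128GB);

-- Each row returned is one VLF; keep the total under 10,000.
DBCC LOGINFO ('ntnxdb');
```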
Remember that, unlike data files, SQL Server log files are written to in a sequential or “fill
and spill” manner. Therefore, using multiple log files spread across several vDisks isn’t
advantageous and just makes the solution more complex than necessary. Because SQL
Server only accesses one log file at any time, you don’t need more than one.
of the database size. During a proof-of-concept deployment, monitor TempDB size and
use the high-water mark as the starting point for the production sizing. If you need more
TempDB space in the future, grow all data files equally. You can rely on autogrowth but
autogrowth can grow the data files to a size beyond the free space available on the disk,
so we don’t recommend relying solely on autogrowth.
Nutanix recommends starting with two TempDB data drives and one TempDB log drive.
This arrangement suffices for most SQL Server workloads. Use trace flag 1118 to avoid
mixed extent sizes. When using trace flag 1118, SQL Server uses only full extents. SQL
Server 2016 automatically enables the behavior of trace flag 1118 for both TempDB and
user databases. No downside exists when using this setting for SQL Server versions
prior to 2016.
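A configuration along these lines can be sketched for a four-core VM (the file paths and 4 GB sizing are example values): one TempDB data file per core, all the same size, spread across the TempDB data drives.

```sql
-- Resize the default TempDB data file and add three more of equal size,
-- placed on the two dedicated TempDB data drives (T: and U: here).
ALTER DATABASE tempdb MODIFY FILE
    (NAME = tempdev, SIZE = 4GB, FILEGROWTH = 512MB);
ALTER DATABASE tempdb ADD FILE
    (NAME = tempdev2, FILENAME = 'T:\TempDB\tempdev2.ndf',
     SIZE = 4GB, FILEGROWTH = 512MB);
ALTER DATABASE tempdb ADD FILE
    (NAME = tempdev3, FILENAME = 'U:\TempDB\tempdev3.ndf',
     SIZE = 4GB, FILEGROWTH = 512MB);
ALTER DATABASE tempdb ADD FILE
    (NAME = tempdev4, FILENAME = 'U:\TempDB\tempdev4.ndf',
     SIZE = 4GB, FILEGROWTH = 512MB);
```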
You must manually reconfigure SQL Server maximum memory any time the memory
assigned to the VM changes. You must increase this parameter to allow SQL Server
to take advantage of memory added to the VM, and you must reduce this parameter to
leave enough memory for the Windows OS if memory is removed from the VM. When
using large pages with the lock pages in memory local security policy, restart the SQL
Server service for the new value to take effect; this process incurs downtime.
• If the query uses too many threads, the overhead of managing the threads can reduce
the performance instead of improving it. To control the number of threads, use the
maximum degree of parallelism or MAXDOP setting. You can define the MAXDOP
setting on the instance, database, and query level. Setting the MAXDOP value to one
disables parallel plans altogether. Setting the MAXDOP value to zero lets SQL Server
use all available cores if a query runs in parallel.
• If small queries run in parallel, they consume more resources than if they run serially.
To control the greater expense of running parallel queries, tune the cost threshold for
parallelism or CTFP setting. This setting adjusts the minimal cost for a query to be run
in parallel. The cost is a custom SQL Server measure—it’s not a measure of time.
When they see low performance on their database and the CXPACKET wait time is
high, many database administrators decide to set the MAXDOP value to 1 to disable
parallelism altogether. Because the database might have large jobs that can benefit
from processing on multiple CPUs, Nutanix doesn’t recommend this approach. Instead,
we recommend increasing the CTFP value from 5 to approximately 50 to make sure
that only large queries run in parallel. Set the MAXDOP according to Microsoft’s
recommendation for the number of cores in the VM’s vNUMA node (no more than 8).
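These instance-level settings can be applied with sp_configure. The values shown follow the recommendations in this section (CTFP raised to 50; a MAXDOP of 8 is an example for a VM whose vNUMA node has eight or more cores).

```sql
-- Both settings are advanced options.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;

-- Only queries with an estimated cost above 50 run in parallel.
EXEC sp_configure 'cost threshold for parallelism', 50;

-- Cap parallelism at the vNUMA node size (no more than 8).
EXEC sp_configure 'max degree of parallelism', 8;
RECONFIGURE;
```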
wider application compatibility. Always On availability groups are the modern and much
preferred high availability solution for SQL Server. If you can't use Always On availability
groups or log shipping, SQL Server Failover Cluster Instances (FCIs) are fully supported
across all hypervisors if you use in-guest iSCSI to access a shared volume (as required
for this type of solution).
Like other clustered applications, Windows failover clustering uses SCSI-3 persistent
reservations to coordinate access to storage by enabling multiple instances of the
application cluster to write to a shared drive. Before the AOS 5.17 release, administrators
had to configure in-guest iSCSI connections to volumes to use persistent reservations.
With AOS 5.17 and later versions, Nutanix makes it significantly easier to set up
application-level clustering on AHV with volumes. The new functionality allows you to
create a volume group and connect it to a VM using Nutanix Prism, bypassing the need
to configure iSCSI. The system can then access the volume group from within the guest
as a SCSI device with access to persistent reservations.
Always On availability groups can also serve as a built-in disaster recovery solution if you
replicate an Always On availability group to a second or third datacenter. Ensure that you
have sufficient bandwidth to keep up with the replication requirements and that latencies
are acceptable.
When deploying FCIs or Always On availability groups, use hypervisor antiaffinity rules
to enforce placement of the nodes on different physical hosts. This placement provides
resilience in the case of an unplanned host failure. When designing your hypervisor
clusters, allow for at least n + 1 availability. With this allowance, VMs can continue with
adequate resources even if the system loses one host.
If using VMware high availability, review the restart priority of VMs. You can increase the
SQL Server priority to high for production databases. The best priority setting depends
on what other services are running in your cluster.
For larger databases, using multiple drives to host those backup files can greatly improve
performance. An effective (and simpler) way to manage multiple drives is to use mount
points and assign each drive to a folder rather than to a letter.
In the following example script, the SQLBACK1, SQLBACK2, SQLBACK3, and
SQLBACK4 folders serve as mount points for four independent disks. Nutanix
recommends using multiple backup files to benefit from the Nutanix scale-out
architecture.
BACKUP DATABASE [ntnxdb] TO
  DISK = N'R:\SQLBACK1\ntnxdb-tpcc-SQL2014-1000wh-01.bak',
  DISK = N'R:\SQLBACK2\ntnxdb-tpcc-SQL2014-1000wh-02.bak',
  DISK = N'R:\SQLBACK3\ntnxdb-tpcc-SQL2014-1000wh-03.bak',
  DISK = N'R:\SQLBACK4\ntnxdb-tpcc-SQL2014-1000wh-04.bak'
• For very write-intensive databases, distribute database files over four or more vDisks.
• Size all data files in a file group equally.
• Enable autogrow in 256 MB or 512 MB increments to start.
• Use trace flag 1117 for equal file autogrowth. With SQL Server 2016 and later
versions, use the ALTER DATABASE AUTOGROW_ALL_FILES command instead.
• Don’t shrink databases.
SQL Server log files:
• Base total size on the high-water mark before the next log truncation period.
• Optimal VLF size is between 256 MB and 512 MB.
• Don't allow log files to exceed 10,000 VLFs.
• Configure initial log file size to 4 GB or 8 GB and iterate by the initial amount to reach
the preferred size.
• DBCC LOGINFO ('dbName'): The number of rows returned is the number of VLFs.
• One log per database (including TempDB) is sufficient because SQL Server can only
use a single log file at any time.
TempDB:
• Use multiple TempDB data files, all the same size.
• Use autogrow on TempDB files with caution to avoid situations where files use 100
percent of the disk that hosts the log files.
• For eight or fewer cores, the number of TempDB data files is equal to the number of
cores.
• For more than eight cores, start with eight TempDB data files and monitor for
performance.
• Initially size TempDB to be 1 to 10 percent of database size.
• Use trace flag 1118 to avoid mixed extent sizes (full extents only). SQL Server 2016
automatically enables the behavior of trace flag 1118.
• Join the SQL Server to the domain and use Windows authentication.
• Use Windows cluster-aware updating for Always On availability group instances.
• Test patches and roll them out in a staggered manner during maintenance windows.
SQL Server compression:
• Carefully evaluate using SQL Server compression for all databases.
• Look into using SQL Server compression either at the row or page level, depending on
which one provides the best savings based on the workload. Be aware of licensing.
Nutanix:
• Enable inline compression (delay set as zero) at the container level, as this setting
benefits sequential I/O performance. If your environment has high CPU usage
(consistently over 80 percent on the CVM) or if the environment is highly sensitive to
latency, we recommend setting compression with a 60-minute delay to avoid using too
many CPU cycles.
• Don’t use container-level deduplication.
• Only use Nutanix erasure coding (EC-X) after carefully examining the workload
pattern, because EC-X is best suited to datasets that aren’t written or rewritten often.
Reach out to the Nutanix account team for further guidance.
• Use the fewest possible containers (one is fine).
• Take database growth into account when calculating sizing estimates.
Backup and disaster recovery:
• Establish RTOs and RPOs for each database.
• Validate that the backup system can meet RTOs and RPOs.
• Size SQL Server VMs so that they can be restored according to RTOs.
• Use Always On availability groups as a built-in disaster recovery solution.
SQL Server maintenance:
• Use SQL Server maintenance scripts.
8. Conclusion
Microsoft SQL Server deployments are crucial to organizations and are used in
everything from departmental databases to business-critical workloads, including
enterprise resource planning, customer relationship management, and business
intelligence. At the same time, enterprises are virtualizing SQL Server to shrink their
datacenter footprint, control costs, and accelerate provisioning. The Nutanix platform
provides the ability to:
• Consolidate all types of SQL Server databases and VMs onto a single converged
platform with excellent performance.
• Start small and scale databases as your needs grow.
• Eliminate planned downtime and protect against unplanned issues to deliver
continuous availability of critical databases.
• Reduce operational complexity through simple, consumer-grade management with
complete insight into application and infrastructure components and performance.
• Keep pace with rapidly growing business needs, without large up-front investments or
disruptive forklift upgrades.
9. References
Nutanix Networking
1. Nutanix AHV Networking Best Practice Guide
2. VMware vSphere Networking Best Practice Guide
3. Hyper-V Windows Server 2016 Networking Best Practice Guide
About Nutanix
Nutanix offers a single platform to run all your apps and data across multiple clouds
while simplifying operations and reducing complexity. Trusted by companies worldwide,
Nutanix powers hybrid multicloud environments efficiently and cost effectively. This
enables companies to focus on successful business outcomes and new innovations.
Learn more at Nutanix.com.
List of Figures
Figure 1: Logical View of Intel NUMA Topology