
Microsoft SQL Server
In this guide we detail our recommendations for achieving maximum performance while ensuring ease of management for Microsoft SQL Server databases hosted on Pure Storage arrays. These general recommendations are not model-specific and apply to all generations of FlashArray.

Quick Reference: Best Practice

Maximum degree of parallelism: As a rule of thumb, use the number of cores in a NUMA node. This is subject to any third-party vendor recommendations, of course.

Cost threshold for parallelism: This will vary depending on your workload requirements. If your system has an OLTP-like workload, set this to 15-20, possibly higher, as you seek to minimize parallelism and offer more concurrency. On a DSS/OLAP/data warehouse-like workload, consider leaving it at the default and managing parallelism by tweaking MAXDOP if concurrency is a problem.

Number of tempdb files: Typically we recommend that you use no fewer than 8 tempdb files as a starting point, and increase if you observe PFS/GAM/SGAM contention on your system. This recommendation applies to virtualized systems as well.
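
These instance-level settings map to standard sp_configure options. The sketch below is illustrative, not prescriptive: it assumes an 8-core NUMA node, and the tempdb file name, path, and size are placeholders. Adjust the values to your own hardware and workload.

    -- MAXDOP: number of cores in one NUMA node (8 is an assumed example)
    EXEC sp_configure 'show advanced options', 1;
    RECONFIGURE;
    EXEC sp_configure 'max degree of parallelism', 8;
    EXEC sp_configure 'cost threshold for parallelism', 20;
    RECONFIGURE;

    -- Add tempdb data files until you reach at least 8, keeping them equally sized
    -- (name, path, and size are placeholder assumptions)
    ALTER DATABASE tempdb
    ADD FILE (NAME = N'tempdev2', FILENAME = N'T:\tempdb\tempdev2.ndf', SIZE = 8GB);
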
Number of Data/Log Volumes

Our general recommendation is that you keep all database data and transaction log files on the same volume. We have found no difference in performance from splitting these into separate volumes. However, we do recommend that you split backups onto a separate volume, and tempdb data files/transaction log onto another, separate volume. Keeping backups separate makes it easy to snapshot and copy just the backup volumes, which is a great alternative for scenarios like seeding an AG replica. In the case of tempdb, this would protect against the scenario where database files/log are snapshotted/replicated in a protection group; skipping tempdb is a good idea, since it gets recreated during SQL Server startup anyway.

In the case of virtualized infrastructure with virtualized filesystems (VMDK or VHD/VHDX), you will want to consider potential queuing issues at the virtual disk adapter level. In VMware, for example, multiple ParaVirtual SCSI adapters can help increase performance. We recommend you test with your workload to see whether you benefit from fewer or more virtual SCSI adapters. You will likely achieve higher performance by splitting your database across multiple VMDKs/RDMs. At a minimum, split database and log into different VMDKs/RDMs. A separate datastore for things you will not want replicated (tempdb, swap/page files) is encouraged.

Backup Compression

In our testing, we've shown that compression increases backup throughput by at least 25%, reaching 35% in some scenarios. We recommend you continue to use backup compression, with very few exceptions (one being systems where CPU utilization is adversely affected by compression).
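
If you want compression applied without specifying it on every statement, the instance-wide default can be flipped with sp_configure; a minimal sketch:

    -- Make compression the default for all backups on this instance
    EXEC sp_configure 'backup compression default', 1;
    RECONFIGURE;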

Please note that compressed backups will not be further reduced by the array; high-entropy data rarely reduces further. Assume a 1:1 data reduction ratio for compressed backups. On the flip side, uncompressed backups will take longer to complete, but they will dedupe with the actual database files (since backups contain copies of all the database pages) shortly after the backups complete. Do not expect full deduplication, however. Your mileage will vary depending on how much your data changes.

Backup/Restore IO size

By default, SQL Server uses jumbo-sized IOs (typically 1 MB) to perform backup and restore operations. These can adversely affect IO latency on your array. Whenever possible, use MAXTRANSFERSIZE=65536 on your BACKUP or RESTORE statements. This could make your backups and restores take a tad longer, so test accordingly. If you are leveraging transaction log shipping, you might have no way to tune this, as sqllogship.exe does not offer a means to set this parameter. If log shipping backups/restores are affecting your performance, consider rolling out scripts to perform log shipping manually.

If you are not achieving good throughput from your backup/restore operations after using the MAXTRANSFERSIZE=65536 parameter, consider bumping up the number of buffers available to the process by using the BUFFERCOUNT parameter. In our testing, a 1.5 TB database backed up in 24 minutes, 38 seconds at default settings, and in 25 minutes, 6 seconds with a 64 KB IO block size and BUFFERCOUNT=2048.
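
A sketch combining these parameters (the database name and backup path are placeholders; tune BUFFERCOUNT based on your own testing):

    -- 64 KB transfer size keeps backup IOs small; extra buffers recover throughput
    BACKUP DATABASE [YourDatabase]
    TO DISK = N'X:\Backups\YourDatabase.bak'
    WITH COMPRESSION,
         MAXTRANSFERSIZE = 65536,
         BUFFERCOUNT = 2048,
         STATS = 10;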

Index Rebuilds

If you are using array replication, you might want to consider using the SORT_IN_TEMPDB=ON option for ALTER INDEX ... REBUILD statements, along with a separate, non-replicated volume for the tempdb database and transaction log files. This will help you minimize the amount of data sent across the replication link, as intermediate sort results for the index rebuild never need to be replicated; see the sketch below.
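
A minimal sketch of such a rebuild (dbo.YourTable is a placeholder name):

    -- Intermediate sort results spill to the non-replicated tempdb volume
    ALTER INDEX ALL ON dbo.YourTable
    REBUILD WITH (SORT_IN_TEMPDB = ON);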

Volume Formatting

We recommend you follow Microsoft's recommendation of formatting volumes with an NTFS allocation unit size of 64 KB, unless you need to leverage NTFS compression, which is only available on volumes with a 4 KB allocation unit size. This doesn't mean that SQL Server will use an IO block size equal to the allocation unit size you pick: SQL Server will issue IOs as small as a single sector (512 bytes) or as large as 8 MB for ColumnStore indexes. We also recommend that you use the /L parameter of the FORMAT command for your database volumes (or the -UseLargeFRS parameter of the Format-Volume PowerShell cmdlet), which creates NTFS volumes with large File Record Segments and helps avoid issues like CHECKDB failing with error 665. Please note that the /L parameter is only available on Windows Server 2012 and above.

Virtual Disk Considerations

When using VMware virtual disks (VMDKs), be sure to leverage ParaVirtual SCSI adapters for the best performance on all data/log/tempdb/backup volumes, and also make sure you create the disks as "Eager Zeroed Thick" to prevent performance issues arising from in-flight allocations.

Recommended Links