Best Practices: Microsoft SQL Server
These best practices apply to all versions of SQL Server unless otherwise noted.
Quick Reference: Best Practice
|Maximum degree of parallelism||Subject to any third-party vendor recommendations, set the maximum degree of parallelism to no more than the number of logical processors per NUMA node (CPU socket).|
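For example, on a server with 16 logical processors per NUMA node, this could be applied with sp_configure; the core count here is purely illustrative and should be replaced with the value for your topology:

```sql
-- Sketch only: assumes 16 logical processors per NUMA node.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max degree of parallelism', 16;
RECONFIGURE;
```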
|Cost threshold for parallelism||The appropriate setting depends on the SQL Server workload. For OLTP-type workloads, set this to 15-20, and possibly higher, the aim being to minimize parallelism and pursue greater concurrency. For DSS/OLAP/data warehousing workloads, consider leaving this at its default and managing parallelism through MAXDOP if concurrency is a problem.|
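As a sketch, an OLTP-type instance could apply a value from the range above with sp_configure (the value 20 is taken from the 15-20 guidance; tune it for your workload):

```sql
-- Sketch for an OLTP-type workload; 20 is from the recommended 15-20 range.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'cost threshold for parallelism', 20;
RECONFIGURE;
```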
|Number of tempdb files||Use 8 tempdb data files (extensions .mdf, .ndf) as a starting point and increase this if PFS/GAM/SGAM allocation-page contention is observed. Only one tempdb log file (extension .ldf) is necessary. This recommendation applies to SQL Server instances irrespective of whether the underlying server is physical or virtualized and follows the advice outlined in this Microsoft support article.|
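A sketch of checking for allocation-page contention and then adding a tempdb data file follows; the file name, path, and sizes are assumptions to be adapted to your environment, and all tempdb data files should be kept the same size:

```sql
-- Look for PAGELATCH contention on tempdb allocation pages (database_id = 2).
-- PFS/GAM/SGAM contention shows up as waits on resources of the form 2:<file>:<page>.
SELECT session_id, wait_type, wait_duration_ms, resource_description
FROM sys.dm_os_waiting_tasks
WHERE wait_type LIKE 'PAGELATCH%'
  AND resource_description LIKE '2:%';

-- If contention is observed, add data files; path and sizes are assumptions.
ALTER DATABASE tempdb
ADD FILE (NAME = tempdev9,
          FILENAME = 'T:\tempdb\tempdev9.ndf',
          SIZE = 8GB, FILEGROWTH = 512MB);
```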
|Number of Data/Log Volumes||From a pure performance perspective, storing all database files and transaction logs on the same volume will not impact performance. However, from a management perspective, it is recommended that:
Testing conducted by Pure Storage has demonstrated that compression can increase backup throughput by at least 25%, reaching 35% in some scenarios. The continued use of backup compression is recommended, with very few exceptions, one being workloads where the CPU overhead of compression adversely affects performance.
Note that compressed backups will not be further reduced by the array, as high entropy data rarely reduces further. Assume a 1:1 data reduction ratio for compressed backups. Uncompressed backups will take longer to complete, but will de-dupe with the actual database files (since backups contain copies of all the database pages) shortly after the backups complete. Full de-duplication should not be expected. Data reduction rates will vary depending on how much your data changes.
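A minimal compressed-backup statement might look like the following; the database name and backup path are assumptions:

```sql
-- Sketch: compressed backup to a single device; name and path are assumptions.
BACKUP DATABASE MyDatabase
TO DISK = N'B:\backups\MyDatabase.bak'
WITH COMPRESSION, CHECKSUM, STATS = 10;
```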
Restore IO size
By default, SQL Server will use jumbo-sized IOs (typically 1MB) to perform backup and restore operations. These can adversely affect IO latency on your array. Whenever possible, use MAXTRANSFERSIZE=65536 in BACKUP or RESTORE statements. This can make backups and restores take slightly longer, so test accordingly. If Transaction Log Shipping is being used, there might be no way to tune this, as sqllogship.exe does not expose this parameter. If log shipping backups/restores are affecting performance, consider using scripts to perform log shipping manually.
If good throughput from backup/restore operations is not achieved after setting MAXTRANSFERSIZE=65536, consider increasing the number of buffers available to the process by using the BUFFERCOUNT parameter. In testing, a 1.5TB database was backed up in 24 minutes, 38 seconds using default settings, and in 25 minutes, 6 seconds with a 64KB IO block size and BUFFERCOUNT=2048.
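Combining the transfer size and buffer count discussed above, a BACKUP statement might look like this; the database name and path are assumptions:

```sql
-- Sketch: 64 KB transfer size with an increased buffer count, as tested above.
BACKUP DATABASE MyDatabase
TO DISK = N'B:\backups\MyDatabase.bak'
WITH MAXTRANSFERSIZE = 65536, BUFFERCOUNT = 2048, COMPRESSION, STATS = 10;
```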
|Index Rebuilds||When using array-based replication, consider using SORT_IN_TEMPDB=ON for ALTER INDEX...REBUILD statements, along with a separate, non-replicated volume for tempdb database and transaction log files. This helps minimize the amount of data sent across the replication link, as intermediate sort results for the index rebuild should never need to be replicated.|
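A sketch of such a rebuild follows; the table and index names are hypothetical:

```sql
-- Sketch: hold intermediate sort results in tempdb during the rebuild,
-- keeping them off the replicated data volumes. Names are assumptions.
ALTER INDEX IX_Orders_CustomerID ON dbo.Orders
REBUILD WITH (SORT_IN_TEMPDB = ON);
```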
Microsoft's recommendation of formatting volumes with an NTFS allocation unit size of 64 kilobytes should be adhered to, unless there is a requirement to leverage NTFS compression, which mandates 4 kilobyte allocation unit volumes. This does not mean that SQL Server will use an IO block size corresponding to the chosen allocation unit size; the database engine will issue IOs as small as a single sector (512 bytes) or as large as 8 megabytes for columnstore indexes. We also recommend using the /L parameter of the FORMAT command for your database volumes (or the -UseLargeFRS parameter of the Format-Volume PowerShell cmdlet). This creates NTFS volumes with large File Record Segments, which helps prevent NTFS from becoming fragmented by large numbers of small files and helps avoid issues such as CHECKDB failing with error 665. Please note that the /L parameter is only available on Windows Server 2012 and above.
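A sketch of formatting a data volume accordingly with the Format-Volume cmdlet mentioned above; the drive letter and label are assumptions:

```powershell
# Sketch: 64 KB allocation unit size with large File Record Segments.
# Drive letter and label are assumptions; adapt to your environment.
Format-Volume -DriveLetter D -FileSystem NTFS -AllocationUnitSize 65536 `
    -NewFileSystemLabel "SQLData" -UseLargeFRS
```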
|Virtual Disk Considerations||ESXi 5.5: