Pure1 Support Portal

Windows Server: Best Practices

This article is part of a series; it is recommended that you review all articles:

  1. Windows Server: Best Practices (You are here)
  2. Windows Configuration: Adding LUNs to the Host & Configuring MPIO

Version 2.5 for Purity version 4.x+

Overview

This guide covers the recommended best practices for provisioning and using a Pure Storage FlashArray running the Purity Operating Environment (POE). Although the FlashArray has been designed to be simple and efficient, a number of best practices should be followed, including host multipathing, SAN zoning configurations and policies, and file system recommendations, to ensure a highly available, enterprise-class implementation.

The target audience for this document includes storage administrators, server administrators, and consulting data center architects. A working knowledge of servers, server operating systems, storage, and networking is recommended, but is not a prerequisite to read this document.

Operating System Guidelines

All attached hosts should have a minimum of two paths, connected to different Pure Storage FlashArray controller nodes, to ensure host to storage availability.

Supported Versions

The following versions of Windows Server have been officially tested. Full details on the Windows Server Catalog can be viewed here.

  • Windows Server 2016
    • Note: The Windows Server Catalog has not been updated as of 3/7/2017 with our approved certification. We are working with Microsoft to get this updated as soon as possible. 
  • Windows Server 2012 R2
  • Windows Server 2012 
  • Windows Server 2008 R2 Service Pack 1

Logical Disk Manager and Partition Alignment

Windows Server 2008 R2, 2012, and 2012 R2 all create partitions with a 1 MB (1,024 KB) starting offset by default. Pure Storage uses a 512-byte geometry on the FlashArray and, as such, there will never be a block alignment issue. To check the StartingOffset on a Windows host, use the following Windows PowerShell:

Get-WmiObject Win32_DiskPartition -ComputerName $env:COMPUTERNAME | select Name, Index, BlockSize, StartingOffset | Format-Table *
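The alignment check amounts to verifying that the starting offset divides evenly by the array's 512-byte geometry. As an illustrative sketch (in Python rather than PowerShell, and with example values only):

```python
# Illustrative sketch: verify that a partition's StartingOffset is aligned to
# the FlashArray's 512-byte geometry.
SECTOR_SIZE = 512  # FlashArray geometry, in bytes

def is_aligned(starting_offset: int, sector_size: int = SECTOR_SIZE) -> bool:
    """Return True if the partition offset falls on a sector boundary."""
    return starting_offset % sector_size == 0

# The default Windows offset of 1 MB (1,048,576 bytes) is always aligned:
print(is_aligned(1048576))  # True
```

Any offset that is a multiple of 512 bytes is aligned, which is why the 1 MB default used by these Windows versions can never cause a block alignment issue.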

Host Connectivity Steps

The following are the high-level steps that outline successful connectivity from a Windows host to the Pure Storage FlashArray:

  1. Validate Windows hotfixes
  2. Install Multipath I/O (MPIO)
  3. Configure New MPIO Device
  4. Configure Disks
  5. Set SAN Policy
  6. Configure MPIO Policies
  7. Configure HBA settings

Microsoft Windows Hotfixes

Depending on which version of Microsoft Windows Server is deployed, ensure the hotfixes below are installed. To check which hotfixes, also known as Quick Fix Engineering (QFE) updates, are installed, the following Windows PowerShell will list all the details:

Get-WmiObject -Class Win32_QuickFixEngineering | Select-Object -Property Description, HotFixID, InstalledOn | Format-Table -Wrap

Windows Server 2008 R2

  • KB979711
  • KB2520235
  • KB2522766
  • KB2528357
  • KB2684681
  • KB2718576
  • KB2754704

Windows Server 2008 R2 SP1

  • KB2528357
  • KB2684681
  • KB2754704
  • KB2990170

Windows Server 2012

  • KB2796995
  • KB2990170

Windows Server 2012 R2

  • KB2990170
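The per-version hotfix lists above lend themselves to a simple compliance check. The sketch below is purely illustrative (Python rather than PowerShell); the KB lists are taken directly from this article, and the installed-hotfix set would come from the QFE query shown earlier:

```python
# Illustrative: report required hotfixes missing from a host, using the
# per-version KB lists documented above.
REQUIRED_HOTFIXES = {
    "Windows Server 2008 R2": {"KB979711", "KB2520235", "KB2522766",
                               "KB2528357", "KB2684681", "KB2718576",
                               "KB2754704"},
    "Windows Server 2008 R2 SP1": {"KB2528357", "KB2684681",
                                   "KB2754704", "KB2990170"},
    "Windows Server 2012": {"KB2796995", "KB2990170"},
    "Windows Server 2012 R2": {"KB2990170"},
}

def missing_hotfixes(os_version: str, installed: set) -> set:
    """Return the required KBs for os_version that are not installed."""
    return REQUIRED_HOTFIXES[os_version] - installed

# Hypothetical host inventory:
installed = {"KB2796995"}
print(sorted(missing_hotfixes("Windows Server 2012", installed)))  # ['KB2990170']
```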

Additional Tools

Windows allows administrators to see additional Fibre Channel information. One tool that can be used is 'fcinfo', which can be downloaded from the Microsoft download site. It provides access to most of the older Host Bus Adapter API functions.


Another helpful tool is 'mpclaim', which is built into Windows. With it, an administrator can see which device targets are actually attached.


Space Reclamation

One challenge inherent in storage arrays that present Thin Provisioned volumes is how the various operating systems that use those volumes indicate that data has been deleted.

This is referred to as Dead Space Reclamation and is provided by one of two techniques: SSD Trim or SCSI Unmap.

This process enables you to reclaim blocks of thin-provisioned LUNs by telling the array that specific blocks are obsolete. Most legacy operating systems do not inherently provide this capability, so special attention is needed if a host performs large delete operations without rewriting new data into the deleted space. Most current operating environments, such as ESX 5.x, Windows Server 2012 / 2012 R2, and Red Hat Enterprise Linux 6, provide this functionality.

SSD Trim

TRIM is not a command that forces the SSD to immediately erase data. The TRIM command simply notifies the SSD which LBAs (Logical Block Addresses) are no longer needed.

The SSD takes those addresses and updates its own internal map in order to mark those locations as invalid. With this information, the SSD will no longer move that marked invalid block during garbage collection (GC); thus eliminating the time wasted in order to rewrite invalid data to new flash pages.

Benefits of TRIM

  • Lower write amplification: Less data is rewritten and more free space is available during GC
  • Higher throughput: Less data to move during GC
  • Improved flash endurance: The drive is writing less to the flash by not rewriting invalid data during GC
  • Keeps SSDs “Trim”: As an SSD comes close to full, there is a substantial slowdown in write performance as more flash cells must undergo write erase cycles before data can be rewritten
  • Reduce flash controller (processor) time: A lot of resources are used for wear levelling, so more free blocks can help dynamic wear levelling algorithms

SCSI UNMAP

SCSI UNMAP is the full equivalent of TRIM, but for SCSI disks. UNMAP is a SCSI command that a host can issue to a storage array to free blocks (LBAs) that no longer need to be allocated.

Benefits of SCSI UNMAP

  • Beneficial to thinly provisioned storage pools as reclaimed blocks will be put back into the unused pool
  • Avoid out of space condition for thinly provisioned pools of storage
  • Automatic operation that no longer needs to be run manually on host
  • No longer need to run backend array tools to perform thin reclamation (zero page reclaim) that consumed valuable array cycles and potentially slowed down host performance

Microsoft Windows and SCSI UNMAP

Microsoft Windows Server 2008 R2

Windows Server 2008 R2 does not natively provide the capability to reclaim space. Microsoft has provided an alternative through a tool called sDelete. This tool can be downloaded through TechNet at: http://technet.microsoft.com/en-us/sysinternals/bb897443.aspx

sDelete is a command line utility that allows you to delete one or more files and/or directories, or to cleanse free space on a logical disk. sDelete accepts wild card characters as part of the directory or file specifier.

usage: sdelete [-p passes] [-s] [-q] <file or directory> ...
       sdelete [-p passes] [-z|-c] [drive letter] ...
  -a         Remove Read-Only attribute
  -c         Clean free space
  -p passes  Specifies number of overwrite passes (default is 1)
  -q         Don't print errors (Quiet)
  -s or -r   Recurse subdirectories
  -z         Zero free space (good for virtual disk optimization)

Note: When using the -z option, a balloon file is generated. Evaluate the space available before using this option.

If utilization is high (80-90%), the array's garbage collection (GC) will clean up the space after host-side deletion. Be aware that garbage collection may take some time.

Microsoft Windows Server 2012 / 2012 R2 

Windows Server 2012 and 2012 R2 natively support space reclamation and perform it by default. To disable or re-enable automatic reclamation, run the following Windows PowerShell as appropriate:

Disable Delete Notification

Set-ItemProperty -Path "HKLM:\System\CurrentControlSet\Control\FileSystem" -Name DisableDeleteNotification -Value 1

Enable Delete Notification

Set-ItemProperty -Path "HKLM:\System\CurrentControlSet\Control\FileSystem" -Name DisableDeleteNotification -Value 0

If space reclamation is disabled, you can use Defragment and Optimize Drives to perform space reclamation manually. To start the tool, open Server Manager > Tools > Defragment and Optimize Drives. The Optimize-Volume PowerShell cmdlet with the -ReTrim parameter can also be used to trigger reclamation.

Microsoft Hyper-V

Deleting a file from the file system of an UNMAP capable guest operating system sends UNMAP requests to the Hyper-V host.

For this to work successfully, the virtual hard disk must be formatted as a VHDX file, either dynamic or fixed. This feature does not work with the older Virtual Hard Disk (VHD) format. Also, the guest OS must support SCSI UNMAP (see the chart under Operating Systems that support SCSI UNMAP).

Hyper-V pass-through disks and Virtual Fibre-Channel (NPIV), which will show up as physical disks to the Guest VM, are also supported.

SAN Zoning Recommendations

Pure Storage supports enterprise-class, single-initiator zoning configurations. Whenever possible, and to aid in troubleshooting, implement the zoning practices advised by the switch vendors.

Figure 1: Current Pure Storage Port Connectivity


Offset host connections to the Pure Storage FlashArray to balance load across Fibre Channel or iSCSI HBAs. A fair balance can be obtained by alternating fabric connectivity between odd and even host ports on the relevant storage controller node.

For example in a highly available 2-node storage controller configuration:

pureuser@purestorage> pureport list --initiator
Initiator WWN            Target   Target WWN
21:00:00:24:FF:23:23:F4  CT0.FC1  52:4A:93:70:00:00:86:01
21:00:00:24:FF:23:23:F4  CT1.FC1  52:4A:93:70:00:00:86:11
21:00:00:24:FF:27:29:D6  CT0.FC2  52:4A:93:70:00:00:86:02
21:00:00:24:FF:27:29:D6  CT1.FC2  52:4A:93:70:00:00:86:12
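The odd/even balancing shown above can be sketched as a simple round-robin assignment: each initiator takes the next port number, on both controllers for redundancy. The Python below is purely illustrative; the WWNs and port names are hypothetical examples modeled on the listing above:

```python
# Illustrative: alternate host initiators between odd and even FC ports so
# that no single array port carries all of the load.
from itertools import cycle

def assign_ports(initiators, ports_per_controller):
    """Round-robin each initiator onto the next port pair (CT0 + CT1)."""
    port_cycle = cycle(ports_per_controller)
    assignment = {}
    for wwn in initiators:
        port = next(port_cycle)
        # Each initiator is zoned to the same port number on both controllers,
        # preserving controller-level redundancy.
        assignment[wwn] = (f"CT0.{port}", f"CT1.{port}")
    return assignment

# Hypothetical initiators, alternating between FC1 (odd) and FC2 (even):
hosts = ["21:00:00:24:FF:23:23:F4", "21:00:00:24:FF:27:29:D6"]
print(assign_ports(hosts, ["FC1", "FC2"]))
```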

Troubleshooting

Brocade Fill Words

Brocade FC switches and their OEM derivatives have known performance and connectivity deficiencies when used with QLogic HBAs operating at 8 Gb. The Pure Storage FlashArray uses the QLogic 2642 dual-port FC HBA and is therefore susceptible to this deficiency.

This section outlines how to properly configure and tune a Brocade switch in order to avoid excessive CRC and Decode errors.

Idle Fill Word

Prior to FOS version 7.0, Brocade FC switches and their derivatives used IDLE primitives for both link initialization and for fill words. This ensured successful link initialization between Brocade switch ports and end devices operating at 1G/2G/4G speeds.

However, some 8G devices, such as QLogic HBAs, cannot properly establish links with Brocade 8G FC switches when ARB/ARB or IDLE/ARB primitives are used.

For these devices, a new mode is available that provides a hybrid for both link initialization and the fill word.

Problem Symptoms

Excessive errors can prevent servers from connecting properly to the Pure FlashArray or performing at optimum efficiency. Decode errors indicate a failure on an HBA; on a Brocade switch, failure may be indicated by "er_enc_out" and/or a large number of "er_bad_os" errors.

swd77:root> portstatsshow 6
> stat_wtx              547107199    4-byte words transmitted 
> stat_wrx              785641731    4-byte words received 
> stat_ftx              1082261      Frames transmitted 
> stat_frx              1528326      Frames received 
> stat_c2_frx           0            Class 2 frames received 
> stat_c3_frx           1528326      Class 3 frames received 
> stat_lc_rx            0            Link control frames received 
> stat_mc_rx            0            Multicast frames received 
> stat_mc_to            0            Multicast timeouts 
> stat_mc_tx            0            Multicast frames transmitted 
> tim_rdy_pri           0            Time R_RDY high priority 
> tim_txcrd_z           0            Time TX Credit Zero (2.5Us ticks) 
> tim_txcrd_z_vc 0- 3:  0            0       0       0 
> tim_txcrd_z_vc 4- 7:  0            0       0       0 
> tim_txcrd_z_vc 8-11:  0            0       0       0 
> tim_txcrd_z_vc 12-15: 0            0       0       0 
> er_enc_in             0            Encoding errors inside of frames 
> er_crc                0            Frames with CRC errors 
> er_trunc              0            Frames shorter than minimum 
> er_toolong            0            Frames longer than maximum 
> er_bad_eof            0            Frames with bad end-of-frame 
> er_enc_out            318          Encoding error outside of frames 
> er_bad_os             2016423236   Invalid ordered set 
> er_rx_c3_timeout      0            Class 3 receive frames discarded due to timeout 
> er_tx_c3_timeout      0            Class 3 transmit frames discarded due to timeout
> er_c3_dest_unreach    0            Class 3 frames discarded due to destination unreachable 
> er_other_discard      0            Other discards 
> er_type1_miss         0            frames with FTB type 1 miss 
> er_type2_miss         0            frames with FTB type 2 miss 
> er_type6_miss         0            frames with FTB type 6 miss 
> er_zone_miss          0            frames with hard zoning miss 
> er_lun_zone_miss      0            frames with LUN zoning miss 
> er_crc_good_eof       0            Crc error with good eof 
> er_inv_arb            0            Invalid ARB 
> open                  0            loop_open 
> transfer              0            loop_transfer 
> opened                0            FL_Port opened 
> starve_stop           0            tenancies stopped due to starvation 
> fl_tenancy            0            number of times FL has the tenancy 
> nl_tenancy            0            number of times NL has the tenancy 
> zero_tenancy          0            zero tenancy
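In output like the above, the counters of interest are er_enc_out and er_bad_os. A quick illustrative parser (Python; the field names are taken from the portstatsshow output above) pulls out just those two counters:

```python
# Illustrative: scan portstatsshow output for the two counters that signal
# the fill-word problem on a Brocade port.
WATCHED = ("er_enc_out", "er_bad_os")

def fillword_errors(portstats_text: str) -> dict:
    """Return {counter: value} for the watched error counters."""
    errors = {}
    for line in portstats_text.splitlines():
        parts = line.lstrip("> ").split()
        if parts and parts[0] in WATCHED:
            errors[parts[0]] = int(parts[1])
    return errors

sample = """> er_enc_out 318 Encoding error outside of frames
> er_bad_os 2016423236 Invalid ordered set"""
print(fillword_errors(sample))  # {'er_enc_out': 318, 'er_bad_os': 2016423236}
```

Nonzero values for either counter on a port connected to the FlashArray suggest the fill-word mismatch described in this section.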

Problem Resolution

To ensure correct interoperability between a Brocade FC switch and the Pure FlashArray, use the "portCfgFillWord" command to set the fill word of the connecting port to option 3 (aa-then-ia).

Brocade5100:admin> portcfgfillword 0 3
Usage: portCfgFillWord PortNumber Mode
Mode: 0/-idle-idle - IDLE in Link Init, IDLE as fill word (default)
      1/-arbff-arbff - ARBFF in Link Init, ARBFF as fill word
      2/-idle-arbff - IDLE in Link Init, ARBFF as fill word (SW)
      3/-aa-then-ia - If ARBFF/ARBFF failed, then do IDLE/ARBFF