Pure1 Support Portal

Windows Configuration: Configuring MPIO & Adding LUNs to Windows Hosts

This article is part of a series, and it is recommended you review all articles: 

  1. Windows Server: Best Practices
  2. Windows Configuration: Adding LUNs to the Host & Configuring MPIO (You are here) 

Applies to: Windows Server 2008, Windows Server 2012

Prerequisites

  1. Before continuing, please review our Windows Server Best Practice Guide and make any necessary adjustments. 
  2. If you are running Windows Server 2008, we recommend applying R2 and SP1 in order to install the MPIO hotfixes. (See the Installing Hotfixes section below.)
  3. All attached hosts should have a minimum of two paths, connected to different Pure Storage controller nodes, to ensure host-to-storage availability.  
  4. This guide assumes that SAN zoning has been completed. If it has not, please refer to our Best Practice Guide for SAN configuration. 

Installing Hotfixes

Customers running Windows 2008 R2 SP1 and MPIO should apply the following hotfixes. These hotfixes have been tested by Pure engineering and prevent issues we have observed with loss of access to LUNs when a controller reboots during a Purity upgrade. 

Hotfix: 456116 | Requirement: Always | KB: 2754704
Description: A hotfix is available that provides a mechanism for the DSM to notify MPIO that a particular path is back online in Windows Server 2008 and Windows Server 2008 R2.
Files: MPIO.sys 6.1.7601.22177 / 6.1.7601.18015, 30-Nov-2012; MSDSM.sys 6.1.7601.22177 / 6.1.7601.18015, 30-Nov-2012

Hotfix: 445355 | Requirement: iSCSI | KB: 2684681
Description: The Iscsicpl.exe process stops responding when you try to reconnect a storage device to a computer that is running Windows Vista, Windows Server 2008, Windows 7, or Windows Server 2008 R2.
Files: Iscsilog.dll 6.1.7600.16385, 14-Jul-2009

Hotfix: 429927 | Requirement: Cluster | KB: 2520235
Description: "0x0000009E" Stop error when you add an extra storage disk to a failover cluster in Windows Server 2008 R2.
Files: Clusres.dll 6.1.7601.21680, 11-Mar-2011

Hotfix: 517618 | Requirement: Always | KB: 2990170
Description: Multipath I/O identifies different disks as the same disk in Windows.
Files: Mpio.sys 6.0.6002.19153 (134,584 bytes, 07-Aug-2014, x64); Mpio.sys 6.0.6002.23457 (137,144 bytes, 07-Aug-2014, x64); Msdsm.sys 6.0.6002.19153 (111,544 bytes, 07-Aug-2014, x64); Msdsm.sys 6.0.6002.23457 (112,568 bytes, 07-Aug-2014, x64)

Windows Server 2012 required hotfix:

Hotfix: 517618 | Requirement: Always | KB: 2990170
Description: Multipath I/O identifies different disks as the same disk in Windows.
Files: Mpio.sys 6.0.6002.19153 (134,584 bytes, 07-Aug-2014, x64); Mpio.sys 6.0.6002.23457 (137,144 bytes, 07-Aug-2014, x64); Msdsm.sys 6.0.6002.19153 (111,544 bytes, 07-Aug-2014, x64); Msdsm.sys 6.0.6002.23457 (112,568 bytes, 07-Aug-2014, x64)

Note #1: These hotfixes require a reboot of any server they are installed on. It is also recommended that customers check the Setup event log after the reboot to confirm the hotfix applied successfully.

Note #2: These hotfixes also apply to virtual machines that run the Microsoft MPIO stack inside the virtual machine. They are not necessary for virtual machines that do not use the Microsoft MPIO stack (i.e., where the hypervisor handles multipathing).

Install MPIO

You can install the Multipath I/O Windows feature using either Server Manager or Windows PowerShell; both methods are provided below.

Windows 2008 R2

  1. Open Server Manager.  To open Server Manager, click Start, point to Administrative Tools, and then click Server Manager. 
  2. In the Server Manager tree, click Features.
  3. In the Features area, click Add Features. 
  4. In the Add Features Wizard, on the Select Features page, select the Multipath I/O check box, and then click Next.
  5. On the Confirm Installation Selection page, click Install.
  6. When the installation has completed, on the Installation Results page, click Close. When prompted to restart the computer, click Yes.
  7. After restarting the computer, the computer finalizes the MPIO installation. 
  8. Click Close.

[Source: Installing and Configuring MPIO]

Windows 2012  

Adding MPIO Using Server Manager

  1. Open Server Manager and select the Local Server.
  2. Click Manage and select Add Roles and Features.
  3. Navigate to the Features section of the Add Roles and Features Wizard.
  4. Scroll down the list of features and select the Multipath I/O feature.
  5. Click Next and choose Restart the destination server automatically if required.
  6. Click Install.

Adding MPIO Using Windows PowerShell 

Open a Windows PowerShell session as an Administrator and run the following command to install the Multipath I/O feature:

Add-WindowsFeature -Name "Multipath-IO"
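To confirm the feature landed, the installation and a follow-up check can be combined. This is a sketch for Windows Server 2012, assuming the standard Multipath-IO feature name; the -Restart switch is only honored where Add-WindowsFeature aliases Install-WindowsFeature:

```
# Install the Multipath I/O feature; -Restart reboots automatically if required
Add-WindowsFeature -Name "Multipath-IO" -Restart

# After the server is back up, verify the feature shows as Installed
Get-WindowsFeature -Name "Multipath-IO" | Select-Object Name, InstallState
```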

Adding Pure LUNs to the MPIO Configuration

Windows 2008 R2

Add the Pure FlashArray to the MPIO control panel: 

  1. Go to the Start Menu and type MPIO in the search. 
  2. Open the MPIO Control Panel
  3. Click the Discover Multi-Paths tab.
  4. Click once on "PURE    FlashArray" and then click the Add button. (This should be populated automatically as long as the zoning is correct.) 
    mpio-iscsi.png

    NOTE: If you are using iSCSI you will need to check the Add support for iSCSI devices option. After checking this you will be prompted for a reboot; click "No". When using iSCSI devices you will need to enter the device manually on the "MPIO Devices" tab; make sure you leave 4 spaces between "PURE" and "FlashArray". 

  5. Once you have added the device, you may proceed with the reboot.
    reboot.png
  6. After the reboot, the Pure array will be listed under "MPIO Devices".
    original (1).png

Windows Server 2012

Add New MPIO Device via Control Panel

Open the Windows Start menu or a Run command and type mpiocpl. The MPIO Properties dialog will open. The first tab lists the MPIO Devices; a default device is listed as “Vendor 8Product 16”, and it is safe to leave this entry.

  1. To add the Pure Storage FlashArray, go to the Discover Multi-Paths tab. Provided zoning is correct, this should be auto-populated: 
    original (2).png
     
  2. Highlight the entry for PURE and click Add
     
  3. For iSCSI configurations: check the "Add support for iSCSI devices" option on the Discover Multi-Paths tab. Click Add and enter "PURE    FlashArray"; be sure to follow the proper formatting when entering the Device Hardware ID.
    Figure 2: Add MPIO Support dialog, note the 4 spaces between “PURE” and “FlashArray”

    original (3).png
     
  4. You will be prompted to reboot.  Upon boot up the Pure Storage FlashArray will be added to MPIO Devices as shown: 

    original (4).png
     

Add new MPIO Device Using Windows PowerShell

The following steps walk through configuring MPIO with the same details as the Windows Control Panel applet.

  1. Open an elevated Windows PowerShell session (Run as Administrator).
  2. Run Get-MSDSMSupportedHW to list the existing VendorId and ProductId details.
  3. Run New-MSDSMSupportedHW -ProductId FlashArray -VendorId PURE to add the PURE FlashArray to the list of MPIO Devices.
  4. Run Restart-Computer to reboot the Windows host.
  5. After Windows restarts, open an elevated Windows PowerShell session and re-run the command from Step 2 to ensure the PURE FlashArray is now listed.
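The steps above can be condensed into a short script; this is a sketch and, because of the reboot, should be run interactively:

```
# Step 2: list the hardware IDs the Microsoft DSM currently claims
Get-MSDSMSupportedHW

# Step 3: add the Pure FlashArray vendor/product pair to the MPIO device list
New-MSDSMSupportedHW -VendorId PURE -ProductId FlashArray

# Step 4: reboot so MSDSM can claim the FlashArray paths
Restart-Computer

# Step 5 (after the restart): confirm PURE / FlashArray now appears
Get-MSDSMSupportedHW
```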

Configure MPIO Policies

Now that the Windows host has the disks connected, initialized, and online, the MPIO device properties can be verified. To access the Multi-Path Disk Device Properties, perform the following steps:

  1. Open Windows Server Manager
  2. Click Tools > Computer Management to open up the Computer Management application.
  3. Click Storage > Disk Management to access all of the volumes connected to the Windows host.
  4. Right-click on one of the new Disk # from the Pure Storage FlashArray
  5. Click Properties
  6. Click the MPIO tab

The MPIO Policy defines how the host distributes IOs across the available paths to the storage. The Round Robin (RR) policy distributes IOs evenly across all Active/Optimized paths. A newer MPIO policy, Least Queue Depth (LQD), is similar to Round Robin in that IOs are distributed across all available Active/Optimized paths, but it provides an additional benefit: LQD biases IOs toward the paths that are servicing IO more quickly (paths with shorter queues). If one path becomes intermittently disruptive or experiences higher latency, LQD steers IO away from that path, reducing the effect of the problem path.


The Select the MPIO Policy dropdown menu, shown in Figure 9, can be used to select the desired policy; as mentioned earlier, the Least Queue Depth policy is recommended. The paths that have been set up from the host to the Pure Storage FlashArray will be listed with their Path Id, Path State, etc. All of these should read Active/Optimized, as shown in Figure 9. It is important to note that Pure Storage leverages the Microsoft Device Specific Module (DSM); you will see this listed as the DSM Name below.
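On Windows Server 2012 the policy can also be set host-wide, rather than per disk, through the Microsoft DSM default load-balance policy cmdlets; a sketch (LQD is the cmdlet's abbreviation for Least Queue Depth):

```
# Show the current MSDSM default load-balance policy
Get-MSDSMGlobalDefaultLoadBalancePolicy

# Set Least Queue Depth as the default for newly claimed MPIO disks
Set-MSDSMGlobalDefaultLoadBalancePolicy -Policy LQD
```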

Configuring Windows MPIO for iSCSI Targets

Usually the iSCSI initiator client is built in. If it is not present, download and install the latest version (2.08) of the Microsoft iSCSI initiator for your operating system. Purity 4.6.0 is required for VLAN tagging.

Pure Storage provides high availability through the use of Multipath I/O. MCS or Link Aggregation (NIC teaming) is not supported.

 If this is your first time configuring iSCSI, you may need to start the Microsoft iSCSI service.  On Windows Server 2012, search the server for "iSCSI Initiator" which will prompt you for the following: 
original (7).png
Configure the IP networking settings on the Pure Storage iSCSI 10 GbE ports. If this has not yet been done, the IP settings can be configured via the Pure Storage GUI or CLI.


From the Pure Storage GUI

  1. Select the System tab
  2. Select the Networking Option. 
  3. Select the relevant iSCSI interface and select the 'Edit' option. 

Once the changes are complete the interface can be enabled. Pure Storage iSCSI interfaces support jumbo frames, so an MTU of 9000 can be selected if the intervening network supports jumbo frames without fragmentation.

original (8).png

From the Pure Storage CLI

Use the purenetwork command to set the required attributes:

pureuser@purestorage> purenetwork setattr --address xxx.xxx.xxx.xxx --netmask xxx.xxx.xxx.xxx --gateway xxx.xxx.xxx.xxx --mtu 9000 <Ethernet interface>
pureuser@purestorage> purenetwork enable <Ethernet interface>

Once the interfaces are configured, setup and discovery of the Pure Storage FlashArray and the relevant targets can be completed. 

Configuration for Windows Server 2008 and Windows Server 2012

Launch the iSCSI initiator, discover the target IP address and connect to the Pure Storage FlashArray. Add the discovered target or targets to the list of favorite targets and enable the multipath option. 

original (9).png

Connect to all the discovered iSCSI interfaces on the Pure Storage FlashArray and add them to favorite targets:

original (10).png

Select the MPIO Policy

original (5).png
NOTE: On the MPIO tab the DSM name is Microsoft DSM. This is the default DSM and works well with Pure Storage; it defines the different events that can impact multipathing. You can click Details and edit the timers if required.

Third-party DSMs will not claim Pure Storage LUNs. At this time, Pure LUNs are not supported by third-party DSM modules such as EMC PowerPath, NetApp ONTAP DSM, and HP 3PAR DSM.

By selecting an individual Path Id and clicking Edit it is possible to see all of the details for the given path, as shown below.

MPIO Path Details

original (6).png

Performance Tuning - iSCSI

To get the best performance from a single host, 8 iSCSI sessions to a Pure Storage FlashArray are recommended. A session is normally created for each target port to which the host is connected. If the host is connected to fewer than 8 paths, additional sessions can be configured to the same target ports. To add more iSCSI sessions, repeat the steps above for the same target portal IP address.
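On Windows Server 2012 the additional sessions can also be created from PowerShell instead of the iSCSI Initiator applet. A sketch, assuming the portal address shown is a placeholder for your FlashArray iSCSI port:

```
# Discover the Pure iSCSI portal (replace with your portal IP)
New-IscsiTargetPortal -TargetPortalAddress 192.168.1.100

# Connect an additional MPIO-enabled, persistent session to the discovered target
Get-IscsiTarget | Connect-IscsiTarget -IsMultipathEnabled $true -IsPersistent $true
```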

Configure LUNs

Once MPIO has been installed and the proper configuration set, any volumes that have been created and connected to a host or host group can be seen from the Windows host.

Note: If no volumes, hosts, or host groups have been created, please refer to the Pure Storage FlashArray User’s Guide, Using Purity GUI to Administer a FlashArray, for step-by-step information. This can be accessed by logging into the Pure Storage FlashArray and clicking the Help link in the upper right corner of the GUI.

There are two methods for performing disk management: the first is a GUI aptly named Disk Management, and the second is Windows PowerShell. Let’s first walk through using Disk Management.

Disk Management GUI

  1. Open Windows Server Manager
  2. Click Tools > Computer Management to open up the Computer Management application.
  3. Click Storage > Disk Management to access all of the volumes connected to the Windows host.

Figure 1 provides an example showing eight volumes connected to the host, varying in size from 200 GB to 500 GB. The volumes shown in Figure 1 have already been initialized and set online. If the Disk Management view does not show any new volumes connected to the Windows host, perform a rescan so that Windows can rescan the bus for the connected volumes that were set up in the Purity GUI, as shown in Figure 3.

Perform a rescan in Disk Management (Computer Management) by selecting Action > Rescan Disks. This rescans the bus and displays the volumes connected to the Windows host. If this is the first time a Pure Storage FlashArray is connecting to the Windows host, the disks will most likely show Not Initialized and in an Offline state as shown in Figure 2; otherwise it is assumed the disks were previously set up and should come online and be accessible.
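On Windows Server 2012 the same rescan can be triggered from PowerShell; a sketch:

```
# PowerShell equivalent of Action > Rescan Disks in Disk Management
Update-HostStorageCache

# List the disks now visible to the host
Get-Disk
```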

Figure 1: Windows Disk Management

original (11).png

Figure 2: Not Initialized and Offline Disk

original (12).png

Figure 3: Purity GUI Storage View

original (13).png

Now that volumes are connected to the Windows host, they can be individually initialized and brought online. Right-click the Disk # and select Initialize Disk; in the Initialize Disk dialog, select MBR (Master Boot Record) or GPT (GUID Partition Table) as the desired partition style. Next, right-click the volume and create a New Simple Volume based on your business criteria for size, path or drive letter, and format. Repeat these steps for each volume connected to the Windows host.

Disk Management via Windows PowerShell

Just as with the GUI, we can see and control all of the details for disks connected to the Windows host. Figure 4 shows the same information as Figure 1 using:

Get-Disk

Figure 4: Get-Disk to Show all Disks Connected to the Windows Host

original (14).png

Something Windows PowerShell offers that the GUI does not is the ability to view only the disks that are from Pure Storage, using additional parameters with the same command run previously.

Get-Disk | Where-Object FriendlyName -like "PURE*"

Figure 5: Viewing only Pure Disks using PowerShell

original (15).png

Just as with the Disk Management GUI, if some disks are not shown, perform a rescan and then re-run the previous PowerShell command to ensure all of the disks are present.

"rescan" | diskpart

Figure 6: Rescan for New Disks

original (16).png

Figure 6 shows that after a rescan the Windows host now has a new Disk 14, which is in the RAW partition style.

With Windows Server 2012 and PowerShell it is possible to initialize the newly added disk(s) using Initialize-Disk <DiskNumber>; the disk is initialized and the current SAN Policy then has the corresponding effect. Initialize-Disk sets the PartitionStyle to GPT by default unless the -PartitionStyle parameter specifies MBR or Unknown.

If the current SAN Policy is set to the default of OfflineShared, the newly initialized disk will need to be brought online manually. Run the following PowerShell command to find the disks that are Offline and set them all Online. The next section on SAN Policy goes into more detail.

Get-Disk | Where-Object IsOffline -eq $True | Set-Disk -IsOffline $False

Next, create a partition on the newly initialized disk using the maximum size. In the Disk Management GUI a drive letter or mount point can be assigned in the New Simple Volume wizard. When creating a new partition with Windows PowerShell you can use the -AssignDriveLetter option, or use Add-PartitionAccessPath to set a mount point location (e.g., C:\MyMountPoint); note that the mount point directory must already exist. Finally, the Format-Volume command creates a newly formatted NTFS volume, or whichever file system you choose.

New-Partition -DiskNumber <DiskNumber> -UseMaximumSize -AssignDriveLetter
Add-PartitionAccessPath -DiskNumber <#> -PartitionNumber <#> -AccessPath C:\MyMountPoint 
Format-Volume -DriveLetter <DriveLetter> -FileSystem NTFS 
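These commands can also be chained into a single pipeline that handles every Pure disk still in the RAW partition style. A sketch, assuming a drive letter (rather than a mount point) is wanted:

```
# Initialize, partition, and format every un-initialized Pure disk in one pass
Get-Disk |
    Where-Object { $_.FriendlyName -like "PURE*" -and $_.PartitionStyle -eq "RAW" } |
    Initialize-Disk -PartitionStyle GPT -PassThru |
    New-Partition -UseMaximumSize -AssignDriveLetter |
    Format-Volume -FileSystem NTFS -Confirm:$false
```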

SAN Policy

One final setting not to overlook is the SAN Policy, which defines how disks are mounted. On Windows Server 2012 it is accessible (Get/Set) from PowerShell; use Get-StorageSetting to find the current disk policy. If it has not been changed, it defaults to OfflineShared on Windows Server 2012 editions. This should be changed to OnlineAll for standalone Windows Server 2012 servers. The default of OfflineShared should be kept with Windows Server Failover Clusters (WSFC) or Microsoft Cluster Service (MSCS).

To change this to the recommended setting run the following:

Set-StorageSetting -NewDiskPolicy OnlineAll 
Policy Setting: Effect

  OfflineAll: All new disks are left offline by default.

  OfflineInternal: All disks on buses detected as internal are left offline by default.

  OfflineShared: All disks on sharable buses, such as iSCSI, FC, or SAS, are left offline by default.

  OnlineAll (Recommended): All disks are automatically brought online.

On Windows 2008 / 2008 R2 the SAN Policy can also be changed by piping the setting to DiskPart from Windows PowerShell:

"SAN Policy=OnlineAll" | diskpart 

NOTE: If working in the Windows Disk Management tool or Windows PowerShell is not for you, all of the aforementioned tasks can be performed with DiskPart, a command line utility included in Windows. DiskPart provides the ability to manage disks, volumes, and partitions. Please refer to the following link for full details: http://technet.microsoft.com/en-us/library/bb490893.aspx

Set Disk and MPIO Recommendations

Configure the Pure Storage recommended settings using the Windows PowerShell commands below. The commands will:

  1. Display the current MPIO settings of the Windows host
  2. Set all four recommended MPIO settings
    Get-MPIOSetting
    Set-MPIOSetting -NewPathRecoveryInterval 20
    Set-MPIOSetting -CustomPathRecovery Enabled
    Set-MPIOSetting -NewPDORemovePeriod 30 
    Set-MPIOSetting -NewDiskTimeout 60
Registry Key: HKLM\System\CurrentControlSet\Services\Disk\TimeoutValue
  Default: 60 seconds | Recommended: 60 seconds

Registry Key: HKLM\System\CurrentControlSet\Services\MPIO\Parameters\PDORemovePeriod
  Default: 20 seconds | Recommended: 30 seconds

Registry Key: HKLM\System\CurrentControlSet\Services\MPIO\Parameters\UseCustomPathRecoveryInterval
  Default: 0 (disabled) | Recommended: 1 (enabled)

Registry Key: HKLM\System\CurrentControlSet\Services\MPIO\Parameters\PathRecoveryInterval
  Default: 55 seconds | Recommended: 20 seconds
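The registry values behind these settings can be cross-checked after a reboot; a sketch using the standard PowerShell registry provider paths:

```
# Read back the disk timeout that Set-MPIOSetting -NewDiskTimeout controls
Get-ItemProperty -Path "HKLM:\SYSTEM\CurrentControlSet\Services\Disk" -Name TimeoutValue

# Read back the MPIO path-recovery parameters
Get-ItemProperty -Path "HKLM:\SYSTEM\CurrentControlSet\Services\MPIO\Parameters" |
    Select-Object PDORemovePeriod, UseCustomPathRecoveryInterval, PathRecoveryInterval
```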

 

HBA Settings

Pure recommends using the following HBA settings. They can be modified via the following tools:

Emulex OneCommand Manager (OCManager) or HBAnyware (GUI/CLI)

  Queue Depth (Emulex): Default 32 | Recommended 32

  NodeTimeOut (Emulex): Default 30 | Recommended 0

QLogic QConvergeConsole

  • CLI: 2.0.0.0 Build 17
  • GUI: 5.4.0.5
  Port Down Retry Count (QLogic): Default 30 | Recommended 5

  Link Down Timeout, seconds (QLogic): Default 30 | Recommended 5

Brocade Host Connectivity Manager (HCM) or Brocade Command Line Utility (BCU)

  Queue Depth (qdepth) (Brocade): Default 32 | Recommended 32