Pure Technical Services

Hyper-V PoC Automation Guide





The objective of this guide is to help customers and/or Pure SEs quickly configure a Microsoft Windows Server 2019, or newer, Hyper-V environment for testing. Both manual configuration guidance and automation scripts are provided.

There are two optional sections, System Center Virtual Machine Manager (SCVMM) and Hyper-V Failover Clustering, which can be used if either is planned to be part of the environment. SCVMM can be used to manage Hyper-V clusters, provision storage, and create VM templates; it integrates with the Pure FlashArray through SMI-S. Standalone Hyper-V servers are less complex and may be deployed if the intent is simply to provide one or more hypervisors for load-generating VMs to test the FlashArray. That configuration does not involve Failover Clustering, so VMs cannot be easily migrated. Failover Clustering is more complicated to configure, but provides high availability: VMs configured as highly available can be live migrated, and failed over during failure test cases.

To keep this guide concise, it does not cover environments with multiple server NICs, DHCP, or complex networking configurations such as teaming or VLANs. If such complexity applies to the lab environment, be sure to manually configure the network cards and virtual switches, and note the DHCP IP addresses on the servers and VMs. These IP addresses will be used as inputs if the automation scripts are used.

The Microsoft ISO files will need to be acquired by the customer from Microsoft. Manual steps that can be copied, edited, and pasted into PowerShell accompany each section. Automation leveraging the provided script package is outlined in the Appendix.

Step 1 – Gather the Prerequisites


  • On the 3 physical servers:
    • Install Windows Server 2019 Datacenter with the Desktop Experience (GUI). Note: Server Core can be used but is not shown.
    • Configure static IP Addresses.
  • Connect to the FlashArray with Fibre Channel or iSCSI.
    • If Fibre Channel, configure Zones on the fabric for Fibre Channel connectivity between the Servers and FlashArray.
    • If iSCSI, configure the Microsoft iSCSI initiator.
  • On the FlashArray, create a Host, add the IQN/WWN to the Host, and then create and connect a Pure Volume to each physical server. See Appendix for Pure PowerShell SDK Examples.
    • If a Failover Cluster is being used for Server2 & Server3, be sure to create a Host Group on the FlashArray and place the 2 Hosts in the Host Group. Only 1 Pure Volume is required and should be added to the Host Group. This volume will be configured as a Cluster Shared Volume (CSV).
  • On Server1, format the Pure Volume and assign the drive letter ‘D’.
    • Create a folder called ‘share’ on the root of the Pure volume ‘D:\’ using the Administrative Tools in Windows, and create an SMB share as ‘share’. See Appendix for Examples. Place automation scripts and the following files in the 'D:\share' folder:
      • Windows Server 2019 ISO
      • IO tool of choice (optional; Diskspd for Windows is included in the script package)
      • PowerShell automation scripts, extracted to d:\share
      • If the servers will not have Internet access, download the following and place in d:\share
  • SCVMM Deployment (optional)
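
For the iSCSI option above, the Microsoft iSCSI initiator can be configured with PowerShell. A minimal sketch; the target portal placeholder should be replaced with a FlashArray iSCSI port IP:

# Start the iSCSI initiator service and set it to start automatically
Set-Service -Name MSiSCSI -StartupType Automatic
Start-Service -Name MSiSCSI
# Register a FlashArray iSCSI portal and connect to the discovered target
New-IscsiTargetPortal -TargetPortalAddress <FlashArray-iSCSI-IP>
Get-IscsiTarget | Connect-IscsiTarget
# Display the server's IQN, needed when creating the FlashArray Host
(Get-InitiatorPort | Where-Object ConnectionType -eq 'iSCSI').NodeAddress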

Step 2 - Install Prerequisites for all physical servers

  • Enable the Hyper-V role on all of the Windows Server 2019 servers and restart each server, using PowerShell as an administrator or Server Manager.

Install-WindowsFeature Hyper-V -IncludeManagementTools
Get-WindowsFeature Hyper-V


  • Create a vSwitch named 'VMSwitch' in the Hyper-V Virtual Switch Manager and map it to the physical NIC adapter for VM communication.


Create a Virtual Switch in PowerShell Example:

$net = Get-NetAdapter -Name 'Pick_the_management_network_Adapter'
New-VMSwitch -Name "VMSwitch" -AllowManagementOS $True -NetAdapterName $net.Name

Note: If the VMSwitch is placed on the MGMT NIC, the MGMT IP will be removed from the physical adapter, breaking your connection. The MGMT IP will then have to be re-added to the virtual switch's adapter through KVM/console access.


The above "Ethernet 5 Properties" shows how IPv4/IPv6 is now unchecked after creating the Virtual Switch on Ethernet Interface #5.


The above image shows how the new Virtual Switch "VMSwitch" needs to be manually configured with the server's static IP.
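
If the management IP was removed as described in the note above, it can be re-added to the new vEthernet adapter from the console. A sketch; the adapter name, prefix length, and address placeholders are examples:

$vnic = Get-NetAdapter -Name 'vEthernet (VMSwitch)'
New-NetIPAddress -InterfaceIndex $vnic.ifIndex -IPAddress <server IP> -PrefixLength 24 -DefaultGateway <gateway IP>
Set-DnsClientServerAddress -InterfaceIndex $vnic.ifIndex -ServerAddresses <DNS IP>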

  • For Multipath-IO, install the feature and add the Pure FlashArray device identifier to the MPIO settings.

# Install the MPIO feature (a reboot is required before the settings take effect)
Install-WindowsFeature Multipath-IO -IncludeManagementTools
# Claim Pure FlashArray devices with the Microsoft DSM
New-MSDSMSupportedHw -VendorId PURE -ProductId FlashArray
# Remove the default placeholder hardware ID
Remove-MSDSMSupportedHw -VendorId 'Vendor*' -ProductId 'Product*'
# Recommended MPIO timer settings
Set-MPIOSetting -NewPathRecoveryInterval 20 -CustomPathRecovery Enabled -NewPDORemovePeriod 30 -NewDiskTimeout 60 -NewPathVerificationState Enabled
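
The claimed hardware IDs and timer values can be verified afterwards:

Get-MSDSMSupportedHw
Get-MPIOSetting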




  • Turn off the Windows Firewall for all network profiles (appropriate for an isolated lab only).

Set-NetFirewallProfile -Profile Domain,Public,Private -Enabled False


  • Reboot the servers

Step 2a - SCVMM Prerequisites (optional)

SCVMM requires components that are included in the Windows Assessment and Deployment Kit (ADK) and the Windows PE add-on to the ADK. After downloading the ADK and WinPE add-on installers, they must be run so that the entire package can be downloaded. Run these installer files from Server1 and select Download. Set the download path to the D:\share in the adk and adkwinpe subfolders. This should be done before Step 3, as the automation3.ps1 script copies these files to the SCVMM VM's VHDX before creating the VM.

  • Run adksetup.exe and set the Download Path to the share, d:\share\adk, on Server1.


  • Run adkwinpesetup.exe and set the Download Path to the share, d:\share\adkwinpe, on Server1.


  • Extract the SCVMM files from the ISO. Double-click the ISO, which will mount it and open File Explorer. Run the SCVMM_2019.EXE file and extract to d:\share\vmm2019, on Server1.
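
As an alternative to the GUI downloads above, both ADK installers support a command-line /layout switch that downloads the full package to a path; the paths below match the share layout used in this guide:

.\adksetup.exe /quiet /layout d:\share\adk
.\adkwinpesetup.exe /quiet /layout d:\share\adkwinpe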


Step 3 - Infrastructure VM Configuration

Create Golden VHDx Image

  1. Run the following to create a golden VHDX image from the Windows Server 2019 ISO file. Running the script opens a file browser window; browse to the Windows Server 2019 ISO, which should be in the same directory as the script (the d:\share on Server1). This creates a Gold.VHDX file in the d:\share folder. The Gold.VHDX image will be modified and copied for each provisioned VM, setting the hostname and password. Variables that can be easily changed are located in variables.ps1.


  2. Next, run the automation3.ps1 script and let it run for about 40 minutes. If excluding SCVMM, run the automation2.ps1 script instead.

.\automation3.ps1 or .\automation2.ps1

What follows is the manual execution of each subscript.


VM2 – Diskspd

Diskspd was chosen for this document as it is easy to configure and automate. Other tools can be used, such as Iometer or VDBench.

Diskspd can be run on VM2 to provide IO load in order to test failure test cases and ensure configuration errors are identified. If more load is required, provision more Pure Volumes and connect them to the Host or Host Group and copy VM2 to multiple Pure Volumes. Some sample command lines for Diskspd are located in the Git repo. Diskspd.exe is located in the root of the C:\ drive on the VM. What follows is an example diskspd configuration that is 100% 4k Read IO:

# -c2G: 2 GiB test file, -b4K: 4 KiB blocks, -r: random IO, -o1: 1 outstanding IO
# -W60: 60 s warm-up, -d60: 60 s duration, -Sh: disable software caching and hardware write caching
diskspd -c2G -b4K -r -o1 -W60 -d60 -Sh testfile.dat

Step 4 - Server2 & Server3 Configuration

If SCVMM is not being deployed, the physical servers only need to be joined to the domain if Server2 and Server3 are being configured as a Failover Cluster.

  • Set DNS to the AD virtual machine's (VM1) IP Address.

$wmi = Get-WmiObject Win32_NetworkAdapterConfiguration | Where-Object { $_.IPEnabled -eq $true }
Set-DnsClientServerAddress -InterfaceIndex $wmi.InterfaceIndex -ServerAddresses <AD-VM-IP>

  • Add the servers to the created AD domain with the provided script.

./join-domain.ps1 server2
./join-domain.ps1 server3

  • Alternatively, log in to Server2 or Server3 and join the domain in the GUI or with the following PowerShell:

Add-Computer -DomainName <domain> -Credential <domain>\administrator

Step 4a - Hyper-V Failover Cluster configuration (optional)

  • Make sure the physical servers (Server2 & Server3) are joined to the domain before creating a Failover Cluster. A new Cluster IP will need to be provided.

    Install-WindowsFeature -ComputerName Server2 -Name Failover-Clustering -IncludeManagementTools
    Install-WindowsFeature -ComputerName Server3 -Name Failover-Clustering -IncludeManagementTools
    Test-Cluster -Node Server2, Server3
    New-Cluster -Name PureCluster -Node Server2, Server3 -StaticAddress <ClusterIP>

  • If you did not make the Pure Volume for the cluster a unique size, and you don't want to add all available disks to the cluster, follow the directions in the section Find the Pure Volume in the Appendix.
  • Add a Pure Volume as a clustered disk.

#add all available disks
Get-ClusterAvailableDisk | Add-ClusterDisk
#Alternatively - add a specific disk
Get-Disk -Number 11 | Add-ClusterDisk 

  • If placing VMs on the clustered disk, promote the clustered disk to a Cluster Shared Volume (CSV). The cluster disk name is assigned when the disk is added to the cluster in the previous command.

Add-ClusterSharedVolume -Name "Cluster Disk 4"
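
Once promoted, the CSV is mounted under C:\ClusterStorage on every cluster node. A sketch of creating a highly available VM on it; the VM name, memory size, and volume path are assumptions for illustration:

# Print the CSV mount point (typically C:\ClusterStorage\VolumeN)
(Get-ClusterSharedVolume -Name "Cluster Disk 4").SharedVolumeInfo.FriendlyVolumeName
# Create a VM on the CSV and register it with the cluster for high availability
New-VM -Name TestVM -MemoryStartupBytes 2GB -Path 'C:\ClusterStorage\Volume1' -SwitchName VMSwitch
Add-ClusterVirtualMachineRole -VMName TestVM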


Install PureStoragePowerShellSDK

Open PowerShell as an Administrator.

Install-Module PureStoragePowerShellSDK -Force -Verbose

If not connected to the internet, download and copy to the Server1 share (D:\share) from:

Install PureStoragePowerShellToolkit

Open PowerShell as an Administrator.

Install-Module PureStoragePowerShellToolkit -Force -Verbose

If not connected to the internet, download and copy to the Server1 share (D:\share) from:

Enable SMI-S on the FlashArray

Log in to the management IP for the FlashArray. Select Settings and then enable both the SMI-S Provider and SLP.


Create a Pure Host or Host Group

Using the PureStoragePowerShellSDK, create a Host, assign the WWN/IQN to it, and add it to a Host Group if using Failover Clustering.

#connect to the FlashArray
$array = New-PfaArray -EndPoint <array IPAddress> -IgnoreCertificateError -Username <user>
#find the server's FC WWNs
Get-InitiatorPort | Where-Object ConnectionType -eq 'Fibre Channel'
#if iSCSI, find the initiator (IQN) address
(Get-InitiatorPort | Where-Object ConnectionType -eq 'iSCSI').NodeAddress
#create a host and add the WWN if using FC
New-PfaHost -Array $array -Name host1 -WwnList <wwn>
#create a host and add the IQN if using iSCSI
New-PfaHost -Array $array -Name host1 -IqnList <iqn>
#create a host group and add the host to it if using a Failover Cluster
New-PfaHostGroup -Array $array -Name hostGroup1 -Hosts host1

Create a Pure Volume and attach to Host or Host Group

Using the PureStoragePowerShellSDK, create a Pure Volume and connect it to a Host (or Host Group).

#connect to the FlashArray
$array = New-PfaArray -EndPoint <array IPAddress> -IgnoreCertificateError -Username <user>
#create a 30 GB volume
New-PfaVolume -Array $array -VolumeName vol1 -Size 30 -Unit G
#connect the volume to the Host if no Failover Cluster
New-PfaHostVolumeConnection -Array $array -VolumeName vol1 -HostName host1
#in the case of a Failover Cluster, connect the volume to the Host Group instead
New-PfaHostGroupVolumeConnection -Array $array -VolumeName vol1 -HostGroupName hostGroup1

Find the Pure Volume on the Physical Server

  • Find the Pure Volume, then bring it online, initialize it, format it, and assign a drive letter.
  • On Server1, assign it the drive letter 'D' and create an SMB share.
  • On Server2 & Server3, initialize and format it and assign a drive letter.
  • If using a Failover Cluster, make it a clustered disk, or alternatively a Cluster Shared Volume.
  • If there is more than 1 Pure Volume, the following command lists them all and shows the disk number, which can be used to get the signature.
  • With the disk number, the disk can be brought online and initialized.


Get-Disk | Where-Object Manufacturer -Match "PURE"

  • Once online and initialized, assign a drive letter. On Server1 make it 'D'.


Set-Disk -Number 3 -IsOffline $false
Initialize-Disk -Number 3 -PartitionStyle GPT
New-Partition -DiskNumber 3 -DriveLetter D -UseMaximumSize | Format-Volume

  • Once formatted, create the 'share' folder and an SMB share.


New-Item -Path 'D:\share' -ItemType Directory
New-SmbShare -Name "share" -Path "d:\share" -ChangeAccess "Users" -FullAccess "Administrators"

  • If it is a failover cluster (Server2 & Server3), add the disk to the cluster.


Get-Disk -Number 3 | Add-ClusterDisk

  • (Optional) If you are not assigning the disk to a Cluster Role (such as a SQL Server FCI) and want a Cluster Shared Volume (CSV) to place VMs on, promote the clustered disk to a CSV. The -Name parameter is the name assigned to the disk during the Add-ClusterDisk command.


Add-ClusterSharedVolume -Name "cluster disk 3"

  • Print out the path of the CSV and place your VMs there.

$volume = Get-ClusterSharedVolume -Name "cluster disk 3"
$volume.SharedVolumeInfo.FriendlyVolumeName