
Windows File Services on Purity RUN Best Practices

Purity RUN Platform

Applications and/or services within Purity RUN can be created by customers or provided by Pure Storage. They are able to run on VMs and/or containers depending on their deployment model. The apps that are provided by Pure Storage are validated and fully supported by Pure1 Support. Use cases for Purity RUN can be as simple as consolidation of a few servers in a remote office or as demanding as co-locating data-intensive compute jobs closer to the data they consume.

Introducing the newest member of Purity RUN - Windows File Services

Ease-of-use

Installation and setup of WFS are fully automated. Once the WFS application package is uploaded onto the FlashArray, the setup can take less than 15 minutes. After the setup completes, the WFS clustered instances will show up as a new host in the Pure Storage GUI and the health of WFS can be monitored from the Apps subtab.

All block storage tasks such as volume creation, snapshots, replication, monitoring, and reporting are handled identically to an external host. File administration and configuration are done by leveraging Windows Server management and the Windows API stack.

Features and Capabilities

All the rich inherent features of the FlashArray such as high availability, security, and data protection are leveraged by the WFS solution. High availability for data services is achieved through the FlashArray’s redundant architecture. Security is provided by the Data-at-Rest Encryption (DARE) capabilities native on the array. Data protection services include crash-consistent snapshots and application level snapshots. Lastly, all data residing on WFS will benefit from the FlashArray’s global data reduction services which dramatically reduce physical capacity consumption.

How Does it Work?

Purity RUN has dedicated system hardware resources which includes CPU and memory. This allows the apps deployed on Purity RUN to have complete performance isolation from the rest of the FlashArray.

The Purity RUN environment is completely virtual and consists of a hypervisor running on each FlashArray controller. Apps are launched either in the form of VMs or containers within VMs. In the case of WFS, each FlashArray controller will launch a separate VM instance of Windows Server 2016; the two VMs then form a Windows Failover Cluster. File Servers will then be created within the cluster to serve SMB shares.

Each WFS VM is installed on its own separate boot volume. For Windows clustering purposes a default quorum witness volume is exported to both WFS VMs. Lastly, a default data volume is also created where file services data will reside. Subsequent data volumes can be created if additional capacity is required. Data volumes are also exported to both WFS VMs to ensure persistent data across WFS VM failovers.

wfs_1.png

For high availability, WFS leverages Windows Failover Cluster to manage File Server failovers which ensures continuous file services uptime. In the event of an unforeseen cluster resource failure such as a FlashArray controller or VM failure, any File Server running on the failed FlashArray controller or VM will automatically be moved to the surviving VM. Upon recovery of a FlashArray controller or VM, customers can manually move the File Server back to the original VM. Customers also have the option to set an automatic migration of the File Server back to the preferred VM.

wfs_2.png

Requirements

  1. FlashArray Support: The FlashArray must be an //M20 or higher; this also includes the //X models.  The FlashArray must have two 10G iSCSI ports on each controller for cluster and file services client traffic. 
  2. Purity//FA Code: Purity//FA 4.10.9 or higher.   
  3. Domain Controller: Microsoft Failover Cluster requires a domain controller in the environment, therefore, a working domain controller must exist in order to run WFS. 
  4. Domain Administrator Privileges: Customers must have appropriately elevated Domain Administrator privileges in order to perform many of the required setup steps like the following: 
    1. Configuring WFS VM IP addresses
    2. Creating Microsoft Failover Clusters
    3. Creating File Servers
  5. DNS Server: There must be a functional DNS server in the environment in order to run file services with WFS.  The two WFS VMs, the Failover Cluster, and the File Servers will be given default hostnames as shown in Table A.  Customers have the option of using the given default hostnames or specifying their own.

    Some customers prefer DNS entries to be entered manually.  If so, the default or customized DNS hostnames and corresponding IP addresses must be entered prior to setting up the Failover Cluster (a scripted sketch follows Table A below).
  6. IP Addresses: A minimum of six total IP addresses are required to run WFS.  Table A below shows the required IP addresses. 
     

   

Table A: Required IP addresses and correlating default DNS names

IP Address Requirement               Default DNS Hostname
Ethernet port A for WFS VM on CT0    WFS-ct0
Ethernet port A for WFS VM on CT1    WFS-ct1
Ethernet port B for WFS VM on CT0    WFS-ct0
Ethernet port B for WFS VM on CT1    WFS-ct1
Failover Cluster                     wfs-cluster
File Server                          wfs-fs
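For environments where DNS entries are entered manually, the records for the names in Table A can be pre-created on a Windows DNS server. The sketch below is illustrative only and assumes the Windows DnsServer PowerShell module; the zone name and IP addresses are placeholders for the customer environment, and because Table A assigns two IP addresses (one per Ethernet port) to each WFS VM hostname, the per-VM command would be repeated for the second address.

    # Illustrative sketch: pre-create A records for the default WFS hostnames.
    # Zone name and IP addresses are placeholders for the customer environment.
    Import-Module DnsServer
    Add-DnsServerResourceRecordA -ZoneName "corp.example.com" -Name "WFS-ct0"     -IPv4Address "10.21.1.11"
    Add-DnsServerResourceRecordA -ZoneName "corp.example.com" -Name "WFS-ct1"     -IPv4Address "10.21.1.12"
    Add-DnsServerResourceRecordA -ZoneName "corp.example.com" -Name "wfs-cluster" -IPv4Address "10.21.1.13"
    Add-DnsServerResourceRecordA -ZoneName "corp.example.com" -Name "wfs-fs"      -IPv4Address "10.21.1.14"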

 

Enabling Purity RUN and Launching Apps

WFS on Purity RUN was first introduced in Purity//FA 4.10.7 (GA release). Customers can work with Pure1 Support to upgrade to the appropriate Purity//FA version. Pure1 Support will also download an .apkg package file onto the FlashArray. This package file is used to launch the individual WFS VMs.

Once the appropriate Purity//FA version is installed and the .apkg package file has been downloaded, Pure Storage personnel (Pure1 Support or Pure Storage Professional Services) will enable Purity RUN with the desired FlashArray Ethernet ports mapped for client traffic. They will also perform a sequential reboot of each controller, to ensure non-disruptive enablement of Purity RUN.

Once Purity RUN is enabled, Pure Storage personnel will use automated scripts to launch WFS and configure it to run within the customer specific environment. Below is a summary of the high-level procedures:

  1. Launch WFS APP with APP-specific .apkg package file

  2. Configure IP addresses for each VM

  3. Join each VM into customer’s domain

  4. Form Windows Failover Cluster using the two VMs

  5. Launch Windows File Server

Once the setup is complete customers can log onto the live WFS VMs to create file shares. Additional File Servers can be created and set up in an Active/Active configuration. More details on Active/Active configurations can be found in the Active/Passive vs Active/Active File Servers section below.

Best Practices

Networking

Network Switch

Network switches must be deployed for file services traffic between the FlashArray and the file services client. A physical direct connection of Ethernet ports between the  FlashArray and the file services client is not supported. Ensure that there are redundant switches available to allow for high availability. To minimize latency, keep the network topology as flat as possible in order to reduce switch hops between the client and the FlashArray.

High Availability of Ethernet HBA

SMB clients connect to WFS via 10G ports on the FlashArray controllers. These ports are mapped directly to the WFS VMs and used for file services traffic between the WFS VMs and the clients. For redundancy, each FlashArray controller will map two physical 10G Ethernet ports to its respective WFS VM. Therefore, customers must connect two 10G ports from each FlashArray controller to their network.  

Additionally, if the available hardware permits, it is preferred that the two Ethernet ports on each FlashArray controller be spread across two separate 10G Ethernet HBAs. This provides redundancy across the HBAs: in the event of an Ethernet HBA failure, the second Ethernet port remains available on the second HBA of the same controller. Since not all customers will have two 10G Ethernet HBAs on a single FlashArray controller, this is not a requirement but a recommended best practice.

wfs_3.png

Each of the WFS VMs uses its respective FlashArray controller’s Ethernet ports to send and receive file services data traffic. Although no additional Ethernet configuration is needed after WFS is installed, it is worth noting how the WFS VM’s Ethernet ports are mapped to the FlashArray controller’s physical Ethernet ports.

Table B and the diagram below show the correlation between the physical Ethernet ports on the FlashArray and the Ethernet ports within the WFS VM. Eth X and Eth Y refer to any eligible Ethernet ports assigned to WFS. These physical Ethernet ports are assigned when Purity RUN is enabled. Refer to the Windows File Services Support Matrix to view eligible Ethernet ports.

Table B: FlashArray Controller Ethernet port to WFS VM Ethernet port mapping.

FlashArray Controller Port    WFS VM Ethernet Port
Eth X                         Ethernet 4
Eth Y                         Ethernet 5

 

wfs_4.png

Jumbo Frames

Jumbo Frames are strongly encouraged for WFS if the network infrastructure supports them.  Jumbo Frames allow more data to be transferred with each Ethernet transaction and reduce the number of frames required. The larger frame size reduces the overhead on the servers and clients, and CPU and memory utilization within the WFS VMs can be dramatically improved when Jumbo Frames are enabled. For end-to-end support, each device on the network must support Jumbo Frames, including the client network ports and Ethernet switches. Jumbo Frames are enabled by default for both WFS VMs and any FlashArray Ethernet port configured for WFS. The default jumbo frame MTU size for WFS is set to 9000.
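A quick way to confirm end-to-end jumbo frame support from a Windows client is a don't-fragment ping sized just under the 9000-byte MTU (9000 bytes minus 28 bytes of IP/ICMP header). This is a generic network check rather than a WFS-specific tool; the target address below is a placeholder for a WFS VM file services IP.

    ping -f -l 8972 <WFS VM file services IP address>

If the reply indicates the packet needs to be fragmented, a device in the path is not configured for jumbo frames.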

Performance

CPU Resources

The Purity RUN platform was originally released in Purity//FA version 4.9 with the allocated CPU resources fixed at 4 CPU cores per controller. With the release of Purity//FA version 4.10.7, customers can choose between 4 (default) or 8 CPU cores per controller*. Increasing the CPU core count increases the potential performance, depending on the load type and environment of the WFS VMs.

The number of CPU cores allocated to WFS is determined when Purity RUN and WFS is initially enabled. Customers have the freedom to increase or decrease the number of CPU cores allocated at a later point in time. It is suggested to start with the default 4 CPU cores per controller and increase to 8 CPU cores as needed. To increase or decrease the CPU allocation, the customer can simply make a request with Pure1 Support to perform the change.

* Increasing CPU core count is only supported on //M50 models or higher.

Adding Volumes to WFS

Volume Sizes

When adding volumes for WFS, customers should be aware that the maximum Windows NTFS size for a single volume depends on the allocation unit size (also referred to as cluster size) chosen when the volume is initially created. For example, an allocation unit size of 4K (the default) for an NTFS file system yields a maximum volume size of 16 TB, while a 64K allocation unit size yields a maximum volume size of approximately 256 TB.  Once the allocation unit is set, it cannot be changed without reformatting the volume, so customers should choose it appropriately when the volume is first formatted (a scripted example follows Table C below).

The below Table C shows the NTFS limits depending on the allocation unit (shown as cluster size).  

Table C: NTFS Volume Size Limits

Cluster Size            Largest Volume
4 KB (default size)     16 TB
8 KB                    32 TB
16 KB                   64 TB*
32 KB                   128 TB
64 KB (maximum size)    256 TB
Source: https://technet.microsoft.com/en-us/library/dn466522(v=ws.11).aspx

* IMPORTANT:  For VSS support, the max volume size is 64 TB. For VSS, a cluster size (AKA allocation unit size) should be 16 KB or larger.
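As an illustration of the allocation unit guidance in Table C, the following minimal PowerShell sketch formats a newly exported, blank WFS data volume with a 64 KB allocation unit, as an alternative to using Disk Management. The disk number and label are placeholders, and the commands assume an uninitialized disk; adjust as appropriate for the environment.

    # Initialize a new blank disk, create a single partition, and format it NTFS
    # with a 64 KB allocation unit (supports volumes up to approximately 256 TB).
    Get-Disk -Number 3 |
        Initialize-Disk -PartitionStyle GPT -PassThru |
        New-Partition -AssignDriveLetter -UseMaximumSize |
        Format-Volume -FileSystem NTFS -AllocationUnitSize 65536 -NewFileSystemLabel "WFS-Data"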

 

How To Add Volumes

Similar to adding volumes to any external host connected to a FlashArray, adding volumes to WFS is as simple as creating a new volume and connecting it to the WFS host, aptly named @WFS. The new volume should be visible immediately after a disk rescan within the WFS VMs. In order for the new volume to be used by the cluster, a few additional steps are needed. Below is the procedure after a new volume has been exported to the @WFS host (a PowerShell sketch of these steps follows the list):

  1. Using Disk Management within a WFS VM, create an NTFS file system for the volume and ensure that both WFS VMs see the same drive letter.
  2. In the Failover Cluster Manager on either WFS VM, expand the cluster tree, highlight Disks, and select Add Disk under the Actions menu
    wfs_5.png
  3. Select the newly created volume, and click OK.  The new volume should be added and appear on the list of Disks.
  4. In the Failover Cluster Manager on either WFS VM, expand the cluster tree, highlight Roles, and right-click on the desired File Server to which the volume will be added. 
  5. Select Add Storage
    wfs_6.png

     
  6. Select desired volume(s) and click OK
    wfs_7.png

     
  7. Confirm new volume has been added for the desired File Server
    wfs_8.png
  8. Ensure the Volume properties are set according to procedures in the below section: Volume Setting - Quick Removal 
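For reference, a rough PowerShell equivalent of the steps above is sketched below (run from one of the WFS VMs). The cluster disk and File Server names are placeholders, and the volume is assumed to have already been formatted with NTFS as in step 1.

    Update-HostStorageCache                                             # rescan disks for the newly exported volume
    Get-ClusterAvailableDisk | Add-ClusterDisk                          # add the disk to the cluster (steps 2-3)
    Move-ClusterResource -Name "Cluster Disk 2" -Group "FileServerA"    # assign the disk to a File Server role (steps 4-7)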

Volume Setting - Quick Removal 

In Windows, there is a volume removal policy called Better Performance that enables Windows write caching for performance optimization. Due to a Windows behavior, all WFS volumes default to this setting. For data consistency, this setting must be changed to "Quick removal", which disables write caching in Windows.

  1. Customers can perform this action from Windows Disk Management.  Simply right-click on the left-most side of a volume and select Properties
    wfs_9.png
  2. Navigate to the Policies tab and select "Quick removal"
    wfs_10.png
  3. A Windows reboot will be required.  If there are multiple volumes, customers can change the setting on all WFS volumes prior to performing a Windows reboot.  Customers must change this setting for the same volumes on both WFS VMs.

Adding File Servers

Deploying multiple File Servers in a WFS environment is supported.  Each additional File Server requires a new IP address as well as a new volume.  Below is the general procedure (a PowerShell sketch follows the steps): 

  1. Add a new volume to WFS by following steps in the previous section: How To Add Volumes
  2. Once the volume has been added to the cluster, open the Failover Cluster Manager on either VM.  Expand the cluster tree, right-click on Roles and select Configure Role
    wfs_11.png
  3. Follow the High Availability Wizard to create a new File Server
    wfs_12.png
     
  4. When prompted, select option File Server for general use
  5. When prompted, enter the desired name for the File Server, the preferred IP address, and the newly created volume for this File Server. 
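As a rough PowerShell alternative to the High Availability Wizard, the FailoverClusters module can create the role directly with the name, IP address, and storage supplied in one command. The role name, cluster disk name, and IP address below are placeholders.

    # Create a general-use File Server role on an existing cluster disk.
    Add-ClusterFileServerRole -Name "FileServerB" -Storage "Cluster Disk 3" -StaticAddress 10.21.1.110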

Creating SMB Shares

Note: Cross-protocol access, where a folder is accessed via both SMB and NFS, is not supported due to permissions conflicts. A folder should be created as either an SMB share or an NFS share, not shared using both protocols. 

Creating SMB shares in WFS is simple and can be accomplished in several ways, including the Server Manager GUI or Windows PowerShell. The following steps will illustrate a quick SMB share creation using PowerShell.

  1. Click on the Start button from within one of the WFS VMs
  2. Right-click on the Windows Powershell icon, select More and select Run as administrator
  3. When the PowerShell window opens up, type the following command: 
    New-SmbShare -Name <Share Name> -Path <Share folder path> -FullAccess Everyone
  4. The PowerShell command above will create a new share with the user-defined name on a user-specified folder path and will allow full access to all users.  NOTE: Modify the access type and users as needed. For example: 
    wfs_13.png
     
  5. Verify that the new share appears in the Shares window
    wfs_14.png
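Where full access for Everyone is not appropriate, the same cmdlet can scope permissions to specific domain groups. The share name, path, and group names below are placeholders for this sketch.

    New-SmbShare -Name FinanceShare -Path E:\Shares\Finance -FullAccess "WSS\FinanceAdmins" -ChangeAccess "WSS\FinanceUsers"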

SMB Transparent Failover - Continuous Availability

SMB Transparent Failover (also known as Continuous Availability) is a native Windows feature that allows SMB client data transfers to continue non-disruptively in the event of a WFS cluster failover. SMB clients would experience a short pause in IO transfer, but SMB services resume gracefully once the file server role has failed over onto the adjacent WFS VM. This setting is enabled by default when a share is created using the New Share Wizard or via PowerShell. Customers can confirm Continuous Availability is set by checking the properties of the SMB share in Server Manager or using PowerShell with the following command:

Get-SmbShare -Name <Share Name> | Select *
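If the output shows ContinuouslyAvailable set to False, it can be enabled in place with Set-SmbShare; the share name is a placeholder and -Force simply suppresses the confirmation prompt.

Set-SmbShare -Name <Share Name> -ContinuouslyAvailable $true -Force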

To ensure SMB Transparent Failover works as expected, the following requirements must be met by the SMB client:

  • SMB client computers must be Windows 8, Windows 10, Windows Server 2012, or Windows Server 2016
  • SMB clients must support SMB 3.0
  • SMB clients must have joined the same domain as the WFS VM cluster

More details regarding this Windows feature can be found in the following link: https://blogs.technet.microsoft.com/filecab/2016/03/25/smb-transparent-failover-making-file-shares-continuously-available-2/

Creating NFS Shares

Creating NFS shares in WFS is simple and can be accomplished in several ways, including the Server Manager GUI or Windows PowerShell. The following steps illustrate a new NFS share creation using the Server Manager GUI; a PowerShell sketch follows the steps.     

Note: Cross-protocol access, where a folder is accessed via both SMB and NFS, is not supported due to permissions conflicts. A folder should be created as either an SMB share or an NFS share, not shared using both protocols. 

  1. From the Server Manager, navigate to Files and Storage Services and highlight Shares.
  2. Click on TASKS and select New Share…
    wfs_15.png

  3. Select NFS Share - Quick and click Next
    wfs_16.png

  4. Select the desired File Server. If the desired share is to be from a new folder, choose option a below and select the desired volume for the new folder. If the folder to be shared already exists, select option b and browse for the existing folder. 

  5. Click Next.

  6. Give the share a name and click Next.
    wfssnap4.jpg

  7. Select the Authentication type for the client server. For simplicity, select No server authentication (AUTH_SYS).

    Also select Enable unmapped user access and Allow unmapped user access by UID/GID

    NOTE: Customers can choose Kerberos v5 authentication as long as it is configured appropriately in their environment.
    wfs_18.png

  8. Click Next

  9. In the next step, customers choose the client hosts that are allowed to mount the NFS shares. Customers can select individual host names (or IP addresses), groups, or all host machines. For simplicity in this example, select All Machines.

  10. Set the share permissions to Read/Write.
    wfs_19.png

  11. Click Add.   

  12. Confirm the selected hosts, groups, or all machines appear. Click Next

  13. In the next Permissions window, click Customized permissions...

  14. Click Add.

  15. Click Select a Principal.

  16. In the empty box, type: everyone  and click Check Names.

  17. "Everyone" should be recognized by Windows Server. Click OK.

  18. Click OK again.

  19. Confirm Everyone was added to the Permissions entries. Click OK.

  20. Click Next.

  21. Confirm the selections and click Create. Then click Close.

    Note: Properties and settings for this share may be edited after the share creation.   
    wfsnfs0.JPG

 

  22. The new share should now appear in the Shares field of Server Manager.
    wfsnfs1.JPG
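For reference, a PowerShell sketch that approximates the wizard choices above is shown below, assuming the Server for NFS cmdlets are installed with the role. The exact parameter names and values should be verified against Get-Help New-NfsShare in the environment; the share name and path are placeholders.

    # Approximate PowerShell equivalent of the wizard selections above (verify
    # parameters with Get-Help New-NfsShare); host access and permissions can
    # then be granted with Grant-NfsSharePermission.
    New-NfsShare -Name "mynfsshareA" -Path "E:\Shares\mynfsshareA" -Authentication sys -EnableUnmappedAccess $true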

NFS Settings for Non-Disruptive Cluster Failovers

Similar to SMB Transparent Failover, NFS clients will only experience a brief pause in IO in the event of a WFS cluster failover or File Server migration. There should be no IO error during these events. To ensure this behavior, as a best practice the NFS clients should use a hard mount rather than a soft mount for the file shares. The below steps show how to mount an NFS share with a hard mount:

Windows Client

  1. Open a command window on the Windows Client.
  2. Mount the NFS share with the following command: 
    mount -o mtype=hard \\<FileServerName or IP>\<ShareName> <drive letter>
    
    Example:
    C:\Users\Administrator.WSS.000>mount -o mtype=hard \\wfsfilerA\mynfsshareA g:
    g: is now successfully connected to \\wfsfilerA\mynfsshareA
    
    The command completed successfully.
    
  3. Confirm the mount type with the following command: 
    mount
    Example:
    C:\Users\Administrator.WSS.000>mount
    
    Local     Remote                       Properties
    --------------------------------------------------------------
    g:        \\wfsfilerA\mynfsshareA      UID=-2, GID=-2
                                           rsize=32768, wsize=32768
                                           mount=hard, timeout=0.8
                                           retry=1, locking=yes
                                           fileaccess=755, lang=ANSI
                                           casesensitive=no
                                           sec=sys
    

Linux Client

The syntax may vary slightly depending on the Linux client OS.

  1. Run the following command on a Linux terminal:
    1. Ubuntu Client: 
      sudo mount -o hard <FileServerName or IP>:/<ShareName> <mount point>
      
      Ubuntu Example:
      userA@ubuntuserver:/mnt$ sudo mount -o hard wfsfilerA.wss.local:/mynfssharea /mnt/mynfsshare
    2. CentOS Client: 
      mount -t nfs -o hard --source <FileServerName or IP>:/<ShareName> --target <mount point>
      CentOS Example:
      [userA@centosserver]# mount -t nfs -o hard --source 10.21.1.106:/mynfssharea --target /mnt/mynfsshare
  2. Confirm the mount type with the following command:
    nfsstat -m
    
    Example:
    userA@ubuntuserver:/home/users/temp$ nfsstat -m
    Flags: rw,relatime,vers=3,rsize=32768,wsize=32768,namlen=255,hard,proto=tcp,
    timeo=600,retrans=2,sec=sys,mountaddr=10.21.1.106,mountvers=3,
    mountport=2049,mountproto=udp,local_lock=none,addr=10.21.1.106
    

Active/Passive vs. Active/Active File Servers

Pure Storage supports both Active/Passive and Active/Active configurations for WFS. There are simple pros and cons to either configuration that are worth considering.

wfs_27.png

An Active/Passive configuration in WFS refers to one or more File Servers running solely on one WFS VM. The customer is free to choose the WFS VM on which the File Server resides since it does not need to be aligned with the Primary or Secondary roles of the FlashArray controllers. In this configuration, since there is a WFS VM that is idle, the full performance potential of WFS is not realized. However, this configuration does allow for minimal degradation in performance in the event of a WFS VM or FlashArray controller failover.

An Active/Active configuration in WFS refers to two or more File Servers distributed across both WFS VMs. This configuration allows better load balancing between the two WFS VMs. Furthermore, since CPU and memory resources from both VMs are utilized, performance throughput can be increased. However, in the event of a WFS VM or FlashArray controller failover, total performance can degrade by up to 50% if both WFS VMs were running at 100% of their resources prior to the failover.
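For an Active/Active layout, the FailoverClusters PowerShell module can distribute the roles and record each role's preferred owner so that failback returns it to its usual VM. The role and node names below are placeholders for this sketch.

    # Place one File Server role on each WFS VM and set preferred owners.
    Move-ClusterGroup -Name "FileServerA" -Node "WFS-ct0"
    Move-ClusterGroup -Name "FileServerB" -Node "WFS-ct1"
    Set-ClusterOwnerNode -Group "FileServerA" -Owners WFS-ct0,WFS-ct1
    Set-ClusterOwnerNode -Group "FileServerB" -Owners WFS-ct1,WFS-ct0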

Snapshots

WFS Data Volume

A WFS data volume (including the quorum witness volume) is no different from any other FlashArray volume. Hence, FlashArray snapshots can be taken of any WFS data volume and used to perform backups or restores of the data from a previous point in time. WFS data volumes can also be placed into a Protection Group and snapshotted together as a single consistent set. To perform a snapshot of a WFS data volume, users may use the GUI or the FlashArray CLI.

Create WFS Snapshot

FlashArray GUI
  1. Log onto the FlashArray GUI using a browser.
    1. Select the Storage tab.
    2. Highlight the WFS data volume desired for the snapshot.
    3. Select the corresponding Snapshot subtab.
    4. Click on the actions drop-down and select Create Snapshot.wfssnap.JPG
  2. Enter an optional suffix which may be a simple descriptive name or date.    
    wfssnap2.JPG
  3. The FlashArray snapshot will now appear in the Snapshots subtab. wfssnap3.JPG

 

FlashArray CLI

From the CLI, run the command

purevol snap --suffix <suffix> <WFS_data_vol1> <WFS_data_vol2> <WFS_data_volN>

 where the optional <suffix> may be a simple descriptive name or date.

Example:

pureuser@tmefa03> purevol snap --suffix snapA wfsfilerA-data
Name                  Size  Source          Created                  Serial
wfsfilerA-data.snapA  555G  wfsfilerA-data  2018-03-06 10:40:41 PST  640B932891844E5900011231

Restore WFS Snapshot

Customers can restore a whole WFS data volume or restore individual files from a FlashArray-based snapshot. A restore of the whole volume will overwrite the entire existing WFS data volume with the contents of the WFS snapshot data volume. Therefore customers should take caution when performing this and be certain that the data in the snapshot is correct. Restoring the entire WFS data volume should also be an offline process. To ensure consistency, the WFS data volume should be taken offline prior to the restore and brought back online after the restore completes. 

Single file restores using FlashArray-based snapshots are much more granular and do not require the existing WFS data volume to be taken offline. However, due to the behavior of the Microsoft Cluster Disk Driver, any WFS snapshot data volume that is mounted to the WFS VMs will be kept offline. This is because the original WFS data volume and the WFS snapshot data volume share an identical Unique ID, resulting in a collision. To work around this behavior, users can change the Unique ID of the WFS snapshot data volume by mounting it onto a separate standalone Windows Server and running the native Diskpart tool. Once the WFS snapshot data volume's Unique ID has been changed, users may then mount the snapshot volume to the WFS VMs and bring it online. More details on this procedure can be found in a Pure Storage blog on this topic: https://support.purestorage.com/Solu...Shared_Volumes
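For reference, changing the Unique ID with Diskpart on the standalone Windows Server (where the snapshot copy is mounted) looks roughly like the following. The disk number and new ID are placeholders, and the full procedure in the linked article should be followed.

    diskpart
    DISKPART> list disk
    DISKPART> select disk <disk number of the snapshot copy>
    DISKPART> uniqueid disk
    DISKPART> uniqueid disk id=<new GUID for a GPT disk, or 8-digit hex value for an MBR disk>
    DISKPART> exit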

As an alternative to using FlashArray-based snapshots to recover individual files, customers have the option of using Microsoft's Shadow Copies and Previous Versions. This feature allows the customer to make point in time copies of individual volumes from within WFS. SMB and NFS clients can then use the Previous Versions feature to recover individual files from the client machine. Details to configure Shadow Copies and Previous Versions are below.

Below are steps to restore WFS data from a FlashArray-based snapshot by either restoring individual files or by restoring the entire existing volume.

Restoring single files using FlashArray-based snapshots

Once steps have been taken to change the Unique ID of the WFS snapshot data volume, the below steps will allow users to mount the WFS snapshot data volume and recover individual files (a CLI alternative for the copy step follows the list).  

  1. Log onto the FlashArray GUI.
    1. Highlight the desired WFS data volume.
    2. Click on the Snapshot subtab.
    3. Click on the drop-down menu of the desired snapshot and select Copy Snapshot.wfssnaprestore7.JPG
  2. Enter name of new volume. This step will instantaneously copy the snapshot onto a base volume.
  3. The new volume will now appear under the Volumes windows.
    1. Highlight the new volume.
    2. Click on the Connected Hosts subtab.
    3. Click on the actions drop-down menu and select Connect Hosts.wfssnaprestore8.JPG
  4. Select the @WFS host and click Confirm.wfssnaprestore9.JPG
  5. Log onto either of the WFS VMs.
  6. Right-click on the Start menu and open up Disk Management
  7. Click Actions on the tool bar and select Rescan Disks.
    wfssnaprestore10.JPG 
  8. The volume should appear after the rescan. Right-click on the left side of the volume and select Online. wfssnaprestore11.JPG
  9. Once the volume is online and associated with a drive letter, users can now access any of the files on the volume and restore as needed.
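Steps 1 and 2 above can also be performed from the FlashArray CLI with purevol copy, reusing the snapshot from the earlier purevol snap example; the new volume name is a placeholder. The new volume can then be connected to the @WFS host as in steps 3 and 4.

    purevol copy wfsfilerA-data.snapA wfsfilerA-data-restore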
Restoring entire volume

As previously mentioned, restoring a whole volume will require the volume to be taken offline. All clients will lose access to any shares associated with this volume until the volume is fully restored.

  1. Log into the WFS VM and go to the Failover Cluster Manager.
    1. Expand the Storage tree and go to Disks. 
    2. Right-click on the Cluster Disk that corresponds to the volume to be restored.
    3. Select Take Offline.wfssnaprestore1.jpg
  2. Click Yes to confirm.
  3. Notice the Cluster Disk will reflect an offline status. The File Server may reflect as either offline or partially running, depending on whether any other volumes are associated with the File Server.
    wfssnaprestore2.JPG
  4. Log onto the FlashArray GUI.
    1. Highlight the corresponding WFS data volume to be restored.
    2. Click on the Snapshot subtab.
    3. Click on the dropdown menu of the desired snapshot and select Restore Volume from Snapshot.wfssnaprestore4.JPG
  5. Click Confirm to proceed with the restore. The restore will be instantaneous.
  6. Go back to the Failover Cluster Manager on the WFS VM.
  7. Right-click on the File Server corresponding to the volume and select Start Role. This will bring the File Server as well as the WFS data volume back online.  wfssnaprestore6.JPG
  8. The volume has been restored and all SMB/NFS clients should now have access to their shares.
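The restore in steps 4 and 5 can equivalently be performed from the FlashArray CLI with the same purevol copy --overwrite command used in the replication section later in this document. The names below reuse the earlier snapshot example, and the cluster disk must remain offline while the copy runs.

    purevol copy --overwrite wfsfilerA-data.snapA wfsfilerA-data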

  

WFS VM boot volume 

WFS VMs reside on special boot volumes that are treated slightly differently from other FlashArray volumes. From the customer’s point of view, the WFS VM boot volumes cannot be manually deleted or edited in any way. However, snapshots of the WFS VM boot volumes can be taken (requires Purity OE 4.10.7 or higher). For consistency, snapshots of both WFS boot volumes as well as the quorum witness volume should be taken concurrently at the same point in time. As a possible use case, customers can take boot volume snapshots as a safety measure prior to applying Windows patches on the WFS VMs.  

It should be noted that while snapshots of the WFS VM boot volumes can be created by the customer, Pure1 Support assistance is required to restore or rollback a live WFS VM boot volume.

To create WFS boot volume snapshots using the GUI:

  1. Navigate to the Storage tab and click on Volumes.
    wfs_28.png

  2. From the Volumes subtab, click on the actions icon on the right hand side, and select Create Snapshots of Volumes.
    wfs_29.png
  3. Select both WFS VM boot volumes as well as cluster quorum witness:
    1. @WFS_boot-ct0
    2. @WFS_boot-ct1
    3. wfscluster-witness
      wfs_30.png

  4. Enter a suffix for the snapshot. It can be a simple descriptive name or date.
    wfs_31.png
  5. Click Confirm.
  6. The snapshots will appear under the Snapshots subtab for the respective volumes:

    1. @WFS_boot-ct0
      wfs_32.png

    2. @WFS_boot-ct1
      wfs_33.png

    3. wfscluster-witness
      wfs_34.png

 

To create WFS boot volume snapshots using the CLI:

  1. Log onto the FlashArray and run the following command:
    purevol snap --suffix <suffix> @WFS_boot-ct0 @WFS_boot-ct1 wfscluster-witness
    
    where <suffix> can be a simple descriptive name or date.

    Example:
    pureuser@tmefa03> purevol snap --suffix snapEFGH @WFS_boot-ct0 @WFS_boot-ct1 wfscluster-witness
    Name                         Size  Source              Created                   Serial
    @WFS_boot-ct0.snapEFGH       100G  @WFS_boot-ct0       2018-03-05 15:27:15 PST   640B932891844E590001122D
    @WFS_boot-ct1.snapEFGH       100G  @WFS_boot-ct1       2018-03-05 15:27:15 PST   640B932891844E590001122E
    wfscluster-witness.snapEFGH  1G    wfscluster-witness  2018-03-05 15:27:15 PST   640B932891844E590001122F
    
  2. The snapshot will appear in the output of the following command: 
    purevol list --snap <optional volume name>
    Example:
    pureuser@tmefa03> purevol list --snap *.snapEFGH
    Name                         Size  Source              Created                   Serial
    @WFS_boot-ct0.snapEFGH       100G  @WFS_boot-ct0       2018-03-05 15:27:15 PST   640B932891844E590001122D
    @WFS_boot-ct1.snapEFGH       100G  @WFS_boot-ct1       2018-03-05 15:27:15 PST   640B932891844E590001122E
    wfscluster-witness.snapEFGH  1G    wfscluster-witness  2018-03-05 15:27:15 PST   640B932891844E590001122F

 

Volume Shadow Copy Service (VSS)

VSS is a service provided by Microsoft to facilitate application consistent snapshots that can be used to perform volume backups or restoration of data on a Windows server. Since WFS is composed of two Windows Server VMs, it uses VSS the same way any Windows server would and works well with the Pure Storage VSS plugin (version 1.6 or higher). The Pure Storage VSS plugin allows customers to run Windows Diskshadow commands that will coordinate and create application consistent FlashArray snapshots from the Windows interface. The Windows Diskshadow commands can also export and mount the VSS triggered FlashArray snapshots onto the WFS VM. Furthermore, customers can download a script that will automate the sequence of Diskshadow commands to take snapshots and/or mount them to the WFS VMs.

IMPORTANT Note: The maximum volume size supported by Microsoft for VSS is 64 TB. For VSS, a cluster size (AKA allocation unit size) should be 16 KB or larger.

Additional best practices for Microsoft Shadow Copies are available: https://docs.microsoft.com/en-us/pre...53975(v=ws.11)

More detailed information about VSS can be found here: Volume Shadow Copy Service (VSS) 

Download the Pure Storage VSS plugin here: Volume Shadow Copy Service (VSS) Hardware Provider

Powershell script to automate VSS Diskshadow commands: http://www.purepowershellguy.com/?p=2131

Previous Versions

Volume Shadow Copies can be used to create point in time Windows-based snapshots of WFS data volumes. The SMB or NFS clients can then use the Previous Versions feature to view or recover individual files and folders that may have been accidentally altered, corrupted, or deleted. Changes are tracked by the Microsoft Windows Server and by default stored on the same source volume. Customers have the option to store the delta changes on a separate dedicated volume as well as set various other options, including maximum capacity limits and snapshot schedules.

Enable Shadow Copies

  1. To enable Shadow Copies on a particular volume, log onto one of the WFS VMs. 
  2. Launch the Failover Cluster Manager.
  3. Open the Storage tree and click on Disks.
  4. In the middle window pane, right-click on the desired volume and select Properties.

wfs_previousver1.JPG

 

  5. In the Properties menu,
    1. Click on the Shadow Copies tab.
    2. Highlight the desired volume.
    3. Click Enable and click OK.
      wfs_previousver2.JPG

Note: Once enabled, a default schedule will be set to trigger a Shadow Copy twice a day.

 

Optional Shadow Copy Settings

  1. To change the Shadow Copy snapshot schedule or other settings, go to Disk Management and right-click on the right hand side of the desired volume and select Properties.

Note: If all the options are grayed out, that means the WFS VM is not the current owner of the volume. If this is the case, log onto the other WFS VM and perform the same actions in this step.

  2. Click on Settings.
    1. To set a different target volume of which to store delta changes of the Shadow Copy snapshots, click on the pull down menu and select desired target volume.
    2. To set a capacity limit of which to store delta changes of the Shadow Copy snapshots, select Use Limit and enter a maximum size value.
    3. To set Windows-based snapshot schedule 
      1. Click Schedule and set schedule as desired.
      2. Remove or add additional point in time Shadow Copies as needed.
        wfssnaprestore15.JPG
  3. Click OK.

 

Restoring single files using Previous Versions

Previous Versions of files and folders can be accessed from the WFS VMs or from the clients over SMB or NFS. 

Windows SMB clients
  1. Open the SMB mounted volume.
  2. Right-click on the desired file or folder and select Properties.
  3. Select the Previous Versions tab. A list of previous versions of the file or folder will be listed along with the date the file/folder was modified.
    1. To view the file contents, select Open.
    2. To restore the contents of the file to this previous version, select Restore. This will overwrite the current existing file or folder with the contents of the previous version of the file or folder. 
      wfssnaprestore16.JPG

 

NFS Clients

Clients that have the shares mounted over NFS can have read-only access to previous versions of their files and folders via subdirectories that are visible on the NFS mounted share and designated with a specific naming convention (.@GMT-YYYY.MM.DD-HH:MM:SS). Each subdirectory contains the files and folders of the share at the point in time the Shadow Copies were created. To restore, users can copy their files and folders back from these subdirectories.

wfssnaprestore17.JPG

For Linux NFS clients, these subdirectories reside on the NFS shares but are hidden by default. They can be made visible using the ls -al command.

wfssnaprestore12a.JPG

Replication

Many enterprise environments require site availability and data replication between two physical locations to protect from power outages and/or other disasters that may disrupt access to the primary datacenter. The FlashArray has native replication features to protect data from such events. WFS can leverage the FlashArray's asynchronous replication feature to perform failovers and fail backs between two different sites. Details about the FlashArray's replication capabilities can be viewed in the FlashRecover Replication Configuration and Best Practices Guide.

This section provides steps to set up replication of WFS data volumes between two FlashArrays, as well as guidance and best practices on the workflow during a failover scenario. Below are high-level diagrams showing the flow of the replication setup.

This example diagram depicts a standard WFS setup on Site A with a file share mapped to a client. The file share has data that resides on a volume called VolA.

wfs_rep1.JPG

The next diagram shows the same example WFS setup on Site A, except with asynchronous replication configured to a secondary FlashArray on Site B. A separate secondary WFS instance is also configured on the secondary FlashArray on Site B. This secondary WFS instance is configured with the same number of file servers and shares as the primary WFS instance. A snapshot of VolA data volume is created on the primary FlashArray and replicated to the secondary FlashArray. The replicated snapshot is then copied (and overwritten) onto VolA of the secondary FlashArray. The secondary FlashArray now has the data contents on its respective VolA, which is connected to the secondary WFS instance. Notice that the file server on the secondary FlashArray (FileServerB) is not online and will be kept offline until a failover occurs.

wfs_rep2.JPG

 

Upon a failover, the client would be able to connect to FileServerB to gain access to the replicated data. Depending on customer preference, clients can reconnect directly to the file servers on Site B or leverage Aliases, which allow clients a minimally disruptive failover experience.

wfs_rep3.JPG

 

Replication Setup

The following sections provide steps to configure asynchronous replication between two FlashArrays. This section does not cover WFS setup on the secondary site, as it is assumed that two separate WFS instances have already been set up on the two respective FlashArrays. 

Replication Prerequisites:

  1. Both primary and secondary WFS instances have been setup properly and are accessible by clients on the network. 
  2. The secondary WFS instance must have the same number of file servers as the primary WFS instance.
  3. The size of each data volume attached to a secondary WFS file server must match the volume attached to the equivalent file server on the primary WFS instance.
  4. Both WFS instances are connected to the same domain and DNS server.
  5. Replication ports and network routing are configured correctly between the two FlashArrays.

 

Creating a Replication Relationship

The first step is to establish a relationship between the two FlashArrays. 

  1. On the primary FlashArray, log into the GUI. Navigate to the Storage tab, go to the Connected Arrays section and click on the pull-down menu. Select Get Connection Key.

wfs_rep4.JPG

  2. Click Copy to copy the Connection Key.
  3. Go to the secondary FlashArray GUI. Navigate to the Storage tab, go to the Connected Arrays section and click on the pull-down menu. Select Connect Array.

wfs_rep5.JPG

  4. Enter the information into the provided fields:
    1. Management IP address or Full Qualified Domain Name (FQDN) of the primary FlashArray
    2. Select Asynchronous Replication
    3. Paste the Connection Key (from primary FlashArray in step 1) into the Connection Key field.
    4. Replication Address (replbond) of the primary FlashArray. 

wfs_rep6.JPG

Create Protection Groups

Once a replication relationship has been established between the two FlashArrays, the next step is to create a protection group. A protection group replicates one or more volumes as a single consistent set, keeping all volumes within the group consistent with one another. Additionally, users can apply snapshot and/or replication policies and schedules to the protection group. 

  1. On the primary FlashArray GUI, go to the Storage tab, click on Protection Groups, and click on the + icon to add a new Protection Group.

wfs_rep8.JPG

  2. Enter a name for the new protection group. The example below will use the name WFS-data.

 

  3. Once created, the new protection group entry will appear with a hyperlink. Click on the new protection group hyperlink.

wfs_rep9.JPG 

  4. The hyperlink will bring up the properties of the protection group. Click on the pull-down icon in the Members box and select Add Volumes.

wfs_rep10.JPG

 

  5. Select the WFS data volume(s) that are to be replicated and click Add. 

Note: WFS boot volumes are not supported for replication. Only WFS data volumes. 

 

  6. Once the WFS data volumes have been added to the protection group, go to the Targets box, click on the pull-down icon, and select Add.

wfs_rep10.JPG 

  7. Select the secondary FlashArray as the target. 

 

  8. Next, apply a replication schedule for the protection group. Click on the Edit icon in the Replication Schedule box.

wfs_rep12.JPG

 

  9. Click Enabled. Then set the replication schedule that best suits the desired RPO (Recovery Point Objective) and data retention policies. Click Save.

wfs_rep13.JPG

 

  10. Go to the secondary FlashArray GUI. Navigate to the Storage tab. Under the Protection Groups window, there is an entry for the protection group that was created earlier. If the entry shows "Disallowed on this array", click on the Edit icon and select Allow.

wfs_rep14.JPG

 

  11. Once allowed, an initial sync will occur by replicating a snapshot of all the volumes in the protection group. Replication status and details of the protection group snapshot can be viewed in the Protection Group Snapshots box of the target FlashArray, which in this case is the secondary FlashArray. Any subsequent protection group snapshots will also appear in this box.

wfs_rep15.JPG

 

Copy WFS Data

At this point, WFS data volumes are being replicated between the primary FlashArray and the secondary FlashArray. It is presumed that the secondary WFS instance has been configured with the same number of file servers as well as the same number, and size, of WFS data volumes as the primary WFS instance. However, there should be no data on the secondary WFS data volumes since they are simply placeholders onto which replicated data will be copied. In order to do this, the replicated data volumes on the secondary FlashArray (which are currently viewed as snapshots) will be copied and overwritten onto the WFS data volumes of the secondary WFS instance. The diagram below depicts a replicated VolA snapshot being copied onto a volume named VolA of the secondary FlashArray. 

wfs_rep16.JPG

The following steps show the procedure using the CLI. Note that the same function can be performed on the FlashArray GUI.

  1. Log into the CLI of the secondary FlashArray.

  2. Copy the replicated snapshot onto the WFS data volume.

purevol copy --overwrite <replicated_snapshot_vol_name> <WFS_data_volume_name>

Example:

pureuser@tme-ndu> purevol copy --overwrite tmefa03:WFS-data.1.VolA SiteBVolA
Name       Size  Source             Created                  Serial
SiteBVolA  222G  tmefa03:SiteAVolA  2018-06-19 15:48:30 PDT  9919382D2CED44D4000127B3

Note:

- In the above command, tmefa03:WFS-data.1.VolA is the name of the replicated snapshot. The snapshot name can be viewed using the CLI command   

purevol list --snap

  Below is a breakdown of the replicated snapshot naming convention.

tmefa03: refers to the name of source FlashArray from which the replication snapshot was created. 

WFS-data.1 refers to the Protection Group name and a suffix which in this case is 1.

VolA refers to the name of the replicated source volume from which the snapshot was taken.

- In the above command, SiteBVolA refers to the WFS data volume on secondary WFS instance. This volume should be (if not already) connected and mounted to the secondary WFS instance and added to the WFS Failover Cluster.  

 

Create File Servers on Secondary WFS

As a prerequisite, the secondary WFS instance should have the same number of file servers as the primary WFS instance; there should be a 1:1 ratio of file servers between the primary and secondary WFS instances.

  1. If the file servers on the secondary WFS instance have not been created yet, proceed and create them. Refer to the Adding File Servers section for any guidance.

 

wfs_rep30.JPG

Create Shares on Secondary WFS 

Once replicated data contents have been copied to VolA on the secondary FlashArray, it is a best practice to configure as many of the same shares as possible on the secondary WFS instance as there are on the primary WFS instance. Doing so reduces the time that clients need to wait for their shares to be available on the secondary site during a failover event, effectively reducing the Recovery Time Objective (RTO). Otherwise after a failover event, client users will not have access to the shares until they are created.

  1. Log onto any of the WFS VMs on the secondary WFS instance using Remote Desktop or VNC. 
  2. Create the exact same shares and permissions as on the primary WFS instance. Steps on creating shares are in the previous sections Creating SMB Shares and Creating NFS Shares.

IMPORTANT: It is a best practice to keep the share names, paths, and permissions on the secondary WFS instances the same as the primary WFS instance. This allows the usage of Aliases which would allow a minimally disruptive failover experience for the client. Notice in the below screenshot that the share names and corresponding local paths are identical between the primary and secondary WFS instances. 

wfs_rep17.JPG

  3. Once shares are created on the secondary WFS instance, verify access to the shares by mounting/mapping them to clients. Confirm the data content is valid.

 

Failover and Aliases

During a failover, the clients that have already mapped/mounted their shares can access their data by mapping/mounting their shares from the secondary WFS file servers. The Storage Administrator would need to provide the clients with the secondary WFS file server's FQDN or IP address in order to map/mount the shares from a different location. Requiring clients to map/mount to a different file server may be an acceptable workflow for some customer environments. Other environments may require a more seamless client experience. 

wfs_rep19.JPG

 

Aliases

As an option, customers may deploy Aliases, which provide a less disruptive failover experience without requiring clients to re-map/re-mount their shares using the secondary file server names or IP addresses. Setting up Aliases is very simple. An example diagram is given below.

  1. An alias (CNAME) entry is created in the DNS server. The alias entry would point to the existing DNS host entry of the file server on the primary FlashArray during normal production. The below example shows an alias created on the DNS server with the name FileServerX which points to FileServerA.
  2. Both WFS instances would also have the same alias name associated with their respective file servers. For example, FileServerA on the primary FlashArray is set with an alias called FileServerX. FileServerB on the secondary FlashArray would also have the same FileServerX alias set. Setting this alias allows both file servers to respond to any network calls to FileServerX.
  3. Lastly, the client will map/mount shares by connecting to the alias name, rather than the direct file server FQDN (or IP address). For example, rather than mapping a share using \\fileserverA\myshare, the client would use \\fileserverX\myshare. Since the DNS alias points fileserverX to fileserverA, the client will be redirected to fileserverA. 
  4. This provides a simple virtual layer that can easily redirect clients between the two sites by simply changing the DNS alias to point between the primary file server (during normal production) or the secondary file server (during failover or failover test). 
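For example, a Windows client could map a share through the alias rather than the file server FQDN; the drive letter is a placeholder, and the alias and share names reuse the example above.

    New-SmbMapping -LocalPath G: -RemotePath \\fileserverX\myshare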

 

During Production:

wfs_rep20.JPG

 

During Failover:

wfs_rep20.JPG

 

 

Configuring aliases is non-invasive and very simple. Steps are below:

Create Alias on DNS Server

  1. Log onto the DNS server.
  2. Open up the DNS Manager.
  3. Right-click on the desired DNS zone and select New Alias.

wfs_rep22.JPG

  4. Enter the desired alias name.
  5. Enter the target host FQDN. This should be the primary file server that the alias would point to during normal production. If needed, use the Browse button to help locate the primary file server's FQDN.

wfs_rep23.JPG

  6. Click OK.
  7. The new alias entry should now appear in the DNS Manager list.

wfs_rep24.JPG

8. Test out and ensure the alias works. Log onto another host and ping the alias name.

wfs_rep25.JPG

 

Create Alias on Primary WFS instance

  1. Log onto any one of the WFS VMs on the primary FlashArray via Remote Desktop or VNC.
  2. Open up a PowerShell window.
  3. Run the command:
Get-ClusterResource "<file_server_name>" | set-ClusterParameter Aliases <alias_name>

where:

<file_server_name> is the name of the file server on the primary FlashArray

<alias_name> is the name of the alias. This should be the same alias name created on the DNS server.

Example:

PS C:\Users\Administrator.WSS> Get-ClusterResource "FileServerA" | set-ClusterParameter Aliases FileServerX
WARNING: The properties were stored, but not all changes will take effect until FileServerA is taken offline and then
online again.
  4. To view and confirm the alias setting, run:  Get-ClusterResource "<file_server_name>" | get-ClusterParameter Aliases 

Example:

PS C:\Users\Administrator.WSS> Get-ClusterResource "FileServerA" | get-ClusterParameter Aliases 
Object      Name    Value       Type
------      ----    -----       ----
FileServerA Aliases FileServerX String

Note: To remove the alias, run:  Get-ClusterResource "<file_server_name>" | set-ClusterParameter Aliases ""

 

  5. Next, restart the file server by going to the Failover Cluster Manager on the primary WFS instance. Right-click on the file server role and click Stop Role.

wfs_rep26.JPG

  6. Once stopped, right-click on the same file server role and click Start Role.

 

Create Alias on Secondary WFS instance (Same steps will be performed as on Primary WFS instance)

  1. Log onto any one of the WFS VMs on the secondary FlashArray via Remote Desktop or VNC.
  2. Open up a PowerShell window.
  3. Run the command:
Get-ClusterResource "<file_server_name>" | set-ClusterParameter Aliases <alias_name>

where:

<file_server_name> is the name of the file server on the secondary FlashArray

<alias_name> is the name of the alias. This should be the same alias name created on the DNS server.

Example:

PS C:\Users\Administrator.WSS> Get-ClusterResource "FileServerB" | set-ClusterParameter Aliases FileServerX
WARNING: The properties were stored, but not all changes will take effect until FileServerB is taken offline and then
online again.
  4. To view and confirm the alias setting, run:  Get-ClusterResource "<file_server_name>" | get-ClusterParameter Aliases 

Example:

PS C:\Users\Administrator.WSS> Get-ClusterResource "FileServerB" | get-ClusterParameter Aliases 
Object      Name    Value       Type
------      ----    -----       ----
FileServerB Aliases FileServerX String

Note: To remove the alias, run:  Get-ClusterResource "<file_server_name>" | set-ClusterParameter Aliases ""

 

  5. Next, restart the file server by going to the Failover Cluster Manager on the secondary WFS instance. Right-click on the file server role and click Stop Role.

wfs_rep26.JPG

  6. Once stopped, right-click on the same file server role and click Start Role.

 

Test out alias

  1. Log onto a client. 
  2. Flush DNS cache. On Windows client, run the command ipconfig /flushdns in a command window.
  3. Map/Mount a share using the alias.

wfs_rep27.JPG

 

Finalize Replication Setup

The final step of the replication setup is to take the secondary WFS file server offline. It will be brought online only for failover or failover test events. Furthermore, in order for newly replicated data to be recognized by the WFS file systems on the secondary site, it is best to keep the file server offline and bring it online only after replicated data has been copied to the secondary WFS data volumes.

  1. Log onto any one of the secondary WFS VMs. 
  2. Open Failover Cluster Manager.
  3. Right-click on the file server role, and select Stop Role.

wfs_rep28.JPG

 

Replication Failover and Failover Testing

Customers can perform various levels of testing to ensure data is properly replicated and failover functionality works as expected.

Validate Replicated Data

To check and ensure data is valid on the secondary FlashArray, no failover is needed. Customers simply: 

  1. Copy the latest replicated snapshot(s) onto the secondary WFS data volume(s) according to steps provided in the Copy WFS Data section.
  2. Log onto any of the secondary WFS VMs via remote desktop or VNC.
  3. Open the Failover Cluster Manager.
  4. Right-click on the file server role, and select Start Role. 
  5. Explore the volumes and validate the data.
  6. When data has been validated, take secondary file servers back offline. Go to Failover Cluster Manager, right-click on the file server role, and select Stop Role. 

Failover Testing

The following steps provide guidance to perform a failover test and confirm the workflow. 

  1. Copy the latest replicated snapshot to the WFS data volume(s) according to steps provided in the Copy WFS Data section.
  2. Log onto any of the secondary WFS VMs via remote desktop or VNC.
  3. Open the Failover Cluster Manager.
  4. Right-click on the file server role, and select Start Role.
  5. Log onto the DNS server.
  6. Right-click on the alias and select Properties. Change the alias to point to the secondary file server. Use the Browse button to help locate the secondary file server's FQDN.

wfs_rep29.JPG

  7. Click OK.
  8. Log onto any of the primary WFS VMs via remote desktop or VNC.
  9. Open the Failover Cluster Manager.
  10. Right-click on the primary file server role, and select Stop Role. This step simulates the primary file server being down.

Note: Clients using aliases to access shares may temporarily lose access to the shares until the client DNS cache is flushed. 

 

WFS clients 

If WFS clients are connecting to shares using Aliases:

  1. From the client, flush the DNS cache. 

Example Windows client using a command window:

ipconfig /flushdns
  2. On the client, open the existing mapped share and refresh it. Some Linux clients may need to unmount and remount using the same DNS alias name.
  3. Confirm the share is accessible and the data is valid.

 

If WFS clients are connecting to shares directly via the file server on the primary WFS instance:

  1. Clients can map/mount shares using the secondary WFS file server FQDN or IP address.
  2. Confirm share is accessible and the data is valid.

 

Failover Event

In the event that the primary WFS instance goes offline, the secondary WFS instance can be brought online to provide access to the WFS data. Note that WFS data is replicated asynchronously, so the secondary WFS instance is not expected to have the very latest data. The amount of data that has not yet been replicated depends on the replication period of the protection group; for example, with a one-hour replication period, up to roughly one hour of the most recent writes may not yet exist on the secondary FlashArray. 

The following section provides steps to bring up the secondary WFS instance after the primary WFS instance is unavailable. 

  1. Log onto any of the secondary WFS VMs via remote desktop or VNC.
  2. Open the Failover Cluster Manager.
  3. Right-click on the file server role, and select Start Role. 
  4. Log onto the DNS server.
  5. Right-click on the alias and select Properties. Change the alias to point to the secondary file server. Use the Browse button to help locate the secondary file server's FQDN.

wfs_rep29.JPG

  6. Click OK.

WFS clients 

If WFS clients are connecting to shares using Aliases:

  1. From the client, flush the DNS cache. 

Example for a Windows client using a command window:

ipconfig /flushdns

  2. On the client, open the existing mapped share and refresh the share. Some Linux clients may need to unmount and remount the share using the same DNS alias name.
  3. Confirm the share is accessible and the data is valid.

 

If WFS clients are connecting to shares directly via the file server on the primary WFS instance:

  1. Clients can map/mount shares using the secondary WFS file server FQDN or IP address.
  2. Confirm share is accessible and the data is valid.

 

Recovery After Failover Event (or Failover Test)

Recovery (fail back) after a failover event, or failover test, can occur with or without replicating any new data from the secondary WFS instance back to the primary WFS instance. Procedures for both options are below. 

Recovery Option A: Fail back without replicating new data to primary

For a fail back to the primary WFS instance without replicating any data back from the secondary, steps are provided below:

  1. Log onto any of the primary WFS VMs via remote desktop or VNC.
  2. Open the Failover Cluster Manager.
  3. Ensure the file server on the primary WFS instance is up and running. If the file server role has not started, right-click on the file server role, and select Start Role.
  4. Log onto the DNS server.
  5. Change the alias to point back to the primary file server.
  6. Log onto any of the secondary WFS VMs via remote desktop or VNC.
  7. Open the Failover Cluster Manager.
  8. Right-click on the file server role, and select Stop Role.
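
For reference, the same fail back can be scripted with the FailoverClusters and DnsServer PowerShell modules. The cluster names, role name, DNS server, zone, and FQDNs below are placeholders and must be replaced with the actual values in your environment.

# Make sure the file server role is running on the primary WFS cluster
Start-ClusterGroup -Cluster "wfs-cluster-primary" -Name "WFS-FileServer"

# Point the DNS alias back at the primary file server (placeholder DNS server and zone)
$old = Get-DnsServerResourceRecord -ComputerName "dns01" -ZoneName "example.com" -Name "wfs" -RRType CName
$new = $old.Clone()
$new.RecordData.HostNameAlias = "wfs-fs-primary.example.com."
Set-DnsServerResourceRecord -ComputerName "dns01" -ZoneName "example.com" -OldInputObject $old -NewInputObject $new

# Stop the file server role on the secondary WFS cluster
Stop-ClusterGroup -Cluster "wfs-cluster-secondary" -Name "WFS-FileServer"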

WFS clients 

If WFS clients are connecting to shares using Aliases:

  1. From the client, flush the DNS cache. 

Example for a Windows client using a command window:

ipconfig /flushdns

  2. On the client, open the existing mapped share and refresh the share. Some Linux clients may need to unmount and remount the share using the same DNS alias name.
  3. Confirm the share is accessible and the data is valid.

 

If WFS clients are connecting to shares directly via the file server on the primary WFS instance:

  1. Clients can map/mount shares using the primary WFS file server FQDN or IP address.
  2. Confirm share is accessible and the data is valid.

 

Recovery Option B: Fail back with new data replicating back to primary

To fail back and replicate data newly written on the secondary FlashArray back to the primary FlashArray, perform the same procedures as the original replication steps, but in the opposite direction. A new protection group is created on the secondary FlashArray, and the WFS data volumes in that protection group are replicated in the reverse direction back to the primary FlashArray. The replicated snapshot on the primary FlashArray can then be copied back onto (overwriting) the original WFS data volume(s), which effectively updates the primary data volume(s) with the latest data. 

wfs_rep31.JPG

 

The fail back steps to replicate data back to the primary FlashArray are provided below:

  1. Go to the primary FlashArray GUI and navigate to the Protection Groups tab.
  2. Click on the WFS replication protection group hyperlink.

wfs_rep32.JPG

  3. In the Protection Group Snapshots section, click on the + icon to add a new snapshot.

wfs_rep33.JPG

  4. A Create Snapshot window will appear.
    1. Enter a suffix that would help describe the snapshot. The example below uses: BeforeFailBack 
    2. Enable the Apply Retention switch.

wfs_rep49.JPG

  5. Click Create.

Note: This snapshot is taken as a precautionary best practice to preserve the WFS data before it is overwritten by the WFS data snapshot from the secondary FlashArray.

  6. Go to the secondary FlashArray GUI and navigate to the Protection Groups tab.
  7. In the Protection Group section, click on the + icon to add a new protection group.

wfs_rep34.JPG

  8. Enter a name for the WFS data protection group and click Create. The example below will use the name WFS-data-rev.

wfs_rep35.JPG

  9. Once created, the new protection group entry will appear with a hyperlink. Click on the new protection group hyperlink.

wfs_rep36.JPG

  10. This will bring up the properties of the protection group. Click on the pull-down icon in the Members box and select Add Volumes.

wfs_rep37.JPG

  11. Select the WFS data volume(s) that are to be replicated back to the primary and click Add.

Note: WFS boot volumes are not supported for replication; only WFS data volumes can be replicated. 

  12. Once the secondary WFS data volumes have been added to the protection group, go to the Targets box, click on the pull-down icon, and select Add.

wfs_rep38.JPG

  13. Select the primary FlashArray as the target. 
  14. Go to the primary FlashArray GUI. Navigate to the Storage tab. Under the Protection Groups window, there is an entry for the protection group that was created on the secondary FlashArray. It should say "Allowed on this array". If the entry shows "Disallowed on this array", click on the Edit icon and select Allow.

wfs_rep40.JPG

  15. Go back to the secondary FlashArray GUI. Navigate to the Storage tab. Under the Protection Groups window, click on the hyperlink entry for the protection group that is to be replicated to the primary FlashArray.

wfs_rep41.JPG

 

  16. In the Protection Group Snapshots section, click on the + icon to manually create a new snapshot on the secondary FlashArray.

wfs_rep42.JPG

 

  17. A Create Snapshot window will appear.
    1. Enter a suffix that would help describe the snapshot. The example below uses failback.
    2. Enable the Replicate Now switch.

wfs_rep43.JPG

  18. Click Create.
  19. Depending on the amount of new data that needs to be replicated back, this may take some time. Allow the replication to complete. Replication status and details of the protection group snapshot can be viewed in the Protection Group Snapshots box of the target FlashArray, which in this case is the primary FlashArray.

wfs_rep44.JPG

 

The next several steps outline the final procedures to cut back over to the primary FlashArray. During this time, a final data sync will be performed. In order for the data to be 100% synchronized between the secondary and primary FlashArrays, the file server on the secondary WFS instance must be taken offline to prevent new data from being written. During this brief window, clients will not have any access to the WFS shares.
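
After the file server roles have been stopped in the steps below, it can be helpful to confirm from PowerShell that both roles are offline before taking the final sync snapshot. A minimal sketch, with placeholder cluster and role names:

# Confirm the file server role state on both the primary and secondary WFS clusters
Get-ClusterGroup -Cluster "wfs-cluster-primary" -Name "WFS-FileServer" | Select-Object Name, State
Get-ClusterGroup -Cluster "wfs-cluster-secondary" -Name "WFS-FileServer" | Select-Object Name, State

# Both should report a State of Offline before the final sync snapshot is taken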

  1. Log onto any one of the primary WFS VMs.
  2. Open Failover Cluster Manager.
  3. If not already stopped, right-click on the file server role, and select Stop Role. Note that the data for this file server will be updated; therefore, it is necessary to take the file server down.

wfs_rep26.JPG

 

  4. Log onto any one of the secondary WFS VMs. 
  5. Open Failover Cluster Manager.
  6. Right-click on the file server role, and select Stop Role to prevent new data from being written during the final data sync.

wfs_rep28.JPG

 

  7. Go back to the secondary FlashArray GUI. Navigate to the Storage tab. Under the Protection Groups window, click on the hyperlink entry for the protection group that is to be replicated to the primary FlashArray.

wfs_rep41.JPG

 

  8. In the Protection Group Snapshots section, click on the + icon to add a final sync snapshot on the secondary FlashArray.

wfs_rep42.JPG

  9. A Create Snapshot window will appear.
    1. Enter a suffix that would help describe the snapshot. The example below uses failback_final.
    2. Enable the Replicate Now switch.

wfs_rep45.JPG

  10. Click Create, then go to the primary FlashArray GUI and navigate to the Protection Groups tab. Confirm the final snapshot replication completed. If completed, click on the protection group snapshot hyperlink.

wfs_rep46.JPG

  11. Click on the Copy snapshot icon for the desired volume that is expected to be updated.

wfs_rep46.JPG

  12. This step will overwrite the primary WFS data volume with the replicated copy of the equivalent WFS data volume from the secondary FlashArray.
    1. Enter the name of the primary WFS data volume. Ensure the name of the volume to be overwritten is correct and that it is the equivalent of the secondary FlashArray volume.
    2. Enable the Overwrite switch.
    3. Click Copy.

Note: This step can also be performed using the FlashArray CLI. An example of using the CLI is given in the previous Copy WFS Data section.

 

wfs_rep47.JPG

  13. Once again, confirm the volume to be overwritten is correct and that it is the equivalent volume from the secondary FlashArray. Click Overwrite to confirm.
  14. Log onto any one of the primary WFS VMs. 
  15. Open Failover Cluster Manager.
  16. Right-click on the file server role and select Start Role.

wfs_rep48.JPG

  17. Open the WFS data volumes and validate the data.
  18. Log onto the DNS server.
  19. Change the alias to point back to the primary file server.
  20. Log onto any of the secondary WFS VMs via remote desktop or VNC.
  21. Open the Failover Cluster Manager.
  22. Right-click on the file server role, and select Stop Role.

 

WFS clients 

If WFS clients are connecting to shares using Aliases:

  1. From the client, flush the DNS cache. 

Example for a Windows client using a command window:

ipconfig /flushdns

  2. On the client, open the existing mapped share and refresh the share. Some Linux clients may need to unmount and remount the share using the same DNS alias name.
  3. Confirm the share is accessible and the data is valid.

 

If WFS clients are connecting to shares directly via the file server on the primary WFS instance:

  1. Clients can map/mount shares using the primary WFS file server FQDN or IP address.
  2. Confirm share is accessible and the data is valid.

 

Security

WFS provides the same type of security available to external Windows hosts. Initial access to the WFS VMs is allowed only via link-local addresses from the FlashArray; these addresses are not routable or accessible from any device outside of the FlashArray. Customers have full control to block any ports according to corporate firewall rules or global policies.
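
For example, a port can be blocked on a WFS VM with the built-in Windows firewall cmdlets. The rule below is only a sketch; the rule name and port number are placeholders and should follow your corporate firewall policy.

# Block inbound traffic on an example port per corporate policy (placeholder values)
New-NetFirewallRule -DisplayName "Block-Example-Port" -Direction Inbound -Action Block -Protocol TCP -LocalPort 12345

# Review the rule that was just created
Get-NetFirewallRule -DisplayName "Block-Example-Port"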

In addition, WFS will benefit from the same data protection that is native to the FlashArray. All volumes exported to WFS are fully encrypted. Customers can leverage FlashArray snapshots to protect against ransomware attacks, since all block volume snapshots are completely isolated and protected from the application and cannot be compromised.

Third Party Software

Customers are discouraged from installing any software that is not relevant to file services. Since WFS is purpose-built to perform file services, it is not recommended to install non-pertinent software that could contend for the available resources.

Pure Storage does allow customers to install third-party software tools that are pertinent to delivering file services. For example, it is supported to install anti-virus software or backup tools on the WFS VMs. However, customers should be aware that some anti-virus and backup tools will consume resources on the WFS VMs. Depending on the software, they can be CPU- and memory-intensive, so customers should plan to run them during off-hours when file services are not being used.

WFS Licensing Requirements

Licensing for WFS follows a bring-your-own-license (BYOL) Microsoft model. There are no explicit license requirements from Pure Storage to enable and run Purity RUN. However, Microsoft does require a license for each of the WFS VMs.

  • WFS VMs come with a Microsoft evaluation license. This allows a WFS proof of concept (POC) to run without entering any additional license information.

  • To deploy WFS in production, customers can simply provide their own Microsoft Windows Server 2016 Standard license for each WFS VM. The license information can either be provided during installation or anytime post install. Most enterprises have existing Microsoft Server licenses and likely have Microsoft Enterprise License agreements in place that can be leveraged for the WFS solution.

Additional licensing considerations to note:

  • Microsoft uses a per-physical-core licensing model. Customers should work with their Pure Storage representative to get details on FlashArray specs, including number of cores.

  • In addition to the server core license, customers may need a Microsoft Client Access License (CAL) depending on their negotiated Microsoft License Agreement. Customers are encouraged to discuss licensing options with their trusted Microsoft reseller to ensure they are in compliance with their Microsoft License Agreement.

  • Customers can also refer to the Microsoft Server 2016 licensing for more information.

Support for WFS Solution

WFS is a fully supported Pure Storage solution. If a customer encounters WFS issues, they should reach out to Pure1 Support. Pure1 Support will engage with Microsoft Support as needed. Customers get best-in-class support for WFS and the peace of mind that they are working with Pure Storage for all of their support needs.

Support & Limits for WFS

Please refer to the following matrix for scale and features currently supported.

Windows File Services Support Matrix