Pure Technical Services

SRM User Guide: FlashArray vVols Array Based Replication and SRM - Requirements and Limitations


VMware Site Recovery Manager (SRM) supports FlashArray array-based replication with vSphere Virtual Volumes (vVols) as of SRM 8.3.  Pure Storage is a certified partner with the release of SRM 8.3 and the FlashArray 1.1.0 VASA Provider (Purity//FA 5.3.6+).  This KB covers the requirements for using the FlashArray with SRM 8.3 for vVols, as well as any known limitations.

Requirements & Best Practices

To use Site Recovery Manager with vVols, the following prerequisites must be met:

  • FlashArray 

    • Minimum Version: Purity 5.3.6 or later
    • Recommended Version: Purity 6.1.8 or later
    • FlashArray hosts and host groups are created (see here for more information)
    • Configure NTP (see here for more information)
    • Configure syslog target (see here for more information)
    • When using VMs with shared vVols (WSFC or Oracle RAC), run Purity 6.1.8 or later on both FlashArrays
  • VASA Certificates

    • If a given FlashArray will be used with only one vCenter at a time, or multiple vCenters that are in the same Enhanced Linked Mode configuration, default certificates are permitted and no specific certificate configuration is needed
    • If the use case is to connect a FlashArray to more than one non-linked vCenter at a time, a CA-signed certificate must be generated prior to registration of the FlashArray VASA providers. See here for more information
  • vCenter 

    • Minimum Version: vCenter 6.5 or later
    • Recommended Version: vCenter 6.7 U3 or later
    • Configure NTP (see here for more information)
    • Configure syslog target (see here for more information)
    • Port 8084 TCP is open to/from vCenter and FlashArray Management Ports (CT0 and CT1)
    • Both VASA providers (CT0 and CT1) on a FlashArray should be registered with their respective vCenter (see here for more information)
  • ESXi 

    • Minimum Version: ESXi 6.5 or later
    • Recommended Version: ESXi 6.7 U3 p03 or later
    • Configure NTP (see here for more information)
    • Configure syslog target (see here for more information)
    • Port 8084 TCP is open to/from ESXi management ports and FlashArray Management Ports (CT0 and CT1)
    • For iSCSI: ESXi iSCSI configuration (see here for more information)
    • For Fibre Channel: Fabric Zoning (see here for more information)
    • Present FlashArray Protocol Endpoint and Mount vVol Datastore from the local FlashArray(s)
  • Virtual Machines

    • Any VM that needs to be protected must have all of its virtual disks and home directory on the same vVol datastore, with the same storage policy, and they all must be assigned to the same replication group
    • A replication group must not have non-vVol volumes in it
    • A replication group should not have any vVol volumes of a VM that is only partially protected (a VM must entirely be in that replication group, or not at all)
    • A VM should not span to other types of storage (vSAN, NFS, VMFS). The only exception is that a VM may have its swap storage on non-vVol and/or non-replicated storage
    • FlashArray asynchronous periodic replication is currently the only array replication type supported for vVols and SRM protection
  • Site Recovery Manager 

    • Minimum Version: Site Recovery Manager 8.3 or later
    • Recommended Version: Site Recovery Manager 8.3 or later
    • Configure NTP (see here for more information)
    • Configure syslog target (see here for more information)
    • Use a VMFS Datastore for the Placeholder Datastores, do not use the vVol Datastore for the Placeholder (VMware does not support this)
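Both the vCenter and ESXi requirements above call for TCP port 8084 to be open to the FlashArray management ports on CT0 and CT1. A minimal Python sketch of a reachability check is below; the controller hostnames are placeholders for illustration, not real addresses.

```python
import socket

def tcp_port_open(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# VASA providers listen on TCP 8084 on each controller's management port.
VASA_PORT = 8084

# Hypothetical controller management addresses -- replace with your own.
controllers = ["flasharray-ct0.example.com", "flasharray-ct1.example.com"]

if __name__ == "__main__":
    for host in controllers:
        state = "open" if tcp_port_open(host, VASA_PORT) else "unreachable"
        print(f"{host}:{VASA_PORT} -> {state}")
```

Run the same check from both the vCenter appliance and each ESXi host's management network, since the requirement applies to traffic in both directions.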

Refer to the vVol page for the latest recommendations and how-tos.
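The virtual machine requirement that all virtual disks and the VM home directory live on the same vVol datastore can be spot-checked from the file paths vSphere reports, which use the "[datastore] path" convention. The helper below is a simple sketch of that check; the VM file list shown is hypothetical, and in practice the paths would come from the VM's configuration (for example, via the vSphere API).

```python
import re

def datastore_of(path: str) -> str:
    """Extract the datastore name from a vSphere-style path like '[vVolDS] vm/vm.vmdk'."""
    m = re.match(r"\[([^\]]+)\]", path)
    if not m:
        raise ValueError(f"not a '[datastore] path' string: {path!r}")
    return m.group(1)

def on_single_datastore(paths) -> bool:
    """True if every file path lives on the same datastore."""
    return len({datastore_of(p) for p in paths}) == 1

# Hypothetical VM file layout: the .vmx (home directory) plus each disk.
vm_files = [
    "[vVol-DS-01] sql01/sql01.vmx",
    "[vVol-DS-01] sql01/sql01.vmdk",
    "[vVol-DS-01] sql01/sql01_1.vmdk",
]

if __name__ == "__main__":
    print("single datastore:", on_single_datastore(vm_files))
```

This only covers the co-location rule; the matching storage policy and replication group assignment still need to be verified separately.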

Limitations and Recommendations

  • Only Periodic Replication between two FlashArrays running the same Purity Version is supported
  • Pure Storage and VMware will only support recovering up to 250 protected VMs per Array Pair

    SRM does not support a many-to-one or one-to-many configuration with vVols. The FlashArray protection groups must be in a one-to-one array pair relationship.

    • Pure Storage found in testing that having more VMs in fewer replication groups was better than fewer VMs in more replication groups.
      • For example, with 250 VMs, splitting them across 10 replication groups was more efficient for the VASA requests and SRM workflow execution than splitting them across 100 replication groups.
  • A vVols based VM must be in one replication group in order to be protected in SRM (an SRM limitation, not a VASA one)
  • Pure Storage and VMware have found that the Test Failover Replication Group step takes longer than expected during SRM workflows for vVols array-based replication.  The increased time is primarily noticed at higher scale; in particular, the update virtual machine files operations during the Test Failover Replication Group step take longer.

    VMware and Pure Storage are working together to identify the root cause of the Test Failover Replication Group step taking longer.  Until a fix is found, Pure Storage recommends keeping SRM Recovery Plans under 75-100 VMs if Test Failover operations are time sensitive and are run frequently.

    Keep in mind that the time it takes to complete the Test Failover Replication Group step is not indicative of the time the Failover Replication Group operations will take.

  • Pure Storage and VMware have found limitations when using SRM with VMs that share data vVols (WSFC or Oracle RAC, for example). When a test failover or failover is run for such VMs, the parent VM/node has its shared disk paths updated correctly, but the VMs/nodes that reference the shared disk via an "add an existing disk" operation do not, and require manual intervention to update those paths.

    This issue was root-caused to a VASA-specific issue and is fixed in Purity 6.1.8+ and 6.2.  If shared vVols are in use, make sure that both FlashArrays are running Purity 6.1.8 or higher.
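Applying the sizing guidance above (more VMs in fewer replication groups, and Recovery Plans kept under 75-100 VMs where test failovers are frequent) is mostly a planning exercise. The sketch below distributes a VM inventory round-robin across a target number of replication groups; the group names are hypothetical labels, and actual membership is set through storage policies on the array, not through this code.

```python
def assign_groups(vm_names, group_count):
    """Round-robin VMs into group_count replication groups.

    This is a sizing aid only: it shows the resulting spread, e.g. 250 VMs
    across 10 groups yields ~25 VMs per group, in line with the tested
    configuration that performed better than 100 small groups.
    """
    groups = {f"repl-group-{i + 1:02d}": [] for i in range(group_count)}
    keys = list(groups)
    for i, vm in enumerate(vm_names):
        groups[keys[i % group_count]].append(vm)
    return groups

if __name__ == "__main__":
    vms = [f"vm{n:03d}" for n in range(250)]
    layout = assign_groups(vms, 10)
    for name, members in layout.items():
        print(name, len(members))
```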