
Release Notes: Storage Replication Adapter for Site Recovery Manager


The Pure Storage® FlashArray™ Storage Replication Adapter (SRA, PureSRA) is a plugin for VMware® vCenter™ Site Recovery Manager™ (SRM). The Pure Storage SRA integrates FlashArray storage and Purity replication with VMware vCenter Site Recovery Manager and provides these advantages:

  • FlashArray's unique data reduction (pattern elimination, deduplication, and compression) and ultra-efficient micro-provisioning of physical storage
  • Simple configuration that does not require professional services involvement
  • Flexible, policy-based automation of replication
  • Support for bi-directional protection
  • Support for ActiveCluster synchronous replication (v3.0 and above)
  • Support for asynchronous replication from an ActiveCluster pod (v3.1 and above)
  • Support for ActiveDR replication (v4.1 and above - only supported with the Photon SRM Appliance)

Currently, no version of the SRA is fully compatible with FlashArray SafeMode. Please see the SafeMode with VMware User Guide for more detailed information.


SRA Release Timeline

Release Release Date
5.0.3 January, 2024
5.0.1 November, 2023
5.0.0 April, 2023
4.2.1 August, 2022
4.2.0 May, 2022
4.1.0 December, 2020
4.0 August, 2020
3.1 April, 2020
3.0.154 May, 2019
3.0.14 July, 2018
2.0 December, 2016
1.5 December, 2015
1.0 December, 2014

SRA 5.0.3 Release Notes

Release: Jan, 2024

SRA 5.0.3 is supported for the Linux-based SRM deployment only. Windows-based deployments must either use the 4.0.0 release or upgrade to the Linux-based deployment.

New in this Release

  1. A new workflow to manually correct the failover and test failover workflows when the protected volume's name has changed and no longer matches the source volume name of the target protection group's snapshots (see known issue 3b for more context). A KB covering this issue in more detail will be published in the near future.

Fixed Issues

  1. TMAN-19956: Increased the pod promotion timeout to 2 minutes. The timeout for pod promotion was 20 seconds in previous SRA releases.
     
  2. TMAN-19936: SRA will fail the test failover or failover workflow if the ActiveDR Pod on the recovery site is unable to promote and come online within the pod promotion timeout window.  If the pod can’t be promoted within 2 minutes, SRA will fail and return an error “Failed to promote pod within the designated timeout period”.
     
  3. TMAN-19882: Adds support for a key-value pair ("alternate source" tag) on volumes, used as the source volume reference during a test failover or failover recovery workflow. A KB covering this in more detail will be published in the near future.

Known Issues

  1. ActiveDR
    1. If there are volumes in an ActiveDR pod that are presented to the VMware environment but not in use as an RDM or a VMFS datastore, a recovery will succeed but a subsequent reprotect will fail. To complete the reprotect, the volumes should be removed or manually tagged.
       
    2. If a source pod is already demoted when a planned migration recovery is attempted, the process will fail; either a disaster recovery is required or the source pod must be manually promoted before the recovery can be re-attempted.
       
    3. Non-VMware volumes cannot be in an ActiveDR pod controlled by SRM. All volumes in the pod must be present and in use as an RDM or VMFS in the VMware environment connected to the SRM pair.
       
  2. ActiveCluster
    1. Pods protected by SRM cannot be renamed. ActiveDR pods can be renamed. Pods in use with ActiveCluster replication to a third site cannot be renamed.
       
    2. Failback or reprotect from an asynchronous target into a stretched pod is not supported. Pods must first be unstretched before a failback from an asynchronous distance target.
       
  3. General
    1. Volume names must be less than 42 characters in length, except for ActiveDR volumes, which can be up to the full supported length on the FlashArray.
       
    2. In its current release, the SRA depends on the volume's source name to identify which volumes are recovered during test failover and failover workflows. This will change in a future release, but for now it exposes a problem when a volume's current name no longer matches the source name recorded for it. In Purity a volume name is constructed as "volumeName", "vgroupName/volumeName", "podName::volumeName", or "podName::vgroupName/volumeName". A protection group volume snapshot has a name of the form "pgroupName.snapPrefix.volName" and a source field "volName", where "volName" can be any of the volume name constructions just described.

      When volumes are moved between pods, volume groups, or both, the volume snapshot source name does not change. However, the SRA depends on the source name to correctly identify the recovered objects in the test failover and failover workflows. If the volume name and the source name do not have matching "volName" strings, an error will appear in SRM that the recovered volume could not be found. SRA 5.0.3 includes a workaround for this situation (see the sketch after this list); a larger fix is coming in a future release of the SRA.
       
    3. At this time Pure Storage does not support protecting non-vVol FlashArray volumes that are in volume groups; any support offered by Pure will be best effort. There are several caveats when using vgroups with SRM that can lead to DR failovers failing, these edge cases have not been tested thoroughly, and getting back to a good state often requires removing both the array and SRM protection groups and starting clean. This can be time consuming and can leave VMs unprotected, hence the recommendation not to use vgroups with SRM. This will be resolved in a future SRA release.
       
    4. Pure Storage's best practice and recommendation is to place volumes in protection groups for use with SRM protection groups and recovery plans. Using hosts or host groups as the membership mechanism for volumes to be protected by SRM has inconsistent behavior, and support for it is best effort. Pure Storage is working to improve these workflows in a future SRA release, but at this time Pure recommends avoiding host or host group membership for FlashArray protection groups used with SRM.
       
    5. Volumes that are in multiple protection groups on a source array (added explicitly as volumes or implicitly through hosts/host groups) will very likely have issues when running SRM workflows. The recommendation is for each volume to be a member of only a single protection group. Because of 3d above, adding a host or host group to a protection group for SRM is not currently supported. This is something Pure Storage is working to improve in a future release of the Storage Replication Adapter (SRA) for Site Recovery Manager (SRM).
       
    6. If either FlashArray has been renamed after the arrays were connected, test failovers, failovers, and reprotects will not execute completely. This is due to the array pair naming not being updated when either array is renamed (if purearray list --connect is run from the CLI, the previous names will still be shown). Should an array be renamed, the recommended remediation when using only async replication is to disconnect the FlashArrays and reconnect them. Should ActiveCluster be enabled, the recommended remediation is to run through the purearray connect process again from the array that initiated the connection originally.
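
Referring to known issue 3b above: the following is a minimal, illustrative Python sketch (not SRA code, with hypothetical names) of why a volume that has been moved between pods or volume groups no longer matches the source name recorded in its protection group snapshots.

    # Illustrative only: hypothetical names showing why a moved volume no longer
    # matches the source recorded in its protection group snapshot.
    def bare_volume_name(full_name: str) -> str:
        """Strip the optional "podName::" and "vgroupName/" prefixes, leaving
        just the trailing volumeName component."""
        name = full_name.split("::", 1)[-1]  # drop "podName::" if present
        return name.split("/", 1)[-1]        # drop "vgroupName/" if present

    # The four ways a Purity volume name can be constructed:
    for name in ["vol1", "vg1/vol1", "pod1::vol1", "pod1::vg1/vol1"]:
        print(name, "->", bare_volume_name(name))

    # A protection group snapshot records the volume's source name at the time
    # the snapshot was taken. If the volume was later moved into a volume group,
    # its current name no longer matches the recorded source, and the SRA cannot
    # locate the recovered volume.
    snapshot_source = "pod1::vol1"     # recorded when the snapshot was taken
    current_name = "pod1::vg1/vol1"    # where the volume lives today
    print("match" if snapshot_source == current_name else "mismatch")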

SRA 5.0.1 Release Notes

Release: Nov, 2023

SRA 5.0.1 is supported for the Linux-based SRM deployment only. Windows-based deployments must either use the 4.0.0 release or upgrade to the Linux-based deployment.

New in this Release

  1. Adds support for FlashArray//E to the SRA.

Fixed Issues

  1. TMAN-19199: When an SRM environment has 25 pairs of ActiveDR pods across two arrays, each time a single ActiveDR pair was selected and discover devices was run, the resulting discover-devices log file was over 400 KB and 2,800+ lines, filling the log partition very quickly. The excessive discover-devices logging that filled the log partition is fixed in this release.

Known Issues

  1. ActiveDR
    1. If there are volumes in an ActiveDR pod that are presented to the VMware environment but not in use as an RDM or a VMFS datastore, a recovery will succeed but a subsequent reprotect will fail. To complete the reprotect, the volumes should be removed or manually tagged.
    2. If a source pod is already demoted when a planned migration recovery is attempted, the process will fail; either a disaster recovery is required or the source pod must be manually promoted before the recovery can be re-attempted.
    3. Non-VMware volumes cannot be in an ActiveDR pod controlled by SRM. All volumes in the pod must be present and in use as an RDM or VMFS in the VMware environment connected to the SRM pair.
  2. ActiveCluster
    1. Pods protected by SRM cannot be renamed. ActiveDR pods can be renamed. Pods in use with ActiveCluster replication to a third site cannot be renamed.
    2. Failback or reprotect from an asynchronous target into a stretched pod is not supported. Pods must first be unstretched before a failback from an asynchronous distance target.
  3. General
    1. Volume names must be less than 42 characters in length, except for ActiveDR volumes, which can be up to the full supported length on the FlashArray.
    2. At this time Pure Storage does not support protecting non-vVol FlashArray volumes that are in volume groups; any support offered by Pure will be best effort. There are several caveats when using vgroups with SRM that can lead to DR failovers failing, these edge cases have not been tested thoroughly, and getting back to a good state often requires removing both the array and SRM protection groups and starting clean. This can be time consuming and can leave VMs unprotected, hence the recommendation not to use vgroups with SRM. This will be resolved in a future SRA release.
    3. Pure Storage's best practice and recommendation is to place volumes in protection groups for use with SRM protection groups and recovery plans. Using hosts or host groups as the membership mechanism for volumes to be protected by SRM has inconsistent behavior, and support for it is best effort. Pure Storage is working to improve these workflows in a future SRA release, but at this time Pure recommends avoiding host or host group membership for FlashArray protection groups used with SRM.
    4. Volumes that are in multiple protection groups on a source array (added explicitly as volumes or implicitly through hosts/host groups) will very likely have issues when running SRM workflows. The recommendation is for each volume to be a member of only a single protection group. Because of 3c above, adding a host or host group to a protection group for SRM is not currently supported. This is something Pure Storage is working to improve in a future release of the Storage Replication Adapter (SRA) for Site Recovery Manager (SRM).
    5. If either FlashArray has been renamed after the arrays were connected, test failovers, failovers, and reprotects will not execute completely. This is due to the array pair naming not being updated when either array is renamed (if purearray list --connect is run from the CLI, the previous names will still be shown). Should an array be renamed, the recommended remediation when using only async replication is to disconnect the FlashArrays and reconnect them. Should ActiveCluster be enabled, the recommended remediation is to run through the purearray connect process again from the array that initiated the connection originally.

SRA 5.0.0 Release Notes

Release: April, 2023

SRA 5.0.0 is supported for the Linux-based SRM deployment only. Windows-based deployments must either use the 4.0.0 release or upgrade to the Linux-based deployment.

New in this Release

  1. ActiveCluster
    1. Pod to third site array pairs are discovered when the underlying array pair is connected for synchronous replication.
  2. NVMe-oF
    1. FC-NVMe and NVMe/TCP volumes are now supported for asynchronous replication on FlashArray with the SRA. Currently there is no support for ActiveCluster or ActiveDR replication with NVMe-oF datastores. SRM 8.7+ and SRA 5.0.0+ are required for NVMe-oF functionality. To enable NVMe-oF functionality in SRM, please follow the steps in Use NVMe-oF Datastores in Linux-based SRM.
  3. General
    1. Tags are now used instead of name suffixes for the state of volumes (demoted, failed-over, test-failedOver).
    2. Some SRA logging is now phoned home to allow for simpler Pure support analysis of logs and to provide greater insight into the value of features to our customers.

Fixed Issues

  1. TMAN-17442: A volume was not put into the correct protection group after reprotect. The reprotect workflow now creates the source pgroup on the target array with the correct volume in it.

    Background: When a datastore protected by a protection group replicating to an array was reprotected, the protection group was not created on the target array and the failed-over volume was placed in PureSRADefaultProtectionGroup instead of the original pgroup.
  2. TMAN-18649: When a reprotect following a successful migration was interrupted by a restart of the target SRM server, re-attempting the reprotect threw an error in the SRM logs. The reprotect workflow is now able to recognize the existing tracking snapshots. Example error:
    The remote server returned an error: (400) BAD REQUEST. --> Message from Purity='ctx:PrepareReverse,msg:Name already in use.'

  3. TMAN-18505: When using ActiveCluster with asynchronous replication to a third array on Purity 6.3+ with SafeMode default protection enabled and SRM, failback from the third array to the stretched pod would fail. The SRM workflow has been optimized to work better with SafeMode.

Known Issues

  1. ActiveDR
    1. If there are volumes in an ActiveDR pod that are presented to the VMware environment but not in use as an RDM or a VMFS datastore, a recovery will succeed but a subsequent reprotect will fail. To complete the reprotect, the volumes should be removed or manually tagged.
    2. If a source pod is already demoted when a planned migration recovery is attempted, the process will fail; either a disaster recovery is required or the source pod must be manually promoted before the recovery can be re-attempted.
    3. Non-VMware volumes cannot be in an ActiveDR pod controlled by SRM. All volumes in the pod must be present and in use as an RDM or VMFS in the VMware environment connected to the SRM pair.
  2. ActiveCluster
    1. Pods protected by SRM cannot be renamed. ActiveDR pods can be renamed. Pods in use with ActiveCluster replication to a third site cannot be renamed.
    2. Failback or reprotect from an asynchronous target into a stretched pod is not supported. Pods must first be unstretched before a failback from an asynchronous distance target.
  3. General
    1. Volume names must be less than 42 characters in length, except for ActiveDR volumes, which can be up to the full supported length on the FlashArray.
    2. At this time Pure Storage does not support protecting non-vVol FlashArray volumes that are in volume groups; any support offered by Pure will be best effort. There are several caveats when using vgroups with SRM that can lead to DR failovers failing, these edge cases have not been tested thoroughly, and getting back to a good state often requires removing both the array and SRM protection groups and starting clean. This can be time consuming and can leave VMs unprotected, hence the recommendation not to use vgroups with SRM. This will be resolved in a future SRA release.
    3. Pure Storage's best practice and recommendation is to place volumes in protection groups for use with SRM protection groups and recovery plans. Using hosts or host groups as the membership mechanism for volumes to be protected by SRM has inconsistent behavior, and support for it is best effort. Pure Storage is working to improve these workflows in a future SRA release, but at this time Pure recommends avoiding host or host group membership for FlashArray protection groups used with SRM.
    4. Volumes that are in multiple protection groups on a source array (added explicitly as volumes or implicitly through hosts/host groups) will very likely have issues when running SRM workflows. The recommendation is for each volume to be a member of only a single protection group. Because of 3c above, adding a host or host group to a protection group for SRM is not currently supported. This is something Pure Storage is working to improve in a future release of the Storage Replication Adapter (SRA) for Site Recovery Manager (SRM).
    5. If either FlashArray has been renamed after the arrays were connected, test failovers, failovers, and reprotects will not execute completely. This is due to the array pair naming not being updated when either array is renamed (if purearray list --connect is run from the CLI, the previous names will still be shown). Should an array be renamed, the recommended remediation when using only async replication is to disconnect the FlashArrays and reconnect them. Should ActiveCluster be enabled, the recommended remediation is to run through the purearray connect process again from the array that initiated the connection originally.

SRA 4.2.1 Release Notes

Release: August, 2022

SRA 4.2.1 is supported for the Linux-based SRM deployment only. Windows-based deployments must either use the 4.0.0 release or upgrade to the Linux-based deployment.

New in this Release

  1. No new features

Fixed Issues 

  1. General
    1. TMAN-17016: This release resolves a situation where the replication of a protection group snapshot completes in the middle of a test recovery or recovery operation, leading to different point-in-time snapshots being chosen for two or more volumes.

      Background: The SRA pulls all replicated snapshots available at the start of the recovery operation. The SRA then chooses the latest snapshot for each volume and verifies that the selected snapshot has been fully replicated. If a replicated snapshot that was in progress at the start of the operation happens to complete after at least one volume has been recovered, subsequent volumes will use the newer snapshot. The fix forces the SRA to exclude any snapshot that was not completed at the start of the recovery operation (a simplified sketch of the corrected selection rule follows).
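
The following is a minimal, illustrative Python sketch of the corrected selection rule (not the SRA's actual code): only snapshots that had finished replicating before the recovery started are eligible, and the newest eligible snapshot is chosen per volume, so a replica that completes mid-operation can no longer shift the point in time for later volumes.

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class ReplicatedSnapshot:
        volume: str
        created: float               # snapshot creation time (epoch seconds)
        completed: Optional[float]   # replication completion time; None if still in progress

    def pick_snapshots(snaps, recovery_start):
        """Return {volume: snapshot}, considering only snapshots whose replication
        completed before recovery_start and taking the newest one per volume."""
        chosen = {}
        for s in snaps:
            if s.completed is None or s.completed > recovery_start:
                continue  # exclude in-progress or late-completing replicas
            if s.volume not in chosen or s.created > chosen[s.volume].created:
                chosen[s.volume] = s
        return chosen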

Known Issues 

  1. ActiveDR
    1. If there are volumes in an ActiveDR pod that are presented to the VMware environment but not in use as an RDM or a VMFS datastore, a recovery will succeed but a subsequent reprotect will fail. To complete the reprotect, the volumes should be removed or manually tagged.
    2. If a source pod is already demoted when a planned migration recovery is attempted, the process will fail; either a disaster recovery is required or the source pod must be manually promoted before the recovery can be re-attempted.
    3. Non-VMware volumes cannot be in an ActiveDR pod controlled by SRM. All volumes in the pod must be present and in use as an RDM or VMFS in the VMware environment connected to the SRM pair.
  2. ActiveCluster
    1. Pods protected by SRM cannot be renamed. ActiveDR pods can be renamed. Pods in use with ActiveCluster replication to a third site cannot be renamed.
    2. Failback or reprotect from an asynchronous target into a stretched pod is not supported. Pods must first be unstretched before a failback from an asynchronous distance target.
  3. General
    1. Volume names must be less than 42 characters in length, except for ActiveDR volumes, which can be up to the full supported length on the FlashArray.
    2. At this time Pure Storage does not support protecting non-vVol FlashArray volumes that are in volume groups; any support offered by Pure will be best effort. There are several caveats when using vgroups with SRM that can lead to DR failovers failing, these edge cases have not been tested thoroughly, and getting back to a good state often requires removing both the array and SRM protection groups and starting clean. This can be time consuming and can leave VMs unprotected, hence the recommendation not to use vgroups with SRM. This will be resolved in a future SRA release.
    3. Pure Storage's best practice and recommendation is to place volumes in protection groups for use with SRM protection groups and recovery plans. Using hosts or host groups as the membership mechanism for volumes to be protected by SRM has inconsistent behavior, and support for it is best effort. Pure Storage is working to improve these workflows in a future SRA release, but at this time Pure recommends avoiding host or host group membership for FlashArray protection groups used with SRM.
    4. Volumes that are in multiple protection groups on a source array (added explicitly as volumes or implicitly through hosts/host groups) will very likely have issues when running SRM workflows. The recommendation is for each volume to be a member of only a single protection group. Because of 3c above, adding a host or host group to a protection group for SRM is not currently supported. This is something Pure Storage is working to improve in a future release of the Storage Replication Adapter (SRA) for Site Recovery Manager (SRM).
    5. If either FlashArray has been renamed after the arrays were connected, test failovers, failovers, and reprotects will not execute completely. This is due to the array pair naming not being updated when either array is renamed (if purearray list --connect is run from the CLI, the previous names will still be shown). Should an array be renamed, the recommended remediation when using only async replication is to disconnect the FlashArrays and reconnect them. Should ActiveCluster be enabled, the recommended remediation is to run through the purearray connect process again from the array that initiated the connection originally.

SRA 4.2.0 Release Notes

Release: May, 2022

SRA 4.2.0 is supported for the Linux-based SRM deployment only. Windows-based deployments must either use the 4.0.0 release or upgrade to the Linux-based deployment.

New in this Release

  1. SafeMode support. When SafeMode is enabled on one or more FlashArrays under SRM control, the SRA now interacts appropriately when test or source volumes can no longer be eradicated manually.

  2. FlashArray//XL support. The latest in the FlashArray family, FlashArray//XL, is now supported with Site Recovery Manager.

  3. Purity 6.1 and later. Purity 6.1, 6.2, and 6.3 are now supported with the 4.2.0 release of the SRA.

Fixed Issues 

  1. ActiveDR
    1. In an ActiveDR pod, volumes without datastores were not being tagged, and during reprotect SRM flagged this as an issue because of the mix of tagged and untagged volumes. This has been fixed.
    2. If the replica link of an ActiveDR pod pair is paused, QuerySyncStatus will be called perpetually after syncOnce because the RPO will never be behind the timestamp that is returned in syncOnce. The migration can never complete. This has been fixed.
    3. During a testFailover, if an ActiveDR pod contained volumes that are not protected vSphere datastores, the testFailover would succeed. It should now fail.
    4. During a testFailover, if the target pod was already in a Test Failover Stopped state, there was an unnecessary error message that could cause confusion. Now, it will succeed with a warning.
  2. General
    1. No workarounds are required when using SafeMode on the FlashArray in this version.
  3. ActiveCluster
    1. When running the unplanned failover test case, any VM with vMotion disabled was not powered off on the protected site during the failover. The volume is now disconnected on the protected site during the failover operation rather than during prepareFailover.

Known Issues 

  1. ActiveDR
    1. If there are volumes in an ActiveDR pod that are presented to the VMware environment but not in use as an RDM or a VMFS datastore, a recovery will succeed but a subsequent reprotect will fail. To complete the reprotect, the volumes should be removed or manually tagged.
    2. If a source pod is already demoted when a planned migration recovery is attempted, the process will fail; either a disaster recovery is required or the source pod must be manually promoted before the recovery can be re-attempted.
    3. Non-VMware volumes cannot be in an ActiveDR pod controlled by SRM. All volumes in the pod must be present and in use as an RDM or VMFS in the VMware environment connected to the SRM pair.
  2. ActiveCluster
    1. Pods protected by SRM cannot be renamed. ActiveDR pods can be renamed. Pods in use with ActiveCluster replication to a third site cannot be renamed.
    2. Failback or reprotect from an asynchronous target into a stretched pod is not supported. Pods must first be unstretched before a failback from an asynchronous distance target.
  3. General
    1. Volume names must be less than 42 characters in length, except for ActiveDR volumes, which can be up to the full supported length on the FlashArray.
    2. At this time Pure Storage does not support protecting non-vVol FlashArray volumes that are in volume groups; any support offered by Pure will be best effort. There are several caveats when using vgroups with SRM that can lead to DR failovers failing, these edge cases have not been tested thoroughly, and getting back to a good state often requires removing both the array and SRM protection groups and starting clean. This can be time consuming and can leave VMs unprotected, hence the recommendation not to use vgroups with SRM. This will be resolved in a future SRA release.
    3. Pure Storage's best practice and recommendation is to place volumes in protection groups for use with SRM protection groups and recovery plans. Using hosts or host groups as the membership mechanism for volumes to be protected by SRM has inconsistent behavior, and support for it is best effort. Pure Storage is working to improve these workflows in a future SRA release, but at this time Pure recommends avoiding host or host group membership for FlashArray protection groups used with SRM.
    4. If either FlashArray has been renamed after the arrays were connected, test failovers, failovers, and reprotects will not execute completely. This is due to the array pair naming not being updated when either array is renamed (if purearray list --connect is run from the CLI, the previous names will still be shown). Should an array be renamed, the recommended remediation when using only async replication is to disconnect the FlashArrays and reconnect them. Should ActiveCluster be enabled, the recommended remediation is to run through the purearray connect process again from the array that initiated the connection originally.
    5. There is an uncommon situation where the replication of a protection group snapshot completes in the middle of a test recovery or recovery operation, leading to different point-in-time snapshots being chosen for two or more volumes. This will be fixed in the next patch release.
       

SRA 4.1.0 Release Notes 

Release: December, 2020

SRA 4.1.0 is supported for the Linux-based SRM deployment only. Windows-based deployments must either use the 4.0.0 release or upgrade to the Linux-based deployment.

All customers using SRM with ActiveDR must upgrade to this SRA release. SRA 4.0.0 will not work with Purity 6.1.0+ or 6.0.2+ with ActiveDR.  

This effectively means that ActiveDR is only supported with the Linux SRM Server.

New in This Release 

  • Deprecation of Windows support for the SRM server. Starting with this SRA version the only supported SRM configuration is the Photon appliance.

  • Full support with ActiveDR in Purity//FA 6.0.2+.  See the Fixed Issues for more information.

Fixed Issues 

  1. ActiveDR
    1. ActiveDR failover did not work in Purity 6.0.2 and later due to an internal API change. This has been resolved, and the 4.1.0 SRA supports ActiveDR in all Purity releases.
    2. A warning like "Expected consistency group not found" in the SRA's queryReplicationSettings response during test and recovery no longer appears. This was an innocuous failure due to the queryReplicationSettings command not being supported for ActiveDR workflows. This API is now implemented.
    3. The puresra-demoted tag is no longer used in ActiveDR workflows, as it was deemed unnecessary. The only tags used are puresra-failover and puresra-testFailover.
  2. Discovery:
    1. Device discovery logs are less verbose, which in larger environments could exhaust logging directory capacity needlessly.
    2. Device discovery no longer returns an error when overly long volume names are present, which gave a false impression of failure; a warning is written to the SRA logs instead.
    3. Replicated vVols are now filtered out during a deviceDiscovery operation and will not be returned to SRM via the SRA. 
  3. General:
    1. Recoveries that were partially successful (some volumes were connected and others were not) can lead to a situation where access to recovered volumes is temporarily removed upon a re-attempted recovery. In this release, volumes that have been successfully recovered will be skipped during re-attempts of recoveries.
  4. ActiveCluster:
    1. During a stretched storage disaster recovery, the stretched volume is now isolated at the target site if prepareFailover failed.

Known Issues 

  1. ActiveCluster:
    1. Pods protected by SRM cannot be renamed. ActiveDR pods can be renamed. Pods in use with ActiveCluster replication to a third site cannot be renamed.
    2. Failback or reprotect from an asynchronous target into a stretched pod is not supported. Pods must first be unstretched before a failback from an asynchronous distance target.
  2. General:
    1. Volume names must be less than 42 characters in length, except for ActiveDR volumes, which can be up to the full supported length on the FlashArray.
    2. At this time Pure Storage does not support protecting non-vVol FlashArray volumes that are in volume groups; any support offered by Pure will be best effort. There are several caveats when using vgroups with SRM that can lead to DR failovers failing, these edge cases have not been tested thoroughly, and getting back to a good state often requires removing both the array and SRM protection groups and starting clean. This can be time consuming and can leave VMs unprotected, hence the recommendation not to use vgroups with SRM. This will be resolved in a future SRA release.
    3. Pure Storage's best practice and recommendation is to place volumes in protection groups for use with SRM protection groups and recovery plans. Using hosts or host groups as the membership mechanism for volumes to be protected by SRM has inconsistent behavior, and support for it is best effort. Pure Storage is working to improve these workflows in a future SRA release, but at this time Pure recommends avoiding host or host group membership for FlashArray protection groups used with SRM.
    4. If either FlashArray has been renamed after the arrays were connected, test failovers, failovers, and reprotects will not execute completely. This is due to the array pair naming not being updated when either array is renamed (if purearray list --connect is run from the CLI, the previous names will still be shown). Should an array be renamed, the recommended remediation when using only async replication is to disconnect the FlashArrays and reconnect them. Should ActiveCluster be enabled, the recommended remediation is to run through the purearray connect process again from the array that initiated the connection originally.
  3. ActiveDR:
    1. New volumes cannot be provisioned into a target pod during the test state, nor into the promoted "target"-side pod after a failover and before a reprotect.
    2. If the ActiveDR replication link is paused, the syncOnce operation will continue indefinitely. This will be fixed in an upcoming release. If the synchronization step in an SRM test recovery, recovery, or reprotect stalls, ensure the replication link is enabled; if it is not, enable it.
    3. If there are volumes in an ActiveDR pod that are presented to the VMware environment but not in use as an RDM or a VMFS datastore, a recovery will succeed but a subsequent reprotect will fail. To complete the reprotect, the volumes should be removed or manually tagged.
    4. If a source pod is already demoted when a planned migration recovery is attempted, the process will fail; either a disaster recovery is required or the source pod must be manually promoted before the recovery can be re-attempted.
    5. Non-VMware volumes cannot be in an ActiveDR pod controlled by SRM. All volumes in the pod must be present and in use as an RDM or VMFS in the VMware environment connected to the SRM pair.
  4. SafeMode:
    1. Currently, full automation of the SRA on a FlashArray with SafeMode enabled is not supported. Manual workarounds on the FlashArray are required for the SRA to work correctly with SRM. Please review the related KB for more information.

SRA 4.0.0 Release Notes

Release: August, 2020

SRA 4.0 is supported with both the Linux-based SRM appliance and the Windows-based SRM server.

This is the last release to be supported with a Windows-based SRM deployment.

New in This Release

  • Adds support for failover and test failover from a pod to a remote pod that are linked via ActiveDR replication.

While initial support for ActiveDR was added in the 4.0.0 release, API changes in Purity mean that SRA 4.0.0 will only work with ActiveDR on Purity//FA 6.0.0 and 6.0.1. Should the Purity version be 6.0.2+ or 6.1.0+, then SRA 4.1.0 is required for ActiveDR.

Effectively, ActiveDR is supported with SRA 4.1.0, which is only available with the Linux SRM appliance.

  • Volume renames are no longer required during failover; Purity tags are used for ActiveDR volumes instead.

  • ActiveDR pods can be renamed at will.

  • With SRA 4.0.0, ActiveDR is only supported on Purity 6.0.0 and 6.0.1.

Fixed Issues

SRA pod-to-third-site pairs were showing up in array discovery without an asynchronous replication protection group.

  • All pods showed up as array pairs to all remote FlashArray connections, even if they did not have replication to that array enabled. In this release, only pods with an enabled asynchronous connection to a FlashArray will show up as a valid array pair.

A "Snapshot limit reached." error during SRM failover/recovery allowed the process to continue, creating inconsistent failover/failback during SRM activities.

  • If the snapshot limit has been reached, there is now a clear error message indicating such.

The Windows SRA used the wrong timestamp format in the testFailover response.

  • The incorrect timestamp format could cause test recovery to fail. This has been fixed.

Certain failure modes could cause the SRA to be unable to fail over from a pod to a third site.

  • This workflow has been corrected so that failures on the source-site array can be tolerated by the SRA in all situations.

Known Issues

  1. Pods protected by SRM cannot be renamed. ActiveDR pods can be renamed. Pods in use with ActiveCluster replication to a third site cannot be renamed.
  2. Virtual volumes are not supported in a pod protected by SRM.
  3. Failback or reprotect from an asynchronous target into a stretched pod is not supported. Pods must first be unstretched before a failback from an asynchronous distance target.
  4. Volume names must be less than 42 characters in length, except for ActiveDR volumes, which can be up to the full supported length on the FlashArray.
  5. Non-VMware volumes cannot be in an ActiveDR pod controlled by SRM. All volumes in the pod must be present and in use as an RDM or VMFS in the VMware environment connected to the SRM pair.
  6. New volumes cannot be provisioned into a target pod during the test state, nor into the promoted "target"-side pod after a failover and before a reprotect.
  7. Verbose logging can fill up SRM data volumes. This is being investigated with VMware but will also be patched from the SRA side in an upcoming release. (Fixed in 4.1.0)
  8. If the ActiveDR replication link is paused, the syncOnce operation will continue indefinitely. This will be fixed in an upcoming release. If the synchronization step in an SRM test recovery, recovery, or reprotect stalls, ensure the replication link is enabled; if it is not, enable it.
  9. If either FlashArray has been renamed after the arrays were connected, test failovers, failovers, and reprotects will not execute completely. This is due to the array pair naming not being updated when either array is renamed (if purearray list --connect is run from the CLI, the previous names will still be shown). Should an array be renamed, the recommended remediation when using only async replication is to disconnect the FlashArrays and reconnect them. Should ActiveCluster be enabled, the recommended remediation is to run through the purearray connect process again from the array that initiated the connection originally.
  10. If locks remain on virtual machines during a failover (meaning the virtual machines cannot be powered down gracefully), an SRM failover will not complete successfully: the source SRM server is down and cannot shut down the source VMs, so the VM locks are still held by the source ESXi host(s) and the failover will not succeed. This can occur if 1) the source SRM server fails, 2) the source vCenter fails, or 3) the network partitions ESXi hosts from vCenter. If the array, the SAN, or the compute fails entirely, this issue does not occur. To resolve this, manually shut down the VMs on the source site or, if that is not possible, manually disconnect the storage on the source FlashArray from the host/host groups (a future release of the SRA will provide a function to attempt the latter of the two). (Fixed in 4.1.0)
  11. Pure Storage's best practice and recommendation is to place volumes in protection groups for use with SRM protection groups and recovery plans. Using hosts or host groups as the membership mechanism for volumes to be protected by SRM has inconsistent behavior, and support for it is best effort. Pure Storage is working to improve these workflows in a future SRA release, but at this time Pure recommends avoiding host or host group membership for FlashArray protection groups used with SRM.

SRA 3.1 Release Notes

Release: April, 2020

SRA 3.1 is supported only with the Linux SRM server.

New in This Release

Support for asynchronous replication together with ActiveCluster synchronous replication
Adds support for asynchronous failover from a pod (either stretched or unstretched) that is protected by ActiveCluster synchronous replication, including the following operations:
  • Recovering virtual machines using point-in-time snapshots as well as synchronous replication
  • Recovering volumes within a protection group that is within an ActiveCluster pod
Purity 5.2.x or higher is required for asynchronous replication from a pod.

Resolves an error importing certificates
Previously, with plugin version 3.0.154 installed on the Linux SRM Server version 8.2, the error “Unable to load shared library” was seen when importing a signed certificate from an offline Certificate Authority.

Improves device discovery times at scale

FlashArray//C support

Known Issues and Best Practices

  1. Pods protected by SRM cannot be renamed.
  2. Virtual volumes are not supported in a pod protected by SRM.
  3. Failback or reprotect into a stretched pod is not supported.
  4. If either FlashArray has been renamed after the arrays were connected, test failovers, failovers, and reprotects will not execute completely. This is due to the array pair naming not being updated when either array is renamed (if purearray list --connect is run from the CLI, the previous names will still be shown). Should an array be renamed, the recommended remediation when using only async replication is to disconnect the FlashArrays and reconnect them. Should ActiveCluster be enabled, the recommended remediation is to run through the purearray connect process again from the array that initiated the connection originally.

 

SRA 3.0.154 Release Notes

Release: May, 2019

New in This Release

Supports the SRM 8.2 Linux Appliance
SRA 3.0.154 supports VMware SRM version 8.2 for Linux.

Known Issues and Best Practices

  1. Volume group names are trimmed in the snapshot source when the array replication type is changed or the array is disconnected. In the vSphere SRM GUI, planned failover and disaster failover will show the status of partially recovered and throw errors for volumes in vgroups; a test failover will also show warnings. This is only an issue for datastore volumes in volume groups; all other datastores are not affected.
  2. When using ActiveCluster with SRM, ensure the personality for all Purity hosts connecting to ESXi servers is set to ‘esxi’.  Failure to do so may cause datastores backed by ActiveCluster stretched pods to become inaccessible after an interruption to the stretched storage connection.
  3. If either FlashArray has been renamed after they have been connected, all test failovers, failovers and reprotects will not execute completely.  The is due to the array pair naming not getting updated if either array is renamed (if purearray list --connect is ran from the CLI, the previous names will show).  Should an array be renamed, the recommend remediation if using only async replication is to disconnect the FlashArrays and reconnect them.  Should ActiveCluster be enabled, the recommended remediation is to run through the purearray connect process from the array that had the connection initiated from initially.
  4. After upgrading to PureSRA 3.0, it will be necessary to disable and re-enable the array manager used by PureSRA.  PureSRA 3.0 supports stretched storage so all existing array managers will need to be restarted to trigger a discover arrays operation.
  5. Failover and test failovers of volumes inside a volume group may fail if there is a replicated protection group with a host or host group as a member. A future release of Purity will fix this, but for now, a workaround will be to either not use volume groups, or make sure there are no replicated protection groups with hosts and host group members.
  6. If volumes are in the process of being deleted during an SRM device discovery operation, the discovery can inadvertently fail. This is a known issue and will be resolved. If this occurs, re-run the operation within SRM so the device discovery can be retried. This is most likely to occur in dynamic environments with vVols where VMs (and therefore their virtual volumes) are often migrated across arrays.

    The error will look something like:

    ERROR!
    649 com.vmware.vim.binding.dr.storage.fault.CommandFailed: SRA command 'discoverDevices' failed. Array operation failed: "PureRestException: HttpStatusCode = 'BadRequest', RestErrorCode = 'NotExist', Details = '["msg":"Volume does not exist.","ctx":"pod1::sync-vmfs"]', InnerException = 'System.Net.WebException: The remote server returned an error: (400) Bad Request.
    
    
  7. When using ActiveCluster with SRM, if the Stretched Pods are in a state of Resyncing, the device discovery will fail.  You will need to run the device discovery once all stretched pods are in sync.
  8. After installing the SRA to the Linux SRM server, an error that the SRA upload failed may appear. There is an issue with the SRM server where the browser session is closed after the SRA has been installed. The SRA should show up as installed after refreshing the SRM page.

Release Compatibility

  • Supports SRM version 8.2 for Linux.
  • Supports REST API version 1.3 or higher for asynchronous (snapshot) replication.
  • REST API version 1.13 (Purity 5.0) is required for stretched storage support.
  • At this time Pure Storage does not support protecting FlashArray volumes that are in volume groups; any support offered by Pure will be best effort. There are several caveats when using vgroups with SRM that can lead to DR failovers failing, these edge cases have not been tested thoroughly, and getting back to a good state often requires removing both the array and SRM protection groups and starting clean. This can be time consuming and can leave VMs unprotected, hence the recommendation not to use vgroups with SRM. This will be resolved in a future SRA release.
  • Purity 5.2.0 introduced the ability to asynchronously replicate volumes in a stretched pod. With SRA 3.0.154, there is no support for using SRM with volumes in a stretched pod that have an array protection group associated with the pod using async replication to a third site. The volumes in that array protection group will not show up in the device discovery page. Pure Storage is working to support this feature in a future SRA release.

Installation

The installation process has changed for SRM 8.2 Linux

  1. Unzip the Pure Storage SRA package zip file.
  2. Follow the instructions in the VMware SRM 8.2 Quick Start Guide to load the Pure Storage SRA adapter tar file to the SRM appliance manager.

 

SRA 3.0.14 Release Notes

Release: July, 2018

New in this Release

Supports Pure Storage ActiveCluster synchronous replication
SRA 3.0 supports ActiveCluster synchronous replication (a.k.a stretched storage) when used in combination with SRM version 6.1 or later and Purity version 5.0 or later. Users can now create protection groups and recovery plans based on storage policies, and run the full failover/reprotect workflow as well as test failovers.

Known Issues and Best Practices

  1. When configuring your FlashArray protection groups to be used with SRM, please ensure that the retention policy is configured to retain at minimum 1 snapshot for 1 day. This is important because the SRA leverages the replication retention policy when taking test, failover, and reprotect array-based snapshots.
    For Example:
    # purepgroup list --retention
    Name                                         Array   All For  Per Day  Days
    sn1-405-c12-25:sn1-405-25-prod-pgroup-1      source  1d       4        7
                                                 target  2h       1        1
  2. Whenever an array's replication type is changed, or an array loses its connection to replicating arrays, volume group names are trimmed in the snapshot source (as displayed by the CLI purevol list --snap command, for example).
  3. If either FlashArray has been renamed after they have been connected, all test failovers, failovers and reprotects will not execute completely.  The is due to the array pair naming not getting updated if either array is renamed (if purearray list --connect is ran from the CLI, the previous names will show).  Should an array be renamed, the recommend remediation if using only async replication is to disconnect the FlashArrays and reconnect them.  Should ActiveCluster be enabled, the recommended remediation is to run through the purearray connect process from the array that had the connection initiated from initially.
  4. For volumes in vgroups, planned failover and disaster failover show the status partially recovered in the vSphere SRM GUI, with an error message. A test failover also displays warnings. This is only an issue for datastore volumes in volume groups; all other datastores are not affected. To successfully recover, remove any SRM-controlled volumes from volume groups.
  5. For SRM protection groups configured with FlashArray volumes in volume groups, the SRM protection group/recovery plan will no longer work if volumes have been moved out of a volume group, volume group names have changed, or volumes were moved to a new volume group.
  6. Device Discovery, DR Failover and test failovers of volumes inside a Volume Group (VG) may fail if there is a replicated Protection Group with a host or host group as a member that is also connected to the volume in the VG. We expect a future release of Purity to fix this issue. Until then, the workaround is either to not use Volume Groups, or to ensure there are no replicated Protection Groups with hosts and host group members.
  7. When using ActiveCluster with SRM, ensure the personality for all Purity hosts connecting to ESXi servers is set to esxi.  Failure to do so may cause datastores backed by ActiveCluster stretched pods to become inaccessible after an interruption to the stretched storage connection.
  8. After upgrading to PureSRA 3.0, it is necessary to disable and re-enable the array manager used by PureSRA.  Because PureSRA 3.0 supports stretched storage, all existing array managers must be restarted to trigger a discover arrays operation.
  9. Test failovers of volumes, volume group volumes, or pod volumes are expected to fail if the volume name exceeds 42 characters. During the test failover the volume is renamed with the "-puresra-testFailover" suffix; if the original name exceeds 42 characters, the renamed volume exceeds 63 characters and the test failover fails. To avoid this, ensure that your volume names do not exceed 42 characters (see the sketch after this list).
  10. If volumes are in the process of being deleted during a vSphere SRM device discovery operation, the discovery can fail. This is a known issue that we expect to resolve in a future release. If this occurs, re-run the operation within vSphere SRM so that the device discovery can be retried. This issue is most likely to occur in dynamic environments with VVols where VMs (and therefore also their virtual volumes) are often migrated across arrays.

    The following is an example error message:

    ERROR!
    649 com.vmware.vim.binding.dr.storage.fault.CommandFailed: SRA command 'discoverDevices' failed. Array operation failed: "PureRestException: HttpStatusCode = 'BadRequest', RestErrorCode = 'NotExist', Details = '["msg":"Volume does not exist.","ctx":"pod1::sync-vmfs"]', InnerException = 'System.Net.WebException: The remote server returned an error: (400) Bad Request.
  11. When using ActiveCluster with SRM, if the Stretched Pods are in a state of Resyncing, the device discovery will fail.  You will need to run the device discovery once all stretched pods are in sync. 
  12. When using ActiveCluster with SRM, when testing a recovery plan, the test can fail if [pod_name::volume_name]-testFailoverStart exceeds 42 characters. The failure will note that the volume could not be created.
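
Referring to item 9 above, the following is a small, illustrative Python sketch (not part of the SRA) of the arithmetic behind the 42-character guidance: Purity volume names are limited to 63 characters, and the "-puresra-testFailover" suffix is 21 characters long, leaving 42 characters for the original name.

    MAX_PURITY_VOLUME_NAME = 63
    TEST_FAILOVER_SUFFIX = "-puresra-testFailover"
    MAX_SAFE_NAME = MAX_PURITY_VOLUME_NAME - len(TEST_FAILOVER_SUFFIX)  # 63 - 21 = 42

    def safe_for_test_failover(volume_name: str) -> bool:
        """Return True if the renamed test-failover volume stays within the limit."""
        return len(volume_name) <= MAX_SAFE_NAME

    print(MAX_SAFE_NAME)                                    # 42
    print(safe_for_test_failover("prod-sql-datastore-01"))  # True (21 characters)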

Compatibility

  • SRA 3.0 supports SRM 6.0, 6.1, 6.5, 8.1 and 8.2 on Windows® Servers. (note ActiveCluster is only supported with SRM 6.1 and later).
  • REST API version 1.3 or higher is required for asynchronous (snapshot) replication.  REST API version 1.13 (Purity 5.0) is required for ActiveCluster synchronous replication (stretched storage).
  • At this time Pure Storage does not support protecting FlashArray volumes that are in volume groups; any support offered by Pure will be best effort. There are several caveats when using vgroups with SRM that can lead to DR failovers failing, these edge cases have not been tested thoroughly, and getting back to a good state often requires removing both the array and SRM protection groups and starting clean. This can be time consuming and can leave VMs unprotected, hence the recommendation not to use vgroups with SRM. This will be resolved in a future SRA release.
  • Purity 5.2.0 introduced the ability to asynchronously replicate volumes in a stretched pod. With SRA 3.0.14, there is no support for using SRM with volumes in a stretched pod that have an array protection group associated with the pod using async replication to a third site. The volumes in that array protection group will not show up in the device discovery page. Pure Storage is working to support this feature in a future SRA release.

SRA 2.0 Release Notes

Release: December, 2016

New in this Release

  • Support for IPv6
    Requires Purity 4.9.0 or higher and vSphere 6.0 or higher.
    When an IPv6 IP address is configured in the vROps Management Pack, the address must be entered enclosed in square brackets. For example: [2015:0db8:85a3:0042:1000:8a2e:0360:7334]
  • Ability to configure more than one array on the recovery side
    When configuring multiple arrays on the recovery site, one set of login credentials must be used on those arrays.

Known Issues and Best Practices

  1. When configuring your FlashArray protection groups to be used with SRM, please ensure that the retention policy is configured to retain at minimum 1 snapshot for 1 day. This is important because the SRA leverages the replication retention policy when taking test, failover, and reprotect array-based snapshots.
    For Example:
    # purepgroup list --retention
    Name                                         Array   All For  Per Day  Days
    sn1-405-c12-25:sn1-405-25-prod-pgroup-1      source  1d       4        7
                                                 target  2h       1        1

Compatibility

SRA 2.0 supports SRM 5.5, 5.8, 6.0, 6.1, and 6.5 on Windows® Servers.


SRA 1.5 Release Notes

Release: December, 2015

New in this Release

Compatibility

SRA 1.x supports SRM 5.5, 5.8, 6.0, and 6.1 on Windows® Servers.


SRA 1.0 Release Notes

Release: December, 2014

Compatibility

SRA 1.x supports SRM 5.5, 5.8, 6.0, and 6.1 on Windows® Servers.

General

TLS Support

Any client that communicates with the Purity REST API must support TLS 1.1 or 1.2. Ensure that HTTPS calls to the REST API have TLS 1.1 and 1.2 enabled. For information on how to update your code to work properly with TLS 1.1, see the Knowledge Base article Pure Storage REST API Best Practices.
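
As a minimal sketch (assuming Python 3 and a placeholder array address), the following shows a client-side HTTPS context that refuses anything older than TLS 1.2 before calling the REST API; adjust the minimum version to TLSv1_1 if your environment still requires TLS 1.1.

    import ssl
    import urllib.request

    # Refuse TLS versions older than 1.2 for REST API calls.
    ctx = ssl.create_default_context()
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2

    # Placeholder address and endpoint; substitute your FlashArray and API path.
    req = urllib.request.Request("https://flasharray.example.com/api/api_version")
    with urllib.request.urlopen(req, context=ctx) as resp:
        print(resp.status, resp.read().decode())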

Compatibility and Requirements

  • Supported versions of VMware SRM vary by Pure Storage SRA release:
    • SRA 2.0 supports SRM 5.5, 5.8, 6.0, 6.1, and 6.5 on Windows® Servers.
    • SRA 1.x supports SRM 5.5, 5.8, 6.0, and 6.1 on Windows® Servers.
  • This release supports Windows Server 2008 R2, Windows Server 2012, Windows Server 2012 R2 and Windows Server 2016.
  • This release has not been tested or verified with Windows Server 2008, and the SRA is not officially supported on that operating system.
  • This release is compatible with FlashArrays with Purity Operating Environment that support REST API 1.2 or higher. We recommend Purity 4.0.14 or higher.
    Note: IPv6 support requires Purity 4.9.0 or higher.
  • Host side connectivity is supported with Fibre Channel and iSCSI.
  • At least two FA-4xx or //m FlashArrays are required.

Installation

  • To install the SRA adapter, extract and run PureSRAInstaller.exe on the Windows Server machines hosting the protected site and recovery site SRM servers.
  • Administrator privilege is required to install.
  • The Pure Storage FlashArray SRA requires .NET Framework 4.5 or later to be installed on the same machine where you run the installer.

Documentation

Please see the Pure Storage FlashArray Storage Replication Adapter (SRA) Guide on Pure1 Knowledge under Solutions > Virtualization > VMware > Site Recovery Manager (SRM).

Known Issues and Best Practices

Important: Prior to installing and using the Pure Storage FlashArray SRA, read the user guide available in the Site Recovery Manager - SRM section.

  • The SRA uses -puresra-testFailover and -puresra-failover as suffixes for test-failed-over and failed-over volumes. The names of demoted volumes are appended with the suffix -puresrademoted.
  • When protection groups are configured to use hosts or host groups (instead of selected volumes), the SRA may create helper protection groups starting with puresra- during prepareFailover.
  • During SRM operations, do not create, rename, or destroy volumes with those suffixes, or create, rename, or delete protection groups with the puresra- prefix.
  • Do not rename an array, protection group, SRM protection group, host, host group, or volume that is involved in a Protection Plan while the plan is being executed.
  • Names of protection groups cannot exceed 62 characters in length. To allow for the puresra- prefix, do not use a protection group name longer than 54 characters (see the sketch after this list).
  • For protection groups involved in SRM workflows, apply a reasonably short retention policy so that snapshots created during SRM operations are cleaned up in a timely manner.
  • Do not have two volumes of the same name on the source and target arrays (for example, the source array has a volume named MyVol and the target array also has a volume named MyVol). Reprotect will not work and will fail with an error message about not being able to rename a volume.
  • If the installer fails to download and install the required .NET Framework, manually download and install the framework from the Microsoft download site (for example, the .NET Framework 4.5 installer download), then re-run the SRA installer.
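
The naming rules above boil down to simple checks. The following is an illustrative Python sketch (not part of the SRA, with hypothetical names) that validates a protection group name against the 62-character limit minus the 8-character puresra- prefix, and flags volume names that end with the reserved SRA suffixes.

    MAX_PGROUP_NAME = 62
    HELPER_PREFIX = "puresra-"
    MAX_SAFE_PGROUP_NAME = MAX_PGROUP_NAME - len(HELPER_PREFIX)  # 62 - 8 = 54

    RESERVED_VOLUME_SUFFIXES = ("-puresra-testFailover", "-puresra-failover", "-puresrademoted")

    def pgroup_name_ok(name: str) -> bool:
        """Short enough to allow the helper prefix and not using the reserved prefix itself."""
        return len(name) <= MAX_SAFE_PGROUP_NAME and not name.startswith(HELPER_PREFIX)

    def volume_name_ok(name: str) -> bool:
        """Does not end with a suffix the SRA reserves for its own renames."""
        return not name.endswith(RESERVED_VOLUME_SUFFIXES)

    print(pgroup_name_ok("sql-prod-pgroup"))        # True
    print(volume_name_ok("vol1-puresra-failover"))  # False: reserved suffix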

Log Locations

SRA logs are located in %PROGRAMDATA%\VMware\VMware vCenter Site Recovery Manager\Logs\SRAs\purestorage. Each invocation of the SRA produces one log file. Sort by Date Modified to see the commands executed in chronological order.

SRM logs are located in %PROGRAMDATA%\VMware\VMware vCenter Site Recovery Manager\Logs and named vmware-dr-##.log. The file with the largest ## number is the most recent. SRM logs are useful for diagnosing problems in the following cases:

  • The SRA responded correctly, but an SRM operation failed.
  • The SRA operation failed before creating an entry in the SRA logs.

The SRA attaches the REST call transcript at the end of each log file (e.g. testFailoverStart_2014-12-0410-24-22-0829439-1511666224.log). Look for "Rest Library transcript" in the log file. The SRA logs only the request URL and the response code.
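
Because each SRA invocation produces its own log file, sorting by modification time is the quickest way to follow a workflow. The following is a small, illustrative Python sketch (assuming it runs on the Windows SRM server itself) that lists the SRA logs in chronological order from the directory given above.

    import os
    from pathlib import Path

    sra_log_dir = Path(os.environ["PROGRAMDATA"]) / "VMware" / \
        "VMware vCenter Site Recovery Manager" / "Logs" / "SRAs" / "purestorage"

    # Oldest first; the last entry printed is the most recent SRA invocation.
    for log in sorted(sra_log_dir.glob("*.log"), key=lambda p: p.stat().st_mtime):
        print(log.stat().st_mtime, log.name)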

Copyrights and Licenses

© 2018 Pure Storage, Inc. All rights reserved.

No part of this publication may be reproduced or transmitted in any form or by any means, electronic or mechanical, including photocopying and recording, or stored in a database or retrieval system for any purpose without the express written permission of Pure Storage, Inc.

Pure Storage, Inc., reserves the right to make changes to this document at any time without notice and assumes no responsibility for its use. This document contains the most current information available at the time of publication. When new or revised information becomes available, this entire document will be updated and distributed to all registered users.
This product includes software developed by the JSON.NET Project (https://json.codeplex.com), a high-performance JSON framework for .NET, which is distributed under the MIT license (https://json.codeplex.com/license) as shown below in the Open Source Licenses section.

All other trademarks, service marks, and company names in this document or website are properties of their respective owners.