Pure1 Support Portal

Configuring the FlashArray SRA Array Managers

FlashArray Array Manager Overview

In Site Recovery Manager there are two important parts that allow discovery of your replication environment; the SRA, and array managers.

The SRA is an installed "plugin" that provides the libraries SRM needs to communicate with a third-party array, like the FlashArray. For SRM to be able to talk to a given array, though, it needs to be authenticated. Authentication to a given array--more specifically, to an array pair--is achieved through something called an array manager. An array manager is an authenticated instance in SRM that allows source and target arrays to be discovered and controlled.

For Pure Storage FlashArrays, there is no requirement to deploy a management appliance to provide API-based control of the array. Instead, every FlashArray comes built-in with a REST API service. So the process to allow SRM control of a FlashArray is two-fold: installing the SRA, and populating the array managers with FlashArray addresses and respective credentials.

When configuring an SRM array manager, you need to supply credentials for the array(s) hosting your VMs and for the array(s) that they are being replicated to. Furthermore, since SRM is a two-site, bidirectional tool, the remote SRM server needs those same credentials as well.

Before we continue let's define a few terms:

  • Storage Replication Adapter--the installed plugin that imports the required libraries to communicate with a FlashArray
  • Array Manager--an interface that allows specific FlashArrays to be identified and authenticated to.
  • Array Manager Pair--the combination of an array manager on the local SRM server and one on the remote SRM server; both must be configured for every given array pair.
  • Discovered Arrays--each array manager pair coordinates in order to identify arrays that are properly authenticated and are replicating to each other. In SRM, array pairs are then returned. This includes physical FlashArrays as well as pods.
  • Discovered Devices--from each discovered array pair, all of the devices that are replicated between the source array and the target array are listed. These listed devices are the storage objects that are marked as replicated by the SRA for use within SRM. It is important to note that these objects can only be included in an SRM protection group if they are in use in that particular VMware environment. If they are not in use as a VMFS or an RDM, SRM will filter them out as options for inclusion in an SRM protection group.
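The filtering described in the last bullet can be sketched as a simple model. This is illustrative pseudologic only--not the SRA's or SRM's actual code--and all names are made up:

```python
# Illustrative model of SRM's device filtering: of all devices the SRA
# reports as replicated, only those in use as a VMFS datastore or an RDM
# remain eligible for inclusion in an SRM protection group.
def eligible_devices(replicated_devices, datastore_volumes, rdm_volumes):
    """Return the replicated devices currently in use by vSphere."""
    in_use = set(datastore_volumes) | set(rdm_volumes)
    return [d for d in replicated_devices if d in in_use]
```

For example, if `vol2` is replicated but not presented to vSphere as a VMFS or RDM, it is filtered out and cannot be added to a protection group.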

The FlashArray SRA currently supports three modes of replication:

  1. Periodic asynchronous replication from a FlashArray to another FlashArray. These are volumes that exist on one FlashArray that are periodically snapshotted, and those snapshots are sent to a target FlashArray. SRM can then fail over volumes from the source FlashArray to a target FlashArray connected over the asynchronous distance.
  2. Periodic asynchronous replication from within a pod on one FlashArray to another FlashArray. This pod may or may not be stretched across physical FlashArrays--being stretched is not a requirement. These are volumes that exist in a pod that are periodically snapshotted, and those snapshots are sent to a target FlashArray. The main difference between this mode and the previous one (volumes that are not in a pod) is that these volumes are not tied to a physical FlashArray as the source--the pod, and therefore the volumes in it, can be moved from one FlashArray to another without reconfiguring SRM protection groups. SRM can then fail over volumes from the source pod to a target FlashArray connected over the asynchronous distance. Array manager configuration is no different for this mode than for the previous one and will be treated as the same.
  3. Stretched storage. In this case, a volume is in a pod that is stretched over two physical FlashArrays. For this to work, the pod MUST be stretched. When a volume is stretched, the volume exists on two arrays and can be written to and read from simultaneously on both FlashArrays. In this configuration, there is no periodic replication, and there is no failing over of datastores. Instead, since the latest copy of the VMs on a datastore is always at both sites, an SRM failover just coordinates a restart of the affected VMs at the recovery site. There is no storage failover. If the sites are properly connected there may not even be a restart of the VMs; instead, a cross-vCenter vMotion is attempted to move the running memory and CPU state of the VMs from one vCenter to the target vCenter.

How array managers are configured dictates what type of failover is allowed. Follow through to the appropriate sections for information on configuring the array managers for your specific replication topology.

Array Manager Configuration for Periodic Replication

The FlashArray offers asynchronous replication in a periodic fashion through a mechanism called Protection Groups. A FlashArray Protection Group is a consistency group that has a remote replication schedule that specifies a replication interval (how often a snapshot is created and sent to a remote FlashArray) and a retention policy (how long each replicated snapshot is kept on the remote FlashArray). FlashArray volumes in this scenario can host a VMFS datastore or a Raw Device Mapping (RDM). SRM can then fail over the datastores/RDMs from the source FlashArray to the remote FlashArray. The FlashArray hosting the volumes can be considered the source FlashArray and the FlashArray that is being replicated to can be considered the remote FlashArray.

This section does not cover how to configure array managers for protection groups that are inside of a pod. That will be covered in the next section.

Note that the frequency of replication and/or the retention policy has no direct bearing on SRM. Replication must be enabled on the protection group to allow SRM to discover the volumes as replicated--but no specific settings beyond that are required. It is important to note, though, that the more frequent the replication, the shorter the synchronization period during a failover and, more importantly, the shorter the RPO in the case of a disaster.
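As a rough illustration of that interval/RPO relationship, a simplified model (my own approximation, not a Pure Storage formula) is that the worst-case data loss is about one replication interval plus any in-flight transfer time:

```python
def worst_case_rpo(interval_minutes, transfer_minutes=0):
    """Simplified approximation: at the moment of a disaster, the newest
    complete snapshot on the target can be up to one interval old, plus
    the time an in-flight transfer would still have needed to finish."""
    return interval_minutes + transfer_minutes

# A protection group replicating every 5 minutes gives a worst case of
# roughly 5 minutes of data loss (more if a transfer was still in flight).
```

Actual data loss in a real event depends on snapshot timing, transfer bandwidth, and change rate; this is only meant to show why shorter intervals mean shorter RPOs.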

clipboard_efc485208060d1f2e5805326bf0439e29.png

The above image is the view of the protection group on the source FlashArray (flasharray-m50-1). The protection group is always created and managed on the source FlashArray. As seen in the image, there is a protection group named srm-groupA (seen near number label 4) created on a FlashArray called flasharray-m50-1 (seen near number label 1). This protection group replicates to a FlashArray called flasharray-m50-2 (seen near number label 2). This particular protection group replicates any volume in it from flasharray-m50-1 to flasharray-m50-2 every 5 minutes (as seen near number label 3).

The protection group can also be viewed on the remote FlashArray. If you log in to the remote FlashArray, you can see the "remote" view of the protection group srm-groupA as well. The remote protection group view shows the protection group name slightly differently, as it shows the source FlashArray name as a prefix (followed by a colon), as seen near number label 2 in the following image. The FlashArray hosting this remote view of the protection group can be seen near number label 1.

clipboard_e7b653bacb75a62dc8c54140074761f6b.png

The next step is to configure the SRM array managers with the connection information to both arrays. Let's re-confirm the requirements:

  1. Have a replication connection enabled and healthy between your source and target arrays (this can be a synchronous or asynchronous connection--either is fine)
clipboard_e1e0a82844a8bb52e6675f7d1eb322af7.png
  2. Each SRM server should have TCP port 443 access to the virtual IPs of both FlashArrays
  3. Have at least one enabled protection group on the source array replicating to the target array.
  4. A supported release of the SRA installed on both SRM servers. Both SRAs must be the same version. Pure Storage encourages the use of the latest available version of the SRA.
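Requirement 2 (TCP 443 reachability from each SRM server to both arrays' virtual IPs) can be spot-checked with a few lines of Python. This is just a convenience sketch; the hostnames in the comment are placeholders for your own arrays' virtual IP FQDNs:

```python
import socket

def port_reachable(host, port=443, timeout=5):
    """Return True if a TCP connection to host:port succeeds within timeout.
    Run this from each SRM server against both FlashArray virtual IPs."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example (replace with your arrays' virtual IP FQDNs):
# for fqdn in ("flasharray-m50-1.example.com", "flasharray-m50-2.example.com"):
#     print(fqdn, port_reachable(fqdn))
```

Note this only proves TCP connectivity; it does not validate certificates or credentials.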

Once configurations are confirmed, log into the Site Recovery Manager management interface.

These instructions are focused on the 8.2 release of SRM, so screenshots and exact step-by-step clicks may vary. The requirements and the inputs do not change between different releases though unless specifically noted.

clipboard_eff1d436fa7e1c2cb722b60b332bee1cd.png

Then click on View Details of the SRM pair you would like to configure to find the array manager configuration interface.

clipboard_e4a8eb99551dace0e8ab5c524cc975a38.png

First confirm that the Pure Storage SRA is installed. Click on Array Based Replication -> Storage Replication Adapters.

clipboard_e64602e7e5000df3b4f69a556aef48211.png

Confirm that the status is OK. If so, click on Array Pairs and then the Add button.

clipboard_e65f2a15bf83bf63e020ae8b8bda4c340.png

In the window that appears, select the Pure Storage FlashArray SRA and then Next.

clipboard_e085a1fa3cc6505e18f5ed0a59e275bc8.png

In this wizard, array managers are configured for both the source SRM server and the target SRM server. The first step is usually the source SRM server. Since SRM is technically a bidirectional tool (and therefore there really is no such thing as a "source" and "target" SRM server as they both can be both at the same time) it is important to verify which server you are operating on. In the Local Array Manager step in the wizard, look at the top where it says "Enter a name for the array manager on <insert vCenter name>".

clipboard_ef597061d4b4eb28d9912a52794c451ac.png

In the above case, the vCenter is named "vcenter-01.purecloud.com". Verify which arrays are local to the vCenter reported there in your environment. In this case my array flasharray-m50-1 is local to this vCenter and the flasharray-m50-2 is connected to the other vCenter.

First, name the array manager something that makes sense to you. I will call mine "VC-m50-1". 

clipboard_e5068d8d6c62f59cfbbe806f8800c0a44.png

Next populate the connection information. Enter in the FQDN of the FlashArray (this maps to the virtual IP address of the array). IP addresses are also acceptable, but FQDNs are generally preferred.

The recommendation is to create a special account for SRM interaction. This can be either a local account or through an external source like LDAP. All that matters is that it is at the user level of storage admin or higher.

clipboard_edc6a7e5a7ca946a986aeb0c23a024e02.png

Enter in the credentials and FQDN to the local array in the local array(s) entry form:

clipboard_e18a988d9236a145c58f80157d1038d11.png

Note that starting with SRA release 3.1, you can enter in more than one array in the local arrays address box. Each array can be entered in via comma separation. The requirement to do that though is that the same credentials are valid for each array. If they have different credentials, you will need to create a separate array manager pair.
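A small sketch of how such a comma-separated address field might be split into individual arrays (illustrative only--the SRA's actual parsing is not published):

```python
def parse_array_addresses(field):
    """Split an SRM array-manager address field (SRA 3.1+ accepts a
    comma-separated list) into individual FlashArray addresses,
    trimming stray whitespace and empty entries."""
    return [addr.strip() for addr in field.split(",") if addr.strip()]
```

Remember that every address in one field must share the same credentials; arrays with different credentials need their own array manager pair.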

clipboard_e8bb46900777838047e54242e918f331b.png

Once the local array has been added, add in the connection information for the peer array. Towards the bottom of the screen, enter in the target information for the peer array in the section called "The peer Array(s)".

clipboard_ebe7d9501cf77db8734a23ed70efe0ca4.png

In this case I have entered in the peer FlashArray as flasharray-m20-1.purecloud.com. This represents the following replication connection listed on flasharray-m50-1.purecloud.com:

clipboard_e751d2feb21164f13c6b1c77431e4984d.png

The full local array manager looks as follows:

clipboard_ef88985c32321297de2d5b14385eb8a53.png

Confirm the details and click Next.

Now for the screen labeled Remote array manager, enter in the reverse details as compared to the local array manager. This will enter in the array connection information for the remote SRM server. Confirm the listed vCenter near the top and give the array manager a name:

clipboard_e9963c178f390f7bb54376c55a78d3f11.png

Next populate the array connection information. The local array(s) should be what was listed as the peer in the previous array manager, and the peer should be what you entered as the local array.

clipboard_e2f542b562ac7b256200fb7aea3b0b8cb.png

When done, click Next.

The next step will list discovered array pairs. The pairs that are listed will be the FlashArrays that have replication connections from the arrays entered. In the case above, I entered the FlashArray flasharray-m50-1 on one site and the FlashArray flasharray-m20-1 on the other.

An important point to understand is that there really isn't such a thing as a remote array manager. In reality, all array managers are local array managers. The key is that a given array manager is LOCAL to a certain SRM server. That same array manager, in reference to the other SRM server in that pair, is REMOTE. In other words, an array manager is local to one SRM server and remote to the other.

These two array managers allow the SRA to see what arrays are available on either side. Since a local array manager was found for both arrays, the pairing of the two is valid. 

clipboard_ee1125090d519974d17f5271d7b622eda.png

Array pairs that have an identified local array manager on each of the two SRM servers will be shown as Ready to be enabled. The array pair discovery process will also find array pairs that exist for the specified arrays but are listed as No peer array pair. This means that the SRA found other arrays that one or both of the specified arrays are replicating to, but it did not find an array manager configured with that array as a local array. For an array pair to be considered valid, both arrays must be configured as a local array in separate array managers.
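The pairing logic can be modeled roughly like this (an illustrative sketch, not SRM's actual implementation; array names below are from the examples in this article):

```python
def pair_status(array_a, array_b, local_arrays_site1, local_arrays_site2):
    """Classify a discovered replication pair the way SRM presents it:
    'Ready to be enabled' requires each array to be registered as a
    local array in an array manager on opposite SRM servers."""
    if ((array_a in local_arrays_site1 and array_b in local_arrays_site2) or
            (array_a in local_arrays_site2 and array_b in local_arrays_site1)):
        return "Ready to be enabled"
    return "No peer array pair"
```

So a pair shows as "No peer array pair" whenever one side of the replication relationship was never entered as a local array anywhere.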

Enabling an array pair means that SRM device discovery will occur between those arrays which lists what volumes are replicated and suitable for SRM control. If you do not want volumes on an array pair to be listed for SRM, do not enable that pair.

Valid array pairs will be selected. If you would like that array pair to be enabled, then leave it selected. If there are pairs that are valid, but for whatever reason you do not want them to be enabled, it is safe to deselect them (you can enable them at a later date if you prefer). If there is an array pair that is not listed as ready to be enabled but you want it to be, verify that both arrays in the pair are listed as local arrays in the array manager on their respective SRM server.

Click Next.

clipboard_e493b52a3234ca5c6192805e4f17e16fe.png

Verify the information and click Finish when ready.

The selected array pairs will be enabled and will be listed in the Array Pairs screen.

clipboard_e2a961aa27ad537d5460ccf390df8b54c.png

This will list the source/target array pairs and the corresponding array managers that control each side of the replication. 

Array Manager Configuration for Pod-based Periodic Replication

On the FlashArray, there is an object referred to as a pod. A pod can be defined in many ways, but the simplest explanation is that a pod is a unique namespace. Within this namespace you can create volumes, protection groups, and snapshots whose names do not have to be globally unique.

clipboard_e2eabcfdd3705c8b2420d66f2af4c0204.png

A pod is created by logging into a FlashArray and simply creating one--the only input required is a name:

clipboard_ea82bd90e99a4172822552c861d5fe735.png

Once that pod is created, objects can then be created within it.

clipboard_e8a65db1b45a4c85bece6dc6519bb0ad2.png

So a simple question is, why do I need a pod? Why not just create volumes with no pod? Well, an important part of a pod is that it is not only a unique namespace, but a mobile namespace. It is a namespace that can be moved non-disruptively from one physical FlashArray to another. It is not forever tied to where it was initially created. This allows a user such as yourself to be able to move a group of volumes and their resources (snapshots, asynchronous replication groups, etc.) to another FlashArray as needed.

A pod is moved between arrays through a process called stretching. "Stretching" a pod means making that pod and all of its internal resources (volumes, snapshots, protection groups) available on a 2nd array simultaneously. When a pod exists on two arrays at once, all of the volumes can also be written to and read from on both FlashArrays at once. This configuration is referred to as ActiveCluster. ActiveCluster is the FlashArray term for active-active synchronous replication.

A pod can then be unstretched from one array and then stretched back to the original array or a completely different array.

So a basic process around pods might be:

  1. Create a pod on FlashArray A.
  2. Create a volume called myVolume in that pod.
  3. Stretch the pod to FlashArray B. The pod and the volume named myVolume now exist on two arrays, FlashArray A and B. This is now an ActiveCluster configuration and the pod and its volumes can remain in this state indefinitely. Volumes in an ActiveCluster state have higher resiliency because the volumes remain available even if an entire FlashArray fails.
  4. Unstretch the pod from FlashArray A. The pod now only exists on FlashArray B. So the volume myVolume has now been non-disruptively moved from FlashArray A to B. This effectively disables ActiveCluster on the volume myVolume.
  5. Stretch the pod to FlashArray C. This now makes the volume myVolume available on FlashArray B and C at the same time--re-enabling ActiveCluster but with a slightly different pair of FlashArrays (B and C instead of A and B).
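The stretch/unstretch lifecycle in the steps above can be modeled as a tiny state machine (a toy illustration, not the Purity API; the class and method names are my own):

```python
class Pod:
    """Toy model of pod mobility via stretch/unstretch."""
    def __init__(self, name, array):
        self.name = name
        self.arrays = {array}          # arrays the pod currently exists on

    @property
    def stretched(self):
        return len(self.arrays) == 2   # ActiveCluster when on two arrays

    def stretch(self, array):
        if self.stretched:
            raise ValueError("pod is already stretched to two arrays")
        self.arrays.add(array)

    def unstretch(self, array):
        if not self.stretched:
            raise ValueError("pod is only on one array")
        self.arrays.remove(array)
```

Walking a pod from FlashArray A to B to C with this model mirrors steps 1 through 5: stretch to B, unstretch from A, then stretch to C.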

Furthermore, the protection provided by pods and ActiveCluster can be complemented with periodic replication over great distance. As of Purity 5.3.x, ActiveCluster supports arrays at a distance of up to 11 ms of round-trip time (RTT)--which may not be enough distance to put both FlashArrays out of the blast radius of a major disaster (typhoon, hurricane, etc.). Therefore, starting with Purity 5.2.x, it is possible to replicate the volumes in a pod (stretched or not) to a third FlashArray by creating an asynchronous replication-enabled protection group in the pod and then adding the desired volumes in that pod to the protection group.

clipboard_ee4be2156bc40a17b0be57d502cc07182.png

For more information on ActiveCluster, please see the following page:

https://support.purestorage.com/FlashArray/PurityFA/Protect/ActiveCluster

Your next question might be "Yeah, cool, but what does this have to do with array managers in SRM?". Good question, anonymous reader. 

The characteristic that a pod is not tied to a specific physical array is something we did not want to break, or more accurately, it is a behavior that we didn't want to unnecessarily restrict within SRM. Traditionally, when FlashArray pairs were discovered in SRM array managers, the FlashArray SRA would return physical array pairs (e.g. FlashArray A replicates to FlashArray B). If the SRA returned volumes replicated from a pod under that physical array pair, it would require the containing pod to never be moved. If the pod was moved (unstretched then stretched) to a new array pair, SRM would not be able to understand the change (changing which array pair owns a volume is not a workflow that SRM supports) and a reconfiguration of SRM would be required--likely breaking disaster recovery abilities until resolved. This is less than ideal.

To avoid this dissonance, the FlashArray SRA version 3.1 and later returns pods as potential replication sources for array pairs. So a pod is the source and a remote physical FlashArray is the target.

Note that currently (as of Purity 5.3.x) a pod cannot be a target for periodic replication--it can only be a source. Periodic replication (snapshot-based replication managed by a protection group) always replicates to the "root" of the array.
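This source/target rule can be expressed as a small validation sketch (illustrative only; the `type` values here are my own labels, not Purity API fields):

```python
def valid_periodic_pair(source, target):
    """As of Purity 5.3.x, a pod can be a periodic-replication source
    but never a target; targets are always the root of a physical array."""
    return target["type"] == "array" and source["type"] in ("array", "pod")
```

In other words, pod-to-array and array-to-array pairs are valid, while anything targeting a pod is not.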

So in the below example, we have a pod named podSRM (seen near number label 2) currently residing on a FlashArray called flasharray-m50-1 (seen near number label 1) with a protection group in it called replicateto3rdSite (seen near number label 3). This protection group replicates to a FlashArray called flasharray-m20-1 (seen near number label 4).

clipboard_e52224cca091120c7c22c55f313621e9e.png

Since this pod has a replication relationship, the SRA will discover the pod as a source "array" and the target physical FlashArray as the target:

clipboard_e856f20207243ac1c4c1f99506dab420d.png

Near number label 1, the flasharray-m50-1 (a physical FlashArray) to flasharray-m20-1 (a physical FlashArray) replication pair can be seen.

Near number label 2 the podSRM (a pod) to flasharray-m20-1 (a physical FlashArray) relationship will be seen.

So volumes that are in asynchronous replication-enabled protection groups that are NOT in any pod will be in the "physical" array pair. Volumes that are in protection groups in the pod podSRM will be listed under the "pod to array" array pair.

Configuring the array managers for this is very similar to non-pod based periodic replication. In the local array(s) field, enter the source array, and for the peer, add the array that is the failover target. The one difference comes with pods that are stretched across two arrays.

Let's walk through both scenarios.

Configuring Array Managers for Periodic Replication from an Unstretched Pod

An unstretched pod is a pod that is currently on only one FlashArray at the current point-in-time. A stretched pod is a pod that is on two FlashArrays at the current point-in-time.

Another name for this configuration is a "local" pod--though I am not a fan of this terminology. It seems to imply that there is a fundamental difference between a local pod or a stretched pod, and more specifically implies there is a "type" of a pod, which could be misleading. There are no pod types, just current pod states. Therefore, I will use the term unstretched pod or stretched pod when necessary. The standalone term "pod" will be used when the fact that a pod happens to be stretched or unstretched makes no difference to the statement.

To configure an array manager for a pod, identify which FlashArray the pod currently sits on, and also the FlashArray that the pod-based protection group replicates to.

Below I have a pod called srmPod, which is currently only on a FlashArray called flasharray-m50-1.

clipboard_ed4998c8069753e5044e2568d75b8045f.png

This pod also has a protection group called srmProtectionGroup:

clipboard_e902f6277014df8c85ae618da564218e6.png

This protection group replicates every 10 minutes to a FlashArray called flasharray-m20-1:

clipboard_e68afc353b8c3595d2a7994616aac93ca.png

In SRM, go to create a new array manager:

clipboard_e0137fcace6845a846da59d6fb4ff83fd.png

Confirm the correct SRA version is installed (the latest available is generally recommended).

clipboard_e3f607069de5c0779589b25cdee0fa8a3.png

Name the local array manager something descriptive. This will be for communication to the FlashArrays local to my vCenter called vcenter-01, so I will name it vc01-local. The FlashArray (flasharray-m50-1) hosting my pod is local to vcenter-01, so that will be the address entered for the local array. The target array (flasharray-m20-1) will be entered as my peer array.

clipboard_e04d909cbc98844f85cbeff1a694e32c1.png

Click Next. Now do the opposite FlashArray configuration for the target vCenter. In my case the target vCenter is called vcenter-02.

clipboard_ee76b00689f9517b2a6fd9b0ff5ee10f9.png

Click Next. You will see discovered array pairs in the next screen. SRM will automatically select array pairs that it can immediately enable. Select and de-select as needed. You can also enable or disable array pairs at a later time. If you have no intention of using a specific array pair, the suggestion would be to not enable it. This will shorten device discovery by not having the SRA query unneeded array pairs for replication details.

In my case I will keep selected the array pair that reflects my pod-to-array replication (srmPod to flasharray-m20-1):

clipboard_e5ec90c8549c5eb3e38b1c95a0b58aa5f.png

If a discovered pair shows up as "No peer array pair", that usually means SRM could not identify an array manager on the opposing SRM server for that particular FlashArray (or pod). If you would like to enable that array pair, ensure that the address of the FlashArray hosting the missing array is added as a local array on the opposing SRM server.

Click Finish.

clipboard_e956196ed522b1eac901ee9ce28e7219b.png

All pods on the source array and target arrays will be discovered as an array, whether or not they have a protection group in them replicating to another array.

Configuring Array Managers for Periodic Replication from a Stretched Pod

The configuration of array managers to support failover from a stretched pod is almost identical to configuration for an unstretched pod, with one major exception: there are now two local arrays, not one. So both need to be specified in the appropriate place in the local array manager and the remote array manager.

While it is not technically required to specify both, if only one array is registered and that array fails, the SRA will not be able to perform a planned migration. Instead, a disaster recovery will be needed. This will still result in a successful failover, but the source side will not be brought down, which will then require eventual manual cleanup. Therefore, an SRM failover from a stretched pod requires that both FlashArrays be specified in the array manager. Not doing so is not supported.

Below there is a pod named srmPod that is stretched across two FlashArrays, flasharray-m50-1 and flasharray-m50-2:

SECTION IN PROGRESS...

 

Enabling or Disabling Array Pairs

SECTION IN PROGRESS...

Adding a New Array

SECTION IN PROGRESS...

Moving a Pod to a New Array

SECTION IN PROGRESS...

Deleting an Array Manager

SECTION IN PROGRESS...