
SRM User Guide: Configuring the FlashArray SRA Array Managers



FlashArray Array Manager Overview

In Site Recovery Manager there are two important parts that allow discovery of your replication environment: the SRA and array managers.

The SRA is an installed "plugin" that provides the libraries SRM needs to communicate with a third-party array, like the FlashArray. For SRM to talk to a given array, though, it must be authenticated. Authentication to a given array, or more specifically to an array pair, is achieved through something called an array manager. An array manager is an authenticated instance in SRM that allows source and target arrays to be discovered and controlled.

For Pure Storage FlashArrays, there is no requirement to deploy a management appliance to provide API-based control of the array. Instead, every FlashArray comes built-in with a REST API service. So the process to allow SRM control of a FlashArray is two-fold: installing the SRA and populating the array managers with FlashArray addresses and respective credentials.
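Because the REST API service is built into every FlashArray, it is easy to confirm API reachability and credentials before touching SRM at all. Below is a minimal sketch, assuming the purestorage Python SDK (pip install purestorage); the FQDN and credential values are hypothetical:

    # Minimal check that the FlashArray REST API service is reachable and
    # that the credentials work. Names and credentials are examples only.
    import purestorage

    array = purestorage.FlashArray(
        "flasharray-m50-1.purecloud.com",   # hypothetical FQDN (virtual IP)
        username="srm-sra-user",            # hypothetical SRA service account
        password="********",
    )

    info = array.get()  # returns basic array attributes
    print(info["array_name"], "Purity", info["version"])

    array.invalidate_cookie()  # close the REST session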

When configuring an SRM array manager, you need to supply credentials for the array(s) hosting your VMs and for the array(s) that they are being replicated to. Furthermore, since SRM is a two-site, bidirectional tool, the remote SRM server needs those same credentials as well.

Before we continue let's define a few terms:

  • Storage Replication Adapter--the installed plugin that imports the required libraries to communicate with a FlashArray.
  • Array Manager--an interface that allows specific FlashArrays to be identified and authenticated to.
  • Array Manager Pair--the pairing of an array manager on the local SRM server with one on the remote SRM server. Array managers must be configured on both servers for every given array pair.
  • Discovered Arrays--each array manager pair coordinates in order to identify arrays that are properly authenticated and are replicating to each other. In SRM, array pairs are then returned. This includes physical FlashArrays as well as pods.
  • Discovered Devices--from each discovered array pair, all of the devices that are replicated between the source array and the target array are listed. These listed devices are the storage objects that are marked as replicated by the SRA for use within SRM. It is important to note that these objects can only be included in an SRM protection group if they are in use in that particular VMware environment. If they are not in use as a VMFS or an RDM, SRM will filter them out as options for inclusion in an SRM protection group.

The FlashArray SRA currently supports four modes of replication:

  1. Periodic replication from a FlashArray to another FlashArray. These are volumes that exist on one FlashArray that are periodically snapshotted and those snapshots are sent to a target FlashArray. SRM can then failover volumes from the source FlashArray to a target FlashArray connected over the asynchronous distance.
  2. Periodic replication from within a pod on one FlashArray to another FlashArray. This pod may or may not be stretched across physical FlashArrays--being stretched, though, is not a requirement. These are volumes that exist in a pod that are periodically snapshotted and those snapshots are sent to a target FlashArray. The main difference between this option and the previous option (volumes that are not in a pod) is that these volumes are not tied to a physical FlashArray as the source--the pod, and therefore the volumes in it, can be moved from one FlashArray to another without reconfiguring SRM protection groups. SRM can then failover volumes from the source pod to a target FlashArray connected over the asynchronous distance. Array manager configuration for this replication type is no different from the previous one, so the two are treated the same.
  3. Continuous Replication from a FlashArray to another FlashArray. This is referred to as ActiveDR. Volumes are created in a pod and that pod is linked to a remote pod on a remote FlashArray. All data in the pod (either written to volumes or stored by snapshots) gets sent over to the remote pod and stored in distinct volumes and snapshots that maintain their relationships and configurations. ActiveDR pod relationships are represented by array pairs within SRM array discovery.
  4. Stretched storage. In this case, a volume is in a pod that is stretched over two physical FlashArrays. For this to work, the pod MUST be stretched. When a volume is stretched, the volume exists on two arrays and can be written to and read from simultaneously on both FlashArrays. In this configuration, there is no periodic replication, and there is no failing over of datastores. Instead, since the latest copy of the VMs on a datastore is always at both sites, an SRM failover just coordinates a restart of the affected VMs at the recovery site. There is no storage failover. If the sites are properly connected there may not even be a restart of the VMs; instead, a cross-vCenter vMotion is attempted to move the running memory and CPU state of the VMs from one vCenter to the target vCenter.

How array managers are configured dictates what type of failover is allowed. Follow through to the appropriate sections for information on configuring the array managers for your specific replication topology.

When Pure Storage FlashArray array managers are configured, one or more FlashArray addresses (along with credentials) are entered to provide the SRA with access to the REST API services on those FlashArrays. Array managers can be considered to be configured in pairs (one pair for a given FlashArray replication topology for each SRM server) but that isn't an entirely accurate view--this runs on the assumption that an array manager always has an equal and opposite array manager for the opposing paired SRM server. This is not always the case (though is the most common configuration). In short, for a FlashArray replication pair, one array manager must be configured to discover the array on one SRM server and another array manager must be configured to see the other FlashArray in that pair. In the case of two FlashArrays on site A that are both replicating to the same target FlashArray located in site B, there could be two array managers (one on each site) or three array managers (two on site A, one for each FlashArray there, and one on site B).

Volumes that are in multiple protection groups on a source FlashArray will very likely have issues when running SRM workflows. The recommendation is to only have the volume be a member of a single protection group. This is something Pure Storage is working to improve in a future release of the Storage Replication Adapter (SRA) for Site Recovery Manager (SRM).

FlashArray Array Manager Credentials

The FlashArray array managers require credentials for both the source and target arrays. All source (local) FlashArrays listed in a given array manager share one set of credentials, so those credentials must be valid on every specified source (local) FlashArray. Likewise, all target (peer) FlashArrays listed in a given array manager share one set of credentials, so those credentials must be valid on every specified target (peer) FlashArray. These credentials will be used in SRM during later parts of this guide.

clipboard_e1bc3c20f43182478ba5237107cb4a96e.png
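Because every array listed in a single address box must accept the same credential set, a quick pre-check against each array can save troubleshooting later. A minimal sketch, again assuming the purestorage SDK; the array list and credentials are hypothetical:

    # Verify that one credential set is valid on every FlashArray that will
    # share an address box in the array manager. Names are examples only.
    import purestorage

    local_arrays = [
        "flasharray-m50-1.purecloud.com",
        "flasharray-m50-2.purecloud.com",
    ]
    creds = {"username": "srm-sra-user", "password": "********"}

    for fqdn in local_arrays:
        try:
            fa = purestorage.FlashArray(fqdn, **creds)
            print(fqdn, "->", fa.get()["array_name"], "login OK")
            fa.invalidate_cookie()
        except Exception as exc:  # auth/connection failures surface here
            print(fqdn, "-> FAILED:", exc)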

The credentials need to have storage admin level authorization for every replication type except ActiveDR, which requires array admin level. They cannot be read only or ops admin. It is recommended to either use Active Directory or LDAP credentials, or to create a specific local user on the FlashArray for the SRA, for auditing purposes.

To create a local user, login to the FlashArray and click Settings > Users > Create User:

clipboard_e0dc8fbd72b3b22db2c846fb73c954442.png

Enter in the username and password and choose Storage Admin.

Note that if you plan on using ActiveDR failover, you must use ARRAY ADMIN level permissions. The failover process includes managing the state of an entire pod which requires admin-level permissions. For customers using ActiveCluster and/or Protection Group/Periodic replication, storage admin level permissions will suffice.

clipboard_ee20c62f4fb5bab503611802728bd466a.png

Repeat for each FlashArray and then use those credentials in SRM.

clipboard_e076e0d60ad5cb97bc827008d563a6516.png

Array Manager Configuration for Periodic Replication

The FlashArray offers asynchronous replication in a periodic fashion through a mechanism called Protection Groups. A FlashArray Protection Group is a consistency group that has a remote replication schedule that specifies a replication interval (how often a snapshot is created and sent to a remote FlashArray) and a retention policy (how long each replicated snapshot is kept on the remote FlashArray). FlashArray volumes in this scenario can host a VMFS datastore or a Raw Device Mapping (RDM). SRM can then fail over the datastores/RDMs from the source FlashArray to the remote FlashArray. The FlashArray hosting the volumes can be considered the source FlashArray and the FlashArray that is being replicated to can be considered the remote FlashArray.

This section does not cover how to configure array managers for protection groups that are inside of a pod. That will be covered in the next section.

Note that the frequency of replication and/or the retention policy has no direct bearing on SRM. Replication must be enabled on the protection group to allow SRM to discover the volumes as replicated--but no specific settings beyond that are required. It is important to note, though, that the more frequent the replication, the shorter the synchronization period during a failover and, more importantly, the shorter the RPO in the case of a disaster.
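If you want to confirm programmatically that replication is enabled on the protection groups the SRA should discover, a hedged sketch like the following can help. It assumes the purestorage SDK and that the schedule=True parameter passes through to the REST API (field names can vary slightly by Purity release); the address and token are hypothetical:

    # List protection groups with their targets and replication schedules so
    # you can confirm replication is enabled before SRM discovery.
    import purestorage

    fa = purestorage.FlashArray("flasharray-m50-1.purecloud.com",
                                api_token="<token>")  # hypothetical token

    # Base listing carries the replication targets of each protection group.
    targets = {pg["name"]: pg.get("targets") for pg in fa.list_pgroups()}

    # schedule=True is forwarded to the REST API to return schedule fields.
    for pg in fa.list_pgroups(schedule=True):
        print(pg["name"],
              "| targets:", targets.get(pg["name"]),
              "| replicate_enabled:", pg.get("replicate_enabled"),
              "| frequency (s):", pg.get("replicate_frequency"))

    fa.invalidate_cookie()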

clipboard_efc485208060d1f2e5805326bf0439e29.png

The above image is the view of the protection group on the source FlashArray (flasharray-m50-1). The protection group is always created and managed on the source FlashArray. As seen in the image, there is a protection group named srm-groupA (seen near number label 4) created on a FlashArray called flasharray-m50-1 (seen near number label 1). This protection group replicates to a FlashArray called flasharray-m50-2 (seen near number label 2). This particular protection group replicates any volume in it from flasharray-m50-1 to flasharray-m50-2 every 5 minutes (as seen near number label 3).

The protection group can also be viewed on the remote FlashArray. If you login to the remote FlashArray, you can see the "remote" view of the protection group srm-groupA as well. The remote protection group view shows the protection group name slightly differently as it shows the source FlashArray name as a prefix (followed by a colon) as seen near number label 2 in the following image. The FlashArray hosting this remote view of the protection group can be seen near number label 1.

clipboard_e7b653bacb75a62dc8c54140074761f6b.png

The next step is to configure the SRM array managers with the connection information to both arrays. Let's re-confirm the requirements:

  1. Have a replication connection enabled and healthy between your source and target arrays. This can be a synchronous or asynchronous connection--either is fine. For asynchronous, follow this guide starting on page 13. For synchronous, follow this guide for the FlashArray configuration. clipboard_e1e0a82844a8bb52e6675f7d1eb322af7.png
  2. Each SRM server should have TCP port 443 access to the virtual IPs of both FlashArrays (a quick reachability check is sketched after this list).
  3. Have at least one enabled protection group on the source array to the target array.
  4. A supported release of the SRA installed on both SRM servers. Both SRAs must be the same version. Pure Storage encourages the use of the latest available version of the SRA.
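A quick way to check the port 443 requirement is a few lines of standard-library Python (no SDK needed); run it on each SRM server, substituting your own FlashArray FQDNs for the hypothetical ones below:

    # Confirm this SRM server can reach TCP 443 on the virtual IP/FQDN of
    # both FlashArrays in the replication pair.
    import socket

    for fqdn in ("flasharray-m50-1.purecloud.com",
                 "flasharray-m50-2.purecloud.com"):
        try:
            with socket.create_connection((fqdn, 443), timeout=5):
                print(fqdn, "reachable on TCP 443")
        except OSError as exc:
            print(fqdn, "NOT reachable:", exc)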

Once configurations are confirmed, log into the Site Recovery Manager management interface.

These instructions are focused on the 8.2 release of SRM, so screenshots and exact step-by-step clicks may vary. The requirements and inputs do not change between releases, though, unless specifically noted.

clipboard_eff1d436fa7e1c2cb722b60b332bee1cd.png

Then click on View Details of the SRM pair being configured to find the array manager configuration interface.

clipboard_e4a8eb99551dace0e8ab5c524cc975a38.png

First confirm that the Pure Storage SRA is installed. Click on Array Based Replication -> Storage Replication Adapters.

clipboard_e64602e7e5000df3b4f69a556aef48211.png

Confirm that the status is OK. If so, click on Array Pairs and then the Add button.

clipboard_e65f2a15bf83bf63e020ae8b8bda4c340.png

In the window that appears, select the Pure Storage FlashArray SRA and then Next.

clipboard_e085a1fa3cc6505e18f5ed0a59e275bc8.png

In this wizard, array managers are configured for both the source SRM server and the target SRM server. The first step is usually the source SRM server. Since SRM is technically a bidirectional tool (and therefore there really is no such thing as a "source" and "target" SRM server, as each can be both at the same time), it is important to verify which server is being operated on. In the Local Array Manager step in the wizard, look at the top where it says "Enter a name for the array manager on <insert vCenter name>".

clipboard_ef597061d4b4eb28d9912a52794c451ac.png

In the above case, the vCenter is named "vcenter-01.purecloud.com". Verify which arrays are local to the vCenter reported there in the configured environment. In this case the array flasharray-m50-1 is local to this vCenter and the flasharray-m50-2 is connected to the other vCenter.

First, name the array manager something that makes sense. This one will be called "VC-m50-1". 

clipboard_e5068d8d6c62f59cfbbe806f8800c0a44.png

Next populate the connection information. Enter in the FQDN of the FlashArray (this maps to the virtual IP address of the array). IP addresses are also acceptable, but FQDNs are generally preferred.
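If you are unsure whether a FQDN is set up correctly, a one-line resolution check (standard library only; the FQDN below is hypothetical) can confirm it resolves to the array's virtual IP before you enter it:

    # Resolve the FlashArray FQDN; the result should be the virtual IP.
    import socket

    print(socket.gethostbyname("flasharray-m50-1.purecloud.com"))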

The recommendation is to create a special account for SRM interaction. This can be either a local account or through an external source like LDAP. All that matters is that it is at the user level of storage admin or higher.

clipboard_edc6a7e5a7ca946a986aeb0c23a024e02.png

Enter in the credentials and FQDN to the local array in the local array(s) entry form:

clipboard_e18a988d9236a145c58f80157d1038d11.png

Note that starting with SRA release 3.1, more than one array can be entered in the local arrays address box. Each additional array is entered via comma separation. The requirement, though, is that the same credentials are valid for each array. If the arrays have different credentials, a separate array manager pair will need to be created.

clipboard_e8bb46900777838047e54242e918f331b.png

Once the local array has been added, add in the connection information for the peer array. Towards the bottom of the screen, enter in the target information for the peer array in the section called "The peer Array(s)".

clipboard_ebe7d9501cf77db8734a23ed70efe0ca4.png

In this case flasharray-m20-1.purecloud.com has been entered as the peer FlashArray. This represents the following replication connection listed on flasharray-m50-1.purecloud.com:

clipboard_e751d2feb21164f13c6b1c77431e4984d.png

The full local array manager looks as follows:

clipboard_ef88985c32321297de2d5b14385eb8a53.png

Confirm the details and click Next.

Now for the screen labeled Remote array manager, enter in the reverse details as compared to the local array manager. This will enter in the array connection information for the remote SRM server. Confirm the listed vCenter near the top and give the array manager a name:

clipboard_e9963c178f390f7bb54376c55a78d3f11.png

Next, populate the array connection information. The local array(s) should be what was listed as the peer in the previous array manager, and the peer array(s) should be what was entered as the local array.

clipboard_e2f542b562ac7b256200fb7aea3b0b8cb.png

When done, click Next.

The next step will list discovered array pairs. The pairs that are listed will be the FlashArrays that have replication connections from the arrays entered. In the case above, the FlashArray flasharray-m50-1 was entered on one site and the FlashArray flasharray-m20-1 on the other.

An important point to understand is that there really isn't such a thing as a remote array manager. All array managers are local array managers. The key is a given array manager is LOCAL to a certain SRM server. That same array manager in reference to the other SRM server in that pair is REMOTE. In other words, an array manager is local to one SRM server and remote to the other one.

These two array managers allow the SRA to see what arrays are available on either side. Since a local array manager was found for both arrays, the pairing of the two is valid.

clipboard_ee1125090d519974d17f5271d7b622eda.png

Array pairs that have identified a local array manager on each of the two SRM servers will be shown as Ready to be enabled. The array pair discovery process will also find array pairs involving the specified arrays that are listed as No peer array pair. This means that the SRA found other arrays that one or both of the specified arrays are replicating to, but it did not find an array manager that is configured with that array as a local array. For an array pair to be considered a valid array pair, both arrays must be configured as a local array in separate array managers.

Enabling an array pair means that SRM device discovery will occur between those arrays, which lists what volumes are replicated and suitable for SRM control. If you do not want an array pair's volumes listed in SRM, do not enable that pair.

Valid array pairs will be selected. If the array pair should be enabled, leave it selected. If there are pairs that are valid but, for whatever reason, should not be enabled, it is safe to deselect them (they can be enabled at a later time). If there is an array pair that is not listed as ready to be enabled but should be, verify that both arrays in the pair are listed as local arrays in the array manager on their respective SRM servers.

Click Next.

clipboard_e493b52a3234ca5c6192805e4f17e16fe.png

Verify the information and click Finish when ready.

The selected array pairs will be enabled and will be listed in the Array Pairs screen.

clipboard_e2a961aa27ad537d5460ccf390df8b54c.png

This will list the source/target array pairs and the corresponding array managers that control each side of the replication. 

Array Manager Configuration for Pod-based Periodic Replication

On the FlashArray, there is an object referred to as a pod. A pod can be defined in many ways, but the simplest explanation is that a pod is a unique namespace. Within this namespace you can create volumes, protection groups, and snapshots whose names do not have to be globally unique.

clipboard_e2eabcfdd3705c8b2420d66f2af4c0204.png

A pod is created by logging into a FlashArray and simply creating one--the only input required is a name:

clipboard_ea82bd90e99a4172822552c861d5fe735.png

Once that pod is created, objects can then be created within it.

clipboard_e8a65db1b45a4c85bece6dc6519bb0ad2.png

So a simple question is, why do I need a pod? Why not just create volumes with no pod? Well, an important part of a pod is that it is not only a unique namespace, but a mobile namespace. It is a namespace that can be moved non-disruptively from one physical FlashArray to another. It is not forever tied to where it was initially created. This allows a user such as yourself to move a group of volumes and their resources (snapshots, asynchronous replication groups, etc.) to another FlashArray as needed.

A pod is moved between arrays through a process called stretching. "Stretching" a pod means making that pod and all of its internal resources (volumes, snapshots, protection groups) available on a 2nd array simultaneously. When a pod exists on two arrays at once, all of the volumes can also be written to and read from on both FlashArrays at once. This configuration is referred to as ActiveCluster. ActiveCluster is the FlashArray term for active-active synchronous replication.

A pod can then be unstretched from one array, which effectively moves the pod to a new array. It can then be stretched back to the original array or a completely different array.

So a basic process around pod management might be (a pod-membership check is sketched after this list):

  1. Create a pod on FlashArray A.
  2. Create a volume called myVolume in that pod.
  3. Stretch the pod to FlashArray B. The pod and the volume named myVolume now exist on two arrays, FlashArray A and B. This is now an ActiveCluster configuration and the pod and its volumes can remain in this state indefinitely. Volumes in an ActiveCluster state have higher resiliency because the volumes remain available even if an entire FlashArray fails.
  4. Unstretch the pod from FlashArray A. The pod now only exists on FlashArray B. So the volume myVolume has now been non-disruptively moved from FlashArray A to B. This effectively disables ActiveCluster on the volume myVolume.
  5. Stretch the pod to FlashArray C. This now makes the volume myVolume available on FlashArray B and C at the same time--re-enabling ActiveCluster but with a slightly different pair of FlashArrays (B and C instead of A and B).
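Because a pod's array membership can change over time, it can be handy to check programmatically which FlashArrays a pod currently spans. A hedged sketch, assuming the purestorage SDK exposes list_pods() (it requires a reasonably recent Purity/REST version, and the exact field names may vary); the address and token are hypothetical:

    # Report each pod and whether it is currently stretched or unstretched.
    import purestorage

    fa = purestorage.FlashArray("flasharray-a.example.com",
                                api_token="<token>")  # hypothetical

    for pod in fa.list_pods():
        arrays = [a.get("name") for a in pod.get("arrays", [])]
        state = "stretched" if len(arrays) > 1 else "unstretched"
        print(pod["name"], "is", state, "on:", ", ".join(filter(None, arrays)))

    fa.invalidate_cookie()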

Periodic Replication from a Pod

Furthermore, the protection provided by pods and ActiveCluster can be complemented with periodic replication over great distance. As of Purity 5.3.x, ActiveCluster supports arrays at a distance of up to 11 ms RTT--which may not be enough distance to put both FlashArrays out of the blast radius of a major disaster (typhoon, hurricane, etc.). Therefore, starting with Purity 5.2.x, it is possible to replicate the volumes in a pod (stretched or not) to a third FlashArray by creating a periodic replication-enabled protection group in the pod, and then putting the desired volumes in that pod into the protection group as well. This will replicate snapshots of those protected volumes to another array on a specified interval.

clipboard_ee4be2156bc40a17b0be57d502cc07182.png

Periodic replication can protect either stretched pods or standalone pods.

For more information on ActiveCluster, please see the following page:

ActiveCluster with VMware User Guide

Your next question might be "Yeah, cool, but what does this have to do with array managers in SRM?". Good question, anonymous reader. 

The characteristic that a pod is not tied to a specific physical array is something we did not want to break, or more accurately, it is a behavior that we didn't want to unnecessarily restrict within SRM. Traditionally, when FlashArray pairs were discovered in SRM array managers, the FlashArray SRA would return physical array pairs (e.g. FlashArray A replicates to FlashArray B). If the SRA returned volumes replicated from a pod under that physical array pair, it would require the containing pod to never be moved. If the pod was moved (unstretched then stretched) to a new array pair, SRM would not be able to understand the change (changing which array pair owns a volume is not a workflow that SRM supports) and a reconfiguration of SRM would be required--likely breaking disaster recovery abilities until resolved. This is less than ideal.

To avoid this dissonance, the FlashArray SRA version 3.1 and later returns pods as potential replication sources for array pairs. So a pod is the source and a remote physical FlashArray is the target.

Note that currently (as of Purity 5.3.x) a pod cannot be a target for periodic replication--it can only be a source. Periodic replication (snapshot-based replication managed by a protection group) always replicates to the "root" of the array.

So in the below example, we have a pod named podSRM (seen near number label 2) currently residing on a FlashArray called flasharray-m50-1 (seen near number label 1) with a protection group in it called replicateto3rdSite (seen near number label 3). This protection group replicates to a FlashArray called flasharray-m20-1 (seen near number label 4).

clipboard_e52224cca091120c7c22c55f313621e9e.png

Since this pod has a replication relationship, the SRA will discover the pod as a source "array" and the target physical FlashArray as the target:

clipboard_e856f20207243ac1c4c1f99506dab420d.png

Near number label 1, the flasharray-m50-1 (a physical FlashArray) to flasharray-m20-1 (a physical FlashArray) replication pair can be seen.

Near number label 2, the podSRM (a pod) to flasharray-m20-1 (a physical FlashArray) relationship can be seen.

So volumes that are in asynchronous replication-enabled protection groups that are NOT in any pod will be in the "physical" array pair. Volumes that are in protection groups in the pod podSRM will be listed under the "pod to array" array pair.

Configuring the array managers for this is very similar to non-pod-based periodic replication. In the local array manager, enter the source array, and for the peer, add the array that is the failover target. Where this differs slightly is for pods that are stretched across two arrays.

Let's walk through both scenarios.

Configuring Array Managers for Periodic Replication from an Unstretched Pod

An unstretched pod is a pod that is currently on only one FlashArray at the current point-in-time. A stretched pod is a pod that is on two FlashArrays at the current point-in-time.

Another name for this configuration is a "local" pod--though I am not a fan of this terminology. It seems to imply that there is a fundamental difference between a local pod and a stretched pod, and more specifically implies there is a "type" of pod, which could be misleading. There are no pod types, just current pod states. Therefore, I will use the term unstretched pod or stretched pod when necessary. The standalone term "pod" will be used when the fact that a pod happens to be stretched or unstretched makes no difference to the statement.

To configure an array manager for a pod, identify which FlashArray the pod currently sits on, and also the FlashArray that the pod-based protection group replicates to.
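One way to identify both pieces of information at once is to list the protection groups that live inside pods, since the REST API typically reports pod-based protection groups as <pod>::<pgroup>. A hedged sketch assuming the purestorage SDK and that naming convention; the address and token are hypothetical:

    # Find pod-based protection groups and the arrays they replicate to.
    import purestorage

    fa = purestorage.FlashArray("flasharray-m50-1.purecloud.com",
                                api_token="<token>")  # hypothetical

    for pg in fa.list_pgroups():
        if "::" in pg["name"]:  # pod-qualified protection group
            pod = pg["name"].split("::", 1)[0]
            print("pod:", pod, "| pgroup:", pg["name"],
                  "| targets:", pg.get("targets"))

    fa.invalidate_cookie()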

Below I have a pod called srmPod, which is currently only on a FlashArray called flasharray-m50-1.

clipboard_ed4998c8069753e5044e2568d75b8045f.png

This pod also has a protection group called srmProtectionGroup:

clipboard_e902f6277014df8c85ae618da564218e6.png

This protection group replicates every 10 minutes to a FlashArray called flasharray-m20-1:

clipboard_e68afc353b8c3595d2a7994616aac93ca.png

In SRM, go to create a new array manager:

clipboard_e0137fcace6845a846da59d6fb4ff83fd.png

Confirm the correct SRA version is installed (latest available is generally recommended).

clipboard_e3f607069de5c0779589b25cdee0fa8a3.png

Name the local array manager something descriptive. This one will be for communication to the FlashArrays local to my vCenter called vcenter-01, so I will name it vc01-local. The FlashArray (flasharray-m50-1) hosting my pod is local to vcenter-01, so that will be the address entered in for the local array. The target array (flasharray-m20-1) will be entered as my peer array.

clipboard_e04d909cbc98844f85cbeff1a694e32c1.png

Click Next. Now do the opposite FlashArray configuration for the target vCenter. In my case the target vCenter is called vcenter-02.

clipboard_ee76b00689f9517b2a6fd9b0ff5ee10f9.png

Click Next. You will see discovered array pairs in the next screen. SRM will automatically select array pairs that it can immediately enable. Select and de-select as needed. You can also enable or disable array pairs at a later time. If you have no intention of using a specific array pair, the suggestion would be to not enable it. This will shorten device discovery by not having the SRA query unneeded array pairs for replication details.

In my case I will keep on the array pair that reflects my pod to array replication pair (srmPod to flasharray-m20-1):

clipboard_e5ec90c8549c5eb3e38b1c95a0b58aa5f.png

If a discovered pair shows up as "No peer array pair" that usually means SRM could not identify an array manager on the opposing SRM from which the array was discovered for that particular FlashArray (or pod). If you would like to enable that array pair, ensure that the address of the FlashArray hosting the missing array is added as a local array on the opposing SRM server.

Click Finish.

clipboard_e956196ed522b1eac901ee9ce28e7219b.png

All pods on the source and target arrays will be discovered as arrays, whether or not they have a protection group in them replicating to another array.

Configuring Array Managers for Periodic Replication from a Stretched Pod

The configuration of array managers to support failover from a stretched pod is almost identical to configuration for an unstretched pod, with one major exception: there are now two local arrays, not one. So both need to be specified in the appropriate place in the local array manager and the remote array manager.

While it is not technically required to specify both FlashArrays that host a stretched pod in the array manager, it is advisable to do so. If only one array is registered and that array fails, the SRA will not be able to perform a planned migration; an SRM disaster recovery operation will need to be started instead. This will still result in a successful failover, but the source side will not be fully brought down, which will then require eventual manual cleanup. Therefore, specify both FlashArrays in the array manager to provide the most resilient experience.

Below there is a pod named srmPod that is stretched across two FlashArrays, flasharray-m50-1 and flasharray-m50-2:

clipboard_efdfc2544dad52c334259d11bc061f550.png

This pod has a protection group called srmProtectionGroup that replicates to a 3rd FlashArray called flasharray-m20-1 every 10 minutes:

clipboard_e0627e6a70f4fbb517e56187aef25f9b8.png

Therefore, flasharray-m50-1 and flasharray-m50-2 should be seen as the local array manager sources and flasharray-m20-1 should be seen as the local array manager peer. So log into Site Recovery Manager:

clipboard_e222a3993dcf8e4562ff8881215ea73cc.png

Click Add. Ensure you are using the 3.1 or later release of the SRA in the first screen. Click Next.

clipboard_e7fbcef000086ef55a88b6a0e7c91e914.png

For the local array manager, double check you are configuring the array manager for the correct side of the vCenter pair. To make things simpler, it is best to be on the SRM server that is paired with the vCenter that currently hosts the datastores you want to protect (the one that has access to the volumes in the pods). You can verify the vCenter name at the top of the Local array manager window:

clipboard_e8c0991d020e513cb035314bee423e81a.png

vCenter-01 is the vCenter that has access to my stretched-pod datastores, so I am on the right side. If you are logged into the opposite side, you can still follow these instructions, just do this in reverse order (3rd array as local first, stretched pod arrays as peer, then the opposite).

In my local array manager (which I will call vc01-local), I will put in my flasharray-m50-1 and flasharray-m50-2 FQDNs in the local array address, and the remote FlashArray (flasharray-m20-1) FQDN in the peer array address.

Currently all arrays specified in a single address entry (local or peer) must share the same credentials. In other words, while the local array(s) can have one set of credentials and the peer array(s) can have different credentials, all arrays specified in local arrays must have the same credentials, and all arrays specified in the peer arrays must have the same credentials. So in the example below, flasharray-m50-1 and flasharray-m50-2 must have the same credentials, while flasharray-m20-1 can then either also be configured with those same credentials or can have its own unique credentials.

clipboard_e81568dbcd3dae8be4c7f8551c6929c2b.png

When complete, click Next.

Now configure the array manager that is local to the other vCenter in the pair. In the below case, my "target" vCenter is called vcenter-02.purecloud.com.

clipboard_e25c00877e42c7d1d9a56b78cb1c24d53.png

I will call this array manager vc02-local.

clipboard_e4a6c4a35a92c2b5b0c1c518bacba513f.png

For the local array(s) address entry I only need to enter in my 3rd array, the one I am periodically replicating to from my stretched pod, which in my case is the FQDN of the FlashArray named flasharray-m20-1.

clipboard_ecd40c70ad3143e20388f027964f16d7a.png

For the peer arrays, I will enter the FQDNs of the arrays that host the stretched pod, the arrays named flasharray-m50-1 and flasharray-m50-2. Add in the respective credentials for both the local and peer connections.

Click Next.

The array managers will be configured in SRM and identified array pairs will be listed. SRM will automatically select array pairs that it can immediately enable. Select and de-select as needed. In this case, select the pod and target FlashArray pair to be enabled (and any other pair you would like to enable).

In my case I will keep on the array pair that reflects my pod to array replication pair (srmPod to flasharray-m20-1).

clipboard_edb114bf717a44e886c7060ffa101d067.png

You can also enable or disable array pairs at a later time. If you have no intention of using a specific array pair, the suggestion would be to not enable it. This will shorten device discovery by not having the SRA query unneeded array pairs for replication details.

Confirm the selections and click Finish.

clipboard_e38f5bd83487d8cc596c74ec0a53a9bd4.png

Multi-Array to Multi-Array Replication Configuration

Prior to the release of the FlashArray SRA version 3.1, multi-array replication topologies (fan-in or fan-out) required the use of more than two array managers (one at each site for each array--this blog post details the setup of this). The main reason for this is that the FlashArray SRA did not allow for more than one target FlashArray address in a single array manager.

In the 3.1 release of the SRA and later, this is no longer required. All arrays can be added to a single array manager pair.

Fan-in or Fan-out

Functionally there is no difference between fan-in and fan-out (it is a matter of perspective). It is also important to remember that an array pair does not dictate replication direction--an array pair can have devices replicating in both directions at a given time. So whether your configuration has many arrays in the source site or many arrays in the target site does not change how the array managers are configured. If the arrays are local to that site, put them as local in the local array managers. If they are remote, put them in as peers. Then do the same with the opposing site's array manager: for the arrays that are local to that site, put their addresses in as local. For the ones that are remote, put their addresses in as peers.

In the case of one array (array1) on Site A which replicates to two arrays (array2 and array3) on Site B, your array managers would be configured like so:

Site A:

clipboard_e8d988a0b14a65b25d5f354ba9e2ec853.png

Site B:

clipboard_e4405907a145b5b5df4b6b7243e58f4a4.png

Note that they are opposites: one array as local on site A with two as peers, and two arrays as local on site B with one as a peer.

The process is the same for many to one as well (two in site A as local and one in site B as local).
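To make the mapping concrete, the fan-out example above can be expressed as simple data (illustrative only; the array and manager names are from this example or hypothetical):

    # The fan-out example: array1 on site A replicating to array2 and array3
    # on site B, expressed as each site's array manager entries.
    array_managers = {
        "siteA-manager": {"local": ["array1"], "peer": ["array2", "array3"]},
        "siteB-manager": {"local": ["array2", "array3"], "peer": ["array1"]},
    }

    for name, cfg in array_managers.items():
        print(name, "| local:", cfg["local"], "| peer:", cfg["peer"])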

Many-to-Many Replication

Many-to-many replication is essentially identical to the fan-in or fan-out array manager configuration. The only arguable difference is that both sites have multiple arrays--but the basic tenet remains the same: if the arrays are local to that site, put them as local in the local array managers. If they are remote, put them in as peers. Then do the same with the opposing site's array manager; for the arrays that are local to that site, put their addresses in as local. For the ones that are remote, put their addresses in as peers.

In the case of two arrays (array1 and array2) on Site A which replicate to two arrays (array3 and array4) on Site B, your array managers would be configured like so:

Site A:

clipboard_e9c422162885b3ee4ee59817da1e32748.png

Site B:

clipboard_ecd6431f9d7dab68e844d524074a96ac0.png

Array Manager Configuration for Pod-based Continuous Replication (ActiveDR)

Standalone pods can also be protected by continuous asynchronous replication, which on the FlashArray is called ActiveDR.

ActiveDR links two distinct pods and sends data to the target as quickly as it can--achieving a much lower RPO than periodic asynchronous replication. With Purity 6.0 and the 4.0 release of the SRA, ActiveDR protection of a pod is supported for control within SRM.

clipboard_e482e934b9d3a889a9baa7e75e5cce352.png

For more information on implementing ActiveDR with VMware, see the following:

ActiveDR with VMware User Guide

A difference between Continuous Replication from a Pod (ActiveDR) and Periodic Replication from a Pod (Protection Groups) is that ActiveDR replicates from one pod to another. The target is not the root (non-pod) part of a FlashArray. So when you configure array managers, ensure that the FlashArray hosting the source pod is configured as the local array and the FlashArray hosting the target pod is listed as the peer array.

Below is an example ActiveDR configuration. Two pods, activeDRpodA and activeDRpodB, are hosted on FlashArrays flasharray-m20-1 and flasharray-m20-2, respectively.

clipboard_e085c0e8e506eaa04edfed63046c888bc.png

To configure this relationship in SRM, create a new array manager if one does not already exist between those FlashArrays. If this pair is already configured, skip ahead to enabling the array pair.

clipboard_e09f37f45373901fb87392c8081617242.png

Ensure that the 4.0 or later Pure Storage SRA is installed:

clipboard_e712a3b3b5d888927e29ff91d512e4032.png

Then provide a friendly name for the array manager, enter the source FlashArray address into the local array field, and enter the remote FlashArray address into the peer array field:

Ensure you are creating array managers on the correct vCenter/SRM server. Enter local arrays that are local to that vCenter:

clipboard_ed2ec347124c931057a4cbdbef2954f07.png

clipboard_eeec2a6ba08c5eb0754444eebcb5253c4.png

For ActiveDR management, the entered credentials must be array admin level credentials, because pod state manipulation is considered an administrative change.

Click Next.

Enter in the information for the remote manager with the information in reverse:

clipboard_ece145433f236a2f81cdb5c347a35ceaf.png

Click Next.

The next screen shows all found array pairs and will auto-select pairs that are eligible to be enabled (the arrays on each side of a pair must be discovered locally by array managers configured on opposite SRM servers).

clipboard_e5bc92173177966a1955636beb13d04bf.png

You do not need to enable every pair. Array managers will list all physical FlashArray pairs, pod to physical array pairs, and ActiveDR pod to pod pairs. In this case, I will only enable the pod pair I want SRM to control by deselecting the others. Other pairs can be enabled at any point in the future.

clipboard_ef227d73502d59163703b397418814d8a.png

Click Next, confirm the selected pair(s) to be enabled, and click Finish.

clipboard_ed176c049e94deed601a972edb50e8098.png

This will invoke a discovery of that pair, and all volumes in the pod will be returned:

clipboard_e7d219455d3acbea2e2964fe423096870.png

Array Manager Configuration for Stretched Storage Support

Traditionally, SRM protected VMs that were on datastores with active/passive array based replication. Therefore a failover meant removing the VMs entirely (shutting them down and unregistering them) and bringing up copies of the datastores on the remote array. In this scenario the source side and the target side had different datastores on different storage devices but the arrays replicated data between them so they could be used to fail VMs back and forth.

In Site Recovery Manager 6.1, VMware introduced support for managing virtual machines that are protected by "stretched" storage. In other words, managing VMs that are on VMFS datastores that are active on two arrays at once. Unlike active/passive, "stretched" storage is active/active. Both sides have the exact same copy of the data at the same time, and therefore the exact same datastore is available on both sides at once.

The Pure Storage FlashArray SRA has supported this configuration since the 2.x release of the SRA.

In order to allow for the discovery of stretched devices, you need to configure the SRM array managers in a specific way.

To use the Site Recovery Manager support of stretched storage, you need the enterprise license of SRM. The standard edition does not include this feature.

In the example below, I have a volume hosting a VMFS in a pod called srmPod. This pod exists on two FlashArrays, flasharray-m50-1 and flasharray-m50-2, so the volumes in that pod are stretched.

clipboard_e0cc86716b763eea98e53ad92102a6bdd.png

Login to SRM and create a new array manager. It is important to know which FlashArray is local to which vCenter. In the example environment, flasharray-m50-1 is local to vCenter-01 and flasharray-m50-2 is local to vCenter-02.

In the array manager configuration, verify which vCenter you are looking at first:

clipboard_ebe7228262a8aeb1f3a0a7ab7cbe70780.png 

I am configuring the array manager on vCenter-01 first, so my local array will be flasharray-m50-1. I will name the array manager m50-1.

I will enter the FlashArray flasharray-m50-1 address and credentials in the local array:

clipboard_e232bde0c9063b88c8da207630ba6e569.png

and then flasharray-m50-2 in the peer array:

clipboard_e44c575a9f28fcb14ff6041e25e1b26cf.png

For the remote array manager in vCenter-02, I will do the opposite. Verify the vCenter and give the manager a name:

clipboard_ee939359b0fb4389384b97e7055e5be09.png

Now add the array local to that site:

clipboard_ea259d935a73c256965be31c2024b971b.png

And then the peer:

clipboard_ec3e83086487c64a4d00d342c7a1ff6ab.png

The wizard will then show all discovered array pairs.

clipboard_e612b60dcf587d5b6890b91fd2a1cbe3f.png

Enable the pair going from flasharray-m50-1 to flasharray-m50-2 (in this example).

Note that pairs with pod names are not relevant to this type of SRM protection. Pod-based pairs are for failing volumes OUT of a pod. This type of failover simply fails VMs from one vCenter to another without moving the data--it stays in the pod. Essentially, the VM I/O to the pod volumes fails over from one FlashArray front end to the other.

If configured correctly, the array pair will enable and discovered devices will appear.

clipboard_e9db6e9c2e8bce606a22f3e6a8f20da44.png

Enabling or Disabling Array Pairs

In SRM, array managers beget arrays. Arrays then beget array pairs. Array pairs then beget replicated devices. In order for replicated devices to be discovered, you need to enable the array pairs that participate in that replication.

Enabling an Array Pair

In order for an array pair to be enabled, one array must be locally discovered by an array manager on one SRM server in an SRM pair, and the other array in that replication pair must be discovered by a separate array manager on the opposing SRM server. If both arrays are discovered as local to the same SRM server, the pair cannot be enabled. If one of the two arrays is not discovered at all, then the pair cannot be enabled. Finally, there also must be replication between both arrays. If replication is not enabled between FlashArray A and FlashArray B (even if the array managers are properly configured), the replication pair cannot be enabled, because, well, there is no replication.
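The replication-connection requirement can be checked from the source array before expecting SRM to enable a pair. A hedged sketch, assuming the purestorage SDK's list_array_connections() (the health field name differs between Purity releases, so both common variants are read); the address and token are hypothetical:

    # List the array's replication connections and their health.
    import purestorage

    fa = purestorage.FlashArray("flasharray-m50-1.purecloud.com",
                                api_token="<token>")  # hypothetical

    for conn in fa.list_array_connections():
        # Older REST versions report "connected" (bool), newer "status".
        health = conn.get("status", conn.get("connected"))
        print(conn["array_name"], "| type:", conn.get("type"),
              "| status:", health)

    fa.invalidate_cookie()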

Take the instance below:

clipboard_e17c3c7a784578c880efca480b12e31f8.png

When selected, it cannot be enabled:

clipboard_eb72bd0bcb2a8f8456af52e2b2edd3e5e.png

The pair flasharray-m50-1 to flasharray-m20-2 is an identified replication pair, but cannot be enabled. Why? Well if you look at the Array Manager Pair column, only one SRM server (or more accurately only one array manager) can find that pair. The other SRM server does not have an array manager that sees that pair too. Therefore, you need to either create an array manager on that SRM server that has access to flasharray-m20-2, or make sure an existing array manager can access it.

Note that the array in an array pair that does not have a corresponding array manager will also display the array serial number next to the name in the listing. 

If a given array pair is seen on both sites (indicated by the Array Manager Pair column listing two array managers), it can be enabled.

clipboard_e05e5c37512d1eed007dadc8e96c30485.png

Select the pair, click on Array Pair, and click Enable.

clipboard_e98d0bbc67e1a36ffdfb7552fa83b1008.png

Once enabled, the pair will show as enabled and the SRA will discover all devices in that replication pair.

clipboard_e147a99d245daa63bba76faa35eb4d022.png

The next question is likely: "Should I enable all discovered array pairs?". In general, the recommendation is to enable just the array pairs that you need. Each pair that you enable will cause discoveries to occur. If you do not plan on using any devices in that pair, there is no reason to enable it--it just adds unnecessary SRM discovery time.

Disabling an Array Pair

If an array pair is no longer needed (permanently or temporarily), you can disable the array pair. This will ensure no recovery plans are built on top of that pair. For an array pair to be disabled, though, it must not be in use. An array pair is considered to be "in-use" if there are any replicated devices discovered from that pair in an SRM protection group.

A disable operation will fail and return the below error if the pair is in-use:

clipboard_e5027b3741617e3eb3f5f913392239a56.png

You can verify a pair is not in-use by selecting it and then verifying that no protection groups are listed in the Protection Group column of the Discovered Devices table:

clipboard_e00c251f8c39ca643245195e0908bf268.png

If that is clear, it is safe to select the array pair, click the Array Pair drop down and choose Disable.

clipboard_e57b141b0066d07b35c841c322df85aeb.png

Changing the FlashArray Membership of a Pod

Part of the design around pod support in the FlashArray SRA is to allow for the easy and non-disruptive migration of a pod from one physical FlashArray to another without breaking recovery plans. Moving a pod can be considered any one of the following operations:

  1. Stretching a currently unstretched pod to a second FlashArray
  2. Unstretching a stretched pod from a FlashArray
  3. Stretching a currently unstretched pod to a second FlashArray AND then unstretching from the original (moving a pod).

In all of these cases, the advice is the same. Prior to making any kind of ownership changes to a pod, you should make sure that the FlashArrays are all already configured in the SRM array managers as mentioned above.

Stretching a Local Pod to a Second FlashArray

Stretching a pod to a second FlashArray means making the volumes (and other objects) that are in that pod available also on a second array. For Site Recovery Manager, it is recommended to make sure that the second array is added into the array managers before stretching the pod.

You can either stretch the pod first and then add the second array to the array managers or you can add the second array to the array managers and then stretch the pod to it.

It is generally recommended to do the second option (stretch last). This way you can be sure that SRM can still manage the pod if one of the arrays hosting it fails, and if your first step is to update (and/or verify) the array managers prior to a stretch, you are less likely to forget to do so.

So take the example of a pod called testPod, which is on a FlashArray called flasharray-m50-1.

clipboard_e60863684d158c0abac2534b981bbc12d.png

This pod has a protection group called testPG that replicates to flasharray-m20-1:

clipboard_e626946e34f045f1dce44209051a34abc.png

SRM discovers this array pair as well as the VMFS that is replicated in it:

clipboard_e58b0d34715156b5da2e2209cab3343ed.png

I now want to stretch this pod to also exist on the FlashArray called flasharray-m50-2. Before I do that I need to ensure my array managers are configured appropriately.

Select the existing array pair, and choose Edit Local Array Manager from the Array Manager Pair drop down:

clipboard_ea76cf43b6219e3da225520f49f86a3b2.png

If flasharray-m50-2 is not listed in the local array(s) address input, add it:

clipboard_e82a791fe065775b439d2cceccb0fa263.png

Click Save. Now edit the remote array manager:

clipboard_e8ebae05208bb81797edf5b3cd0cf9327.png

Make sure flasharray-m50-2 is added in the peer array(s) address input:

clipboard_e76f16ef29a0fad93fd8768f932c01710.png

If it isn't, add it and click Save.

Now you can stretch the pod to flasharray-m50-2:

clipboard_e3efe9c9369e0cd9ab66eb5f1ad4ad808.png

 

Unstretching a Pod

If you have followed best practices there is nothing required inside of SRM to unstretch a pod. With that being said, mistakes happen and it is important to be sure.

Before unstretching a pod you should be sure of two things:

  1. The array that will still own the pod is configured in SRM array managers. This will ensure that SRM can still control the pod now that it is only on one array.
  2. The array that will still own the pod has existing and valid connections for all of the volumes in use. This will ensure that the VMware environment will still have storage access to the volumes when the one FlashArray is removed from the pod.

So the first step is to verify array manager configuration. 

In this example I have a pod called testPod stretched across two FlashArrays, flasharray-m50-1 and flasharray-m50-2:

clipboard_ed48443e12972149c70e9739f2e0f2bab.png

I want to remove it from flasharray-m50-2. In SRM, click on the array pair for testPod and choose Edit Local Array Manager from the Array Manager Pair drop down.

clipboard_e1259baf3c9c98ae02de15d0b7227f860.png

Since I will be removing flasharray-m50-2 and the pod will remain on flasharray-m50-1, I want to make sure the address of flasharray-m50-1 is listed. If it is not, add it now:

clipboard_e9f70e4e7b86a9e1ccfe87239d3bd1b65.png

If you have made any changes, click Save.

Now verify it is listed in the remote array manager. Select the pair, then the Array Manager Pair drop down and then Edit Remote Array Manager:

clipboard_e505284fde6ba07763d7266b33290886f.png

Same as above, ensure that the eventual surviving owner (flasharray-m50-1 in this case) is listed, but this time in the peer array(s) address entry:

clipboard_e10dfb38e9c10f8d622959f5339c9b622.png

If it is not there, add it and click Save.

The next step is to verify storage connectivity. This is standard procedure for any unstretch operation. The FlashArray will not let you unstretch from a FlashArray if the volumes in that pod have active connections to one or more hosts on that particular FlashArray. So any volume connections on flasharray-m50-2 should also exist to those same hosts on flasharray-m50-1. Below I currently have one volume in the pod (repeat this process for all of them):

clipboard_eec85dfa1142451f6730e72f779fdafa1.png

Click on the volume.

Then go to the Connected Hosts and Connected Host Groups. In the vertical ellipsis menu, click Show Remote Connections:

clipboard_e8a484063492b1ce365a2393cc2738c19.png

This will show the host(s) or host group(s) (depending on what box you are in) connections on both FlashArrays in the pod:

clipboard_e67d5703bd17c51885ab1a6ecad58f10a.png

Verify for every connection on the FlashArray you want to remove that there is a corresponding connection on the other array. In the case of host groups, ensure that the hosts in the host groups are the same (verify by clicking on the host group, then the hosts, to check the initiators).
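At scale, this comparison is easier to script than to eyeball. A hedged sketch, assuming the purestorage SDK methods shown exist on your REST version; the pod volume name is hypothetical and the array FQDNs are from this example:

    # Compare a pod volume's host and host group connections on the array
    # keeping the pod versus the array being removed from it.
    import purestorage

    VOLUME = "testPod::myVolume"  # hypothetical; pod volumes are <pod>::<vol>

    def connected_hosts(fqdn):
        fa = purestorage.FlashArray(fqdn, api_token="<token>")  # hypothetical
        conns = {(c.get("hgroup"), c.get("host"))
                 for c in fa.list_volume_shared_connections(VOLUME)}
        conns |= {(None, c.get("host"))
                  for c in fa.list_volume_private_connections(VOLUME)}
        fa.invalidate_cookie()
        return conns

    keeping = connected_hosts("flasharray-m50-1.purecloud.com")
    removing = connected_hosts("flasharray-m50-2.purecloud.com")

    # Every connection on the array being removed should also exist on the
    # array keeping the pod.
    missing = removing - keeping
    print("Connections missing on the surviving array:", missing or "none")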

So, for example, the host called esxi-02 in host group MountainView on flasharray-m50-1 is the same as the host esxi-02 in host group MountainView on flasharray-m50-2:

flasharray-m50-1:

clipboard_e645c6766502fff24c489d136ffa1675d.png

flasharray-m50-2:

clipboard_eb061a4fc7dc0b6bcdf735733cb9d9ba7.png

To be extra sure, verify that the hosts themselves see all of the paths as live by looking at the FlashArray GUI on the array you want to keep the pod on. Click on Health > Connections and then the host name:

clipboard_e472d48c4d4ae747ad3396e285c9674a7.png

Confirm that the host has at least two connections to both CT0 and CT1 that are marked as green. If one or both controllers do not have green connections, verify zoning (for Fibre Channel) or ESXi host configuration (for iSCSI).

Once confirmed, you can safely unstretch the pod.

clipboard_edef3a8af8aa8f6d09c1d4d3813db627d.png

Moving a Pod to a different FlashArray

Let's take the case of a pod called testPod that is currently hosted on a FlashArray called flasharray-m50-1:

clipboard_efeeab2cade4858be7ea48763ba5d9605.png

This also has a protection group configured for periodic replication to a FlashArray called flasharray-m20-1:

clipboard_e2b51e9a0870b29f746e6dc37280691e5.png

In SRM, on my site that has access to the flasharray-m50-1, the array manager has it configured as a local array and the flasharray-m20-1 configured as a peer:

clipboard_ee92c927b78ac4a2daf8cfa7a50a58b53.png

The opposing array manager is configured in the opposite way (flasharray-m20-1 as the local and flasharray-m50-1 as the peer):

clipboard_e009463eaac3db9cc4ac94fc5694befc3.png

The replication pair (testPod to flasharray-m20-1) is enabled and has a datastore in use and protected by an SRM protection group:

clipboard_e878309204f4e75c367d3ec120219ab4b.png

For whatever reason, I want to move the pod and its volume(s) from the FlashArray named flasharray-m50-1 to another FlashArray named flasharray-m50-2 while maintaining periodic replication to the array flasharray-m20-1.

The high level process that is recommended is shown below:

  1. Add flasharray-m50-2 to the local arrays on the source site array manager and also as a peer on the remote site array manager.
  2. Stretch the pod to flasharray-m50-2.
  3. Connect the volume(s) to the appropriate host(s)/host group(s) on flasharray-m50-2.
  4. Ensure the new paths appear on the relevant ESXi host(s).
  5. Disconnect the volume(s) on flasharray-m50-1 from the relevant host(s)/host group(s). The old paths will go dead; rescan the ESXi cluster(s) to clear them out.
  6. Unstretch the pod from flasharray-m50-1.

Optionally, if flasharray-m50-1 is no longer in use, you can disable any pair using it and then remove it from the source and target array managers. Let's walk through the process now.

First add the flasharray-m50-2 to the source site array manager as a local array address (now listed in addition to flasharray-m50-1):

clipboard_e809536fb1afe2f167c3afcd9dfe5b795.png

Now add it on the remote site array manager as a peer:

clipboard_eedbd88d752b044b97d304a46be58e66b.png

Next login to flasharray-m50-1 and stretch the pod to flasharray-m50-2:

clipboard_e924f9b30d5991cb529f63bab223119cd.png

Choose the array:

clipboard_e5f79ad8ff8f37836a1a51a11aa6a1b6a.png

Wait for it to finish syncing:

clipboard_e162bb87b00f61c88fab48065cc9a38c0.png

...

clipboard_e87cc47b09046e4434f639009f6268395.png

Next identify all of your volumes in the pod. Any volume that has a connection should be verified. I currently have one volume in this pod:

clipboard_e935075a69cdf432c6ef2ea0f4dd3040c.png

Inside of vSphere, identify what datastore or RDM is using that volume; the vSphere Plugin is a simple way to verify (manual methods can be used, and scripted methods work best at scale):

clipboard_eccceae07e895b6adcf53b144927647fb.png

The plugin verifies that datastore podtest-01 is indeed the volume above in the pod. Click on the Configure tab, then Connectivity and Multipathing. Choose a host and verify its pathing (currently the paths will be from only one array):

clipboard_e48899c6d732daae19816103e0733bf15.png

Now login to flasharray-m50-2. Connect the volume to the appropriate host groups. In my case, I want to connect it to an ESXi cluster called MountainView, which is similarly named on flasharray-m50-2:

clipboard_eb3728e072a462e34a3a21106675691d0.png

Choose the host group:

clipboard_e2d4d08840c4dc4bb94ff6d1b01efdd70.png

Back in vCenter you should see the paths double (in this case 4 paths to 8):

clipboard_e3621b26db3bd01fa91f50a97158215cc.png

Click Refresh if they do not appear. If they do not appear after that, verify host connectivity to the array. Verify this path change on all hosts connected to the datastore. You can now safely disconnect it from the host group(s) on flasharray-m50-1:

clipboard_e397a639220551a58603369c5a399727d.png

You will then see the original paths go dead:

clipboard_e83a44d69cae9b4093abf8aee3c6041ab.png

You can clear the old paths immediately with a host rescan.

Once this has been completed for all volumes in the pod that are connected, you can then unstretch. The FlashArray will NOT let you unstretch a pod from an array if there are any still-connected volumes from the pod on the array you want to unstretch from.

Go to either FlashArray and remove the flasharray-m50-1 from membership of the pod.

clipboard_ebe01b5671cae88eb6d9b34cb5d753cf0.png

Go back to SRM and rescan for array pairs:

clipboard_eb41b9bf6e64605760ac6b1cfa5cf9833.png

The pair will still be valid, and the pod now lives on an entirely new FlashArray!

clipboard_e3a86f353b0e2b03357cf691082fff99a.png

 

Deleting an Array Manager

An array manager can be deleted when no devices are in SRM protection groups from any array pair in that manager and no array pairs are enabled.

First ensure no devices are protected (there should be no protection groups listed in the Protection Group column of the Discovered Devices table for any enabled array pairs in the array manager pair):

clipboard_e58c8343fe1ed3ccdae018e0d8ad5ca9b.png

Then disable any array pairs:

clipboard_e54c73b48e792d048a0fa31224b95d1d7.png

Then select any disabled pair and then Array Manager Pair and choose Remove:

clipboard_e8fcce632c9c1b514f35e52a909019280.png

SRM will confirm the array pairs related to that array manager pair before proceeding:

clipboard_e7a87612f8d02eb99608ba8fb9d0fdb94.png

Click Remove. If it fails, it means you missed an array pair:

clipboard_e2dc1d7a77f25ae025680f5aa0be5d215.png

Troubleshooting Array Pairs

The following sections refer to issues that can be encountered in array discovery.

An Array Pair is not Listed

Array pair discovery is based on discovered replication connections, as can be seen in the FlashArray Web Interface under Storage > Array > Connected Arrays:

clipboard_e8a8f050dbee45e9cb0783bd564dc9f99.png

If an array is not listed here, then the array pair will not be shown in SRM.

Array Pair Not Found Error during Array Discovery

If an array manager fails to discover arrays with a similar error to below:

clipboard_e211fdc8712034b9df313db5357d22ca5.png

And the listing for Last Array Manager Ping is in a failed state:

 

clipboard_ed0dbc92fcc85bfac91d5bf5f8663c82a.png

The likely cause is one of the following issues:

  1. A pod was renamed that was part of an enabled array pair (renaming protected pods is not supported in the 3.1 SRA release)
  2. A pod was destroyed that was part of an enabled array pair
  3. The pod that was part of an enabled array pair was moved to a new FlashArray and that new FlashArray is not included in any array manager.

Check if the Pod was Renamed

In the case of a rename, if the array pair is not needed in SRM, you can simply disable the pair and array discovery will work again.

clipboard_e7c6fdac06630595465df940fa9657711.png

In the 3.1 release of the SRA, renaming pods without reconfiguring SRM protection is not supported. In a future SRA release, the ability to rename pods without having to reconfigure protection in SRM will be added.

If the array pair that included the renamed pod is still needed, you will need to rename the pod back to its original name. Identify the FlashArray(s) that hosted the original pod, go to the web interface of either FlashArray that hosted the pod (if, of course, it was on two), and go to Settings > Users > Audit Trail.

Search for the command purepod and the subcommand rename. This will show any pod renames that occurred on the array. If no results show up for the original pod, try the other array (if it was stretched).

clipboard_edd15d9ef95bda5839fcf382ee8dc71b6.png

Once you find what the pod was renamed to, rename it back to the original name.

clipboard_eab29a1f0ac90090c4e70c418a1268077.png

Renaming it back to the original name will fix array discovery.

Check if the Pod was Destroyed

It is also possible that the pod itself was destroyed on the FlashArray. Before deciding upon the right course of action, it is important to verify that this is indeed what happened. Login to the FlashArray web interface of the array that hosted the pod originally and go to Settings > Users > Audit Trail.

Search for the command purepod and the subcommand destroy. This will show any pod destroy operations that occurred on the array. If no results show up for the original pod, try the other array (if it was stretched).

clipboard_edc3e5954b7959acad5a87bee12210296.png

Once you confirm that the pod was indeed destroyed, you have a few options:

  1. Is that array pair even needed anymore (are there pre-existing protected volumes discovered from it in-use)? If not, just disable that array pair in SRM and rediscover.
  2. If there are protected volumes in it, verify the volumes are still in-use and were moved out of the pod prior to the pod destruction. If they are still in-use:
    1. If it has been less than 24 hours, restore the pod from the Destroyed Pods box and move the volumes back into it.
clipboard_ee3d9a2595b4a425592004a387b322599.png
    2. If it has been more than 24 hours, or the pod was manually eradicated (permanently deleted), you can create a new pod with the same name and move the affected volumes into it. Then re-run array discovery.
    3. Move the volumes into a new pod (or into a non-pod protection group) and re-create (or remove and re-add it to) their protection group entirely in SRM. In this case, you will need to remove any affected devices from their SRM protection groups and re-add them under a new array pair.

Check if the Pod was Moved

In the case of the pod being moved to a different array (or arrays), it is necessary to find where the pod was moved from and then update the array managers with the new FlashArray addresses.

Login to the FlashArray web interface of the array that hosted the pod originally and go to Settings > Users > Audit Trail.

Search for the command purepod and the subcommand remove. This will show any pod unstretches from that array. If there is a remove operation it means it was removed from that array.

clipboard_ecc52c044a8386065205346882eb87cf1.png

Now search for the command purepod and the subcommand add. This will show any pod stretches from that array. If there is an add operation, it means the pod was added to that array.

clipboard_e06fa4d26acf99733507e0fd9597c98fe.png

Now that you know where the pod is, follow the steps in the section Moving a Pod to a different FlashArray.