Pure Technical Services

Cloud Block Store Support Matrix

 

Cloud Block Store Models and Capacity Upgrades

| Model | Usable | Effective (4:1) | Controller Instances | VD Instances | EBS Volumes for NVRAM | Provisioned IOPS for io1 (NVRAM) |
|---|---|---|---|---|---|---|
| CBS //V10A-R1 | 6.9 TiB (7.6 TB) | 27.6 TiB (30.3 TB) | 2 x c5n.9xlarge** | 7 x i3.2xlarge** | 7 x 60 GB io1 volumes | 3,000 per io1 volume; Total: 21,000* |
| CBS //V10A-R1 + first shelf upgrade | 13.8 TiB (15.2 TB) | 55.2 TiB (60.7 TB) | 2 x c5n.9xlarge** | 14 x i3.2xlarge** | 7 x 60 GB io1 volumes | 3,000 per io1 volume; Total: 21,000* |


 

| Model | Usable | Effective (4:1) | Controller Instances | VD Instances | EBS Volumes for NVRAM | Provisioned IOPS for io1 (NVRAM) |
|---|---|---|---|---|---|---|
| CBS //V20A-R1 | 13.8 TiB (15.2 TB) | 55.2 TiB (60.7 TB) | 2 x c5n.18xlarge** | 7 x i3.4xlarge** | 7 x 120 GB io1 volumes | 6,000 per io1 volume; Total: 42,000* |
| CBS //V20A-R1 + first shelf upgrade | 27.6 TiB (30.4 TB) | 110.4 TiB (121.4 TB) | 2 x c5n.18xlarge** | 14 x i3.4xlarge** | 7 x 120 GB io1 volumes | 6,000 per io1 volume; Total: 42,000* |
| CBS //V20A-R1 + second shelf upgrade | 55.2 TiB (60.7 TB) | 220.8 TiB (242.8 TB) | 2 x c5n.18xlarge** | 14 x i3.4xlarge** + 7 x i3.8xlarge** | 7 x 120 GB io1 volumes | 6,000 per io1 volume; Total: 42,000* |

* The provisioned IOPS values refer to the io1 volumes used for Cloud Block Store NVRAM; they do not reflect the total effective IOPS available from Cloud Block Store.

** Pure strongly recommends using Convertible Reserved Instances (rather than On-Demand or Standard Reserved Instances) for the underlying controller and VD EC2 instances. This gives customers the flexibility to NDU to newer, cheaper, and more powerful EC2 instance types as AWS makes them available.
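The capacity figures in the tables above follow from simple arithmetic. The sketch below (illustrative helper names, not a Pure tool) derives effective capacity from usable capacity at the assumed 4:1 data-reduction ratio, and converts binary TiB to decimal TB:

```python
# Capacity arithmetic behind the model tables above. The 4:1 ratio is the
# planning assumption from the tables, not a guaranteed reduction rate.

TIB_IN_TB = 2**40 / 10**12  # 1 TiB ~= 1.0995 TB


def effective_capacity(usable_tib: float, reduction_ratio: float = 4.0) -> float:
    """Effective capacity in TiB at the given data-reduction ratio."""
    return usable_tib * reduction_ratio


def tib_to_tb(tib: float) -> float:
    """Convert binary TiB to decimal TB."""
    return tib * TIB_IN_TB


# CBS //V10A-R1 base configuration: 6.9 TiB usable.
print(round(effective_capacity(6.9), 1))  # 27.6 (TiB effective)
print(round(tib_to_tb(6.9), 1))           # 7.6 (TB usable)
print(round(tib_to_tb(27.6), 1))          # 30.3 (TB effective)
```

The same arithmetic reproduces the other rows, e.g. 13.8 TiB usable yields 55.2 TiB effective.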

 

Supported Regions

  • us-east-1 (N. Virginia) *
  • us-east-2 (Ohio)
  • us-west-2 (Oregon) * **
  • eu-central-1 (Frankfurt) *
  • eu-west-1 (Ireland)
  • eu-west-2 (London)*
  • ap-southeast-1 (Singapore)*
  • ap-southeast-2 (Sydney)
  • ap-northeast-1 (Tokyo)
  • ap-northeast-2 (Seoul)
  • ca-central-1 (Canada Central)

* These regions are generally supported. However, some Availability Zones within these regions do not offer the c5n.9xlarge and c5n.18xlarge instances that Cloud Block Store requires, and the affected Availability Zones differ for every customer. Customers can contact AWS Support to find out which Availability Zones do not support c5n.9xlarge and c5n.18xlarge instances, and avoid deploying Cloud Block Store in subnets tied to those Availability Zones.

** Cloud Block Store in an ActiveCluster configuration is not supported in Oregon when using the Pure1 Mediator. Customers who want to deploy Cloud Block Store with ActiveCluster in Oregon must use the On-Premises Mediator.

Note: Support for each region depends on the availability of EC2 resources for Cloud Block Store. In regions with low quantities of c5n or i3 instances, customers have the option to reserve instances ahead of usage. See Capacity Reservations.
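One way to pre-check an Availability Zone is to query the EC2 DescribeInstanceTypeOfferings API with location type "availability-zone" and intersect the results with the instance types a CBS deployment needs. The sketch below works over a response-shaped list of dicts so it runs without AWS credentials; the sample data and helper name are illustrative, not real AWS output:

```python
# Sketch: filter EC2 DescribeInstanceTypeOfferings-shaped results down to
# the AZs that offer every instance type a CBS deployment needs. In
# practice the offerings list would come from the EC2 API (AWS CLI or SDK).

REQUIRED_TYPES = {"c5n.9xlarge", "i3.2xlarge"}  # e.g. a //V10A-R1 deployment


def supported_azs(offerings, required=frozenset(REQUIRED_TYPES)):
    """Return the sorted AZ names that offer all required instance types."""
    by_az = {}
    for entry in offerings:
        by_az.setdefault(entry["Location"], set()).add(entry["InstanceType"])
    return sorted(az for az, types in by_az.items() if required <= types)


sample = [
    {"InstanceType": "c5n.9xlarge", "Location": "us-east-1a"},
    {"InstanceType": "i3.2xlarge", "Location": "us-east-1a"},
    {"InstanceType": "i3.2xlarge", "Location": "us-east-1b"},  # no c5n.9xlarge
]
print(supported_azs(sample))  # ['us-east-1a']
```

Deploying only into subnets tied to the returned AZs avoids the per-customer AZ gaps described above.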

 

Supported Capabilities and Features 

| Feature/Capability | Support | Notes |
|---|---|---|
| Nesting the CloudFormation template | No | Cloud Block Store is deployed using CloudFormation; Pure provides customers with a CloudFormation (CF) template YAML file. Cloud Block Store must be deployed using its own standalone CloudFormation template. Do not nest the Cloud Block Store CloudFormation template inside other CloudFormation templates, as this can lead to unexpected configuration issues over time. |
| Shut down or stop | No | Stopping or shutting down Cloud Block Store or its underlying resources is not supported. |
| Cloud Block Store termination/deletion | Support driven: v5.3.0.aws0, 5.3.0.aws1, 5.3.0.aws2; Customer driven: v5.3.3.aws0+ | CBS running Purity v5.3.0.aws0, 5.3.0.aws1, or 5.3.0.aws2 requires Pure Support in order to delete a CBS instance. CBS running Purity 5.3.3.aws0 or higher can be deleted by customers; see the CBS Deployment Guide for steps to terminate/delete a CBS instance. |
| Host ports | 2 (iSCSI) | One per controller |
| Replication ports | 2 | One per controller |
| Management ports | 2 | One per controller |
| Deduplication | Yes | |
| Compression | Yes | |
| Thin Provisioning | Yes | |
| Snapshots | Yes | |
| CloudSnap creation | Yes | |
| CloudSnap restore | Yes | |
| QoS (Fairness) | No | |
| QoS (Limits) | Yes | |
| Purity Run | No | |
| WFS | No | |
| Encryption | Yes | |
| REST APIs | Yes | |
| Controller NDU** | Yes | Support driven via CLI commands |
| Capacity NDU*** | Yes | Support driven via CLI commands |
| Pure1 | Yes | |
| Pure1 Meta (Workload Planner) | No | |
| VM Analytics | N/A | No VMware Cloud support for CBS |
| VMware Cloud | No | |
| Changing network IPs or subnets | No | This is an AWS limitation |

 

| Replication Feature | Support | Notes |
|---|---|---|
| Async Replication | Yes | Bi-directional replication is supported between two CBS instances in different AZs or regions, and between CBS instances and FlashArray. AWS network charges may apply for replication across Availability Zones, VPCs, and/or regions; egress charges may apply for any data sent from Cloud Block Store to FlashArray. |
| ActiveCluster (Synchronous Replication) | Yes* | Bi-directional replication is supported between two CBS instances in different AZs. AWS network charges may apply for ActiveCluster configurations across Availability Zones and/or VPCs. |
| Active/Active Async | No | |
| ActiveDR | No | Expected to be supported in the next major Purity release. |

 

* Cloud Block Store in an ActiveCluster configuration is not supported in Oregon when using the Pure1 Mediator. Customers who want to deploy Cloud Block Store with ActiveCluster in Oregon must use the On-Premises Mediator.

** Controller upgrade from //V10A-R1 to //V20A-R1 is supported but not recommended: backend performance will be limited because the virtual drives cannot be upgraded in place from i3.2xlarge to i3.4xlarge, and the i3.4xlarge virtual drives have more network bandwidth to support higher backend traffic.

*** Capacity upgrades are supported by adding additional virtual drives, seven at a time. However, an in-place capacity upgrade from i3.2xlarge to i3.4xlarge is not yet supported; support for in-place capacity upgrades will come post-GA.

 

//V10A-R1 Limits

 

General Limits

| Description | 5.3.x [AC Enabled] |
|---|---|
| Max # of volumes | 500 |
| Max # of volume groups | 100 |
| Max # of hosts | 200 [100]* |
| Max # of IQNs | 200 [100]* |
| Max # of sessions | 2,400 |
| Max # of host groups | 50 [25]* |

 

EC2 Instance Bandwidth Limits

| Description | c5n.9xlarge |
|---|---|
| Network Bandwidth (Aggregate)¹ | 50 Gbps |
| Bandwidth per connection | 5 Gbps |

¹ The 50 Gbps aggregate network bandwidth is shared across all virtual Elastic Network Interfaces (ENIs). These include the system, management, replication, and iSCSI interfaces. The virtual ENIs are backed by a physical Elastic Network Adapter (ENA).

Each connection or socket (source IP address:source port, target IP address:target port) is limited to 5 Gbps. For example, with 4 iSCSI connections to the primary controller, the resulting aggregate bandwidth would be 20 Gbps.
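A rough upper bound on iSCSI throughput to one controller follows from the two limits above. The sketch below (hypothetical helper, not a Pure tool) caps each connection at 5 Gbps and the total at the instance's aggregate limit:

```python
# Sketch: upper bound on iSCSI throughput to one controller, given the
# 5 Gbps per-connection cap and the instance's aggregate network limit
# (50 Gbps for c5n.9xlarge, 100 Gbps for c5n.18xlarge). Real throughput
# is also shared with management, replication, and system traffic.

PER_CONNECTION_GBPS = 5


def max_iscsi_gbps(connections: int, instance_cap_gbps: int) -> int:
    """Best-case iSCSI Gbps for a number of connections to one controller."""
    return min(connections * PER_CONNECTION_GBPS, instance_cap_gbps)


print(max_iscsi_gbps(4, 50))   # 20  (the 4-connection example above)
print(max_iscsi_gbps(16, 50))  # 50  (capped by the aggregate limit)
```

This is why adding iSCSI connections raises achievable bandwidth only until the instance's aggregate limit is reached.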

 

Snapshots and Asynchronous Replication Limits

| Description | 5.3.x [AC Enabled] |
|---|---|
| Max # of volume snapshots per array | 1,000 |
| Max # of pgroups per array¹ | 50 [50] |
| Max # of remote connected arrays | 1 |
| Minimum configurable replication frequency² | 1 hour |

¹ Example for pgroup and snapshot limits: if you create a pgroup containing 100 volumes and then take a single pgroup snapshot of that pgroup, the following counts against each maximum:

  • 1 pgroup consumed from the array-wide maximum number of pgroups
  • 1 pgroup snapshot consumed from the array-wide pgroup snapshot maximum
  • 100 volume snapshots consumed from the array-wide volume snapshot maximum

Note: If a pgroup snapshot request would result in more volume snapshots than are supported by the FlashArray, the pgroup snapshot request and/or scheduled pgroup snapshot will fail.

² This is the minimum configurable replication frequency. Replication is a background process, and priority is given to front-end workload. Maintaining the configured replication frequency depends on several factors, such as the front-end workload on the array, the amount of data reduction, the amount of logical address space being replicated, and the available replication bandwidth.
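The accounting in the pgroup example above can be sketched as follows (illustrative helper, using the //V10A-R1 limit of 1,000 volume snapshots as the default):

```python
# Sketch of the pgroup snapshot accounting described above: one pgroup
# snapshot of a pgroup with N volumes consumes N volume snapshots against
# the array-wide maximum. A negative remainder means the request would fail.

def volume_snapshot_budget(pgroup_volume_count: int,
                           pgroup_snapshots_taken: int,
                           max_volume_snapshots: int = 1000) -> int:
    """Remaining array-wide volume-snapshot budget after the snapshots taken."""
    consumed = pgroup_volume_count * pgroup_snapshots_taken
    return max_volume_snapshots - consumed


print(volume_snapshot_budget(100, 1))   # 900  (the example above)
print(volume_snapshot_budget(100, 11))  # -100 (the 11th snapshot would fail)
```

The same arithmetic applies to the //V20A-R1 section below with `max_volume_snapshots=2000`.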

Snapshot Offload (NFS/Cloud) Replication Limits

| Description | 5.3.x |
|---|---|
| Max # of volume snapshots on an offload target | 10,000 |
| Max # of volume snapshots that one FlashArray can offload to an offload target | 10,000 |
| Max # of offload targets configurable per FlashArray | 1 |
| Max # of FlashArrays configurable per offload target | 4 |

Host Limits

| Description | 5.3.x |
|---|---|
| Max # of IQNs per host | 200 |
| Max # of LUNs (volumes) per host | 500 |
| Max # of private LUNs connected per host (LUNs not connected to host groups) | 500 |
| LUN IDs assigned for LUN connections | LUNs 1-4095 for private or shared connections |

Note: Private connections start at LUN ID 1 and count up. Shared connections start at 255 and count down, then up from 256 once IDs 1-255 are in use.
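The LUN ID assignment rule above can be sketched as follows; this is an illustration of the stated rule, not the actual Purity implementation:

```python
# Sketch of the LUN ID assignment rule: private connections take the lowest
# free ID from 1 upward; shared connections take the highest free ID from
# 255 downward, then the lowest free ID from 256 upward once 1-255 are
# exhausted. IDs 1-4095 are available for private or shared connections.

MAX_LUN = 4095


def next_lun_id(in_use: set, shared: bool) -> int:
    """Pick the next LUN ID per the assignment rule above."""
    if shared:
        for lun in range(255, 0, -1):       # 255 down to 1
            if lun not in in_use:
                return lun
        search = range(256, MAX_LUN + 1)    # then 256 upward
    else:
        search = range(1, MAX_LUN + 1)      # 1 upward
    for lun in search:
        if lun not in in_use:
            return lun
    raise ValueError("no free LUN IDs")


print(next_lun_id(set(), shared=False))              # 1
print(next_lun_id(set(), shared=True))               # 255
print(next_lun_id(set(range(1, 256)), shared=True))  # 256
```

Keeping private and shared ranges converging from opposite ends delays ID collisions between the two connection types.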

Host Group Limits

| Description | 5.3.x |
|---|---|
| Max # of hosts per host group | No specific limit beyond max hosts |
| Max # of LUNs (volumes) per host group | 500 |

Note: Private LUN connections count against the host group max LUN limit. See above for LUN ID assignment.

Volume Limits

| Description | 5.3.x |
|---|---|
| Max volume size | 4 PB |

 

 

//V20A-R1 Limits

General Limits 

| Description | 5.3.x [AC Enabled] |
|---|---|
| Max # of volumes | 1,000 |
| Max # of volume groups | 200 |
| Max # of hosts | 400 [200]* |
| Max # of IQNs | 400 [200]* |
| Max # of sessions | 4,800 |
| Max # of host groups | 100 [50] |

 

EC2 Instance Bandwidth Limits

| Description | c5n.18xlarge |
|---|---|
| Network Bandwidth (Aggregate)¹ | 100 Gbps |
| Bandwidth per connection | 5 Gbps |

¹ The 100 Gbps aggregate network bandwidth is shared across all virtual Elastic Network Interfaces (ENIs). These include the system, management, replication, and iSCSI interfaces. The virtual ENIs are backed by a physical Elastic Network Adapter (ENA).

Each connection or socket (source IP address:source port, target IP address:target port) is limited to 5 Gbps. For example, with 4 iSCSI connections to the primary controller, the resulting aggregate bandwidth would be 20 Gbps.

 

Snapshot and Asynchronous Replication Limits

| Description | 5.3.x [AC Enabled] |
|---|---|
| Max # of volume snapshots per array² | 2,000 |
| Max # of pgroups per array² | 100 [100] |
| Max # of remote connected arrays | 4 |
| Minimum configurable replication frequency³ | 30 minutes |

² Example for pgroup and snapshot limits: if you create a pgroup containing 100 volumes and then take a single pgroup snapshot of that pgroup, the following counts against each maximum:

  • 1 pgroup consumed from the array-wide maximum number of pgroups
  • 1 pgroup snapshot consumed from the array-wide pgroup snapshot maximum
  • 100 volume snapshots consumed from the array-wide volume snapshot maximum

Note: If a pgroup snapshot request would result in more volume snapshots than are supported by the FlashArray, the pgroup snapshot request and/or scheduled pgroup snapshot will fail.

³ This is the minimum configurable replication frequency. Replication is a background process, and priority is given to front-end workload. Maintaining the configured replication frequency depends on several factors, such as the front-end workload on the array, the amount of data reduction, the amount of logical address space being replicated, and the available replication bandwidth.

Snapshot Offload (NFS/Cloud) Replication Limits

| Description | 5.3.x |
|---|---|
| Max # of volume snapshots on an offload target | 100,000 |
| Max # of volume snapshots that one FlashArray can offload to an offload target | 100,000 |
| Max # of offload targets configurable per FlashArray | 1 |
| Max # of FlashArrays configurable per offload target | 4 |

Host Limits

| Description | 5.3.x |
|---|---|
| Max # of IQNs per host | 400 |
| Max # of LUNs (volumes) per host | 500 |
| Max # of private LUNs connected per host (LUNs not connected to host groups) | 500 |
| LUN IDs assigned for LUN connections | LUNs 1-4095 for private or shared connections |

Note: Private connections start at LUN ID 1 and count up. Shared connections start at 255 and count down, then up from 256 once IDs 1-255 are in use.

Host Group Limits

| Description | 5.3.x |
|---|---|
| Max # of hosts per host group | No specific limit beyond max hosts |
| Max # of LUNs (volumes) per host group | 500 |

Note: Private LUN connections count against the host group max LUN limit. See above for LUN ID assignment.

Volume Limits

| Description | 5.3.x |
|---|---|
| Max volume size | 4 PB |