Pure Technical Services

Cloud Block Store Deployment and Configuration Guide for AWS

Quelyn Gretsky


Overview

A Cloud Block Store deployment video is also available as a supplement to help users deploy Cloud Block Store from the ground up, including all the Amazon Web Services (AWS) prerequisites.

Video here: https://youtu.be/BPex54CbmUU

Pure Storage's Cloud Block Store (CBS) is a virtual appliance powered by the Purity Operating Environment (POE), which uses the native resources of Amazon Web Services (AWS) to enhance storage services with enterprise features. This document provides detailed procedures to successfully deploy a new Cloud Block Store instance in your Amazon Virtual Private Cloud (Amazon VPC). 

Cloud Block Store's high-level architecture consists of two Amazon Elastic Compute Cloud (Amazon EC2) instances acting as controllers. The Amazon EC2 instances process data and provide data services such as data reduction, snapshots, encryption, and replication. In the initial deployment, there are seven EC2 instances called Virtual Drives. The Virtual Drives store and copy data onto Amazon Simple Storage Service (Amazon S3). You can upgrade a Cloud Block Store instance with additional capacity by adding a group of seven Virtual Drives for each capacity upgrade.

cbs46.JPG

Requirements

Cloud Block Store is deployed using CloudFormation. Pure provides customers with a CloudFormation (CF) template YAML file. Cloud Block Store must be deployed using its own standalone CloudFormation template. It is important that customers do not nest the Cloud Block Store CloudFormation template inside other CloudFormation templates, as this can lead to unexpected configuration issues over time.

 

Supported Regions

  • us-east-1 (N. Virginia) *
  • us-east-2 (Ohio)
  • us-west-2 (Oregon) * **
  • eu-central-1 (Frankfurt) *
  • eu-west-1 (Ireland)
  • eu-west-2 (London)*
  • ap-southeast-1 (Singapore)*
  • ap-southeast-2 (Sydney)
  • ap-northeast-1 (Tokyo)
  • ap-northeast-2 (Seoul)
  • ca-central-1 (Canada Central)

* These regions are generally supported. However, there are some Availability Zones within these regions that do not offer the c5n.9xlarge and c5n.18xlarge instance types required for Cloud Block Store. These Availability Zones are different for every customer. Customers can contact AWS Support to find out which Availability Zones do not support c5n.9xlarge and c5n.18xlarge instances, and avoid deploying Cloud Block Store in subnets tied to those Availability Zones.

** Cloud Block Store in an ActiveCluster configuration is not supported in Oregon if using the Pure1 Cloud Mediator. Customers who want to deploy Cloud Block Store with ActiveCluster in Oregon must use the On-Premises Mediator.

Note: Support for each region depends on the availability of EC2 resources for Cloud Block Store. For regions with low quantities of c5n or i3 instances, customers have the option to reserve the instances ahead of usage. See Capacity Reservations.
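If you prefer to check instance-type availability yourself, the AWS CLI can list which Availability Zones in a region offer a given instance type, and can reserve capacity ahead of time. This is a minimal sketch assuming the AWS CLI is installed and configured; the region, zone, and instance count below are placeholders.

# List the Availability Zones in us-east-1 that offer the c5n.9xlarge instance type
aws ec2 describe-instance-type-offerings --region us-east-1 --location-type availability-zone --filters Name=instance-type,Values=c5n.9xlarge

# Optionally reserve capacity ahead of usage (see Capacity Reservations)
aws ec2 create-capacity-reservation --region us-east-1 --availability-zone us-east-1a --instance-type c5n.9xlarge --instance-platform Linux/UNIX --instance-count 2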

 

 

Convertible Reserved Instances (Strongly Recommended)

A Cloud Block Store instance is composed of various underlying AWS resources, including EC2 instances. AWS provides customers the option to pay for the EC2 instances on an On-Demand basis or through a pre-committed purchase of EC2 instances called Reserved Instances (RIs). Paying for On-Demand EC2 instances means customers are only charged while the EC2 instance is powered on, but the per-hour rate is expensive compared to Reserved Instances. Reserved Instances are heavily discounted and much less expensive per hour, but customers are committed to paying for them regardless of whether the EC2 instance is powered on or off. Reserved Instances come in two types: Standard RIs and Convertible RIs. Standard RIs provide the most significant discount. Convertible RIs do not have as large a discount, but offer the flexibility to apply the discount toward other EC2 instance types. This is important for Cloud Block Store because as AWS releases new EC2 instance types that may be cheaper and/or more powerful, Cloud Block Store can take advantage of them by performing non-disruptive upgrades (NDU) of the underlying EC2 instances. This can provide a better-performing and more cost-effective solution as new EC2 types are released over time. Because of the flexibility that Convertible RIs offer, Pure Storage strongly recommends that customers purchase Convertible RIs for the underlying Cloud Block Store resources. More information can be found on the AWS Reserved Instance page.

Networking

Private Subnets

Cloud Block Store deploys with the following four Ethernet interfaces on each controller:

  1. System
  2. iSCSI
  3. Management
  4. Replication

During deployment, users will be asked to provide a private subnet for each interface.

  • As a security requirement, all subnets for Cloud Block Store must be private subnets.
  • Each subnet used by the Cloud Block Store Ethernet interfaces must be different from the subnets used by the EC2 host initiators. iSCSI traffic between Cloud Block Store and EC2 host initiators can flow using route tables. In most cases, route tables between subnets are set to "local", which allows traffic to flow between subnets in the same Amazon VPC. There are two reasons for this requirement:
    • Separate subnets minimize the chance that an EC2 host initiator would accidentally use IP addresses belonging to the Cloud Block Store Ethernet interfaces.
    • Cloud Block Store instances must use seven IP addresses for each capacity upgrade. Placing Cloud Block Store Ethernet interfaces on a separate subnet from the EC2 host initiators reduces the chance that the subnet runs out of IP addresses.
  • There are two typical network configurations:
    • The simplest topology is to place all Cloud Block Store interfaces (system, iSCSI, management, replication) onto a dedicated subnet. In this scenario, create a new private subnet with /25 subnet mask (255.255.255.128), which is dedicated to only Cloud Block Store interfaces. See the following network configurations diagram.
    • As an option, Cloud Block Store also supports multiple private subnets, one for each Cloud Block Store Ethernet interface type. This network topology offers an optimal solution because it allows traffic isolation. To minimize the chance of having a network overlap, we recommend this network topology when replicating between a Cloud Block Store instance and a FlashArray on-premises (or another Cloud Block Store instance) on a different network via Site-to-Site VPN, VPC Peering, or Transit Gateway. See the following network configurations diagram for an example of this topology.
      • If multiple private subnets are used, they must be all in the same Availability zone.
      • Ensure the private subnet for the System interface has internet access; see the Internet Access section.
 Network configuration option diagram:

cbs92.JPG

The following table summarizes the requirements for each interface type:

Interface Name   Interface   Subnet Type   Internet Access Required?
System           eth0        Private       Y
Replication      eth1        Private       N
iSCSI            eth2        Private       N
Management       eth3        Private       N
Internet Access

The private subnet for the System interface must have internet access. Internet access ensures that Cloud Block Store can phone home to Pure1 providing logs, alerts, and additional cloud management services. The simplest configuration is to route traffic from the System private subnet to a NAT Gateway, which resides in the public subnet.

cbs34.JPG
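As an illustration, the route from the System subnet to a NAT Gateway can also be added with the AWS CLI. This is a minimal sketch; the route table and NAT Gateway IDs below are placeholders for your own resources.

# Send all internet-bound traffic from the System subnet's route table to a NAT Gateway in the public subnet
aws ec2 create-route --route-table-id rtb-0123456789abcdef0 --destination-cidr-block 0.0.0.0/0 --nat-gateway-id nat-0123456789abcdef0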

VPC Endpoint to Amazon S3 and Amazon DynamoDB

A Cloud Block Store instance copies all written data to Amazon S3 to ensure high durability. In addition, a Cloud Block Store instance also sends metadata pertaining to the Cloud Block Store configuration and underlying resources to DynamoDB.

It is important to ensure this traffic travels within the AWS network rather than through the public internet. The benefits of VPC Endpoints include:

  • VPC Endpoints ensure that data stays within the AWS network, which eliminates the significant egress costs for sending data to Amazon S3 and Amazon DynamoDB.
  • A VPC Endpoint for Amazon S3 also prevents additional network hops through the public internet, which can significantly improve performance.
  • VPC Endpoints are a standard AWS best practice for security reasons.

It is highly advised that customers use VPC Endpoints for Amazon S3 and Amazon DynamoDB. See Appendix B for steps to add an Amazon S3 and Amazon DynamoDB VPC Endpoint to an existing subnet.

 

cbs84.JPG
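For reference, gateway endpoints for S3 and DynamoDB can also be created with the AWS CLI. This is a minimal sketch in which the region, VPC ID, and route table ID are placeholders; Appendix B covers the console-based steps.

# Create gateway VPC Endpoints for Amazon S3 and Amazon DynamoDB and associate them with the System subnet's route table
aws ec2 create-vpc-endpoint --vpc-id vpc-0123456789abcdef0 --vpc-endpoint-type Gateway --service-name com.amazonaws.us-east-1.s3 --route-table-ids rtb-0123456789abcdef0
aws ec2 create-vpc-endpoint --vpc-id vpc-0123456789abcdef0 --vpc-endpoint-type Gateway --service-name com.amazonaws.us-east-1.dynamodb --route-table-ids rtb-0123456789abcdef0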

Example: The following image displays a route entry created in the subnet used for the system interface. It shows all internet-bound traffic being directed to a NAT Gateway as well as all S3-bound and DynamoDB-bound traffic directed to their respective VPC Endpoints. 

cbs91.JPG

 

VPC DNS Resolution

Ensure that the Amazon VPC used for Cloud Block Store has DNS resolution enabled. By default, DNS resolution is enabled when an Amazon VPC is created. To view or change the DNS resolution setting for your Amazon VPC, see the instructions from AWS.
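If you prefer the CLI over the console, the setting can be checked and enabled as shown in this sketch (the VPC ID is a placeholder):

# View the current DNS resolution setting for the VPC
aws ec2 describe-vpc-attribute --vpc-id vpc-0123456789abcdef0 --attribute enableDnsSupport

# Enable DNS resolution if it is disabled
aws ec2 modify-vpc-attribute --vpc-id vpc-0123456789abcdef0 --enable-dns-support "{\"Value\":true}"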

IP Addresses

Deploying a new Cloud Block Store instance requires fifteen initial private IP addresses. For each capacity upgrade, an additional seven private IP addresses are required from the subnet where System interfaces reside. We recommend that the subnet used for the System interfaces has a network mask of 255.255.255.128 (/25). This ensures that there is enough space for capacity expansion.
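For example, a dedicated /25 private subnet could be created as follows; the VPC ID, CIDR block, and Availability Zone are placeholders to adapt to your own VPC layout.

# Create a /25 private subnet dedicated to the Cloud Block Store interfaces
aws ec2 create-subnet --vpc-id vpc-0123456789abcdef0 --cidr-block 10.0.2.0/25 --availability-zone us-east-1a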

Replication 

Async Replication - When replicating from a FlashArray on-premises to a Cloud Block Store instance in an Amazon VPC, ensure that there is network connectivity between the respective sites. Likewise, replication between multiple Cloud Block Store instances requires network connectivity between the instances. More specifically, in order to replicate between a Cloud Block Store instance and a physical FlashArray (or another Cloud Block Store instance), the management and replication ports must communicate. Configure all security groups, network ACLs, and routing tables to allow traffic between the two sites for the respective management and replication ports. The following table provides the port number for each interface.

Service Type Firewall Port
Management interfaces 443
Replication interfaces 8117

When replicating between a physical datacenter and the Amazon VPC, you can achieve network connectivity in a number of ways, including AWS Direct Connect or a Site-to-Site VPN connection.

For additional details on replication requirements and limits, see the Purity Replication Requirements and Interoperability Matrix.

ActiveCluster - ActiveCluster allows customers to synchronously replicate their Cloud Block Store volumes between two different Availability Zones. This protects customers from a full Availability Zone outage. However, there is no support for ActiveCluster with Cloud Block Store in the Oregon region (us-west-2) if customers plan on using the Pure1 Cloud Mediator. The Pure1 Cloud Mediator resides in the Oregon region. In order to limit the fault domain, customers who wish to deploy Cloud Block Store in an ActiveCluster configuration in Oregon must use the On-Premises Mediator.

CloudSnap - For cost-conscious customers who are looking for a lower cost DR alternative to replication, CloudSnap is a viable option. FlashArray customers can use CloudSnap to send snapshots of their volumes to Amazon S3 buckets. The snapshots are self-contained with the metadata needed to restore onto any other FlashArray or to Cloud Block Store. For the DR use case, customers can periodically send CloudSnap snapshots to Amazon S3. In a DR event where the primary site is inaccessible, customers can deploy a new Cloud Block Store instance on-demand and restore their CloudSnap snapshots. Once the CloudSnap snapshots are fully restored from S3, customers can attach the volumes to the application compute instances in their Amazon VPC to resume application services. This DR alternative provides a lower cost option for customers who have a higher RTO/RPO tolerance. Since volumes are being restored from Amazon S3, the RTO will largely depend on the amount of data that has to be restored.

Backup - CloudSnap can also be used as a backup tool or even a low-cost DR tool for data volumes on a Cloud Block Store instance. Customers using Cloud Block Store can use CloudSnap to back up snapshots to Amazon S3. These snapshots are recoverable to any Cloud Block Store instance or FlashArray.

 

 

EC2 instance vCPU limits

Note: Starting Oct 24, 2019, AWS switched to a new limits implementation that places default limits on total vCPUs rather than on each EC2 instance type. This makes monitoring simpler since you no longer need to track the limits of each instance type, only the total vCPU usage of the instance families. More information can be found in the AWS announcement and AWS EC2 FAQ.

You should ensure that your total vCPU max limits are sufficient to deploy Cloud Block Store. To view your current limits, go to your Amazon EC2 console.

  1. Click on Limits
  2. Set the search filter to Running Instances.
  3. View the total limits for your A,C,D,H,I,M,R,T,Z instances.

cbs93.JPG
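The same limit can also be read with the Service Quotas CLI. This is a sketch with an assumption: L-1216C47A is believed to be the quota code for "Running On-Demand Standard (A, C, D, H, I, M, R, T, Z) instances"; confirm the code in your Service Quotas console before relying on it.

# Check the current vCPU limit for Running On-Demand Standard (A, C, D, H, I, M, R, T, Z) instances
aws service-quotas get-service-quota --service-code ec2 --quota-code L-1216C47A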

 

When deploying a Cloud Block Store instance, you have the option to choose the Cloud Block Store instance type. Each Cloud Block Store instance type uses a certain number of vCPUs. Each customer’s AWS account has a default max limit for the total vCPUs within each region. Prior to deploying Cloud Block Store resources, ensure that your max limit can accommodate the vCPUs needed for your Cloud Block Store instances. The Cloud Block Store minimum vCPUs required for deployments are as follows:

Cloud Block Store Type   Total vCPUs required (Initial deployment)            Total vCPUs required (After one capacity upgrade)
VA10-R1                  128 (based on 2 x c5n.9xlarge and 7 x i3.2xlarge)    184 (based on adding 7 x i3.2xlarge)

Cloud Block Store Type   Total vCPUs required (Initial deployment)            Total vCPUs required (After first capacity upgrade)   Total vCPUs required (After second capacity upgrade)
VA20-R1                  256 (based on 2 x c5n.18xlarge and 7 x i3.4xlarge)   368 (based on adding 7 x i3.4xlarge)                   592 (based on adding 7 x i3.8xlarge)

 

 

Security Groups 

A Cloud Block Store instance has four Ethernet interfaces that are used for the following types of traffic: iSCSI, management, replication, and system intercommunication. Each Ethernet interface requires different types of TCP access.

  • As a security best practice, create three different security groups as shown in the table below, with the appropriate TCP access for the replication, iSCSI, and management interfaces. Each security group will be applied during the Cloud Block Store deployment.
  • The security group for the System interface is auto-created during the deployment.
  • The three security groups must be in the same region and VPC.
Security Group       Inbound              Outbound
System (eth0)*       Auto-Created         Auto-Created
Replication (eth1)   8117                 8117
iSCSI (eth2)         3260                 -
Mgmt (eth3)          22, 80, 443, 8084    443

* Note: For the System interface, a fourth security group called "PureSystemSecurityGroup" is automatically created and applied as part of the Cloud Block Store deployment.
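As an illustration, the three security groups could be created with the AWS CLI along the lines of the following sketch. The VPC ID and the 10.0.0.0/16 CIDR are placeholders (scope them to your host and replication networks), and the sg-... values stand for the group IDs returned by each create-security-group call. Note that new security groups allow all outbound traffic by default, so only the inbound rules from the table need to be added explicitly.

# Replication security group (eth1): inbound TCP 8117
aws ec2 create-security-group --group-name cbs-replication --description "CBS replication (eth1)" --vpc-id vpc-0123456789abcdef0
aws ec2 authorize-security-group-ingress --group-id sg-replication --protocol tcp --port 8117 --cidr 10.0.0.0/16

# iSCSI security group (eth2): inbound TCP 3260
aws ec2 create-security-group --group-name cbs-iscsi --description "CBS iSCSI (eth2)" --vpc-id vpc-0123456789abcdef0
aws ec2 authorize-security-group-ingress --group-id sg-iscsi --protocol tcp --port 3260 --cidr 10.0.0.0/16

# Management security group (eth3): inbound TCP 22, 80, 443, 8084
aws ec2 create-security-group --group-name cbs-mgmt --description "CBS management (eth3)" --vpc-id vpc-0123456789abcdef0
for port in 22 80 443 8084; do
  aws ec2 authorize-security-group-ingress --group-id sg-mgmt --protocol tcp --port $port --cidr 10.0.0.0/16
done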

 

 

IAM Role and Permissions 

To automate a Cloud Block Store deployment, upgrade, or termination, an IAM role with appropriate permissions is required. Even if the deploying user has elevated Administrator permissions, the IAM role is still required. You must create a new IAM policy with the appropriate IAM permissions, then create the IAM role and attach the IAM policy to your IAM role. For exact steps to create the IAM role and policy, see Appendix A.

 

 

Before you begin 

Deployment may fail if all the requirements are not met. Before deploying Cloud Block Store, go through the following checklist:

  1. Ensure that you have a private subnet (with /25 network mask) created specifically for Cloud Block Store interfaces. You can put all interfaces onto a single subnet, or create separate subnets for each interface. Refer to the Network Section for details and network options.
  2. Ensure there is internet access from the private subnet used for the system interfaces. (NAT Gateway recommended).
  3. Ensure there are VPC Endpoints for S3 and DynamoDB traffic from the private subnet used for the System interfaces. See VPC Endpoint section for details.
  4. Ensure that there are three separate Security Groups for iSCSI, management, and replication traffic. See Security Group for details.
  5. Ensure that your max vCPU limits can accommodate Cloud Block Store instances. See EC2 Instance vCPU Limits for details.
  6. Ensure the IAM role with appropriate permissions has been created. See IAM Role and Permission for details.
  7. Ensure DNS Resolution is enabled for your VPC. See VPC DNS Resolution for details.
  8. Have you retrieved your Pure as-a-Service license from Pure1? Steps to retrieve the license are here.
    1. Note: If you have not purchased a license, there are two options to obtain a license key:
      1. Work with Pure Storage sales teams and Pure Storage partners to obtain a Pure as-a-Service subscription.
      2. Go directly to the AWS Marketplace to sign up for a short-term subscription service.
  9. Have you added your AWS account ID to Pure1? This is a security feature to ensure that only approved AWS account IDs are allowed to deploy Cloud Block Store. Instructions to whitelist AWS account IDs are provided here.
  10. If the above requirements have been met, proceed and deploy Cloud Block Store.

 

 

Deploying Cloud Block Store

Note: Pure Storage provides a CloudFormation template to deploy Cloud Block Store. Please do NOT modify the contents of the provided CloudFormation template. Any changes made by the user without the express written consent of Pure Storage may lead to unexpected behavior and will not be supported.

Deploy Cloud Block Store from the AWS Marketplace.

  1. Go to the AWS Marketplace deployment listing for Cloud Block Store.

Alternatively, you can go to the AWS Marketplace and search for Cloud Block Store. 

  2. In the listing, click Continue to Subscribe.

cbs71.JPG

 

  3. Click Continue to Configuration.

cbs72.JPG

 

  4. Select your desired region and click Continue to Launch.

cbs73.JPG

 

  5. Review the selections and click Launch. This launches the AWS CloudFormation stack creation service.

cbs74.JPG

 

  6. The CloudFormation stack creation wizard should appear with all the template options pre-selected. Click Next to proceed.

cbs76.JPG

  7. Enter the desired information for your Cloud Block Store instance:
    1. Enter a Stack name. Stack name is for your Cloud Block Store deployment.
    2. Enter an ArrayName. ArrayName name is for your virtual appliance and is reflected in the name of the EC2 instances.
    3. Enter the RelayHost domain name. RelayHost is your domain name and can be modified later using the Cloud Block Store GUI or CLI. Example: purestorage.com
    4. Select the PurityInstanceType. PurityInstanceType is the desired Cloud Block Store model. You can view the model sizes and details in the CBS Support Matrix
    5. Enter the LicenseKey. You receive the license key when you create the subscription through a Pure as-a-Service subscription or the AWS Marketplace.
    6. (Optional) In the AlertRecipients field, enter a comma-separated list of email contacts to receive email alerts. You can modify this later using the Cloud Block Store GUI or CLI.
    7. Select a KeyName. KeyName is the name of an existing AWS Key Pair you wish to use for SSH access.
    8. Select the SystemSubnet. SystemSubnet is a private subnet for the system interfaces and requires internet access. Refer to the Network Section for details and network options.
    9. Select the ReplicationSubnet. ReplicationSubnet is a private subnet for the Replication interfaces. Refer to the Network Section for details and network options.
    10. Select the iSCSISubnet. iSCSISubnet is a private subnet for the iSCSI interfaces. Refer to the Network Section for details and network options.
    11. Select the ManagementSubnet. ManagementSubnet is a private subnet for the management interfaces. Refer to the Network Section for details and network options.
    12. Select the ReplicationSecurityGroup. ReplicationSecurityGroup allows both inbound and outbound TCP traffic on port 8117. Refer to Security Group for details.
    13. Select the iSCSISecurityGroup. iSCSISecurityGroup allows inbound TCP traffic on port 3260. Refer to Security Group for details.
    14. Select the ManagementSecurityGroup. ManagementSecurityGroup allows inbound TCP traffic on ports 22, 80, and 8084 as well as inbound/outbound traffic on port 443. Refer to Security Group for details.
  8. Keep the default values for the remaining fields and move to the next step.

cbs47b.JPG

  9. Click Next.
  10. Select Stack Options:
    1. (Optional) Apply tags for the Cloud Block Store resources.
    2. Select the IAM Role: PurityServiceRole. Creating this role is a prerequisite. See IAM Role and Permission for details.
    3. In the Stack creation options section, set Termination protection to Enabled.

The IAM role selection is required to deploy Cloud Block Store successfully as well as to upgrade or terminate Cloud Block Store in the future.

cbs78.JPG

  11. Click Next.
  12. Review the selected parameters. Scroll to the bottom of the page and check the acknowledge box.
  13. Click Create stack.

cbs49.JPG

  14. The Cloud Block Store stack takes approximately ten minutes to complete. When complete, the stack should appear with CREATE_COMPLETE status.

cbs50.jpg

 

Do Not Shut Down Cloud Block Store

It is important to note that Cloud Block Store is an enterprise virtual appliance and is expected to always be on. Therefore, do not shut down a Cloud Block Store instance or any of Cloud Block Store's underlying EC2 resources (controllers or virtual drives).

 

 

 

Enabling CloudTrail

AWS CloudTrail is a tool used to monitor and log event history in AWS accounts. Customers should use CloudTrail for tracking account activity, troubleshooting issues, and investigating security breaches in their AWS accounts. CloudTrail can also be integrated with other services to trigger actions such as sending alerts.

By default, CloudTrail stores logs for events in the AWS account for 90 days. Customers who wish to store logs for longer than 90 days should create a trail, and specify an S3 bucket where CloudTrail can store logs.

When creating a trail, Pure recommends selecting the default options for everything, unless you have a specific reason to select a different option. If you have Cloud Block Store instances running in more than one region, be sure to select the option to enable CloudTrail in all regions in the account, not just the current region.

It is important to note that CloudTrail allows for the logging of two types of events: management events and data events. Management events are enabled by default. For data events, CloudTrail allows for the logging of API calls (PUTS/GETS) to Amazon S3. By default, data events for S3 buckets are not enabled.

  • If customers do enable logging of data events for S3, Pure recommends excluding the data events generated by Cloud Block Store to its associated S3 bucket. Customers can do this by excluding the Cloud Block Store S3 bucket from the data event settings of the CloudTrail configuration.

For more details on CloudTrail, please check out https://aws.amazon.com/cloudtrail/.
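As a sketch of the longer-retention option described above, a multi-region trail can be created and started with the AWS CLI; the trail and bucket names are placeholders, and the bucket must already have a CloudTrail bucket policy applied.

# Create a multi-region trail that delivers logs to an existing S3 bucket, then start logging
aws cloudtrail create-trail --name cbs-account-trail --s3-bucket-name my-cloudtrail-log-bucket --is-multi-region-trail
aws cloudtrail start-logging --name cbs-account-trail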

 

 

Managing Cloud Block Store

Viewing Cloud Block Store Network Interfaces

Once a Cloud Block Store instance is deployed, you can view the IP addresses of the Cloud Block Store instance from different locations. In the CloudFormation console where you deployed the stack, the Outputs tab displays the IP addresses for each controller. See the following screenshot as an example:

cbs41.JPG

Additionally, you can view the same IP address information by logging onto the Cloud Block Store instance's GUI using the management IP address.

  1. Click Settings and select the Network tab.

For CLI users, SSH into the management port and run: purenetwork list.

cbs42.JPG

Viewing the Cloud Block Store Instance in the AWS Console

You can view the underlying components of Cloud Block Store from the EC2 console. Identify the Cloud Block Store instance's controllers by the ct0 and ct1 suffix. Identify the virtual drives by the -vd suffix.

CBS18c.JPG
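The same view is available from the AWS CLI. This sketch assumes the Cloud Block Store EC2 instances carry the array name with ct0/ct1 and -vd suffixes in their Name tags, as shown in the screenshot above.

# List the Cloud Block Store controllers and virtual drives by Name tag
aws ec2 describe-instances --filters "Name=tag:Name,Values=*ct0,*ct1,*vd*" --query "Reservations[].Instances[].[InstanceId,Tags[?Key=='Name']|[0].Value,State.Name]" --output table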

 

Logging onto the GUI of a Cloud Block Store

Use the management port IP address to log onto your Cloud Block Store instances.

  1. Log onto a separate Windows or Linux host with network access to the management Ethernet port of the Cloud Block Store instance.

Make sure your subnet route tables, firewalls, and Security groups allow your host network access to the Cloud Block Store instance's management interface.

  2. Open a browser and enter the management IP address. See Viewing Cloud Block Store Network Interfaces section for the location of your management IP addresses. You can also log on and manage a Cloud Block Store instance using CLI or REST APIs.
  3. Enter the username and password:
  • Default username: pureuser
  • Default password: pureuser
  4. The Cloud Block Store's Dashboard displays high-level storage usage and performance metrics. From the Dashboard, you can navigate to the various tabs on the left to view the detailed storage usage, performance analysis, health, and other settings.

CBS20.JPG

 

 

Creating volumes 

The following example provides steps for the GUI. For CLI users, SSH into the management port and run: purevol create --size <size> <vol name>

  1. Using the Cloud Block Store's GUI: 
    1. In the left navigation pane, click Storage.
    2. Click Volumes.
    3. Click the + icon to add a new volume.

CBS22.JPG

  2. Enter the name and desired size of the volume and click Create.
  3. You can see the new volume in the list of volumes.

CBS23.JPG

 

Creating hosts
You must create a host with a corresponding IQN before you can attach a volume to it.

The following example provides steps using the GUI. For CLI users, SSH into the management port and run: purehost create --iqnlist <Host IQN number> <host name>

Using the Cloud Block Store's GUI:

  1. On the left navigation pane, click Storage.
  2. Click Hosts.
  3. Click the + icon to add a new host.

CBS24.JPG

  4. Enter the desired name for the host and click Create.
  5. Once created, the host is displayed in the list of available hosts.

CBS26.JPG

  6. In the Host Ports box:
    1. Click the expand icon.
    2. Select Configure IQNs.

CBS27.JPG

  7. Enter the IQN name of the iSCSI host.

CBS28.JPG

Locate the host's IQN number by running the following commands on the respective OS. Ensure the iSCSI service has started on the host.

Windows PowerShell (Run as Administrator):

(Get-InitiatorPort).NodeAddress

Linux:

cat /etc/iscsi/initiatorname.iscsi

Solaris:

iscsiadm list initiator-node

 

 

Connecting iSCSI host to Cloud Block Store volumes
Transit Gateway

A Transit Gateway may be utilized to establish connectivity between Cloud Block Store and EC2 hosts when they reside in different VPCs and within the same AZ. The use of a Transit Gateway to establish connectivity between a Cloud Block Store instance's VPC and EC2 host's VPC when they reside in different AZs is not supported. 

See AWS Documentation for Transit Gateway implementation detail and limits.
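For orientation, the pieces involved look roughly like the following AWS CLI sketch; all IDs are placeholders, both attachments must use subnets in the same Availability Zone, and each VPC's route tables still need routes pointing at the Transit Gateway for the other VPC's CIDR.

# Create a Transit Gateway and attach the Cloud Block Store VPC and the EC2 host VPC to it
aws ec2 create-transit-gateway --description "CBS to host VPC connectivity"
aws ec2 create-transit-gateway-vpc-attachment --transit-gateway-id tgw-0123456789abcdef0 --vpc-id vpc-cbs0123456789abc --subnet-ids subnet-cbs0123456789a
aws ec2 create-transit-gateway-vpc-attachment --transit-gateway-id tgw-0123456789abcdef0 --vpc-id vpc-host0123456789ab --subnet-ids subnet-host0123456789

# Route traffic destined for the host VPC CIDR (placeholder 10.1.0.0/16) through the Transit Gateway
aws ec2 create-route --route-table-id rtb-cbs0123456789abc --destination-cidr-block 10.1.0.0/16 --transit-gateway-id tgw-0123456789abcdef0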

 

The following example provides steps for the GUI. For CLI users, SSH into the management port and run: purevol connect --host <host name> <vol name>

  Using the Cloud Block Store's GUI:

  1. On the left navigation pane, click Storage.
  2. Click Volumes.

cbs35.JPG

  3. Select the desired volume.

cbs36.JPG

  4. In the Connected Host box, click the expand icon and select Connect.

cbs38.JPG

  5. Select the desired host(s) and click Connect.

Ensure appropriate clustering software is installed on your hosts if you wish to connect the same volume to multiple hosts.

cbs39.JPG

Mounting a volume on iSCSI host

The following steps provide an example of how to connect a Windows and an Amazon Linux 2 EC2 compute host to a Pure Storage Cloud Block Store instance.

Prerequisites:
  • The EC2 compute host must have iSCSI initiator client software. Most modern operating systems already have it pre-installed.

  • The EC2 compute host must have network access to the iSCSI subnet of the Cloud Block Store instance. The EC2 compute host's network ports and the Cloud Block Store instance's iSCSI ports must have route table entries allowing them to communicate. Ensure that Security Groups, Network ACLs, and firewalls are not preventing connectivity.

Setting up multipathing with Microsoft MPIO

To protect against a single point of failure, this procedure allows multiple paths to the Cloud Block Store instance. Perform this procedure only on a new Windows AMI host.

  1. Log onto the Windows EC2 host.
  2. To check if Microsoft MPIO is installed on the system, open an elevated PowerShell terminal (Run as administrator) and run:
PS C:\> Get-WindowsFeature -Name 'Multipath-IO'
Display Name                                            Name         Install State
------------                                            ----         -------------
[ ] Multipath I/O                                       Multipath-IO Available
  3. If it shows the install state as 'Available', follow the next steps to install Microsoft MPIO. If it shows as 'Installed', move on to step 7.

  4. In the same PowerShell terminal, run:

PS C:\> Add-WindowsFeature -Name 'Multipath-IO'
Success Restart Needed Exit Code      Feature Result
------- -------------- ---------      --------------
True    Yes       SuccessRest...     {Multipath I/O}
WARNING: You must restart this server to finish the installation process.
  5. Reboot the Windows EC2 host.
  6. When the Windows EC2 host boots back up, verify that Microsoft MPIO is installed.

PS C:\> Get-WindowsFeature -Name 'Multipath-IO'
Display Name                                            Name         Install State
------------                                            ----         -------------
[X] Multipath I/O                                       Multipath-IO Installed
  7. In the same PowerShell terminal, run the following command to start the iSCSI service.

PS C:\> Start-Service -Name msiscsi
  8. To set the iSCSI service to start on boot, run:
PS C:\> Set-Service -Name msiscsi -StartupType Automatic
  9. Add Pure FlashArray as an MPIO vendor. In the same PowerShell terminal, run:

PS C:\> New-MSDSMSupportedHw -VendorId PURE -ProductId FlashArray
VendorId ProductId
--------       ---------
PURE        FlashArray
  10. Enable iSCSI support by Microsoft MPIO. In the same PowerShell terminal, run:

PS C:\> Enable-MSDSMAutomaticClaim -BusType iSCSI
VendorId ProductId
-------- ---------
MSFT2005 iSCSIBusType_0x9
False
  11. Set default MPIO path policy to Lowest Queue Depth.

PS C:\> Set-MSDSMGlobalDefaultLoadBalancePolicy -Policy LQD
  12. Set MPIO Timer Values. In the same PowerShell terminal, run:

PS C:\> Set-MPIOSetting -NewPathRecoveryInterval 20 -CustomPathRecovery Enabled -NewPDORemovePeriod 30 -NewDiskTimeout 60 -NewPathVerificationState Enabled
  13. If prompted by the above commands, reboot the Windows AMI host.

MPIO setup is now complete. 

Mounting a volume on Windows iSCSI host

Follow the next steps (1-7) to establish iSCSI connections. Once you make a connection, subsequent volumes connected from Cloud Block Store to this host appear in Disk Management.

To complete the following steps, you need the IP addresses of both Cloud Block Store controller iSCSI interfaces. See Viewing Cloud Block Store Network Interfaces to obtain the iSCSI IP addresses. Keep the iSCSI IP addresses handy.

  1. (Run as administrator) On the Windows host, open an elevated PowerShell terminal and run the following command to gather the IP address of your Windows instance. The following example shows  10.0.1.118
PS C:\> get-netadapter |Get-NetIPAddress -AddressFamily ipv4
IPAddress         : 10.0.1.118
InterfaceIndex    : 5
InterfaceAlias    : Ethernet
AddressFamily     : IPv4
Type              : Unicast
PrefixLength      : 24
PrefixOrigin      : Dhcp
SuffixOrigin      : Dhcp
AddressState      : Preferred
ValidLifetime     : 00:57:00
PreferredLifetime : 00:57:00
SkipAsSource      : False
PolicyStore       : ActiveStore

2. In the same PowerShell window, run the following command to create a new Target Portal connection between your Windows host and your Cloud Block Store instance.

PS C:\> New-IscsiTargetPortal -TargetPortalAddress <CBS iSCSI IP address>

        where

<CBS iSCSI IP address> is the IP address of the iSCSI port on Cloud Block Store controller 0 or controller 1. You only need to enter one.

  3. In the same PowerShell window, run the following command to create an iSCSI session to Cloud Block Store controller 0.
PS C:\> Get-IscsiTarget | Connect-IscsiTarget -InitiatorPortalAddress <Windows IP address> -IsMultipathEnabled $true -IsPersistent $true -TargetPortalAddress <CBS iSCSI interface IP address CT0>

where

<Windows IP address> is the Windows host IP address obtained in step one.

<CBS iSCSI IP address CT0> is the iSCSI IP address of Cloud Block Store controller 0.

See the following screenshot as an example. 

cbs70.JPG

  4. (Optional) For additional performance throughput, you may add additional iSCSI sessions. Repeat the same command for each additional iSCSI session you would like to add to controller 0. You can add up to 32 iSCSI sessions to each controller.

See Appendix C for detailed information.

  5. In the same PowerShell window, run the same command to create iSCSI sessions to Cloud Block Store controller 1.
PS C:\> Get-IscsiTarget | Connect-IscsiTarget -InitiatorPortalAddress <Windows IP address> -IsMultipathEnabled $true -IsPersistent $true -TargetPortalAddress <CBS iSCSI interface IP address CT1>

where

<Windows IP address> is the Windows host IP address obtained in step one.

<CBS iSCSI IP address CT1> is the iSCSI IP address of Cloud Block Store controller 1

  6. (Optional) For additional performance throughput, you may add additional iSCSI sessions. Repeat the same command for each additional iSCSI session you would like to add to controller 1. You can add up to 32 iSCSI sessions to each controller.

See Appendix C for detailed information.

  7. To confirm the total number of sessions, run:
PS C:\> Get-IscsiSession
  8. Go to Disk Management and perform a rescan to confirm the new volume.

cbs79.JPG

  9. Bring the volume online and format with the desired file system. Any subsequent volume you create and connect to this host in the Cloud Block Store UI (CLI/GUI/REST) displays automatically in Disk Management after a rescan.

You have successfully connected and mounted a Cloud Block Store volume to your host. 

AWS Linux 2 AMI host

This example walks you through connecting Cloud Block Store volumes to an AWS Linux 2 AMI. Some steps might be repeated from steps seen earlier in this guide.

The steps include:

  • Configuring the Linux host for iSCSI and MPIO with Cloud Block Store
  • Host and volume creation on Cloud Block Store
  • Connecting and mounting Cloud Block Store volumes to Linux host
Configuring iSCSI on Linux Initiator with Cloud Block Store
On Linux host:

1. Log in to Amazon Linux 2 ec2 instance.

2. Install iscsi-initiator-utils onto Linux host.

sudo yum -y install iscsi-initiator-utils

3. Install lsscsi.

sudo yum -y install lsscsi

4. Install the device-mapper-multipath package.

sudo yum -y install device-mapper-multipath

5. Start iSCSI daemon service.

sudo service iscsid start

6. Collect Linux initiator IQN.

cat /etc/iscsi/initiatorname.iscsi

Example: 

[ec2-user@ip-10-0-1-235 ~]$ cat /etc/iscsi/initiatorname.iscsi
InitiatorName=iqn.1994-05.com.redhat:361dfc3de387

7. Remove 51-ec2-hvm-devices.rules file. 

sudo rm /etc/udev/rules.d/51-ec2-hvm-devices.rules

This step is only required with Amazon Linux 2 AMI.

8. Create a new udev rules file called 99-pure-storage.rules for Pure Storage and copy the contents into the file as shown in the following example.

sudo vim /etc/udev/rules.d/99-pure-storage.rules

Example: 

[ec2-user@ip-10-0-1-235 ~]$ sudo vim /etc/udev/rules.d/99-pure-storage.rules
[ec2-user@ip-10-0-1-235 ~]$ cat /etc/udev/rules.d/99-pure-storage.rules
# Recommended settings for Pure Storage FlashArray

# Use noop scheduler for high-performance solid-state storage
ACTION=="add|change", KERNEL=="sd*[!0-9]", SUBSYSTEM=="block", ENV{ID_VENDOR}=="PURE", ATTR{queue/scheduler}="noop"

# Reduce CPU overhead due to entropy collection
ACTION=="add|change", KERNEL=="sd*[!0-9]", SUBSYSTEM=="block", ENV{ID_VENDOR}=="PURE", ATTR{queue/add_random}="0"

# Spread CPU load by redirecting completions to originating CPU
ACTION=="add|change", KERNEL=="sd*[!0-9]", SUBSYSTEM=="block", ENV{ID_VENDOR}=="PURE", ATTR{queue/rq_affinity}="2"

# Set the HBA timeout to 60 seconds
ACTION=="add", SUBSYSTEMS=="scsi", ATTRS{model}=="FlashArray ", RUN+="/bin/sh -c 'echo 60 > /sys/$DEVPATH/device/timeout'"

9. Reboot the Linux host.

sudo reboot

On Cloud Block Store instance:

10. Using the CLI (ssh), log into the Cloud Block Store instance using the management IP. See Viewing Cloud Block Store Network Interfaces section for the location of your management IP addresses.

  • Default username: pureuser
  • Default password: pureuser
ubuntu@ip-10-0-0-107:~$ ssh pureuser@10.0.1.61
pureuser@10.0.1.61's password:

Mon Sep 09 11:40:25 2019
Welcome pureuser. This is Purity Version 5.3.0.beta10 on FlashArray MPIOConfig
http://www.purestorage.com/
pureuser@MPIOConfig>

11. Create a host on Cloud Block Store.

purehost create <Linux hostname>

where

<Linux hostname> is the desired hostname.

This example shows a host created with name Linux2AMI

pureuser@MPIOConfig> purehost create Linux2AMI
Name       WWN  IQN  NQN
Linux2AMI  -    -    -

12. Configure host with IQN number.

purehost setattr --addiqnlist <IQN number> <Linux hostname>

where

<IQN number> is the initiator IQN number gathered in step 6.

<Linux hostname> is the hostname created in step 11.

Example:

pureuser@MPIOConfig> purehost setattr --addiqnlist iqn.1994-05.com.redhat:361dfc3de387 Linux2AMI
Name       WWN  IQN                                  NQN  Host Group
Linux2AMI  -    iqn.1994-05.com.redhat:361dfc3de387  -    -

13. Create one or more volumes on Cloud Block Store.

purevol create <volume name> --size <size>

where

<volume name> is the desired volume name.

<size> is the desired volume size (GB or TB suffix).

This example shows the creation of 2 TB volumes:

pureuser@MPIOConfig> purevol create vol1 --size 2TB
Name  Size  Source  Created                  Serial
vol1  2T    -       2019-09-09 11:41:55 PDT  2B60622E2B014A2200011010
pureuser@MPIOConfig> purevol create vol2 --size 2TB
Name  Size  Source  Created                  Serial
vol2  2T    -       2019-09-09 11:42:00 PDT  2B60622E2B014A2200011011

14. Connect host to volumes.

purevol connect <volume name> --host <host name>

where

<volume name> is the name of the volume.

<host name> is the name of the host. 

Example: 

pureuser@MPIOConfig> purevol connect vol1 --host Linux2AMI
Name  Host Group  Host       LUN
vol1  -           Linux2AMI  1
pureuser@MPIOConfig> purevol connect vol2 --host Linux2AMI
Name  Host Group  Host       LUN
vol2  -           Linux2AMI  2

15. Collect the IP addresses of each controller and the IQN number for Cloud Block Store. The IQN is identical for both iSCSI interfaces.

pureport list

Example

pureuser@MPIOConfig> pureport list
Name      WWN  Portal           IQN                                                      NQN  Failover
CT0.ETH2  -    10.0.1.202:3260  iqn.2010-06.com.purestorage:flasharray.666667d86130ec06  -    -
CT1.ETH2  -    10.0.1.110:3260  iqn.2010-06.com.purestorage:flasharray.666667d86130ec06  -    - 

 

iSCSI login and MPIO Configuration
On Linux host:

16. Create a new iSCSI interface on the initiator. Each iSCSI interface provides a single iSCSI session.

Optional: For higher performance throughput, you can add additional iSCSI interfaces if you wish to increase iSCSI sessions (up to 32 iSCSI interfaces, which provide 32 iSCSI sessions).

See Appendix C for detailed information.

In this example, we will create 4 iSCSI interfaces, numbered 0-3.

sudo iscsiadm -m iface -I iscsi0 -o new

sudo iscsiadm -m iface -I iscsi1 -o new

sudo iscsiadm -m iface -I iscsi2 -o new

sudo iscsiadm -m iface -I iscsi3 -o new

Example:

[ec2-user@ip-10-0-1-235 ~]$ sudo iscsiadm -m iface -I iscsi0 -o new
New interface iscsi0 added
[ec2-user@ip-10-0-1-235 ~]$ sudo iscsiadm -m iface -I iscsi1 -o new
New interface iscsi1 added
[ec2-user@ip-10-0-1-235 ~]$ sudo iscsiadm -m iface -I iscsi2 -o new
New interface iscsi2 added
[ec2-user@ip-10-0-1-235 ~]$ sudo iscsiadm -m iface -I iscsi3 -o new
New interface iscsi3 added

17. Discover target iSCSI portals using iSCSI interface IP.

sudo iscsiadm -m discovery -t st -p <CBS iSCSI IP>:3260

where

<CBS iSCSI IP> is the iSCSI IP address of Cloud Block Store controller 0 or controller 1, collected in step 15. You only need to enter one iSCSI IP address.

Example

[ec2-user@ip-10-0-1-235 ~]$ sudo iscsiadm -m discovery -t st -p 10.0.1.202:3260
10.0.1.202:3260,1 iqn.2010-06.com.purestorage:flasharray.666667d86130ec06
10.0.1.110:3260,1 iqn.2010-06.com.purestorage:flasharray.666667d86130ec06
10.0.1.202:3260,1 iqn.2010-06.com.purestorage:flasharray.666667d86130ec06
10.0.1.110:3260,1 iqn.2010-06.com.purestorage:flasharray.666667d86130ec06
10.0.1.202:3260,1 iqn.2010-06.com.purestorage:flasharray.666667d86130ec06
10.0.1.110:3260,1 iqn.2010-06.com.purestorage:flasharray.666667d86130ec06
10.0.1.202:3260,1 iqn.2010-06.com.purestorage:flasharray.666667d86130ec06
10.0.1.110:3260,1 iqn.2010-06.com.purestorage:flasharray.666667d86130ec06

18. Connect the Linux host to the Cloud Block Store instance.

sudo iscsiadm -m node -p <CBS iSCSI IP CT0> --login

sudo iscsiadm -m node -p <CBS iSCSI IP CT1> --login

where

<CBS iSCSI IP CT0>  is the Cloud Block Store IP address of controller 0 collected from step 15.

<CBS iSCSI IP CT1>  is the Cloud Block Store IP address of controller 1 collected from step 15.

Example:

[ec2-user@ip-10-0-1-235 ~]$ sudo iscsiadm -m node -p 10.0.1.202 --login
Logging in to [iface: iscsi0, target: iqn.2010-06.com.purestorage:flasharray.666667d86130ec06, portal: 10.0.1.202:3260] (multiple)
Logging in to [iface: iscsi1, target: iqn.2010-06.com.purestorage:flasharray.666667d86130ec06, portal: 10.0.1.202:3260] (multiple)
Logging in to [iface: iscsi2, target: iqn.2010-06.com.purestorage:flasharray.666667d86130ec06, portal: 10.0.1.202:3260] (multiple)
Logging in to [iface: iscsi3, target: iqn.2010-06.com.purestorage:flasharray.666667d86130ec06, portal: 10.0.1.202:3260] (multiple)
Login to [iface: iscsi0, target: iqn.2010-06.com.purestorage:flasharray.666667d86130ec06, portal: 10.0.1.202:3260] successful.
Login to [iface: iscsi1, target: iqn.2010-06.com.purestorage:flasharray.666667d86130ec06, portal: 10.0.1.202:3260] successful.
Login to [iface: iscsi2, target: iqn.2010-06.com.purestorage:flasharray.666667d86130ec06, portal: 10.0.1.202:3260] successful.
Login to [iface: iscsi3, target: iqn.2010-06.com.purestorage:flasharray.666667d86130ec06, portal: 10.0.1.202:3260] successful.

[ec2-user@ip-10-0-1-235 ~]$ sudo iscsiadm -m node -p 10.0.1.110 --login
Logging in to [iface: iscsi0, target: iqn.2010-06.com.purestorage:flasharray.666667d86130ec06, portal: 10.0.1.110:3260] (multiple)
Logging in to [iface: iscsi1, target: iqn.2010-06.com.purestorage:flasharray.666667d86130ec06, portal: 10.0.1.110:3260] (multiple)
Logging in to [iface: iscsi2, target: iqn.2010-06.com.purestorage:flasharray.666667d86130ec06, portal: 10.0.1.110:3260] (multiple)
Logging in to [iface: iscsi3, target: iqn.2010-06.com.purestorage:flasharray.666667d86130ec06, portal: 10.0.1.110:3260] (multiple)
Login to [iface: iscsi0, target: iqn.2010-06.com.purestorage:flasharray.666667d86130ec06, portal: 10.0.1.110:3260] successful.
Login to [iface: iscsi1, target: iqn.2010-06.com.purestorage:flasharray.666667d86130ec06, portal: 10.0.1.110:3260] successful.
Login to [iface: iscsi2, target: iqn.2010-06.com.purestorage:flasharray.666667d86130ec06, portal: 10.0.1.110:3260] successful.
Login to [iface: iscsi3, target: iqn.2010-06.com.purestorage:flasharray.666667d86130ec06, portal: 10.0.1.110:3260] successful.

19. Add automatic iSCSI login on boot.

sudo iscsiadm -m node -L automatic

20. Confirm that each volume has eight entries, each representing a virtual device path.

lsscsi -d

Example: There are two volumes, therefore there are 16 total entries. 

[ec2-user@ip-10-0-1-235 ~]$ lsscsi -d
[2:0:0:1]    disk    PURE     FlashArray       8888  /dev/sda [8:0]
[2:0:0:2]    disk    PURE     FlashArray       8888  /dev/sdb [8:16]
[3:0:0:1]    disk    PURE     FlashArray       8888  /dev/sde [8:64]
[3:0:0:2]    disk    PURE     FlashArray       8888  /dev/sdf [8:80]
[4:0:0:1]    disk    PURE     FlashArray       8888  /dev/sdj [8:144]
[4:0:0:2]    disk    PURE     FlashArray       8888  /dev/sdl [8:176]
[5:0:0:1]    disk    PURE     FlashArray       8888  /dev/sdi [8:128]
[5:0:0:2]    disk    PURE     FlashArray       8888  /dev/sdk [8:160]
[6:0:0:1]    disk    PURE     FlashArray       8888  /dev/sdc [8:32]
[6:0:0:2]    disk    PURE     FlashArray       8888  /dev/sdd [8:48]
[7:0:0:1]    disk    PURE     FlashArray       8888  /dev/sdg [8:96]
[7:0:0:2]    disk    PURE     FlashArray       8888  /dev/sdh [8:112]
[8:0:0:1]    disk    PURE     FlashArray       8888  /dev/sdn [8:208]
[8:0:0:2]    disk    PURE     FlashArray       8888  /dev/sdp [8:240]
[9:0:0:1]    disk    PURE     FlashArray       8888  /dev/sdm [8:192]
[9:0:0:2]    disk    PURE     FlashArray       8888  /dev/sdo [8:224]

21. Enable default multipath configuration file and start the multipath daemon.

sudo mpathconf --enable --with_multipathd y

22. Replace the content of the multipath.conf file with the following configuration for Pure Storage.

sudo vim /etc/multipath.conf

  • polling_interval 10
  • vendor "PURE"
  • path_selector "queue-length 0"
  • path_grouping_policy group_by_prio
  • path_checker tur
  • fast_io_fail_tmo 10
  • dev_loss_tmo 60
  • no_path_retry 0
  • hardware_handler "1 alua"
  • prio alua
  • failback immediate

See RHEL documentation for /etc/multipath.conf attribute descriptions.

[ec2-user@ip-10-0-1-235 ~]$ sudo vim /etc/multipath.conf
[ec2-user@ip-10-0-1-235 ~]$ sudo cat /etc/multipath.conf
defaults {
       polling_interval      10
}
devices {
       device {
               vendor                "PURE"
               path_selector         "queue-length 0"
               path_grouping_policy  group_by_prio
               path_checker          tur
               fast_io_fail_tmo      10
               dev_loss_tmo          60
               no_path_retry         0
               hardware_handler      "1 alua"
               prio                  alua
               failback              immediate
       }
}

23. Restart multipathd service for multipath.conf changes to take effect.

sudo service multipathd restart

24. Run the multipath command below to confirm each Cloud Block Store volume has multiple paths. A multipathed Cloud Block Store volume should be represented by a device-mapped ID, as seen in green in the example below. Verify the paths are divided into two priority groups, as seen in orange in the following example.

sudo multipath -ll

Example: Two Cloud Block Store volumes are represented by two device-mapped IDs in green. 

[ec2-user@ip-10-0-1-235 ~]$ sudo multipath -ll
3624a93702b60622e2b014a2200011011 dm-1 PURE    ,FlashArray
size=2.0T features='0' hwhandler='1 alua' wp=rw
|-+- policy='queue-length 0' prio=50 status=active
| |- 2:0:0:2 sdb  8:16  active ready running
| |- 3:0:0:2 sdf  8:80  active ready running
| |- 4:0:0:2 sdl  8:176 active ready running
| `- 5:0:0:2 sdk  8:160 active ready running
`-+- policy='queue-length 0' prio=10 status=enabled
  |- 6:0:0:2 sdd  8:48  active ready running
  |- 7:0:0:2 sdh  8:112 active ready running
  |- 8:0:0:2 sdp  8:240 active ready running
  `- 9:0:0:2 sdo  8:224 active ready running
3624a93702b60622e2b014a2200011010 dm-0 PURE    ,FlashArray
size=2.0T features='0' hwhandler='1 alua' wp=rw
|-+- policy='queue-length 0' prio=50 status=active
| |- 2:0:0:1 sda  8:0   active ready running
| |- 3:0:0:1 sde  8:64  active ready running
| |- 4:0:0:1 sdj  8:144 active ready running
| `- 5:0:0:1 sdi  8:128 active ready running
`-+- policy='queue-length 0' prio=10 status=enabled
  |- 6:0:0:1 sdc  8:32  active ready running
  |- 7:0:0:1 sdg  8:96  active ready running
  |- 8:0:0:1 sdn  8:208 active ready running
  `- 9:0:0:1 sdm  8:192 active ready running

25. Create mount points on the initiator.

sudo mkdir /mnt/store0
sudo mkdir /mnt/store1

26. Create the desired filesystem on each Cloud Block Store volume using the device-mapped IDs, and then mount each volume to the mount point.

sudo mkfs.ext4 /dev/mapper/<device-mapped ID>

where

<device-mapped ID> is the device-mapped ID from step 24

The following example uses filesystem ext4 for each device.

dm-0

[ec2-user@ip-10-0-1-235 ~]$ sudo mkfs.ext4 /dev/mapper/3624a93702b60622e2b014a2200011010
mke2fs 1.42.9 (28-Dec-2013)
Discarding device blocks: done
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=0 blocks, Stripe width=1024 blocks
134217728 inodes, 536870912 blocks
26843545 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=2684354560
16384 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks:
    32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
    4096000, 7962624, 11239424, 20480000, 23887872, 71663616, 78675968,
    102400000, 214990848, 512000000

Allocating group tables: done
Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done

dm-1 

[ec2-user@ip-10-0-1-235 ~]$ sudo mkfs.ext4 /dev/mapper/3624a93702b60622e2b014a2200011011
mke2fs 1.42.9 (28-Dec-2013)
Discarding device blocks: done
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=0 blocks, Stripe width=1024 blocks
134217728 inodes, 536870912 blocks
26843545 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=2684354560
16384 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks:
    32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
    4096000, 7962624, 11239424, 20480000, 23887872, 71663616, 78675968,
    102400000, 214990848, 512000000

Allocating group tables: done
Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done

27. Mount Cloud Block Store volumes onto mount point.

sudo mount /dev/mapper/<device-mapped ID> <mount point>

where

<device-mapped ID> is the device-mapped ID collected from step 24.

<mount point> is the mount point created in step 25.

[ec2-user@ip-10-0-1-235 ~]$ sudo mount /dev/mapper/3624a93702b60622e2b014a2200011010 /mnt/store0
[ec2-user@ip-10-0-1-235 ~]$ sudo mount /dev/mapper/3624a93702b60622e2b014a2200011011 /mnt/store1

28. The mount points now report 2TB, and block storage can be consumed.

[ec2-user@ip-10-0-1-235 ~]$ df -h
Filesystem                                     Size  Used Avail Use% Mounted on
/dev/mapper/3624a93702b60622e2b014a2200011010  2.0T   81M  1.9T   1% /mnt/store0
/dev/mapper/3624a93702b60622e2b014a2200011011  2.0T   81M  1.9T   1% /mnt/store1
On Cloud Block Store: 

29. I/O should only flow to the primary controller instance. Run I/O on your Linux host and confirm on your Cloud Block Store instance with the following command:

purehost monitor --balance --interval 3

Example:

pureuser@MPIOConfig> purehost monitor --balance --interval 3
Name       Time                     Initiator WWN  Initiator IQN                        Initiator NQN  Target       Target WWN  Failover  I/O Count  I/O Relative to Max
Linux2AMI  2019-08-26 10:31:32 PDT  -              iqn.1994-05.com.redhat:b9ddc64322ef  -              (primary)    -           -         1626       100%
                                                   iqn.1994-05.com.redhat:b9ddc64322ef                 (secondary)              -         0          0%
                                                  

 

When adding subsequent Cloud Block Store volumes to the Linux host, a rescan will be required to see the additional storage on your Linux host.

sudo iscsiadm -m session --rescan

Example:

[ec2-user@ip-10-0-1-235 ~]$ sudo iscsiadm -m session --rescan
Rescanning session [sid: 2, target: iqn.2010-06.com.purestorage:flasharray.666667d86130ec06, portal: 10.0.1.202:3260]
Rescanning session [sid: 3, target: iqn.2010-06.com.purestorage:flasharray.666667d86130ec06, portal: 10.0.1.202:3260]
Rescanning session [sid: 4, target: iqn.2010-06.com.purestorage:flasharray.666667d86130ec06, portal: 10.0.1.202:3260]
Rescanning session [sid: 1, target: iqn.2010-06.com.purestorage:flasharray.666667d86130ec06, portal: 10.0.1.202:3260]
Rescanning session [sid: 5, target: iqn.2010-06.com.purestorage:flasharray.666667d86130ec06, portal: 10.0.1.110:3260]
Rescanning session [sid: 6, target: iqn.2010-06.com.purestorage:flasharray.666667d86130ec06, portal: 10.0.1.110:3260]
Rescanning session [sid: 8, target: iqn.2010-06.com.purestorage:flasharray.666667d86130ec06, portal: 10.0.1.110:3260]
Rescanning session [sid: 7, target: iqn.2010-06.com.purestorage:flasharray.666667d86130ec06, portal: 10.0.1.110:3260]

Removing Cloud Block Store

Version 5.3.0.aws0, 5.3.0.aws1, 5.3.0.aws2

For versions 5.3.0.aws0, 5.3.0.aws1, and 5.3.0.aws2, Cloud Block Store can only be removed (terminated) by Pure Support to ensure all the resources in the stack are cleanly removed. Please contact Pure Storage Support for Cloud Block Store instance removals.

Version 5.3.3.aws0+

For version 5.3.3.aws0 and above, customers can perform CBS deletion without Pure Support involvement. 

Do not manually delete the Cloud Block Store stack in CloudFormation. To properly terminate and remove a Cloud Block Store instance, run the two CLI commands provided below. The proper steps will ensure that the Cloud Block Store instance removal is reflected accurately and accounted for in the Pure-as-a-Service subscription on Pure1.

Prerequisites:

  • All Cloud Block Store volumes and snapshots must be deleted and eradicated prior to termination of a Cloud Block Store instance. This includes Protection Group snapshots.
  • All connected arrays and targets must be disconnected from any type of Purity replication.
  • The Cloud Block Store instance must be able to phone home. This ensures the Cloud Block Store instance is properly de-registered in the Pure-as-a-Service subscription.

Once the prerequisite array state has been achieved, the following steps will terminate and remove the Cloud Block Store instance.

  1. Using SSH, log into the Cloud Block Store instance management port. 

Note: See the Viewing Cloud Block Store Network Interfaces section for the management port IP address.

  2. Run the following command:

purearray factory-reset-token create

Example

purearray factory-reset-token create
Name               Token
MyCloudBlockStore  4109498 
  3. A token will be provided in the output. Make a note of the token value.
  4. Run the following command with the token from the previous command.

purearray erase --factory-reset-token <token> --eradicate-all-data

This allows the Cloud Block Store instance to communicate with Pure1 prior to deleting itself.

Example

purearray erase --factory-reset-token 4109498 --eradicate-all-data
Name
MyCloudBlockStore

 

  5. Important: You must confirm the deletion. Wait about 20 minutes and confirm that the Cloud Block Store instance has been fully deleted in your CloudFormation console. If the stack has not fully deleted, please contact Pure Storage Support for assistance.

cbs94.JPG

 

Customer Support

Customers can contact Pure Storage Support for any issue or questions relating to Cloud Block Store.

Customer support also performs non-disruptive upgrades (NDU) for Cloud Block Store instances, including Purity code upgrades as well as capacity upgrades.

Appendix A

IAM role and permissions

This section provides steps for creating the IAM role and permissions required to deploy and upgrade Cloud Block Store. First, create the permissions policy. Then create the IAM role and attach the permissions policy to the role.

  1. Go to the main IAM console.
  2. Create a new policy by selecting Policies and clicking Create policy.

CBS65.JPG

  3. Click the JSON tab. Replace the default content of the JSON editor with the following permissions. You can copy and paste the following content directly into the JSON editor.
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": [
                "application-autoscaling:DeleteScalingPolicy",
                "application-autoscaling:DeregisterScalableTarget",
                "application-autoscaling:DescribeScalableTargets",
                "application-autoscaling:DescribeScalingPolicies",
                "application-autoscaling:DescribeScheduledActions",
                "application-autoscaling:PutScalingPolicy",
                "application-autoscaling:RegisterScalableTarget",
                "autoscaling:CreateAutoScalingGroup",
                "autoscaling:CreateLaunchConfiguration",
                "autoscaling:CreateOrUpdateTags",
                "autoscaling:DeleteAutoScalingGroup",
                "autoscaling:DeleteLaunchConfiguration",
                "autoscaling:DeleteTags",
                "autoscaling:DescribeAutoScalingGroups",
                "autoscaling:DescribeAutoScalingInstances",
                "autoscaling:DescribeLaunchConfigurations",
                "autoscaling:DescribeScalingActivities",
                "autoscaling:DescribeTags",
                "autoscaling:SetDesiredCapacity",
                "autoscaling:TerminateInstanceInAutoScalingGroup",
                "autoscaling:UpdateAutoScalingGroup",
                "dynamodb:CreateTable",
                "dynamodb:DeleteTable",
                "dynamodb:DescribeTable",
                "dynamodb:ListTables",
                "dynamodb:ListTagsOfResource",
                "dynamodb:TagResource",
                "dynamodb:UntagResource",
                "dynamodb:UpdateTable",
                "ec2:AttachNetworkInterface",
                "ec2:AttachVolume",
                "ec2:AuthorizeSecurityGroupEgress",
                "ec2:AuthorizeSecurityGroupIngress",
                "ec2:CreateNetworkInterface",
                "ec2:CreatePlacementGroup",
                "ec2:CreateSecurityGroup",
                "ec2:CreateTags",
                "ec2:CreateVolume",
                "ec2:DeleteNetworkInterface",
                "ec2:DeletePlacementGroup",
                "ec2:DeleteSecurityGroup",
                "ec2:DeleteTags",
                "ec2:DeleteVolume",
                "ec2:DescribeAvailabilityZones",
                "ec2:DescribeImages",
                "ec2:DescribeInstances",
                "ec2:DescribeKeyPairs",
                "ec2:DescribeNetworkInterfaces",
                "ec2:DescribePlacementGroups",
                "ec2:DescribeSecurityGroups",
                "ec2:DescribeSubnets",
                "ec2:DescribeTags",
                "ec2:DescribeVolumes",
                "ec2:DescribeVolumesModifications",
                "ec2:DescribeVpcs",
                "ec2:DetachNetworkInterface",
                "ec2:ModifyNetworkInterfaceAttribute",
                "ec2:ModifyVolumeAttribute",
                "ec2:ModifyInstanceAttribute",
                "ec2:RevokeSecurityGroupEgress",
                "ec2:RevokeSecurityGroupIngress",
                "ec2:RunInstances",
                "ec2:StartInstances",
                "ec2:StopInstances",
                "ec2:TerminateInstances",
                "iam:AddRoleToInstanceProfile",
                "iam:AttachRolePolicy",
                "iam:CreateInstanceProfile",
                "iam:CreatePolicy",
                "iam:CreateRole",
                "iam:CreateServiceLinkedRole",
                "iam:DeleteInstanceProfile",
                "iam:DeletePolicy",
                "iam:DeleteRole",
                "iam:DeleteRolePolicy",
                "iam:DetachRolePolicy",
                "iam:GetInstanceProfile",
                "iam:GetPolicy",
                "iam:GetRole",
                "iam:GetRolePolicy",
                "iam:ListRoleTags",
                "iam:PassRole",
                "iam:PutRolePolicy",
                "iam:RemoveRoleFromInstanceProfile",
                "iam:TagRole",
                "iam:UntagRole",
                "kms:CreateAlias",
                "kms:CreateKey",
                "kms:DeleteAlias",
                "kms:DescribeKey",
                "kms:DisableKey",
                "kms:EnableKey",
                "kms:ListAliases",
                "kms:ListKeyPolicies",
                "kms:ListKeys",
                "kms:ListResourceTags",
                "kms:PutKeyPolicy",
                "kms:ScheduleKeyDeletion",
                "kms:TagResource",
                "kms:UntagResource",
                "kms:UpdateAlias",
                "lambda:CreateFunction",
                "lambda:DeleteFunction",
                "lambda:GetFunction",
                "lambda:GetFunctionConfiguration",
                "lambda:InvokeFunction",
                "lambda:ListTags",
                "lambda:TagResource",
                "lambda:UntagResource",
                "lambda:UpdateFunctionCode",
                "lambda:UpdateFunctionConfiguration",
                "s3:CreateBucket",
                "s3:DeleteBucket",
                "s3:DeleteBucketPolicy",
                "s3:GetBucketPolicy",
                "s3:GetBucketTagging",
                "s3:ListBucket",
                "s3:PutBucketPolicy",
                "s3:PutBucketTagging",
                "s3:PutBucketVersioning",
                "sts:assumerole"
            ],
            "Resource": "*"
        }
    ]
}
  4. Click Review Policy.
  5. Provide a name for the policy. You can call it PurityServicePermission.
  6. Click Create policy.
  7. Go back to the main IAM console.
  8. Create a role by selecting Roles and clicking Create role.
  9. Select the trusted entity by selecting:
    1. AWS service
    2. CloudFormation

cbs66.JPG

  10. Click Next: Permissions.
  11. In the search box, type the policy name PurityServicePermission that you created earlier, and check the box for this policy.

cbs68.JPG

  12. Click Next: Tags.
  13. (Optional) Add tags if desired and click Next: Review.
  14. Enter the role name PurityServiceRole and click Create role.

cbs69.JPG

You now have a new role with the appropriate permission to deploy a new Cloud Block Store instance.
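If you prefer to script this instead of using the console, the following AWS CLI sketch creates an equivalent policy and role. The file names cbs-policy.json and cbs-trust.json and the <account-id> placeholder are illustrative assumptions.

# Create the permissions policy from the JSON document in step 3, saved locally as cbs-policy.json
aws iam create-policy --policy-name PurityServicePermission --policy-document file://cbs-policy.json

# Create the role with a trust policy (cbs-trust.json) that lets CloudFormation assume it
aws iam create-role --role-name PurityServiceRole --assume-role-policy-document file://cbs-trust.json

# Attach the permissions policy to the role
aws iam attach-role-policy --role-name PurityServiceRole --policy-arn arn:aws:iam::<account-id>:policy/PurityServicePermission

where cbs-trust.json contains:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": { "Service": "cloudformation.amazonaws.com" },
            "Action": "sts:AssumeRole"
        }
    ]
}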

Appendix B   

Adding an S3 VPC Endpoint 

This appendix shows you how to create an S3 VPC Endpoint and apply the appropriate routes to it for your desired subnet.

  1. From the AWS console, navigate to the VPCs console.
  2. Click Endpoints.
  3. Click Create Endpoint.

cbs59.JPG

 

  4. Select the S3 service.

cbs86.JPG

 

  5. Select the following parameters:
    1. Select the desired VPC for your Cloud Block Store instance.
    2. Select the route table associated with the private subnet for the Cloud Block Store System interface.

cbs90.JPG

  6. Set a custom access policy if desired. Otherwise, leave it as Full Access and click Create Endpoint.

cbs62.JPG
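Alternatively, the same gateway endpoint can be created with the AWS CLI. The values in angle brackets are placeholders; substitute your VPC ID, region, and the route table ID for the System subnet.

aws ec2 create-vpc-endpoint --vpc-id <vpc-id> --vpc-endpoint-type Gateway --service-name com.amazonaws.<region>.s3 --route-table-ids <route-table-id>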

Adding a DynamoDB VPC Endpoint

This appendix shows you how to create a DynamoDB VPC Endpoint and apply the appropriate routes to it for your desired subnet.

  1. From the AWS console, navigate to the VPCs console.
  2. Click Endpoints.
  3. Click Create Endpoint.
  4. Select the DynamoDB service.

cbs87.JPG

  5. Select the following parameters:
    1. Select the desired VPC for your Cloud Block Store instance.
    2. Select the route table associated with the private subnet for the Cloud Block Store System interface.

 

cbs89.JPG

  6. Set a custom access policy if desired. Otherwise, leave it as Full Access and click Create Endpoint.

cbs62.JPG
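The equivalent AWS CLI call differs from the S3 example only in the service name; again, the angle-bracket values are placeholders.

aws ec2 create-vpc-endpoint --vpc-id <vpc-id> --vpc-endpoint-type Gateway --service-name com.amazonaws.<region>.dynamodb --route-table-ids <route-table-id>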

 

Confirming the routes created for the VPC endpoints
  1. Navigate to the VPC console.
  2. Select the private subnet used for the Cloud Block Store System interface and check its route table for entries pointing to both your Amazon S3 and DynamoDB VPC Endpoints. As seen in the following example, each VPC Endpoint ID has a vpce- prefix.

cbs88.JPG
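The same check can be scripted with the AWS CLI: list the routes of the subnet's route table (the ID below is a placeholder) and confirm that two of them target vpce- IDs.

aws ec2 describe-route-tables --route-table-ids <route-table-id> --query "RouteTables[].Routes" --output table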

Appendix C

Performance Considerations

iSCSI Sessions:

It is important to note that AWS places bandwidth limits on each TCP connection: a single TCP connection is capable of roughly 5 Gbps in AWS. Since a single iSCSI session equates to a single TCP connection, each iSCSI session is also limited in throughput. For applications that need higher throughput, increase the number of iSCSI sessions on the EC2 host. If you are looking to maximize throughput from a given EC2 instance, the number of iSCSI sessions needed will vary with the size of the instance.

        

iscsi_sesh.JPG

The approximate guidance is to use two iSCSI sessions for each "xlarge" increment of the EC2 instance size. You can always increase the number of sessions beyond this guidance (up to 32 iSCSI sessions per controller) if needed.

For example, an application running on an EC2 instance size of:

  • c5.2xlarge would have 4 iSCSI sessions to each CBS controller.
  • m5.4xlarge would have 8 iSCSI sessions to each CBS controller.
  • r5.8xlarge would have 16 iSCSI sessions to each CBS controller.
  • r5n.16xlarge would have 32 iSCSI sessions to each CBS controller.

The guidance above provides approximate values; the number of sessions can be increased if needed, up to 32 iSCSI sessions per controller.
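As a hedged example of how the session count is typically raised on a Linux host running open-iscsi, the commands below reuse the target IQN and portal from the earlier connection example and assume a c5.2xlarge host that should run 4 sessions per portal; adjust the values for your environment.

# Tell open-iscsi to create 4 sessions for this target/portal pair
sudo iscsiadm -m node -T iqn.2010-06.com.purestorage:flasharray.666667d86130ec06 -p 10.0.1.202:3260 --op update -n node.session.nr_sessions -v 4

# Log in to create the sessions (log out and back in if sessions already exist)
sudo iscsiadm -m node -T iqn.2010-06.com.purestorage:flasharray.666667d86130ec06 -p 10.0.1.202:3260 --login

# Verify the resulting session count
sudo iscsiadm -m session

Repeat the same steps for the second controller portal (10.0.1.110 in the earlier example) so that both CBS controllers carry the same number of sessions.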