
Cloud Block Store Deployment and Configuration Guide for AWS

 

Overview

Pure Storage's Cloud Block Store is a virtual appliance powered by the Purity Operating Environment (POE), which uses native AWS resources to deliver enterprise-grade storage services. This document provides detailed procedures for deploying a new Cloud Block Store instance in your AWS VPC.

Cloud Block Store's high-level architecture consists of two EC2 instances acting as controllers. The EC2 instances process data and provide data services such as data reduction, snapshots, encryption, and replication. The initial deployment also includes seven EC2 instances called Virtual Drives, which store data and copy it onto S3 storage. You can add capacity to a Cloud Block Store instance; each capacity upgrade adds another group of seven Virtual Drives.

[Image: Cloud Block Store high-level architecture on AWS]

Requirements

Cloud Block Store is deployed using CloudFormation. Pure provides customers with a CloudFormation template YAML file. Cloud Block Store must be deployed using its own standalone CloudFormation template. Do not nest the Cloud Block Store CloudFormation template inside other CloudFormation templates, as this can lead to unexpected configuration issues over time.

Supported Regions

 

Networking

Private Subnets

Cloud Block Store deploys with the following four Ethernet interfaces on each controller:

  1. System
  2. iSCSI
  3. Management
  4. Replication

During deployment, users will be asked to provide a private subnet for each interface.

  • As a security requirement, all subnets for Cloud Block Store must be private subnets.
  • Each subnet used for the Cloud Block Store Ethernet interfaces must be different from the subnets used by the EC2 host initiators. iSCSI traffic between Cloud Block Store and EC2 host initiators can still flow using route tables; in most cases, route tables between subnets are set to "local", which allows traffic to flow between subnets in the same VPC. There are two reasons for this requirement:
    • Separate subnets minimize the chance that an EC2 host accidentally uses IP addresses belonging to the Cloud Block Store Ethernet interfaces.
    • Cloud Block Store instances consume seven additional IP addresses for each capacity upgrade. Placing the Cloud Block Store Ethernet interfaces on a separate subnet from the EC2 host initiators reduces the chance that the subnet runs out of IP addresses.
  • The simplest topology is to place all Cloud Block Store interfaces (System, iSCSI, Management, Replication) onto a dedicated subnet. In this scenario, create a new private subnet with a /25 subnet mask (255.255.255.128) dedicated solely to Cloud Block Store interfaces; a CLI sketch follows this list. See the following network configurations diagram.
  • Optionally, Cloud Block Store also supports multiple private subnets, one for each Cloud Block Store Ethernet interface type. This network topology offers an optimal solution because it allows traffic isolation. To minimize the chance of a network overlap, we recommend this topology when replicating between a Cloud Block Store instance and an on-premises FlashArray (or another Cloud Block Store instance) on a different network via Site-to-Site VPN, VPC Peering, or Transit Gateway. See the following network configurations diagram for an example of this topology.
  • If multiple private subnets are used, they must all be in the same Availability Zone.
  • Ensure the private subnet for the System interface has internet access; see the Internet Access section.
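For example, such a dedicated subnet can be created with the AWS CLI. This is a minimal sketch; the VPC ID, CIDR block, and Availability Zone are hypothetical placeholders for your own values, and the subnet remains private as long as its route table has no route to an internet gateway.

# Hypothetical values; create a /25 subnet dedicated to the Cloud Block Store interfaces.
aws ec2 create-subnet \
    --vpc-id vpc-0abc1234567890def \
    --cidr-block 10.0.2.0/25 \
    --availability-zone us-west-2a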
Network Configurations

[Image: Network configuration options for Cloud Block Store interfaces]

The following table summarizes the requirements for each interface type:

Interface    Name  Subnet Type  Internet Access Required?
System       eth0  Private      Yes
Replication  eth1  Private      No
iSCSI        eth2  Private      No
Management   eth3  Private      No

 

 

Internet Access

The private subnet for the System interface must have internet access. Internet access ensures that Cloud Block Store can phone home to Pure1, providing logs, alerts, and additional cloud management services. The simplest configuration is to route traffic from the System private subnet to a NAT Gateway that resides in a public subnet.

[Image: System subnet routing internet-bound traffic through a NAT Gateway]
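For reference, the default route to the NAT Gateway can be added with the AWS CLI. A minimal sketch, with hypothetical route table and NAT Gateway IDs:

# Hypothetical IDs; send all internet-bound traffic from the System subnet's route table to the NAT Gateway.
aws ec2 create-route \
    --route-table-id rtb-0aaa1111bbbb2222c \
    --destination-cidr-block 0.0.0.0/0 \
    --nat-gateway-id nat-0ddd3333eeee4444f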

VPC Endpoint to S3 and DynamoDB

A Cloud Block Store instance copies all written data to S3 storage for high durability. A Cloud Block Store instance also communicates with DynamoDB. It is important to ensure this traffic travels within the AWS network rather than over the public internet. To route S3 and DynamoDB traffic, create a separate VPC Endpoint for S3 and for DynamoDB. The VPC Endpoints ensure minimal egress cost for sending data to S3 and DynamoDB. This is a standard AWS best practice, and we highly encourage you to follow it.

See Appendix B for steps to add an S3 and DynamoDB VPC Endpoint to an existing subnet.
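A gateway endpoint for each service can also be created from the AWS CLI. A minimal sketch, assuming hypothetical VPC and route table IDs and the us-west-2 region:

# Hypothetical IDs; create gateway endpoints for S3 and DynamoDB on the System subnet's route table.
aws ec2 create-vpc-endpoint \
    --vpc-id vpc-0abc1234567890def \
    --service-name com.amazonaws.us-west-2.s3 \
    --route-table-ids rtb-0aaa1111bbbb2222c

aws ec2 create-vpc-endpoint \
    --vpc-id vpc-0abc1234567890def \
    --service-name com.amazonaws.us-west-2.dynamodb \
    --route-table-ids rtb-0aaa1111bbbb2222c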

 

[Image: VPC Endpoints for S3 and DynamoDB traffic]

Example: The following image displays the route entries created in the subnet used for the System interface. It shows all internet-bound traffic directed to a NAT Gateway, and all S3-bound and DynamoDB-bound traffic directed to the respective VPC Endpoints.

[Image: Route table entries for the NAT Gateway and the S3 and DynamoDB VPC Endpoints]

 

IP Addresses

Deploying a new Cloud Block Store instance requires fifteen initial IP addresses: four per controller for the two controllers, plus seven for the Virtual Drives. For each capacity upgrade, an additional seven IP addresses are required from the subnet where the System interfaces reside. We recommend that the subnet used for the System interfaces has a network mask of 255.255.255.128 (/25). This ensures that there is enough room for capacity expansion.

Replication 

Async Replication - When replicating from an on-premises FlashArray to a Cloud Block Store instance in a VPC, ensure that there is network connectivity between the respective sites. Likewise, replication between multiple Cloud Block Store instances requires network connectivity between the instances. More specifically, to replicate between a Cloud Block Store instance and a physical FlashArray (or another Cloud Block Store instance), the management and replication ports must be able to communicate. Configure all security groups, network ACLs, and route tables to allow traffic between the two sites on the respective management and replication ports. The following table provides the port number for each interface.

Service Type            Firewall Port
Management interfaces   443
Replication interfaces  8117

When replicating between a physical datacenter and the AWS VPC, you can achieve network connectivity in a number of ways, including AWS Direct Connect or a Site-to-Site VPN connection.

For additional details on replication requirements and limits, see the Purity Replication Requirements and Interoperability Matrix.
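For example, the corresponding inbound rules can be added to the management and replication security groups with the AWS CLI. A minimal sketch, with hypothetical security group IDs and a hypothetical remote-site CIDR:

# Hypothetical IDs and CIDR; allow the remote site to reach the management (443) and replication (8117) ports.
aws ec2 authorize-security-group-ingress \
    --group-id sg-0123456789abcdef0 \
    --protocol tcp --port 443 --cidr 192.168.0.0/16

aws ec2 authorize-security-group-ingress \
    --group-id sg-0fedcba9876543210 \
    --protocol tcp --port 8117 --cidr 192.168.0.0/16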

ActiveCluster - ActiveCluster allows customers to synchronously replicate their Cloud Block Store volumes between two different Availability Zones, protecting against a full Availability Zone outage. However, ActiveCluster with Cloud Block Store is not supported in the Oregon region (us-west-2) if you plan to use the Pure1 Cloud Mediator, because the Pure1 Cloud Mediator itself resides in the Oregon region. To limit the fault domain, customers who wish to deploy Cloud Block Store in an ActiveCluster configuration in Oregon must use the On-Premises Mediator.

EC2 instance vCPU limits

Note: Starting October 24, 2019, AWS switched to a new limits implementation that places default limits on total vCPUs rather than on each EC2 instance type. This is simpler because you no longer need to monitor the limits of each instance type, only the total vCPU usage of each instance family. More information can be found in the AWS announcement and the AWS EC2 FAQ.

Ensure that your total vCPU limits are sufficient to deploy Cloud Block Store. To view your current limits, go to your AWS EC2 console:

  1. Click Limits.
  2. Set the search filter to Running Instances.
  3. View the total limits for your A, C, D, H, I, M, R, T, and Z instances.

[Image: EC2 vCPU limits in the AWS EC2 console]
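You can also query the limit from the AWS CLI through Service Quotas. A sketch; the quota code below is assumed to be the one for Running On-Demand Standard (A, C, D, H, I, M, R, T, Z) instances in your region:

# Assumed quota code for Running On-Demand Standard (A, C, D, H, I, M, R, T, Z) instances.
aws service-quotas get-service-quota \
    --service-code ec2 \
    --quota-code L-1216C47A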

 

When deploying a Cloud Block Store instance, you have the option to choose the Cloud Block Store instance type. Each Cloud Block Store instance type consumes a certain number of vCPUs, and each AWS account has a default maximum limit for total vCPUs within each region. Prior to deploying Cloud Block Store resources, ensure that your maximum limit can accommodate the vCPUs needed for your Cloud Block Store instances. The minimum vCPUs required for deployments are as follows:

Cloud Block Store Type  Total vCPUs (initial deployment)  Total vCPUs (after first upgrade)  Total vCPUs (after second upgrade)
VA10-R1                 128                               184                                -
VA20-R1                 256                               368                                480

VA10-R1: the initial deployment is based on 2 x c5n.9xlarge and 7 x i3.2xlarge; each capacity upgrade adds 7 x i3.2xlarge.
VA20-R1: the initial deployment is based on 2 x c5n.18xlarge and 7 x i3.4xlarge; each capacity upgrade adds 7 x i3.4xlarge.

 

 

 

Security Groups 

A Cloud Block Store instance has four Ethernet interfaces that are used for the following types of traffic: iSCSI, management, replication, and system intercommunication. Each Ethernet interface requires different types of TCP access.

  • As a security best practice, create three different security groups, as shown in the table below, with the appropriate TCP access for the replication, iSCSI, and management interfaces. Each security group is applied during the Cloud Block Store deployment.
  • The security group for the System interface is auto-created during the deployment.
  • The three security groups must be in the same region and VPC.
Security Group      Inbound            Outbound
System (eth0)*      Auto-created       Auto-created
Replication (eth1)  8117               8117
iSCSI (eth2)        3260               -
Mgmt (eth3)         22, 80, 443, 8084  443

* Note: For the System interface, a fourth security group called "PureSystemSecurityGroup" is automatically created and applied as part of the Cloud Block Store deployment.
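As an illustration, one of the three groups (iSCSI) could be created with the AWS CLI as follows. A minimal sketch; the group name, VPC ID, returned group ID, and source CIDR are hypothetical:

# Hypothetical values; create the iSCSI security group and allow inbound TCP 3260 from the host subnet.
aws ec2 create-security-group \
    --group-name cbs-iscsi-sg \
    --description "Cloud Block Store iSCSI" \
    --vpc-id vpc-0abc1234567890def

aws ec2 authorize-security-group-ingress \
    --group-id sg-0123456789abcdef0 \
    --protocol tcp --port 3260 \
    --cidr 10.0.0.0/16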

 

IAM Role and Permissions 

To automate Cloud Block Store deployment and upgrades, an IAM role with appropriate permissions is required. Even if your account has elevated Administrator permissions, the IAM role is still required. You must create a new IAM policy with the appropriate IAM permissions, then create the IAM role and attach the IAM policy to it. For exact steps to create the IAM role and policy, see Appendix A.

Before you begin 

Deployment may fail if all the requirements are not met. Before deploying Cloud Block Store, go through the following checklist:

  1. Ensure that you have a private subnet (with a /25 network mask) created specifically for Cloud Block Store interfaces. You can put all interfaces onto a single subnet, or create separate subnets for each interface. Refer to the Networking section for details and network options.
  2. Ensure there is internet access from the private subnet used for the System interfaces (NAT Gateway recommended).
  3. Ensure there are VPC Endpoints for S3 and DynamoDB traffic from the private subnet used for the System interfaces. See the VPC Endpoint section for details.
  4. Ensure that there are three separate security groups for iSCSI, management, and replication traffic. See Security Groups for details.
  5. Ensure that your EC2 vCPU limits can accommodate the required c5n and i3 instances. See EC2 Instance vCPU Limits for details.
  6. Ensure that the IAM role with appropriate permissions has been created. See IAM Role and Permissions for details.
  7. Ensure that you have a valid license key provided by Pure Storage. There are two options to obtain a license key:
    1. Work with Pure Storage sales teams and Pure Storage partners to obtain a Pure as-a-Service subscription.
    2. Go directly to the AWS Marketplace to sign up for a short-term subscription service.

If the above requirements have been met, proceed to deploy Cloud Block Store.

Deploying Cloud Block Store

Deploy Cloud Block Store from the AWS Marketplace.

  1. Go to the AWS Marketplace deployment listing for Cloud Block Store.

Alternatively, you can go to the AWS Marketplace and search for Cloud Block Store.

  2. In the listing, click Continue to Subscribe.

[Image: Marketplace listing with the Continue to Subscribe button]

  3. Click Continue to Configuration.

[Image: Continue to Configuration]

  4. Select your desired region and click Continue to Launch.

[Image: Region selection and Continue to Launch]

  5. Review the selections and click Launch. This launches the AWS CloudFormation stack creation service.

[Image: Review and Launch]

  6. The CloudFormation stack creation wizard appears with all the template options pre-selected. Click Next to proceed.

[Image: CloudFormation stack creation wizard]

 

  7. Enter the desired information for your Cloud Block Store instance:
    1. Enter a Stack name. The stack name identifies your Cloud Block Store deployment.
    2. Enter an ArrayName. The ArrayName names your virtual appliance and is reflected in the names of the Cloud Block Store EC2 resources.
    3. Enter the RelayHost domain name. RelayHost is your domain name and can be modified later using the Cloud Block Store GUI or CLI. Example: purestorage.com
    4. Select the PurityInstanceType. PurityInstanceType is the desired Cloud Block Store model. You can view the model sizes and details in the Cloud Block Store Support Matrix.
    5. Enter the LicenseKey. You receive the license key when you create the subscription through a Pure as-a-Service subscription or the AWS Marketplace.
    6. (Optional) In the AlertRecipients field, enter a comma-separated list of email contacts to receive email alerts. You can modify this later using the Cloud Block Store GUI or CLI.
    7. Select a KeyName. KeyName is the name of an existing AWS Key Pair you wish to use for SSH access.
    8. Select the SystemSubnet. SystemSubnet is a private subnet for the System interfaces and requires internet access. Refer to the Networking section for details and network options.
    9. Select the ReplicationSubnet. ReplicationSubnet is a private subnet for the Replication interfaces. Refer to the Networking section for details and network options.
    10. Select the iSCSISubnet. iSCSISubnet is a private subnet for the iSCSI interfaces. Refer to the Networking section for details and network options.
    11. Select the ManagementSubnet. ManagementSubnet is a private subnet for the Management interfaces. Refer to the Networking section for details and network options.
    12. Select the ReplicationSecurityGroup. ReplicationSecurityGroup allows both inbound and outbound TCP traffic on port 8117. Refer to Security Groups for details.
    13. Select the iSCSISecurityGroup. iSCSISecurityGroup allows inbound TCP traffic on port 3260. Refer to Security Groups for details.
    14. Select the ManagementSecurityGroup. ManagementSecurityGroup allows inbound TCP traffic on ports 22, 80, and 8084, as well as inbound/outbound traffic on port 443. Refer to Security Groups for details.

 

  8. Keep the default values for the remaining fields and move to the next step.

[Image: CloudFormation stack parameters]

  9. Click Next.
  10. Select stack options:
    1. (Optional) Apply tags for the Cloud Block Store resources.
    2. Select the IAM Role: PurityServiceRole. Creating this role is a prerequisite. See IAM Role and Permissions for details.
    3. In the Stack creation options section, set Termination protection to Enabled.

The IAM role is required to deploy Cloud Block Store successfully, as well as to upgrade Cloud Block Store in the future.

[Image: Stack options with the PurityServiceRole IAM role and termination protection enabled]

  11. Click Next.
  12. Review the selected parameters. Scroll to the bottom of the page and check the acknowledgement box.
  13. Click Create stack.

[Image: Review page with the acknowledgement checkbox]

  14. The Cloud Block Store stack takes approximately ten minutes to complete. When complete, the stack appears with CREATE_COMPLETE status.

[Image: Stack with CREATE_COMPLETE status]
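If you prefer to script deployments, the same stack can be launched from the AWS CLI. The following is a minimal sketch, not the exact invocation: the template file name, parameter values, and account ID are hypothetical placeholders, and only a subset of the parameters described above is shown.

# Hypothetical file name and values; launch the stack with the PurityServiceRole and termination protection.
aws cloudformation create-stack \
    --stack-name cbs-array-01 \
    --template-body file://cloud-block-store.yaml \
    --parameters ParameterKey=ArrayName,ParameterValue=cbs-array-01 \
                 ParameterKey=LicenseKey,ParameterValue=<your-license-key> \
    --capabilities CAPABILITY_IAM \
    --role-arn arn:aws:iam::<account-id>:role/PurityServiceRole \
    --enable-termination-protection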

 

Do Not Shutdown Cloud Block Store 

Cloud Block Store is an enterprise virtual appliance and is expected to always be on. Do not shut down a Cloud Block Store instance or any of its underlying EC2 resources (controllers or virtual drives).

Removing Cloud Block Store

In the initial version, Cloud Block Store can only be removed (terminated) by Pure Support, to ensure all the resources in the stack are cleanly removed. Customer-initiated stack termination will be supported in the next version release. Please contact Pure Storage Support for Cloud Block Store instance removals.

Managing Cloud Block Store

Viewing Cloud Block Store Network Interfaces

Once a Cloud Block Store instance is deployed, you can view its IP addresses from several locations. In the CloudFormation console where you deployed the stack, the Outputs tab displays the IP addresses for each controller. See the following screenshot as an example:

[Image: CloudFormation Outputs tab listing controller IP addresses]

Additionally, you can view the same IP address information by logging onto the Cloud Block Store instance's GUI using the management IP address.

  1. Click Settings and select the Network tab.

For CLI users, SSH into the management port and run: purenetwork list.

[Image: Network tab in the Cloud Block Store GUI]

Viewing the Cloud Block Store Instance in the AWS Console

You can view the underlying components of Cloud Block Store from the EC2 console. Identify the Cloud Block Store instance's controllers by the ct0 and ct1 suffixes. Identify the virtual drives by the -vd suffix.

[Image: Cloud Block Store controllers and virtual drives in the EC2 console]
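The same resources can be listed from the AWS CLI by filtering on the Name tag. A sketch; the name prefix is hypothetical and assumes the EC2 resources carry the ArrayName in their Name tags, as described in the deployment steps:

# Hypothetical array name; list the Cloud Block Store controllers and virtual drives.
aws ec2 describe-instances \
    --filters "Name=tag:Name,Values=cbs-array-01*" \
    --query "Reservations[].Instances[].[InstanceId,Tags[?Key=='Name']|[0].Value,State.Name]" \
    --output table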

Logging onto the GUI of a Cloud Block Store

Use the management port IP address to log onto your Cloud Block Store instances.

  1. Log onto a separate Windows or Linux host with network access to the management Ethernet port of the Cloud Block Store instance.

Make sure your subnet route tables, firewalls, and Security groups allow your host network access to the Cloud Block Store instance's management interface.

  2. Open a browser and enter the management IP address. See the Viewing Cloud Block Store Network Interfaces section for the location of your management IP addresses. You can also log on and manage a Cloud Block Store instance using the CLI or REST API.
  3. Enter the username and password:
  • Default username: pureuser
  • Default password: pureuser
  4. The Cloud Block Store Dashboard displays high-level storage usage and performance metrics. From the Dashboard, you can navigate to the tabs on the left to view detailed storage usage, performance analysis, health, and other settings.

[Image: Cloud Block Store Dashboard]

Creating volumes 

The following example provides steps for the GUI. For CLI users, SSH into the management port and run: purevol create --size <size> <vol name>

  1. Using the Cloud Block Store's GUI: 
    1. In the left navigation pane, click Storage.
    2. Click Volumes.
    3. Click the + icon to add a new volume.

[Image: Adding a new volume]

  2. Enter the name and desired size of the volume and click Create.
  3. The new volume appears in the list of volumes.

[Image: New volume in the volume list]

Creating hosts
You must create a host with its corresponding IQN before you can attach a volume to it.

The following example provides steps using the GUI. For CLI users, SSH into the management port and run: purehost create --iqnlist <Host IQN> <host name>

Using the Cloud Block Store's GUI:

  1. On the left navigation pane, click Storage.
  2. Click Hosts.
  3. Click the + icon to add a new host.

[Image: Adding a new host]

  4. Enter the desired name for the host and click Create.
  5. Once created, the host is displayed in the list of available hosts.

[Image: New host in the host list]

  6. In the Host Ports box:
    1. Click the expand icon.
    2. Select Configure IQNs.

[Image: Configure IQNs menu]

  7. Enter the IQN of the iSCSI host.

[Image: Entering the host IQN]

Locate the host's IQN by running the following commands on the respective OS. Ensure the iSCSI service has started on the host.

Windows PowerShell (Run as Administrator):

(Get-InitiatorPort).NodeAddress

Linux:

cat /etc/iscsi/initiatorname.iscsi

Solaris:

iscsiadm list initiator-node

Connecting iSCSI host to Cloud Block Store volumes
Transit Gateway

A Transit Gateway may be used to establish connectivity between Cloud Block Store and EC2 hosts when they reside in different VPCs within the same Availability Zone. Using a Transit Gateway to connect Cloud Block Store and EC2 host VPCs that reside in different Availability Zones is not supported.

See AWS Documentation for Transit Gateway implementation detail and limits.
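For reference, each VPC is attached to the Transit Gateway through a subnet in the shared Availability Zone. A minimal sketch with hypothetical IDs:

# Hypothetical IDs; attach a VPC to the Transit Gateway via a subnet in the shared AZ.
aws ec2 create-transit-gateway-vpc-attachment \
    --transit-gateway-id tgw-0aaa1111bbbb2222c \
    --vpc-id vpc-0abc1234567890def \
    --subnet-ids subnet-0eee5555ffff6666a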

 

The following example provides steps for the GUI. For CLI users, SSH into the management port and run: purevol connect --host <host name> <vol name>

  Using the Cloud Block Store's GUI:

  1. On the left navigation pane, click Storage.
  2. Click Volumes.

[Image: Volumes view]

  3. Select the desired volume.

[Image: Selecting a volume]

  4. In the Connected Hosts box, click the expand icon and select Connect.

[Image: Connect menu in the Connected Hosts box]

  5. Select the desired host(s) and click Connect.

If you wish to connect the same volume to multiple hosts, ensure that appropriate clustering software is installed on the hosts.

[Image: Selecting hosts to connect]

Mounting a volume on iSCSI host

The following steps provide an example of how to connect a Windows and an Amazon Linux 2 EC2 compute host to a Pure Storage Cloud Block Store instance.

Prerequisites:
  • The EC2 compute host must have iSCSI initiator client software. Most modern operating systems have it pre-installed.

  • The EC2 compute host must have network access to the iSCSI subnet of the Cloud Block Store instance. The EC2 compute host's network ports and the Cloud Block Store instance's iSCSI ports must have route table entries allowing them to communicate. Ensure that Security Groups, Network ACLs, and firewalls are not preventing connectivity.

Windows AMI Host

Setting up multipathing with Microsoft MPIO

To protect against a single point of failure, this procedure configures multiple paths to the Cloud Block Store instance. Perform this procedure only on a new Windows AMI host.

  1. Log onto the Windows EC2 host.
  2. To check whether Microsoft MPIO is installed on the system, open an elevated PowerShell terminal (Run as administrator) and run:
PS C:\> Get-WindowsFeature -Name 'Multipath-IO'
Display Name                                            Name Install State
------------                                                      ---- -------------
[ ] Multipath I/O                                       Multipath-IO Available
  3. If step 2 shows an install state of 'Available', install Microsoft MPIO. In the same PowerShell terminal, run:

PS C:\> Add-WindowsFeature -Name 'Multipath-IO'
Success Restart Needed Exit Code      Feature Result
------- -------------- ---------      --------------
True    Yes       SuccessRest... {Multipath I/O}
WARNING: You must restart this server to finish the installation process.
  4. Reboot the Windows EC2 host.
  5. When the Windows EC2 host boots back up, verify that Microsoft MPIO is installed. In a PowerShell terminal, run:

PS C:\> Get-WindowsFeature -Name 'Multipath-IO'
Display Name                                            Name Install State
------------                                                      ---- -------------
[X] Multipath I/O                                       Multipath-IO Installed
  6. In the same PowerShell terminal, run the following command to start the iSCSI service.

PS C:\> Start-Service -Name msiscsi
  7. To set the iSCSI service to start on boot, run:
PS C:\> Set-Service -Name msiscsi -StartupType Automatic
  8. Add Pure FlashArray as an MPIO vendor. In the same PowerShell terminal, run:

PS C:\> New-MSDSMSupportedHw -VendorId PURE -ProductId FlashArray
VendorId ProductId
--------       ---------
PURE        FlashArray
  9. Enable iSCSI support in Microsoft MPIO. In the same PowerShell terminal, run:

PS C:\> Enable-MSDSMAutomaticClaim -BusType iSCSI
VendorId ProductId
-------- ---------
MSFT2005 iSCSIBusType_0x9
False
  10. Set the default MPIO path policy to Least Queue Depth (LQD).

PS C:\> Set-MSDSMGlobalDefaultLoadBalancePolicy -Policy LQD
  11. Set the MPIO timer values. In the same PowerShell terminal, run:

PS C:\> Set-MPIOSetting -NewPathRecoveryInterval 20 -CustomPathRecovery Enabled -NewPDORemovePeriod 30 -NewDiskTimeout 60 -NewPathVerificationState Enabled
  12. If prompted by the above commands, reboot the Windows AMI host.

MPIO setup is now complete.

 

Mounting a volume on Windows iSCSI host

Follow steps 1 through 7 to establish iSCSI connections. Once the connections are made, subsequent volumes connected from Cloud Block Store to this host appear in Disk Management.

To complete the following steps, you need the IP addresses of both Cloud Block Store controller iSCSI interfaces. See the Viewing Cloud Block Store Network Interfaces section to obtain the iSCSI IP addresses, and keep them handy.

  1. (Run as administrator) On the Windows host, open an elevated PowerShell terminal and run the following command to gather the IP address of your Windows instance. The following example shows 10.0.1.118.
PS C:\> get-netadapter |Get-NetIPAddress -AddressFamily ipv4
IPAddress         : 10.0.1.118
InterfaceIndex    : 5
InterfaceAlias    : Ethernet
AddressFamily     : IPv4
Type              : Unicast
PrefixLength      : 24
PrefixOrigin      : Dhcp
SuffixOrigin      : Dhcp
AddressState      : Preferred
ValidLifetime     : 00:57:00
PreferredLifetime : 00:57:00
SkipAsSource      : False
PolicyStore       : ActiveStore

2. In the same PowerShell window, run the following command to create a new Target Portal connection between your Windows host and your Cloud Block Store instance.

PS C:\> New-IscsiTargetPortal -TargetPortalAddress <CBS iSCSI IP address>

        where

<CBS iSCSI IP address> is the iSCSI IP address of Cloud Block Store controller 0 or controller 1. You only need to enter one.

  3. In the same PowerShell window, run the following command to create an iSCSI session to Cloud Block Store controller 0.
PS C:\> Get-IscsiTarget | Connect-IscsiTarget -InitiatorPortalAddress <Windows IP address> -IsMultipathEnabled $true -IsPersistent $true -TargetPortalAddress <CBS iSCSI IP address CT0>

where

<Windows IP address> is the Windows host IP address obtained in step 1.

<CBS iSCSI IP address CT0> is the iSCSI IP address of Cloud Block Store controller 0. See the following screenshot as an example. 

[Image: iSCSI IP addresses for each controller]

  4. (Optional) Run the same command three more times. This step creates three more iSCSI sessions for a total of four sessions for controller 0. This is the recommended number of sessions for applications requiring high throughput.
  5. In the same PowerShell window, run the following command to create iSCSI sessions to Cloud Block Store controller 1.
PS C:\> Get-IscsiTarget | Connect-IscsiTarget -InitiatorPortalAddress <Windows IP address> -IsMultipathEnabled $true -IsPersistent $true -TargetPortalAddress <CBS iSCSI IP address CT1>

where

<Windows IP address> is the Windows host IP address obtained in step 1.

<CBS iSCSI IP address CT1> is the iSCSI IP address of Cloud Block Store controller 1.

  6. (Optional) Run the same command three more times. This step creates three more iSCSI sessions for a total of four sessions for controller 1. This is the recommended number of sessions for applications requiring high throughput.
  7. To confirm the total number of sessions, run:
PS C:\> Get-IscsiSession
  8. Go to Disk Management and perform a rescan to confirm the new volume.

[Image: New volume in Disk Management]

  9. Bring the volume online and format it with the desired file system. Any subsequent volume you create and connect to this host on Cloud Block Store (via GUI, CLI, or REST) appears automatically in Disk Management after a rescan.

You have successfully connected and mounted a Cloud Block Store volume to your host. 

Amazon Linux 2 AMI Host

This example walks you through connecting Cloud Block Store volumes to an Amazon Linux 2 AMI. Some steps repeat material covered earlier in this guide.

The steps include:

  • Configuring the Linux host for iSCSI and MPIO with Cloud Block Store
  • Host and volume creation on Cloud Block Store
  • Connecting and mounting Cloud Block Store volumes to Linux host
Configuring iSCSI on Linux Initiator with Cloud Block Store
On Linux host:

1. Log in to the Amazon Linux 2 EC2 instance.

2. Install iscsi-initiator-utils onto Linux host.

sudo yum -y install iscsi-initiator-utils

3. Install lsscsi.

sudo yum -y install lsscsi

4. Install the device-mapper-multipath package.

sudo yum -y install device-mapper-multipath

5. Start iSCSI daemon service.

sudo service iscsid start

6. Collect Linux initiator IQN.

cat /etc/iscsi/initiatorname.iscsi

Example: 

[ec2-user@ip-10-0-1-235 ~]$ cat /etc/iscsi/initiatorname.iscsi
InitiatorName=iqn.1994-05.com.redhat:361dfc3de387

7. Remove the 51-ec2-hvm-devices.rules file.

sudo rm /etc/udev/rules.d/51-ec2-hvm-devices.rules

This step is only required with Amazon Linux 2 AMI.

8. Create a new udev rules file called 99-pure-storage.rules for Pure Storage and copy the contents into the file as shown in the following example.

sudo vim /etc/udev/rules.d/99-pure-storage.rules

Example: 

[ec2-user@ip-10-0-1-235 ~]$ sudo vim /etc/udev/rules.d/99-pure-storage.rules
[ec2-user@ip-10-0-1-235 ~]$ cat /etc/udev/rules.d/99-pure-storage.rules
# Recommended settings for Pure Storage FlashArray.

# Use noop scheduler for high-performance solid-state storage
ACTION=="add|change", KERNEL=="sd*[!0-9]", SUBSYSTEM=="block", ENV{ID_VENDOR}=="PURE", ATTR{queue/scheduler}="noop"

# Reduce CPU overhead due to entropy collection
ACTION=="add|change", KERNEL=="sd*[!0-9]", SUBSYSTEM=="block", ENV{ID_VENDOR}=="PURE", ATTR{queue/add_random}="0"

# Spread CPU load by redirecting completions to originating CPU
ACTION=="add|change", KERNEL=="sd*[!0-9]", SUBSYSTEM=="block", ENV{ID_VENDOR}=="PURE", ATTR{queue/rq_affinity}="2"

# Set the HBA timeout to 60 seconds
ACTION=="add", SUBSYSTEMS=="scsi", ATTRS{model}=="FlashArray ", RUN+="/bin/sh -c 'echo 60 > /sys/$DEVPATH/device/timeout'"

9. Reboot the Linux host.

sudo reboot

 

 

On Cloud Block Store instance:

10. Using the CLI (SSH), log into the Cloud Block Store instance using the management IP. See the Viewing Cloud Block Store Network Interfaces section for the location of your management IP addresses.

  • Default username: pureuser
  • Default password: pureuser
ubuntu@ip-10-0-0-107:~$ ssh pureuser@10.0.1.61
pureuser@10.0.1.61's password:

Mon Sep 09 11:40:25 2019
Welcome pureuser. This is Purity Version 5.3.0.beta10 on FlashArray MPIOConfig
http://www.purestorage.com/
pureuser@MPIOConfig>

11. Create a host on Cloud Block Store.

purehost create <Linux hostname>

where

<Linux hostname> is the desired hostname.

This example shows a host created with name Linux2AMI

pureuser@MPIOConfig> purehost create Linux2AMI
Name       WWN  IQN  NQN
Linux2AMI  -    -    -

12. Configure the host with the initiator IQN.

purehost setattr --addiqnlist <IQN number> <Linux hostname>

where

<IQN number> is the initiator IQN number gathered in step 6.

<Linux hostname> is the hostname created in step 11.

Example:

pureuser@MPIOConfig> purehost setattr --addiqnlist iqn.1994-05.com.redhat:361dfc3de387 Linux2AMI
Name       WWN  IQN                                  NQN  Host Group
Linux2AMI  -    iqn.1994-05.com.redhat:361dfc3de387  -    -

13. Create one or more volumes on Cloud Block Store.

purevol create <volume name> --size <size>

where

<volume name> is the desired volume name.

<size> is the desired volume size (GB or TB suffix).

This example shows the creation of 2 TB volumes:

pureuser@MPIOConfig> purevol create vol1 --size 2TB
Name  Size  Source  Created                  Serial
vol1  2T    -       2019-09-09 11:41:55 PDT  2B60622E2B014A2200011010
pureuser@MPIOConfig> purevol create vol2 --size 2TB
Name  Size  Source  Created                  Serial
vol2  2T    -       2019-09-09 11:42:00 PDT  2B60622E2B014A2200011011

14. Connect host to volumes.

purevol connect <volume name> --host <host name>

where

<volume name> is the name of the volume.

<host name> is the name of the host. 

Example: 

pureuser@MPIOConfig> purevol connect vol1 --host Linux2AMI
Name  Host Group  Host       LUN
vol1  -           Linux2AMI  1
pureuser@MPIOConfig> purevol connect vol2 --host Linux2AMI
Name  Host Group  Host       LUN
vol2  -           Linux2AMI  2

15. Collect the IP addresses of each controller and the IQN for Cloud Block Store. The IQN is identical for both iSCSI interfaces.

pureport list

Example

pureuser@MPIOConfig> pureport list
Name      WWN  Portal           IQN                                                      NQN  Failover
CT0.ETH2  -    10.0.1.202:3260  iqn.2010-06.com.purestorage:flasharray.666667d86130ec06  -    -
CT1.ETH2  -    10.0.1.110:3260  iqn.2010-06.com.purestorage:flasharray.666667d86130ec06  -    -
iSCSI login and MPIO Configuration
On Linux host:

16. Create four new iSCSI interfaces on the initiator numbered 0-3.

sudo iscsiadm -m iface -I iscsi0 -o new

sudo iscsiadm -m iface -I iscsi1 -o new

sudo iscsiadm -m iface -I iscsi2 -o new

sudo iscsiadm -m iface -I iscsi3 -o new

Example:

[ec2-user@ip-10-0-1-235 ~]$ sudo iscsiadm -m iface -I iscsi0 -o new
New interface iscsi0 added
[ec2-user@ip-10-0-1-235 ~]$ sudo iscsiadm -m iface -I iscsi1 -o new
New interface iscsi1 added
[ec2-user@ip-10-0-1-235 ~]$ sudo iscsiadm -m iface -I iscsi2 -o new
New interface iscsi2 added
[ec2-user@ip-10-0-1-235 ~]$ sudo iscsiadm -m iface -I iscsi3 -o new
New interface iscsi3 added

17. Discover target iSCSI portals using iSCSI interface IP.

sudo iscsiadm -m discovery -t st -p <CBS iSCSI IP>:3260

where

<CBS iSCSI IP> is the iSCSI IP address of Cloud Block Store controller 0 or controller 1, collected in step 15. You only need to enter one iSCSI IP address.

Example

[ec2-user@ip-10-0-1-235 ~]$ sudo iscsiadm -m discovery -t st -p 10.0.1.202:3260
10.0.1.202:3260,1 iqn.2010-06.com.purestorage:flasharray.666667d86130ec06
10.0.1.110:3260,1 iqn.2010-06.com.purestorage:flasharray.666667d86130ec06
10.0.1.202:3260,1 iqn.2010-06.com.purestorage:flasharray.666667d86130ec06
10.0.1.110:3260,1 iqn.2010-06.com.purestorage:flasharray.666667d86130ec06
10.0.1.202:3260,1 iqn.2010-06.com.purestorage:flasharray.666667d86130ec06
10.0.1.110:3260,1 iqn.2010-06.com.purestorage:flasharray.666667d86130ec06
10.0.1.202:3260,1 iqn.2010-06.com.purestorage:flasharray.666667d86130ec06
10.0.1.110:3260,1 iqn.2010-06.com.purestorage:flasharray.666667d86130ec06

18. Connect the Linux host to the Cloud Block Store instance.

sudo iscsiadm -m node -p <CBS iSCSI IP CT0> --login

sudo iscsiadm -m node -p <CBS iSCSI IP CT1> --login

where

<CBS iSCSI IP CT0>  is the Cloud Block Store IP address of controller 0 collected from step 15.

<CBS iSCSI IP CT1>  is the Cloud Block Store IP address of controller 1 collected from step 15.

Example:

[ec2-user@ip-10-0-1-235 ~]$ sudo iscsiadm -m node -p 10.0.1.202 --login
Logging in to [iface: iscsi0, target: iqn.2010-06.com.purestorage:flasharray.666667d86130ec06, portal: 10.0.1.202:3260] (multiple)
Logging in to [iface: iscsi1, target: iqn.2010-06.com.purestorage:flasharray.666667d86130ec06, portal: 10.0.1.202:3260] (multiple)
Logging in to [iface: iscsi2, target: iqn.2010-06.com.purestorage:flasharray.666667d86130ec06, portal: 10.0.1.202:3260] (multiple)
Logging in to [iface: iscsi3, target: iqn.2010-06.com.purestorage:flasharray.666667d86130ec06, portal: 10.0.1.202:3260] (multiple)
Login to [iface: iscsi0, target: iqn.2010-06.com.purestorage:flasharray.666667d86130ec06, portal: 10.0.1.202:3260] successful.
Login to [iface: iscsi1, target: iqn.2010-06.com.purestorage:flasharray.666667d86130ec06, portal: 10.0.1.202:3260] successful.
Login to [iface: iscsi2, target: iqn.2010-06.com.purestorage:flasharray.666667d86130ec06, portal: 10.0.1.202:3260] successful.
Login to [iface: iscsi3, target: iqn.2010-06.com.purestorage:flasharray.666667d86130ec06, portal: 10.0.1.202:3260] successful.

[ec2-user@ip-10-0-1-235 ~]$ sudo iscsiadm -m node -p 10.0.1.110 --login
Logging in to [iface: iscsi0, target: iqn.2010-06.com.purestorage:flasharray.666667d86130ec06, portal: 10.0.1.110:3260] (multiple)
Logging in to [iface: iscsi1, target: iqn.2010-06.com.purestorage:flasharray.666667d86130ec06, portal: 10.0.1.110:3260] (multiple)
Logging in to [iface: iscsi2, target: iqn.2010-06.com.purestorage:flasharray.666667d86130ec06, portal: 10.0.1.110:3260] (multiple)
Logging in to [iface: iscsi3, target: iqn.2010-06.com.purestorage:flasharray.666667d86130ec06, portal: 10.0.1.110:3260] (multiple)
Login to [iface: iscsi0, target: iqn.2010-06.com.purestorage:flasharray.666667d86130ec06, portal: 10.0.1.110:3260] successful.
Login to [iface: iscsi1, target: iqn.2010-06.com.purestorage:flasharray.666667d86130ec06, portal: 10.0.1.110:3260] successful.
Login to [iface: iscsi2, target: iqn.2010-06.com.purestorage:flasharray.666667d86130ec06, portal: 10.0.1.110:3260] successful.
Login to [iface: iscsi3, target: iqn.2010-06.com.purestorage:flasharray.666667d86130ec06, portal: 10.0.1.110:3260] successful.

19. Add automatic iSCSI login on boot.

sudo iscsiadm -m node -L automatic

20. Confirm that each volume has eight entries, each representing a virtual device path.

lsscsi -d

Example: There are two volumes, therefore there are 16 total entries. 

[ec2-user@ip-10-0-1-235 ~]$ lsscsi -d
[2:0:0:1]    disk    PURE     FlashArray       8888  /dev/sda [8:0]
[2:0:0:2]    disk    PURE     FlashArray       8888  /dev/sdb [8:16]
[3:0:0:1]    disk    PURE     FlashArray       8888  /dev/sde [8:64]
[3:0:0:2]    disk    PURE     FlashArray       8888  /dev/sdf [8:80]
[4:0:0:1]    disk    PURE     FlashArray       8888  /dev/sdj [8:144]
[4:0:0:2]    disk    PURE     FlashArray       8888  /dev/sdl [8:176]
[5:0:0:1]    disk    PURE     FlashArray       8888  /dev/sdi [8:128]
[5:0:0:2]    disk    PURE     FlashArray       8888  /dev/sdk [8:160]
[6:0:0:1]    disk    PURE     FlashArray       8888  /dev/sdc [8:32]
[6:0:0:2]    disk    PURE     FlashArray       8888  /dev/sdd [8:48]
[7:0:0:1]    disk    PURE     FlashArray       8888  /dev/sdg [8:96]
[7:0:0:2]    disk    PURE     FlashArray       8888  /dev/sdh [8:112]
[8:0:0:1]    disk    PURE     FlashArray       8888  /dev/sdn [8:208]
[8:0:0:2]    disk    PURE     FlashArray       8888  /dev/sdp [8:240]
[9:0:0:1]    disk    PURE     FlashArray       8888  /dev/sdm [8:192]
[9:0:0:2]    disk    PURE     FlashArray       8888  /dev/sdo [8:224]

21. Enable default multipath configuration file and start the multipath daemon.

sudo mpathconf --enable --with_multipathd y

22. Replace the content of the multipath.conf file with the following configuration for Pure Storage.

sudo vim /etc/multipath.conf

  • polling_interval 10
  • vendor "PURE"
  • path_selector "queue-length 0"
  • path_grouping_policy group_by_prio
  • path_checker tur
  • fast_io_fail_tmo 10
  • dev_loss_tmo 60
  • no_path_retry 0
  • hardware_handler "1 alua"
  • prio alua
  • failback immediate

See RHEL documentation for /etc/multipath.conf attribute descriptions.

[ec2-user@ip-10-0-1-235 ~]$ sudo vim /etc/multipath.conf
[ec2-user@ip-10-0-1-235 ~]$ sudo cat /etc/multipath.conf
defaults {
       polling_interval      10
}
devices {
       device {
               vendor                "PURE"
               path_selector         "queue-length 0"
               path_grouping_policy  group_by_prio
               path_checker          tur
               fast_io_fail_tmo      10
               dev_loss_tmo          60
               no_path_retry         0
               hardware_handler      "1 alua"
               prio                  alua
               failback              immediate
       }
}

23. Restart multipathd service for multipath.conf changes to take effect.

sudo service multipathd restart

24. Run the following multipath command to confirm that each Cloud Block Store volume has multiple paths. A multipathed Cloud Block Store volume is represented by a device-mapped ID (the long identifier at the start of each entry in the output below). Verify that the paths are divided into two priority groups (prio=50 and prio=10).

sudo multipath -ll

Example: Two Cloud Block Store volumes are represented by two device-mapped IDs.

[ec2-user@ip-10-0-1-235 ~]$ sudo multipath -ll
3624a93702b60622e2b014a2200011011 dm-1 PURE    ,FlashArray
size=2.0T features='0' hwhandler='1 alua' wp=rw
|-+- policy='queue-length 0' prio=50 status=active
| |- 2:0:0:2 sdb  8:16  active ready running
| |- 3:0:0:2 sdf  8:80  active ready running
| |- 4:0:0:2 sdl  8:176 active ready running
| `- 5:0:0:2 sdk  8:160 active ready running
`-+- policy='queue-length 0' prio=10 status=enabled
  |- 6:0:0:2 sdd  8:48  active ready running
  |- 7:0:0:2 sdh  8:112 active ready running
  |- 8:0:0:2 sdp  8:240 active ready running
  `- 9:0:0:2 sdo  8:224 active ready running
3624a93702b60622e2b014a2200011010 dm-0 PURE    ,FlashArray
size=2.0T features='0' hwhandler='1 alua' wp=rw
|-+- policy='queue-length 0' prio=50 status=active
| |- 2:0:0:1 sda  8:0   active ready running
| |- 3:0:0:1 sde  8:64  active ready running
| |- 4:0:0:1 sdj  8:144 active ready running
| `- 5:0:0:1 sdi  8:128 active ready running
`-+- policy='queue-length 0' prio=10 status=enabled
  |- 6:0:0:1 sdc  8:32  active ready running
  |- 7:0:0:1 sdg  8:96  active ready running
  |- 8:0:0:1 sdn  8:208 active ready running
  `- 9:0:0:1 sdm  8:192 active ready running

25. Create mount points on the initiator.

sudo mkdir /mnt/store0
sudo mkdir /mnt/store1

26. Create the desired filesystem on each Cloud Block Store volume using the device-mapped IDs, and then mount each volume to the mount point.

sudo mkfs.ext4 /dev/mapper/<device-mapped ID>

where

<device-mapped ID> is the device-mapped ID from step 24.

The following example uses filesystem ext4 for each device.

dm-0

[ec2-user@ip-10-0-1-235 ~]$ sudo mkfs.ext4 /dev/mapper/3624a93702b60622e2b014a2200011010
mke2fs 1.42.9 (28-Dec-2013)
Discarding device blocks: done
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=0 blocks, Stripe width=1024 blocks
134217728 inodes, 536870912 blocks
26843545 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=2684354560
16384 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks:
    32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
    4096000, 7962624, 11239424, 20480000, 23887872, 71663616, 78675968,
    102400000, 214990848, 512000000

Allocating group tables: done
Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done

dm-1 

[ec2-user@ip-10-0-1-235 ~]$ sudo mkfs.ext4 /dev/mapper/3624a93702b60622e2b014a2200011011
mke2fs 1.42.9 (28-Dec-2013)
Discarding device blocks: done
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=0 blocks, Stripe width=1024 blocks
134217728 inodes, 536870912 blocks
26843545 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=2684354560
16384 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks:
    32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
    4096000, 7962624, 11239424, 20480000, 23887872, 71663616, 78675968,
    102400000, 214990848, 512000000

Allocating group tables: done
Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done

27. Mount the Cloud Block Store volumes onto the mount points.

sudo mount /dev/mapper/<device-mapped ID> <mount point>

where

<device-mapped ID> is the device-mapped ID collected from step 24.

<mount point> is the mount point created in step 25.

[ec2-user@ip-10-0-1-235 ~]$ sudo mount /dev/mapper/3624a93702b60622e2b014a2200011010 /mnt/store0
[ec2-user@ip-10-0-1-235 ~]$ sudo mount /dev/mapper/3624a93702b60622e2b014a2200011011 /mnt/store1

28. The mount points now report 2 TB, and the block storage can be consumed.

[ec2-user@ip-10-0-1-235 ~]$ df -h
Filesystem                                     Size  Used Avail Use% Mounted on
/dev/mapper/3624a93702b60622e2b014a2200011010  2.0T   81M  1.9T   1% /mnt/store0
/dev/mapper/3624a93702b60622e2b014a2200011011  2.0T   81M  1.9T   1% /mnt/store1
On Cloud Block Store: 

29. I/O should flow only to the primary controller instance. Run I/O on your Linux host and confirm on your Cloud Block Store instance with the following command:

purehost monitor --balance --interval 3

Example:

pureuser@MPIOConfig> purehost monitor --balance --interval 3
Name       Time                     Initiator WWN  Initiator IQN                        Initiator NQN  Target       Target WWN  Failover  I/O Count  I/O Relative to Max
Linux2AMI  2019-08-26 10:31:32 PDT  -              iqn.1994-05.com.redhat:b9ddc64322ef  -              (primary)    -           -         1626       100%
                                                   iqn.1994-05.com.redhat:b9ddc64322ef                 (secondary)              -         0          0%
                                                  

 

When adding subsequent Cloud Block Store volumes to the Linux host, a rescan is required before the additional storage is visible.

sudo iscsiadm -m session --rescan

Example:

[ec2-user@ip-10-0-1-235 ~]$ sudo iscsiadm -m session --rescan
Rescanning session [sid: 2, target: iqn.2010-06.com.purestorage:flasharray.666667d86130ec06, portal: 10.0.1.202:3260]
Rescanning session [sid: 3, target: iqn.2010-06.com.purestorage:flasharray.666667d86130ec06, portal: 10.0.1.202:3260]
Rescanning session [sid: 4, target: iqn.2010-06.com.purestorage:flasharray.666667d86130ec06, portal: 10.0.1.202:3260]
Rescanning session [sid: 1, target: iqn.2010-06.com.purestorage:flasharray.666667d86130ec06, portal: 10.0.1.202:3260]
Rescanning session [sid: 5, target: iqn.2010-06.com.purestorage:flasharray.666667d86130ec06, portal: 10.0.1.110:3260]
Rescanning session [sid: 6, target: iqn.2010-06.com.purestorage:flasharray.666667d86130ec06, portal: 10.0.1.110:3260]
Rescanning session [sid: 8, target: iqn.2010-06.com.purestorage:flasharray.666667d86130ec06, portal: 10.0.1.110:3260]
Rescanning session [sid: 7, target: iqn.2010-06.com.purestorage:flasharray.666667d86130ec06, portal: 10.0.1.110:3260]

Customer Support

Customers can contact Pure Storage Support with any issues or questions relating to Cloud Block Store.

Customer support also performs non-disruptive upgrades (NDUs) for Cloud Block Store instances, including Purity code upgrades and capacity upgrades.

 

 

 

 

Appendix A

IAM role and permissions

This section provides steps for creating the IAM role and permissions required to deploy and upgrade Cloud Block Store. First, create the permissions policy. Then create the IAM role and attach the permissions policy to your role.

  1. Go to the main IAM console.
  2. Create a new policy by selecting Policies and clicking Create policy.

[Image: Create policy in the IAM console]

  3. Click the JSON tab. Replace the default content of the JSON file with the following permissions. You can copy/paste the following content directly into the JSON file.
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": [
                "lambda:CreateFunction",
                "ec2:AuthorizeSecurityGroupIngress",
                "lambda:TagResource",
                "ec2:ModifyVolumeAttribute",
                "ec2:DescribeInstances",
                "iam:ListRoleTags",
                "lambda:GetFunctionConfiguration",
                "iam:PutRolePolicy",
                "dynamodb:DeleteTable",
                "iam:AddRoleToInstanceProfile",
                "ec2:RevokeSecurityGroupEgress",
                "ec2:DescribeVolumes",
                "lambda:DeleteFunction",
                "ec2:DescribeKeyPairs",
                "autoscaling:TerminateInstanceInAutoScalingGroup",
                "autoscaling:DeleteTags",
                "iam:GetRole",
                "iam:GetPolicy",
                "ec2:CreateTags",
                "ec2:ModifyNetworkInterfaceAttribute",
                "iam:DeleteRole",
                "ec2:RunInstances",
                "s3:DeleteBucketPolicy",
                "kms:DisableKey",
                "ec2:CreateVolume",
                "ec2:RevokeSecurityGroupIngress",
                "ec2:CreateNetworkInterface",
                "dynamodb:CreateTable",
                "lambda:UpdateFunctionCode",
                "autoscaling:DeleteAutoScalingGroup",
                "ec2:DescribeSubnets",
                "iam:GetRolePolicy",
                "iam:CreateServiceLinkedRole",
                "iam:CreateInstanceProfile",
                "ec2:AttachVolume",
                "kms:EnableKey",
                "autoscaling:DescribeAutoScalingInstances",
                "kms:UntagResource",
                "s3:GetBucketTagging",
                "iam:UntagRole",
                "dynamodb:ListTables",
                "kms:PutKeyPolicy",
                "iam:TagRole",
                "dynamodb:ListTagsOfResource",
                "kms:ListResourceTags",
                "s3:ListBucket",
                "lambda:UntagResource",
                "iam:PassRole",
                "ec2:DescribeAvailabilityZones",
                "autoscaling:DescribeScalingActivities",
                "lambda:ListTags",
                "s3:PutBucketTagging",
                "iam:DeleteRolePolicy",
                "kms:CreateKey",
                "s3:DeleteBucket",
                "s3:PutBucketVersioning",
                "iam:DeleteInstanceProfile",
                "lambda:UpdateFunctionConfiguration",
                "ec2:DescribeSecurityGroups",
                "iam:CreatePolicy",
                "autoscaling:CreateLaunchConfiguration",
                "ec2:DescribeVpcs",
                "kms:ListAliases",
                "kms:CreateAlias",
                "iam:RemoveRoleFromInstanceProfile",
                "ec2:DescribeVolumesModifications",
                "iam:CreateRole",
                "s3:CreateBucket",
                "iam:AttachRolePolicy",
                "ec2:DescribePlacementGroups",
                "ec2:DeleteVolume",
                "iam:DetachRolePolicy",
                "ec2:CreatePlacementGroup",
                "dynamodb:DescribeTable",
                "autoscaling:DescribeAutoScalingGroups",
                "autoscaling:UpdateAutoScalingGroup",
                "autoscaling:SetDesiredCapacity",
                "lambda:InvokeFunction",
                "autoscaling:DescribeTags",
                "ec2:DeleteNetworkInterface",
                "autoscaling:CreateOrUpdateTags",
                "autoscaling:CreateAutoScalingGroup",
                "kms:DeleteAlias",
                "ec2:DeleteTags",
                "autoscaling:DescribeLaunchConfigurations",
                "iam:DeletePolicy",
                "s3:GetBucketPolicy",
                "kms:TagResource",
                "ec2:DescribeNetworkInterfaces",
                "dynamodb:TagResource",
                "ec2:CreateSecurityGroup",
                "kms:ScheduleKeyDeletion",
                "kms:DescribeKey",
                "ec2:AuthorizeSecurityGroupEgress",
                "ec2:TerminateInstances",
                "ec2:DetachNetworkInterface",
                "ec2:DeletePlacementGroup",
                "kms:ListKeyPolicies",
                "iam:GetInstanceProfile",
                "dynamodb:UntagResource",
                "ec2:DescribeTags",
                "lambda:GetFunction",
                "kms:UpdateAlias",
                "ec2:DescribeImages",
                "kms:ListKeys",
                "autoscaling:DeleteLaunchConfiguration",
                "ec2:DeleteSecurityGroup",
                "s3:PutBucketPolicy",
                "ec2:AttachNetworkInterface"
            ],
            "Resource": "*"
        }
    ]
}
  4. Click Review Policy.
  5. Provide a name for the policy. You can call it PurityServicePermission.
  6. Click Create policy.
  7. Go back to the main IAM console.
  8. Create a role by selecting Roles and clicking Create role.
  9. Select the trusted entity by selecting:
    1. AWS service
    2. CloudFormation

[Image: Selecting CloudFormation as the trusted service]

  10. Click Next: Permissions.
  11. In the search box, type the policy name PurityServicePermission created in step 5, and check the box for this policy.

[Image: Attaching the PurityServicePermission policy]

  12. Click Next: Tags.
  13. (Optional) Add tags if desired and click Next: Review.
  14. Enter the role name PurityServiceRole and click Create role.

[Image: Creating the PurityServiceRole role]

You now have a new role with the appropriate permissions to deploy a new Cloud Block Store instance.
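The same policy and role can also be created from the AWS CLI. A minimal sketch, with hypothetical file names and account ID; purity-service-permission.json is assumed to contain the JSON policy above, and cloudformation-trust.json a standard trust policy for cloudformation.amazonaws.com:

# Hypothetical file names and account ID.
aws iam create-policy \
    --policy-name PurityServicePermission \
    --policy-document file://purity-service-permission.json

# cloudformation-trust.json is assumed to allow sts:AssumeRole for cloudformation.amazonaws.com.
aws iam create-role \
    --role-name PurityServiceRole \
    --assume-role-policy-document file://cloudformation-trust.json

aws iam attach-role-policy \
    --role-name PurityServiceRole \
    --policy-arn arn:aws:iam::<account-id>:policy/PurityServicePermission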

Appendix B   

Adding an S3 VPC Endpoint 

This appendix section shows you how to create an S3 VPC Endpoint. The following procedure also allows you to apply appropriate routes to the VPC Endpoint for your desired subnet.

  1. From the AWS console, navigate to the VPCs console.
  2. Click Endpoints.
  3. Click Create Endpoint.

[Image: Create Endpoint in the VPC console]

 

  4. Select the S3 service.

[Image: Selecting the S3 service]

 

  5. Select the following parameters:
    1. Select the desired VPC for your Cloud Block Store instance.
    2. Select the route table associated with the private subnet for the Cloud Block Store System interface.

[Image: Selecting the VPC and route table]

  6. Set a custom access policy if desired. Otherwise, leave it as Full Access and click Create Endpoint.

[Image: Endpoint access policy]

 

Adding a DynamoDB VPC Endpoint

This appendix section shows you how to create a DynamoDB VPC Endpoint. The following procedure also allows you to apply appropriate routes to the VPC Endpoint for your desired subnet.

  1. From the AWS console, navigate to the VPCs console.
  2. Click Endpoints.
  3. Click Create Endpoint.
  4. Select the DynamoDB service.

[Image: Selecting the DynamoDB service]

  5. Select the following parameters:
    1. Select the desired VPC for your Cloud Block Store instance.
    2. Select the route table associated with the private subnet for the Cloud Block Store System interface.

 

[Image: Selecting the VPC and route table]

  6. Set a custom access policy if desired. Otherwise, leave it as Full Access and click Create Endpoint.

[Image: Endpoint access policy]

 

Confirming the route created for VPC endpoints
  1. Navigate to the VPC console.
  2. Select the private subnet used for the Cloud Block Store System interface and check that there is a route entry for both your S3 and DynamoDB VPC Endpoints. As seen in the following example, the VPC Endpoint ID has a vpce- prefix.

[Image: Route table with vpce- entries for the S3 and DynamoDB endpoints]
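You can also confirm these routes from the AWS CLI by querying the route table associated with the System subnet. A sketch with a hypothetical subnet ID:

# Hypothetical subnet ID; list the routes, including the vpce- entries for S3 and DynamoDB.
aws ec2 describe-route-tables \
    --filters "Name=association.subnet-id,Values=subnet-0eee5555ffff6666a" \
    --query "RouteTables[].Routes[]" \
    --output table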