
Host Management for Cloud Block Store on AWS


Mounting a volume on an iSCSI host

The following steps provide examples of how to connect a Windows host and an Amazon Linux 2 EC2 compute host to a Pure Storage Cloud Block Store instance.

Prerequisites:
  • The EC2 compute host must have iSCSI initiator client software installed. Most modern operating systems include it by default.

  • The EC2 compute host must have network access to the iSCSI subnet of the Cloud Block Store instance. If the EC2 compute host's network ports and the Cloud Block Store instance's iSCSI ports are in different subnets, you must have route table entries allowing them to communicate. Ensure that Security Groups, Network ACLs, and firewalls are not preventing connectivity; a quick connectivity check is sketched after this list.

  • Ensure that you've followed the previous sections to create the host and the volumes on Cloud Block Store, and that you've connected the volume to the host in Cloud Block Store as well.
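Before proceeding, you can verify that the host can reach a Cloud Block Store iSCSI port on TCP port 3260. This is a minimal sketch using an example iSCSI IP address (10.0.1.202); substitute your own, and note that nc may need to be installed on the Linux host.

On Windows:

PS C:\> Test-NetConnection -ComputerName 10.0.1.202 -Port 3260

On Linux:

# Reports success if the iSCSI port is reachable
nc -zv 10.0.1.202 3260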

Setup Windows iSCSI for use with Cloud Block Store

Setting up multipathing with Microsoft MPIO

To protect against a single point of failure, this procedure enables multiple paths from the application host to the Cloud Block Store instance. You only need to perform this procedure once on your Windows application host.

  1. Log onto the Windows host.
  2. To check if Microsoft MPIO is installed on the system, open an elevated PowerShell terminal (Run as administrator) and run:
PS C:\> Get-WindowsFeature -Name 'Multipath-IO'
Display Name                                            Name         Install State
------------                                            ----         -------------
[ ] Multipath I/O                                       Multipath-IO Available
  3. If the install state shows 'Available', follow the next steps to install Microsoft MPIO. If it shows 'Installed', skip ahead to step 7.

  4. In the same PowerShell terminal, run:

PS C:\> Add-WindowsFeature -Name 'Multipath-IO'
Success Restart Needed Exit Code      Feature Result
------- -------------- ---------      --------------
True    Yes            SuccessRest... {Multipath I/O}
WARNING: You must restart this server to finish the installation process.
  5. Reboot the Windows host.
  6. When the Windows host boots back up, verify that Microsoft MPIO is installed.

PS C:\> Get-WindowsFeature -Name 'Multipath-IO'
Display Name                                            Name         Install State
------------                                            ----         -------------
[X] Multipath I/O                                       Multipath-IO Installed
  7. In the same PowerShell terminal, run the following command to start the iSCSI service.

PS C:\> Start-Service -Name msiscsi
  8. To set the iSCSI service to start on boot, run:
PS C:\> Set-Service -Name msiscsi -StartupType Automatic
  9. Add Pure FlashArray as an MPIO vendor. In the same PowerShell terminal, run:

PS C:\> New-MSDSMSupportedHw -VendorId PURE -ProductId FlashArray
VendorId ProductId
-------- ---------
PURE     FlashArray
  10. Enable iSCSI support in Microsoft MPIO. In the same PowerShell terminal, run:

PS C:\> Enable-MSDSMAutomaticClaim -BusType iSCSI
VendorId ProductId
-------- ---------
MSFT2005 iSCSIBusType_0x9
False
  11. Set the default MPIO path policy to Least Queue Depth.

PS C:\> Set-MSDSMGlobalDefaultLoadBalancePolicy -Policy LQD
  12. Set the MPIO timer values. In the same PowerShell terminal, run:

PS C:\> Set-MPIOSetting -NewPathRecoveryInterval 20 -CustomPathRecovery Enabled -NewPDORemovePeriod 120 -NewDiskTimeout 60 -NewPathVerificationState Enabled
  13. If prompted by the above commands, reboot the Windows host.

MPIO setup is now complete. 
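To confirm the configuration took effect, you can query the current settings with the same built-in MPIO module cmdlets:

PS C:\> Get-MSDSMGlobalDefaultLoadBalancePolicy
PS C:\> Get-MPIOSetting

The first command should return LQD, and the second should reflect the timer values set above.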

Mounting a volume on Windows iSCSI host

Follow the next steps (1-7) to establish iSCSI connections. Once you make a connection, any subsequent volume connected from Cloud Block Store to this host appears in Disk Management.

To complete the following steps, you need the IP addresses of both Cloud Block Store controller iSCSI interfaces. See the Viewing Cloud Block Store Network Interfaces section to obtain them, and keep them handy.

  1. On the Windows host, open an elevated PowerShell terminal (Run as administrator) and run the following command to gather the IP address of your Windows instance. The following example shows 10.0.1.118.
PS C:\> Get-NetAdapter | Get-NetIPAddress -AddressFamily IPv4
IPAddress         : 10.0.1.118
InterfaceIndex    : 5
InterfaceAlias    : Ethernet
AddressFamily     : IPv4
Type              : Unicast
PrefixLength      : 24
PrefixOrigin      : Dhcp
SuffixOrigin      : Dhcp
AddressState      : Preferred
ValidLifetime     : 00:57:00
PreferredLifetime : 00:57:00
SkipAsSource      : False
PolicyStore       : ActiveStore

2. In the same PowerShell window, run the following command to create a new Target Portal connection between your Windows host and your Cloud Block Store instance.

PS C:\> New-IscsiTargetPortal -TargetPortalAddress <CBS iSCSI IP address>

        where

<CBS iSCSI IP address> is the IP address of the iSCSI port on Cloud Block Store controller 0 or controller 1. You only need to enter one.

  3. In the same PowerShell window, run the following command to create an iSCSI session to Cloud Block Store controller 0.
PS C:\> Get-IscsiTarget | Connect-IscsiTarget -InitiatorPortalAddress <Windows IP address> -IsMultipathEnabled $true -IsPersistent $true -TargetPortalAddress <CBS iSCSI IP address CT0>

where

<Windows IP address> is the Windows host IP address obtained in step 1.

<CBS iSCSI IP address CT0> is the iSCSI IP address of Cloud Block Store controller 0.


  4. (Optional) For additional throughput, you may add more iSCSI sessions. Repeat the same command for each additional iSCSI session you would like to add to controller 0 (a scripted sketch follows below). You can add up to 32 iSCSI sessions to each controller.

See Appendix A for detailed information.
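The repetition can be scripted. A minimal sketch, assuming the example Windows host IP from step 1 and a hypothetical controller 0 iSCSI IP; adjust the session count for your instance size (see Appendix A):

PS C:\> $winIp = '10.0.1.118'
PS C:\> $ct0Ip = '10.0.1.202'
PS C:\> 1..4 | ForEach-Object { Get-IscsiTarget | Connect-IscsiTarget -InitiatorPortalAddress $winIp -IsMultipathEnabled $true -IsPersistent $true -TargetPortalAddress $ct0Ip }

Each Connect-IscsiTarget call adds one more session to controller 0.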

  5. In the same PowerShell window, run the same command to create iSCSI sessions to Cloud Block Store controller 1.
PS C:\> Get-IscsiTarget | Connect-IscsiTarget -InitiatorPortalAddress <Windows IP address> -IsMultipathEnabled $true -IsPersistent $true -TargetPortalAddress <CBS iSCSI IP address CT1>

where

<Windows IP address> is the Windows host IP address obtained in step 1.

<CBS iSCSI IP address CT1> is the iSCSI IP address of Cloud Block Store controller 1.

  6. (Optional) For additional throughput, you may add more iSCSI sessions. Repeat the same command for each additional iSCSI session you would like to add to controller 1. You can add up to 32 iSCSI sessions to each controller.

See Appendix A for detailed information.

You can use this PowerShell script (GitHub link below) to automate steps 3-6:

https://github.com/PureStorage-OpenC...CSISession.ps1

  7. To confirm the total number of sessions, run:
PS C:\> Get-IscsiSession | measure


  8. Go to Disk Management and perform a rescan to confirm the new volume.

  9. Bring the volume online and format it with the desired file system, as sketched below. Any subsequent volume you create and connect to this host in Cloud Block Store (CLI/GUI/REST) appears automatically in Disk Management after a rescan.
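These actions can also be done in PowerShell. A minimal sketch, assuming the new disks report a FriendlyName beginning with PURE, are still raw, and that NTFS is the desired file system:

# Bring the new Pure disks online
PS C:\> Get-Disk | Where-Object { $_.FriendlyName -like 'PURE*' -and $_.PartitionStyle -eq 'RAW' } | Set-Disk -IsOffline $false
# Initialize, create a maximum-size partition, assign a drive letter, and format
PS C:\> Get-Disk | Where-Object { $_.FriendlyName -like 'PURE*' -and $_.PartitionStyle -eq 'RAW' } | Initialize-Disk -PartitionStyle GPT -PassThru | New-Partition -UseMaximumSize -AssignDriveLetter | Format-Volume -FileSystem NTFS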

You have successfully connected and mounted a Cloud Block Store volume to your host. 

Setup Linux iSCSI for use with Cloud Block Store

This example walks you through connecting Cloud Block Store volumes to an Amazon Linux 2 AMI host. Some steps repeat material covered earlier in this guide.

The following instructions cover the basics of setting up Linux host iSCSI connectivity to Cloud Block Store. More in-depth best practices for Linux can be found in this KB article.

Similar setup instructions for Ubuntu 18.04 can be found at this link.

The following steps include:

  • Configuring the Linux host for iSCSI and MPIO with Cloud Block Store
  • Host and volume creation on Cloud Block Store
  • Connecting and mounting Cloud Block Store volumes to Linux host
On the Amazon Linux 2 AMI Host

1. Log in to the Amazon Linux 2 EC2 instance.

2. Install iscsi-initiator-utils on the Linux host.

sudo yum -y install iscsi-initiator-utils

3. Install lsscsi.

sudo yum -y install lsscsi

4. Install the device-mapper-multipath package.

sudo yum -y install device-mapper-multipath

5. Start the iSCSI daemon service.

sudo service iscsid start

6. This step increases total bandwidth by allowing the host to create 32 iSCSI sessions per iSCSI connection. The command below edits the /etc/iscsi/iscsid.conf file, changing the node.session.nr_sessions field to 32. See Appendix A for more detailed information about iSCSI sessions.

Run the command below.

sudo sed -i 's/^\(node\.session\.nr_sessions\s*=\s*\).*$/\132/' /etc/iscsi/iscsid.conf
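To confirm the change, print the edited line:

# Expected output: node.session.nr_sessions = 32
grep nr_sessions /etc/iscsi/iscsid.conf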

7. Remove the 51-ec2-hvm-devices.rules file.

sudo rm /etc/udev/rules.d/51-ec2-hvm-devices.rules

The step above is only required on the Amazon Linux 2 AMI.

8. (Optional) This step may improve performance between the host and Cloud Block Store volumes.

Create a new udev rules file called 99-pure-storage.rules for Pure Storage and copy the contents into the file as shown in the following example.

sudo vim /etc/udev/rules.d/99-pure-storage.rules

Example: 

[ec2-user@ip-10-0-1-235 ~]$ cat /etc/udev/rules.d/99-pure-storage.rules
# Recommended settings for Pure Storage FlashArray.

# Use noop scheduler for high-performance solid-state storage
ACTION=="add|change", KERNEL=="sd*[!0-9]", SUBSYSTEM=="block", ENV{ID_VENDOR}=="PURE", ATTR{queue/scheduler}="noop"

# Reduce CPU overhead due to entropy collection
ACTION=="add|change", KERNEL=="sd*[!0-9]", SUBSYSTEM=="block", ENV{ID_VENDOR}=="PURE", ATTR{queue/add_random}="0"

# Spread CPU load by redirecting completions to originating CPU
ACTION=="add|change", KERNEL=="sd*[!0-9]", SUBSYSTEM=="block", ENV{ID_VENDOR}=="PURE", ATTR{queue/rq_affinity}="2"

# Set the HBA timeout to 60 seconds
ACTION=="add", SUBSYSTEMS=="scsi", ATTRS{model}=="FlashArray ", RUN+="/bin/sh -c 'echo 60 > /sys/$DEVPATH/device/timeout'"

 

9. Enable the default multipath configuration file and start the multipath daemon.

sudo mpathconf --enable --with_multipathd y

10. Replace the contents of the multipath.conf file with the following configuration for Pure Storage.

sudo vim /etc/multipath.conf

  • polling_interval 10
  • vendor "PURE"
  • path_selector "queue-length 0"
  • path_grouping_policy group_by_prio
  • path_checker tur
  • fast_io_fail_tmo 10
  • no_path_retry queue
  • hardware_handler "1 alua"
  • prio alua
  • failback immediate

See RHEL documentation for /etc/multipath.conf attribute descriptions.

Example: 

[ec2-user@ip-10-0-1-235 ~]$ sudo cat /etc/multipath.conf
defaults {
       polling_interval 10
       user_friendly_names yes
       find_multipaths yes
}
devices {
       device {
               vendor                "PURE"
               path_selector         "queue-length 0"
               path_grouping_policy  group_by_prio
               path_checker          tur
               fast_io_fail_tmo      10
               no_path_retry         queue
               hardware_handler      "1 alua"
               prio                  alua
               failback              immediate
       }
}

11. Restart the multipathd service for the multipath.conf changes to take effect.

sudo service multipathd restart

12. Retrieve the Linux initiator IQN.

cat /etc/iscsi/initiatorname.iscsi

Example: 

[ec2-user@ip-10-0-1-235 ~]$ cat /etc/iscsi/initiatorname.iscsi
InitiatorName=iqn.1994-05.com.redhat:361dfc3de387

13. Reboot the Linux host.

sudo reboot

On the Cloud Block Store Instance

14. Using the CLI (ssh), log into the Cloud Block Store instance using the management IP. See the Viewing Cloud Block Store Network Interfaces section for the location of your management IP addresses. Note that all of the following steps can also be accomplished via the CBS GUI.

  • Default username: pureuser
  • Default password: pureuser
ubuntu@ip-10-0-0-107:~$ ssh pureuser@10.0.1.61
pureuser@10.0.1.61's password:

Mon Sep 09 11:40:25 2019
Welcome pureuser. This is Purity Version 5.3.0.beta10 on FlashArray MPIOConfig
http://www.purestorage.com/
pureuser@CBS>

15. Create a host on Cloud Block Store.

purehost create --iqnlist <IQN number> <hostname>

where

<IQN number> is the initiator IQN number gathered in step 12.

<hostname> is the desired Linux hostname.

Example:

pureuser@CBS> purehost create --iqnlist iqn.1994-05.com.redhat:361dfc3de387 Linux2AMI
Name       WWN  IQN                                  NQN  Host Group
Linux2AMI  -    iqn.1994-05.com.redhat:361dfc3de387  -    -

 

16. Create one or more volumes on Cloud Block Store.

purevol create <volume name> --size <size>

where

<volume name> is the desired volume name.

<size> is the desired volume size (GB or TB suffix).

This example shows the creation of a 2 TB volume:

pureuser@CBS> purevol create vol1 --size 2TB
Name  Size  Source  Created                  Serial
vol1  2T    -       2019-09-09 11:41:55 PDT  2B60622E2B014A2200011010

17. Connect host to volumes.

purevol connect <volume name> --host <host name>

where

<volume name> is the name of the volume.

<host name> is the name of the host. 

Example: 

pureuser@CBS> purevol connect vol1 --host Linux2AMI
Name  Host Group  Host       LUN
vol1  -           Linux2AMI  1
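To verify the connection from the array side, you can list volume connections; a quick check, assuming the Purity CLI's purevol list --connect option:

pureuser@CBS> purevol list --connect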

18. Collect the IP addresses of each controller and the IQN for Cloud Block Store. The IQN is identical for both iSCSI interfaces.

pureport list

Example:

pureuser@CBS> pureport list
Name      WWN  Portal           IQN                                                      NQN  Failover
CT0.ETH2  -    10.0.1.202:3260  iqn.2010-06.com.purestorage:flasharray.666667d86130ec06  -    -
CT1.ETH2  -    10.0.1.110:3260  iqn.2010-06.com.purestorage:flasharray.666667d86130ec06  -    - 

iSCSI Login

On the AMI Linux Host

19. Create a new iSCSI interface on the Linux host initiator. 

sudo iscsiadm -m iface -I iscsi0 -o new

Example:

[ec2-user@ip-10-0-1-235 ~]$ sudo iscsiadm -m iface -I iscsi0 -o new
New interface iscsi0 added

20. Discover the target iSCSI portals using an iSCSI interface IP.

sudo iscsiadm -m discovery -t st -p <CBS iSCSI IP>:3260

where

<CBS iSCSI IP> is the iSCSI IP address of Cloud Block Store controller 0 or controller 1, collected in step 18. You only need to enter one iSCSI IP address.

Example: The discovery returns the iSCSI IPs of both CBS controllers.

[ec2-user@ip-10-0-1-235 ~]$ sudo iscsiadm -m discovery -t st -p 10.0.1.202:3260
10.0.1.202:3260,1 iqn.2010-06.com.purestorage:flasharray.666667d86130ec06
10.0.1.110:3260,1 iqn.2010-06.com.purestorage:flasharray.666667d86130ec06

21. Connect the Linux host to the Cloud Block Store instance.

sudo iscsiadm -m node --login

Example: You will see 64 logins (32 iSCSI session logins per CBS controller).

[ec2-user@ip-10-0-1-235 ~]$ sudo iscsiadm -m node --login
Logging in to [iface: iscsi0, target: iqn.2010-06.com.purestorage:flasharray.666667d86130ec06, portal: 10.0.1.202:3260] (multiple)
Logging in to [iface: iscsi0, target: iqn.2010-06.com.purestorage:flasharray.666667d86130ec06, portal: 10.0.1.202:3260] (multiple)
.
.
.
Logging in to [iface: iscsi0, target: iqn.2010-06.com.purestorage:flasharray.666667d86130ec06, portal: 10.0.1.110:3260] (multiple)
Logging in to [iface: iscsi0, target: iqn.2010-06.com.purestorage:flasharray.666667d86130ec06, portal: 10.0.1.110:3260] (multiple)
Login to [iface: iscsi0, target: iqn.2010-06.com.purestorage:flasharray.666667d86130ec06, portal: 10.0.1.202:3260] successful.
Login to [iface: iscsi0, target: iqn.2010-06.com.purestorage:flasharray.666667d86130ec06, portal: 10.0.1.202:3260] successful.
.
.
.
Login to [iface: iscsi2, target: iqn.2010-06.com.purestorage:flasharray.666667d86130ec06, portal: 10.0.1.110:3260] successful.
Login to [iface: iscsi3, target: iqn.2010-06.com.purestorage:flasharray.666667d86130ec06, portal: 10.0.1.110:3260] successful.

22. Add automatic iSCSI login on boot.

sudo iscsiadm -m node -L automatic
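To verify that the nodes are now set to log in automatically at boot, you can inspect the node records:

# Each node record should show: node.startup = automatic
sudo iscsiadm -m node -o show | grep node.startup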

23. Confirm the number of iSCSI sessions.

iscsiadm --mode session
 

Example: There should be 64 entries (32 iSCSI sessions per CBS controller).

[ec2-user@ip-10-0-1-235 ~]$ iscsiadm --mode session
tcp: [1] 10.0.1.110:3260,1 iqn.2010-06.com.purestorage:flasharray.8650085ea65b9fa (non-flash)
tcp: [10] 10.0.1.110:3260,1 iqn.2010-06.com.purestorage:flasharray.8650085ea65b9fa (non-flash)
tcp: [11] 10.0.1.110:3260,1 iqn.2010-06.com.purestorage:flasharray.8650085ea65b9fa (non-flash)
tcp: [12] 10.0.1.110:3260,1 iqn.2010-06.com.purestorage:flasharray.8650085ea65b9fa (non-flash)
tcp: [13] 10.0.1.110:3260,1 iqn.2010-06.com.purestorage:flasharray.8650085ea65b9fa (non-flash)
.
.
.
tcp: [63] 10.0.1.110:3260,1 iqn.2010-06.com.purestorage:flasharray.8650085ea65b9fa (non-flash)
tcp: [64] 10.0.1.110:3260,1 iqn.2010-06.com.purestorage:flasharray.8650085ea65b9fa (non-flash)
tcp: [7] 10.0.1.110:3260,1 iqn.2010-06.com.purestorage:flasharray.8650085ea65b9fa (non-flash)
tcp: [8] 10.0.1.110:3260,1 iqn.2010-06.com.purestorage:flasharray.8650085ea65b9fa (non-flash)
tcp: [9] 10.0.1.110:3260,1 iqn.2010-06.com.purestorage:flasharray.8650085ea65b9fa (non-flash)

24. Confirm that each volume has 64 entries, each representing a virtual device path.

lsscsi -d

Example: There should be 64 entries per CBS volume connected to this Linux host.

[ec2-user@ip-10-0-1-235 ~]$ lsscsi -d
[2:0:0:1]    disk    PURE     FlashArray       8888  /dev/sdb [8:16]
[3:0:0:1]    disk    PURE     FlashArray       8888  /dev/sda [8:0]
[4:0:0:1]    disk    PURE     FlashArray       8888  /dev/sdc [8:32]
[5:0:0:1]    disk    PURE     FlashArray       8888  /dev/sdd [8:48]
[6:0:0:1]    disk    PURE     FlashArray       8888  /dev/sde [8:64]
[7:0:0:1]    disk    PURE     FlashArray       8888  /dev/sdf [8:80]
[8:0:0:1]    disk    PURE     FlashArray       8888  /dev/sdg [8:96]
[9:0:0:1]    disk    PURE     FlashArray       8888  /dev/sdh [8:112]
.
.
.
[63:0:0:1]   disk    PURE     FlashArray       8888  /dev/sdar[66:176]
[64:0:0:1]   disk    PURE     FlashArray       8888  /dev/sdbe[67:128]
[65:0:0:1]   disk    PURE     FlashArray       8888  /dev/sdbj[67:208]

25. Run the multipath command below to confirm that each Cloud Block Store volume has multiple paths. A multipathed Cloud Block Store volume is represented by a device-mapped ID (the long WWID at the start of each entry, such as 3624a93702b60622e2b014a2200011011 below). Verify the paths are divided into two priority groups (prio=50 and prio=10 in the following example).

sudo multipath -ll

Example: Each Cloud Block Store volume is represented by a device-mapped ID.

[ec2-user@ip-10-0-1-235 ~]$ sudo multipath -ll
3624a93702b60622e2b014a2200011011 dm-0 PURE    ,FlashArray
size=2.0T features='0' hwhandler='1 alua' wp=rw
|-+- policy='queue-length 0' prio=50 status=active
| |- 2:0:0:2 sdb  8:16  active ready running
| |- 3:0:0:2 sdf  8:80  active ready running
| |- 4:0:0:2 sdl  8:176 active ready running
| `- 5:0:0:2 sdk  8:160 active ready running
.
.
.
.
`-+- policy='queue-length 0' prio=10 status=enabled
  |- 6:0:0:2 sdd  8:48  active ready running
  |- 7:0:0:2 sdh  8:112 active ready running
  |- 8:0:0:2 sdp  8:240 active ready running
  `- 9:0:0:2 sdo  8:224 active ready running
.
.
.

26. Create a mount point on the initiator.

sudo mkdir /mnt/store0

27. Create the desired file system on each Cloud Block Store volume using its device-mapped ID.

sudo mkfs.ext4 /dev/mapper/<device-mapped ID>

where

<device-mapped ID> is a device-mapped ID from step 25.

The following example creates an ext4 file system on the Cloud Block Store volume dm-0:

[ec2-user@ip-10-0-1-235 ~]$ sudo mkfs.ext4 /dev/mapper/3624a93702b60622e2b014a2200011011
mke2fs 1.42.9 (28-Dec-2013)
Discarding device blocks: done
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=0 blocks, Stripe width=1024 blocks
134217728 inodes, 536870912 blocks
26843545 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=2684354560
16384 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks:
    32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
    4096000, 7962624, 11239424, 20480000, 23887872, 71663616, 78675968,
    102400000, 214990848, 512000000

Allocating group tables: done
Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done

28. Mount the Cloud Block Store volume onto the mount point.

sudo mount /dev/mapper/<device-mapped ID> <mount point>

where

<device-mapped ID> is the device-mapped ID collected from step 25.

<mount point> is the mount point created in step 26.

[ec2-user@ip-10-0-1-235 ~]$ sudo mount /dev/mapper/3624a93702b60622e2b014a2200011011 /mnt/store0

29. The mount point now reports 2 TB, and the block storage can be consumed.

[ec2-user@ip-10-0-1-235 ~]$ df -h
Filesystem                                     Size  Used Avail Use% Mounted on
/dev/mapper/3624a93702b60622e2b014a2200011010  2.0T   81M  1.9T   1% /mnt/store0
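(Optional) To make the mount persist across reboots, you can add an /etc/fstab entry. A minimal sketch using the device-mapped ID and mount point from the preceding steps; _netdev defers the mount until networking (and thus iSCSI) is up, and nofail prevents boot from stalling if the volume is unavailable:

/dev/mapper/3624a93702b60622e2b014a2200011011 /mnt/store0 ext4 _netdev,nofail 0 0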
On Cloud Block Store
  • (Optional check) I/O should flow only to the primary controller. Run I/O on your Linux host and confirm on your Cloud Block Store instance with the following command:

purehost monitor --balance --interval 3

Example:

pureuser@CBS> purehost monitor --balance --interval 3
Name       Time                     Initiator WWN  Initiator IQN                        Initiator NQN  Target       Target WWN  Failover  I/O Count  I/O Relative to Max
Linux2AMI  2019-08-26 10:31:32 PDT  -              iqn.1994-05.com.redhat:b9ddc64322ef  -              (primary)    -           -         1626       100%
                                                   iqn.1994-05.com.redhat:b9ddc64322ef                 (secondary)              -         0          0%
                                                  

Adding Additional Volumes 

On Cloud Block Store

You can add more volumes using the CLI or GUI.

  1. To create subsequent Cloud Block Store volumes, use the purevol create command.

purevol create <vol name> --size <volume size>

  2. Use the purevol connect command to connect the volume to the desired host.

purevol connect <vol name> --host <hostname>

  3. Rescan the host to see the additional storage.
  • On Windows, go to Disk Management and perform a rescan to confirm the new volume is present.
  • On Linux, use the sudo iscsiadm -m session --rescan command.

Example:

[ec2-user@ip-10-0-1-235 ~]$ sudo iscsiadm -m session --rescan
Rescanning session [sid: 2, target: iqn.2010-06.com.purestorage:flasharray.666667d86130ec06, portal: 10.0.1.202:3260]
Rescanning session [sid: 3, target: iqn.2010-06.com.purestorage:flasharray.666667d86130ec06, portal: 10.0.1.202:3260]
Rescanning session [sid: 4, target: iqn.2010-06.com.purestorage:flasharray.666667d86130ec06, portal: 10.0.1.202:3260]
Rescanning session [sid: 1, target: iqn.2010-06.com.purestorage:flasharray.666667d86130ec06, portal: 10.0.1.202:3260]
Rescanning session [sid: 5, target: iqn.2010-06.com.purestorage:flasharray.666667d86130ec06, portal: 10.0.1.110:3260]
Rescanning session [sid: 6, target: iqn.2010-06.com.purestorage:flasharray.666667d86130ec06, portal: 10.0.1.110:3260]
Rescanning session [sid: 8, target: iqn.2010-06.com.purestorage:flasharray.666667d86130ec06, portal: 10.0.1.110:3260]
Rescanning session [sid: 7, target: iqn.2010-06.com.purestorage:flasharray.666667d86130ec06, portal: 10.0.1.110:3260]

 

Appendix A

Performance Considerations

  • iSCSI Sessions:

It is important to note that AWS imposes bandwidth limits on each TCP connection: a single TCP connection is capable of roughly 5 Gb/s of networking in AWS. Since a single iSCSI session equates to a single TCP connection, each iSCSI session is likewise limited in throughput. For applications that need higher throughput, increase the number of iSCSI sessions on the EC2 host. If you are looking to maximize throughput from a given EC2 instance, the appropriate number of iSCSI sessions varies with the instance size. The Windows and Linux examples above already include steps to set these values.


The approximate guidance is to use 2 iSCSI sessions per controller for each "xlarge" increment of EC2 instance size. You can always increase the number of sessions beyond this guidance (up to 32 iSCSI sessions per controller) if needed.

For example, an application running on an EC2 instance of size:

c5.2xlarge would have 4 iSCSI sessions to each CBS controller.

m5.4xlarge would have 8 iSCSI sessions to each CBS controller.

r5.8xlarge would have 16 iSCSI sessions to each CBS controller.

r5n.16xlarge would have 32 iSCSI sessions to each CBS controller.

The guidance above provides approximate values, which can be increased up to 32 iSCSI sessions per controller.

  • MPIO Settings

I/O should flow only to the primary controller. Ensure you followed the steps above to set the MPIO parameters appropriately on Windows and Linux hosts so that I/Os are sent to the primary controller.

To confirm, run I/O on your host and run the following command on your Cloud Block Store instance:

purehost monitor --balance --interval 3

Example: Notice that the I/O Count shows only for the primary controller.

pureuser@CBS> purehost monitor --balance --interval 3
Name       Time                     Initiator WWN  Initiator IQN                        Initiator NQN  Target       Target WWN  Failover  I/O Count  I/O Relative to Max
Linux2AMI  2019-08-26 10:31:32 PDT  -              iqn.1994-05.com.redhat:b9ddc64322ef  -              (primary)    -           -         1626       100%
                                                   iqn.1994-05.com.redhat:b9ddc64322ef                 (secondary)              -         0          0%
                                                  

 

  • Amazon EC2 Host Network Bandwidth Limits

Keep in mind that each Amazon EC2 instance type has network bandwidth limits. See Amazon EC2 instance types. When running performance load testing with a Cloud Block Store instance, ensure the Amazon EC2 instance that the application is running on has enough network bandwidth. For example, a Cloud Block Store V20A-R1 instance uses C5n.18xlarge instances for the controllers, which have up to 100 Gb/s of bandwidth. Therefore the application host should use a single Amazon EC2 instance with matching network limits (e.g., 1 x C5n.18xlarge), or multiple Amazon EC2 instances whose bandwidth adds up to the 100 Gb/s limit (e.g., 4 x r5n.8xlarge). This ensures that the application is not the bottleneck.

 

  • VPC Endpoints

AWS highly encourages VPC Endpoints for many security and cost reasons. VPC endpoints also improve Cloud Block Store performance by ensuring there are no unnecessary network hops through the public internet when Cloud Block Store writes encrypted data and metadata to Amazon S3 and DynamoDB, respectively.

It is highly advised that customers use VPC Endpoints for Amazon S3 and Amazon DynamoDB. See Appendix B for steps to add Amazon S3 and Amazon DynamoDB VPC Endpoints to an existing subnet; a CLI sketch follows.
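For reference, gateway endpoints can also be created with the AWS CLI. A minimal sketch with hypothetical VPC, route table, and region values; substitute your own:

# Amazon S3 gateway endpoint
aws ec2 create-vpc-endpoint --vpc-id vpc-0123456789abcdef0 --service-name com.amazonaws.us-west-2.s3 --route-table-ids rtb-0123456789abcdef0

# Amazon DynamoDB gateway endpoint
aws ec2 create-vpc-endpoint --vpc-id vpc-0123456789abcdef0 --service-name com.amazonaws.us-west-2.dynamodb --route-table-ids rtb-0123456789abcdef0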