
Host Management for Cloud Block Store on Azure



Using Azure Accelerated Networking with Cloud Block Store

Beginning with release 6.2.4 of Cloud Block Store, all newly deployed CBS instances have Azure Accelerated Networking support enabled by default. What differentiates Azure Accelerated Networking from Azure's traditional networking architecture is that it offloads much of the networking stack from the virtual machine onto the network interface card hardware, bypassing the virtual switch. This reduces latency and VM host CPU utilization, so applications run faster and more consistently.

The following Windows operating systems support Accelerated Networking, and we recommend using them with CBS whenever possible for optimal performance:

  • Windows Server 2019 Standard/Datacenter
  • Windows Server 2016 Standard/Datacenter
  • Windows Server 2012 R2 Standard/Datacenter

For more information and instructions on how to deploy Windows-based Azure VMs with Accelerated Networking, please see this article.

The following Linux operating systems from the Azure Gallery support Accelerated Networking, and we recommend using them with CBS whenever possible for optimal performance:

  • Ubuntu 14.04 with the linux-azure kernel
  • Ubuntu 16.04 or later
  • SLES12 SP3 or later
  • RHEL 7.4 or later
  • CentOS 7.4 or later
  • CoreOS Linux
  • Debian "Stretch" with backports kernel, Debian "Buster" or later
  • Oracle Linux 7.4 and later with Red Hat Compatible Kernel (RHCK)
  • Oracle Linux 7.5 and later with UEK version 5
  • FreeBSD 10.4, 11.1 & 12.0 or later

For more information and instructions on how to deploy Linux-based Azure VMs with Accelerated Networking, please see this article.
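If you need to confirm whether Accelerated Networking is already enabled on an existing VM's NIC, or to enable it, one option is the Azure CLI. The following is a minimal sketch; the resource group and NIC names are placeholders, and the VM must be stopped/deallocated before the setting can be changed:

# Check the current setting on the NIC
az network nic show --resource-group <resource group> --name <NIC name> --query enableAcceleratedNetworking

# Enable Accelerated Networking on the NIC (with the VM deallocated)
az network nic update --resource-group <resource group> --name <NIC name> --accelerated-networking true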

Windows Host Setup and Management for iSCSI

Setting up multipathing with Microsoft MPIO

To protect against a single point of failure, this procedure configures multiple paths from the application host to the Cloud Block Store instance. You only need to perform this procedure once on your Windows application host.

  1. Log onto the Windows host.
  2. To check if Microsoft MPIO is installed on the system, open an elevated PowerShell terminal (Run as administrator) and run:
PS C:\> Get-WindowsFeature -Name 'Multipath-IO'
Display Name                                            Name         Install State
------------                                            ----         -------------
[ ] Multipath I/O                                       Multipath-IO Available
  3. If it shows the install state as 'Available', follow the next steps to install Microsoft MPIO. If it shows as 'Installed', move on to step 7.

  4. In the same PowerShell terminal, run:

PS C:\> Add-WindowsFeature -Name 'Multipath-IO'
Success Restart Needed Exit Code      Feature Result
------- -------------- ---------      --------------
True    Yes       SuccessRest...     {Multipath I/O}
WARNING: You must restart this server to finish the installation process.
  5. Reboot the Windows host.
  6. When the Windows host boots back up, verify that Microsoft MPIO is installed.

PS C:\> Get-WindowsFeature -Name 'Multipath-IO'
Display Name                                            Name         Install State
------------                                            ----         -------------
[X] Multipath I/O                                       Multipath-IO Installed
  7. In the same PowerShell terminal, run the following command to start the iSCSI service.

PS C:\> Start-Service -Name msiscsi
  8. To set the iSCSI service to start on boot, run:
PS C:\> Set-Service -Name msiscsi -StartupType Automatic
  9. Add Pure FlashArray as an MPIO vendor. In the same PowerShell terminal, run:

PS C:\> New-MSDSMSupportedHw -VendorId PURE -ProductId FlashArray
VendorId ProductId
-------- ---------
PURE     FlashArray
  10. Enable iSCSI support by Microsoft MPIO. In the same PowerShell terminal, run:

PS C:\> Enable-MSDSMAutomaticClaim -BusType iSCSI
VendorId ProductId
-------- ---------
MSFT2005 iSCSIBusType_0x9
False
  11. Set the default MPIO path policy to Lowest Queue Depth.

PS C:\> Set-MSDSMGlobalDefaultLoadBalancePolicy -Policy LQD
  12. Set MPIO Timer Values. In the same PowerShell terminal, run:

PS C:\> Set-MPIOSetting -NewPathRecoveryInterval 20 -CustomPathRecovery Enabled -NewPDORemovePeriod 120 -NewDiskTimeout 60 -NewPathVerificationState Enabled
  13. If prompted by the above commands, reboot the Windows host.

MPIO setup is now complete. 
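As an optional check, the MPIO module's Get- cmdlets can confirm the values configured above (a quick sketch; output varies by host):

PS C:\> Get-MSDSMGlobalDefaultLoadBalancePolicy    # should report LQD
PS C:\> Get-MSDSMSupportedHw                       # should list PURE / FlashArray
PS C:\> Get-MPIOSetting                            # shows the timer values set above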

Mounting a volume on a Windows iSCSI host

Follow the next steps (1-7) to establish iSCSI connections. These steps only need to be performed once on each Windows host. Once you make a connection, subsequent volumes connected from Cloud Block Store to this host appear in Disk Management.

To complete the following steps, you need the IP addresses of both Cloud Block Store controller iSCSI interfaces. See the Viewing the Cloud Block Store IP Addresses in Azure Portal section to obtain the iSCSI IP addresses and keep them handy.

  1. On the Windows host, open an elevated PowerShell terminal (Run as administrator) and run the following command to gather the IP address of your Windows instance. The following example shows 10.0.1.118.
PS C:\> ipconfig
Windows IP Configuration


Ethernet adapter Ethernet:

   Connection-specific DNS Suffix  . : nkb53slgco0urbhag0lvo3pldf.xx.internal.cloudapp.net
   Link-local IPv6 Address . . . . . : fe80::ad14:4dc2:e367:6c7a%24
   IPv4 Address. . . . . . . . . . . : 10.0.1.118
   Subnet Mask . . . . . . . . . . . : 255.255.0.0
   Default Gateway . . . . . . . . . : 10.0.0.1

2. In the same PowerShell window, run the following command to create a new Target Portal connection between your Windows host and your Cloud Block Store instance.

PS C:\> New-IscsiTargetPortal -TargetPortalAddress <CBS iSCSI IP address>

        where:

<CBS iSCSI IP address> is the IP address of the iSCSI port on Cloud Block Store controller 0 or controller 1. You only need to enter one.

3. In the same PowerShell window, run the following command to create an iSCSI session to Cloud Block Store controller 0.  

PS C:\> Get-IscsiTarget | Connect-IscsiTarget -InitiatorPortalAddress <Windows IP address> -IsMultipathEnabled $true -IsPersistent $true -TargetPortalAddress <CBS iSCSI IP address CT0>

where:

<Windows IP address> is the Windows host IP address obtained in step 1.

<CBS iSCSI IP address CT0> is the iSCSI IP address of Cloud Block Store controller 0.


4. (Optional) For additional performance throughput, you may add additional iSCSI sessions. Repeat the same command for each additional iSCSI session you would like to add to controller 0. You can add up to 32 iSCSI sessions to each controller.

5. In the same PowerShell window, run the same command to create iSCSI sessions to Cloud Block Store controller 1.

PS C:\> Get-IscsiTarget | Connect-IscsiTarget -InitiatorPortalAddress <Windows IP address> -IsMultipathEnabled $true -IsPersistent $true -TargetPortalAddress <CBS iSCSI IP address CT1>

where:

<Windows IP address> is the Windows host IP address obtained in step 1.

<CBS iSCSI IP address CT1> is the iSCSI IP address of Cloud Block Store controller 1.

6. (Optional) For additional performance throughput, you may add additional iSCSI sessions. Repeat the same command for each additional iSCSI session you would like to add to controller 1. You can add up to 32 iSCSI sessions to each controller.

(Optional) Use the PowerShell script (GitHub link below) to automate steps 3-6.

https://github.com/PureStorage-OpenC...CSISession.ps1

7. To confirm the total number of sessions, run:  

PS C:\> Get-IscsiSession | measure


8. Go to Disk Management and perform a rescan to confirm the new volume is present. 

9. Bring the volume online and format it with the desired file system. Any subsequent volume you create and connect to this host in Cloud Block Store (CLI/GUI/REST) displays automatically in Disk Management after a rescan.
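If you prefer to script this step rather than use Disk Management, a minimal PowerShell sketch along these lines brings the new Pure volume online, initializes it, and formats it. The FriendlyName filter, partition style, and volume label are assumptions; adjust them for your environment:

PS C:\> Update-HostStorageCache    # equivalent to a rescan in Disk Management
PS C:\> $disk = Get-Disk | Where-Object { $_.FriendlyName -like 'PURE*' -and $_.PartitionStyle -eq 'RAW' }
PS C:\> $disk | Set-Disk -IsOffline $false
PS C:\> $disk | Initialize-Disk -PartitionStyle GPT
PS C:\> $disk | New-Partition -UseMaximumSize -AssignDriveLetter | Format-Volume -FileSystem NTFS -NewFileSystemLabel 'CBS-vol1'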

You have successfully connected and mounted a Cloud Block Store volume to your host, and it is ready for use.

Note: It is expected behavior for secondary iSCSI target sessions to be reported as Active/Unoptimized. The secondary controller can technically receive I/O, but it forwards that I/O to the primary controller, which adds latency from the host's point of view because the interconnect between the controllers is the network. As a best practice, leave these paths as Active/Unoptimized; they are only used when none of the Active/Optimized paths are available.

Linux Host Setup and Management for iSCSI

This example walks you through connecting Cloud Block Store volumes to an Ubuntu 18.04 Linux host. Some steps repeat steps shown earlier in this guide, and some steps may differ slightly for other Linux distributions, though the same basic concepts apply.

The steps include:

  • Configuring the Linux host for iSCSI and MPIO with Cloud Block Store
  • Host and volume creation on Cloud Block Store
  • Connecting and mounting Cloud Block Store volumes to Linux host

The following instructions cover the basics of setting up Linux host iSCSI connectivity to Cloud Block Store. More in-depth best practices for Linux can be found in this KB article.

Configuring iSCSI on Linux host initiator with Cloud Block Store
On Linux Host
  1. SSH into the Ubuntu Linux VM instance for which you wish to provision Cloud Block Store storage.

  2. Install the iSCSI initiator utilities as root:
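
    For example, on Ubuntu 18.04 (package names can vary by distribution):

    sudo apt-get install -y open-iscsi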

  3. Install lsscsi:
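
    For example, on Ubuntu 18.04:

    sudo apt-get install -y lsscsi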

  4. Install multipath tools:
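
    For example, on Ubuntu 18.04:

    sudo apt-get install -y multipath-tools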

  5. Start the iSCSI services:
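
    On a systemd-based host such as Ubuntu 18.04, this would typically be (service names can vary by distribution):

    sudo systemctl enable --now iscsid open-iscsi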

  6. Start the iSCSI daemon service:

    sudo /etc/init.d/open-iscsi restart

  7. Run the command below:

    sudo sed -i 's/^\(node\.session\.nr_sessions\s*=\s*\).*$/\132/' /etc/iscsi/iscsid.conf

    Note: This step is optional and increases the total bandwidth performance by allowing the application host to create 32 iSCSI sessions per iSCSI connection. This command edits the /etc/iscsi/iscsid.conf file and changes the node.session.nr_sessions field to 32. See Appendix D for detailed information about iSCSI sessions.
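
    As an optional sanity check, confirm the change took effect; the following should print node.session.nr_sessions = 32:

    grep '^node.session.nr_sessions' /etc/iscsi/iscsid.conf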

  8. Edit the /etc/iscsi/iscsid.conf file to enable automatic iSCSI login by changing the following parameter:

    node.startup = automatic
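
    If you prefer to make this edit from the command line, a one-liner in the same style as the previous step would be (assuming the current value is manual):

    sudo sed -i 's/^node\.startup = manual/node.startup = automatic/' /etc/iscsi/iscsid.conf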

 


  9. Create (or edit) the /etc/multipath.conf file with the following contents:
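
    A minimal sketch based on Pure Storage's published multipath recommendations for FlashArray, consistent with the queue-length path selector and ALUA priority groups shown in the multipath -ll output later in this guide; see the Linux best-practices KB article referenced above for the authoritative settings:

devices {
    device {
        # Recommended device settings for Pure Storage FlashArray / Cloud Block Store (sketch)
        vendor                 "PURE"
        product                "FlashArray"
        path_selector          "queue-length 0"
        hardware_handler       "1 alua"
        path_grouping_policy   group_by_prio
        prio                   alua
        path_checker           tur
        failback               immediate
        fast_io_fail_tmo       10
        dev_loss_tmo           60
        no_path_retry          0
    }
}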

  10. Restart the multipathd service for the multipath.conf changes to take effect.

    sudo service multipathd restart
  11. Retrieve the initiator IQN on the Ubuntu VM:

    sudo cat /etc/iscsi/initiatorname.iscsi

 

Example:

user@MyUbuntuVM:~$ sudo cat /etc/iscsi/initiatorname.iscsi
## DO NOT EDIT OR REMOVE THIS FILE!
## If you remove this file, the iSCSI daemon will not start.
## If you change the InitiatorName, existing access control lists
## may reject this initiator.  The InitiatorName must be unique
## for each iSCSI initiator.  Do NOT duplicate iSCSI InitiatorNames.
InitiatorName=iqn.1993-08.org.debian:01:756ae638fabb

On the Cloud Block Store Instance

  12. Log onto the Pure Storage Cloud Block Store instance via SSH using the Cloud Block Store Management IP address.

  13. Create a host name on the Cloud Block Store instance and associate it with the Ubuntu initiator IQN from the previous step:

     purehost create --iqnlist <IQN number> <hostname>

     where:

     <IQN number> is the initiator IQN number gathered in step 11.

     <hostname> is the desired hostname for your existing Ubuntu VM in Azure.

 

Example:

pureuser@MyCBS> purehost create --iqnlist iqn.1994-05.com.redhat:361dfc3de387 MyUbuntuVM
Name       WWN  IQN                                  NQN  Host Group
MyUbuntuVM  -    iqn.1994-05.com.redhat:361dfc3de387  -    -
  14. Create a volume on the Cloud Block Store instance:

     purevol create <vol name> --size <volume size>

    where:

    <vol name> is the desired name of the Cloud Block Store volume.

    <volume size> is the volume size (ex: 50M, 50G, 10T).

 

This example shows the creation of a 2 TB volume:

pureuser@MyCBS> purevol create vol1 --size 2TB
Name  Size  Source  Created                  Serial
vol1  2T    -       2019-09-09 11:41:55 PDT  2B60622E2B014A2200011010 
  15. Connect the volume to the Ubuntu host VM:

     purevol connect <vol name> --host <hostname>

     where:

     <vol name> is the name of the Cloud Block Store volume.

    <hostname> is the name of your Ubuntu VM.

Example: 

pureuser@MyCBS> purevol connect vol1 --host MyUbuntuVM
Name  Host Group  Host       LUN
vol1  -           MyUbuntuVM  1

    To check the connection between your volumes and hosts, run:

     purevol list --connect

  16. Collect the IP addresses of each controller and the IQN for Cloud Block Store. The IQN is identical for both iSCSI interfaces:

     pureport list

 

Example:

pureuser@MyCBS> pureport list
Name      WWN  Portal              IQN                                                     NQN  Failover
CT0.ETH2  -    172.16.180.8:3260   iqn.2010-06.com.purestorage:flasharray.81ce503fd81d0c8  -    -
CT1.ETH2  -    172.16.180.11:3260  iqn.2010-06.com.purestorage:flasharray.81ce503fd81d0c8  -    -
iSCSI login 

On the Linux host: 

  17. Create a new iSCSI interface on the Linux host initiator:

sudo iscsiadm -m iface -I iscsi0 -o new

 

Example:

user@MyUbuntuVM:~$ sudo iscsiadm -m iface -I iscsi0 -o new
New interface iscsi0 added
  18. Discover target iSCSI portals using a Cloud Block Store iSCSI interface IP:

sudo iscsiadm -m discovery -t st -p <CBS iSCSI IP>:3260

where:

<CBS iSCSI IP> is the iSCSI IP address of Cloud Block Store controller 0 or controller 1, collected in step 16. You only need to enter one iSCSI IP address.

Example: The discovery returns the iSCSI IPs of both CBS controllers.

user@MyUbuntuVM:~$ sudo iscsiadm -m discovery -t st -p 172.16.180.8:3260
172.16.180.8:3260,1 iqn.2010-06.com.purestorage:flasharray.81ce503fd81d0c8
172.16.180.11:3260,1 iqn.2010-06.com.purestorage:flasharray.81ce503fd81d0c8
  19. Connect the Linux host to the Cloud Block Store instance:

sudo iscsiadm -m node --login

Example: There will be 64 logins (32 iSCSI sessions per CBS controller).

user@MyUbuntuVM:~$ sudo iscsiadm -m node --login
Logging in to [iface: iscsi0, target: iqn.2010-06.com.purestorage:flasharray.81ce503fd81d0c8, portal: 172.16.180.8:3260] (multiple)
Logging in to [iface: iscsi0, target: iqn.2010-06.com.purestorage:flasharray.81ce503fd81d0c8, portal: 172.16.180.8:3260] (multiple)
.
.
.
Logging in to [iface: iscsi0, target: iqn.2010-06.com.purestorage:flasharray.81ce503fd81d0c8, portal: 172.16.180.11:3260] (multiple)
Logging in to [iface: iscsi0, target: iqn.2010-06.com.purestorage:flasharray.81ce503fd81d0c8, portal: 172.16.180.11:3260] (multiple)
Login to [iface: iscsi0, target: iqn.2010-06.com.purestorage:flasharray.81ce503fd81d0c8, portal: 172.16.180.8:3260] successful.
Login to [iface: iscsi0, target: iqn.2010-06.com.purestorage:flasharray.81ce503fd81d0c8, portal: 172.16.180.8:3260] successful.
.
.
.
Login to [iface: iscsi0, target: iqn.2010-06.com.purestorage:flasharray.81ce503fd81d0c8, portal: 172.16.180.11:3260] successful.
Login to [iface: iscsi0, target: iqn.2010-06.com.purestorage:flasharray.81ce503fd81d0c8, portal: 172.16.180.11:3260] successful.
  20. Confirm the number of iSCSI sessions. There should be 64 entries (32 iSCSI sessions per CBS controller):

iscsiadm --mode session

Example: There should be 64 entries (32 iSCSI sessions per CBS controller)

user@MyUbuntuVM:~$ iscsiadm --mode session
tcp: [1] 172.16.180.8:3260,1 iqn.2010-06.com.purestorage:flasharray.81ce503fd81d0c8 (non-flash)
tcp: [10] 172.16.180.8:3260,1 iqn.2010-06.com.purestorage:flasharray.81ce503fd81d0c8 (non-flash)
tcp: [11] 172.16.180.8:3260,1 iqn.2010-06.com.purestorage:flasharray.81ce503fd81d0c8 (non-flash)
tcp: [12] 172.16.180.8:3260,1 iqn.2010-06.com.purestorage:flasharray.81ce503fd81d0c8 (non-flash)
tcp: [13] 172.16.180.8:3260,1 iqn.2010-06.com.purestorage:flasharray.81ce503fd81d0c8 (non-flash)
.
.
.
tcp: [63] 172.16.180.11:3260,1 iqn.2010-06.com.purestorage:flasharray.81ce503fd81d0c8 (non-flash)
tcp: [64] 172.16.180.11:3260,1 iqn.2010-06.com.purestorage:flasharray.81ce503fd81d0c8 (non-flash)
tcp: [7] 172.16.180.11:3260,1 iqn.2010-06.com.purestorage:flasharray.81ce503fd81d0c8 (non-flash)
tcp: [8] 172.16.180.11:3260,1 iqn.2010-06.com.purestorage:flasharray.81ce503fd81d0c8 (non-flash)
tcp: [9] 172.16.180.11:3260,1 iqn.2010-06.com.purestorage:flasharray.81ce503fd81d0c8 (non-flash) 
  21. Confirm that each volume has 64 entries, each representing a virtual device path:

lsscsi -d

Example: There should be 64 entries per CBS volume connected to this Linux host.

user@MyUbuntuVM:~$ lsscsi -d
[0:0:0:0]    disk    Msft     Virtual Disk     1.0   /dev/sda [8:0]
[1:0:1:0]    disk    Msft     Virtual Disk     1.0   /dev/sdb [8:16]
[5:0:0:0]    cd/dvd  Msft     Virtual CD/ROM   1.0   /dev/sr0 [11:0]
[6:0:0:1]    disk    PURE     FlashArray       8888  /dev/sdc [8:32]
[7:0:0:1]    disk    PURE     FlashArray       8888  /dev/sdb [8:16]
[8:0:0:1]    disk    PURE     FlashArray       8888  /dev/sda [8:0]
.
.
.
[67:0:0:1]   disk    PURE     FlashArray       8888  /dev/sdar[66:176]
[68:0:0:1]   disk    PURE     FlashArray       8888  /dev/sdbe[67:128]
[69:0:0:1]   disk    PURE     FlashArray       8888  /dev/sdbj[67:208]
  22. Run the multipath command below to confirm that each Cloud Block Store volume has multiple paths. A multipathed Cloud Block Store volume is represented by a device-mapped ID, as seen in the example below. Verify that the paths are divided into two priority groups (prio=50 and prio=10 in the following output).

sudo multipath -ll

Example: Each Cloud Block Store volume is represented by a device-mapped ID (the first line of each entry below).

user@MyUbuntuVM:~$ sudo multipath -ll
3624a93702b60622e2b014a2200011011 dm-0 PURE    ,FlashArray
size=2.0T features='0' hwhandler='1 alua' wp=rw
|-+- policy='queue-length 0' prio=50 status=active
| |- 2:0:0:2 sdb  8:16  active ready running
| |- 3:0:0:2 sdf  8:80  active ready running
| |- 4:0:0:2 sdl  8:176 active ready running
| `- 5:0:0:2 sdk  8:160 active ready running
.
.
.
.
`-+- policy='queue-length 0' prio=10 status=enabled
  |- 6:0:0:2 sdd  8:48  active ready running
  |- 7:0:0:2 sdh  8:112 active ready running
  |- 8:0:0:2 sdp  8:240 active ready running
  `- 9:0:0:2 sdo  8:224 active ready running
.
.
.

 

 

23. Create mount points on the initiator.

sudo mkdir /mnt/store0

24. Create the desired file system on each Cloud Block Store volume using the device-mapped IDs, and then mount each volume to the mount point.

sudo mkfs.ext4 /dev/mapper/<device-mapped ID>

where:

<device-mapped ID> is the device-mapped ID from step 22.

The following example creates an ext4 file system on the Cloud Block Store volume dm-0:

user@MyUbuntuVM:~$ sudo mkfs.ext4 /dev/mapper/3624a93702b60622e2b014a2200011011
mke2fs 1.42.9 (28-Dec-2013)
Discarding device blocks: done
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=0 blocks, Stripe width=1024 blocks
134217728 inodes, 536870912 blocks
26843545 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=2684354560
16384 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks:
    32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
    4096000, 7962624, 11239424, 20480000, 23887872, 71663616, 78675968,
    102400000, 214990848, 512000000

Allocating group tables: done
Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done

25. Mount the Cloud Block Store volume onto the mount point.

sudo mount /dev/mapper/<device-mapped ID> <mount point>

where:

<device-mapped ID> is the device-mapped ID collected from step 22.

<mount point> is the mount point created in step 23.

user@MyUbuntuVM:~$ sudo mount /dev/mapper/3624a93702b60622e2b014a2200011011 /mnt/store0

26. The mount point now reports 2 TB, and the block storage can be consumed.

user@MyUbuntuVM:~$ df -h
Filesystem                                     Size  Used Avail Use% Mounted on
/dev/mapper/3624a93702b60622e2b014a2200011010  2.0T   81M  1.9T   1% /mnt/store0

On Cloud Block Store

  • (Optional Check) I/O should only flow to the primary controller instance. Run I/O on your Linux host and confirm on your Cloud Block Store instance with the following command:

purehost monitor --balance --interval 3

Example:

pureuser@CBS> purehost monitor --balance --interval 3
Name       Time                     Initiator WWN  Initiator IQN                        Initiator NQN  Target       Target WWN  Failover  I/O Count  I/O Relative to Max
MyUbuntuVM  2019-08-26 10:31:32 PDT  -             iqn.1994-05.com.redhat:b9ddc64322ef  -              (primary)    -           -         1626       100%
                                                   iqn.1994-05.com.redhat:b9ddc64322ef                 (secondary)              -         0          0%
                                                  

Adding More Volumes

You can add more volumes using the CLI or GUI.

  1. To create subsequent Cloud Block Store volumes, use the purevol create CLI command:

purevol create <vol name> --size <volume size>

  2. Use the purevol connect command to connect the volume to the desired host:

purevol connect <vol name> --host <hostname>

  3. Rescan the host to see the additional storage.
  • On Windows, go to Disk Management and perform a rescan to confirm the new volume is present.
  • On Linux, use the sudo iscsiadm -m session --rescan command.

Example:

user@MyUbuntuVM:~$ sudo iscsiadm -m session --rescan
Rescanning session [sid: 2, target: iqn.2010-06.com.purestorage:flasharray.666667d86130ec06, portal: 172.16.180.8:3260]
Rescanning session [sid: 3, target: iqn.2010-06.com.purestorage:flasharray.666667d86130ec06, portal: 172.16.180.8:3260]
Rescanning session [sid: 4, target: iqn.2010-06.com.purestorage:flasharray.666667d86130ec06, portal: 172.16.180.8:3260]
Rescanning session [sid: 1, target: iqn.2010-06.com.purestorage:flasharray.666667d86130ec06, portal: 172.16.180.8:3260]
Rescanning session [sid: 5, target: iqn.2010-06.com.purestorage:flasharray.666667d86130ec06, portal: 172.16.180.8:3260]
Rescanning session [sid: 6, target: iqn.2010-06.com.purestorage:flasharray.666667d86130ec06, portal: 172.16.180.8:3260]
Rescanning session [sid: 8, target: iqn.2010-06.com.purestorage:flasharray.666667d86130ec06, portal: 172.16.180.8:3260]
Rescanning session [sid: 7, target: iqn.2010-06.com.purestorage:flasharray.666667d86130ec06, portal: 172.16.180.8:3260]
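
After the rescan, the new volume appears as an additional multipath device, which you can then format and mount as in steps 24 and 25. A quick way to confirm (a sketch; device IDs will differ in your environment):

sudo multipath -ll      # the new volume shows up with its own device-mapped ID
lsscsi -d | grep PURE   # and as additional PURE FlashArray device paths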