
Configuring MySQL on AWS Outposts With FlashArray


Amazon Web Services (AWS) Outposts is a fully managed hybrid cloud service that provides the same infrastructure, services, APIs, and tools as the AWS cloud services platform. AWS Outposts is suited to scenarios where business operations require low-latency access to on-premises systems or have specific on-premises access requirements.

FlashArray can be paired with AWS Outposts to provide high performance access to storage with rich data services. This pairing allows for the storage used by AWS Outposts to be more efficient and provide direct benefits to the application infrastructure deployed in the hybrid cloud environment. 

An Outpost is a pool of AWS compute and storage capacity deployed at a customer site.

When using an Outpost with FlashArray for application infrastructure, the operating system resides on the Outpost storage and the application data resides on FlashArray.

FlashArray and FlashBlade are both designated AWS Outposts Ready.  

More detailed information can be found in the AWS Outposts user guide.

This knowledge article provides guidance on how MySQL databases deployed on EC2 instances in AWS Outposts can utilize FlashArray block storage through iSCSI connectivity. Concepts covered here include Amazon Elastic Compute Cloud (EC2) administration, Amazon Virtual Private Cloud (VPC) networking, and FlashArray block storage administration.

Network Configuration 

The network configuration for an AWS Outpost is done together with Amazon personnel. Before deployment, an AWS Outposts Logical Network Assessment is performed, during which AWS requests or provides information on topics such as firewall configuration, service link IP addresses, and local gateway configuration.

To allow iSCSI connectivity between an AWS Outpost and FlashArray a subnet must be created within a VPC that is routable to the customer subnet on which the FlashArray iSCSI ports reside. 

It is strongly advised to have redundant network access between the AWS Outpost and FlashArray.
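
As an example, using the addresses that appear later in this guide: the EC2 instance subnet is 172.26.1.0/24 within the Outpost VPC, and the FlashArray iSCSI ports reside on the customer subnet 10.21.194.0/24, with the two subnets routed to each other through the Outpost local gateway. The /24 masks here are illustrative assumptions; use the addressing appropriate to your environment.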

To configure the VPC and subnet on a new AWS Outpost follow these steps:

Step 1: Open the Services Menu and Select the VPC Dashboard

It will be under Networking & Content Delivery.

Step 2: In the VPC Dashboard Navigate to Your VPCs

Step 3: Create a New VPC  

There will be an existing VPC. The new VPC will contain a subnet for EC2 instances that is routable between the AWS Outpost and FlashArray.

The subnet for the EC2 instance can be dedicated to iSCSI traffic or shared between iSCSI and application traffic.

This example assumes the subnet will be shared between the application and iSCSI traffic.

In the Your VPCs view select Create VPC.

Provide a name and IPv4 CIDR block for the VPC. 

Once the VPC is created review its details. 

Step 4: Associate the VPC with the Outpost 

In Services, search for and navigate to AWS Outposts. Once in the AWS Outposts service view, navigate to Local gateway route tables. Select the topmost local gateway route table.

In the local gateway route table view, identify the section titled VPC associations. If there is no existing association, the VPC created in Step 3 will be used. Select Associate VPC.

Select the relevant VPC from the drop down menu and then select Associate VPC.

Ensure that the VPC has associated with the AWS Outpost successfully. 

Step 5: Create a Subnet for the VPC 

In the AWS Outposts service, Outposts view, open the Actions dropdown and select Create subnet from the menu.

Select the VPC ID from the dropdown menu and then scroll further down to Subnet settings.

Provide an IPv4 CIDR block to be used for the Subnet. This subnet must be routable to the customer subnet assigned to the FlashArray iSCSI ports. 

Select Create subnet when complete. 

Ensure that when the subnet is created it is associated with the correct Outpost ID.

Step 6: Create a Route Table for the VPC

In the VPC Dashboard navigate to the Route tables view. There may be existing route tables that could be edited, but this guide assumes a new route table will be created and edited.

Select Create route table.

Provide the route table with a name and assign the VPC created in Step 3. Select Create route table when complete.

Once the route table has been successfully created it needs to be associated with the previously created subnet. Select the Subnet associations for the route table and then select Edit subnet associations.

Select the relevant subnet from the Available subnets and then select Save associations.

Once the subnet has been associated the correct route needs to be added. Select the Routes view for the routing table and select Edit routes. 

In the Edit routes view select Add route. 

For this example, the destination route being assigned will be 0.0.0.0/0. This is then assigned to the local gateway as its target. Select Save changes when complete.

Review all of the settings for the route. 

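For those who prefer to script this configuration, the same VPC, subnet, and routing steps can be sketched with the AWS CLI. All resource IDs and the Outpost ARN below are hypothetical placeholders; the console procedure above remains the reference.

# Create the VPC and an Outpost subnet (IDs and ARN are placeholders)
aws ec2 create-vpc --cidr-block 172.26.0.0/16
aws ec2 create-subnet --vpc-id vpc-0123456789abcdef0 --cidr-block 172.26.1.0/24 \
  --outpost-arn arn:aws:outposts:us-west-2:111122223333:outpost/op-0123456789abcdef0 \
  --availability-zone us-west-2a

# Associate the VPC with the Outpost local gateway route table
aws ec2 create-local-gateway-route-table-vpc-association \
  --local-gateway-route-table-id lgw-rtb-0123456789abcdef0 --vpc-id vpc-0123456789abcdef0

# Create a route table, associate the subnet, and route 0.0.0.0/0 via the local gateway
aws ec2 create-route-table --vpc-id vpc-0123456789abcdef0
aws ec2 associate-route-table --route-table-id rtb-0123456789abcdef0 --subnet-id subnet-0123456789abcdef0
aws ec2 create-route --route-table-id rtb-0123456789abcdef0 --destination-cidr-block 0.0.0.0/0 \
  --local-gateway-id lgw-0123456789abcdef0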

EC2 Instance Configuration 

Deploy EC2 Instance

The easiest way to create an EC2 instance on an AWS Outpost is via the AWS Outposts Service interface. 

In the AWS Outposts service interface select the Outpost and then use the Actions drop-down menu. In the menu select Launch Instance.

In order to launch an instance, an Amazon Machine Image (AMI) is used as the template for the operating system.

This example will use Red Hat Enterprise Linux 8 (HVM). 

Once the AMI has been selected the instance type can be selected. 

The operating system will still reside on the instance storage. Only application data can be connected to FlashArray.

Once an instance type has been selected it can be customized. Ensure the VPC created in Step 3 and the subnet created in Step 5 are selected. Select Next: Add Storage when satisfied with the configuration.

The only storage setting that will potentially require a change is the Root Volume Type.

Provide any required tags. 

Create a new security group or select an existing one.

Review and launch the instance. Select Launch when ready to proceed. 

Select or create a new key pair. Select Launch Instances when ready to proceed. 

Ensure that the instance launches correctly. 

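Alternatively, the launch can be scripted. A minimal AWS CLI sketch, assuming hypothetical AMI, key pair, security group, and subnet IDs (the subnet must be the Outpost subnet created in Step 5):

aws ec2 run-instances --image-id ami-0123456789abcdef0 --instance-type c5.xlarge \
  --key-name my-key-pair --security-group-ids sg-0123456789abcdef0 \
  --subnet-id subnet-0123456789abcdef0 --count 1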

Configure EC2 Instance for iSCSI Connectivity and Connect to FlashArray

All of the instructions below are for Red Hat Enterprise Linux 8. If using a different operating system, ensure that the correct process is followed to add any required iSCSI modules and to customize storage parameters for the best performance.

See the Linux Recommended Settings or the Microsoft Windows iSCSI connection guide for more information. 

Step 1: When Logged In, Drop to the Root User

sudo -i 
[ec2-user@ip-172-26-1-196 ~]$ sudo -i
[root@ip-172-26-1-196 ~]#   

Step 2: Install Required Modules for iSCSI Connectivity 

The required package name is iscsi-initiator-utils.

dnf install iscsi-initiator-utils
[root@ip-172-26-1-196 ~]# dnf install iscsi-initiator-utils
Updating Subscription Management repositories.
Unable to read consumer identity

This system is not registered to Red Hat Subscription Management. You can use subscription-manager to register.

Last metadata expiration check: 0:01:15 ago on Wed 21 Jul 2021 02:17:25 PM UTC.
Dependencies resolved.
==============================================================================================================================================================================================================================================================
 Package                                                                Architecture                                   Version                                                          Repository                                                       Size
==============================================================================================================================================================================================================================================================
Installing:
 iscsi-initiator-utils                                                  x86_64                                         6.2.1.2-1.gita8fcb37.el8                                         rhel-8-baseos-rhui-rpms                                         379 k
Installing dependencies:
 iscsi-initiator-utils-iscsiuio                                         x86_64                                         6.2.1.2-1.gita8fcb37.el8                                         rhel-8-baseos-rhui-rpms                                         100 k
 isns-utils-libs                                                        x86_64                                         0.99-1.el8                                                       rhel-8-baseos-rhui-rpms                                         105 k

Transaction Summary
==============================================================================================================================================================================================================================================================
Install  3 Packages

Total download size: 584 k
Installed size: 2.4 M
Is this ok [y/N]:                                 
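
After installation, ensure the iSCSI daemon is started and enabled at boot. On Red Hat Enterprise Linux 8 this can be done with systemd:

systemctl enable --now iscsid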

Step 3: Install device-mapper-multipath Software

The device mapper software may already be installed on the system; this can be checked with the following command:

rpm -qa | grep device-mapper-multipath
[root@ip-172-26-1-196 ~]# rpm -qa | grep device-mapper-multipath
device-mapper-multipath-0.8.4-10.el8.x86_64
device-mapper-multipath-libs-0.8.4-10.el8.x86_64
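
If the packages are not present, they can be installed and multipathing enabled. On Red Hat Enterprise Linux 8, mpathconf writes a default /etc/multipath.conf and starts the multipathd service:

dnf install device-mapper-multipath
mpathconf --enable --with_multipathd y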

Step 4: Configure Device Rules for FlashArray Block Storage 

Review the required /etc/multipath.conf and udev rules for the relevant operating system in the Linux Recommended Settings article.
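
For orientation, a device stanza for FlashArray in /etc/multipath.conf might resemble the following sketch; treat the Linux Recommended Settings article as authoritative, as recommended values vary by operating system and Purity release:

devices {
    device {
        vendor               "PURE"
        product              "FlashArray"
        path_selector        "service-time 0"
        hardware_handler     "1 alua"
        path_grouping_policy group_by_prio
        prio                 alua
        failback             immediate
        fast_io_fail_tmo     10
        dev_loss_tmo         60
    }
}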

Step 5: Create an iSCSI Host on FlashArray for the EC2 Instance 

On the operating system, retrieve the iSCSI Qualified Name (IQN):

[root@ip-172-26-1-196 ~]# cat /etc/iscsi/initiatorname.iscsi
InitiatorName=iqn.1994-05.com.redhat:3754fc4ffcf

Record the initiator IQN and navigate to the FlashArray user interface.

In the FlashArray user interface under Storage navigate to the Hosts view. Under Hosts select the + in the top right hand corner to create a new host.  

Provide a name for the host and then select Create. 

Once the host has been created, navigate to it and select the ellipsis (...) in the top right hand corner of Host Ports. In the drop down list select Configure IQNs...

Provide the IQN recorded earlier in this step, then select Add.

Step 6: Connect a Volume to the EC2 iSCSI Host 

Once the host has been created a volume can be connected to it. Navigate to the Volumes view in Storage. To create a new volume select the + in the top right hand corner of the Volumes section.

Provide a name and capacity for the volume.

Once the volume has been created it can be connected to a host. Navigate to the new volume, select the ellipsis (...) in the top right hand corner of Connected Hosts, and select Connect....

In the Connect Hosts dialog select the host to connect and then select Connect. 

The volume will now show it is connected to the EC2 instance. 

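The same host and volume operations can also be performed from the Purity CLI. A sketch using the IQN from this guide and hypothetical host and volume names:

purehost create --iqnlist iqn.1994-05.com.redhat:3754fc4ffcf mysql-ec2-host
purevol create --size 2T mysql-data
purehost connect --vol mysql-data mysql-ec2-host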

Step 7: Discover and Connect to the iSCSI Target

To discover the IP addresses and target IQN from the initiator, use the iscsiadm utility in discovery mode, replacing the address after -p with the relevant FlashArray iSCSI IP address or host name.

iscsiadm -m discovery -t sendtargets -p <IP Address or Hostname>

The response should provide all of the IP addresses that the target can be reached on and the IQN of the FlashArray.

[root@ip-172-26-1-196 ~]# iscsiadm -m discovery -t sendtargets -p 10.21.194.55
10.21.194.55:3260,1 iqn.2010-06.com.purestorage:flasharray.593f4fa440d6bf67
10.21.194.57:3260,1 iqn.2010-06.com.purestorage:flasharray.593f4fa440d6bf67
10.21.194.56:3260,1 iqn.2010-06.com.purestorage:flasharray.593f4fa440d6bf67
10.21.194.58:3260,1 iqn.2010-06.com.purestorage:flasharray.593f4fa440d6bf67

Once the target name has been ascertained, use the iscsiadm utility in node mode to log in to the target:

iscsiadm -m node --targetname <array iqn> -p <Array IP Address> -l

Repeat the steps for each IP address, ensuring that each login is met with success. 

[root@ip-172-26-1-196 ~]# iscsiadm -m node --targetname iqn.2010-06.com.purestorage:flasharray.593f4fa440d6bf67 -p 10.21.194.55 -l
Logging in to [iface: default, target: iqn.2010-06.com.purestorage:flasharray.593f4fa440d6bf67, portal: 10.21.194.55,3260]
Login to [iface: default, target: iqn.2010-06.com.purestorage:flasharray.593f4fa440d6bf67, portal: 10.21.194.55,3260] successful.
[root@ip-172-26-1-196 ~]# iscsiadm -m node --targetname iqn.2010-06.com.purestorage:flasharray.593f4fa440d6bf67 -p 10.21.194.56 -l
Logging in to [iface: default, target: iqn.2010-06.com.purestorage:flasharray.593f4fa440d6bf67, portal: 10.21.194.56,3260]
Login to [iface: default, target: iqn.2010-06.com.purestorage:flasharray.593f4fa440d6bf67, portal: 10.21.194.56,3260] successful.
[root@ip-172-26-1-196 ~]# iscsiadm -m node --targetname iqn.2010-06.com.purestorage:flasharray.593f4fa440d6bf67 -p 10.21.194.57 -l
Logging in to [iface: default, target: iqn.2010-06.com.purestorage:flasharray.593f4fa440d6bf67, portal: 10.21.194.57,3260]
Login to [iface: default, target: iqn.2010-06.com.purestorage:flasharray.593f4fa440d6bf67, portal: 10.21.194.57,3260] successful.
[root@ip-172-26-1-196 ~]# iscsiadm -m node --targetname iqn.2010-06.com.purestorage:flasharray.593f4fa440d6bf67 -p 10.21.194.58 -l
Logging in to [iface: default, target: iqn.2010-06.com.purestorage:flasharray.593f4fa440d6bf67, portal: 10.21.194.58,3260]
Login to [iface: default, target: iqn.2010-06.com.purestorage:flasharray.593f4fa440d6bf67, portal: 10.21.194.58,3260] successful.

After login, ensure that each iSCSI node will be logged into automatically with the following iscsiadm command syntax:

iscsiadm -m node --targetname <Array iqn> -p <IP Address or Hostname of Array> -o update -n node.startup -v automatic
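
For example, using the FlashArray IQN and first portal address discovered above (repeat for each of the four portals):

iscsiadm -m node --targetname iqn.2010-06.com.purestorage:flasharray.593f4fa440d6bf67 -p 10.21.194.55 -o update -n node.startup -v automatic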

Ensure that multiple paths to the same device have been found using the multipath utility:

 multipath -ll 
[root@ip-172-26-1-196 ~]# multipath -ll
3624a9370668f1ab9b15f4bc400013a54 dm-0 PURE,FlashArray
size=2.0T features='0' hwhandler='1 alua' wp=rw
`-+- policy='service-time 0' prio=50 status=active
  |- 3:0:0:1 sda     8:0   active ready running
  |- 1:0:0:1 sdb     8:16  active ready running
  |- 0:0:0:1 sdd     8:48  active ready running
  `- 2:0:0:1 sdc     8:32  active ready running

Step 8: Create a Filesystem on the Volume and Ensure It Is Mounted at Startup

Create a filesystem on the multipath device identified in Step 7. This example uses the XFS filesystem:

mkfs.xfs /dev/mapper/<device>
[root@ip-172-26-1-196 ~]# mkfs.xfs /dev/mapper/3624a9370668f1ab9b15f4bc400013a54
meta-data=/dev/mapper/3624a9370668f1ab9b15f4bc400013a54 isize=512    agcount=4, agsize=134217728 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=1, sparse=1, rmapbt=0
         =                       reflink=1
data     =                       bsize=4096   blocks=536870912, imaxpct=5
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0, ftype=1
log      =internal log           bsize=4096   blocks=262144, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
Discarding blocks...Done. 

Use the blkid utility to identify the filesystem UUID of the volume.

blkid

Note the UUID="xxxx" value as this will be used to mount the volume persistently.

[root@ip-172-26-1-196 ~]# blkid
/dev/nvme0n1: PTUUID="f34b923a-6ce9-4cef-841e-82ec5d63653c" PTTYPE="gpt"
/dev/nvme0n1p1: PARTUUID="07c6574c-7f85-4859-9689-c8090f35545a"
/dev/nvme0n1p2: UUID="d35fe619-1d06-4ace-9fe3-169baad3e421" BLOCK_SIZE="512" TYPE="xfs" PARTUUID="25a742d0-6b18-4c26-951a-2b99f1be934d"
/dev/mapper/3624a9370668f1ab9b15f4bc400013a54: UUID="3f220348-9b9f-4f22-bb00-3c28bb26e892" BLOCK_SIZE="512" TYPE="xfs"

In /etc/fstab add a line similar to the one below, mounting the volume at the required location (the _netdev option ensures the mount waits for the network, and therefore the iSCSI sessions, to be available):

UUID=3f220348-9b9f-4f22-bb00-3c28bb26e892       /var/lib/mysql  xfs       _netdev         0 0

Once the /etc/fstab entry has been created and saved, use the mount -a command to mount all devices listed in /etc/fstab. It may be necessary to create the mount point directory first.
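
For the /etc/fstab entry above, the mount point would be created with:

mkdir -p /var/lib/mysql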

mount -a 

Use the df command to check if the volume has been mounted successfully. 

df -h 
/dev/mapper/3624a9370668f1ab9b15f4bc400013a54  2.0T   15G  2.0T   1% /var/lib/mysql

MySQL Database Deployment 

The process of installing MySQL from a repository is most straightforward for SLES, OEL, RHEL, or CentOS. This guide showcases how to install MySQL from a dnf-based repository.

Please see the MySQL installation guide for more instructions that may not be covered here. 

The instructions below are taken from the MySQL repository installation documentation.

Run the following command as root. This will add the MySQL repository to the system's repository list.

rpm -ivh https://dev.mysql.com/get/mysql80-community-release-el8-1.noarch.rpm

Disable the operating system's default MySQL module by running the following.

dnf module disable mysql

Assuming that the version of MySQL to be installed is 8.0 or above, run the following to ensure the 5.7 repository is disabled.

dnf config-manager --disable mysql57-community

Now install MySQL by running the following.

dnf install mysql-community-server
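
After installation, the server can be started and the temporary root password retrieved, as described in the MySQL installation guide. With /var/lib/mysql mounted from FlashArray, the data directory is initialized on the array at first start:

systemctl start mysqld
grep 'temporary password' /var/log/mysqld.log
mysql_secure_installation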

More information about how to deploy MySQL on FlashArray can be found in the MySQL Implementation and Best Practice Guide.