This guide provides steps for leveraging the Pure Cloud Block Store (CBS) REST API and integrating it with new or existing applications running on EC2 instances that are part of an Auto Scaling Group.
Many applications hosted in the cloud are built to automatically leverage the scalability the cloud offers. Applications can scale up the number of instances needed at any given moment based on monitored metrics, then scale down once demand drops below a user-defined threshold. Amazon EC2 Auto Scaling is one such service that AWS customers use to monitor applications and automatically adjust the capacity of the instance fleet.
Beyond dynamic scaling, customers can extend the scaling process with additional actions. One example that Auto Scaling supports is adding lifecycle hooks to the instance launching and terminating phases. These hooks send AWS SNS (Simple Notification Service) notifications, and SNS topics can in turn be attached as triggers to Lambda functions that run a given action. This chain of actions enables a virtually unlimited set of custom API calls to be executed during the lifecycle of an Auto Scaling instance, against Pure CBS and many other AWS offerings.
The typical architecture of such an application includes an Application Load Balancer fronting an Auto Scaling Group of EC2 instances that might handle web requests or serve data. For stateful applications, a common data layer service is a must. Instances can attach data volumes in a 1:1 relationship or share the same data set with other instances in the fleet. This is just one area where Pure Cloud Block Store can bring value to the solution with its enterprise-rich features.
The example solution outlined in the remainder of this guide combines Auto Scaling Group functionality and Lambda functions with Cloud Block Store as the data service layer for the running instances. Through automation, each scaled-up EC2 instance launches with auto-provisioned storage over in-guest iSCSI and runs for as long as it is needed. Once demand drops below the monitored threshold, Auto Scaling sends an SNS notification that in turn triggers a Lambda function to execute API calls against Cloud Block Store and clean up the provisioned storage.
The following workflow will be executed during the implementation of this solution:
Auto Scaling Group (ASG) spins up a new EC2 instance from the launch template.
A CBS volume attaches to the EC2 instance during instantiation by leveraging REST API calls embedded in the user data of the launch template. At this point the EC2 instance will be in a running state for some finite amount of time.
Upon the automated termination of an instance, ASG generates a notification event to subscribers using Simple Notification Service (SNS).
SNS triggers the Lambda function.
The Lambda function runs API calls against CBS to clean up the objects provisioned for the terminated EC2 instance (host and volumes).
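The cleanup half of this workflow can be sketched in Python. The helper below parses the SNS-wrapped Auto Scaling notification to recover the terminated instance ID; the `host-`/`vol-data-` naming convention is an illustrative assumption (your user data script defines the real one).

```python
import json

# Assumed naming convention: the user data script names the CBS host and
# volume after the EC2 instance ID, so the cleanup function can derive them.
HOST_PREFIX = "host-"
VOL_PREFIX = "vol-data-"

def parse_termination_event(sns_event):
    """Extract the terminated instance ID from an ASG SNS notification.

    Returns None for events other than instance termination.
    """
    message = json.loads(sns_event["Records"][0]["Sns"]["Message"])
    if message.get("Event") != "autoscaling:EC2_INSTANCE_TERMINATE":
        return None
    return message["EC2InstanceId"]

def objects_to_clean(instance_id):
    """Map an instance ID to the CBS host and volume names to remove."""
    return HOST_PREFIX + instance_id, VOL_PREFIX + instance_id
```

The `autoscaling:EC2_INSTANCE_TERMINATE` string is the standard event name ASG places in the notification `Message`; filtering on it keeps the function inert for launch or test notifications.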
The solution in this guide assumes that you have an existing Auto Scaling Group environment configured with a scaling policy that meets your application requirements. One of the most common ASG scenarios is a database cluster or web application fronted by a load balancer. Various metrics can be set to trigger the scaling operation.
The second assumption is that you have a Cloud Block Store deployed within the same region as the ASG. If not, please refer to the deployment guide here.
Setting up the storage auto-provisioning lifecycle requires configuring the following:
- Add the user data script to the Launch Template.
- Create CBS API Client.
- Create and Configure Lambda Function.
- Configure SNS Notifications and Lambda Trigger.
Add User Data Script to EC2 Launch Template
When ASG performs a scale-up operation, a new EC2 instance is launched from the launch template. The template contains the configuration information and parameters needed to launch an instance. Part of that configuration is user data, which can be leveraged to pass common automated configuration tasks as a script.
The script performs the following tasks:
- Install required packages.
- Apply iSCSI and multipathing best-practice configuration.
- Connect to CBS and create a host.
- Provision the storage at a user-defined size.
- Connect the volume to the host within CBS.
- Create the iSCSI initiator and discover the connected volume in the EC2 instance.
- Mount the volume in the EC2 instance.
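The provisioning steps above map to a handful of FlashArray REST 2.x calls. The sketch below only assembles the requests (method, URL, parameters) so the sequence is visible; the API version, naming scheme, and default size are assumptions, and a real script would also obtain an OAuth2 access token and send each request with it.

```python
# Sketch of the CBS REST calls behind the user data script.
# Assumptions: REST API version 2.4, a "host-<instance-id>" /
# "vol-data-<instance-id>" naming scheme, and size given in bytes.

def provisioning_calls(cbs_ip, instance_id, iqn, size_bytes=107_374_182_400):
    """Return the ordered (method, url, params) tuples to provision storage."""
    base = f"https://{cbs_ip}/api/2.4"
    host = f"host-{instance_id}"
    volume = f"vol-data-{instance_id}"
    return [
        # 1. Create a host object carrying the instance's iSCSI IQN.
        ("POST", f"{base}/hosts", {"names": host, "iqns": [iqn]}),
        # 2. Provision a volume of the requested size (bytes).
        ("POST", f"{base}/volumes", {"names": volume, "provisioned": size_bytes}),
        # 3. Connect the volume to the host so it is visible over iSCSI.
        ("POST", f"{base}/connections",
         {"host_names": host, "volume_names": volume}),
    ]
```

The ordering matters: the host must exist before the connection, and the connection is what exposes the volume to the instance's iSCSI initiator.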
Download/Copy the User Data Script
The following GitHub link includes two scripts:
Edit Environment Variables in the Script
After downloading/copying the script, enter your own CBS variables, host information, and volume size. (The screenshot below shows an example of the Windows user data script.)
Add User Data to Launch Template
In the AWS console, navigate to EC2 Launch Templates, then select the launch template configured with your Auto Scaling group.
Click on Advanced details, then from Actions select Modify template. Scroll down and paste the script modified in the previous step into the User Data field.
Your template likely already includes user data used to bootstrap the instance. Make sure not to overwrite it; append this script at the end.
Create CBS API Client
To use the Pure Storage Python SDK, FlashArray/Cloud Block Store authenticates REST access through an API client. The following steps produce the required parameters, which are used later in the Lambda function code.
Generating an RSA Key Pair
If you don't already have an RSA key pair available, generate one using the Linux commands below:
openssl genrsa -out cbsprivate.pem 2048
openssl rsa -in cbsprivate.pem -outform PEM -pubout -out cbspublic.pem
Display your new public key in plain text using the following command:

cat cbspublic.pem
You should see something similar to the following:
-----BEGIN PUBLIC KEY-----
MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEAt7WqkyADURjdO9lPtNXW
4ihQ4FF/DDsSslJD7N9b8j5ghD5BJO543L2Nr96rh8Fa9pXpbSbiMGG67RQ695wK
uxrTdyhKQ8lZQuCEzX7+sUoNopeRK7vfsmiv+eT6g/wEFg4KBaTIrYKqPfEVw9Ub
0Ib1CjXHlx+DZdmff47ZrhOwaGQ4oYsJEKAA0Yc608b2yD9H84UN/uq/Ukh5Q7Th
3BtzY6LcQe5FrktQomH8AFCvzY7XBUao8iCPmg7jLnaFZmQpipslwoUpRfxDDD5L
7rO1OjR1GX2+3PInKYQ+ROoMp8MPUKCUU/pH4BDFAZF7A9W6H48bD2mnSjUnqwIp
gQIDAQAB
-----END PUBLIC KEY-----
Copy the public key plain text to your clipboard or a separate document for future reference; you will also need it in the next step.
Creating an API Client
To create an API client, SSH into your Cloud Block Store as a user with array admin privileges and enter the following command:
pureapiclient create --max-role array_admin --public-key <name_of_your_app>
Note: Each API client has an Issuer value. The issuer for an API client is the IdP that is associated with that client. The issuer defaults to the name of the API client, but you can optionally use the --issuer argument to set it to a different value. In this guide, you will always be acting as the IdP (even though you might delegate the actual token creation to a script), so feel free to set the issuer to whatever you want, or leave it as the default.
Note: The --max-role parameter sets the maximum role that access tokens issued through this API client can assume; this guide uses array_admin.
Note: You can also specify the optional --access-token-ttl parameter, which sets the time interval, in milliseconds, during which issued access tokens are valid. Allowed values range from 1000 (1 second) to 86400000 (1 day). The default value is 1 day.
After you run the command, you are prompted to enter your public key:

pureapiclient create --max-role array_admin --public-key myClient
Please enter public key followed by ^D:
Paste the public key plain text you retrieved with the cat cbspublic.pem command above and press Enter, then press Control+D.
Note: Press Control+D only after pressing Enter; otherwise, this won't work.
You should see an output similar to the following:
Name      Enabled  Max Role     Issuer    Access Token TTL  Client ID                             Key ID
myClient  False    array_admin  myClient  1d                ab18a763-2b34-4b61-aa8e-a45afc7ad945  e7d175d3-c88b-41ef-b5f9-79c75a53cd2e
As you may notice above, your API client is disabled by default, so you must enable it with the following command:
pureapiclient enable myClient
Name      Enabled  Role         Issuer    Access Token TTL  Client ID                             Key ID
myClient  True     array_admin  myClient  1d                ab18a763-2b34-4b61-aa8e-a45afc7ad945  e7d175d3-c88b-41ef-b5f9-79c75a53cd2e
Copy the details of your API client to a separate document; they will be pasted into the Python code later.
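These copied values plug straight into the Pure Storage Python SDK (py-pure-client). The helper below merely assembles the keyword arguments, using the example output above (substitute your own IDs); the commented-out call shows where they are consumed.

```python
# Assemble the Client arguments from the API client details above.
# The UUIDs are the example values from this guide; replace with yours.

def client_kwargs(cbs_ip, private_key_file):
    return {
        "target": cbs_ip,
        "username": "myClient",  # the API client name
        "issuer": "myClient",    # Issuer column from the output above
        "client_id": "ab18a763-2b34-4b61-aa8e-a45afc7ad945",
        "key_id": "e7d175d3-c88b-41ef-b5f9-79c75a53cd2e",
        "private_key_file": private_key_file,
    }

# Inside the Lambda function this becomes (requires py-pure-client):
# from pypureclient import flasharray
# client = flasharray.Client(**client_kwargs(CBS_IP, "/tmp/cbsprivate.pem"))
```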
Store Private Key in AWS Systems Manager Parameter Store
The private key generated in the previous step is used by the Lambda function. Storing it in the SSM Parameter Store lets the function retrieve it securely and programmatically on request using the AWS Python SDK.
On your local machine, change to the directory where the private key is located and run the command below.
aws ssm put-parameter --name "/cbs/apiclient/privatekey" --value "$(cat cbsprivate.pem)" --type SecureString
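At runtime the Lambda function reads the key back. In the minimal sketch below the SSM client is injected as a parameter so the helper can be exercised without AWS; in Lambda you would pass boto3.client("ssm"), and the execution role needs ssm:GetParameter (plus kms:Decrypt if the parameter is a SecureString).

```python
# Fetch the private key stored above. WithDecryption=True is required
# for SecureString parameters and is harmless for plain String ones.

def fetch_private_key(ssm, name="/cbs/apiclient/privatekey"):
    """Return the private key text from SSM Parameter Store."""
    response = ssm.get_parameter(Name=name, WithDecryption=True)
    return response["Parameter"]["Value"]
```

Usage inside the handler: `key = fetch_private_key(boto3.client("ssm"))`, then write it to /tmp for the SDK's private_key_file argument.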
Create and Configure Lambda Function
The Lambda function accesses the CBS array and runs a cleanup script. The script is written with the Pure Storage Python SDK and uses the API client created earlier to authenticate to CBS and remove the terminated instance's objects (host and volume).
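The cleanup sequence matters: the connection must be removed before the host and the volume. The sketch below shows that ordering with an injected array client; the method names on `array` are illustrative stand-ins, not the SDK's real signatures (with py-pure-client the three steps correspond to delete_connections, delete_hosts, and patch_volumes with destroyed=True).

```python
# Sketch of the cleanup ordering, with the array client injected so the
# logic can be tested without a live array.

def clean_up_instance(array, instance_id):
    """Remove the host and volume provisioned for a terminated instance."""
    host = f"host-{instance_id}"
    volume = f"vol-data-{instance_id}"
    # 1. Disconnect the volume from the host; a connected volume
    #    cannot be destroyed and a connected host cannot be deleted.
    array.delete_connection(host, volume)
    # 2. Delete the now-empty host object.
    array.delete_host(host)
    # 3. Destroy the volume (on FlashArray/CBS a destroyed volume stays
    #    recoverable for 24 hours unless explicitly eradicated).
    array.destroy_volume(volume)
    return host, volume
```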
For the sake of simplicity, this guide uses Terraform to implement the Lambda function. To follow a step-by-step implementation using the AWS Console or CLI, see this guide: Using AWS Lambda for Task Automation
Download and install the appropriate Terraform package for your operating system and hardware architecture via the link below:
Download Lambda Terraform Sample Files
Create a new directory for the Terraform deployment, and copy or download the Terraform sample for the Lambda function (the three files shown in the screenshot) from the link below:
Authenticate to AWS
There are two options to authenticate to AWS from your local machine or Terraform master machine:
- Install the AWS CLI and configure access to your environment with an access key and secret key for an IAM role or IAM user. Ensure they have sufficient permissions to deploy Lambda functions.
- Use the access key and secret key directly inside the Terraform file main.tf (the screenshot shows the location).
Download the Lambda Function Script
Access the GitHub link below and copy/download the function script to the project directory (or any directory). Then note the file path of the script; it is required for the Terraform variables file in the next section.
Navigate to the downloaded Terraform sample and use any text editor to edit the terraform.tfvars file, filling in the following values:
- region - The AWS region where the function is deployed.
- function_name - The function name.
- cbs_ip - The management IP address of CBS.
- cbs_username - The username of a valid user on the array.
- cbs_api_client_private_key - The file path of the API client private key.
- cbs_api_client_id - The Client ID (from the API client details copied earlier).
- cbs_api_key_id - The Key ID (from the API client details copied earlier).
- cbs_api_issuer - The API client's Issuer (from the API client details copied earlier).
- vpc_subnet_ids - The subnet IDs from which the Lambda function can reach the CBS management subnet.
- vpc_security_group_ids - The security groups with rules allowing communication with the CBS management subnet over HTTPS.
- lambda_python_script_path - The file path of the function script.
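As an illustration, a filled-in terraform.tfvars might look like the following; every value is a placeholder to be replaced with your own.

```hcl
# Example terraform.tfvars -- all values are placeholders.
region                     = "us-east-1"
function_name              = "cbs-asg-cleanup"
cbs_ip                     = "10.0.1.10"
cbs_username               = "cbsuser"
cbs_api_client_private_key = "./cbsprivate.pem"
cbs_api_client_id          = "ab18a763-2b34-4b61-aa8e-a45afc7ad945"
cbs_api_key_id             = "e7d175d3-c88b-41ef-b5f9-79c75a53cd2e"
cbs_api_issuer             = "myClient"
vpc_subnet_ids             = ["subnet-0123456789abcdef0"]
vpc_security_group_ids     = ["sg-0123456789abcdef0"]
lambda_python_script_path  = "./cbs_cleanup.py"
```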
Save the changes to the file. Then open a terminal window, navigate to the Terraform deployment directory you created, and run the terraform init command to initialize the working directory.
Next, run the terraform plan command to create the execution plan.
Finally, run the terraform apply command to execute the plan and start the deployment.
Configure SNS Notifications and Lambda Trigger
The following steps show how to generate notification events using SNS when ASG scales down and removes an EC2 instance.
Create SNS Notification in ASG
In the AWS console, navigate to EC2 Auto Scaling Groups. Under Activity, select Create notification, enter the name of the SNS topic to create, and select Termination from the list of event types.
Subscribe SNS to Lambda Function
The last step is to add the SNS topic as a trigger for the Lambda function. To do so, navigate to the Lambda function created in the previous section and click Add trigger.
From the first drop-down select SNS, then use the second drop-down to find and select the SNS topic. Click Add.