Part 1: Deploy FlashStack with SmartConfig


Before You Begin

It may be obvious, but the FlashStack components to be used for your SmartConfig + VMware Cloud Foundation deployment need to be racked, powered, and cabled before you begin. We strongly recommend cabling the various FlashStack items together according to the topology diagrams included in the SmartConfig Getting Started Guide.

For the deployment covered in this guide, the component diagram and cabling in use are shown below.

These additional items are needed prior to getting started:

  • UCS FIs, MDS and Nexus components must be factory reset and in DHCP discovery mode.

  • An untagged native VLAN for SmartConfig (/24 minimum). Typically this is accomplished by plugging the management interfaces of the various components into an unmanaged switch.

  • A separate, previously deployed ESXi, Hyper-V, or other bare-metal host (or a laptop/desktop) where the SmartConfig and VMware Cloud Foundation OVAs can be deployed, with connectivity to the untagged native network above.

  • The SmartConfig OVA deployed on the above system and connected to the untagged native network. SmartConfig also needs a static IP address assigned within its console.

  • An NTP server on the private network.

  • A minimum of three routable, production-defined VLANs for use with VMware Cloud Foundation (for ESXi management, vMotion, and VMware vSAN).

  • A Pure Storage FlashArray with its management IP on the SmartConfig network. The FlashArray used in this guide connects over Fibre Channel.

  • Cisco UCS, MDS and Nexus firmware and kickstart files.

  • ESXi 7 ISO.

  • Edit rights over a Windows DHCP scope and DNS zone. Note: static IP addressing and forward/reverse DNS records managed with solutions other than Windows will certainly work for the ESXi hosts. However, these two items are required if following the PowerShell steps outlined in the 'Finalizing ESXi for VMware Cloud Foundation' chapter at the end of this section; a minimal sketch of creating them follows this list.

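For reference, below is a minimal PowerShell sketch of creating these two Windows prerequisites. The scope range, subnet, and zone names are placeholders rather than values from this guide's lab; substitute your own.

# Minimal sketch: create the Windows DHCP scope and DNS zones that the later
# PowerShell steps depend on. All names, ranges, and zones below are
# hypothetical placeholders. Requires the DhcpServer and DnsServer modules.

# Create a /24 DHCP scope for the ESXi management network.
Add-DhcpServerv4Scope -Name 'FlashStack-ESXi-Mgmt' `
    -StartRange 192.168.40.50 -EndRange 192.168.40.150 `
    -SubnetMask 255.255.255.0 -State Active

# Create a forward lookup zone (Active Directory-integrated here) plus a
# reverse lookup zone so forward/reverse records can be registered later.
Add-DnsServerPrimaryZone -Name 'flashstack.example.com' -ReplicationScope 'Domain'
Add-DnsServerPrimaryZone -NetworkId '192.168.40.0/24' -ReplicationScope 'Domain'
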
Once the above items are verified as available and online, the first step is to navigate to the IP address assigned to the SmartConfig virtual appliance.

This first phase of our deployment will be done exclusively through the static IP address assigned to the SmartConfig OVA.

Enable the DHCP scope within SmartConfig so that the UCS, MDS and Nexus components in our FlashStack can be assigned an address from the DHCP IP pool for configuration and deployment. This is accomplished by clicking on the Enable DHCP Server for Auto Discovery radio button located in the top-right area of the GUI.

Most DHCP server fields should be populated automatically from the networking values assigned when SmartConfig was first configured. Select the DHCP range within which you want the FlashStack components to reside.

The next step is to manually add our FlashArray to the SmartConfig inventory so that it can be used in subsequent deployment steps.  To do this, click on the hamburger button just to the right of the DHCP scope radio button and select the Add FlashArray option.

Here we add the VIP for the FlashArray management connection and the pureuser credentials.
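
Optionally, before adding the array, the VIP and pureuser credentials can be verified from any machine on the management network. This is a sketch only: it assumes the Pure Storage PowerShell SDK v2 module is installed, and the VIP shown is a placeholder.

# Optional sanity check (sketch): confirm the FlashArray management VIP is
# reachable and the pureuser credentials work before adding it to SmartConfig.
# Assumes the PureStoragePowerShellSDK2 module; the VIP is a placeholder.
Import-Module PureStoragePowerShellSDK2

$cred  = Get-Credential -UserName pureuser -Message 'FlashArray credentials'
$array = Connect-Pfa2Array -Endpoint '192.168.40.20' -Credential $cred -IgnoreCertificateError

# Print basic array details to confirm the session is live.
Get-Pfa2Array -Array $array | Select-Object Name, Version
Disconnect-Pfa2Array -Array $array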

While we wait for the various FlashStack components to be picked up by the DHCP scope, the ISOs and kickstart scripts used to initialize, deploy, and configure FlashStack can be loaded into the SmartConfig ISO library. This repository not only contains the various UCS and Nexus operating environments but is also where we will stage our installation of ESXi at the end of the SmartConfig phase of this deployment. The ISO library is in the same menu where we added the FlashArray in the last step:

Multiple firmware versions for all pieces can be housed within this library and used to deploy whatever FlashStack configuration is required. The ISO library is divided into two sections: the top section shows firmware for Cisco components and provides the ability to upload additional operating systems for the current selection.

There is also a section for the available operating systems that can be installed and their respective kickstart scripts:

The bottom section shows all files previously uploaded to the library and gives the ability to delete them if they are no longer needed or require an update:

After just a few minutes, we can see that all of our Cisco components have been picked up by DHCP and given an address. The various FlashStack component options available for deployment are selectable across the top. This enables potentially many different Cisco systems to be staged and gives the deployment administrator the ability to pick only those components to be used for the build at hand. Clicking on a component will toggle it for selection, or clicking on the configuration option you want will automatically select the required underlying pieces. Our example deployment shows all components highlighted and ready for the next step: configuration. Click the Next button at the bottom of the GUI to proceed to the next phase of SmartConfig.


SmartConfig Configuration

The Configuration phase of our FlashStack deployment is where we input specific environmental constructs, including management IP addresses, operating system versions, kickstart scripts, and the VLANs to be used on the production network post-deployment. The beauty of this phase is that you potentially only need to do it once: all values used here can be exported as a JSON file at the end of the deployment and imported on subsequent deployments, automating them. A sample JSON based upon the examples provided in this guide is available here.

The top area of the Basic Manual Configuration section of the Configuration window includes some basic networking information, the administrative password for all Cisco gear, an IP range for KVM addresses on the B- and/or C-Series servers, which FlashArray is to be used with the deployment, and lastly which operating system (in our case ESXi 7) to install on the UCS servers, along with the ESXi kickstart script. The kickstart script is important because it sets some key variables so that these hosts can easily be used with VMware Cloud Foundation.

For more detail on all SmartConfig Configuration fields, please follow this link.

The lower section of the basic configuration screen includes firmware version and kickstart selection for the Nexus 9K switches and MDS, as well as management IP address assignment:

The advanced section of the Manual Configuration phase gives us the ability to specify which production VLANs, port channels, and SAN connections are assigned throughout the hardware stack. By default these values align with a Cisco Validated Design (CVD), but in our example deployment we are using VLAN 2140 for ESXi management, 2137 for vMotion, and 2138 for vSAN (denoted below as 'Application VLAN'), so they are changed as shown below. More VLANs can be added here for specific application usage.


With these values filled out, we next move on to the Initialization phase of the deployment. In just a few screens, we are already more than halfway done with our SmartConfig deployment!

SmartConfig Initialization

This phase of the process is exceedingly simple: we click the 'Initialize' button at the bottom of the screen, and all of the items entered during the Configuration phase are loaded and configured onto the various Cisco devices (management IPs, upload and/or upgrade of the selected firmware and kickstart scripts, and so on).

SmartConfig Deployment

This final step within SmartConfig will:

  • Complete our Nexus switch configuration 
  • Create and associate service profiles with our UCS hosts
  • Configure and zone the MDS switches
  • Create a boot from SAN policy 
  • Create a boot LUN for ESXi on the Pure Storage FlashArray 
  • Create a host group and a shared Fibre Channel volume on the FlashArray for immediate use post-deployment

For use cases that match the CVD, simply click on the 'Deploy' button within the basic workflow and all steps will complete automatically.  Imported JSON configurations will also run without the need for user interaction.   

However, just as in the Configuration phase of the SmartConfig process, there are advanced options available to help customize items that may not match a Cisco Validated Design.  As with the Configuration phase, these customizations can be captured and exported as a JSON post-deployment and used to automate subsequent FlashStack deployments.  For this example we will show the individual steps needed to customize our deployment for VMware Cloud Foundation within our lab.

VMware Cloud Foundation requires that the Management, vMotion, and vSAN VLANs are all aggregated on the ESXi vmnic0 and vmnic1 physical ports as part of a distributed switch. To allow for this, we first switch into 'Advanced Mode' within the Deployment phase of SmartConfig to make a couple of changes to the Nexus switch port channels and, later, to the UCS service profile management vNICs.

To get into the Advanced mode of the Deployment phase, click on the ‘Advanced’ button towards the top-right of the SmartConfig GUI.  Doing so will unlock the ability to edit the individual Deployment workflows as shown below.


Clicking on the edit button for the Configure Nexus Switches workflow gives us a graphical representation of the steps for each deployment action.

The change that we need to make specific to VMware Cloud Foundation is to include the additional VLANs it requires as part of the upstream port channels on both of our Nexus switches. This requires editing the following four workflows in the same fashion.

First select the Task Input button for the first Nexus workflow:

Then, switch to the Advanced tab (#1), hit the + sign to add an additional allowed VLAN (#2) and add vMotion, vSAN and the native VLAN to the allowed list (#3).  Note that these VLANs were set and are shown as an alias from the Configuration phase.

Once all of the additional VLANs have been added, as in the example below, click Save (#4).

Repeat this process for the 3 remaining upstream Port Channel workflows.

Now that the proper VLANs have been added, we can proceed with running the Nexus workflow by clicking the play button.

For our use case, the MDS Switch workflow requires no editing, so that can be run with the default values once the Nexus workflow completes.

In similar fashion to the Nexus setup, we need to add our additional VLANs to the management vNIC template (A&B) of the UCS service profile so that the VLANs can communicate across that interface when VMware Cloud Foundation is deployed.

To update this, click on the edit button in the Configure UCS Fabric Interconnect workflow.


In the spawned graphical workflow hierarchy, expand Configure UCS LAN Connectivity.

We will be editing the workflows of Create Management vNIC Template (A) and (B).


As with the Nexus setup, select the Advanced tab (#1), but this time select the additional VLANs we require via the drop-down menu (#2 and #3). Save the updated configuration (#4) and repeat the process on the other vNIC Template (B) workflow.

This is the last customization required for the Deployment phase to complete. All subsequent workflows can be run with default values.
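
As an optional check that the VLAN additions took effect, the management vNIC templates can be inspected with Cisco UCS PowerTool once the workflow finishes. This is only a sketch: it assumes the Cisco.UCSManager module is installed, and the Fabric Interconnect address and the 'Mgmt' name filter are placeholders for this environment.

# Optional post-deployment check (sketch): list the VLANs attached to the
# management vNIC templates. Assumes the Cisco UCS PowerTool module
# (Cisco.UCSManager); the FI IP and 'Mgmt' name filter are placeholders.
Import-Module Cisco.UCSManager

Connect-Ucs -Name '192.168.40.10' -Credential (Get-Credential)

# Each vnicEtherIf child object represents one VLAN on the template.
Get-UcsVnicTemplate | Where-Object { $_.Name -match 'Mgmt' } |
    Get-UcsVnicInterface |
    Select-Object Dn, Name

Disconnect-Ucs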

Once the final workflow has been completed, all of the customizations that we made previously can be downloaded as a JSON file via the 'Export Configuration' button shown below. A sample JSON from the above example can be downloaded from our project GitHub page here.

Savvier users can edit this JSON file directly for their needs and import it for automated SmartConfig deployment.
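
If you do edit the file by hand, a quick parse check before importing it can catch typos. A minimal sketch, with a placeholder file name:

# Quick sanity check (sketch): make sure a hand-edited SmartConfig JSON file
# still parses before importing it. The file name is a placeholder.
try {
    $config = Get-Content -Raw '.\smartconfig-deploy.json' | ConvertFrom-Json
    Write-Host 'JSON parsed OK; top-level properties:'
    $config.PSObject.Properties.Name
}
catch {
    Write-Error "JSON is malformed: $($_.Exception.Message)"
}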

Finalizing ESXi for VMware Cloud Foundation

The ESXi kickstart script deployed within SmartConfig is available at this GitHub repository.  This script handles a majority of the steps needed to prepare the ESXi servers for use with VMware Cloud Foundation. 

The kickstart script does the following once SmartConfig deployment completes:

  • Sets a root password
  • Installs ESXi onto the selected Pure Storage FlashArray via boot from SAN 
  • Sets a VLAN for both the Management and VM networks 
  • Enables ssh and the ESXi shell  
  • Sets and enables NTP 

One item that the kickstart script cannot handle (since it cannot be run on a per-host basis) is assigning each ESXi host a hostname, DNS entry, and static IP address.

There are numerous ways to assign a management IP and give a DNS name to each host, but for this guide we decided to create a DHCP reserved range that is based upon each host’s MAC address.  This aligns with our theme of minimizing or completely eliminating the need to individually touch servers.

To use DHCP in this fashion, we first need to obtain the MAC address for each UCS/ESXi host in the chassis. This can be pulled from the Nexus switches (via the show mac address-table command) or via UCSM, which we will show here.

After logging in to the UCS Manager GUI, we navigate to Servers > LAN > MAC Identity Assignment Tab.  We can then filter for the management MAC addresses of interest for our eight ESXi hosts.


From the Advanced Filter panel, we remove all non-management MAC addresses as follows:


Then we export the filtered data to a CSV file:


The exported CSV includes the MAC addresses under the ‘ID’ column.
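
As an alternative to the GUI export, the same MAC list can be pulled with Cisco UCS PowerTool from a management workstation. Again, this is a sketch: it assumes the Cisco.UCSManager module, and the Fabric Interconnect address and the 'Mgmt' vNIC name filter are placeholders.

# Alternative to the UCSM GUI export (sketch): pull the management vNIC MAC
# addresses with UCS PowerTool and write them to a CSV. The Addr property
# holds the MAC address of each service profile vNIC.
Import-Module Cisco.UCSManager

Connect-Ucs -Name '192.168.40.10' -Credential (Get-Credential)

Get-UcsVnic | Where-Object { $_.Name -match 'Mgmt' } |
    Select-Object Dn, Name, Addr |
    Export-Csv -Path '.\esxi-mgmt-macs.csv' -NoTypeInformation

Disconnect-Ucs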

For the creation of our reserved DHCP range and forward/reverse DNS entries we have created a PowerShell script that is available on GitHub here.  This script will take three inputs, two of which must be created prior to usage on a Windows server:  a DHCP range and a DNS zone.

The third item for the script is a CSV file (example here) that contains the MAC address, IP address (within the DHCP scope) and desired ESXi hostnames: 
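
To illustrate the approach, below is a minimal sketch of the kind of loop such a script can use. The CSV column names, scope ID, and zone name are hypothetical placeholders; the actual script on GitHub prompts for its own inputs.

# Minimal sketch of the approach: for each CSV row, create a DHCP reservation
# keyed to the MAC plus forward/reverse DNS records. The CSV layout below is
# hypothetical; the actual script linked above prompts for its inputs.
#
# Example CSV layout:
#   Mac,Ip,Hostname
#   00:25:B5:00:0A:01,192.168.40.101,esxi01
$scopeId = '192.168.40.0'
$zone    = 'flashstack.example.com'

foreach ($row in Import-Csv '.\esxi-hosts.csv') {
    # Normalize the MAC to the dashed form Windows DHCP expects.
    $hex = $row.Mac -replace '[^0-9A-Fa-f]', ''
    $mac = $hex -replace '(..)(?!$)', '$1-'

    Add-DhcpServerv4Reservation -ScopeId $scopeId -IPAddress $row.Ip `
        -ClientId $mac -Name $row.Hostname

    # -CreatePtr registers the matching reverse (PTR) record as well.
    Add-DnsServerResourceRecordA -ZoneName $zone -Name $row.Hostname `
        -IPv4Address $row.Ip -CreatePtr
}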

When invoked, users input those three pieces of information for the script to run:

After script completion and refreshing the DHCP and DNS management windows, we can see that our DHCP reserved scope has been created along with forward/reverse DNS entries for each host within the CSV file.

The final step of this phase in our deployment is to reset the management interfaces on the ESXi hosts so that they can pick up their new hostname and management IP. One way to accomplish this is to run the below script after connecting to a Fabric Interconnect with SSH. Please note that this script will immediately reboot a UCS host, so make certain that you are connected to the correct Fabric Interconnect before running it.

Once the ESXi hosts come back online after reboot, we can see that this example server has been assigned the desired management IP address and hostname.  We are now able to proceed and deploy VMware Cloud Foundation with CloudBuilder.
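
To spot-check all eight hosts at once rather than one at a time, something like the following works, assuming the same hypothetical zone and a hypothetical naming pattern:

# Quick spot check (sketch): confirm each host resolves and answers a ping
# after the reboot. The zone and esxiNN naming pattern are hypothetical.
$zone = 'flashstack.example.com'
1..8 | ForEach-Object {
    $fqdn = 'esxi{0:d2}.{1}' -f $_, $zone
    Resolve-DnsName -Name $fqdn -Type A | Select-Object Name, IP4Address
    Test-Connection -ComputerName $fqdn -Count 1 -Quiet
}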

Follow this link to proceed to Part 2.