
FlashStack Boot From SAN - Fibre Channel

Document Overview

This document provides a detailed procedure for booting from SAN in a Cisco Unified Computing System (UCS). It assumes the reader is familiar with UCS Manager (UCSM), basic service profile creation, and LAN/SAN configuration.

 

This document details the steps necessary to configure servers to boot from SAN, on both the server side and the network side.

 

Fibre Channel Architectures

 

FlashArray M, X, UCS Fabric Interconnect 6300 using FC Direct Attached

 

In this configuration the FlashArray is connected directly to the UCS Fabric Interconnects (FIs). The benefit of this architecture is reduced cost, since no external network or SAN switches are required, while still providing the low latency of Fibre Channel. This architecture is designed for environments where the server count is not expected to exceed the connectivity of a single pair of Fabric Interconnects. Fibre Channel Direct Attached requires no MDS or Nexus configuration. The steps outlined below cover the FlashArray and UCS Manager configuration needed to present an FC volume as a Boot From SAN device.

 

 

FlashArray M, X, MDS 9000, Nexus 9000, UCS Fabric Interconnect 6400 using FC

In this configuration the FlashArray is connected to redundant MDS switches, which in turn connect to the UCS Fabric Interconnects (FIs). This architecture benefits environments that plan to scale beyond a couple of chassis: using a dedicated SAN fabric instead of FI ports allows the SAN to communicate with many chassis and with devices external to the Fabric Interconnects. Likewise, network traffic is routed through redundant Nexus switches to provide the same scalability at the network layer.

 

Prerequisites 

 

In order to complete the configuration, the following steps need to be done before reviewing this section:

  1. Complete the base configuration of the Fabric Interconnects.

  2. Put the Fabric Interconnects in FC Switching Mode. For information on how to configure FC Mode and Unified Ports, see the user guide for your specific UCS Manager version here:

 https://www.cisco.com/c/en/us/support/servers-unified-computing/ucs-manager/series.html#Configuration 

 (CHANGE CONTROL ALERT - This step requires a reboot of the Fabric Interconnects) 

  3. Configure Unified Ports as FC Storage Ports (the FC ports connected to Pure).

 (CHANGE CONTROL ALERT - This step requires a reboot of the Fabric Interconnects) 

  4. Create the Service Profile (Note: the Service Profile configuration requires two vHBAs).

  5. Create WWPN and WWNN Pools.

 

NOTES

When configuring Unified Ports on Gen3/Gen4 Fabric Interconnects, ports are allocated from left to right, so the FlashArray should be plugged into the left-most ports.

On Gen1/Gen2 the opposite is true: ports are provisioned from right to left, so plug the FlashArray into the right-most ports.

On UCS Mini, ports are provisioned from top to bottom (or left to right visually).


Configuration Guide 

 

The Boot Process    

  

The boot process is the staged loading of the installed operating system code from the boot device into computer memory after the computer is powered on. The BIOS (Basic Input/Output System) is the most basic code and is loaded first from the system firmware after the POST initialization process. It initializes the computer hardware, reads in the code that loads the operating system, completes hardware setup, and produces a fully functional operating system residing in memory.

  

The boot process can occur from a direct-attached disk or over a network. In all cases, a critical step to a successful boot is locating the boot image.

Storage administrators can enable servers to boot from SAN by configuring the server and the underlying hardware components. After the Power On Self-Test (POST), the server firmware fetches the boot block from the device designated as the boot device in the BIOS settings. Once the hardware detects the boot device, it follows the regular boot process.

 

These best practice guidelines are for a Pure Storage array, which has two controllers with active/active target ports, and provide the following:

1. Redundancy is provided by two disjoint SAN fabrics from the data perspective, together with dual-port HBAs and the redundant Pure Storage FlashArray controllers.

2. The redundant fabric ports present no single point of failure (SPOF); the design is resilient to port failures, cable failures, target port failures, and UCS FI failure.

 

Configuration Workflow 

To complete the configuration, we will accomplish the following tasks:

  1. Storage Configuration

    a. Create Host Group

    b. Create Boot Volume

    c. Connect Boot Volume to Host

  2. Cisco UCS Configuration

    a. Create Boot from SAN Policy

    b. Reconfigure Service Profile to use the new policy

 

Storage Array Configuration        

  

First, the storage admin has to provision LUNs of the required size for installing the OS and enabling boot from SAN.

 

NOTE - Different operating systems have different boot volume size requirements; see the operating system recommendations.

 

The boot from SAN LUN must be a private volume on the Pure Storage FlashArray (LUN ID between 1 and 9). The storage admin also needs to know the World Wide Port Name (WWPN) of each server adapter in order to perform the necessary LUN masking, which is a critical step in the SAN LUN configuration. The steps for configuring the boot LUN follow.
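The adapter WWPNs can be read from the service profile's vHBAs in UCS Manager. Once the service profile is associated and the vHBAs have logged in to the fabric, the array can also list the initiators it sees. A minimal sketch, assuming a Purity 5.x CLI (verify the exact command against your Purity version):

pureport list --initiator

Each initiator WWN shown should correspond to a vHBA WWPN from the WWPN pool assigned to the service profile.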

 

GUI STEPS (if you prefer the CLI, the equivalent configuration follows this section)

(This section assumes Purity version 5.0 or later)


 

  1. Create Host Group (if using a hypervisor such as ESXi or XenServer).

  2. Add Hosts to the Host Group.

NOTE - The host group is used to share datastores between all hosts, not for Boot from SAN volumes.

  3. Create Boot Volume (follow the operating system recommendations).

  4. Connect Boot Volume to Host.

NOTE - Connect the volume only to the host; do not connect it to the host group.

 

CLI STEPS

1. Create a host group:

purehgroup create UCSESX-VDIHostGroup
Name                 Hosts
UCSESX-VDIHostGroup  -

2. Create a host:

purehost create --wwnlist 10:00:00:ab:cd:ef:aa:01,10:00:00:ab:cd:ef:bb:02 UCSESXHost-13
Name           WWN                      IQN
UCSESXHost-13  10:00:00:AB:CD:EF:AA:01  -
               10:00:00:AB:CD:EF:BB:02

3. Add the host to the host group:

purehgroup setattr --hostlist UCSESXHost-13 UCSESX-VDIHostGroup
Name                 Hosts
UCSESX-VDIHostGroup  UCSESXHost-13

4. Create the boot volume:

purevol create --size 50G UCSESXHost-13-BootVol
Name                   Size  Source  Created                  Serial
UCSESXHost-13-BootVol  50G   -       2013-12-05 23:22:58 PST  30C91ABE5715C12700010165

5. Connect the boot volume to the host:

purehost connect --vol UCSESXHost-13-BootVol UCSESXHost-13
Name           Vol                    LUN
UCSESXHost-13  UCSESXHost-13-BootVol  1

  

NOTE - At this point, if you are using Cisco MDS, you will zone the FlashArray to the UCS blades. See SAN Switch Configuration (for MDS connected configurations only) for more details.

 

UCS Configuration      

From the UCS configuration perspective, there are two steps for configuring Boot From SAN (BFS):

1. Create the Boot from SAN policy

2. Associate the Boot from SAN policy with the service profile template

 

Cisco UCS Configuration Steps 

 

1. Assuming you have configured the UCS system and created a service profile, the following steps will guide you through creating a boot from SAN policy and assigning it to the service profile. The figure below shows a service profile with two vHBAs configured.

  

2. The general process of creating a boot from SAN policy involves creating a SAN Primary and, for redundancy, a SAN Secondary. As Cisco UCS has two separate fabrics (A and B), it is important to note which ports of the storage array are connected to which Fabric Interconnect. When the storage array has multiple ports, additional redundancy is built in for SAN boot in the form of a primary and a secondary path to the boot LUNs under both the SAN Primary and the SAN Secondary. The figure below shows the Pure Storage FlashArray target ports that are used.

 

3. Create the boot policy by right-clicking Boot Policies under Servers > Policies > root, or just click the "+" button on the right as shown below.

  

    

4. The "Create Boot Policy" wizard will pop up. Enter a name for the boot policy along with a description. Then, under vHBA, select "Add SAN Boot". In the "Add SAN Boot" window, enter a vHBA name; if "Enforce vHBA Name" is selected, the vHBA name must match the name in the service profile.

E.g., if your service profile vHBA name is vHBA1, type that in the vHBA field.

 

5. Once the SAN Boot vHBA is added, click "Add SAN Boot Target" and input the Pure Storage target port WWPN. In the example below we pick the CT0.FC0 WWPN. Make sure the Boot Target LUN is set to 1; the default value of 0 is not supported on Pure Storage.

  

Important note: Set the Boot Target LUN to 1, or to the private volume LUN ID you configured on the Pure Storage array.

  

  

6. Input the secondary SAN Boot Target information by clicking "Add SAN Boot Target" one more time. Again, make sure the Boot Target LUN is set to 1 or to the private boot volume LUN ID configured on the Pure Storage array (the default value of 0 is not supported on Pure Storage). Input the boot target WWPN; in the example below we pick the CT1.FC1 WWPN.

  

 

7. Repeat steps 4 and 5 to add a SAN Secondary, this time selecting one WWPN from CT0 and another from CT1. The final configuration looks as follows:
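As a textual sketch of the finished policy (the vHBA names vHBA0/vHBA1 are assumptions, as is the cabling of CT0.FC0 and CT1.FC1 to Fabric A; use the names and WWPNs from your own environment):

SAN Primary (vHBA0, Fabric A)
  SAN Boot Target Primary:   CT0.FC0 WWPN, LUN 1
  SAN Boot Target Secondary: CT1.FC1 WWPN, LUN 1
SAN Secondary (vHBA1, Fabric B)
  SAN Boot Target Primary:   a remaining CT0 port WWPN, LUN 1
  SAN Boot Target Secondary: a remaining CT1 port WWPN, LUN 1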

  

 

8. This boot from SAN policy can be associated with a template; when service profiles are deployed from the template, each one will get the same boot from SAN policy. Alternatively, one can select an individual service profile to configure the boot order and assign the boot policy.

   

9. If the UCS blade is already associated with a service profile, the blade will be rebooted. UCS Manager will do this for you; you can watch the FSM tab of the server where the service profile is applied to see the progress. If the service profile is not associated, no reboot occurs.

10. If everything is configured correctly, you should see the Pure Storage LUNs on the KVM console while booting, and the Cisco VIC Option ROM loading during server boot should report "Option ROM Installed Successfully" at least once (once for each vHBA port). For example:

  

Appendix A contains tips for common troubleshooting scenarios; readers should review it if they hit problems.

 

This completes the service profile portion of the "Boot from SAN" procedure. Once the service profile is associated with a given blade server, the WWPNs automatically log in to the fabric, connect to the boot LUN, and the server boots successfully.
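For MDS-connected configurations, you can also confirm the fabric side from the switch before watching the KVM console. A minimal sketch, assuming VSAN 100 (substitute your own VSAN ID):

show flogi database vsan 100
(the service profile's vHBA WWPNs should appear as logged-in initiators)

show zoneset active vsan 100
(each vHBA should share an active zone with the Pure Storage target ports)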

  

 

SAN Switch Configuration (for MDS connected configurations only) 

 

SAN Zoning Best Practices

The NPIV feature has to be turned on in the SAN switch. Check the SFP modules connected to the UCS Fabric Interconnects to make sure they are compatible (if using an 8 Gb switch, make sure you are using 8 Gb SFPs).

When using Cisco MDS, make sure the following conditions are in place (a verification sketch follows this list):

1. The port mode and the speed are both set to AUTO.

2. Check that the rate mode is "dedicated".

3. Check the VSAN configuration to make sure the VSAN configured on the SAN switch matches the VSAN on the Cisco UCS Fabric Interconnect's FC ports (a very important step).
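These conditions can be spot-checked from the MDS CLI. A minimal sketch, assuming the array-facing port is fc1/1 and the VSAN is 100 (both are assumptions; substitute your own interface and VSAN):

show feature | include npiv
(NPIV should show as enabled)

show interface fc1/1
(port mode and speed should be AUTO, rate mode dedicated)

show vsan membership interface fc1/1
(the VSAN should match the one on the FI FC ports)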

SAN Zoning Configuration (for MDS connected configurations only) 

SAN zoning is the next step that needs to be carried out, so that the vHBA on the UCS blade sees the target boot LUN on the SAN fabric. The vHBA needs complete visibility of the array LUN for BFS to succeed.

Follow the steps in Appendix A for configuring zones on SAN switches. It is very important to know which Pure Storage target ports are connected to Fabric A and which to Fabric B, as that determines the inputs to the UCS SAN boot policy.
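For reference, here is a minimal single-initiator, single-target zone on the Fabric A MDS, using the example WWPNs from this document. The VSAN number (100) and zoneset name (FABRIC-A) are assumptions; adapt them to your fabric:

configure terminal
zone name UCSESXHost-13-FabricA vsan 100
  member pwwn 20:00:00:25:b5:0a:00:0f
  member pwwn 52:4a:93:72:b8:18:37:00
zoneset name FABRIC-A vsan 100
  member UCSESXHost-13-FabricA
zoneset activate name FABRIC-A vsan 100
copy running-config startup-config

Repeat the equivalent zone on the Fabric B switch with the host's Fabric B WWPN (20:00:00:25:b5:0b:00:0f) and a Fabric B target port.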

 

SAN Zoning Example Configuration

Here is an example of how we have configured our SAN with Cisco UCS. The commands below were run on Cisco MDS/Nexus 5500UP switches.

Verify which ports are logged in to the A and B sides of the fabric with the following commands.

Fabric A

Sw-1 # sh fcns da | grep 52:4a:93:72:b8:18:37

0xc40001 N 52:4a:93:72:b8:18:37:00             scsi-fcp:target 

0xc40011 N 52:4a:93:72:b8:18:37:02             scsi-fcp:target 

0xc40018 N 52:4a:93:72:b8:18:37:11             scsi-fcp:target 

0xc4001e N 52:4a:93:72:b8:18:37:13             scsi-fcp:target 

 

# sh fcns da |  grep -i 20:00:00:25:B5:0A:00:0F

0xc40015 N 20:00:00:25:b5:0a:00:0f             scsi-fcp:init fc-gs

 

# sh fcns da |  grep -i 20:00:00:25:B5:0B:00:0F

 

The key factor to notice is that the A side of the fabric sees only UCS Fabric A WWPNs; we have also verified that the Pure Storage target ports are connected redundantly.

 

Fabric B

Sw-2 # sh fcns da | grep 52:4a:93:72:b8:18:37

0xc50010 N 52:4a:93:72:b8:18:37:01             scsi-fcp:target 

0xc50011 N 52:4a:93:72:b8:18:37:03             scsi-fcp:target 

0xc50018 N 52:4a:93:72:b8:18:37:10             scsi-fcp:target 

0xc5001e N 52:4a:93:72:b8:18:37:12             scsi-fcp:target 

 

# sh fcns da |  grep -i 20:00:00:25:B5:0A:00:0F

 

# sh fcns da |  grep -i 20:00:00:25:B5:0B:00:0F

0xc50015 N 20:00:00:25:b5:0b:00:0f             scsi-fcp:init fc-gs

 

The key factor to notice is that the B side of the fabric sees only UCS Fabric B WWPNs; we have also verified that the Pure Storage target ports are connected redundantly.

For more information on MDS zoning, please see the Cisco MDS 9000 CLI Guide: Configuring Zones and Zonesets.

 

APPENDIX A   

Debugging BFS problems 

The troubleshooting checklist includes:

1. Check the UCS Fabric Interconnect configuration: physical port lights should be green, NPIV should be enabled, and the right SFP modules should be used.

2. Check that the Fabric Interconnect licensing is proper, the port states are valid, and there are no port errors.

3. Make sure SAN zoning is done correctly; verify with the connection map on the System tab of the Pure Storage GUI to see whether the hosts are logging in to the target ports.

4. Make sure the private volume is created exclusively on a per-host basis on the Pure Storage side and is not shared with other hosts.

5. Make sure the UCS boot from SAN policy has the exact target ports without any typos (trust me on this) and uses LUN ID 1.

6. If "Enforce vHBA Name" is checked, the vHBA name in the service profile must match the boot policy vHBA name. Uncheck it if you don't want that behavior.

7. If "Reboot on Boot Order Change" is not checked, you have to reboot the host manually for the changes to take effect.

8. Always make sure you save the boot from SAN policy and reboot the server. Most of the time the "Save" button is not clicked, and a manual server reboot won't pick up the new changes.

9. Make sure the server is rebooted after the service profile is associated.

10. Open the KVM console and watch the screen to make sure the Pure Storage LUNs actually show up during BIOS boot.

11. If rebooting the server does not pick up the Pure Storage LUNs, try re-associating the service profile to the host and retry.
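When working through this checklist, a few CLI spot checks can save time. A hedged sketch, assuming VSAN 100 on the MDS and the example host name used earlier (adapt the names, and verify the flags against your Purity version):

On the MDS switch:
show fcns database vsan 100
(both the vHBA initiator and the Pure Storage target WWPNs should be registered)

On the FlashArray:
purehost list --connect UCSESXHost-13
(the boot volume should be connected to the host at LUN 1)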