FlashArray Concepts and Features

This chapter provides a brief introduction to the FlashArray hardware, networking, and storage components and describes where they are managed in Purity.

Purity is the operating environment that manages the FlashArray. Purity, which comes bundled with the FlashArray, can be administered through a graphical user interface (Purity GUI) or command line interface (Purity CLI).

The FlashArray can also be managed through the Pure Storage Representational State Transfer (REST) API, which uses HTTP requests to interact with resources within Pure Storage. For more information about the Pure Storage REST API, refer to the Pure Storage REST API Reference Guide in the Pure Storage Knowledge Base.

Arrays

A FlashArray controller contains the processor and memory complex that runs the Purity software, buffers incoming data, and interfaces to storage shelves, other controllers, and hosts. FlashArray controllers are stateless: all metadata related to the data stored in a FlashArray resides on the storage shelves. Therefore, a controller can be replaced at any time with no data loss.

The following array-specific tasks can be performed through the Purity GUI:

  • Display array health through the System > System Health view.

  • Monitor capacity, storage consumption, and performance (latency, IOPS, bandwidth) metrics through the Analysis tab.

  • Change array names through the System > Configuration > Array sub-view.

The same tasks can also be performed through the Purity CLI purearray command.
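
For example, a minimal CLI session covering these tasks might look like the following sketch. The array name is illustrative, and exact subcommands and output vary by Purity version; check purearray --help on your array.

    purearray list             # display the array name, ID, and Purity version
    purearray monitor          # report latency, IOPS, and bandwidth
    purearray rename array02   # change the array name to "array02"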

Connected Arrays

A connection must be established between two arrays in order for data transfer to occur. For example, arrays must be connected to replicate data from one array to another. When two arrays are connected, the array where data is being transferred from is called the source array, and the array where data is being transferred to is called the target array.

Arrays are connected using a connection key, which is supplied from one array and entered into the other array. After two arrays are connected, the target array must allow the connection from the source array to accept the data being transferred.

Once two arrays are connected, optionally configure network bandwidth throttling to set maximum threshold values for outbound traffic.

Connected arrays are managed through the Purity GUI (System > Connected Arrays) and Purity CLI (purearray connect command).

Network bandwidth throttling is configured through the Purity GUI (System > Connected Arrays) and Purity CLI (purearray throttle command).
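
A sketch of the connection workflow described above follows. Array names are illustrative, and the throttling flags in particular are assumptions to verify with purearray --help for your Purity version.

    # On the target array, display the connection key to share with the source array:
    purearray list --connection-key

    # On the source array, establish the connection (prompts for the key):
    purearray connect --management-address target1.example.com --type replication --connection-key

    # Optionally cap outbound replication bandwidth (flag names assumed):
    purearray throttle setattr --default-limit 100M target1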

Hardware Components

Purity displays the operational status of most FlashArray hardware components. The display is primarily useful for diagnosing hardware-related problems.

Status information for each component includes the operational status, index number, operating speed, and reported temperature.

In addition to general hardware component operational status, Purity also displays status information for each flash module and NVRAM module on the array. Status information includes module status, physical storage capacity, module health, and time at which a module became non-responsive.

FlashArray hardware names are fixed. When they are powered on, FlashArray controllers and storage shelves automatically discover each other and self-configure to optimize I/O performance, data integrity, availability, and fault recoverability, all without administrator intervention.

Purity visually identifies certain hardware components through LEDs. Controllers, flash module bays, NVRAM bays, and storage shelves contain LED lights that can be turned on and off through Purity. In addition, storage shelves have numeric LED displays that uniquely identify each shelf in multi-shelf arrays.

Hardware components are displayed and administered through the Purity GUI (System > System Health) and Purity CLI (purehw command).

Flash modules and NVRAM modules are displayed through the Purity GUI (System > System Health) and Purity CLI (puredrive command).
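
For example, the following sketch displays hardware status and lights a shelf LED to locate it physically. The component name follows the conventions described below, and the --identify flag is an assumption to verify with purehw --help.

    purehw list                        # operational status of all hardware components
    puredrive list                     # status of flash modules and NVRAM modules
    purehw setattr --identify on SH0   # turn on the LED for storage shelf 0
    purehw setattr --identify off SH0  # turn it off again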

Each hardware component in a FlashArray has a unique name that identifies its location in the array for service purposes.

The hardware component names are used throughout Purity, for instance in the GUI System > System Health view, and with CLI commands such as puredrive and purehw.

Hardware components and their naming vary by FlashArray series. For the hardware technical specifications of each FlashArray model, refer to the Products page on the Pure Storage website.

Hardware Components in the FA-300 and FA-400 Series

In the FlashArray-300 and FlashArray-400 series, the controller and storage shelf names have the form XXm. The names of other components have the form XXm.YYn or XXm.YYYn, where:

XX

Denotes the type of chassis. Controllers use CT. Storage shelves use SH.

m

Identifies the specific controller or storage shelf.

  • For controllers, m has a value of 0 or 1. For example, CT0, CT1.

  • For storage shelves, m represents the shelf number, starting at 0. For example, SH0, SH1.

    The assigned number can be changed on the shelf front panel or by running the purehw setattr --id command.

YY or YYY

Denotes the type of component. For example, FAN for cooling device, FC for Fibre Channel port.

n

Identifies the specific component by its index (its relative position within the array), starting at 0.

The following tables list the hardware components in the FA-300 and FA-400 series that report status, grouped by their location on the array.

The Identify Light column shows which components have an LED light on the physical component that can be turned on and off.

Controller (CTm)

Component Name    Identify Light    Component Type
CTm               Yes               Controller
CTm.ETHn          --                Ethernet port
CTm.FANn          --                Cooling fan
CTm.FCn           --                Fibre Channel port
CTm.IBn           --                InfiniBand port
CTm.PWRn          --                Power module
CTm.SASn          --                SAS port
CTm.TMPn          --                Temperature sensor

Storage Shelf (SHm)

Component Name    Identify Light    Component Type
SHm               Yes               Storage shelf
SHm.BAYn          Yes               Storage bay
SHm.FANn          --                Cooling fan
SHm.IOMn          --                I/O module
SHm.PWRn          --                Power module
SHm.SASn          --                SAS port
SHm.TMPn          --                Temperature sensor

Hardware Components in FlashArray//m

In FlashArray//m, the chassis, controller, and storage shelf names have the form XXm. The names of other components have the form XXm.YYn or XXm.YYYn, where:

XX

Denotes the type of chassis:

  • CH - FlashArray//m chassis.

  • CT - Controller.

  • SH - Storage shelf.

m

Identifies the specific chassis, controller, or storage shelf:

  • For the FlashArray//m chassis, m has a value of 0. For example, CH0.

  • For controllers, m has a value of 0 or 1. For example, CT0, CT1.

  • For storage shelves, m represents the shelf number, starting at 0. For example, SH0, SH1.

    The assigned number can be changed on the shelf front panel or by running the purehw setattr --id command.

YY or YYY

Denotes the type of component. For example, FAN for cooling device, FC for Fibre Channel port.

n

Identifies the specific component by its index (its relative position within the FlashArray//m chassis, controller, or storage shelf), starting at 0.

The following tables list the FlashArray//m hardware components that report status, grouped by their location on the array. The hardware component names are used throughout Purity, for instance in the GUI System > System Health view, and with CLI commands such as puredrive and purehw.

The Identify Light column shows which components have an LED light on the physical component that can be turned on and off.

Chassis (CH0)

Component Name    Identify Light    Component Type
CH0               Yes               Chassis
CH0.BAYn          Yes               Storage bay
CH0.NVBn          Yes               NVRAM bay
CH0.PWRn          --                Power module

Controller (CTm)

Component Name    Identify Light    Component Type
CTm               Yes               Controller
CTm.ETHn          --                Ethernet port
CTm.FANn          --                Cooling fan
CTm.FCn           --                Fibre Channel port
CTm.IBn           --                InfiniBand port (included only with certain upgrade kits)
CTm.SASn          --                SAS port
CTm.TMPn          --                Temperature sensor

Storage Shelf (SHm)

Component Name    Identify Light    Component Type
SHm               Yes               Storage shelf
SHm.BAYn          Yes               Storage bay
SHm.FANn          --                Cooling fan
SHm.IOMn          --                I/O module
SHm.PWRn          --                Power module
SHm.SASn          --                SAS port
SHm.TMPn          --                Temperature sensor

Network Interface

View and configure network interface and DNS attributes through Purity.

The Purity network interfaces manage the bond, Ethernet, virtual, and VLAN interfaces used to connect the array to an administrative network.

Each FlashArray controller is equipped with two Ethernet interfaces that connect to a data center network for array administration.

A bond interface combines two or more similar Ethernet interfaces (slave devices) into a single virtual "bonded" interface that provides higher data transfer rates, load balancing, and link redundancy. A default bond interface, named replbond, is created when Purity starts for the first time.

Array administrators cannot create or delete bond interfaces. To create or delete a bond interface, contact Pure Storage Support.

Applying a service to an Ethernet or bond interface ensures that traffic corresponding to that service is restricted to the specified interface. Supported services include 'management', 'iSCSI', and 'replication'. For example, applying the replication service to the replbond bond interface ensures that all replication traffic is channeled through that device.

View the network connection attributes, including interface, netmask, and gateway IP addresses, maximum transmission units (MTUs), and the network services (iSCSI, management, or replication) attached to each network interface.

Enable or disable an interface through Purity at any time. Disabling the interface that carries an administrative session causes that session to lose its SSH connection and its ability to connect to the controller.

Configure the network connection attributes, including the interface IP address, the gateway IP address, and the MTU. Ethernet and bond interface IP addresses are set explicitly, along with the corresponding netmasks. DHCP mode is not supported.

Manage the domain name system (DNS) domains that are configured for the array. Each DNS domain can include up to three static DNS server IP addresses. DHCP mode is not supported.

Network interfaces and DNS settings are configured through the Purity GUI (System > Configuration > Networking) and Purity CLI (purenetwork command for network interfaces, and puredns for DNS settings).
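
A minimal configuration sketch follows. The addresses, domain, and interface name are illustrative; verify the flags with purenetwork --help and puredns --help for your Purity version.

    # Assign static addressing to a controller Ethernet interface:
    purenetwork setattr ct0.eth2 --address 10.20.30.40 --netmask 255.255.255.0 --gateway 10.20.30.1

    # Enable the interface and confirm its attributes:
    purenetwork enable ct0.eth2
    purenetwork list

    # Configure the DNS domain and up to three static DNS server addresses:
    puredns setattr --domain example.com --nameservers 10.20.30.53,10.20.30.54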

Storage

Volumes

FlashArrays eliminate drive-oriented concepts such as RAID groups and spare drives that are common with disk arrays. Purity treats the entire storage capacity of all flash modules in an array as a single homogeneous pool from which it allocates storage only when hosts write data to volumes created by administrators. Creating a FlashArray volume, therefore, only requires a volume name, to be used in administrative operations and displays, and a provisioned size.

FlashArray volumes are virtual, so creating, renaming, resizing, and destroying a volume has no meaning outside the array.

Create a single volume or multiple volumes at one time. Purity administrative operations rely on volume names, so volume names must be unique within an array.

Figure 5. Storage - Details Pane - Volumes - Create Multiple Volumes

Creating a volume creates persistent data structures in the array, but does not allocate any physical storage. Purity allocates physical storage only when hosts write data. Volume creation is therefore nearly instantaneous. Volumes do not consume physical storage until data is actually written to them, so volume creation has no immediate effect on an array's physical storage consumption.

Rename a volume to change the name by which Purity identifies the volume in administrative operations and displays. The new volume name is effective immediately and the old name is no longer recognized in CLI or GUI interactions.

Resize an existing volume to change the virtual capacity of the volume as perceived by the hosts. The volume size changes are immediately visible to connected hosts. If you decrease (truncate) the volume size, Purity automatically takes an undo snapshot of the volume. The undo snapshot enters a 24-hour eradication pending period, after which time the snapshot is destroyed. During the 24-hour pending period, the undo snapshot can be viewed, recovered, or permanently eradicated through the Destroyed Volumes folder. Increasing the size of a truncated volume does not restore any data that was lost when the volume was truncated.

Copy a volume to create a new volume or overwrite an existing one. After you copy a volume, the source of the new or overwritten volume is set to the name of the originating volume.

Destroy a volume if it is no longer needed. When you destroy a volume, Purity automatically takes an undo snapshot of the volume. The undo snapshot enters a 24-hour eradication pending period. During the 24-hour pending period, the undo snapshot can be viewed, recovered, or permanently eradicated through the Destroyed Volumes folder. Eradicating a volume completely obliterates the data within the volume, allowing Purity to reclaim the storage space occupied by the data. After the 24-hour pending period, the undo snapshot is completely eradicated and can no longer be recovered.
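
The volume lifecycle described above maps to purevol subcommands along these lines. Volume names and sizes are illustrative, and subcommands such as truncate should be verified with purevol --help for your Purity version.

    purevol create --size 1T vol01            # provision; no physical storage consumed yet
    purevol create --size 500G vol02 vol03    # create several volumes at once
    purevol rename vol01 db-data              # the old name is no longer recognized
    purevol truncate --size 500G db-data      # shrink; Purity takes a 24-hour undo snapshot
    purevol copy db-data db-data-clone        # new volume whose source is db-data
    purevol destroy db-data-clone             # begins the 24-hour eradication pending period
    purevol recover db-data-clone             # undo the destroy within 24 hours...
    purevol eradicate db-data-clone           # ...or reclaim the space immediately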

Volume Snapshots

Volume snapshots are immutable, point-in-time images of the contents of one or more volumes.

Volume Snapshots vs. Protection Group Snapshots

There are two types of volume snapshots:

Volume Snapshot

A volume snapshot is a snapshot that captures the contents of a single volume. Volume snapshot tasks include creating, renaming, destroying, restoring, and copying volume snapshots.

Volume snapshot tasks are performed through the Purity GUI (Storage > Volume) or Purity CLI (purevol command).

Protection Group Volume Snapshot

A protection group volume snapshot is a volume snapshot that is created from a group of volumes that are part of the same protection group. All of the volume snapshots created from a protection group snapshot are point-in-time consistent with each other.

Protection group snapshots can be manually generated on demand or enabled to automatically generate at scheduled intervals. After a protection group snapshot has been taken, it is either stored on the local array or replicated over to a remote (target) array.

Protection group volume snapshot tasks performed through the Storage tab of the Purity GUI or purevol command of the Purity CLI are limited to copying snapshots. All other protection group snapshot tasks are performed through the Protection tab or purepgroup command.

For more information about protection groups and protection group snapshots, refer to the Protection Groups and Protection Group Snapshots section.

All volume snapshots are visible through the Volumes > Snapshots details pane of the Storage tab.

Figure 6. Storage - Details Pane - Volumes - Snapshots

Create a volume snapshot to generate a point-in-time image of the contents of the specified volume(s). Volume snapshot names append a unique number assigned by Purity to the name of the snapped volume. For example, vol01.4166. Optionally specify a suffix to replace the unique number.

The volume snapshot naming convention is VOL.NNN, where:

  • VOL is the name of the volume.

  • NNN is a unique monotonically increasing number or a manually-assigned volume snapshot suffix name.

Rename a volume snapshot suffix to change the name by which Purity identifies the snapshot in administrative operations and displays. The new snapshot suffix name is effective immediately and the old name is no longer recognized in CLI or GUI interactions.

Destroy a volume snapshot if it is no longer needed. If you destroy a volume snapshot, Purity automatically takes an undo snapshot. The undo snapshot enters a 24-hour eradication pending period, after which time the snapshot is eradicated. During the 24-hour pending period, the undo snapshot can be viewed, recovered, or permanently eradicated through the Destroyed Volumes folder.

Restore a volume from a volume snapshot to bring the volume back to the state it was in when the snapshot was taken. When a volume is restored from a volume snapshot, Purity overwrites the entire volume with the snapshot contents. After you restore a volume snapshot, the created date of the overwritten volume is set to the snapshot created date. Purity automatically takes an undo snapshot of the overwritten volume. The undo snapshot enters a 24-hour eradication pending period, after which time the snapshot is destroyed. During the 24-hour pending period, the undo snapshot can be viewed, recovered, or permanently eradicated through the Destroyed Volumes folder.

Copy a volume snapshot or protection group volume snapshot to create a new volume or overwrite an existing one. After you copy a snapshot, the source of the new or overwritten volume is set to the name of the originating volume, and the created date of the volume is set to the snapshot created date. If the copy overwrites an existing volume, Purity automatically takes an undo snapshot of the existing volume. The undo snapshot enters a 24-hour eradication pending period, after which time the snapshot is destroyed. During the 24-hour pending period, the undo snapshot can be viewed, recovered, or permanently eradicated through the Destroyed Volumes folder.
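
A sketch of these snapshot operations in the CLI follows. Names are illustrative, and the --overwrite flag is an assumption to verify with purevol --help.

    purevol snap --suffix before-upgrade vol01            # creates vol01.before-upgrade
    purevol copy vol01.before-upgrade vol01-restored      # new volume from the snapshot
    purevol copy --overwrite vol01.before-upgrade vol01   # restore vol01 in place
    purevol destroy vol01.before-upgrade                  # 24-hour eradication pending period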

Hosts

The host organizes the storage network addresses (Fibre Channel worldwide names, or WWNs, and iSCSI qualified names, or IQNs) that identify the host computer's initiators. The host communicates with the array through the Fibre Channel or iSCSI ports. The array accepts and responds to commands received on any of its ports from any of the WWNs or IQNs associated with a host.

Purity hosts are virtual, so creating, renaming, and deleting a host has no meaning outside the array.

Create hosts to access volumes on the array. A Purity host consists of a host name and one or more WWNs or IQNs. Host names must be unique within an array.

Associate one or more WWNs or IQNs with the host after it has been created. The host cannot communicate with the array until at least one WWN or IQN has been associated with it.

Fibre Channel worldwide names (WWNs) follow the naming standards set by the IEEE Standards Association. WWNs consist of eight pairs of case-insensitive hexadecimal digits, optionally separated by colons. For example, 21:00:00:24:FF:4C:C5:49.

iSCSI qualified names (IQNs) follow the naming structures set by the Internet Engineering Task Force (see RFC 3720). For example, iqn.2016-01.com.example:flasharrays-sn-a8675309.

Like hosts, WWNs and IQNs must be unique in an array. A host can be associated with multiple storage network addresses, but a storage network address can only be associated with one host.

Host WWNs and IQNs can be added or removed at any time.

Figure 7. Storage - Details Pane - Configure WWNs

Figure 8. Storage - Details Pane - Configure IQNs

Rename a host to change the name by which Purity identifies the host in administrative operations and displays. Host names are used solely for FlashArray administration and have no significance outside the array, so renaming a host does not change its relationship with host groups and volumes. The new host name is effective immediately and the old name is no longer recognized in CLI or GUI interactions.

Optionally, configure the Challenge-Handshake Authentication Protocol (CHAP) to verify the identity of the iSCSI initiators and targets to each other when they establish a connection.

By default, the CHAP credentials are not set.

Figure 9. Storage - Details Pane - CHAP Configuration

Configure the Personality setting to tune the protocol the array uses when communicating with the host's initiators for a specific operating system. For example, if the host is running the HP-UX operating system, set the personality to HP-UX. By default, the Personality setting is not set.

Delete a host if it is no longer required. Purity will not delete a host while it has connections to volumes, either private or shared. You cannot recover a host after it has been deleted.
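
For example, the following sketch creates hosts, associates addresses with them, and renames one. The WWNs, IQN, and host names are illustrative, and the setattr flag for adding WWNs is an assumption to verify with purehost --help.

    purehost create --wwnlist 21:00:00:24:FF:4C:C5:49,21:00:00:24:FF:4C:C5:4A host01
    purehost create --iqnlist iqn.2016-01.com.example:flasharrays-sn-a8675309 host02
    purehost setattr --addwwnlist 21:00:00:24:FF:4C:C5:4B host01   # flag name assumed
    purehost rename host01 oracle01
    purehost delete host02    # fails if host02 still has volume connections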

Host Guidelines

Purity will not create a host if:

  • The specified name is already associated with another host in the array.

  • Any of the specified WWNs or IQNs is already associated with an existing host in the array.

  • The creation of the host would exceed the limit of concurrent hosts, or the creation of the WWN or IQN would exceed the limit of concurrent initiators.

Purity will not delete a host if:

  • The host has private connections to one or more volumes.

Purity will not associate a WWN or IQN with a host if:

  • The creation of the WWN or IQN would exceed the maximum number of concurrent initiators.

  • The specified WWN or IQN is already associated with another host on the array.

Hosts are configured through the Purity GUI (Storage tab) and Purity CLI (purehost command).

Host Groups

A host group represents a collection of hosts with common connectivity to volumes.

Purity host groups are virtual, so creating, renaming, and deleting a host group has no meaning outside the array.

Create a host group if several hosts share access to the same volume(s). Host group names must be unique within an array.

After you create a host group, add hosts to the host group and then establish connections between the volumes and the host group.

When a volume is connected to a host group, it is assigned a logical unit number (LUN), which all hosts in the group use to communicate with the volume. If a LUN is not manually specified when the connection is first established, Purity automatically assigns a LUN to the connection.

Once a connection has been established between a host group and a volume, all of the hosts within the host group are able to access the volume through the connection. These connections are called shared connections because the connection is shared between all of the hosts within the host group.

Rename a host group to change the name by which Purity identifies the host group in administrative operations and displays. Renaming a host group does not change its relationship with hosts and volumes. The new host group name is effective immediately and the old name is no longer recognized in CLI or GUI interactions.

Delete a host group if it is no longer required. You cannot recover a host group after it has been deleted.
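
A sketch of the host group workflow follows. Host, host group, and volume names are illustrative; verify the flags with purehgroup --help.

    purehgroup create --hostlist host01,host02 hg-prod   # group hosts that share volume access
    purehgroup connect --vol vol01 hg-prod               # shared connection for all member hosts
    purehgroup rename hg-prod hg-oracle
    purehgroup delete hg-oracle   # fails while hosts or volume connections remain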

Host Group Guidelines

Purity will not create a host group if:

  • A host group with the specified name already exists in the array.

  • The creation of the host group would exceed the limit of concurrent host groups.

Purity will not delete a host group if:

  • Any hosts are associated with the host group or any volumes are connected to it.

A host cannot be added to a host group if:

  • The host is associated with another host group. A host can only be associated with one host group at a time.

  • The host has a private connection to a volume associated with the host group.

Host groups are configured through the Purity GUI (Storage tab) and Purity CLI (purehgroup command).

Host-Volume Connections

For a host to read and write data on a FlashArray volume, the two must be connected. Purity only responds to I/O commands from hosts to which the volume addressed by the command is connected; it ignores commands from unconnected hosts.

Hosts are connected to volumes through private or shared connections. Private and shared connections are functionally identical: both make it possible for hosts to read and write data on volumes. They differ in how administrators create and delete them.

Private Connections

Connecting a volume to a host establishes a private connection between the volume and the host. You can connect multiple volumes to a host. Likewise, a volume can be connected to multiple hosts.

Disconnecting a volume from a host, or vice versa, breaks the private connection between the volume and host. Other shared and private connections are unaffected.

Shared Connections

Connecting a volume to a host group establishes a shared connection between the volumes and all of the hosts within that host group. You can connect multiple volumes to a host group. Likewise, a volume can be connected to multiple host groups.

Disconnecting a volume-host group connection breaks the shared connection between the volume and all of the hosts within the host group. Other shared and private connections are unaffected.

Breaking Private and Shared Connections

Breaking a connection between a host and volume causes the host to lose access to the volume. There are three ways in which a host-volume connection can be broken:

  • Break the private connection between a volume and a host, which causes the host to lose access to the volume. Volume-host connections are broken when you disconnect a volume from its host, or disconnect a host from the volume.

  • Break the shared connection between a volume and a host group, which disconnects the volume and all of the host group’s member hosts. Other shared and private connections to the volume are unaffected. Volume-host group connections are broken when you disconnect a volume from its host group, or disconnect a host group from the volume.

  • Remove a host from a host group, which breaks the connections between the host and all volumes with shared connections to the host group. The removed host’s private connections are unaffected.

Logical Unit Number (LUN) Guidelines

Each host-volume connection has three components: a host, a volume, and a logical unit number (LUN) used by the host to address the volume. Purity supports LUNs in the [1...4095] range.

Hosts establish connections to volumes either through private or shared (via host groups) connections. A host can have only one connection, private or shared, to a given volume at a time.

You can manually specify a LUN anywhere in the [1...4095] range for either private or shared connections. If you do not manually specify a LUN, Purity follows these guidelines to automatically assign a LUN to the connection:

  • For private connections, Purity starts at LUN 1 and counts up to the maximum LUN 4095, assigning the first available LUN to the connection.

  • For shared connections, Purity starts at LUN 254 and counts down to the minimum LUN 1, assigning the first available LUN to the connection. If all LUNs in the [1...254] range are taken, Purity starts at LUN 255 and counts up to the maximum LUN 4095, assigning the first available LUN to the connection.

A host cannot be associated with a host group if it has a private connection to a volume associated with the same host group.

The LUN can be changed after the connection has been created. If you change a LUN, the volume may become temporarily disconnected from the host to which it is connected.

Connection Guidelines

Purity will not establish a (private) connection between a volume and a host if:

  • An unavailable LUN was specified.

  • The volume is already connected to the host, either through a private or shared connection.

Purity will not establish a (shared) connection between a volume and host group if:

  • An unavailable LUN was specified.

  • The volume is already connected to the host group.

  • The volume is already connected to a host associated with the host group.

Host-volume connections and LUN assignments are performed through the Purity GUI (Storage tab) and Purity CLI (purehgroup connect, purehost connect, and purevol connect commands).
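
For example, the following sketch establishes private and shared connections with and without explicit LUNs. Names and LUN values are illustrative.

    purevol connect --host host01 --lun 12 vol01   # private connection with an explicit LUN
    purevol connect --host host01 vol02            # Purity assigns the first free LUN, counting up from 1
    purehgroup connect --vol vol03 hg-prod         # shared connection; LUNs assigned from 254 down
    purevol disconnect --host host01 vol02         # break the private connection
    purehost list --all                            # review the resulting connections and LUNs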

Connections

The Connections view displays connectivity details between the Purity hosts and the array ports.

The Host Connections pane displays a list of hosts, the connectivity status of each host, and the number of initiator ports associated with each host. Connectivity statuses range from "None", where the host does not have any paths to any target ports, to "Redundant", where the host has the same number of paths from every initiator to every target port on both controllers.

Figure 10. Connections - Host Connections

The Target Ports pane displays the connection mappings between each array port and initiator port. Each array port includes the following connectivity details: associated Fibre Channel Worldwide Name (WWN) or iSCSI Qualified Name (IQN) address, failover status, and communication speed. A check mark in the Failover column indicates that the port has failed over to the corresponding port pair on the primary controller.

Figure 11. Connections - Target Ports

Host connections and target ports are displayed through the Purity GUI (select System > Connections) and Purity CLI (pureport list, purehost list --all, and purevol list --all commands).

Protection Groups and Protection Group Snapshots

Protection groups support the Purity FlashRecover feature — a policy-based data protection and disaster recovery solution.

Protection groups and protection group snapshots are configured and managed through the Purity GUI (Protection tab) and Purity CLI (purepgroup command).

Protection Groups

A protection group defines a set of volumes, hosts, or host groups (called members) that are protected together through snapshots with point-in-time consistency across the member volumes. The members within the protection group have common data protection requirements and share the same snapshot, replication, and retention schedules.

A single protection group can contain multiple members of the same type. Likewise, hosts, host groups, and volumes can each be associated with multiple protection groups.

Each protection group includes the following components:

  • Source array. An array from which Purity generates a point-in-time snapshot of its protection group volumes. Depending on the protection group schedule settings, the snapshot data is either retained on the source array or replicated over to and retained on one or more target arrays.

  • Target arrays. One or more arrays that receive snapshot data from the source array. Target arrays are only required if snapshot data needs to be replicated over to remote arrays.

  • Members. Volumes, hosts, or host groups that have common data protection requirements and share the same snapshot/replication frequency and retention policies. Only members of one type can be added to a protection group.

    You can add volumes, hosts, and host groups to multiple protection groups. Likewise, a protection group can include multiple volumes, hosts, or host groups. Protection groups can also contain overlapping volumes, hosts, and host groups. In such cases, Purity counts the volume once and ignores all other occurrences of the same member.

  • Schedules. Each protection group includes a snapshot schedule and a replication schedule.

    Configure and enable the snapshot schedule to generate snapshots and retain them on the source array.

    Configure and enable the replication schedule to generate snapshots on the source array, immediately replicate the snapshots to the target arrays, and retain those snapshots on the target arrays. This type of replication is known as snapshot-based asynchronous replication. When replicating, Purity only transfers the incremental data between two snapshots. Furthermore, during the replication data transfer process, data deduplicated on the source array is not sent again if the same data was previously sent to the same target array.

Figure 12. Protection Group Schedules

Create a protection group to add members (volumes, hosts, or host groups) that share common data protection requirements. Pure Storage protection groups are virtual, so creating, renaming, and destroying a protection group has no meaning outside the array. Protection group names must be unique within an array.

Copy a protection group to restore the state of the volumes within a protection group to a previous protection group snapshot. The restored volumes are added as real volumes to a new or existing protection group. Note that restoring volumes from a protection group snapshot does not automatically expose the restored volumes to hosts and host groups.

Rename a protection group to change the name by which Purity identifies the protection group in administrative operations and displays. When you rename a protection group, the name change is effective immediately and the old name is no longer recognized by Purity.

Destroy a protection group if it is no longer needed.

Destroying a protection group implicitly destroys all of its snapshots. Once a protection group has been destroyed, all snapshot and replication processes for the protection group stop and the destroyed protection group begins its 24-hour eradication pending period.

When the 24-hour eradication pending period has lapsed, Purity starts reclaiming the physical storage occupied by the protection group snapshots.

During the eradication pending period, you can recover the protection group to bring the group and its content back to its original state, or manually eradicate the destroyed protection group to reclaim physical storage space occupied by the destroyed protection group snapshots.

Once reclamation starts, either because you have manually eradicated the destroyed protection group, or because the eradication pending period has lapsed, the destroyed protection group and its snapshot data can no longer be recovered.

The Time Remaining column displays the 24-hour eradication pending period in hh:mm format, which begins at 24:00 and counts down to 00:00. When the eradication pending period reaches 00:00, Purity starts the reclamation process. The Time Remaining number remains at 00:00 until the protection group or snapshot is completely eradicated.
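
A sketch of the protection group lifecycle described above follows. Names are illustrative, and the target and schedule flags are assumptions to verify with purepgroup --help for your Purity version.

    purepgroup create --vollist vol01,vol02 pg-db   # members must all be one type (volumes here)
    purepgroup setattr --targetlist array02 pg-db   # replicate snapshots to array02 (flag assumed)
    purepgroup snap --suffix manual pg-db           # on-demand snapshot named pg-db.manual
    purepgroup destroy pg-db                        # starts the 24-hour eradication pending period
    purepgroup recover pg-db                        # or: purepgroup eradicate pg-db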

Space Consumption Considerations

Consider space consumption when you configure the snapshot, replication, and retention schedules.

The amount of space consumed on the source array depends on how many snapshots you want to generate, how frequently you want to generate the snapshots, how many snapshots you want to retain, and how long you want to retain the snapshots.

Likewise, the amount of space consumed on the target array depends on how many snapshots you want to replicate, how frequently you want to replicate the snapshots, how many replicated snapshots you want to retain, and how long you want to retain them.

Protection Group Snapshots

Protection group snapshots capture the content of all volumes in the specified protection group at a single point in time. The snapshot is an immutable image of the volume data at that instant. The volumes are either direct members of the protection group or connected to one of its member hosts or host groups.

Generate a protection group snapshot to create snapshots of the volumes within the protection group.

Protection group snapshots can be generated automatically (using schedules) or on-demand.

The volumes within a protection group snapshot can be copied as-needed to create live, host-accessible volumes.

The protection group snapshot naming convention is PGROUP.NNN, where:

  • PGROUP is the name of the protection group.

  • NNN is a unique monotonically increasing number or a manually-assigned protection group snapshot suffix name.

The protection group volume snapshot naming convention is PGROUP.NNN.VOL, where:

  • PGROUP is the name of the protection group.

  • NNN is a unique monotonically increasing number or a manually-assigned protection group snapshot suffix name.

  • VOL is the name of the volume member.

If you are viewing replicated snapshots on a target array, the snapshot name begins with the name of the source array from where the snapshot was taken.

Destroy a protection group snapshot if it is no longer required. Destroying a protection group snapshot destroys all of its protection group volume snapshots, thereby reclaiming the physical storage space occupied by its data.

Destroyed protection group snapshots follow the same eradication pending behavior as destroyed protection groups. If you destroy a protection group snapshot, Purity automatically takes an undo snapshot. The undo snapshot enters a 24-hour eradication pending period, after which time the snapshot is eradicated. During the 24-hour pending period, the undo snapshot can be viewed, recovered, or permanently eradicated.

Protection group volume snapshots cannot be destroyed individually. A protection group volume snapshot can only be destroyed by destroying the protection group snapshot to which it belongs.

Figure 13. Storage - Details Pane - Volumes - Protection Group Volume Snapshot
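
For instance, to turn one volume image inside a protection group snapshot into a live, host-accessible volume, copy it with purevol. The snapshot and volume names are illustrative and follow the PGROUP.NNN.VOL convention described above.

    # pg-db.1234 is a protection group snapshot; vol01 is one of its member volumes:
    purevol copy pg-db.1234.vol01 vol01-restored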

Users and Security

The current Purity release comes with a single local administrative account named pureuser. The account is password-protected, and may alternatively be accessed using a public-private key pair. User configuration includes changing the pureuser password and public key.

The Pure Storage REST API uses authentication tokens to create sessions. All Purity users can generate their own API token and view only their own token.

User configuration and API token generation are performed through the Purity GUI (select System > Users) and Purity CLI (pureadmin command).
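
For example, a hedged sketch of API token management with pureadmin follows; the flags are assumptions to verify with pureadmin --help for your Purity version.

    pureadmin create --api-token          # generate an API token for the logged-in user
    pureadmin list --api-token --expose   # display the token value for use with the REST API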

Directory Service

Additional Purity accounts can be enabled by integrating the array with an existing directory service, such as Microsoft Active Directory or OpenLDAP, allowing multiple users to log in and use the array and providing role-based access control.

Configuring and enabling the Pure Storage directory service changes the array to use the directory when performing user account and permission level searches. If a user is not found locally, the directory servers are queried.

Directory service configuration is performed through the Purity GUI (System > Users) and Purity CLI (pureds command).
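
A configuration sketch, assuming an Active Directory domain, follows. The server URI, distinguished names, and subcommands are illustrative assumptions; verify them with pureds --help for your Purity version.

    pureds setattr --uri ldaps://dc1.example.com --base-dn "DC=example,DC=com" --bind-user "CN=svc-purity,CN=Users,DC=example,DC=com"
    pureds test     # verify connectivity and the bind credentials
    pureds enable   # begin honoring directory accounts for login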

SSL Certificate

Purity creates a self-signed certificate and private key when the system is started for the first time.

SSL certificate configuration includes changing certificate attributes, creating new self-signed certificates to replace existing ones, constructing certificate signing requests, importing certificates and private keys, and exporting certificates.

SSL certificate configuration is performed through the Purity GUI (System > Configuration > SSL Certificate) and Purity CLI (purecert command).

Industry Standards

Purity includes the Pure Storage Storage Management Initiative Specification (SMI-S) provider.

The SMI-S initiative was launched by the Storage Networking Industry Association (SNIA) to provide a unifying interface for storage management systems to administer multi-vendor resources in a storage area network. The SMI-S provider in Purity allows FlashArray administrators to manage the array using an SMI-S client over HTTPS.

SMI-S client applications optionally use the Service Location Protocol (SLP) as a directory service to locate resources.

The SMI-S provider is optional and must be enabled before its first use. The SMI-S provider is enabled and disabled through the Purity GUI (System > Configuration > SMI-S) and Purity CLI (puresmis command).
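
Enabling the provider is a single step, sketched below; the subcommand names are assumptions to verify with puresmis --help.

    puresmis enable   # start the SMI-S provider (puresmis disable turns it off)
    puresmis list     # confirm the provider status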

For detailed information on the Pure Storage SMI-S provider, refer to the Pure Storage SMI-S Provider Guide in the Pure Storage Knowledge Base.

For general information on SMI-S, refer to the Storage Networking Industry Association (SNIA) website.

Troubleshooting and Logging

Purity continuously logs a variety of array activities, including performance summaries, hardware and operating status reports, and administrative actions that modify the array. For certain array state changes and events that are potentially significant to array operation, Purity immediately generates alert messages and transmits them to one or more user-specified destinations for immediate action.

The FlashArray troubleshooting mechanisms assume that Pure Storage Support can actively participate in helping organizations maintain “healthy” arrays. However, for organizations where operating procedures do not permit outside connections to equipment, troubleshooting reports can be directed to internal email addresses or displayed on a GUI or CLI console.

Alerts, Audit Trails, and User Session Logs

Alert, audit record, and user session messages are retrieved from a list of log entries that are stored on the array.

Figure 14. Alerts, Audit Trails, and User Session Logs

To conserve space, Purity stores a limited number of log entries on the array. Older entries are deleted from the log as new entries are added. To retain the complete list of messages, configure the Syslog Server feature to forward all messages to your remote server.

Alerts, audit trails, and user session logs are displayed through the Purity GUI (Messages tab) and the Purity CLI (puremessage command).

Alerts

An alert is triggered when there is an unexpected change to the array or to one of the Purity hardware or software components. Alerts are categorized by severity level as critical, warning, or informational.

Alerts are displayed in the Purity GUI and Purity CLI. Alerts are also logged and transmitted to Pure Storage Support via the phone home facility. Furthermore, alerts can be sent as email messages to designated addresses and as Simple Network Management Protocol (SNMP) traps to designated SNMP managers.

Phone Home Facility

The phone home facility provides a secure direct link between the array and the Pure Storage Support team. The link is used to transmit log contents and alert messages to the Pure Storage Support team.

If the phone home facility is disabled, the log contents are delivered when the facility is next enabled or when the user manually sends the logs through the Purity GUI or Purity CLI.

Optionally configure the proxy host for HTTPS communication.

The phone home facility is managed through the Purity GUI (System > Configuration > Support Connectivity) and Purity CLI (purearray command).

Proxies are configured through the Purity GUI (System > Configuration > Networking) and Purity CLI (purearray setattr --proxy command).
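
For example, using the --proxy flag named above (the proxy host and port are illustrative):

    purearray setattr --proxy https://proxy.example.com:8080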

Email

Alerts can be sent to designated email recipients. The list includes the built-in flasharray-alerts@purestorage.com address, which cannot be deleted. Individual email addresses can be added to and removed from the list, and transmission of alert messages to specific addresses can be temporarily enabled or disabled without removing them from the list.

The list of email alert recipients is managed through the Purity GUI (System > Configuration > Alerts) and Purity CLI (purealert command).
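
For example (the address is illustrative):

    purealert create ops@example.com    # add a recipient
    purealert disable ops@example.com   # pause delivery without removing the address
    purealert list                      # review recipients and their enabled state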

SNMP Traps

Alerts can be sent to designated SNMP trap managers. For each alert it generates, Purity sends an SNMP trap message to the designated SNMP manager systems.

The list of SNMP trap managers is managed through the Purity GUI (System > Configuration > SNMP) and Purity CLI (puresnmp command).

Audit Trail

The audit trail is a chronological history of the Purity GUI, Purity CLI, or REST API operations that a user has performed to modify the configuration of the array. For example, changing the size of a volume, deleting a host, changing the replication frequency of a protection group, and associating a WWN with a host each generate an audit record.

User Session Logs

User session logs represent user login and authentication events performed in the Purity GUI, Purity CLI, and REST API. For example, logging in to and out of the Purity GUI, attempting to log in to the Purity CLI with an invalid password, or opening a Pure Storage REST API session generates a user session log entry.

SNMP Agent and Trap Manager

Purity administers the connections to Simple Network Management Protocol (SNMP) managers. FlashArrays integrate with SNMP data center management frameworks through an SNMP agent or through the use of SNMP traps.

Communication with SNMP managers is secure; the array administrator can specify the SNMP authentication and encryption parameters to be used when communicating with each designated SNMP manager. The Purity SNMP agent and trap manager support SNMP protocol versions v2c and v3.

The built-in SNMP agent in Purity, named localhost, responds to SNMP information retrieval requests (Get, Get Next, Get Bulk) made by the SNMP managers that are in the same SNMP community as the array. The agent cannot be deleted or renamed. The agent responds to GET-type requests, returning values for the purePerformance information block, or individual variables within it, depending on the type of request issued.

The FlashArray Management Information Base (MIB) describes the variables for which values can be requested.

When the SNMP trap manager is configured, Purity generates and transmits SNMP trap messages, including alerts, to the designated managers running in hosts.

The SNMP agent and list of trap managers are managed through the Purity GUI (System > Configuration > SNMP) and Purity CLI (puresnmp command).

Download the MIB through the Purity GUI (System > Configuration > SNMP).
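
A sketch of designating an SNMP v2c trap manager follows. The manager host, community string, and flag names are assumptions to verify with puresnmp --help; v3 managers take authentication and encryption parameters instead of a community string.

    puresnmp create --host 10.1.1.50 --community public snmp-mgr1
    puresnmp list   # shows snmp-mgr1 alongside the built-in localhost agent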

Remote Assist Facility

In many cases, the most efficient way to service an array or diagnose problems is through direct intervention by a Pure Storage Support representative.

The Remote Assist facility enables Pure Storage Support to communicate with an array, effectively establishing an administrative session for service and diagnosis. Optionally configure the proxy host for HTTPS communication.

Remote assist sessions are controlled by the array administrator, who opens a secure channel between the array and Pure Storage Support, making it possible for a technician to log in to the array. The administrator can check session status and close the channel at any time.

Remote assist sessions are opened and closed through the Purity GUI (System > Support Connectivity) and Purity CLI (purearray remoteassist command).

Proxies are configured through the Purity GUI (System > Configuration > Networking) and Purity CLI (purearray setattr --proxy command).

Syslog Logging Facility

The Purity syslog logging facility generates messages for events deemed major within the FlashArray and forwards them to remote servers via the TCP or UDP protocol. Purity generates syslog messages for three types of events:

  • Alerts (purity.alert)

  • Audit Trails (purity.audit)

  • Tests (purity.test)

The syslog server output location is configured through the Purity GUI (System > Configuration > Syslog Server) and Purity CLI (purearray setattr command).
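
For example, to forward messages to a remote server over TCP (the flag name and URI form are assumptions to verify with purearray setattr --help for your Purity version):

    purearray setattr --syslogserver tcp://syslog.example.com:514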