
FlashArray Snapshots And Replication: MongoDB Replica Set Disaster Recovery


Introduction

Pure Storage® FlashArray™ provides multiple high availability methods to protect data. ActiveCluster™ delivers a near-zero Recovery Point Objective (RPO) and Recovery Time Objective (RTO). If RPO and RTO are less critical, asynchronous data replication is also available. FlashArray also offers snapshots and volume copies. By coupling advanced FlashArray data management functions with MongoDB native replication, it is possible to protect against data loss and to reduce downtime and recovery time.

In this article, we examine MongoDB replica set implementation and disaster recovery scenarios using MongoDB Replication (Database Native) and FlashArray Asynchronous Replication.

Recovery Workflow

In the event of a failure, the recovery workflow consists of locking the MongoDB node for an application consistent snapshot, copying the FlashArray volume or FlashArray volume snapshot, unlocking the database, and adding the recovered node or nodes to the replica set. See Figure 1. This method significantly reduces the time required by the MongoDB replication process to synchronize the nodes.

fig1.png

Figure 1.
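The workflow in Figure 1 can be condensed into the short sketch below. It is a minimal outline only, assuming the mongo shell is installed on the database hosts, the FlashArray CLI is reachable over SSH, and the array, host, and volume names (pureuser@flasharray, TKmdb01, TKmdb02, primary-host, recovered-host) are placeholders:

# 1. Lock the database for an application consistent copy (on the source node).
mongo --eval 'db.fsyncLock()'
# 2. Copy the source volume over the volume of the node being recovered (FlashArray CLI over SSH).
ssh pureuser@flasharray purevol copy --overwrite TKmdb01 TKmdb02
# 3. Unlock the database.
mongo --eval 'db.fsyncUnlock()'
# 4. Add the recovered node back to the replica set (run against the primary).
mongo --host primary-host:27017 --eval 'rs.add("recovered-host:27017")'

Each of these steps is described in detail in the sections that follow.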

MongoDB Replication (Database Native)

MongoDB has a built-in high availability and redundancy mechanism called a replica set, which is a group of mongod instances that maintain the same data set. A replica set allows transparent software and hardware upgrades and protects against system failures. In MongoDB, replica set data is asynchronously replicated among the replica set members. At a minimum, three hosts are required to construct a replica set, where a single node is elected as the primary. By default, the primary node is responsible for processing read and write requests. For additional details see Write Concern for Replica Sets and Read Preference. In the event of a primary node failure, a new primary is elected. A three-node replica set can survive a single node failure. If two nodes become unavailable, the database is still reachable but in read-only mode. The replica set redundancy level depends on the number of nodes. Additional information is available on the MongoDB website. For customers deploying MongoDB on shared storage such as FlashArray, additional hardware-based protection is also possible to increase database availability.
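For example, an application (or the mongo shell) typically connects with a connection string that names the replica set members, the replica set, and, optionally, a read preference; the host names and replica set name below are placeholders:

mongo "mongodb://hostA:27017,hostB:27017,hostC:27017/admin?replicaSet=rs0&readPreference=primaryPreferred"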

The three-node MongoDB replica set environment with database native replication is shown in Figure 2.

fig2.png

Figure 2.

The recovery process depends on which site and nodes fail; the loss of one node, multiple nodes, or an entire site each requires a different recovery procedure. Because MongoDB replica set behavior varies with the number of surviving nodes, we will explore multiple failure scenarios and the corresponding recovery procedures.

Failure And Recovery Scenarios

Site A: Single Node Failure

When one of the replica set members on Site A fails, database operations are unaffected; however, MongoDB is no longer redundant. When the node failure condition is corrected, MongoDB will initiate a copy operation to duplicate the database files onto the newly available node. Depending on the database size, the rate of data change, and the length of the outage, this copy may take minutes to several days. However, with FlashArray's nearly instantaneous volume copy function, the node can be quickly restored, thereby returning the replica set to a redundant state.

Restoring Failed Node Using FlashArray Volume Copy Function
Application Consistent FlashArray Volume Copy

To recover MongoDB database files and replica set node, follow the steps below:

  1. MongoDB Node (mongo shell): lock the database on Site A with the db.fsyncLock() command. See Figure 3.

If the surviving node on Site A is the primary, all database operations will be halted.

fig3.png

Figure 3.
 

2. FlashArray: Copy Database Volume. 

Using Graphical User Interface (GUI)

To copy volume using FlashArray Graphical User Interface select (see Figure 4):

Storage ➤ Volumes ➤ <Volume> ➤ '⋮' (vertical ellipsis) ➤ Copy; in the Copy Volume dialog, provide a volume name and select Overwrite.

fig4.png
 

Figure 4.

Using Command Line Interface (CLI)

To copy a volume from an existing volume, use the purevol command. The command syntax is:

purevol copy [--overwrite] SOURCE TARGET

For instance:

purevol copy --overwrite TKmdb01 TKmdb02

This command copies the TKmdb01 volume to TKmdb02.

3. MongoDB Node (mongo shell): Unlock the database with the db.fsyncUnlock() command. See Figure 5.

fig5.png

Figure 5.

4. Connect the volume copied in step 2 to the new MongoDB node. See Figure 6.

Using Graphical User Interface (GUI)

To connect the volume to the host, select:

Storage ➤ Volumes ➤ <Volume> ➤ Connected Hosts pane '⋮' (vertical ellipsis) ➤ Connect; in the Connect Hosts dialog, select the desired host and click the Connect button.


fig6.png

Figure 6.

Command Line Interface (CLI)

To connect a volume to a host, use the purevol command. The command syntax is:

purevol connect --vol <volume_name> HOST

For instance:

purevol connect --vol TKmdb02 sn1-c220-f11-01-nvme

This command connects the TKmdb02 volume to the sn1-c220-f11-01-nvme host.

5. Mount Volume

Mount the new volume on the directory defined in /etc/mongod.conf by the dbPath setting. Ensure that the mongod user and group have ownership of and permissions on the database volume.
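For example, assuming the copied volume appears as the multipath device /dev/mapper/TKmdb02 and dbPath points to /var/lib/mongo (both names are placeholders; adjust them to your environment):

# Mount the copied database volume on the MongoDB data directory.
mount /dev/mapper/TKmdb02 /var/lib/mongo
# Give the mongod user and group ownership of the database files.
chown -R mongod:mongod /var/lib/mongo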

6. Add New Host To Replica Set

Follow the MongoDB documentation to add the new host to an existing replica set. A simplified procedure consists of the following steps (on the new replica set member):

  • Edit /etc/mongod.conf to enable the replica set by uncommenting the following lines:

replication:

  replSetName: "<replica_set_name>"

  • Start the mongod process.

For systemd-based systems use:

systemctl start mongod

for System V:

sudo service mongod start

  • On the primary node (mongo shell), add the new host to the replica set using the rs.add("host:port") command. See Figure 7.

7. Verify the replica set status with the rs.status() and rs.printSlaveReplicationInfo() commands.
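These checks can also be run non-interactively from the operating system shell, for example:

# Print the name and state of each replica set member.
mongo --quiet --eval 'rs.status().members.forEach(m => print(m.name + " " + m.stateStr))'
# Print replication lag information for the secondaries.
mongo --quiet --eval 'rs.printSlaveReplicationInfo()'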

Crash Consistent FlashArray Volume Copy

A crash consistent snapshot does not require database locking and unlocking: the db.fsyncLock() (step 1) and db.fsyncUnlock() (step 3) commands are not necessary. The remaining steps are identical to the steps for an Application Consistent FlashArray Volume Copy.

Site A: Entire Site Failure

When Site A fails, MongoDB on Site B switches to read-only mode. Returning to normal database operations as quickly as possible may require creating a new replica set on Site B. 

If the name of the replica set is embedded into the application database driver, it may be beneficial to use the same replica set name as on the production site.

To reconfigure the existing MongoDB instance, do the following:

  1. Connect to the database using the mongo shell.

  2. Drop the 'local' database.

use local;

db.dropDatabase();

  3. Stop the mongod process.

For systemd-based systems use:

systemctl stop mongod

for System V:

sudo service mongod stop

  4. Modify /etc/mongod.conf to enable the replica set by uncommenting the replication and replSetName lines, as shown previously.

Follow the MongoDB documentation to create a replica set; a simplified procedure consists of the following steps:

5. Start the mongod process.

For systemd-based systems use:

systemctl start mongod

for System V:

sudo service mongod start

6. Initialize the replica set.

Using the mongo shell, execute the rs.initiate() command. See Figure 8.

fig8.png

Figure 8.

7. Add hosts to the replica set.

Adding hosts to the replica set is described previously in step 6 of the Application Consistent FlashArray Volume Copy section.
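When initializing the replica set (step 6 above), rs.initiate() can also be given an explicit configuration document so that the first member does not need to be added separately. A minimal sketch, in which the replica set name and host are placeholders:

mongo --eval 'rs.initiate({_id: "rs0", members: [{_id: 0, host: "siteb-host1:27017"}]})'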

Site B: Entire Site Failure

In the event of a Site B failure, or the failure of a node on Site B, the database remains accessible but is no longer redundant. Once the site or node functionality has been restored, the database from Site A will need to be replicated. The database replication occurs over the network and, depending on the database size, rate of data change, bandwidth, and server load, may take anywhere from minutes to hours or days. This time can be greatly reduced by using built-in FlashArray asynchronous replication. See the FlashArray Asynchronous Replication section below.

FlashArray Asynchronous Replication

A typical MongoDB environment with FlashArray and asynchronous replication is illustrated in Figure 9. The process of configuring asynchronous replication between FlashArrays is described in the Asynchronous Replication Configuration And Best Practices Guide.

fig9.png

Figure 9.

In the case depicted in Figure 9, the production site consists of three MongoDB replica set nodes. All replica set members are connected to FlashArray and each node has a dedicated volume. The database volumes on FlashArray are also members of a FlashArray Protection Group. Protection Groups provide a convenient means of managing and replicating multiple volumes. All members of the Protection Group are replicated to another FlashArray on a scheduled basis. In this case, a single protection group includes all three MongoDB database disks and is configured for asynchronous FlashArray-based replication to the disaster recovery (DR) site.
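As a rough sketch, such a protection group could be created and pointed at the DR array from the CLI along the following lines. The group, volume, and target array names are placeholders, and the exact option names and schedule syntax should be verified against the CLI reference for your Purity version and the Asynchronous Replication Configuration And Best Practices Guide:

# Create a protection group containing the three MongoDB database volumes (names are placeholders).
purepgroup create --vollist TKmdb01,TKmdb02,TKmdb03 mongodb-pg
# Replicate the group to the DR array (placeholder target name).
purepgroup setattr --targetlist dr-array mongodb-pg
# Replicate on a schedule, for example every hour (verify option names for your Purity version).
purepgroup schedule --replicate-frequency 3600 mongodb-pg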

The number of replicated disks depends on the business requirements and the MongoDB replica set configuration.

There are also three MongoDB hosts on the disaster recovery site. The replicated FlashArray volumes on the DR site are connected to the standby nodes but are not mounted, and the mongod process is not running on them.

Failure And Recovery Scenarios

Production Site: Single Node Failure

A single node failure does not affect MongoDB operations; however, the replica set is no longer redundant. The failed node recovery process is the same as the process described in the Site A: Single Node Failure section.

Production Site: Multiple Nodes Failure

In a three-node MongoDB replica set, a two-node failure will force the database to switch to read-only mode. The failed node recovery process is the same as the process described previously in Site A: Single Node Failure, where the procedure must be repeated for each failed node.

Production Site Failure

In the event of a production site failure, the FlashArray and MongoDB hosts become unavailable; however, the database volumes on the disaster recovery site contain the database data. The most recent database data is stored on the volume connected to the primary node. After an entire site failure it may not be possible to determine which node was the primary. However, setting up a cron job that periodically executes the rs.status().members.find(r=>r.state===1).name command will record the name of the last known primary node. Once the name of the primary is available, the corresponding FlashArray volume can be easily determined. With the replicated volume identified, the recovery process on the DR site consists of the steps described in the following sections.
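A minimal sketch of the primary-tracking cron job mentioned above is shown here. It assumes the mongo shell is installed on a monitoring host that can reach the replica set; the schedule, host name, and log path are placeholders:

# Every 5 minutes, record the name of the current replica set primary (crontab entry).
*/5 * * * * mongo --quiet --host hostA:27017 --eval 'rs.status().members.find(r => r.state === 1).name' >> /var/log/mongodb_primary.log 2>&1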

Prepare Replica Set Primary

1. Copy a volume from the replicated snapshot on the target FlashArray.

Using the Graphical User Interface (GUI), select (see Figure 10):

Storage ➤ Protection Groups ➤ <Protection Group> ➤ Volume Snapshots ➤ Copy Snapshot ➤ Name ➤ Copy [optionally select ‘Overwrite’]

fig10.png


Figure 10.

Using Command Line Interface (CLI)

purevol copy SOURCE TARGET

For instance:

purevol copy sn1-x90r3-f12-28:TKMongoDB.549 TKMongoDB01

This command copies the TKMongoDB.549 snapshot, replicated from the sn1-x90r3-f12-28 array, to a volume called TKMongoDB01.

2. Connect and mount the new volume on the selected host (for instance, host A). The process of connecting the volume is described in step 4 of the Site A: Single Node Failure section.

3. Start the mongod process as described in step 6 of the Site A: Single Node Failure section.

4. Drop the local database as described in step 2 of the Site A: Entire Site Failure section.

5. Stop the mongod process as described in step 3 of the Site A: Entire Site Failure section.

6. Edit /etc/mongod.conf and enable replication as described in step 4 of the Site A: Entire Site Failure section.

If the name of the replica set is embedded into the application database driver, it may be beneficial to use the same replica set name as on the production site.

7. Start the mongod process and initialize replication as described in steps 5 and 6 of the Site A: Entire Site Failure section. This node becomes the new replica set primary.
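Taken together, steps 1 through 7 might look like the following sketch on the DR host chosen as the new primary. The array, volume, host, and path names are placeholders, dbPath is assumed to be /var/lib/mongo, and the FlashArray CLI is assumed to be reachable over SSH:

# 1. Copy the replicated snapshot to a new volume on the DR FlashArray.
ssh pureuser@dr-array purevol copy sn1-x90r3-f12-28:TKMongoDB.549 TKMongoDB01
# 2. Connect the new volume to this host, then mount it on the dbPath directory.
ssh pureuser@dr-array purevol connect --vol TKMongoDB01 dr-hostA
mount /dev/mapper/TKMongoDB01 /var/lib/mongo
chown -R mongod:mongod /var/lib/mongo
# 3-5. Start mongod, drop the local database, then stop mongod.
systemctl start mongod
mongo --eval 'db.getSiblingDB("local").dropDatabase()'
systemctl stop mongod
# 6. Enable replication in /etc/mongod.conf (replication: / replSetName: "rs0"), then:
# 7. Start mongod and initialize the replica set; this node becomes the new primary.
systemctl start mongod
mongo --eval 'rs.initiate()'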

Create Secondaries 

  1. Stop the mongod process and unmount the database volume on the primary.

  2. Using FlashArray, take a snapshot of the volume unmounted in step 1.

  3. Remount the database volume and start the mongod process on the primary.

  4. Copy the snapshot, overwriting the volume connected to the secondary.

  5. Add the secondary to the replica set.

  6. Repeat steps 4 and 5 for additional replica set members.
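A condensed sketch of this procedure follows, assuming one secondary; the array, volume, host, and path names are placeholders and the FlashArray CLI is reached over SSH:

# On the primary: quiesce the database and capture a seed snapshot of its volume.
systemctl stop mongod
umount /var/lib/mongo
ssh pureuser@dr-array purevol snap --suffix seed TKMongoDB01
mount /dev/mapper/TKMongoDB01 /var/lib/mongo
systemctl start mongod
# For each secondary: overwrite its volume from the seed snapshot, mount it and start mongod
# on that host, then add the member from the primary.
ssh pureuser@dr-array purevol copy --overwrite TKMongoDB01.seed TKMongoDB02
mongo --host dr-hostA:27017 --eval 'rs.add("dr-hostB:27017")'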

Summary

Protecting MongoDB using the FlashArray built-in replication mechanism is an efficient and simple method of protecting the data. Moreover, FlashArray snapshots make the process of rebuilding the replica set fast. By combining the high availability and flexibility delivered by a MongoDB replica set with FlashArray uptime (99.9999%), built-in redundancies, replication, and clustering functionality, customers can achieve a high level of protection from hardware, software, and entire site failures, ensuring business continuity. For additional information describing the benefits of FlashArray and MongoDB see https://support.purestorage.com/Solutions/MongoDB/Blogs

 

© 2020 Pure Storage, the Pure P Logo, and the marks on the Pure Trademark List at https://www.purestorage.com/legal/pr...duserinfo.html are trademarks of Pure Storage, Inc. Other names are trademarks of their respective owners. 

THIS DOCUMENTATION IS PROVIDED "AS IS" AND ALL EXPRESS OR IMPLIED CONDITIONS, REPRESENTATIONS AND WARRANTIES, INCLUDING ANY IMPLIED WARRANTY OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE, OR NON-INFRINGEMENT, ARE DISCLAIMED, EXCEPT TO THE EXTENT THAT SUCH DISCLAIMERS ARE HELD TO BE LEGALLY INVALID. PURE STORAGE SHALL NOT BE LIABLE FOR INCIDENTAL OR CONSEQUENTIAL DAMAGES IN CONNECTION WITH THE FURNISHING, PERFORMANCE, OR USE OF THIS DOCUMENTATION. THE INFORMATION CONTAINED IN THIS DOCUMENTATION IS SUBJECT TO CHANGE WITHOUT NOTICE.