
PostgreSQL 12 on FlashArray//X R3 Best Practices Guide



This article covers deployment and best practices for PostgreSQL on Pure Storage® FlashArray™//X. PostgreSQL is one of the most advanced open-source databases. It is fully ACID-compliant and uses the SQL language. PostgreSQL can run on all Linux operating systems, as well as various other operating systems such as Solaris and Windows.

Let's discuss the best way to configure and set up a PostgreSQL database on FlashArray//X. To test and optimize performance, we used the HammerDB and sysbench benchmarking tools extensively to fine-tune the file system and database in the setup explained below, which is based on Cisco UCS servers and Pure Storage FlashArray//X.

Please make sure you understand these parameters before making any changes. It is highly recommended to validate them on a test system before modifying your production system.


Configuration and setup of database

For the configuration and setup process, PostgreSQL was tested on Cisco UCS servers and FlashArray//X. The Cisco M5 server was connected to FlashArray//X via Fibre Channel, using the following HBA cards: Cisco Systems Inc VIC FCoE HBA [1137:0045] (rev a2). There were a total of 8 paths from the host to FlashArray//X.

Here is the complete breakdown of the setup:


  • Purity version: 5.3.2

  • Model: Pure Storage FlashArray//X90 R3

  • Capacity: 20*7.93 TB DirectFlash modules => 158 TB Raw capacity

  • Connectivity: 4-port 32Gb Fibre Channel (NVMe-oF Ready) 

Operating System/Servers:

  • Operating System: CentOS Linux 7.6

  • PostgreSQL version: 12.1

  • CPU: Intel(R) Xeon(R) Platinum 8160 CPU @ 2.10GHz (48 cores)

  • Memory: 512GiB, DDR4 Synchronous 2666 MHz (0.4 ns)

  • Server: Cisco UCSC-C220-M5SX

Best Practices/Configuration for PostgreSQL 12 on FlashArray//X

Before describing best practices and configuration for PostgreSQL, note that the default PostgreSQL configuration is not appropriate for deployment on FlashArray//X: there can be a significant performance difference between the default configuration and a properly configured PostgreSQL database.

The table below compares results when running with default settings versus with best practices applied.

[Table: Percentage improvement with best practices (transactions per second)]





Let us now look at the best practices for PostgreSQL deployment on FlashArray//X. First, check the support page for the operating system you plan to use for deploying PostgreSQL. For the best performance with Pure Storage FlashArrays, follow the Linux Recommended Settings guide when configuring and implementing Linux hosts in your environment.

Linux Recommended Settings 

  1. Multipathing on Pure Storage FlashArray//X: Multipathing needs to be set up to use the queue-length path selector for all PURE LUNs by configuring /etc/multipath.conf.
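As an illustration, a device stanza along these lines selects the queue-length path selector for PURE LUNs. The exact values here are a sketch; treat Pure's Linux Recommended Settings guide as the authoritative source:

```
# /etc/multipath.conf -- illustrative stanza for PURE LUNs (verify against
# Pure's Linux Recommended Settings guide before use)
devices {
    device {
        vendor               "PURE"
        product              "FlashArray"
        path_selector        "queue-length 0"
        path_grouping_policy multibus
        fast_io_fail_tmo     10
        dev_loss_tmo         60
    }
}
```

After editing the file, reload the multipathd service so the settings take effect.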

  2. File System for PostgreSQL data and archive logs: We recommend using the XFS file system for deploying PostgreSQL’s data and archive logs on FlashArray//X.

                    PostgreSQL’s data: /var/lib/pgsql/12/data -> XFS

                    Archive logs: XFS
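As a sketch, the data volume can be formatted with XFS and mounted at the data directory like this. The multipath alias /dev/mapper/pgdata is hypothetical; substitute the device for your LUN:

```
# Format the LUN with XFS (/dev/mapper/pgdata is a hypothetical multipath alias)
mkfs.xfs /dev/mapper/pgdata

# Mount it at the PostgreSQL 12 data directory
mkdir -p /var/lib/pgsql/12/data
mount /dev/mapper/pgdata /var/lib/pgsql/12/data

# /etc/fstab entry to persist the mount across reboots
# /dev/mapper/pgdata  /var/lib/pgsql/12/data  xfs  defaults,noatime  0 0
```

The noatime option is a common tuning choice for database volumes; test it in your own environment.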

  3. Huge pages: Enabling huge pages makes the kernel use 2MB pages instead of the standard 4KB page size. This improves virtual memory management: the kernel keeps a table mapping virtual memory addresses to physical addresses, and with each transaction it must load the related mappings. If the page size is small, many pages must be loaded, which decreases performance.

To enable huge pages, add a vm.nr_hugepages line to /etc/sysctl.conf.

For example, vm.nr_hugepages = 70000 reserves roughly 137 GiB of huge pages (each page is 2 MB), enough to back a 128 GiB shared buffer pool with headroom.

Change the /etc/security/limits.conf and add the following:

postgres soft memlock 100000000

postgres hard memlock 100000000
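As a quick sanity check on the sizing above, the number of 2 MiB huge pages needed for a given shared buffer pool can be computed as follows (a sketch; in practice, round up and leave headroom as the 70000 figure above does):

```shell
# Sketch: size vm.nr_hugepages for a target shared buffer pool.
# Assumes 2 MiB huge pages (the x86_64 default).
shared_buffers_gib=128
hugepage_size_mib=2
nr_hugepages=$(( shared_buffers_gib * 1024 / hugepage_size_mib ))
echo "$nr_hugepages"   # 65536 pages exactly cover 128 GiB
```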

  4. CPU frequency scaling: The CPU needs to be configured for optimal performance, so we set the cpufreq governor to performance as shown below.

./cpupower frequency-set --governor=performance

We also make sure that the idle states are enabled and that the intel_idle driver is used on Intel CPUs. This gives the best possible performance.

./cpupower idle-set --enable-all

  5. PostgreSQL parameters: In this section, we discuss the important PostgreSQL parameters shown below. These changes are made in the postgresql.conf file.




shared_buffers

This is the most important parameter and directly impacts performance. It sets the amount of memory the database server uses for shared memory buffers.

Recommended value: 25-40% of server memory.

max_wal_size

This parameter sets the maximum size the WAL is allowed to grow to between automatic checkpoints. With higher values of shared_buffers, it is critical to increase max_wal_size as well. This helps spread out the work of writing large quantities of new or changed data over a longer period of time. We saw a significant improvement in transactions per second when this value was increased.

Recommended value: increase along with shared_buffers (we used 100GB in our checkpoint testing).

huge_pages

As mentioned before, huge pages provide a big advantage and boost in performance.

Recommended value: on.

wal_level

This parameter determines how much information is written to the WAL. "replica" writes enough data to support WAL archiving and replication, and is the recommended setting for a production database. "minimal" removes all but the information required to recover from a crash, and gives a huge improvement in transactions per second: almost 4x faster than "replica".

Recommended value: "replica" for a production system, "minimal" for a performance-focused system.

synchronous_commit

This parameter specifies whether a transaction's WAL records must be written to disk before the command returns success to the client. The safe option for production systems is obviously "on". But if performance is a priority over durability, it can be set to "off", giving 2x faster transactions per second (as observed in our testing). This is because the server instead issues a group fsync to disk for accumulated transactions after a delay of three times wal_writer_delay. Note, however, that there is a chance of losing these transactions if the server crashes.

Recommended value: "on" for a production system, "off" for a performance-focused system.

checkpoint_timeout

This parameter controls the maximum time between automatic checkpoints. Increasing it delays checkpoints and gives a 1.5 to 2x improvement in transactions per second, but at the same time it increases the time needed for crash recovery.

Recommended value: 5 minutes for production; 30 minutes or more for a performance-focused system.

checkpoint_completion_target

This parameter specifies the fraction of the checkpoint interval spent flushing dirty buffers. Raising it slows down checkpoint IO, as checkpoints take longer, but it ensures that enough IO bandwidth is left for other operations.

Recommended value: 0.2-0.5.

effective_io_concurrency

This sets the number of concurrent disk I/O operations in PostgreSQL. Increasing this value raises the number of I/O operations any individual PostgreSQL session attempts to initiate in parallel, which can directly boost query performance.

Recommended value: 256-512.

wal_sync_method

This parameter determines which method is used to force WAL updates out to disk (make sure fsync remains turned on). In our testing, fdatasync (calling fdatasync() at each commit) achieved better performance than fsync().

Recommended value: fdatasync.

max_worker_processes / max_parallel_workers

These parameters help PostgreSQL scale vertically, especially when parallel queries are run. It is best to keep both parameters at the same value.

Recommended value: the number of cores in your database server.

wal_compression

This parameter controls compression of full-page writes in the WAL and can slow down performance when set to on. We recommend turning it off, as FlashArray//X's data reduction capabilities will themselves reduce the WAL size.

Recommended value: off.
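Pulling the recommendations together, a postgresql.conf fragment for a performance-focused test system on this 512 GiB server might look like the following sketch. The values are illustrative, production-safe alternatives are noted in the comments, and everything should be tested before use:

```
# Illustrative postgresql.conf sketch for a 512 GiB host -- test before use
shared_buffers = 128GB                # 25% of server memory
huge_pages = on                       # requires vm.nr_hugepages (see above)
max_wal_size = 100GB                  # delays checkpoints
wal_level = minimal                   # use 'replica' in production
max_wal_senders = 0                   # required when wal_level = minimal
synchronous_commit = off              # use 'on' in production
checkpoint_timeout = 30min            # use 5min in production
checkpoint_completion_target = 0.5
effective_io_concurrency = 256
wal_sync_method = fdatasync
wal_compression = off                 # rely on FlashArray//X data reduction
max_worker_processes = 48             # match the server's core count
max_parallel_workers = 48
```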

Checkpoint in PostgreSQL

        The checkpoint is the most IO-intensive operation in a PostgreSQL database, and it can be controlled through the checkpoint_timeout and max_wal_size parameters. Checkpoint operations are typically expensive and slow the database down while they run. If we delay checkpoints, the database performs much better between them; on the other hand, the accumulated changes can slow down database recovery if the system crashes. In our testing, we found that delaying checkpoint operations gave a large performance boost to the PostgreSQL database compared with checkpointing often. This is a trade-off between performance and database recovery time. For example, when we changed checkpoint_timeout to 30 minutes and max_wal_size to 100GB, checkpoints were delayed and we saw a 1.5x performance boost over running with frequent checkpoints.
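To see whether checkpoints on a running instance are being triggered by the timeout or requested early because max_wal_size was reached, the standard pg_stat_bgwriter view can be queried. This is a sketch against a live server:

```
psql -c "SELECT checkpoints_timed, checkpoints_req FROM pg_stat_bgwriter;"
```

A high checkpoints_req count relative to checkpoints_timed suggests max_wal_size is too small for the write load.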



Pure Storage FlashArray//X is known for simplicity, high performance, and data reduction capabilities, which makes it an excellent choice for deploying PostgreSQL. We highly recommend trying this architecture based on FlashArray//X, as we have seen up to 20.4% better performance in transactions per second on FlashArray//X over SAS-DAS based deployments (see the companion article, Performance comparison of PostgreSQL deployed on FlashArray//X and SAS-DAS).