Percona Distribution for PostgreSQL Documentation: 15.3 (June 28, 2023)
PostgreSQL
Documentation
15.3 (June 28, 2023)
Table of contents
2. Release Notes
4. Extensions
4.1 pg_stat_monitor
5. Solutions
• pgpool2 - a middleware between PostgreSQL server and client for high availability, connection pooling
and load balancing.
• pg_stat_monitor collects and aggregates statistics for PostgreSQL and provides histogram information.
Percona Distribution for PostgreSQL is also shipped with the libpq library. It contains "a set of library functions
that allow client programs to pass queries to the PostgreSQL backend server and to receive the results of
these queries."
Contact Us
For paid support and managed or consulting services, contact Percona Sales.
2. Release Notes
Percona Distribution for PostgreSQL is a solution with a collection of tools from the PostgreSQL community that are tested to work together and help you deploy and manage PostgreSQL. It aims to address the operational issues that enterprises face, such as high availability, disaster recovery, security, performance, and scalability.
• Percona Distribution for PostgreSQL components now include PostGIS - the open-source extension that allows storing and manipulating spatial data in PostgreSQL.
The following is the list of extensions available in Percona Distribution for PostgreSQL.
• pgAudit 1.7.0: provides detailed session or object audit logging via the standard logging facility provided by PostgreSQL.
• pgAudit set_user 4.0.1: provides an additional layer of logging and control when unprivileged users must escalate themselves to superusers or object owner roles in order to perform needed maintenance tasks.
• pgpool2 4.4.3: a middleware between the PostgreSQL server and client for high availability, connection pooling and load balancing.
• pg_stat_monitor 2.0.1: collects and aggregates statistics for PostgreSQL and provides histogram information.
• llvm 12.0.1 packages for Red Hat Enterprise Linux 8 / CentOS 8. This fixes compatibility issues with LLVM
from upstream.
• supplemental ETCD packages which can be used for setting up Patroni clusters. These packages are
available for the following operating systems:
This update of Percona Distribution for PostgreSQL includes pg_stat_monitor 2.0.1, which fixes issues that could cause database failures.
• A new major version of pg_stat_monitor 2.0.0 has been released and is now generally available with
Percona Distribution for PostgreSQL.
• A new extension pgpool - a middleware between PostgreSQL server and client for high availability,
connection pooling and load balancing - is added.
• Percona Distribution for PostgreSQL is now available on Red Hat Enterprise Linux 9 and compatible derivatives.
The following is the list of extensions available in Percona Distribution for PostgreSQL.
• pgAudit 1.7.0: provides detailed session or object audit logging via the standard logging facility provided by PostgreSQL.
• pgAudit set_user 4.0.1: provides an additional layer of logging and control when unprivileged users must escalate themselves to superusers or object owner roles in order to perform needed maintenance tasks.
• pg_stat_monitor 2.0.0: collects and aggregates statistics for PostgreSQL and provides histogram information.
• pgpool2 4.4.2: a middleware between the PostgreSQL server and client for high availability, connection pooling and load balancing.
• llvm 12.0.1 packages for Red Hat Enterprise Linux 8 / CentOS 8. This fixes compatibility issues with LLVM
from upstream.
• supplemental ETCD packages which can be used for setting up Patroni clusters. These packages are
available for the following operating systems:
Percona Distribution for PostgreSQL now includes meta-packages that simplify its installation. The percona-ppg-server meta-package installs PostgreSQL and the extensions, while the percona-ppg-server-ha package installs the high-availability components recommended by Percona.
The following is the list of extensions available in Percona Distribution for PostgreSQL.
• pgaudit 1.7.0: provides detailed session or object audit logging via the standard logging facility provided by PostgreSQL.
• pgAudit set_user 4.0.0: provides an additional layer of logging and control when unprivileged users must escalate themselves to superuser or object owner roles in order to perform needed maintenance tasks.
• pg_stat_monitor 1.1.1: collects and aggregates statistics for PostgreSQL and provides histogram information.
• HAProxy 2.5.9: the high-availability and load balancing solution for PostgreSQL.
• llvm 12.0.1 packages for Red Hat Enterprise Linux 8 / CentOS 8. This fixes compatibility issues with LLVM
from upstream.
• supplemental ETCD packages which can be used for setting up Patroni clusters. These packages are
available for the following operating systems:
We are pleased to announce the launch of Percona Distribution for PostgreSQL 15.0 - a solution with a collection of tools from the PostgreSQL community that are tested to work together and help you deploy and manage PostgreSQL. It aims to address the operational issues that enterprises face, such as high availability, disaster recovery, security, performance, and scalability.
Percona Distribution for PostgreSQL 15 features a lot of new functionalities and enhancements to
performance, replication, statistics collection, logging and more. Among them are the following:
• Added the MERGE command, which allows your developers to write conditional SQL statements that can
include INSERT , UPDATE , and DELETE actions within a single statement.
• A view can now be created with the permissions of the caller, instead of the view creator. This adds an
additional layer of protection to ensure that view callers have the correct permissions for working with the
underlying data.
• Row filtering and column lists for publishers lets users choose to replicate a subset of data from a table
• The ability to skip replaying a conflicting transaction and to automatically disable a subscription if an error is detected simplifies conflict management.
• A new logging format jsonlog outputs log data using a defined JSON structure, which allows PostgreSQL
logs to be processed in structured logging systems
• Now DBAs can manage the PostgreSQL configuration by granting users permission to alter server-level
configuration parameters. Users can also browse the configuration from psql using the \dconfig
command.
• Server-level statistics are now collected in shared memory, eliminating both the statistics collector
process and periodically writing this data to disk
• A new built-in extension, pg_walinspect , lets users inspect the contents of write-ahead log files directly
from a SQL interface.
• Performance optimizations:
• Improved performance of window functions such as rank() , row_number() and count() . Read more about
performance and benchmarking results in Introducing Performance Improvement of Window Functions in
PostgreSQL 15.
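As an illustration of the MERGE command described above, here is a sketch; the table and column names are invented for the example:

```sql
-- Reconcile incoming stock counts against an inventory table in one statement
MERGE INTO inventory AS i
USING stock_updates AS s
    ON i.item_id = s.item_id
WHEN MATCHED AND s.quantity = 0 THEN
    DELETE
WHEN MATCHED THEN
    UPDATE SET quantity = s.quantity
WHEN NOT MATCHED THEN
    INSERT (item_id, quantity) VALUES (s.item_id, s.quantity);
```

Previously, this logic would have required separate INSERT, UPDATE, and DELETE statements or a vendor-specific upsert construct.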
The following is the list of extensions available in Percona Distribution for PostgreSQL.
• pgaudit 1.7.0: provides detailed session or object audit logging via the standard logging facility provided by PostgreSQL.
• pgAudit set_user 3.0.0: provides an additional layer of logging and control when unprivileged users must escalate themselves to superuser or object owner roles in order to perform needed maintenance tasks.
• pg_stat_monitor 1.1.1: collects and aggregates statistics for PostgreSQL and provides histogram information.
• HAProxy 2.5.9: the high-availability and load balancing solution for PostgreSQL.
• llvm 12.0.1 packages for Red Hat Enterprise Linux 8 / CentOS 8. This fixes compatibility issues with LLVM
from upstream.
• supplemental ETCD packages which can be used for setting up Patroni clusters. These packages are
available for the following operating systems:
Percona provides installation packages in DEB and RPM format for 64-bit Linux distributions. Find the full list
of supported platforms on the Percona Software and Platform Lifecycle page.
Like many other Percona products, we recommend installing Percona Distribution for PostgreSQL from
Percona repositories by using the percona-release utility. The percona-release utility automatically
enables the required repository for you so you can easily install and update Percona Distribution for
PostgreSQL packages and their dependencies through the package manager of your operating system.
Package contents
In addition to individual packages for its components, Percona Distribution for PostgreSQL also includes two
meta-packages: percona-ppg-server and percona-ppg-server-ha .
Using a meta-package, you can install all components it contains in one go.
PERCONA-PPG-SERVER
The percona-ppg-server meta-package (named percona-ppg-server-15 on Debian-based systems and percona-ppg-server15 on RPM-based systems) installs the PostgreSQL server with the following packages:
• percona-pgaudit: provides detailed session or object audit logging via the standard PostgreSQL logging facility.
PERCONA-PPG-SERVER-HA
The percona-ppg-server-ha meta-package (named percona-ppg-server-ha-15 on Debian-based systems and percona-ppg-server-ha15 on RPM-based systems) installs the high-availability components recommended by Percona.
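With the Percona repository enabled (see the installation chapters below), installing a meta-package is a single package-manager command; a sketch:

```shell
# Debian/Ubuntu: PostgreSQL server plus extensions in one go
sudo apt install percona-ppg-server-15

# RHEL and derivatives: the high-availability meta-package
sudo yum install percona-ppg-server-ha15
```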
This document describes how to install Percona Distribution for PostgreSQL from Percona repositories on DEB-based distributions such as Debian and Ubuntu.
Preconditions
Debian and other systems that use the apt package manager include the upstream PostgreSQL server
package (postgresql-15) by default. The components of Percona Distribution for PostgreSQL 15 can only be
installed together with the PostgreSQL server shipped by Percona (percona-postgresql-15). If you wish to
use Percona Distribution for PostgreSQL, uninstall the PostgreSQL package provided by your distribution
(postgresql-15) and then install the chosen components from Percona Distribution for PostgreSQL.
Procedure
Run all the commands in the following sections as root or using the sudo command:
Percona provides two repositories for Percona Distribution for PostgreSQL. We recommend enabling the
Major release repository to timely receive the latest updates.
INSTALL PACKAGES
Install pgAudit :
Install pgBackRest :
Install Patroni :
Install pg_stat_monitor
Install pgBouncer :
Install pgAudit-set_user :
Install pgBadger :
Install wal2json :
Install HAProxy
Install pgpool2
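A sketch of the procedure above on Debian/Ubuntu; the repository setup command and package names follow Percona's usual naming, so adjust them to the components you actually need:

```shell
# Enable the Major release repository and refresh the package lists
sudo percona-release setup ppg-15
sudo apt update

# Install the PostgreSQL server shipped by Percona...
sudo apt install percona-postgresql-15
# ...then each desired component (pgAudit, pgBackRest, Patroni, and so on)
```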
The installation process automatically initializes and starts the default database. You can check the
database status using the following command:
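On a systemd-based Debian/Ubuntu system, the status check mentioned above might be:

```shell
sudo systemctl status postgresql
```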
By default, postgres user and postgres database are created in PostgreSQL upon its installation and
initialization. This allows you to connect to the database as the postgres user.
$ sudo su postgres
$ psql
Hint
To quit the psql session, type \q .
3.1.3 Install Percona Distribution for PostgreSQL on Red Hat Enterprise Linux and derivatives
This document describes how to install Percona Distribution for PostgreSQL from Percona repositories on
RPM-based distributions such as Red Hat Enterprise Linux and compatible derivatives.
If you intend to install Percona Distribution for PostgreSQL on Red Hat Enterprise Linux v8, disable the
postgresql and llvm-toolset modules:
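The module-disabling step could look like this on Red Hat Enterprise Linux 8:

```shell
sudo dnf module disable postgresql llvm-toolset
```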
Procedure
Run all the commands in the following sections as root or using the sudo command:
Percona provides two repositories for Percona Distribution for PostgreSQL. We recommend enabling the
Major release repository to timely receive the latest updates.
INSTALL PACKAGES
Install pgaudit :
Install pgBackRest :
Install Patroni :
Install pg_stat_monitor :
Install pgBouncer :
Install pgAudit-set_user :
Install pgBadger :
Install wal2json :
Install HAProxy
Install pgpool
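A sketch of the procedure above on RHEL and derivatives; the server package name follows Percona's usual RPM naming and is an assumption here:

```shell
# Enable the Major release repository
sudo percona-release setup ppg-15

# Install the PostgreSQL server shipped by Percona...
sudo yum install percona-postgresql15-server
# ...then each desired component (pgaudit, pgBackRest, Patroni, and so on)
```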
After the installation, the default database storage is not automatically initialized. To complete the
installation and start Percona Distribution for PostgreSQL, initialize the database using the following
command:
$ /usr/pgsql-15/bin/postgresql-15-setup initdb
By default, postgres user and postgres database are created in PostgreSQL upon its installation and
initialization. This allows you to connect to the database as the postgres user.
$ sudo su postgres
$ psql
Hint
To quit the psql session, type \q .
Some extensions require additional configuration before using them with Percona Distribution for PostgreSQL. This section provides configuration instructions per extension.
Patroni
Patroni is a third-party high availability solution for PostgreSQL. The High Availability in PostgreSQL with Patroni chapter provides the solution overview and architecture deployment details.
While setting up a high availability PostgreSQL cluster with Patroni, you will need the following components:
For CentOS 8, RPM packages for ETCD are available within Percona Distribution for PostgreSQL. You can install them using the following command:
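The installation command referred to above might be (package names assumed from the distribution's supplemental ETCD packages):

```shell
sudo yum install etcd python3-python-etcd
```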
• HAProxy.
See the configuration guidelines for Debian and Ubuntu and RHEL and CentOS.
See also
• Patroni documentation
pgBadger
Enable the following options in postgresql.conf configuration file before starting the service:
log_min_duration_statement = 0
log_line_prefix = '%t [%p]: '
log_checkpoints = on
log_connections = on
log_disconnections = on
log_lock_waits = on
log_temp_files = 0
log_autovacuum_min_duration = 0
log_error_verbosity = default
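With logging configured as above, a report can then be generated by pointing pgBadger at the PostgreSQL log; the log path below is an assumption for a default Debian-style layout:

```shell
pgbadger /var/log/postgresql/postgresql-15-main.log -o report.html
```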
pgAudit set-user
Add set_user to shared_preload_libraries in postgresql.conf . The recommended way is to use the ALTER SYSTEM command. Connect to psql and use the following command:
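The ALTER SYSTEM invocation referred to above might look like this; note that if shared_preload_libraries already lists other modules, they must be included in the comma-separated value:

```sql
ALTER SYSTEM SET shared_preload_libraries = 'set_user';
-- shared_preload_libraries changes take effect only after a server restart
```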
You can fine-tune user behavior with the custom parameters supplied with the extension.
wal2json
After the installation, enable the following option in postgresql.conf configuration file before starting the
service:
wal_level = logical
The Major Release repository ( ppg-15 ) includes the latest version packages. Whenever a package is updated, the package manager of your operating system detects that and prompts you to update. As long as you update all Distribution packages at the same time, you can ensure that the packages you're using have been tested and verified by Percona. We recommend installing Percona Distribution for PostgreSQL from the Major Release repository.

The Minor Release repository includes a particular minor release of the database and all of the packages that were tested and verified to work with that minor release (e.g. ppg-15.1 ). You may choose to install Percona Distribution for PostgreSQL from the Minor Release repository if you have decided to standardize on a particular release which has passed rigorous testing procedures and which has been verified to work with your applications. This allows you to deploy to a new host and ensure that you'll be using the same version of all the Distribution packages, even if newer releases exist in other repositories.

The disadvantage of using a Minor Release repository is that you are locked in to this particular release. When potentially critical fixes are released in a later minor version of the database, you will not be prompted for an upgrade by the package manager of your operating system. You would need to change the configured repository in order to install the upgrade.
We encourage users to migrate from their PostgreSQL deployments based on community binaries to
Percona Distribution for PostgreSQL. This document provides the migration instructions.
Depending on your business requirements, you may migrate to Percona Distribution for PostgreSQL either on
the same server or onto a different server.
To ensure that your data is safe during the migration, we recommend making a backup of your data and all configuration files (such as pg_hba.conf , postgresql.conf , postgresql.auto.conf ) using the tool of your choice. The backup process is out of scope of this document. You can use pg_dumpall or another tool of your choice.
3. Install percona-release
7. Start the postgresql service. The installation process starts and initializes the default cluster automatically.
You can check its status with:
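On a systemd-based distribution, starting the service and checking its status might be:

```shell
sudo systemctl start postgresql
sudo systemctl status postgresql
```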
To ensure that your data is safe during the migration, we recommend making a backup of your data and all configuration files (such as pg_hba.conf , postgresql.conf , postgresql.auto.conf ) using the tool of your choice. The backup process is out of scope of this document. You can use pg_dumpall or another tool of your choice.
3. Install percona-release
4. Enable the repository
In this scenario, we will refer to the server with PostgreSQL Community as the "source" and to the server with
Percona Distribution for PostgreSQL as the "target".
To migrate from PostgreSQL Community to Percona Distribution for PostgreSQL on a different server, do the
following:
1. Back up your data and all configuration files (such as pg_hba.conf , postgresql.conf , postgresql.auto.conf )
using the tool of your choice.
1. Install percona-release
An in-place upgrade means installing a new version without removing the old version and keeping the data files on the server.
See also
pg_upgrade Documentation
As with installing, we recommend upgrading Percona Distribution for PostgreSQL from Percona repositories.
Important
A major upgrade is a risky process because of many changes between versions and issues that might occur
during or after the upgrade. Therefore, make sure to back up your data first. The backup tools are out of scope of
this document. Use the backup tool of your choice.
The general in-place upgrade flow for Percona Distribution for PostgreSQL is the following:
The exact steps may differ depending on the package manager of your operating system.
Important
• Install percona-release
• Enable Percona repository:
$ sudo su postgres
• Change the current directory to the tmp directory where logs and some scripts will be recorded:
$ cd tmp/
• Check the ability to upgrade Percona Distribution for PostgreSQL from 14 to 15:
$ /usr/lib/postgresql/15/bin/pg_upgrade \
--old-datadir=/var/lib/postgresql/14/main \
--new-datadir=/var/lib/postgresql/15/main \
--old-bindir=/usr/lib/postgresql/14/bin \
--new-bindir=/usr/lib/postgresql/15/bin \
--old-options '-c config_file=/etc/postgresql/14/main/postgresql.conf' \
--new-options '-c config_file=/etc/postgresql/15/main/postgresql.conf' \
--check
The --check flag here instructs pg_upgrade to only check the upgrade without changing any data.
Sample output
$ /usr/lib/postgresql/15/bin/pg_upgrade \
--old-datadir=/var/lib/postgresql/14/main \
--new-datadir=/var/lib/postgresql/15/main \
--old-bindir=/usr/lib/postgresql/14/bin \
--new-bindir=/usr/lib/postgresql/15/bin \
--old-options '-c config_file=/etc/postgresql/14/main/postgresql.conf' \
--new-options '-c config_file=/etc/postgresql/15/main/postgresql.conf' \
--link
The --link flag creates hard links to the files on the old version cluster so you don’t need to copy data.
If you don’t wish to use the --link option, make sure that you have enough disk space to store 2 copies of files
for both old version and new version clusters.
$ exit
• Percona Distribution for PostgreSQL 14 uses port 5432 while Percona Distribution for PostgreSQL 15 is set up to use port 5433 by default. To start Percona Distribution for PostgreSQL 15, swap the ports in the configuration files of both versions.
$ sudo su postgres
$ tmp/analyze_new_cluster.sh
$ #Logout
$ exit
$ rm -rf /etc/postgresql/14/main
Important
• Install percona-release
• Enable Percona repository:
• Install components:
$ sudo su postgres
export LC_ALL="en_US.UTF-8"
export LC_CTYPE="en_US.UTF-8"
$ /usr/pgsql-15/bin/initdb -D /var/lib/pgsql/15/data
$ sudo su postgres
• Check the ability to upgrade Percona Distribution for PostgreSQL from 14 to 15:
$ /usr/pgsql-15/bin/pg_upgrade \
--old-bindir /usr/pgsql-14/bin \
--new-bindir /usr/pgsql-15/bin \
--old-datadir /var/lib/pgsql/14/data \
--new-datadir /var/lib/pgsql/15/data \
--check
The --check flag here instructs pg_upgrade to only check the upgrade without changing any data.
Sample output
$ /usr/pgsql-15/bin/pg_upgrade \
--old-bindir /usr/pgsql-14/bin \
--new-bindir /usr/pgsql-15/bin \
--old-datadir /var/lib/pgsql/14/data \
--new-datadir /var/lib/pgsql/15/data \
--link
The --link flag creates hard links to the files on the old version cluster so you don’t need to copy data. If you
don’t wish to use the --link option, make sure that you have enough disk space to store 2 copies of files for
both old version and new version clusters.
$ sudo su postgres
$ ./analyze_new_cluster.sh
$ ./delete_old_cluster.sh
• Remove packages
$ rm -rf /var/lib/pgsql/14/data
Though minor upgrades do not change the behavior, we recommend backing up your data first, to be on the safe side.
Minor upgrade of Percona Distribution for PostgreSQL includes the following steps:
Note
These steps apply if you installed Percona Distribution for PostgreSQL from the Major Release repository. In this
case, you are always upgraded to the latest available release.
If you installed Percona Distribution for PostgreSQL from the Minor Release repository, you will need to enable a
new version repository to upgrade.
For more information about Percona repositories, refer to Installing Percona Distribution for PostgreSQL.
Before the upgrade, update the percona-release utility to the latest version. This is required to install the new
version packages of Percona Distribution for PostgreSQL. Refer to Percona Software Repositories Documentation
for update instructions.
Important
On Debian / Ubuntu
2. Install new version packages. See Installing Percona Distribution for PostgreSQL.
On Debian / Ubuntu
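A sketch of a minor upgrade on Debian/Ubuntu, assuming the Major Release repository is already enabled:

```shell
# Stop the service, refresh the package lists, and upgrade the installed packages
sudo systemctl stop postgresql
sudo apt update
sudo apt upgrade
sudo systemctl start postgresql
```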
If you wish to upgrade Percona Distribution for PostgreSQL to the major version, refer to Upgrading Percona
Distribution for PostgreSQL from 14 to 15.
4. Extensions
4.1 pg_stat_monitor
Note
4.1.1 Overview
pg_stat_monitor is a Query Performance Monitoring tool for PostgreSQL. It collects various statistics data
such as query statistics, query plan, SQL comments and other performance insights. The collected data is
aggregated and presented in a single view. This allows you to view queries from performance, application
and analysis perspectives.
pg_stat_monitor groups statistics data and writes it in a storage unit called a bucket. The data is added and stored in a bucket for the defined period – the bucket lifetime. This allows you to identify performance issues and patterns based on time.
• Bucket size. This is the amount of shared memory allocated for buckets. Memory is divided equally among
buckets.
• Bucket lifetime.
When a bucket lifetime expires, pg_stat_monitor resets all statistics and writes the data in the next bucket in
the chain. When the last bucket’s lifetime expires, pg_stat_monitor returns to the first bucket.
Important
The contents of the bucket will be overwritten. In order not to lose the data, make sure to read the bucket before
pg_stat_monitor starts writing new data to it.
Views
PG_STAT_MONITOR VIEW
The pg_stat_monitor view contains all the statistics collected and aggregated by the extension. This view
contains one row for each distinct combination of metrics and whether it is a top-level statement or not (up
to the maximum number of distinct statements that the module can track). For details about available
metrics, refer to the pg_stat_monitor view reference.
• bucket
• userid
• datname
• queryid
• client_ip
• planid
• application_name
For security reasons, only superusers and members of the pg_read_all_stats role are allowed to see the SQL text, client_ip and queryid of queries executed by other users. Other users can still see the statistics if the view has been installed in their database.
Starting with version 2.0.0, the pg_stat_monitor_settings view is deprecated and removed. All pg_stat_monitor configuration parameters are now available through the pg_settings view using the following query:
SELECT name, setting, unit, context, vartype, source, min_val, max_val, enumvals, boot_val,
reset_val, pending_restart FROM pg_settings WHERE name LIKE '%pg_stat_monitor%';
For backward compatibility, you can create the pg_stat_monitor_settings view using the following SQL statement:
CREATE VIEW pg_stat_monitor_settings
AS
SELECT *
FROM pg_settings
WHERE name LIKE 'pg_stat_monitor.%';
In pg_stat_monitor version 1.1.1 and earlier, the pg_stat_monitor_settings view shows one row per
pg_stat_monitor configuration parameter. It displays configuration parameter name, value, default value,
description, minimum and maximum values, and whether a restart is required for a change in value to be
effective.
4.1.2 Installation
This section describes how to install pg_stat_monitor from Percona repositories. To learn about other
installation methods, see the Installation section in the pg_stat_monitor documentation.
Preconditions:
To install pg_stat_monitor from Percona repositories, you need to subscribe to them. To do this, you must
have the percona-release repository management tool up and running.
4.1.3 Setup
pg_stat_monitor requires additional setup in order to use it with PostgreSQL. The setup steps are the
following:
The recommended way to modify PostgreSQL configuration file is using the ALTER SYSTEM command. Connect
to psql and use the following command:
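The ALTER SYSTEM command referred to above might be (include any other modules already listed in shared_preload_libraries in the comma-separated value):

```sql
ALTER SYSTEM SET shared_preload_libraries = 'pg_stat_monitor';
```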
The parameter value is written to the postgresql.auto.conf file, which is read in addition to the postgresql.conf file.
Note
To use pg_stat_monitor together with pg_stat_statements , specify both modules separated by commas for the
ALTER SYSTEM SET command.
2. Start or restart the postgresql instance to enable pg_stat_monitor . Use the following command for restart:
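On a systemd-based system, the restart might be (the service name can be postgresql-15 on RPM-based systems):

```shell
sudo systemctl restart postgresql
```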
3. Create the extension. Connect to psql and use the following command:
By default, the extension is created against the postgres database. You need to create the extension on every
database where you want to collect statistics.
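The extension creation in step 3 is the standard CREATE EXTENSION statement, run in each target database:

```sql
CREATE EXTENSION pg_stat_monitor;
```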
Tip
To check the version of the extension, run the following command in the psql session:
SELECT pg_stat_monitor_version();
4.1.4 Usage
For example, to view the IP address of the client application that made the query, run the following
command:
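The query referred to above might look like this sketch (see the pg_stat_monitor view reference for the full column list):

```sql
SELECT client_ip, query FROM pg_stat_monitor;
```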
Output:
Output
name                                       | short_desc
-------------------------------------------+------------------------------------------------------------
pg_stat_monitor.pgsm_bucket_time           | Sets the time in seconds per bucket.
pg_stat_monitor.pgsm_enable_overflow       | Enable/Disable pg_stat_monitor to grow beyond shared
                                           | memory into swap space.
pg_stat_monitor.pgsm_enable_pgsm_query_id  | Enable/disable PGSM specific query id calculation which
                                           | is very useful in comparing same query across databases
                                           | and clusters.
pg_stat_monitor.pgsm_enable_query_plan     | Enable/Disable query plan monitoring.
pg_stat_monitor.pgsm_extract_comments      | Enable/Disable extracting comments from queries.
pg_stat_monitor.pgsm_histogram_buckets     | Sets the maximum number of histogram buckets.
pg_stat_monitor.pgsm_histogram_max         | Sets the time in milliseconds.
pg_stat_monitor.pgsm_histogram_min         | Sets the time in milliseconds.
You can change a parameter by setting a new value in the configuration file. Some parameters require a server restart to apply a new value; for others, a configuration reload is enough. Refer to the configuration parameters section of the pg_stat_monitor documentation for each parameter's description, how to change its value, and whether a server restart is required to apply it.
As an example, let's change the bucket lifetime from the default 60 seconds to 40 seconds. Use the ALTER SYSTEM command:
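The ALTER SYSTEM command for this example, followed by applying and verifying the change, might be:

```sql
ALTER SYSTEM SET pg_stat_monitor.pgsm_bucket_time = 40;
-- depending on the parameter, a configuration reload or a full server
-- restart is needed for the new value to take effect
SELECT pg_reload_conf();
SELECT name, setting FROM pg_settings
WHERE name = 'pg_stat_monitor.pgsm_bucket_time';
```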
name | setting
----------------------------------+---------
pg_stat_monitor.pgsm_bucket_time | 40
See also
pg_stat_monitor Documentation
5. Solutions
Summary
• Solution overview
• Cluster deployment
• Testing the cluster
PostgreSQL has been widely adopted as a modern, high-performance transactional database. A highly
available PostgreSQL cluster can withstand failures caused by network outages, resource saturation,
hardware failures, operating system crashes, or unexpected reboots. Such a cluster is often a critical component of the enterprise application landscape, where four nines of availability is a minimum requirement.
This document provides instructions on how to set up and test a highly-available, single-primary, three-
node cluster with Percona PostgreSQL and Patroni.
There are a few methods for achieving high availability with PostgreSQL:
In recent times, PostgreSQL high availability is most commonly achieved with streaming replication.
Streaming replication
Streaming replication is part of Write-Ahead Log shipping, where changes to the WALs are immediately made available to standby replicas. With this approach, a standby instance is always up to date with changes from the primary node and can assume the role of primary in case of a failover.
Although the native streaming replication in PostgreSQL supports failing over to the primary node, it lacks some key features expected from a truly highly-available solution.
Percona Distribution for PostgreSQL solves this challenge by providing the Patroni extension (https://patroni.readthedocs.io/en/latest/) for achieving PostgreSQL high availability.
Patroni
Patroni provides a template-based approach to create highly available PostgreSQL clusters. Running atop
the PostgreSQL streaming replication process, it integrates with watchdog functionality to detect failed
primary nodes and take corrective actions to prevent outages. Patroni also provides a pluggable
configuration store to manage distributed, multi-node cluster configuration and comes with REST APIs to
monitor and manage the cluster. There is also a command-line utility called patronictl that helps manage
switchovers and failure scenarios.
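For example, once the cluster is running, its state can be inspected with patronictl (a sketch; the configuration file path is an assumption matching the setup later in this guide):

```
$ patronictl -c /etc/patroni/patroni.yml list
```

This prints the cluster members, their roles (leader or replica), state, and replication lag.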
Architecture layout
The following diagram shows the architecture of a three-node PostgreSQL cluster with a single-leader node.
COMPONENTS
Deployment
Use the links below to navigate to the setup instructions relevant to your operating system.
Testing
See the Testing PostgreSQL cluster section for guidelines on how to test your PostgreSQL cluster for
replication, failover, and switchover.
5.1.2 Deploying PostgreSQL for high availability with Patroni on Debian or Ubuntu
This guide provides instructions on how to set up a highly available PostgreSQL cluster with Patroni on
Debian or Ubuntu.
Preconditions
For this setup, we will use the nodes running on Ubuntu 20.04 as the base operating system and having the
following IP addresses:
Note
In a production (or even non-production) setup, the PostgreSQL nodes will be within a private subnet without any
public connectivity to the Internet, and the HAProxy will be in a different subnet that allows client traffic coming
only from a selected IP range. To keep things simple, we have implemented this architecture in a DigitalOcean VPS
environment, and each node can access the other by its internal, private IP.
To make the nodes aware of each other and allow seamless communication, resolve their hostnames
to their IP addresses. Modify the /etc/hosts file of each node as follows:
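As an illustration, the /etc/hosts file on each PostgreSQL node might contain entries like the following (the IP addresses match the etcd configuration used later in this guide; yours will differ):

```
127.0.0.1    localhost
10.104.0.7   node1
10.104.0.2   node2
10.104.0.8   node3
```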
The /etc/hosts file of the HAProxy-demo node looks like the following:
1. Follow the installation instructions to install Percona Distribution for PostgreSQL on node1 , node2 and node3 .
2. Remove the data directory. Patroni requires a clean environment to initialize a new cluster. Use the following
commands to stop the PostgreSQL service and then remove the data directory:
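A minimal sketch of this step, assuming the default Debian/Ubuntu service name and the data directory used later in this guide's Patroni configuration:

```
$ sudo systemctl stop postgresql
$ sudo rm -rf /var/lib/postgresql/14/main
```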
The distributed configuration store helps establish a consensus among nodes during a failover and
manages the configuration for the three PostgreSQL instances. Although Patroni can work with other
distributed consensus stores (e.g., ZooKeeper, Consul), the most commonly used one is etcd .
The etcd cluster is first started on one node, and then the subsequent nodes are added to it
using the add command. The configuration is stored in the /etc/default/etcd file.
• On node1 , add the IP address of node1 to the ETCD_INITIAL_CLUSTER parameter. The configuration file looks as
follows:
ETCD_NAME=node1
ETCD_INITIAL_CLUSTER="node1=http://10.104.0.7:2380"
ETCD_INITIAL_CLUSTER_TOKEN="devops_token"
ETCD_INITIAL_CLUSTER_STATE="new"
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://10.104.0.7:2380"
ETCD_DATA_DIR="/var/lib/etcd/postgresql"
ETCD_LISTEN_PEER_URLS="http://10.104.0.7:2380"
ETCD_LISTEN_CLIENT_URLS="http://10.104.0.7:2379,http://localhost:2379"
ETCD_ADVERTISE_CLIENT_URLS="http://10.104.0.7:2379"
…
• On node2 , add the IP addresses of both node1 and node2 to the ETCD_INITIAL_CLUSTER parameter:
ETCD_NAME=node2
ETCD_INITIAL_CLUSTER="node1=http://10.104.0.7:2380,node2=http://10.104.0.2:2380"
ETCD_INITIAL_CLUSTER_TOKEN="devops_token"
ETCD_INITIAL_CLUSTER_STATE="existing"
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://10.104.0.2:2380"
ETCD_DATA_DIR="/var/lib/etcd/postgresql"
ETCD_LISTEN_PEER_URLS="http://10.104.0.2:2380"
ETCD_LISTEN_CLIENT_URLS="http://10.104.0.2:2379,http://localhost:2379"
ETCD_ADVERTISE_CLIENT_URLS="http://10.104.0.2:2379"
…
• On node3 , the ETCD_INITIAL_CLUSTER parameter includes the IP addresses of all three nodes:
ETCD_NAME=node3
ETCD_INITIAL_CLUSTER="node1=http://10.104.0.7:2380,node2=http://10.104.0.2:2380,node3=http://10.104.0.8:2380"
ETCD_INITIAL_CLUSTER_TOKEN="devops_token"
ETCD_INITIAL_CLUSTER_STATE="existing"
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://10.104.0.8:2380"
ETCD_DATA_DIR="/var/lib/etcd/postgresql"
ETCD_LISTEN_PEER_URLS="http://10.104.0.8:2380"
ETCD_LISTEN_CLIENT_URLS="http://10.104.0.8:2379,http://localhost:2379"
ETCD_ADVERTISE_CLIENT_URLS="http://10.104.0.8:2379"
…
3. On node1 , add node2 and node3 to the cluster using the add command:
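A sketch of the add command with etcdctl (v2 API syntax; adjust for your etcdctl version and the IP addresses shown above):

```
$ etcdctl member add node2 http://10.104.0.2:2380
$ etcdctl member add node3 http://10.104.0.8:2380
```

After each add, start the etcd service on the corresponding node so it joins the cluster.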
The Linux kernel uses a utility called a watchdog to protect against an unresponsive system. The
watchdog monitors a system for unrecoverable application errors, depleted system resources, and similar
conditions, and initiates a reboot to safely return the system to a working state. The watchdog functionality
is useful for servers that are intended to run without human intervention for a long time: instead of a user
discovering a hung server, the watchdog reboots it and helps keep the service available.
In this example, we will configure Softdog, a standard software implementation of a watchdog that is
shipped with Ubuntu 20.04.
Complete the following steps on all three PostgreSQL nodes to load and configure Softdog.
1. Load Softdog:
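The load step is likely a modprobe call; a sketch:

```
$ sudo modprobe softdog
```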
2. Patroni will be interacting with the watchdog service. Since Patroni is run by the postgres user, this user must
have access to Softdog. To make this happen, change the ownership of the watchdog.rules file to the
postgres user:
/lib/modprobe.d/blacklist_linux_5.4.0-73-generic.conf:blacklist softdog
softdog 16384 0
4. Check that the Softdog files under the /dev/ folder are owned by the postgres user:
$ ls -l /dev/watchdog*
Tip
If the ownership has not been changed for any reason, run the following command to manually change it:
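A sketch of the manual ownership change, assuming the device path shown in the previous step:

```
$ sudo chown postgres:postgres /dev/watchdog*
```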
Configure Patroni
2. Create the patroni.yml configuration file under the /etc/patroni directory. The file holds the default
configuration values for a PostgreSQL cluster and will reflect the current cluster setup.
scope: stampede1
name: node1

restapi:
  listen: 0.0.0.0:8008
  connect_address: node1:8008

etcd:
  host: node1:2379

bootstrap:
  # this section will be written into Etcd:/<namespace>/<scope>/config after initializing new cluster
  dcs:
    ttl: 30
    loop_wait: 10
    retry_timeout: 10
    maximum_lag_on_failover: 1048576
    # primary_start_timeout: 300
    # synchronous_mode: false
    postgresql:
      use_pg_rewind: true
      use_slots: true
      parameters:
        wal_level: replica
        hot_standby: "on"
        logging_collector: 'on'
        max_wal_senders: 5
        max_replication_slots: 5
        wal_log_hints: "on"
        #archive_mode: "on"
        #archive_timeout: 600
        #archive_command: "cp -f %p /home/postgres/archived/%f"
      #recovery_conf:
        #restore_command: cp /home/postgres/archived/%f %p

  # Additional script to be launched after initial cluster creation (will be passed the connection URL as parameter)
  # post_init: /usr/local/bin/setup_cluster.sh

  # Some additional users which need to be created after initializing new cluster
  users:
    admin:
      password: admin
      options:
        - createrole
        - createdb
    replicator:
      password: password
      options:
        - replication

postgresql:
  listen: 0.0.0.0:5432
  connect_address: node1:5432
  data_dir: "/var/lib/postgresql/14/main"
  bin_dir: "/usr/lib/postgresql/14/bin"
  # config_dir:
  pgpass: /tmp/pgpass0
  authentication:
    replication:
      username: replicator
      password: password
    superuser:
      username: postgres
      password: password
  parameters:
    unix_socket_directories: '/var/run/postgresql'

watchdog:
  mode: required # Allowed values: off, automatic, required
  device: /dev/watchdog
  safety_margin: 5

tags:
  nofailover: false
  noloadbalance: false
  clonefrom: false
  nosync: false
Following these, there is a bootstrap section that contains the PostgreSQL configurations and the steps to run once
the database is initialized. The pg_hba.conf entries specify all the other nodes that can connect to this node and
their authentication mechanism.
4. Create the configuration files for node2 and node3 . Replace the reference to node1 with node2 and node3 ,
respectively.
5. Enable and restart the patroni service on every node. Use the following commands:
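A sketch of these commands (the service name patroni is an assumption based on the package defaults):

```
$ sudo systemctl enable patroni
$ sudo systemctl restart patroni
```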
When Patroni starts, it initializes PostgreSQL (because the service is not currently running and the data
directory is empty) following the directives in the bootstrap section of the configuration file.
Troubleshooting Patroni
To ensure that Patroni has started properly, check the logs using the following command:
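A sketch of the log check, assuming Patroni runs as a systemd service named patroni:

```
$ sudo journalctl -u patroni.service -n 100 -f
```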
A common error is Patroni complaining about the lack of proper entries in the pg_hba.conf file. If you see such
errors, you must manually add or fix the entries in that file and then restart the service.
Changing the patroni.yml file and restarting the service will not have any effect here because the bootstrap
section specifies the configuration to apply when PostgreSQL is first started in the node. It will not repeat the
process even if the Patroni configuration file is modified and the service is restarted.
If Patroni has started properly, you should be able to locally connect to a PostgreSQL node using the
following command:
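A sketch of the local connection (run on the node itself; authentication details depend on your pg_hba.conf):

```
$ sudo -iu postgres psql
```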
psql (14.1)
Type "help" for help.
postgres=#
Configure HAProxy
The HAProxy node accepts client connection requests and routes them to the active node of the PostgreSQL
cluster. This way, a client application doesn't have to know which node in the underlying cluster is the current
primary. All it needs to do is access a single HAProxy URL and send its read/write requests there. Behind
the scenes, HAProxy routes the connection to a healthy node (as long as there is at least one healthy node
available) and ensures that client application requests are never rejected.
HAProxy is capable of routing write requests to the primary node and read requests to the secondaries in a
round-robin fashion so that no secondary instance is unnecessarily loaded. To make this happen, provide
different ports in the HAProxy configuration file. In this deployment, writes are routed to port 5000 and reads
to port 5001.
2. The HAProxy configuration file path is: /etc/haproxy/haproxy.cfg . Specify the following configuration in this file.
global
    maxconn 100

defaults
    log global
    mode tcp
    retries 2
    timeout client 30m
    timeout connect 4s
    timeout server 30m
    timeout check 5s

listen stats
    mode http
    bind *:7000
    stats enable
    stats uri /

listen primary
    bind *:5000
    option httpchk /primary
    http-check expect status 200
    default-server inter 3s fall 3 rise 2 on-marked-down shutdown-sessions
    server node1 node1:5432 maxconn 100 check port 8008
    server node2 node2:5432 maxconn 100 check port 8008
    server node3 node3:5432 maxconn 100 check port 8008

listen standbys
    balance roundrobin
    bind *:5001
    option httpchk /replica
    http-check expect status 200
    default-server inter 3s fall 3 rise 2 on-marked-down shutdown-sessions
    server node1 node1:5432 maxconn 100 check port 8008
    server node2 node2:5432 maxconn 100 check port 8008
    server node3 node3:5432 maxconn 100 check port 8008
HAProxy will use the REST APIs hosted by Patroni to check the health status of each PostgreSQL node and route
the requests appropriately.
3. Restart HAProxy:
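A sketch of the restart command, assuming the standard service name:

```
$ sudo systemctl restart haproxy
```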
Testing
See the Testing PostgreSQL cluster section for guidelines on how to test your PostgreSQL cluster for
replication, failover, and switchover.
5.1.3 Deploying PostgreSQL for high availability with Patroni on RHEL or CentOS
This guide provides instructions on how to set up a highly available PostgreSQL cluster with Patroni on Red
Hat Enterprise Linux or CentOS.
Preconditions
For this setup, we will use the nodes running on CentOS 8 as the base operating system and having the
following IP addresses:
Note
In a production (or even non-production) setup, the PostgreSQL and ETCD nodes will be within a private subnet
without any public connectivity to the Internet, and the HAProxy will be in a different subnet that allows client traffic
coming only from a selected IP range. To keep things simple, we have implemented this architecture in a
DigitalOcean VPS environment, and each node can access the other by its internal, private IP.
To make the nodes aware of each other and allow seamless communication, resolve their hostnames
to their IP addresses. Modify the /etc/hosts file of each PostgreSQL node to include the hostnames
and IP addresses of the remaining nodes. The following is the /etc/hosts file for node1 :
The /etc/hosts file of the HAProxy-demo node includes the hostnames and IP addresses of all PostgreSQL nodes:
The distributed configuration store helps establish a consensus among nodes during a failover and
manages the configuration for the three PostgreSQL instances. Although Patroni can work with other
distributed consensus stores (e.g., ZooKeeper, Consul), the most commonly used one is etcd .
1. Install etcd on the ETCD node. For CentOS 8, the etcd packages are available from the Percona repository:
2. Install percona-release .
3. Enable the repository:
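A sketch of these steps (the repository name ppg15 and the package name are assumptions; check the Percona installation documentation for the exact names matching your PostgreSQL major version):

```
$ sudo yum install https://repo.percona.com/yum/percona-release-latest.noarch.rpm
$ sudo percona-release setup ppg15
$ sudo yum install etcd
```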
[Member]
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="http://10.104.0.5:2380,http://localhost:2380"
ETCD_LISTEN_CLIENT_URLS="http://10.104.0.5:2379,http://localhost:2379"
ETCD_NAME="default"
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://10.104.0.5:2380"
ETCD_ADVERTISE_CLIENT_URLS="http://10.104.0.5:2379"
ETCD_INITIAL_CLUSTER="default=http://10.104.0.5:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
Install Percona Distribution for PostgreSQL on node1 , node2 and node3 from Percona repository:
1. Install percona-release .
2. Enable the repository:
Important
Don't initialize the cluster or start the postgresql service. The cluster initialization and setup are handled by
Patroni during the bootstrapping stage.
Configure Patroni
2. Install the Python module that enables Patroni to communicate with ETCD.
• Create the directory to store the configuration file and make it owned by the postgres user.
• Create the data directory for Patroni. Change its ownership to the postgres user and restrict the access to it
$ su postgres
$ vim /etc/patroni/patroni.yml
scope: postgres
namespace: /pg_cluster/
name: node1

restapi:
  listen: 10.104.0.7:8008 # PostgreSQL node IP address
  connect_address: 10.104.0.7:8008 # PostgreSQL node IP address

etcd:
  host: 10.104.0.5:2379 # ETCD node IP address

bootstrap:
  # this section will be written into Etcd:/<namespace>/<scope>/config after initializing new cluster
  dcs:
    ttl: 30
    loop_wait: 10
    retry_timeout: 10
    maximum_lag_on_failover: 1048576
    postgresql:
      use_pg_rewind: true
      use_slots: true
      parameters:
        wal_level: replica
        hot_standby: "on"
        logging_collector: 'on'
        max_wal_senders: 5
        max_replication_slots: 5
        wal_log_hints: "on"

  # Some additional users which need to be created after initializing new cluster
  users:
    admin:
      password: admin
      options:
        - createrole
        - createdb

postgresql:
  listen: 10.104.0.7:5432 # PostgreSQL node IP address
  connect_address: 10.104.0.7:5432 # PostgreSQL node IP address
  data_dir: /data/patroni # The datadir you created
  bin_dir: /usr/pgsql-14/bin
  pgpass: /tmp/pgpass0
  authentication:
    replication:
      username: replicator
      password: replicator
    superuser:
      username: postgres
      password: postgres
  parameters:
    unix_socket_directories: '.'

tags:
  nofailover: false
  noloadbalance: false
  clonefrom: false
  nosync: false
6. Create the configuration files for node2 and node3 . Replace the node name and IP address of node1 with
those of node2 and node3 , respectively.
[Unit]
Description=Runners to orchestrate a high-availability PostgreSQL
After=syslog.target network.target
[Service]
Type=simple
User=postgres
Group=postgres
# only kill the patroni process, not its children, so it will gracefully stop postgres
KillMode=process
# Give a reasonable amount of time for the server to start up/shut down
TimeoutSec=30
# Do not restart the service if it crashes, we want to manually inspect database on failure
Restart=no
[Install]
WantedBy=multi-user.target
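With the unit file in place (commonly at /etc/systemd/system/patroni.service; the path is an assumption), reload systemd and start the service; a sketch:

```
$ sudo systemctl daemon-reload
$ sudo systemctl enable --now patroni
```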
Troubleshooting Patroni
To ensure that Patroni has started properly, check the logs using the following command:
A common error is Patroni complaining about the lack of proper entries in the pg_hba.conf file. If you see such
errors, you must manually add or fix the entries in that file and then restart the service.
Changing the patroni.yml file and restarting the service will not have any effect here because the bootstrap section
specifies the configuration to apply when PostgreSQL is first started in the node. It will not repeat the process even if
the Patroni configuration file is modified and the service is restarted.
If Patroni has started properly, you should be able to locally connect to a PostgreSQL node using the following
command:
psql (14.1)
Type "help" for help.
postgres=#
10. When all nodes are up and running, you can check the cluster status using the following command:
Output:
Configure HAProxy
The HAProxy node accepts client connection requests and routes them to the active node of the PostgreSQL
cluster. This way, a client application doesn't have to know which node in the underlying cluster is the current
primary. All it needs to do is access a single HAProxy URL and send its read/write requests there. Behind
the scenes, HAProxy routes the connection to a healthy node (as long as there is at least one healthy node
available) and ensures that client application requests are never rejected.
HAProxy is capable of routing write requests to the primary node and read requests to the secondaries in a
round-robin fashion so that no secondary instance is unnecessarily loaded. To make this happen, provide
different ports in the HAProxy configuration file. In this deployment, writes are routed to port 5000 and reads
to port 5001.
2. The HAProxy configuration file path is: /etc/haproxy/haproxy.cfg . Specify the following configuration in this file.
global
    maxconn 100

defaults
    log global
    mode tcp
    retries 2
    timeout client 30m
    timeout connect 4s
    timeout server 30m
    timeout check 5s

listen stats
    mode http
    bind *:7000
    stats enable
    stats uri /

listen primary
    bind *:5000
    option httpchk /primary
    http-check expect status 200
    default-server inter 3s fall 3 rise 2 on-marked-down shutdown-sessions
    server node1 node1:5432 maxconn 100 check port 8008
    server node2 node2:5432 maxconn 100 check port 8008
    server node3 node3:5432 maxconn 100 check port 8008

listen standbys
    balance roundrobin
    bind *:5001
    option httpchk /replica
    http-check expect status 200
    default-server inter 3s fall 3 rise 2 on-marked-down shutdown-sessions
    server node1 node1:5432 maxconn 100 check port 8008
    server node2 node2:5432 maxconn 100 check port 8008
    server node3 node3:5432 maxconn 100 check port 8008
HAProxy will use the REST APIs hosted by Patroni to check the health status of each PostgreSQL node and route
the requests appropriately.
3. Enable a SELinux boolean to allow HAProxy to bind to non-standard ports:
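The boolean in question is likely haproxy_connect_any; a sketch:

```
$ sudo setsebool -P haproxy_connect_any=1
```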
4. Restart HAProxy:
This document covers the following scenarios to test the PostgreSQL cluster:
• replication,
• connectivity,
• failover, and
• manual switchover.
TESTING REPLICATION
1. Connect to the cluster and establish the psql session from a client machine that can connect to the HAProxy
node. Use the HAProxy-demo node's public IP address:
2. Run the following commands to create a table and insert a few rows:
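A sketch of matching SQL (the table name person is an assumption; the column names and rows are taken from the sample output shown under the next step):

```sql
CREATE TABLE person (name TEXT, age INTEGER);
INSERT INTO person VALUES ('john', 30), ('dawson', 35);
```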
3. To ensure that the replication is working, we can log in to each PostgreSQL node and run a simple SQL
statement against the locally running instance:
name | age
--------+-----
john | 30
dawson | 35
(2 rows)
TESTING FAILOVER
In a proper setup, client applications won't have issues connecting to the cluster even if one or two of
the nodes go down. We will test the cluster for failover in the following scenarios:
Scenario 1. Intentionally stop the PostgreSQL on the primary node and verify access to PostgreSQL.
1. Run the following command on any node to check the current cluster status:
Output:
2. node1 is the current leader. Stop Patroni in node1 to see how it changes the cluster:
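A sketch of the stop command, run on node1:

```
$ sudo systemctl stop patroni
```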
3. Once the service stops in node1 , check the logs in node2 and node3 using the following command:
```
Sep 23 14:18:13 node03 patroni[10042]: 2021-09-23 14:18:13,905 INFO: no action. I am a
secondary (node3) and following a leader (node1)
Sep 23 14:18:20 node03 patroni[10042]: 2021-09-23 14:18:20,011 INFO: Got response from node2
http://node2:8008/patroni: {"state": "running", "postmaster_start_time": "2021-09-23
12:50:29.460027+00:00", "role": "replica", "server_version": 130003, "cluster_unlocked": true,
"xlog": {"received_location": 67219152, "replayed_location": 67219152, "replayed_timestamp":
"2021-09-23 13:19:50.329387+00:00", "paused": false}, "timeline": 1,
"database_system_identifier": "7011110722654005156", "patroni": {"version": "2.1.0", "scope":
"stampede1"}}
Sep 23 14:18:20 node03 patroni[10042]: 2021-09-23 14:18:20,031 WARNING: Request failed to
node1: GET http://node1:8008/patroni (HTTPConnectionPool(host='node1', port=8008): Max retries
exceeded with url: /patroni (Caused by ProtocolError('Connection aborted.',
ConnectionResetError(104, 'Connection reset by peer'))))
Sep 23 14:18:20 node03 patroni[10042]: 2021-09-23 14:18:20,038 INFO: Software Watchdog
activated with 25 second timeout, timing slack 15 seconds
Sep 23 14:18:20 node03 patroni[10042]: 2021-09-23 14:18:20,043 INFO: promoted self to leader
by acquiring session lock
Sep 23 14:18:20 node03 patroni[13641]: server promoting
Sep 23 14:18:20 node03 patroni[10042]: 2021-09-23 14:18:20,049 INFO: cleared rewind state
after becoming the leader
Sep 23 14:18:21 node03 patroni[10042]: 2021-09-23 14:18:21,101 INFO: no action. I am (node3)
the leader with the lock
Sep 23 14:18:21 node03 patroni[10042]: 2021-09-23 14:18:21,117 INFO: no action. I am (node3)
the leader with the lock
Sep 23 14:18:31 node03 patroni[10042]: 2021-09-23 14:18:31,114 INFO: no action. I am (node3)
the leader with the lock
...
```
The logs in node3 show that the requests to node1 are failing, the watchdog is coming into action, and node3
is promoting itself as the leader.
4. Verify that you can still access the cluster through the HAProxy instance and read data:
name | age
--------+-----
john | 30
dawson | 35
(2 rows)
Output:
As we see, node3 remains the leader and the rest are replicas.
To emulate a power outage, let's kill the service in node3 and see what happens in node1 and node2 .
postgres 10042 0.1 2.1 647132 43948 ? Ssl 12:50 0:09 /usr/bin/python3 /usr/bin/patroni /etc/patroni/patroni.yml
```
Sep 23 14:40:41 node02 patroni[10577]: 2021-09-23 14:40:41,656 INFO: no action. I am a
secondary (node2) and following a leader (node3)
…
Sep 23 14:41:01 node02 patroni[10577]: 2021-09-23 14:41:01,373 INFO: Got response from node1
http://node1:8008/patroni: {"state": "running", "postmaster_start_time": "2021-09-23
14:25:30.076762+00:00", "role": "replica", "server_version": 130003, "cluster_unlocked": true,
"xlog": {"received_location": 67221352, "replayed_location": 67221352, "replayed_timestamp":
null, "paused": false}, "timeline": 2, "database_system_identifier": "7011110722654005156",
"patroni": {"version": "2.1.0", "scope": "stampede1"}}
Sep 23 14:41:03 node02 patroni[10577]: 2021-09-23 14:41:03,364 WARNING: Request failed to
node3: GET http://node3:8008/patroni (HTTPConnectionPool(host='node3', port=8008): Max retries
exceeded with url: /patroni (Caused by ConnectTimeoutError(<urllib3.connection.HTTPConnection
object at 0x7f57e06dffa0>, 'Connection to node3 timed out. (connect timeout=2)')))
Sep 23 14:41:03 node02 patroni[10577]: 2021-09-23 14:41:03,373 INFO: Software Watchdog
activated with 25 second timeout, timing slack 15 seconds
Sep 23 14:41:03 node02 patroni[10577]: 2021-09-23 14:41:03,385 INFO: promoted self to leader
by acquiring session lock
Sep 23 14:41:03 node02 patroni[15478]: server promoting
Sep 23 14:41:03 node02 patroni[10577]: 2021-09-23 14:41:03,397 INFO: cleared rewind state
after becoming the leader
Sep 23 14:41:04 node02 patroni[10577]: 2021-09-23 14:41:04,450 INFO: no action. I am (node2)
the leader with the lock
Sep 23 14:41:04 node02 patroni[10577]: 2021-09-23 14:41:04,475 INFO: no action. I am (node2)
the leader with the lock
…
…
```
node2 realizes that the leader is dead and promotes itself as the new leader.
3. Try accessing the cluster using the HAProxy endpoint at any point in time between these operations. The
cluster is still accepting connections.
MANUAL SWITCHOVER
Typically, a manual switchover is needed for planned downtime to perform maintenance activity on the
leader node. Patroni provides the switchover command to manually switch over from the leader node.
Patroni asks the name of the current primary node and then the node that should take over as the
switched-over primary. You can also specify the time at which the switchover should happen. To trigger the
process immediately, specify the value now:
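A sketch of the invocation (patronictl prompts interactively for the current leader, the candidate node, and the switchover time; enter now at the time prompt to trigger it immediately):

```
$ patronictl -c /etc/patroni/patroni.yml switchover
```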
Restart the Patroni service in node2 (after the "planned maintenance"). The node rejoins the cluster as a
secondary.
Summary
• Overview
• Architecture
• Deployment
• Testing
Overview
A Disaster Recovery (DR) solution ensures that a system can be quickly restored to a normal operational
state if something unexpected happens. When operating a database, you would back up the data as
frequently as possible and have a mechanism to restore that data when needed. Disaster Recovery is often
mistaken for high availability (HA), but they are two different concepts altogether:
• High availability ensures guaranteed service levels at all times. This solution involves configuring one or
more standby systems to an active database, and the ability to switch seamlessly to that standby when
the primary database becomes unavailable, for example, during a power outage or a server crash. To
learn more about high-availability solutions with Percona Distribution for PostgreSQL, refer to High
Availability in PostgreSQL with Patroni.
• Disaster Recovery protects the database instance against accidental or malicious data loss or data
corruption. Disaster recovery can be achieved by using either the options provided by PostgreSQL, or
external extensions.
PostgreSQL offers multiple options for setting up database disaster recovery.
This is the basic backup approach. These tools can generate the backup of one or more
PostgreSQL databases (either just the structure, or both the structure and data), then restore
them through the [pg_restore](https://www.postgresql.org/docs/15/app-pgrestore.html) command.
| Advantages | Disadvantages |
| ------------ | --------------- |
| Easy to use | 1. Backup of only one database at a time.<br>2. No incremental backups.<br>3.
No point-in-time recovery since the backup is a snapshot in time.<br>4. Performance
degradation when the database size is large.|
This method involves backing up the PostgreSQL data directory to a different location, and
restoring it when needed.
| Advantages | Disadvantages |
| ------------ | --------------- |
| Consistent snapshot of the data directory or the whole data disk volume | 1. Requires
stopping PostgreSQL in order to copy the files. This is not practical for most production
setups.<br> 2. No backup of individual databases or tables.|
- **PostgreSQL [pg_basebackup](https://www.postgresql.org/docs/15/app-pgbasebackup.html)**
This backup tool is provided by PostgreSQL. It is used to back up data when the database
instance is running. `pg_basebackup` makes a binary copy of the database cluster files, while
making sure the system is put in and out of backup mode automatically.
| Advantages | Disadvantages |
| ------------ | --------------- |
| 1. Supports backups when the database is running.<br>2. Supports point-in-time recovery | 1.
No incremental backups.<br>2. No backup of individual databases or tables.|
To achieve a production grade PostgreSQL disaster recovery solution, you need something that can take full
or incremental database backups from a running instance, and restore from those backups at any point in
time. Percona Distribution for PostgreSQL is supplied with pgBackRest: a reliable, open-source backup and
recovery solution for PostgreSQL.
This document focuses on the Disaster recovery solution in Percona Distribution for PostgreSQL. The
Deploying backup and disaster recovery solution in Percona Distribution for PostgreSQL tutorial provides
guidelines on how to set up and test this solution.
PGBACKREST
pgBackRest is an easy-to-use, open-source solution that can reliably back up even the largest of
PostgreSQL databases. pgBackRest supports the following backup types:
• full backup - a complete copy of the entire database cluster.
• differential backup - includes all data that has changed since the last full backup. While this means the
backup time is slightly higher, it enables a faster restore.
• incremental backup - only backs up the files that have changed since the last full or differential backup,
resulting in a quick backup time. To restore to a point in time, however, you will need to restore each
incremental backup in the order they were taken.
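A sketch of taking backups of each type with the pgBackRest command-line interface (the stanza name prod_backup is an assumption):

```
$ pgbackrest --stanza=prod_backup --type=full backup
$ pgbackrest --stanza=prod_backup --type=diff backup
$ pgbackrest --stanza=prod_backup --type=incr backup
```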
When it comes to restoring, pgBackRest can do a full or a delta restore. A full restore needs an empty
PostgreSQL target directory. A delta restore is intelligent enough to recognize already-existing files in the
PostgreSQL data directory, and update only the ones the backup contains.
pgBackRest supports remote repository hosting and can even use cloud-based object storage such as Amazon
S3, Google Cloud Storage, and Azure Blob Storage for saving backup files. It supports parallel backup
through multi-core processing and compression. By default, backup integrity is verified through checksums,
and saved files can be encrypted for enhanced security.
pgBackRest can restore a database to a specific point in time in the past. This is useful when a database
is still accessible but contains corrupted data. Using point-in-time recovery, a database
administrator can restore the database to the last known good state.
Finally, pgBackRest also supports restoring PostgreSQL databases to a different PostgreSQL instance or a
separate data directory.
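A sketch of a delta, point-in-time restore combining these capabilities (the stanza name and target timestamp are placeholders):

```
$ pgbackrest --stanza=prod_backup --delta \
    --type=time --target="2023-06-28 12:00:00" restore
```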
Setup overview
This section describes the architecture of the backup and disaster recovery solution. For the configuration
steps, refer to the Deploying backup and disaster recovery solution in Percona Distribution for PostgreSQL.
SYSTEM ARCHITECTURE
As the configuration example, we will use a three server architecture where pgBackRest resides on a
dedicated remote host. The servers communicate with each other via passwordless SSH.
Important
Passwordless SSH may not be an ideal solution for your environment. In this case, consider using other methods,
for example, TLS with client certificates.
Components:
• pg-primary hosts the primary PostgreSQL server. Note that "primary" here means the main database
instance and does not refer to the primary node of a PostgreSQL replication cluster or a HA setup.
• pg-repo is the remote backup repository and hosts pgBackRest . It's important to host the backup
repository on a physically separate instance, so that it remains accessible when the target goes down.
• pg-secondary is the secondary PostgreSQL node. Don't confuse it with a hot standby. "Secondary" in this
context means a PostgreSQL instance that's idle. We will restore the database backup to this instance
when the primary PostgreSQL instance goes down.
Note
For simplicity, we use a single-node PostgreSQL instance as the primary database server. In a production scenario,
you will use some form of high-availability solution to protect the primary instance. When you are using a high-
availability setup, we recommend configuring pgBackRest to back up the hot standby server so the primary node
is not unnecessarily loaded.
DEPLOYMENT
Refer to the Deploying backup and disaster recovery solution in Percona Distribution for PostgreSQL tutorial.
Contact Us
For paid support and managed or consulting services , contact Percona Sales.
5.2.2 Deploying backup and disaster recovery solution in Percona Distribution for PostgreSQL
This document provides instructions on how to set up and test the backup and disaster recovery solution in
Percona Distribution for PostgreSQL with pgBackRest . For a technical overview and architecture description of
this solution, refer to Backup and disaster recovery in Percona Distribution for PostgreSQL.
Deployment
As an example configuration, we will use nodes with the following IP addresses:
pg-primary 10.104.0.3
pg-repo 10.104.0.5
pg-secondary 10.104.0.4
SET UP HOSTNAMES
In our architecture, the pgBackRest repository is located on a remote host. To allow communication among
the nodes, passwordless SSH is required. To achieve this, properly setting up hostnames in the /etc/hosts
files is very important.
1. Define the hostname for every server in the /etc/hostname file. The following shows what the
/etc/hostname file looks like on each of the three nodes:
On the pg-primary node:
```
$ cat /etc/hostname
pg-primary
```
On the pg-repo node:
```
$ cat /etc/hostname
pg-repo
```
On the pg-secondary node:
```
$ cat /etc/hostname
pg-secondary
```
2. For the nodes to communicate seamlessly across the network, resolve their hostnames to their IP addresses in
the /etc/hosts file. (Alternatively, you can make appropriate entries in your internal DNS servers)
The /etc/hosts file for the pg-primary node looks like this:
```
127.0.1.1 pg-primary pg-primary
127.0.0.1 localhost
10.104.0.5 pg-repo
```
The /etc/hosts file for the pg-repo node looks like this:
```
127.0.1.1 pg-repo pg-repo
127.0.0.1 localhost
10.104.0.3 pg-primary
10.104.0.4 pg-secondary
```
The /etc/hosts file for the pg-secondary node looks like this:
```
127.0.1.1 pg-secondary pg-secondary
127.0.0.1 localhost
10.104.0.3 pg-primary
10.104.0.5 pg-repo
```
SET UP PASSWORDLESS SSH
Before setting up passwordless SSH, ensure that the postgres user in all three instances has a password.
1. To set or change the password, run the following command as the root user:
```
$ passwd postgres
```
2. After setting the password, edit the /etc/ssh/sshd_config file and ensure the PasswordAuthentication
variable is set to yes .
```
PasswordAuthentication yes
```
3. In the pg-repo node, restart the sshd service. Without the restart, the SSH server will not allow you to connect
to it using a password while adding the keys.
4. In the pg-primary node, generate an SSH key pair and add the public key to the pg-repo node.
```
$ ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_rsa
Your public key has been saved in /root/.ssh/id_rsa.pub
The key fingerprint is:
...
```
5. To verify everything has worked as expected, run the following command from the pg-primary node:
```
$ ssh postgres@pg-repo
```
6. Repeat the SSH connection from pg-repo to pg-primary to ensure that passwordless SSH is working.
7. Set up bidirectional passwordless SSH between pg-repo and pg-secondary using the same method. This will
allow pg-repo to restore the backups to pg-secondary .
Install Percona Distribution for PostgreSQL in the primary and the secondary nodes from Percona repository.
1. Install percona-release .
2. Enable the repository:
At this step, configure the PostgreSQL instance on the pg-primary node for continuous archiving of the WAL
files.
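The exact settings are elided above; the following is a minimal sketch of the postgresql.conf parameters typically used for continuous WAL archiving with pgBackRest . The stanza name prod_backup matches the configuration shown later in this document; verify the values against your environment.

```
# postgresql.conf on pg-primary -- minimal sketch for continuous WAL archiving
archive_mode = on
archive_command = 'pgbackrest --stanza=prod_backup archive-push %p'
# wal_level must be at least replica for archiving to work
wal_level = replica
```

Changing archive_mode requires a PostgreSQL restart, not just a configuration reload.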
INSTALL PGBACKREST
Install pgBackRest in all three instances from Percona repository. Use the following command:
On Debian / Ubuntu
On RHEL / derivatives
Run the following commands on all three nodes to create the required pgBackRest configuration file and to
configure its location and permissions:
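The commands themselves are elided above; a typical sketch (run as root, using the pgBackRest default paths) looks like this:

```
# Create the configuration directory and an empty configuration file
$ mkdir -p /etc/pgbackrest
$ touch /etc/pgbackrest/pgbackrest.conf
# Restrict access to the postgres user, which runs pgBackRest
$ chmod 640 /etc/pgbackrest/pgbackrest.conf
$ chown postgres:postgres /etc/pgbackrest/pgbackrest.conf
```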
Configure pgBackRest on the pg-primary node by setting up a stanza. A stanza is a set of configuration
parameters that tells pgBackRest where to back up its files. Edit the /etc/pgbackrest/pgbackrest.conf file in
the pg-primary node to include the following lines:
```
[global]
repo1-host=pg-repo
repo1-host-user=postgres
process-max=2
log-level-console=info
log-level-file=debug

[prod_backup]
pg1-path=/var/lib/postgresql/14/main
```
You can see that the pg1-path attribute for the prod_backup stanza has been set to the PostgreSQL data directory.
Add a stanza for the pgBackRest in the pg-repo node. Edit the /etc/pgbackrest/pgbackrest.conf configuration
file to include the following lines:
```
[global]
repo1-path=/home/pgbackrest/pg_backup
repo1-retention-full=2
process-max=2
log-level-console=info
log-level-file=debug
start-fast=y
stop-auto=y

[prod_backup]
pg1-path=/var/lib/postgresql/14/main
pg1-host=pg-primary
pg1-host-user=postgres
pg1-port=5432
```
After the configuration files are set up, it’s now time to initialize the pgBackRest stanza. Run the following
command in the remote backup repository node ( pg-repo ).
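The command itself is not shown above; with the stanza name used in this document, the standard pgBackRest invocation is:

```
$ sudo -u postgres pgbackrest --stanza=prod_backup stanza-create
```

You can then verify the configuration with pgBackRest's built-in check command: `sudo -u postgres pgbackrest --stanza=prod_backup check` .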
Once the stanza is created successfully, you can try out the different use cases for disaster recovery.
This section covers a few use cases where pgBackRest can back up and restore databases either in the
same instance or a different node.
1. To start our testing, let’s create a table in the postgres database in the pg-primary node and add some data.
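The SQL is elided above; any small table will do. For example (the table name and values here are purely illustrative):

```
CREATE TABLE customer (id int, name varchar(100));
INSERT INTO customer VALUES (1, 'john'), (2, 'jane'), (3, 'joe');
SELECT * FROM customer;
```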
2. Take a full backup of the database instance. Run the following commands from the pg-repo node:
If you want an incremental backup, you can omit the type option. By default, pgBackRest always takes an
incremental backup, except for the first backup of a cluster, which is always a full backup.
If you need a differential backup, use diff for the type field:
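The backup commands are elided above; using standard pgBackRest syntax with this document's stanza name, they look like this:

```
# Full backup
$ sudo -u postgres pgbackrest --stanza=prod_backup --type=full backup
# Incremental backup (the default type)
$ sudo -u postgres pgbackrest --stanza=prod_backup backup
# Differential backup
$ sudo -u postgres pgbackrest --stanza=prod_backup --type=diff backup
```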
1. Run the following command in the pg-primary node to delete the main data directory.
$ rm -rf /var/lib/postgresql/14/main/*
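The restore step (step 2 of this list) is elided above; a typical sketch is to stop PostgreSQL, run the restore as the postgres user, and start PostgreSQL again. The service name below is Debian-style and may differ on your platform.

```
$ sudo systemctl stop postgresql
$ sudo -u postgres pgbackrest --stanza=prod_backup restore
$ sudo systemctl start postgresql
```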
3. After the command executes successfully, you can access PostgreSQL from the psql command line tool and
check if the table and data rows have been restored.
If your target PostgreSQL instance has an already existing data directory, the full restore option will fail. You
will get an error message stating there are existing data files. In this case, you can use the --delta option to
restore only the corrupted files.
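A minimal sketch of a delta restore, using pgBackRest's standard --delta flag with this document's stanza name:

```
$ sudo -u postgres pgbackrest --stanza=prod_backup --delta restore
```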
For example, let's say one of your developers mistakenly deleted a few rows from a table. You can use
pgBackRest to revert your database to a previous point in time to recover the lost rows.
1. Take a timestamp when the database is stable and error-free. Run the following command from the
psql prompt.
```
SELECT CURRENT_TIMESTAMP;
current_timestamp
-------------------------------
2021-11-07 11:55:47.952405+00
(1 row)
```
Note down the above timestamp since we will use this time in the restore command. Note that in a real-life
scenario, finding the correct point in time when the database was error-free may require extensive
investigation. It is also important to note that all changes after the selected point will be lost after the rollback.
3. To recover the data, run a command with the noted timestamp as an argument. Run the commands below to
recover the database up to that time.
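The commands for this step are elided above; pgBackRest performs point-in-time recovery with its standard --type=time and --target options. A sketch using the timestamp captured earlier (the service name may differ on your platform):

```
$ sudo systemctl stop postgresql
$ sudo -u postgres pgbackrest --stanza=prod_backup --delta \
  --type=time "--target=2021-11-07 11:55:47.952405+00" \
  --target-action=promote restore
$ sudo systemctl start postgresql
```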
4. Check the database table to see if the record has been restored.
Sometimes a PostgreSQL server may encounter hardware issues and become completely inaccessible. In
such cases, we will need to recover the database to a separate instance where pgBackRest is not initially
configured. To restore the instance to a separate host, you have to first install both PostgreSQL and
pgBackRest in this host.
In our test setup, we already have PostgreSQL and pgBackRest installed in the third node, pg-secondary .
Change the pgBackRest configuration file in the pg-secondary node as shown below.
[global]
repo1-host=pg-repo
repo1-host-user=postgres
process-max=2
log-level-console=info
log-level-file=debug
[prod_backup]
pg1-path=/var/lib/postgresql/14/main
There should be bidirectional passwordless SSH communication between pg-repo and pg-secondary . Refer
to the Set up passwordless SSH section for the steps, if you haven’t configured it.
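The restore invocation itself is elided here; judging from the log output that follows (note the --delta and --stanza=prod_backup options it records), it was a command along these lines, run on pg-secondary with PostgreSQL stopped:

```
$ sudo -u postgres pgbackrest --stanza=prod_backup --delta restore
```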
```
2021-11-07 13:34:08.897 P00 INFO: restore command begin 2.36: --delta --exec-id=109728-d81c7b0b --log-level-console=info --log-level-file=debug --pg1-path=/var/lib/postgresql/14/main --process-max=2 --repo1-host=pg-repo --repo1-host-user=postgres --stanza=prod_backup
2021-11-07 13:34:09.784 P00 INFO: repo1: restore backup set 20211107-111534F_20211107-131807I, recovery will start at 2021-11-07 13:18:07
2021-11-07 13:34:09.786 P00 INFO: remove invalid files/links/paths from '/var/lib/postgresql/14/main'
2021-11-07 13:34:11.803 P00 INFO: write updated /var/lib/postgresql/14/main/postgresql.auto.conf
2021-11-07 13:34:11.819 P00 INFO: restore global/pg_control (performed last to ensure aborted restores cannot be started)
2021-11-07 13:34:11.819 P00 INFO: restore size = 23.2MB, file total = 937
2021-11-07 13:34:11.820 P00 INFO: restore command end: completed successfully (2924ms)
```
Organizations dealing with spatial data need to store it somewhere and manipulate it. PostGIS is the open-
source extension for PostgreSQL that allows doing just that. It adds support for storing the spatial data types
such as:
• Geographical data such as points, lines, polygons, and GPS coordinates that can be mapped on a sphere.
• Geometrical data. These are also points, lines, and polygons, but they apply to a 2D surface.
To operate with spatial data inside SQL queries, PostGIS supports spatial functions like distance, area, union,
and intersection. It uses spatial indexes like R-Tree and Quadtree for efficient processing of database
operations. Read more about supported spatial functions and indexes in the PostGIS documentation.
By deploying PostGIS with Percona Distribution for PostgreSQL, you receive the open-source spatial
database that you can use in various areas without vendor lock-in.
• To store and manage spatial data, create and store spatial shapes, calculate areas and distances
• To integrate spatial and non-spatial data such as demographic or economic data in a database
Despite its power and flexibility, PostGIS may not suit your needs if:
• You need to store only a couple of map locations. Consider using the built-in geometric functions and
operations of PostgreSQL
• You need real-time data analysis. While PostGIS can handle real-time spatial data, it may not be the best
option for real-time data analysis on large volumes of data.
Next steps:
Deployment
The following document provides guidelines on how to install PostGIS and how to run basic queries.
Preconditions
1. We assume that you have basic knowledge of spatial data, GIS (Geographical Information Systems), and
shapefiles.
2. You need to acquire spatial data. For the following examples, we'll use the same data set as is used in the PostGIS
tutorial.
Install PostGIS
This installs the set of PostGIS extensions. To check what extensions are available, run the following query from
the psql terminal:
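The query is elided above; PostgreSQL's standard pg_available_extensions catalog view lists the installable extensions, so it presumably resembles:

```
SELECT name, default_version, comment
FROM pg_available_extensions
WHERE name LIKE 'postgis%';
```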
Note
To enable the postgis_sfcgal-3 extension on Ubuntu 18.04, you need to manually install the required dependency:
c. Enable the codeready builder repository to resolve dependencies conflict. For Red Hat Enterprise Linux 8,
replace the operating system version in the following commands accordingly.
RHEL 9
CentOS 9
Oracle Linux 9
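The commands are elided above; typical commands for enabling this repository on each platform are sketched below. The repository names are platform-specific assumptions and worth verifying against your distribution's documentation.

```
# RHEL 9
$ sudo subscription-manager repos --enable codeready-builder-for-rhel-9-x86_64-rpms
# CentOS Stream 9 and most derivatives
$ sudo dnf config-manager --set-enabled crb
# Oracle Linux 9
$ sudo dnf config-manager --set-enabled ol9_codeready_builder
```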
3. Create a database and a schema to store your data. A schema is a container that logically segments objects
(tables, functions, views, and so on) for better management. Run the following commands from the psql
terminal:
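The commands are elided above; a sketch matching the nyc database used in the rest of this tutorial (the schema name here is illustrative):

```
CREATE DATABASE nyc;
\c nyc
CREATE SCHEMA gis;
```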
4. To make PostGIS functions and operations work, you need to enable the postgis extension. From the psql
terminal, switch to the database you created and run the following command:
```
\c nyc;
CREATE EXTENSION postgis;
SELECT postgis_full_version();
postgis_full_version
---------------------------------------------------------------------------------------------------------------
POSTGIS="3.3.3dev 0" [EXTENSION] PGSQL="140" GEOS="3.10.2-CAPI-1.16.0" PROJ="8.2.1" LIBXML="2.9.13" LIBJSON="0.15" LIBPROTOBUF="1.3.3" WAGYU="0.5.0 (Internal)"
```
PostGIS provides the shp2pgsql command-line utility that converts the binary data from shapefiles into a
series of SQL commands and loads them into the database.
1. From the folder where the .shp files are located, execute the following command and replace the dbname
value with the name of your database:
```
shp2pgsql \
-D \
-I \
-s 26918 \
nyc_streets.shp \
nyc_streets \
| psql -U postgres dbname=nyc
```
• -D generates the output in PostgreSQL dump format, which loads faster than the default INSERT statements.
• -I creates a spatial (GiST) index on the geometry column after the table is loaded.
• -s indicates the spatial reference identifier (SRID) of the data. The data we load is in the Projected coordinate
system for North America (NAD83 / UTM zone 18N) and has the value 26918.
• nyc_streets.shp is the source shapefile.
• nyc_streets is the name of the table to create in the database.
```
\d nyc_streets;
                                      Table "public.nyc_streets"
 Column |              Type               | Collation | Nullable |                 Default
--------+---------------------------------+-----------+----------+------------------------------------------
 gid    | integer                         |           | not null | nextval('nyc_streets_gid_seq'::regclass)
 id     | double precision                |           |          |
 name   | character varying(200)          |           |          |
 oneway | character varying(10)           |           |          |
 type   | character varying(50)           |           |          |
 geom   | geometry(MultiLineString,26918) |           |          |
Indexes:
    "nyc_streets_pkey" PRIMARY KEY, btree (gid)
    "nyc_streets_geom_idx" gist (geom)
```
2. Repeat the command to load the other shapefiles in the data set: nyc_census_blocks , nyc_neighborhoods ,
and nyc_subway_stations .
After you have installed and set up PostGIS, let's find answers to the following questions by querying the
database:
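The first question and its query are elided here; given the output that follows, it is presumably a population sum over the census blocks table loaded earlier (the column name is an assumption based on the PostGIS tutorial data set):

```
SELECT Sum(popn_total) AS population
FROM nyc_census_blocks;
```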
Output:
```
population
------------
8175032
(1 row)
```
To get the answer we will use the ST_Area function that returns the areas of polygons.
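The query itself is elided; it presumably resembles the following, with the table and the WHERE filter assumed from the nyc_neighborhoods shapefile loaded earlier (the neighborhood name is illustrative):

```
SELECT ST_Area(geom)
FROM nyc_neighborhoods
WHERE name = 'West Village';
```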
Output:
```
st_area
--------------------
3519.8365965413293
(1 row)
```
By default, the output is given in square meters. To get the value in square kilometers, divide it by 1,000,000.
```
SELECT ST_Length(geom)
FROM nyc_streets
WHERE name = 'Columbus Cir';
```
Output:
```
st_length
-------------------
308.3419936909855
(1 row)
```
Percona Distribution for PostgreSQL supports several authentication methods, including LDAP
authentication. LDAP provides a central place for authentication: the LDAP server
stores usernames and passwords and their resource permissions.
The LDAP authentication in Percona Distribution for PostgreSQL is implemented the same way as in upstream
PostgreSQL.
NOTE: Should you need the data files later, back up your data before uninstalling Percona Distribution for
PostgreSQL.
To uninstall Percona Distribution for PostgreSQL on platforms that use apt package manager such as
Debian or Ubuntu, complete the following steps.
$ rm -rf /etc/postgresql/15/main
To uninstall Percona Distribution for PostgreSQL on platforms that use yum package manager such as Red
Hat Enterprise Linux or CentOS, complete the following steps.
$ rm -rf /var/lib/pgsql/15/data
8. Trademark Policy
This Trademark Policy is to ensure that users of Percona-branded products or services know that what they
receive has really been developed, approved, tested and maintained by Percona. Trademarks help to
prevent confusion in the marketplace, by distinguishing one company’s or person’s products and services
from another’s.
Percona owns a number of marks, including but not limited to Percona, XtraDB, Percona XtraDB, XtraBackup,
Percona XtraBackup, Percona Server, and Percona Live, plus the distinctive visual icons and logos
associated with these marks. Both the unregistered and registered marks of Percona are protected.
Use of any Percona trademark in the name, URL, or other identifying characteristic of any product, service,
website, or other use is not permitted without Percona’s written permission with the following three limited
exceptions.
First, you may use the appropriate Percona mark when making a nominative fair use reference to a bona
fide Percona product.
Second, when Percona has released a product under a version of the GNU General Public License (“GPL”),
you may use the appropriate Percona mark when distributing a verbatim copy of that product in
accordance with the terms and conditions of the GPL.
Third, you may use the appropriate Percona mark to refer to a distribution of GPL-released Percona software
that has been modified with minor changes for the sole purpose of allowing the software to operate on an
operating system or hardware platform for which Percona has not yet released the software, provided that
those third party changes do not affect the behavior, functionality, features, design or performance of the
software. Users who acquire this Percona-branded software receive substantially exact implementations of
the Percona software.
Percona reserves the right to revoke this authorization at any time in its sole discretion. For example, if
Percona believes that your modification is beyond the scope of the limited license granted in this Policy or
that your use of the Percona mark is detrimental to Percona, Percona will revoke this authorization. Upon
revocation, you must immediately cease using the applicable Percona mark. If you do not immediately
cease using the Percona mark upon revocation, Percona may take action to protect its rights and interests
in the Percona mark. Percona does not grant any license to use any Percona mark for any other modified
versions of Percona software; such use will require our prior written permission.
Neither trademark law nor any of the exceptions set forth in this Trademark Policy permit you to truncate,
modify or otherwise use any Percona mark as part of your own brand. For example, if XYZ creates a modified
version of the Percona Server, XYZ may not brand that modification as “XYZ Percona Server” or “Percona XYZ
Server”, even if that modification otherwise complies with the third exception noted above.
In all cases, you must comply with applicable law, the underlying license, and this Trademark Policy, as
amended from time to time. For instance, any mention of Percona trademarks should include the full
trademarked name, with proper spelling and capitalization, along with attribution of ownership to Percona
Inc. For example, the full proper name for XtraBackup is Percona XtraBackup. However, it is acceptable to
omit the word “Percona” for brevity on the second and subsequent uses, where such omission does not
cause confusion.
In the event of doubt as to any of the conditions or exceptions outlined in this Trademark Policy, please
contact [email protected] for assistance and we will do our very best to be helpful.