Chapter 4. Using PostgreSQL
The PostgreSQL server is an open source, robust, and highly extensible database server based on the SQL language. The PostgreSQL server provides an object-relational database system that can manage extensive datasets and a high number of concurrent users. For these reasons, PostgreSQL servers can be used in clusters to manage large amounts of data.
The PostgreSQL server includes features for ensuring data integrity and for building fault-tolerant environments and applications. With the PostgreSQL server, you can extend a database with your own data types, custom functions, or code from different programming languages without the need to recompile the database.
Learn how to install and configure PostgreSQL on a RHEL system, how to back up PostgreSQL data, and how to migrate from an earlier PostgreSQL version.
4.1. Installing PostgreSQL
RHEL 9 provides PostgreSQL 13 as the initial version of this Application Stream, which you can install easily as an RPM package.
Additional PostgreSQL versions are provided as modules with a shorter life cycle in minor releases of RHEL 9:
- RHEL 9.2 introduced PostgreSQL 15 as the postgresql:15 module stream.
- RHEL 9.4 introduced PostgreSQL 16 as the postgresql:16 module stream.
To install PostgreSQL, use the following procedure.
By design, it is impossible to install more than one version (stream) of the same module in parallel. Therefore, you must choose only one of the available streams from the postgresql module. You can use different versions of the PostgreSQL database server in containers; see Running multiple PostgreSQL versions in containers.
Procedure
Install the PostgreSQL server packages:
For PostgreSQL 13 from the RPM package:
# dnf install postgresql-server
For PostgreSQL 15 or PostgreSQL 16, by selecting stream (version) 15 or 16 from the postgresql module and specifying the server profile, for example:
# dnf module install postgresql:16/server
The postgres superuser is created automatically.
Initialize the database cluster:
# postgresql-setup --initdb
Red Hat recommends storing the data in the default /var/lib/pgsql/data directory.
Start the postgresql service:
# systemctl start postgresql.service
Enable the postgresql service to start at boot:
# systemctl enable postgresql.service
If you want to upgrade from an earlier postgresql stream within RHEL 9, follow both procedures described in Switching to a later stream and in Migrating to a RHEL 9 version of PostgreSQL.
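Optionally, you can verify that the server accepts local connections and check the installed version. This is a minimal sketch that assumes the default peer authentication for local connections created by postgresql-setup --initdb:
# su - postgres -c 'psql -c "SELECT version();"'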
4.2. Running multiple PostgreSQL versions in containers
To run different versions of PostgreSQL on the same host, run them in containers because you cannot install multiple versions (streams) of the same module in parallel.
This procedure includes PostgreSQL 13, PostgreSQL 15, and PostgreSQL 16 as examples, but you can use any PostgreSQL container version available in the Red Hat Ecosystem Catalog.
Prerequisites
- The container-tools meta-package is installed.
Procedure
Use your Red Hat Customer Portal account to authenticate to the registry.redhat.io registry:
# podman login registry.redhat.io
Skip this step if you are already logged in to the container registry.
Run PostgreSQL 13 in a container:
$ podman run -d --name <container_name> -e POSTGRESQL_USER=<user_name> -e POSTGRESQL_PASSWORD=<password> -e POSTGRESQL_DATABASE=<database_name> -p <host_port_1>:5432 rhel9/postgresql-13
For more information about the usage of this container image, see the Red Hat Ecosystem Catalog.
Run PostgreSQL 15 in a container:
$ podman run -d --name <container_name> -e POSTGRESQL_USER=<user_name> -e POSTGRESQL_PASSWORD=<password> -e POSTGRESQL_DATABASE=<database_name> -p <host_port_2>:5432 rhel9/postgresql-15
For more information about the usage of this container image, see the Red Hat Ecosystem Catalog.
Run PostgreSQL 16 in a container:
$ podman run -d --name <container_name> -e POSTGRESQL_USER=<user_name> -e POSTGRESQL_PASSWORD=<password> -e POSTGRESQL_DATABASE=<database_name> -p <host_port_3>:5432 rhel9/postgresql-16
For more information about the usage of this container image, see the Red Hat Ecosystem Catalog.
Note: The container names and host ports of the database servers must differ.
To ensure that clients can access the database server on the network, open the host ports in the firewall:
# firewall-cmd --permanent --add-port={<host_port_1>/tcp,<host_port_2>/tcp,<host_port_3>/tcp,...}
# firewall-cmd --reload
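Optionally, you can confirm that the ports were added to the firewall configuration. This minimal check only lists the currently open ports:
# firewall-cmd --list-ports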
Verification
Display information about running containers:
$ podman ps
Connect to a database server as the database user that you specified when starting the container:
$ psql -U <user_name> -h localhost -p <host_port> <database_name>
4.3. Creating PostgreSQL users
PostgreSQL users are of the following types:
- The postgres UNIX system user - should be used only to run the PostgreSQL server and client applications, such as pg_dump. Do not use the postgres system user for any interactive work on PostgreSQL administration, such as database creation and user management.
- A database superuser - the default postgres PostgreSQL superuser is not related to the postgres system user. You can limit access of the postgres superuser in the pg_hba.conf file, otherwise no other permission limitations exist. You can also create other database superusers.
- A role with specific database access permissions:
  - A database user - has permission to log in by default
  - A group of users - enables managing permissions for the group as a whole
Roles can own database objects (for example, tables and functions) and can assign object privileges to other roles using SQL commands.
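For example, the following sketch grants read and write privileges on a table to another role; the mytable table and the mydbuser role are only illustrative names and are assumed to exist already:
postgres=# GRANT SELECT, INSERT, UPDATE ON mytable TO mydbuser;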
Standard database management privileges include SELECT, INSERT, UPDATE, DELETE, TRUNCATE, REFERENCES, TRIGGER, CREATE, CONNECT, TEMPORARY, EXECUTE, and USAGE.
Role attributes are special privileges, such as LOGIN, SUPERUSER, CREATEDB, and CREATEROLE.
Red Hat recommends performing most tasks as a role that is not a superuser. A common practice is to create a role that has the CREATEDB and CREATEROLE privileges and use this role for all routine management of databases and roles.
Prerequisites
- The PostgreSQL server is installed.
- The database cluster is initialized.
Procedure
To create a user, set a password for the user, and assign the user the CREATEROLE and CREATEDB permissions:
postgres=# CREATE USER mydbuser WITH PASSWORD 'mypasswd' CREATEROLE CREATEDB;
Replace mydbuser with the username and mypasswd with the user’s password.
Additional resources
Example 4.1. Initializing, creating, and connecting to a PostgreSQL database
This example demonstrates how to initialize a PostgreSQL database, create a database user with routine database management privileges, and how to create a database that is accessible from any system account through the database user with management privileges.
Install the PostgreSQL server:
# dnf install postgresql-server
Initialize the database cluster:
# postgresql-setup --initdb
 * Initializing database in '/var/lib/pgsql/data'
 * Initialized, logs are in /var/lib/pgsql/initdb_postgresql.log
Set the password hashing algorithm to scram-sha-256.
In the /var/lib/pgsql/data/postgresql.conf file, change the following line:
#password_encryption = md5        # md5 or scram-sha-256
to:
password_encryption = scram-sha-256
In the /var/lib/pgsql/data/pg_hba.conf file, change the following line for the IPv4 local connections:
host    all    all    127.0.0.1/32    ident
to:
host all all 127.0.0.1/32 scram-sha-256
Start the postgresql service:
# systemctl start postgresql.service
Log in as the system user named postgres:
# su - postgres
Start the PostgreSQL interactive terminal:
$ psql
psql (13.7)
Type "help" for help.

postgres=#
Optional: Obtain information about the current database connection:
postgres=# \conninfo
You are connected to database "postgres" as user "postgres" via socket in "/var/run/postgresql" at port "5432".
Create a user named mydbuser, set a password for mydbuser, and assign mydbuser the CREATEROLE and CREATEDB permissions:
postgres=# CREATE USER mydbuser WITH PASSWORD 'mypasswd' CREATEROLE CREATEDB;
CREATE ROLE
The mydbuser user can now perform routine database management operations: create databases and manage user indexes.
Log out of the interactive terminal by using the \q meta command:
postgres=# \q
Log out of the postgres user session:
$ logout
Log in to the PostgreSQL terminal as mydbuser, specify the hostname, and connect to the default postgres database, which was created during initialization:
# psql -U mydbuser -h 127.0.0.1 -d postgres
Password for user mydbuser: Type the password.
psql (13.7)
Type "help" for help.

postgres=>
Create a database named mydatabase:
postgres=> CREATE DATABASE mydatabase;
CREATE DATABASE
postgres=>
Log out of the session:
postgres=> \q
Connect to mydatabase as mydbuser:
# psql -U mydbuser -h 127.0.0.1 -d mydatabase
Password for user mydbuser:
psql (13.7)
Type "help" for help.

mydatabase=>
Optional: Obtain information about the current database connection:
mydatabase=> \conninfo
You are connected to database "mydatabase" as user "mydbuser" on host "127.0.0.1" at port "5432".
4.4. Configuring PostgreSQL
In a PostgreSQL database, all data and configuration files are stored in a single directory called a database cluster. Red Hat recommends storing all data, including configuration files, in the default /var/lib/pgsql/data/ directory.
PostgreSQL configuration consists of the following files:
- postgresql.conf - is used for setting the database cluster parameters.
- postgresql.auto.conf - holds basic PostgreSQL settings similarly to postgresql.conf. However, this file is under the server control. It is edited by ALTER SYSTEM queries, and cannot be edited manually; see the sketch after this list.
- pg_ident.conf - is used for mapping user identities from external authentication mechanisms into the PostgreSQL user identities.
- pg_hba.conf - is used for configuring client authentication for PostgreSQL databases.
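For example, instead of editing postgresql.conf manually, you can let the server record a parameter in postgresql.auto.conf. The following is a minimal sketch that assumes you are connected to psql as a superuser; the chosen parameter is only an illustration:
postgres=# ALTER SYSTEM SET log_connections = on;
postgres=# SELECT pg_reload_conf();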
To change the PostgreSQL configuration, use the following procedure.
Procedure
- Edit the respective configuration file, for example, /var/lib/pgsql/data/postgresql.conf.
- Restart the postgresql service so that the changes become effective:
# systemctl restart postgresql.service
Example 4.2. Configuring PostgreSQL database cluster parameters
This example shows basic settings of the database cluster parameters in the /var/lib/pgsql/data/postgresql.conf file.
# This is a comment
log_connections = yes
log_destination = 'syslog'
search_path = '"$user", public'
shared_buffers = 128MB
password_encryption = scram-sha-256
Example 4.3. Setting client authentication in PostgreSQL
This example demonstrates how to set client authentication in the /var/lib/pgsql/data/pg_hba.conf file.
# TYPE    DATABASE    USER    ADDRESS            METHOD
local     all         all                        trust
host      postgres    all     192.168.93.0/24    ident
host      all         all     .example.com       scram-sha-256
4.5. Configuring TLS encryption on a PostgreSQL server
By default, PostgreSQL uses unencrypted connections. For more secure connections, you can enable Transport Layer Security (TLS) support on the PostgreSQL server and configure your clients to establish encrypted connections.
Prerequisites
- The PostgreSQL server is installed.
- The database cluster is initialized.
- If the server runs RHEL 9.2 or later and the FIPS mode is enabled, clients must either support the Extended Master Secret (EMS) extension or use TLS 1.3. TLS 1.2 connections without EMS fail. For more information, see the Red Hat Knowledgebase solution TLS extension "Extended Master Secret" enforced on RHEL 9.2 and later.
Procedure
Install the OpenSSL library:
# dnf install openssl
Generate a TLS certificate and a key:
# openssl req -new -x509 -days 365 -nodes -text -out server.crt \
  -keyout server.key -subj "/CN=dbhost.yourdomain.com"
Replace dbhost.yourdomain.com with your database host and domain name.
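Optionally, you can inspect the generated certificate to confirm its subject and validity period before deploying it. This optional check uses the openssl utility:
# openssl x509 -in server.crt -noout -subject -dates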
Copy your signed certificate and your private key to the required locations on the database server:
# cp server.{key,crt} /var/lib/pgsql/data/.
Change the owner and group ownership of the signed certificate and your private key to the postgres user:
# chown postgres:postgres /var/lib/pgsql/data/server.{key,crt}
Restrict the permissions for your private key so that it is readable only by the owner:
# chmod 0400 /var/lib/pgsql/data/server.key
Set the password hashing algorithm to scram-sha-256 by changing the following line in the /var/lib/pgsql/data/postgresql.conf file:
#password_encryption = md5        # md5 or scram-sha-256
to:
password_encryption = scram-sha-256
Configure PostgreSQL to use SSL/TLS by changing the following line in the /var/lib/pgsql/data/postgresql.conf file:
#ssl = off
to:
ssl=on
Restrict access to all databases to accept only connections from clients using TLS by changing the following line for the IPv4 local connections in the /var/lib/pgsql/data/pg_hba.conf file:
host    all    all    127.0.0.1/32    ident
to:
hostssl all all 127.0.0.1/32 scram-sha-256
Alternatively, you can restrict access for a single database and a user by adding the following new line:
hostssl mydatabase mydbuser 127.0.0.1/32 scram-sha-256
Replace mydatabase with the database name and mydbuser with the username.
Make the changes effective by restarting the postgresql service:
# systemctl restart postgresql.service
Verification
To manually verify that the connection is encrypted:
Connect to the PostgreSQL database as the mydbuser user, specify the hostname and the database name:
$ psql -U mydbuser -h 127.0.0.1 -d mydatabase
Password for user mydbuser:
Replace mydatabase with the database name and mydbuser with the username.
Obtain information about the current database connection:
mydatabase=> \conninfo
You are connected to database "mydatabase" as user "mydbuser" on host "127.0.0.1" at port "5432".
SSL connection (protocol: TLSv1.3, cipher: TLS_AES_256_GCM_SHA384, bits: 256, compression: off)
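Alternatively, you can require TLS on the client side so that the connection fails if encryption cannot be negotiated. This is a minimal sketch that uses a libpq connection string with the same placeholder names as above:
$ psql "host=127.0.0.1 dbname=mydatabase user=mydbuser sslmode=require"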
You can write a simple application that verifies whether a connection to PostgreSQL is encrypted. This example demonstrates such an application written in C that uses the libpq client library, which is provided by the libpq-devel package:
#include <stdio.h>
#include <stdlib.h>
#include <libpq-fe.h>

int main(int argc, char* argv[])
{
    //Create connection
    PGconn* connection = PQconnectdb("hostaddr=127.0.0.1 password=mypassword port=5432 dbname=mydatabase user=mydbuser");

    if (PQstatus(connection) == CONNECTION_BAD)
    {
        printf("Connection error\n");
        PQfinish(connection);
        return -1; //Execution of the program will stop here
    }

    printf("Connection ok\n");

    //Verify TLS
    if (PQsslInUse(connection)) {
        printf("TLS in use\n");
        printf("%s\n", PQsslAttribute(connection, "protocol"));
    }

    //End connection
    PQfinish(connection);
    printf("Disconnected\n");
    return 0;
}
Replace mypassword with the password, mydatabase with the database name, and mydbuser with the username.
Note: You must link the application against the libpq library for compilation by using the -lpq option. For example, to compile the application by using the GCC compiler:
$ gcc source_file.c -lpq -o myapplication
where the source_file.c contains the example code above, and myapplication is the name of your application for verifying a secured PostgreSQL connection.
Example 4.4. Initializing, creating, and connecting to a PostgreSQL database using TLS encryption
This example demonstrates how to initialize a PostgreSQL database, create a database user and a database, and how to connect to the database using a secured connection.
Install the PostgreSQL server:
# dnf install postgresql-server
Initialize the database cluster:
# postgresql-setup --initdb
 * Initializing database in '/var/lib/pgsql/data'
 * Initialized, logs are in /var/lib/pgsql/initdb_postgresql.log
Install the OpenSSL library:
# dnf install openssl
Generate a TLS certificate and a key:
# openssl req -new -x509 -days 365 -nodes -text -out server.crt \
  -keyout server.key -subj "/CN=dbhost.yourdomain.com"
Replace dbhost.yourdomain.com with your database host and domain name.
Copy your signed certificate and your private key to the required locations on the database server:
# cp server.{key,crt} /var/lib/pgsql/data/.
Change the owner and group ownership of the signed certificate and your private key to the postgres user:
# chown postgres:postgres /var/lib/pgsql/data/server.{key,crt}
Restrict the permissions for your private key so that it is readable only by the owner:
# chmod 0400 /var/lib/pgsql/data/server.key
Set the password hashing algorithm to scram-sha-256. In the /var/lib/pgsql/data/postgresql.conf file, change the following line:
#password_encryption = md5        # md5 or scram-sha-256
to:
password_encryption = scram-sha-256
Configure PostgreSQL to use SSL/TLS. In the /var/lib/pgsql/data/postgresql.conf file, change the following line:
#ssl = off
to:
ssl=on
Start the postgresql service:
# systemctl start postgresql.service
Log in as the system user named postgres:
# su - postgres
Start the PostgreSQL interactive terminal as the postgres user:
$ psql -U postgres
psql (13.7)
Type "help" for help.

postgres=#
Create a user named mydbuser and set a password for mydbuser:
postgres=# CREATE USER mydbuser WITH PASSWORD 'mypasswd';
CREATE ROLE
postgres=#
Create a database named mydatabase:
postgres=# CREATE DATABASE mydatabase;
CREATE DATABASE
postgres=#
Grant all permissions to the mydbuser user:
postgres=# GRANT ALL PRIVILEGES ON DATABASE mydatabase TO mydbuser;
GRANT
postgres=#
Log out of the interactive terminal:
postgres=# \q
Log out of the postgres user session:
$ logout
Restrict access to all databases to accept only connections from clients using TLS by changing the following line for the IPv4 local connections in the /var/lib/pgsql/data/pg_hba.conf file:
host    all    all    127.0.0.1/32    ident
to:
hostssl all all 127.0.0.1/32 scram-sha-256
Make the changes effective by restarting the postgresql service:
# systemctl restart postgresql.service
Connect to the PostgreSQL database as the mydbuser user, specify the hostname and the database name:
$ psql -U mydbuser -h 127.0.0.1 -d mydatabase
Password for user mydbuser:
psql (13.7)
SSL connection (protocol: TLSv1.3, cipher: TLS_AES_256_GCM_SHA384, bits: 256, compression: off)
Type "help" for help.

mydatabase=>
4.6. Backing up PostgreSQL data
To back up PostgreSQL data, use one of the following approaches:
- SQL dump
- File system level backup
- Continuous archiving
4.6.1. Backing up PostgreSQL data with an SQL dump
The SQL dump method is based on generating a dump file with SQL commands. When the dump is loaded back into the database server, it recreates the database in the same state as it was at the time of the dump.
The SQL dump is performed by the following PostgreSQL client applications:
- pg_dump dumps a single database without cluster-wide information about roles or tablespaces
- pg_dumpall dumps each database in a given cluster and preserves cluster-wide data, such as role and tablespace definitions.
By default, the pg_dump and pg_dumpall commands write their results into the standard output. To store the dump in a file, redirect the output to an SQL file. The resulting SQL file can be either in a text format or in other formats that allow for parallelism and for more detailed control of object restoration.
You can perform the SQL dump from any remote host that has access to the database.
4.6.1.1. Advantages and disadvantages of an SQL dump
An SQL dump has the following advantages compared to other PostgreSQL backup methods:
- An SQL dump is the only PostgreSQL backup method that is not server version-specific. The output of the pg_dump utility can be reloaded into later versions of PostgreSQL, which is not possible for file system level backups or continuous archiving.
- An SQL dump is the only method that works when transferring a database to a different machine architecture, such as going from a 32-bit to a 64-bit server.
- An SQL dump provides internally consistent dumps. A dump represents a snapshot of the database at the time pg_dump began running.
- The pg_dump utility does not block other operations on the database when it is running.
A disadvantage of an SQL dump is that it takes more time compared to file system level backup.
4.6.1.2. Performing an SQL dump using pg_dump
To dump a single database without cluster-wide information, use the pg_dump utility.
Prerequisites
- You must have read access to all tables that you want to dump. To dump the entire database, you must run the commands as the postgres superuser or a user with database administrator privileges.
Procedure
Dump a database without cluster-wide information:
$ pg_dump dbname > dumpfile
To specify which database server pg_dump will contact, use the following command-line options:
- The -h option to define the host.
  The default host is either the local host or what is specified by the PGHOST environment variable.
- The -p option to define the port.
  The default port is indicated by the PGPORT environment variable or the compiled-in default.
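For example, the following sketch combines these options to dump a database from a remote server; the host name and file path are only placeholders:
$ pg_dump -h db.example.com -p 5432 dbname > /tmp/dbname_dump.sql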
4.6.1.3. Performing an SQL dump using pg_dumpall
To dump each database in a given database cluster and to preserve cluster-wide data, use the pg_dumpall utility.
Prerequisites
- You must run the commands as the postgres superuser or a user with database administrator privileges.
Procedure
Dump all databases in the database cluster and preserve cluster-wide data:
$ pg_dumpall > dumpfile
To specify which database server pg_dumpall will contact, use the following command-line options:
- The -h option to define the host.
  The default host is either the local host or what is specified by the PGHOST environment variable.
- The -p option to define the port.
  The default port is indicated by the PGPORT environment variable or the compiled-in default.
- The -l option to define the default database.
  This option enables you to choose a default database different from the postgres database created automatically during initialization.
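For example, the following sketch dumps a whole remote cluster; the host name and file path are only placeholders:
$ pg_dumpall -h db.example.com -p 5432 > /tmp/cluster_dump.sql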
4.6.1.4. Restoring a database dumped using pg_dump
To restore a database from an SQL dump that you dumped using the pg_dump utility, follow the steps below.
Prerequisites
- You must run the commands as the postgres superuser or a user with database administrator privileges.
Procedure
Create a new database:
$ createdb dbname
- Verify that all users who own objects or were granted permissions on objects in the dumped database already exist. If such users do not exist, the restore fails to recreate the objects with the original ownership and permissions.
Run the psql utility to restore a text file dump created by the pg_dump utility:
$ psql dbname < dumpfile
where dumpfile is the output of the pg_dump command. To restore a non-text file dump, use the pg_restore utility instead:
$ pg_restore non-plain-text-file
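For example, the following sketch creates a custom-format (non-text) dump with pg_dump and restores it directly into an existing database; the database and file names are only placeholders:
$ pg_dump -Fc dbname > dbname.dump
$ pg_restore -d dbname dbname.dump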
4.6.1.5. Restoring databases dumped using pg_dumpall
To restore data from a database cluster that you dumped using the pg_dumpall utility, follow the steps below.
Prerequisites
- You must run the commands as the postgres superuser or a user with database administrator privileges.
Procedure
- Ensure that all users who own objects or were granted permissions on objects in the dumped databases already exist. If such users do not exist, the restore fails to recreate the objects with the original ownership and permissions.
Run the psql utility to restore a text file dump created by the pg_dumpall utility:
$ psql < dumpfile
where dumpfile is the output of the pg_dumpall command.
4.6.1.6. Performing an SQL dump of a database on another server
Dumping a database directly from one server to another is possible because pg_dump and psql can write to and read from pipes.
Procedure
To dump a database from one server to another, run:
$ pg_dump -h host1 dbname | psql -h host2 dbname
4.6.1.7. Handling SQL errors during restore
By default, psql continues to execute if an SQL error occurs, causing the database to restore only partially.
To change the default behavior, use one of the following approaches when restoring a dump.
Prerequisites
- You must run the commands as the postgres superuser or a user with database administrator privileges.
Procedure
Make psql exit with an exit status of 3 if an SQL error occurs by setting the ON_ERROR_STOP variable:
$ psql --set ON_ERROR_STOP=on dbname < dumpfile
Specify that the whole dump is restored as a single transaction so that the restore is either fully completed or canceled.
When restoring a text file dump using the psql utility:
$ psql -1
When restoring a non-text file dump using the pg_restore utility:
$ pg_restore -e
Note that when using this approach, even a minor error can cancel a restore operation that has already run for many hours.
Additional resources
4.6.2. Backing up PostgreSQL data with a file system level backup
To create a file system level backup, copy PostgreSQL database files to another location. For example, you can use any of the following approaches:
- Create an archive file using the tar utility.
- Copy the files to a different location using the rsync utility.
- Create a consistent snapshot of the data directory.
4.6.2.1. Advantages and limitations of file system backing up
File system level backing up has the following advantage compared to other PostgreSQL backup methods:
- File system level backing up is usually faster than an SQL dump.
File system level backing up has the following limitations compared to other PostgreSQL backup methods:
- This backing up method is not suitable when you want to upgrade from RHEL 8 to RHEL 9 and migrate your data to the upgraded system. File system level backup is specific to an architecture and a RHEL major version. You can restore your data on your RHEL 8 system if the upgrade is not successful but you cannot restore the data on a RHEL 9 system.
- The database server must be shut down before backing up and restoring data.
- Backing up and restoring certain individual files or tables is impossible. Backing up a file system works only for complete backing up and restoring of an entire database cluster.
4.6.2.2. Performing file system level backing up
To perform file system level backing up, use the following procedure.
Procedure
Choose the location of a database cluster and initialize this cluster:
# postgresql-setup --initdb
Stop the postgresql service:
# systemctl stop postgresql.service
Use any method to create a file system backup, for example a tar archive:
$ tar -cf backup.tar /var/lib/pgsql/data/
Start the postgresql service:
# systemctl start postgresql.service
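To restore from such a backup, a minimal sketch, assuming the archive was created with the tar command above and is extracted as root so that file ownership is preserved, is:
# systemctl stop postgresql.service
# rm -rf /var/lib/pgsql/data/
# tar -xf backup.tar -C /
# systemctl start postgresql.service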
Additional resources
4.6.3. Backing up PostgreSQL data by continuous archiving
PostgreSQL records every change made to the database's data files into a write ahead log (WAL) file that is available in the pg_wal/ subdirectory of the cluster's data directory. This log is intended primarily for crash recovery. After a crash, the log entries made since the last checkpoint can be used for restoring the database to a consistent state.
The continuous archiving method, also known as an online backup, combines the WAL files with a copy of the database cluster in the form of a base backup performed on a running server or a file system level backup.
If a database recovery is needed, you can restore the database from the copy of the database cluster and then replay log from the backed up WAL files to bring the system to the current state.
With the continuous archiving method, you must keep a continuous sequence of all archived WAL files that extends at minimum back to the start time of your last base backup. Therefore the ideal frequency of base backups depends on:
- The storage volume available for archived WAL files.
- The maximum possible duration of data recovery in situations when recovery is necessary. In cases with a long period since the last backup, the system replays more WAL segments, and the recovery therefore takes more time.
You cannot use pg_dump and pg_dumpall SQL dumps as a part of a continuous archiving backup solution. SQL dumps produce logical backups and do not contain enough information to be used by a WAL replay.
4.6.3.1. Advantages and disadvantages of continuous archiving
Continuous archiving has the following advantages compared to other PostgreSQL backup methods:
- With the continuous backup method, it is possible to use a base backup that is not entirely consistent because any internal inconsistency in the backup is corrected by the log replay. Therefore you can perform a base backup on a running PostgreSQL server.
- A file system snapshot is not needed; tar or a similar archiving utility is sufficient.
- Continuous backup can be achieved by continuing to archive the WAL files because the sequence of WAL files for the log replay can be indefinitely long. This is particularly valuable for large databases.
- Continuous backup supports point-in-time recovery. It is not necessary to replay the WAL entries to the end. The replay can be stopped at any point and the database can be restored to its state at any time since the base backup was taken.
- If the series of WAL files are continuously available to another machine that has been loaded with the same base backup file, it is possible to restore the other machine with a nearly-current copy of the database at any point.
Continuous archiving has the following disadvantages compared to other PostgreSQL backup methods:
- Continuous backup method supports only restoration of an entire database cluster, not a subset.
- Continuous backup requires extensive archival storage.
4.6.3.2. Setting up WAL archiving
A running PostgreSQL server produces a sequence of write ahead log (WAL) records. The server physically divides this sequence into WAL segment files, which are given numeric names that reflect their position in the WAL sequence. Without WAL archiving, the segment files are reused and renamed to higher segment numbers.
When archiving WAL data, the contents of each segment file are captured and saved at a new location before the segment file is reused. You have multiple options for where to save the contents, such as an NFS-mounted directory on another machine, a tape drive, or a CD.
Note that WAL records do not include changes to configuration files.
To enable WAL archiving, use the following procedure.
Procedure
In the /var/lib/pgsql/data/postgresql.conf file:
- Set the wal_level configuration parameter to replica or higher.
- Set the archive_mode parameter to on.
- Specify the shell command in the archive_command configuration parameter. You can use the cp command, another command, or a shell script.
Note: The archive command is executed only on completed WAL segments. A server that generates little WAL traffic can have a substantial delay between the completion of a transaction and its safe recording in archive storage. To limit how old unarchived data can be, you can:
- Set the archive_timeout parameter to force the server to switch to a new WAL segment file with a given frequency.
- Use the pg_switch_wal() function to force a segment switch to ensure that a transaction is archived immediately after it finishes.
Example 4.5. Shell command for archiving WAL segments
This example shows a simple shell command you can set in the archive_command configuration parameter.
The following command copies a completed segment file to the required location:
archive_command = 'test ! -f /mnt/server/archivedir/%f && cp %p /mnt/server/archivedir/%f'
where the %p parameter is replaced by the relative path to the file to archive and the %f parameter is replaced by the file name.
This command copies archivable WAL segments to the /mnt/server/archivedir/ directory. After replacing the %p and %f parameters, the executed command looks as follows:
test ! -f /mnt/server/archivedir/00000001000000A900000065 && cp pg_wal/00000001000000A900000065 /mnt/server/archivedir/00000001000000A900000065
A similar command is generated for each new file that is archived.
Restart the postgresql service to enable the changes:
# systemctl restart postgresql.service
- Test your archive command and ensure it does not overwrite an existing file and that it returns a nonzero exit status if it fails.
- To protect your data, ensure that the segment files are archived into a directory that does not have group or world read access.
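For reference, a minimal sketch of the related postgresql.conf settings might look as follows; the archive directory and the timeout value are only illustrative assumptions:
wal_level = replica
archive_mode = on
archive_command = 'test ! -f /mnt/server/archivedir/%f && cp %p /mnt/server/archivedir/%f'
archive_timeout = 300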
Additional resources
4.6.3.3. Making a base backup
You can create a base backup in several ways. The simplest way of performing a base backup is using the pg_basebackup utility on a running PostgreSQL server.
The base backup process creates a backup history file that is stored into the WAL archive area and is named after the first WAL segment file that you need for the base backup.
The backup history file is a small text file containing the starting and ending times, and WAL segments of the backup. If you used the label string to identify the associated dump file, you can use the backup history file to determine which dump file to restore.
Consider keeping several backup sets to be certain that you can recover your data.
Prerequisites
- You must run the commands as the postgres superuser, a user with database administrator privileges, or another user with at least REPLICATION permissions.
- You must keep all the WAL segment files generated during and after the base backup.
Procedure
Use the pg_basebackup utility to perform the base backup.
To create a base backup as individual files (plain format):
$ pg_basebackup -D backup_directory -Fp
Replace backup_directory with your chosen backup location.
If you use tablespaces and perform the base backup on the same host as the server, you must also use the --tablespace-mapping option, otherwise the backup will fail upon an attempt to write the backup to the same location.
To create a base backup as a tar archive (tar and compressed format):
$ pg_basebackup -D backup_directory -Ft -z
Replace backup_directory with your chosen backup location.
To restore such data, you must manually extract the files in the correct locations.
To specify which database server pg_basebackup will contact, use the following command-line options:
- The -h option to define the host.
  The default host is either the local host or a host specified by the PGHOST environment variable.
- The -p option to define the port.
  The default port is indicated by the PGPORT environment variable or the compiled-in default.
- After the base backup process is complete, safely archive the copy of the database cluster and the WAL segment files used during the backup, which are specified in the backup history file.
- Delete WAL segments numerically lower than the WAL segment files used in the base backup because these are older than the base backup and no longer needed for a restore.
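For example, the following sketch combines the -h and -p options described above to create a compressed tar-format base backup of a remote server; the host name and the backup directory are only placeholders:
$ pg_basebackup -h db.example.com -p 5432 -D /var/lib/pgsql/backups/base_backup -Ft -z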
Additional resources
4.6.3.4. Restoring the database using a continuous archive backup
To restore a database using a continuous backup, use the following procedure.
Procedure
Stop the server:
# systemctl stop postgresql.service
Copy the necessary data to a temporary location.
Preferably, copy the whole cluster data directory and any tablespaces. Note that this requires enough free space on your system to hold two copies of your existing database.
If you do not have enough space, save the contents of the cluster's pg_wal directory, which can contain logs that were not archived before the system went down.
- Remove all existing files and subdirectories under the cluster data directory and under the root directories of any tablespaces you are using.
Restore the database files from your base backup.
Ensure that:
- The files are restored with the correct ownership (the database system user, not root).
- The files are restored with the correct permissions.
- The symbolic links in the pg_tblspc/ subdirectory are restored correctly.
Remove any files present in the pg_wal/ subdirectory.
These files resulted from the base backup and are therefore obsolete. If you did not archive pg_wal/, recreate it with proper permissions.
- Copy any unarchived WAL segment files that you saved in step 2 into pg_wal/.
Specify the shell command that retrieves archived WAL files in the restore_command configuration parameter in the /var/lib/pgsql/data/postgresql.conf file, and create an empty file named recovery.signal in the cluster data directory so that the server starts in recovery mode. For the restore_command, you can use the cp command, another command, or a shell script. For example:
restore_command = 'cp /mnt/server/archivedir/%f "%p"'
Start the server:
# systemctl start postgresql.service
The server will enter the recovery mode and proceed to read through the archived WAL files that it needs.
If the recovery is terminated due to an external error, the server can be restarted and it will continue the recovery. When the recovery process is completed, the server removes the recovery.signal file. This prevents the server from accidentally re-entering recovery mode after it starts normal database operations.
Check the contents of the database to verify that the database has recovered into the required state.
If the database has not recovered into the required state, return to step 1. If the database has recovered into the required state, allow the users to connect by restoring the client authentication configuration in the pg_hba.conf file.
4.6.3.4.1. Additional resources
4.7. Migrating to a RHEL 9 version of PostgreSQL
Red Hat Enterprise Linux 8 provides PostgreSQL in multiple module streams: PostgreSQL 10 (the default postgresql stream), PostgreSQL 9.6, PostgreSQL 12, PostgreSQL 13, PostgreSQL 15, and PostgreSQL 16.
In RHEL 9, PostgreSQL 13, PostgreSQL 15, and PostgreSQL 16 are available.
On RHEL, you can use two PostgreSQL migration paths for the database files:
- Fast upgrade using the pg_upgrade utility
- Dump and restore upgrade
The fast upgrade method is quicker than the dump and restore process. However, in certain cases, the fast upgrade does not work, and you can only use the dump and restore process, for example in case of cross-architecture upgrades.
As a prerequisite for migration to a later version of PostgreSQL, back up all your PostgreSQL databases.
Dumping the databases and performing backup of the SQL files is required for the dump and restore process and recommended for the fast upgrade method.
Before migrating to a later version of PostgreSQL, see the upstream compatibility notes for the version of PostgreSQL to which you want to migrate, and for all skipped PostgreSQL versions between the one you are migrating from and the target version.
4.7.1. Notable differences between PostgreSQL 15 and PostgreSQL 16
PostgreSQL 16 introduced the following notable changes.
The postmaster binary is no longer available
PostgreSQL is no longer distributed with the postmaster binary. Users who start the postgresql server by using the provided systemd unit file (the systemctl start postgresql.service command) are not affected by this change. If you previously started the postgresql server directly through the postmaster binary, you must now use the postgres binary instead.
Documentation is no longer packaged
PostgreSQL no longer provides documentation in PDF format within the package. Use the online documentation instead.
4.7.2. Notable differences between PostgreSQL 13 and PostgreSQL 15
PostgreSQL 15 introduced the following backwards incompatible changes.
Default permissions of the public schema
The default permissions of the public schema have been modified in PostgreSQL 15. Newly created users need to be granted permission explicitly by using the GRANT ALL ON SCHEMA public TO myuser; command.
The following example works in PostgreSQL 13 and earlier:
postgres=# CREATE USER mydbuser;
postgres=# \c postgres mydbuser
postgres=> CREATE TABLE mytable (id int);
The following example works in PostgreSQL 15 and later:
postgres=# CREATE USER mydbuser;
postgres=# GRANT ALL ON SCHEMA public TO mydbuser;
postgres=# \c postgres mydbuser
postgres=> CREATE TABLE mytable (id int);
Ensure that the mydbuser access is configured appropriately in the pg_hba.conf file. See Creating PostgreSQL users for more information.
PQsendQuery() no longer supported in pipeline mode
Since PostgreSQL 15, the libpq PQsendQuery() function is no longer supported in pipeline mode. Modify affected applications to use the PQsendQueryParams() function instead.
4.7.3. Fast upgrade using the pg_upgrade utility
As a system administrator, you can upgrade to the most recent version of PostgreSQL by using the fast upgrade method. To perform a fast upgrade, copy binary data files to the /var/lib/pgsql/data/ directory and use the pg_upgrade utility.
You can use this method for migrating data:
- From the RHEL 8 version of PostgreSQL 12 to a RHEL version of PostgreSQL 13
- From a RHEL 8 or 9 version of PostgreSQL 13 to a RHEL version of PostgreSQL 15
- From a RHEL 8 or 9 version of PostgreSQL 15 to a RHEL version of PostgreSQL 16
The following procedure describes migration from the RHEL 8 version of PostgreSQL 12 to the RHEL 9 version of PostgreSQL 13 using the fast upgrade method. For migration from postgresql streams other than 12, use one of the following approaches:
- Update your PostgreSQL server to version 12 on RHEL 8 and then use the pg_upgrade utility to perform the fast upgrade to the RHEL 9 version of PostgreSQL 13.
- Use the dump and restore upgrade directly between any RHEL 8 version of PostgreSQL and an equal or later PostgreSQL version in RHEL 9.
Prerequisites
- Before performing the upgrade, back up all your data stored in the PostgreSQL databases. By default, all data is stored in the /var/lib/pgsql/data/ directory on both the RHEL 8 and RHEL 9 systems.
Procedure
On the RHEL 9 system, install the postgresql-server and postgresql-upgrade packages:
# dnf install postgresql-server postgresql-upgrade
Optionally, if you used any PostgreSQL server modules on RHEL 8, install them also on the RHEL 9 system in two versions, compiled both against PostgreSQL 12 (installed as the postgresql-upgrade package) and the target version of PostgreSQL 13 (installed as the postgresql-server package). If you need to compile a third-party PostgreSQL server module, build it both against the postgresql-devel and postgresql-upgrade-devel packages.
Check the following items:
- Basic configuration: On the RHEL 9 system, check whether your server uses the default /var/lib/pgsql/data directory and the database is correctly initialized and enabled. In addition, the data files must be stored in the same path as mentioned in the /usr/lib/systemd/system/postgresql.service file.
- PostgreSQL servers: Your system can run multiple PostgreSQL servers. Ensure that the data directories for all these servers are handled independently.
- PostgreSQL server modules: Ensure that the PostgreSQL server modules that you used on RHEL 8 are installed on your RHEL 9 system as well. Note that plugins are installed in the /usr/lib64/pgsql/ directory.
Ensure that the postgresql service is not running on either of the source and target systems at the time of copying data.
# systemctl stop postgresql.service
- Copy the database files from the source location to the /var/lib/pgsql/data/ directory on the RHEL 9 system.
Perform the upgrade process by running the following command as the PostgreSQL user:
# postgresql-setup --upgrade
This launches the pg_upgrade process in the background.
In case of failure, postgresql-setup provides an informative error message.
Copy the prior configuration from /var/lib/pgsql/data-old to the new cluster.
Note that the fast upgrade does not reuse the prior configuration in the newer data stack and the configuration is generated from scratch. If you want to combine the old and new configurations manually, use the *.conf files in the data directories.
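To see what differs between the prior and the newly generated configuration before merging them manually, you can, for example, compare the main configuration files; this is only an illustrative check:
# diff /var/lib/pgsql/data-old/postgresql.conf /var/lib/pgsql/data/postgresql.conf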
Start the new PostgreSQL server:
# systemctl start postgresql.service
Analyze the new database cluster.
For PostgreSQL 13:
su postgres -c '~/analyze_new_cluster.sh'
For PostgreSQL 15 or later:
su postgres -c 'vacuumdb --all --analyze-in-stages'
Note: You may need to use ALTER COLLATION name REFRESH VERSION; see the upstream documentation for details.
If you want the new PostgreSQL server to be automatically started on boot, run:
# systemctl enable postgresql.service
4.7.4. Dump and restore upgrade
When using the dump and restore upgrade, you must dump the contents of all databases into an SQL dump file. Note that the dump and restore upgrade is slower than the fast upgrade method and it may require some manual fixing in the generated SQL file.
You can use this method for migrating data from any RHEL 8 version of PostgreSQL to any equal or later version of PostgreSQL in RHEL 9.
On RHEL 8 and RHEL 9 systems, PostgreSQL data is stored in the /var/lib/pgsql/data/ directory by default.
To perform the dump and restore upgrade, change the user to root.
The following procedure describes migration from the RHEL 8 default version of PostgreSQL 10 to the RHEL 9 version of PostgreSQL 13.
Procedure
On your RHEL 8 system, start the PostgreSQL 10 server:
# systemctl start postgresql.service
On the RHEL 8 system, dump the contents of all databases into the pgdump_file.sql file:
su - postgres -c "pg_dumpall > ~/pgdump_file.sql"
Ensure that the databases were dumped correctly:
su - postgres -c 'less "$HOME/pgdump_file.sql"'
The dumped SQL file is stored at the following path: /var/lib/pgsql/pgdump_file.sql.
On the RHEL 9 system, install the postgresql-server package:
# dnf install postgresql-server
Optionally, if you used any PostgreSQL server modules on RHEL 8, install them also on the RHEL 9 system. If you need to compile a third-party PostgreSQL server module, build it against the postgresql-devel package.
On the RHEL 9 system, initialize the data directory for the new PostgreSQL server:
# postgresql-setup --initdb
On the RHEL 9 system, copy the pgdump_file.sql into the PostgreSQL home directory, and check that the file was copied correctly:
su - postgres -c 'test -e "$HOME/pgdump_file.sql" && echo exists'
Copy the configuration files from the RHEL 8 system:
su - postgres -c 'ls -1 $PGDATA/*.conf'
The configuration files to be copied are:
- /var/lib/pgsql/data/pg_hba.conf
- /var/lib/pgsql/data/pg_ident.conf
- /var/lib/pgsql/data/postgresql.conf
On the RHEL 9 system, start the new PostgreSQL server:
# systemctl start postgresql.service
On the RHEL 9 system, import data from the dumped sql file:
su - postgres -c 'psql -f ~/pgdump_file.sql postgres'
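Optionally, you can verify that the databases were imported, for example by listing them as the postgres user; this is a minimal check:
su - postgres -c 'psql -c "\l"'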
4.8. Installing and configuring a PostgreSQL database server by using RHEL system roles
You can use the postgresql RHEL system role to automate the installation and management of the PostgreSQL database server. By default, this role also optimizes PostgreSQL by automatically configuring performance-related settings in the PostgreSQL service configuration files.
4.8.1. Configuring PostgreSQL with an existing TLS certificate by using the postgresql RHEL system role
If your application requires a PostgreSQL database server, you can configure this service with TLS encryption to enable secure communication between the application and the database. By using the postgresql RHEL system role, you can automate this process and remotely install and configure PostgreSQL with TLS encryption. In the playbook, you can use an existing private key and a TLS certificate that was issued by a certificate authority (CA).
The postgresql role cannot open ports in the firewalld service. To allow remote access to the PostgreSQL server, add a task that uses the firewall RHEL system role to your playbook.
Prerequisites
- You have prepared the control node and the managed nodes
- You are logged in to the control node as a user who can run playbooks on the managed nodes.
- The account you use to connect to the managed nodes has sudo permissions on them.
- Both the private key of the managed node and the certificate are stored on the control node in the following files:
  - Private key: ~/<FQDN_of_the_managed_node>.key
  - Certificate: ~/<FQDN_of_the_managed_node>.crt
Procedure
Store your sensitive variables in an encrypted file:
Create the vault:
$ ansible-vault create vault.yml
New Vault password: <vault_password>
Confirm New Vault password: <vault_password>
After the ansible-vault create command opens an editor, enter the sensitive data in the <key>: <value> format:
pwd: <password>
- Save the changes, and close the editor. Ansible encrypts the data in the vault.
Create a playbook file, for example ~/playbook.yml, with the following content:
---
- name: Installing and configuring PostgreSQL
  hosts: managed-node-01.example.com
  vars_files:
    - vault.yml
  tasks:
    - name: Create directory for TLS certificate and key
      ansible.builtin.file:
        path: /etc/postgresql/
        state: directory
        mode: 755
    - name: Copy CA certificate
      ansible.builtin.copy:
        src: "~/{{ inventory_hostname }}.crt"
        dest: "/etc/postgresql/server.crt"
    - name: Copy private key
      ansible.builtin.copy:
        src: "~/{{ inventory_hostname }}.key"
        dest: "/etc/postgresql/server.key"
        mode: 0600
    - name: PostgreSQL with an existing private key and certificate
      ansible.builtin.include_role:
        name: rhel-system-roles.postgresql
      vars:
        postgresql_version: "16"
        postgresql_password: "{{ pwd }}"
        postgresql_ssl_enable: true
        postgresql_cert_name: "/etc/postgresql/server"
        postgresql_server_conf:
          listen_addresses: "'*'"
          password_encryption: scram-sha-256
        postgresql_pg_hba_conf:
          - type: local
            database: all
            user: all
            auth_method: scram-sha-256
          - type: hostssl
            database: all
            user: all
            address: '127.0.0.1/32'
            auth_method: scram-sha-256
          - type: hostssl
            database: all
            user: all
            address: '::1/128'
            auth_method: scram-sha-256
          - type: hostssl
            database: all
            user: all
            address: '192.0.2.0/24'
            auth_method: scram-sha-256
    - name: Open the PostgreSQL port in firewalld
      ansible.builtin.include_role:
        name: rhel-system-roles.firewall
      vars:
        firewall:
          - service: postgresql
            state: enabled
The settings specified in the example playbook include the following:
postgresql_version: <version>
Sets the version of PostgreSQL to install. The version you can set depends on the PostgreSQL versions that are available in Red Hat Enterprise Linux running on the managed node.
You cannot upgrade or downgrade PostgreSQL by changing the postgresql_version variable and running the playbook again.
postgresql_password: <password>
Sets the password of the postgres database superuser.
You cannot change the password by changing the postgresql_password variable and running the playbook again.
postgresql_cert_name: <private_key_and_certificate_file>
Defines the path and base name of both the certificate and private key on the managed node without the .crt and .key suffixes. During the PostgreSQL configuration, the role creates symbolic links in the /var/lib/pgsql/data/ directory that refer to these files.
The certificate and private key must exist locally on the managed node. You can use tasks with the ansible.builtin.copy module to transfer the files from the control node to the managed node, as shown in the playbook.
postgresql_server_conf: <list_of_settings>
Defines postgresql.conf settings the role should set. The role adds these settings to the /etc/postgresql/system-roles.conf file and includes this file at the end of /var/lib/pgsql/data/postgresql.conf. Consequently, settings from the postgresql_server_conf variable override settings in /var/lib/pgsql/data/postgresql.conf.
Re-running the playbook with different settings in postgresql_server_conf overwrites the /etc/postgresql/system-roles.conf file with the new settings.
postgresql_pg_hba_conf: <list_of_authentication_entries>
Configures client authentication entries in the /var/lib/pgsql/data/pg_hba.conf file. For details, see the PostgreSQL documentation.
The example allows the following connections to PostgreSQL:
- Unencrypted connections by using local UNIX domain sockets.
- TLS-encrypted connections to the IPv4 and IPv6 localhost addresses.
- TLS-encrypted connections from the 192.0.2.0/24 subnet. Note that access from remote addresses is only possible if you also configure the listen_addresses setting in the postgresql_server_conf variable appropriately.
Re-running the playbook with different settings in postgresql_pg_hba_conf overwrites the /var/lib/pgsql/data/pg_hba.conf file with the new settings.
For details about all variables used in the playbook, see the /usr/share/ansible/roles/rhel-system-roles.postgresql/README.md file on the control node.
Validate the playbook syntax:
$ ansible-playbook --ask-vault-pass --syntax-check ~/playbook.yml
Note that this command only validates the syntax and does not protect against a wrong but valid configuration.
Run the playbook:
$ ansible-playbook --ask-vault-pass ~/playbook.yml
Verification
Use the postgres superuser to connect to a PostgreSQL server and execute the \conninfo meta command:
# psql "postgresql://postgres@managed-node-01.example.com:5432" -c '\conninfo'
Password for user postgres:
You are connected to database "postgres" as user "postgres" on host "192.0.2.1" at port "5432".
SSL connection (protocol: TLSv1.3, cipher: TLS_AES_256_GCM_SHA384, compression: off)
If the output displays a TLS protocol version and cipher details, the connection works and TLS encryption is enabled.
Additional resources
- /usr/share/ansible/roles/rhel-system-roles.postgresql/README.md file
- /usr/share/doc/rhel-system-roles/postgresql/ directory
- Ansible vault
4.8.2. Configuring PostgreSQL with a TLS certificate issued from IdM by using the postgresql RHEL system role
If your application requires a PostgreSQL database server, you can configure the PostgreSQL service with TLS encryption to enable secure communication between the application and the database. If the PostgreSQL host is a member of a Red Hat Enterprise Linux Identity Management (IdM) domain, the certmonger service can manage the certificate request and future renewals.
By using the postgresql RHEL system role, you can automate this process. You can remotely install and configure PostgreSQL with TLS encryption, and the postgresql role uses the certificate RHEL system role to configure certmonger and request a certificate from IdM.
The postgresql role cannot open ports in the firewalld service. To allow remote access to the PostgreSQL server, add a task to your playbook that uses the firewall RHEL system role.
Prerequisites
- You have prepared the control node and the managed nodes
- You are logged in to the control node as a user who can run playbooks on the managed nodes.
- The account you use to connect to the managed nodes has sudo permissions on them.
- You enrolled the managed node in an IdM domain.
Procedure
Store your sensitive variables in an encrypted file:
Create the vault:
$ ansible-vault create vault.yml
New Vault password: <vault_password>
Confirm New Vault password: <vault_password>
After the ansible-vault create command opens an editor, enter the sensitive data in the <key>: <value> format:
pwd: <password>
- Save the changes, and close the editor. Ansible encrypts the data in the vault.
Create a playbook file, for example ~/playbook.yml, with the following content:
---
- name: Installing and configuring PostgreSQL
  hosts: managed-node-01.example.com
  vars_files:
    - vault.yml
  tasks:
    - name: PostgreSQL with certificates issued by IdM
      ansible.builtin.include_role:
        name: rhel-system-roles.postgresql
      vars:
        postgresql_version: "16"
        postgresql_password: "{{ pwd }}"
        postgresql_ssl_enable: true
        postgresql_certificates:
          - name: postgresql_cert
            dns: "{{ inventory_hostname }}"
            ca: ipa
            principal: "postgresql/{{ inventory_hostname }}@EXAMPLE.COM"
        postgresql_server_conf:
          listen_addresses: "'*'"
          password_encryption: scram-sha-256
        postgresql_pg_hba_conf:
          - type: local
            database: all
            user: all
            auth_method: scram-sha-256
          - type: hostssl
            database: all
            user: all
            address: '127.0.0.1/32'
            auth_method: scram-sha-256
          - type: hostssl
            database: all
            user: all
            address: '::1/128'
            auth_method: scram-sha-256
          - type: hostssl
            database: all
            user: all
            address: '192.0.2.0/24'
            auth_method: scram-sha-256
    - name: Open the PostgreSQL port in firewalld
      ansible.builtin.include_role:
        name: rhel-system-roles.firewall
      vars:
        firewall:
          - service: postgresql
            state: enabled
The settings specified in the example playbook include the following:
postgresql_version: <version>
Sets the version of PostgreSQL to install. The version you can set depends on the PostgreSQL versions that are available in Red Hat Enterprise Linux running on the managed node.
You cannot upgrade or downgrade PostgreSQL by changing the postgresql_version variable and running the playbook again.
postgresql_password: <password>
Sets the password of the postgres database superuser.
You cannot change the password by changing the postgresql_password variable and running the playbook again.
postgresql_certificates: <certificate_role_settings>
A list of YAML dictionaries with settings for the certificate role.
postgresql_server_conf: <list_of_settings>
Defines postgresql.conf settings you want the role to set. The role adds these settings to the /etc/postgresql/system-roles.conf file and includes this file at the end of /var/lib/pgsql/data/postgresql.conf. Consequently, settings from the postgresql_server_conf variable override settings in /var/lib/pgsql/data/postgresql.conf.
Re-running the playbook with different settings in postgresql_server_conf overwrites the /etc/postgresql/system-roles.conf file with the new settings.
postgresql_pg_hba_conf: <list_of_authentication_entries>
Configures client authentication entries in the /var/lib/pgsql/data/pg_hba.conf file. For details, see the PostgreSQL documentation.
The example allows the following connections to PostgreSQL:
- Unencrypted connections by using local UNIX domain sockets.
- TLS-encrypted connections to the IPv4 and IPv6 localhost addresses.
- TLS-encrypted connections from the 192.0.2.0/24 subnet. Note that access from remote addresses is only possible if you also configure the listen_addresses setting in the postgresql_server_conf variable appropriately.
Re-running the playbook with different settings in postgresql_pg_hba_conf overwrites the /var/lib/pgsql/data/pg_hba.conf file with the new settings.
/usr/share/ansible/roles/rhel-system-roles.postgresql/README.md
file on the control node.Validate the playbook syntax:
$ ansible-playbook --ask-vault-pass --syntax-check ~/playbook.yml
Note that this command only validates the syntax and does not protect against a wrong but valid configuration.
Run the playbook:
$ ansible-playbook --ask-vault-pass ~/playbook.yml
Verification
Use the postgres superuser to connect to a PostgreSQL server and execute the \conninfo meta command:
# psql "postgresql://postgres@managed-node-01.example.com:5432" -c '\conninfo'
Password for user postgres:
You are connected to database "postgres" as user "postgres" on host "192.0.2.1" at port "5432".
SSL connection (protocol: TLSv1.3, cipher: TLS_AES_256_GCM_SHA384, compression: off)
If the output displays a TLS protocol version and cipher details, the connection works and TLS encryption is enabled.
Additional resources
- /usr/share/ansible/roles/rhel-system-roles.postgresql/README.md file
- /usr/share/doc/rhel-system-roles/postgresql/ directory
- Ansible vault