Chapter 3. Using PostgreSQL


The PostgreSQL server is a robust, open source, and highly extensible database server based on the SQL language. It provides an object-relational database system that can manage extensive datasets and a high number of concurrent users. For these reasons, PostgreSQL servers can be used in clusters to manage large amounts of data.

The PostgreSQL server includes features for ensuring data integrity and for building fault-tolerant environments and applications. With the PostgreSQL server, you can extend a database with your own data types, custom functions, or code from different programming languages without recompiling the database.

Learn how to install and configure PostgreSQL on a RHEL system, how to back up PostgreSQL data, and how to migrate from an earlier PostgreSQL version.

3.1. Installing PostgreSQL

RHEL 10 provides PostgreSQL 16 as the initial version of this Application Stream, which you can install easily as an RPM package. Additional PostgreSQL versions are provided as alternatives with a shorter life cycle in minor releases of RHEL 10.

Important

By design, you can install only one version (stream) of the same module and, because of conflicting RPM packages, you cannot install multiple PostgreSQL instances on the same host. As an alternative, you can run the database server services in a container. See Using containers to run multiple PostgreSQL instances on a single host.

Procedure

  1. Install the PostgreSQL server packages:

    # dnf install postgresql-server

    The postgres superuser is created automatically.

  2. Initialize the database cluster:

    # postgresql-setup --initdb

    The data is stored in the default /var/lib/pgsql/data directory.

  3. Enable and start the postgresql service:

    # systemctl enable --now postgresql.service

3.2. Running multiple PostgreSQL instances in containers

If you install PostgreSQL from packages, you can run only a single version of it on the same host. To run multiple instances or different versions of PostgreSQL, you can run the service in containers.

Prerequisites

  • The podman package is installed.

Procedure

  1. Authenticate to the registry.redhat.io registry by using your Red Hat Customer Portal account:

    # podman login registry.redhat.io

    Skip this step if you are already logged in to the container registry.

  2. Start the containers you want to use. For each container, enter:

    $ podman run -d --name <container_name> -e POSTGRESQL_USER=<user_name> -e POSTGRESQL_PASSWORD=<password> -p <host_port_1>:5432 rhel10/postgresql-16

    For more information about the usage of this container image, see the Red Hat Ecosystem Catalog.

    Important

    The container name and host port of each database server must differ.

  3. To ensure that clients can access the database server on the network, open the host ports in the firewall:

    # firewall-cmd --permanent --add-port={<host_port_1>/tcp,<host_port_2>/tcp,...}
    # firewall-cmd --reload

Verification

  1. Connect to the database server as the postgres user:

    # psql -U postgres -h localhost -p <host_port>
  2. Display information about running containers:

    $ podman ps

3.3. Creating PostgreSQL users

PostgreSQL users are of the following types:

  • The postgres Linux system user: Use it only to run the PostgreSQL server and client applications, such as pg_dump. Do not use the postgres system user for any interactive work on PostgreSQL administration, such as database creation and user management.
  • A database superuser: The default postgres PostgreSQL superuser is not related to the postgres system user. You can limit access of the postgres superuser in the /var/lib/pgsql/data/pg_hba.conf file, otherwise no other permission limitations exist. You can also create other database superusers.
  • A role with specific database access permissions:

    • A database user: Has permission to log in by default.
    • A group of users: Enables managing permissions for the group as a whole.

Roles can own database objects (for example, tables and functions) and can assign object privileges to other roles by using SQL commands.

Standard database management privileges include SELECT, INSERT, UPDATE, DELETE, TRUNCATE, REFERENCES, TRIGGER, CREATE, CONNECT, TEMPORARY, EXECUTE, and USAGE.

Role attributes are special privileges, such as LOGIN, SUPERUSER, CREATEDB, and CREATEROLE.
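
For example, the following sketch shows how a group role can hold privileges that a login role receives through membership; the role, table, and user names are hypothetical:

```sql
-- Hypothetical names for illustration.
-- Create a group role that cannot log in itself:
CREATE ROLE sales_team NOLOGIN;

-- Grant object privileges to the group role:
GRANT SELECT, INSERT ON orders TO sales_team;

-- Create a login role and add it to the group:
CREATE ROLE alice LOGIN PASSWORD '<password>';
GRANT sales_team TO alice;
```

Managing privileges through a group role keeps the permissions of individual users consistent and easy to revoke.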

Important

Perform most tasks as a role that is not a superuser. A common practice is to create a role that has the CREATEDB and CREATEROLE privileges and use this role for all routine management of databases and roles.

Prerequisites

  • The PostgreSQL server is installed.
  • The database cluster is initialized.
  • The password_encryption parameter in the /var/lib/pgsql/data/postgresql.conf file is set to scram-sha-256.
  • Entries in the /var/lib/pgsql/data/pg_hba.conf file use the scram-sha-256 hashing algorithm as the authentication method.
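
These prerequisites correspond to configuration entries similar to the following sketch; the address range is an example:

```
# /var/lib/pgsql/data/postgresql.conf
password_encryption = scram-sha-256

# /var/lib/pgsql/data/pg_hba.conf
# TYPE    DATABASE    USER    ADDRESS         METHOD
host      all         all     192.0.2.0/24    scram-sha-256
```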

Procedure

  1. Log in as the postgres system user, or switch to this user:

    # su - postgres
  2. Start the PostgreSQL interactive terminal:

    $ psql
    psql (16.4)
    Type "help" for help.
    
    postgres=#
  3. Optional: Obtain information about the current database connection:

    postgres=# \conninfo
    You are connected to database "postgres" as user "postgres" via socket in "/var/run/postgresql" at port "5432".
  4. Create a user named mydbuser, set a password for it, and assign the CREATEROLE and CREATEDB permissions to the user:

    postgres=# CREATE USER mydbuser WITH PASSWORD '<password>' CREATEROLE CREATEDB;
    CREATE ROLE

    The mydbuser user can now perform routine database management operations: create databases and manage user roles.

  5. Log out of the interactive terminal by using the \q meta command:

    postgres=# \q

Verification

  1. Log in to the PostgreSQL terminal as mydbuser, specify the hostname, and connect to the default postgres database, which was created during initialization:

    # psql -U mydbuser -h 127.0.0.1 -d postgres
    Password for user mydbuser:
    Type the password.
    psql (16.4)
    Type "help" for help.
    
    postgres=>
  2. Create a database:

    postgres=> CREATE DATABASE <db_name>;
  3. Log out of the session:

    postgres=> \q
  4. Connect to new database as mydbuser:

    # psql -U mydbuser -h 127.0.0.1 -d <db_name>
    Password for user mydbuser:
    psql (16.4)
    Type "help" for help.
    <db_name>=>

3.4. Configuring PostgreSQL

In a PostgreSQL database, all data and configuration files are stored in a single directory called a database cluster. By default, PostgreSQL uses the /var/lib/pgsql/data/ directory.

PostgreSQL configuration consists of the following files:

  • /var/lib/pgsql/data/postgresql.conf - is used for setting the database cluster parameters.
  • /var/lib/pgsql/data/postgresql.auto.conf - holds basic PostgreSQL settings similarly to postgresql.conf. However, this file is under the control of the server. It is edited by ALTER SYSTEM queries and is not intended to be edited manually.
  • /var/lib/pgsql/data/pg_ident.conf - is used for mapping user identities from external authentication mechanisms into the PostgreSQL user identities.
  • /var/lib/pgsql/data/pg_hba.conf - is used for configuring client authentication for PostgreSQL databases.
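
As an illustration of the identity mapping that pg_ident.conf provides, the following sketch maps an operating system user to a PostgreSQL role; the map, system user, and role names are hypothetical:

```
# /var/lib/pgsql/data/pg_ident.conf
# MAPNAME    SYSTEM-USERNAME    PG-USERNAME
myusers      jsmith             mydbuser

# Corresponding /var/lib/pgsql/data/pg_hba.conf entry that uses the map:
# TYPE    DATABASE    USER        ADDRESS    METHOD
local     all         mydbuser               peer map=myusers
```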

Procedure

  1. Edit the /var/lib/pgsql/data/postgresql.conf file and configure basic settings of the database cluster parameters, for example:

    log_connections = yes
    log_destination = 'syslog'
    search_path = '"$user", public'
    shared_buffers = 128MB
    password_encryption = scram-sha-256
  2. Edit the /var/lib/pgsql/data/pg_hba.conf file and configure client authentication, for example:

    # TYPE    DATABASE       USER        ADDRESS              METHOD
    local     all            all                              trust
    host      postgres       all         192.168.93.0/24      ident
    host      all            all         .example.com         scram-sha-256
  3. Restart the postgresql service so that the changes become effective:

    # systemctl restart postgresql.service

3.5. Configuring TLS encryption on a PostgreSQL server

By default, PostgreSQL uses unencrypted connections. For more secure connections, you can enable Transport Layer Security (TLS) support on the PostgreSQL server and configure your clients to establish encrypted connections.

Prerequisites

  • You created a TLS private key and a certificate authority (CA) issued a server certificate for your PostgreSQL server.
  • The PostgreSQL server is installed.
  • The database cluster is initialized.
  • If FIPS mode is enabled, clients must either support the Extended Master Secret (EMS) extension or use TLS 1.3. TLS 1.2 connections without EMS fail. For more information, see the Red Hat Knowledgebase solution TLS extension "Extended Master Secret" enforced on RHEL 9.2 and later.

Procedure

  1. Store the private key and the server certificate in the /var/lib/pgsql/data/ directory:

    # cp server.{key,crt} /var/lib/pgsql/data/
  2. Set the ownership of the private key and certificate:

    # chown postgres:postgres /var/lib/pgsql/data/server.{key,crt}
  3. Set permissions on the private key so that only the PostgreSQL server can read the file:

    # chmod 0400 /var/lib/pgsql/data/server.key

    Because certificates are part of the communication before a secure connection is established, any client can retrieve them without authentication. Therefore, you do not need to set strict permissions on the server certificate file.

  4. Edit the /var/lib/pgsql/data/postgresql.conf file and make the following changes:

    1. Set the scram-sha-256 hashing algorithm:

      password_encryption = scram-sha-256
    2. Enable TLS encryption:

      ssl = on
  5. Edit the /var/lib/pgsql/data/pg_hba.conf file and update the authentication entries to use TLS encryption and the scram-sha-256 hashing algorithm. For example, change host entries to hostssl to enable TLS encryption, and set the scram-sha-256 hashing algorithm in the last column:

    hostssl    all    all    192.0.2.0/24    scram-sha-256
  6. Restart the postgresql service:

    # systemctl restart postgresql.service

Verification

  • Use the postgres superuser to connect to a PostgreSQL server and execute the \conninfo meta command:

    # psql "postgresql://postgres@localhost:5432" -c '\conninfo'
    Password for user postgres:
    You are connected to database "postgres" as user "postgres" on host "localhost" (address "127.0.0.1") at port "5432".
    SSL connection (protocol: TLSv1.3, cipher: TLS_AES_256_GCM_SHA384, compression: off)

3.6. Backing up PostgreSQL data with an SQL dump

A logical backup of PostgreSQL data consists of the SQL statements necessary to restore the data. The advantage of a logical backup over a physical backup is that the data can be restored on other hardware configurations and PostgreSQL versions.

The SQL dump method is based on generating a dump file with SQL commands. When the dump is loaded back into the database server, it recreates the database in the same state as it was at the time of the dump.

You can perform an SQL dump with the following PostgreSQL client applications:

  • pg_dump dumps a single database without cluster-wide information about roles or tablespaces.
  • pg_dumpall dumps each database in a given cluster and preserves cluster-wide data, such as role and tablespace definitions.

By default, the pg_dump and pg_dumpall commands write their results into the standard output. To store the dump in a file, redirect the output to an SQL file. The resulting SQL file can be either in a text format or in other formats that allow for parallelism and for more detailed control of object restoration.
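
For example, the custom format supports compression and parallel restore. The following sketch is not part of the procedures below, and the database name and file name are placeholders:

```
$ pg_dump -Fc <db_name> > <dump_file>.dump
$ pg_restore -j 4 -d <db_name> <dump_file>.dump
```

Here the -Fc option selects the custom format, and the pg_restore -j option restores the dump with several parallel jobs.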

You can perform the SQL dump from any remote host that has access to the database.

3.6.1. Advantages and disadvantages of an SQL dump

An SQL dump has the following advantages compared to other PostgreSQL backup methods:

  • An SQL dump is the only PostgreSQL backup method that is not server version-specific. The output of the pg_dump utility can be reloaded into later versions of PostgreSQL, which is not possible for file system level backups or continuous archiving.
  • An SQL dump is the only method that works when transferring a database to a different machine architecture, such as going from a 32-bit to a 64-bit server.
  • An SQL dump provides internally consistent dumps. A dump represents a snapshot of the database at the time pg_dump began running.
  • The pg_dump utility does not block other operations on the database when it is running.

A disadvantage of an SQL dump is that it takes more time than a file system level backup.

3.6.2. Dumping an individual database

To dump a single database without cluster-wide information, use the pg_dump utility.

Prerequisites

  • You are logged in as the postgres superuser or a user with database administrator privileges.

Procedure

  • Dump a database without cluster-wide information:

    $ pg_dump <db_name> > <dump_file>

    To specify which database server pg_dump will contact, use the following command-line options:

    • The -h option to define the host.

      The default host is either the local host or what is specified by the PGHOST environment variable.

    • The -p option to define the port.

      The default port is indicated by the PGPORT environment variable or the compiled-in default.

3.6.3. Restoring an individual database

To restore a database from an SQL dump that you created by using the pg_dump utility, follow the steps below.

Prerequisites

  • You are logged in as the postgres superuser or a user with database administrator privileges.

Procedure

  1. Create a new database:

    $ createdb <db_name>
  2. Verify that all users who own objects or were granted permissions on objects in the dumped database already exist. If such users do not exist, the restore fails to recreate the objects with the original ownership and permissions.
  3. Run the psql utility to restore a text file dump created by the pg_dump utility:

    $ psql <db_name> < <dump_file>

    where <dump_file> is the output of the pg_dump command. To restore a non-text file dump, use the pg_restore utility instead:

    $ pg_restore <non-plain_text_file>

3.6.4. Dumping all databases in a cluster

To dump each database in a database cluster and to preserve cluster-wide data, use the pg_dumpall utility.

Prerequisites

  • You are logged in as the postgres superuser or a user with database administrator privileges.

Procedure

  • Dump all databases in the database cluster and preserve cluster-wide data:

    $ pg_dumpall > <dump_file>

    To specify which database server pg_dumpall will contact, use the following command-line options:

    • The -h option to define the host.

      The default host is either the local host or what is specified by the PGHOST environment variable.

    • The -p option to define the port.

      The default port is indicated by the PGPORT environment variable or the compiled-in default.

    • The -l option to define the default database.

      This option enables you to choose a default database different from the postgres database created automatically during initialization.

3.6.5. Restoring all databases in a cluster

To restore data from a database cluster that you dumped by using the pg_dumpall utility, follow the steps below.

Prerequisites

  • You are logged in as the postgres superuser or a user with database administrator privileges.

Procedure

  1. Ensure that all users who own objects or were granted permissions on objects in the dumped databases already exist. If such users do not exist, the restore fails to recreate the objects with the original ownership and permissions.
  2. Run the psql utility to restore a text file dump created by the pg_dumpall utility:

    $ psql < <dump_file>

    where <dump_file> is the output of the pg_dumpall command.

3.6.6. Handling SQL errors during restore

By default, psql continues executing after an SQL error occurs, which causes the database to be restored only partially.

To change the default behavior, use one of the following approaches when restoring a dump.

Prerequisites

  • You are logged in as the postgres superuser or a user with database administrator privileges.

Procedure

  • Make psql exit with an exit status of 3 if an SQL error occurs by setting the ON_ERROR_STOP variable:

    $ psql --set ON_ERROR_STOP=on <db_name> < <dump_file>
  • Specify that the whole dump is restored as a single transaction so that the restore is either fully completed or canceled.

    • When restoring a text file dump by using the psql utility:

      $ psql -1 <db_name> < <dump_file>
    • When restoring a non-text file dump by using the pg_restore utility:

      $ pg_restore -e <non-plain_text_file>

    Note that when you use this approach, even a minor error can cancel a restore operation that has already run for many hours.

3.7. Backing up PostgreSQL data with a file system level backup

A physical backup of PostgreSQL data consists of the files and directories that store the database content. This method is typically faster, and the backup is smaller in size.

You can back up an entire PostgreSQL instance in offline mode by performing a file system level backup. This method is faster than an SQL dump.

Limitations of a file system level backup compared to other PostgreSQL backup methods:

  • File system level backup is specific to an architecture and a RHEL major version. For example, you cannot use this method to back up your data on RHEL 9 and restore it on RHEL 10.
  • The database service must be stopped before backing up and restoring data.
  • Backing up and restoring certain individual files or tables is impossible.

Procedure

  1. Stop the postgresql service:

    # systemctl stop postgresql.service
  2. Use any method to create a file system backup, for example a tar archive:

    $ tar -cf backup.tar /var/lib/pgsql/data/
  3. Start the postgresql service:

    # systemctl start postgresql.service

3.8. Backing up PostgreSQL data by continuous archiving

PostgreSQL records every change made to the database’s data files into a write ahead log (WAL) file that is available in the pg_wal/ subdirectory of the cluster’s data directory. This log is intended primarily for crash recovery. After a crash, the log entries made since the last checkpoint can be used to restore the database to a consistent state.

The continuous archiving method, also known as an online backup, combines the WAL files with a copy of the database cluster in the form of a base backup performed on a running server or a file system level backup.

If a database recovery is needed, you can restore the database from the copy of the database cluster and then replay the log from the backed-up WAL files to bring the system to the current state.

With the continuous archiving method, you must keep a continuous sequence of all archived WAL files that extends at minimum back to the start time of your last base backup. Therefore the ideal frequency of base backups depends on:

  • The storage volume available for archived WAL files.
  • The maximum possible duration of data recovery in situations when recovery is necessary. In cases with a long period since the last backup, the system replays more WAL segments, and the recovery therefore takes more time.

Note

You cannot use pg_dump and pg_dumpall SQL dumps as a part of a continuous archiving backup solution. SQL dumps produce logical backups and do not contain enough information to be used by a WAL replay.

3.8.1. Advantages and disadvantages of continuous archiving

Continuous archiving has the following advantages compared to other PostgreSQL backup methods:

  • With the continuous backup method, it is possible to use a base backup that is not entirely consistent because any internal inconsistency in the backup is corrected by the log replay. Therefore you can perform a base backup on a running PostgreSQL server.
  • A file system snapshot is not needed; tar or a similar archiving utility is sufficient.
  • Continuous backup can be achieved by continuing to archive the WAL files because the sequence of WAL files for the log replay can be indefinitely long. This is particularly valuable for large databases.
  • Continuous backup supports point-in-time recovery. It is not necessary to replay the WAL entries to the end. The replay can be stopped at any point and the database can be restored to its state at any time since the base backup was taken.
  • If the series of WAL files are continuously available to another machine that has been loaded with the same base backup file, it is possible to restore the other machine with a nearly-current copy of the database at any point.
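
As an illustration of point-in-time recovery, the following sketch shows parameters you might set in postgresql.conf before starting recovery; the archive path and the timestamp are examples:

```
restore_command = 'cp /mnt/server/archivedir/%f "%p"'
recovery_target_time = '2025-01-15 14:30:00'
```

With a recovery target set, the server stops replaying WAL once the database reaches the requested point instead of replaying to the end of the log.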

Continuous archiving has the following disadvantages compared to other PostgreSQL backup methods:

  • The continuous backup method supports only the restoration of an entire database cluster, not a subset.
  • Continuous backup requires extensive archival storage.

3.8.2. Setting up WAL archiving

A running PostgreSQL server produces a sequence of write ahead log (WAL) records. The server physically divides this sequence into WAL segment files, which are given numeric names that reflect their position in the WAL sequence. Without WAL archiving, the segment files are reused and renamed to higher segment numbers.

When archiving WAL data, the contents of each segment file are captured and saved at a new location before the segment file is reused. You have multiple options where to save the content, such as an NFS-mounted directory on another machine, a tape drive, or a CD.

Note that WAL records do not include changes to configuration files.

To enable WAL archiving, use the following procedure.

Procedure

  1. In the /var/lib/pgsql/data/postgresql.conf file:

    1. Set the wal_level configuration parameter to replica or higher.
    2. Set the archive_mode parameter to on.
    3. Specify the shell command in the archive_command configuration parameter. You can use the cp command, another command, or a shell script.

      For example:

      archive_command = 'test ! -f /mnt/server/archivedir/%f && cp %p /mnt/server/archivedir/%f'

      where the %p parameter is replaced by the relative path to the file to archive and the %f parameter is replaced by the file name.

      This command copies archivable WAL segments to the /mnt/server/archivedir/ directory. After replacing the %p and %f parameters, the executed command looks as follows:

      test ! -f /mnt/server/archivedir/00000001000000A900000065 && cp pg_wal/00000001000000A900000065 /mnt/server/archivedir/00000001000000A900000065

      A similar command is generated for each new file that is archived.

      Note

      The archive command is executed only on completed WAL segments. A server that generates little WAL traffic can have a substantial delay between the completion of a transaction and its safe recording in archive storage. To limit how old unarchived data can be, you can:

      • Set the archive_timeout parameter to force the server to switch to a new WAL segment file with a given frequency.
      • Use the pg_switch_wal() function to force a segment switch to ensure that a transaction is archived immediately after it finishes.
  2. Restart the postgresql service to enable the changes:

    # systemctl restart postgresql.service
  3. Test your archive command to ensure that it does not overwrite an existing file and that it returns a nonzero exit status if it fails.
  4. To protect your data, ensure that the segment files are archived into a directory that does not have group or world read access.
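
To verify that archiving works, you can query the pg_stat_archiver statistics view, for example:

```sql
-- Shows how many segments were archived and whether archiving fails.
SELECT archived_count, last_archived_wal, failed_count
FROM pg_stat_archiver;
```

A growing failed_count usually indicates a problem with the archive_command or the archive location.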

3.8.3. Making a base backup

You can create a base backup in several ways. The simplest way to perform a base backup is to use the pg_basebackup utility on a running PostgreSQL server.

The base backup process creates a backup history file that is stored into the WAL archive area and is named after the first WAL segment file that you need for the base backup.

The backup history file is a small text file containing the starting and ending times, and WAL segments of the backup. If you used the label string to identify the associated dump file, you can use the backup history file to determine which dump file to restore.

Note

Consider keeping several backup sets to be certain that you can recover your data.

Prerequisites

  • You are logged in as the postgres superuser, a user with database administrator privileges, or another user with the REPLICATION privilege.
  • You must keep all the WAL segment files generated during and after the base backup.

Procedure

  1. Use the pg_basebackup utility to perform the base backup.

    • To create a base backup as individual files (plain format):

      $ pg_basebackup -D <backup_directory> -Fp

      Replace <backup_directory> with your chosen backup location.

      If you use tablespaces and perform the base backup on the same host as the server, you must also use the --tablespace-mapping option, otherwise the backup will fail upon an attempt to write the backup to the same location.

    • To create a base backup as a tar archive (tar and compressed format):

      $ pg_basebackup -D <backup_directory> -Ft -z

      Replace <backup_directory> with your chosen backup location.

      To restore such data, you must manually extract the files in the correct locations.

    To specify which database server pg_basebackup will contact, use the following command-line options:

    • The -h option to define the host.

      The default host is either the local host or a host specified by the PGHOST environment variable.

    • The -p option to define the port.

      The default port is indicated by the PGPORT environment variable or the compiled-in default.

  2. After the base backup process is complete, safely archive the copy of the database cluster and the WAL segment files used during the backup, which are specified in the backup history file.
  3. Delete WAL segments numerically lower than the WAL segment files used in the base backup because these are older than the base backup and no longer needed for a restore.

3.8.4. Restoring a database by using a continuous archiving backup

To restore a database by using a continuous backup, use the following procedure.

Procedure

  1. Stop the server:

    # systemctl stop postgresql.service
  2. Copy the necessary data to a temporary location.

    Preferably, copy the whole cluster data directory and any tablespaces. Note that this requires enough free space on your system to hold two copies of your existing database.

    If you do not have enough space, save the contents of the cluster’s pg_wal directory, which can contain logs that were not archived before the system went down.

  3. Remove all existing files and subdirectories under the cluster data directory and under the root directories of any tablespaces you are using.
  4. Restore the database files from your base backup.

    Ensure that:

    • The files are restored with the correct ownership (the database system user, not root).
    • The files are restored with the correct permissions.
    • The symbolic links in the pg_tblspc/ subdirectory are restored correctly.
  5. Remove any files present in the pg_wal/ subdirectory.

    These files resulted from the base backup and are therefore obsolete. If you did not archive pg_wal/, recreate it with proper permissions.

  6. Copy any unarchived WAL segment files that you saved in step 2 into pg_wal/.
  7. Specify the shell command in the restore_command configuration parameter in the /var/lib/pgsql/data/postgresql.conf file, and create an empty recovery.signal file in the cluster data directory to start the server in recovery mode. For the restore command, you can use the cp command, another command, or a shell script. For example:

    restore_command = 'cp /mnt/server/archivedir/%f "%p"'
  8. Start the server:

    # systemctl start postgresql.service

    The server enters recovery mode and proceeds to read through the archived WAL files that it needs.

    If the recovery is terminated because of an external error, you can restart the server and it continues the recovery. When the recovery process completes, the server removes the recovery.signal file. This prevents the server from accidentally re-entering recovery mode after it starts normal database operations.

  9. Check the contents of the database to verify that the database has recovered into the required state.

    If the database has not recovered into the required state, return to step 1. If the database has recovered into the required state, allow the users to connect by restoring the client authentication configuration in the pg_hba.conf file.

3.9. Transferring a database to another PostgreSQL server

You can use the pg_dump and psql utilities to back up a PostgreSQL database and directly restore it on another PostgreSQL server. With this method, you can transfer a database in a single step without intermediate files.

Prerequisites

  • You are logged in as the postgres user.

Procedure

  • Transfer a database from the source server to a destination server:

    $ pg_dump -h <source_server> <db_name> | psql -h <destination_server> <db_name>

3.10. Migrating to a RHEL 10 version of PostgreSQL

If you run a PostgreSQL version earlier than 16 on an earlier RHEL version and want to move the databases to a host that runs RHEL 10, you can migrate them.

The following migration methods are available:

  • The backup and restore method, which uses an SQL dump
  • The fast upgrade method, which uses the pg_upgrade utility

Important

Always back up the /var/lib/pgsql/data/ directory on the source host before a PostgreSQL migration.

3.10.1. Migrating by using the backup and restore method

You can use the backup and restore method to migrate data from any RHEL 8 or RHEL 9 version of PostgreSQL to any equal or later version of PostgreSQL on RHEL 10.

Prerequisites

  • The existing database server runs on RHEL 8 or RHEL 9 and uses a PostgreSQL version installed from the RHEL repositories.
  • The locale settings on both hosts are the same. To verify this, compare the output of the echo $LANG command on both hosts.

Procedure

  1. On the host with the existing PostgreSQL instance that you want to migrate:

    1. Export all databases to the /var/lib/pgsql/pgdump_file.sql file:

      # su - postgres -c "pg_dumpall > /var/lib/pgsql/pgdump_file.sql"
    2. Check the exported file:

      # su - postgres -c 'less "/var/lib/pgsql/pgdump_file.sql"'
    3. Copy the database dump that you created in an earlier step and the PostgreSQL configuration files to the RHEL 10 host, for example:

      # scp /var/lib/pgsql/pgdump_file.sql \
             /var/lib/pgsql/data/pg_hba.conf \
             /var/lib/pgsql/data/pg_ident.conf \
             /var/lib/pgsql/data/postgresql.conf \
             <user>@<rhel_10_host>:/tmp/
  2. On the RHEL 10 host:

    1. Install the postgresql-server package:

      # dnf install postgresql-server
    2. Initialize the /var/lib/pgsql/data/ directory:

      # postgresql-setup --initdb
    3. Move the copied configuration files to the /var/lib/pgsql/data/ directory:

      # mv /tmp/pg_hba.conf \
           /tmp/pg_ident.conf \
           /tmp/postgresql.conf \
           /var/lib/pgsql/data/
    4. Ensure correct ownership of the content in the /var/lib/pgsql/data/ directory:

      # chown -R postgres:postgres /var/lib/pgsql/data/
    5. Restore the SELinux context on /var/lib/pgsql/data/:

      # restorecon -Rv /var/lib/pgsql/data/
    6. Enable and start the postgresql service:

      # systemctl enable --now postgresql.service
    7. Import the data as the postgres user:

      # su - postgres -c 'psql -f /tmp/pgdump_file.sql postgres'
    8. Verify your databases and ensure that your applications that use the PostgreSQL server work as expected.
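Before copying the dump to the RHEL 10 host, you can sanity-check that the exported file is non-empty and begins with the expected header; pg_dumpall output typically starts with a "PostgreSQL database cluster dump" comment. A minimal sketch, demonstrated on a temporary file standing in for /var/lib/pgsql/pgdump_file.sql:

```shell
# Temporary stand-in for /var/lib/pgsql/pgdump_file.sql.
dump=$(mktemp)
# Stand-in content; a real pg_dumpall file begins with a similar header.
printf -- '--\n-- PostgreSQL database cluster dump\n--\n' > "$dump"

if [ -s "$dump" ] && head -3 "$dump" | grep -q "PostgreSQL database cluster dump"; then
  echo "dump looks valid"
else
  echo "dump is empty or malformed" >&2
fi
```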

If you want to migrate a PostgreSQL 13 instance from a previous RHEL version to PostgreSQL 16 on RHEL 10, you can use the fast upgrade method. With this method, you copy the content of the /var/lib/pgsql/data/ directory to the RHEL 10 host and the pg_upgrade utility converts the databases.

Important

This method works only if your existing PostgreSQL instance is version 13 and the hardware architecture is the same on the source and destination host. In other cases, use the backup and restore method.

Prerequisites

  • The existing database server uses PostgreSQL 13.
  • The hardware architecture of the current and future server is the same.
  • The RHEL 10 host has enough free space on the disk that holds the /var/lib/pgsql/ directory.

    For example, if the size of the directory on the PostgreSQL server you want to migrate is 10 GiB, you require at least 20 GiB of free disk space on the RHEL 10 host during the migration.

  • The locale settings on both hosts are the same. To verify this, compare the output of the echo $LANG command on both hosts.
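The free-space prerequisite can be checked with a short script. The figures below are examples in KiB; in practice, obtain them from the du and df utilities as shown in the comments:

```shell
# Example figures in KiB; in practice:
#   data_kib=$(du -sk /var/lib/pgsql | cut -f1)
#   free_kib=$(df -k --output=avail /var/lib/pgsql | tail -n 1)
data_kib=$((10 * 1024 * 1024))   # 10 GiB data directory, as in the text
free_kib=$((24 * 1024 * 1024))   # 24 GiB free, hypothetical

# The migration needs roughly twice the data directory size.
if [ "$free_kib" -ge $((data_kib * 2)) ]; then
  echo "enough free space"
else
  echo "need at least $((data_kib * 2)) KiB free" >&2
fi
```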

Procedure

  1. On the host with the existing PostgreSQL instance that you want to migrate:

    1. Stop the postgresql service:

      # systemctl stop postgresql.service
    2. Change into the /var/lib/pgsql/ directory, and back up the data subdirectory:

      # cd /var/lib/pgsql/
      # tar -zcf ~/pgdata.bak.tar.gz data/
    3. Copy the ~/pgdata.bak.tar.gz archive to the RHEL 10 host, for example:

      # scp ~/pgdata.bak.tar.gz <user>@<rhel_10_host>:/tmp/
  2. On the RHEL 10 host:

    1. Install the required packages:

      # dnf install postgresql-server postgresql-upgrade

      The postgresql-upgrade package provides a PostgreSQL 13 server, which is required during the migration.

    2. If you use third-party PostgreSQL server modules, build them against both the postgresql-devel and postgresql-upgrade-devel packages, and install them.
    3. Ensure that the postgresql service is stopped:

      # systemctl stop postgresql.service
    4. Change into the /var/lib/pgsql/ directory, and extract the backed up data directory from the previous host:

      # cd /var/lib/pgsql/
      # tar -zxf /tmp/pgdata.bak.tar.gz
    5. Optional: Remove the /tmp/pgdata.bak.tar.gz archive:

      # rm /tmp/pgdata.bak.tar.gz
    6. Perform the upgrade process:

      # postgresql-setup --upgrade

      The postgresql-setup shell script renames the /var/lib/pgsql/data/ directory to /var/lib/pgsql/data-old/ and uses the pg_upgrade utility to migrate the databases to a re-created /var/lib/pgsql/data/ directory.

      Important

      The pg_upgrade utility migrates only the databases and not the configuration files. After the migration, /var/lib/pgsql/data/ contains only the default .conf files. If you previously used custom configuration files, copy them from the /var/lib/pgsql/data-old/ directory and ensure that they are compatible with the new PostgreSQL version.

    7. Enable and start the postgresql service:

      # systemctl enable --now postgresql.service
    8. Clean up and analyze all databases:

      # su postgres -c 'vacuumdb --all --analyze-in-stages'
    9. Verify your databases and ensure that your applications that use the PostgreSQL server work as expected.
    10. Optional: Remove the /var/lib/pgsql/data-old/ directory, which contains the databases and configuration files from before the migration:

      # rm -r /var/lib/pgsql/data-old/
    11. Optional: Remove the postgresql-upgrade package:

      # dnf remove postgresql-upgrade
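Because pg_upgrade leaves the default .conf files in place, custom configuration files must be copied back from the data-old directory by hand. The following sketch shows the copy pattern on throwaway directories so that it runs anywhere; on the real host, the paths are /var/lib/pgsql/data-old/ and /var/lib/pgsql/data/, and you must verify each copied file against the new PostgreSQL version:

```shell
# Throwaway directories standing in for /var/lib/pgsql/data-old and data.
base=$(mktemp -d)
mkdir -p "$base/data-old" "$base/data"
echo "max_connections = 200" > "$base/data-old/postgresql.conf"  # example setting

# Copy each configuration file that exists in the old data directory.
for f in postgresql.conf pg_hba.conf pg_ident.conf; do
  if [ -f "$base/data-old/$f" ]; then
    cp "$base/data-old/$f" "$base/data/"
  fi
done

grep max_connections "$base/data/postgresql.conf"
```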