Chapter 2. Restoring the PostgreSQL database for Red Hat Edge Manager on Red Hat Enterprise Linux
Use this topic after you have a backup of the flightctl database and need to recover Red Hat Edge Manager on Red Hat Enterprise Linux. A typical flightctl-services deployment runs application containers as systemd-managed Podman quadlets. During restore you must keep the PostgreSQL and KV store services running while application services are stopped: the primary procedure restores the database, runs flightctl-restore, then starts application services again. An alternative path skips flightctl-restore but still stops only application services while the database and KV remain up.
2.1. Prerequisites
The following items cover access, tooling, backups, and release compatibility for the restore procedures in this topic on a Red Hat Enterprise Linux host where Red Hat Edge Manager runs as systemd-managed Podman quadlets, as described in Restore using quadlets and flightctl-restore. If you use database-only restore without flightctl-restore instead, the mandatory requirements still apply; optional tools listed later are needed only when you run the steps that use them (for example, temporary port publishing for flightctl-restore).
- Host access: Root privileges or sudo on the host where the quadlets run, with permission to stop and start systemd units, run `systemctl daemon-reload`, create unit drop-in files under `.d` directories, and run Podman commands (including `podman secret` and `podman exec`).
- Core tools: `systemctl` and `podman` installed and usable by your restore account.
- Flight Control CLI tools: `flightctl` and `flightctl-restore` available on the host (or on a jump host if your runbook runs commands remotely). Before you restore, confirm that `flightctl-restore` matches the Red Hat Edge Manager server version (see the first step in Restore using quadlets and flightctl-restore).
- Backup artifacts: A tested backup of the `flightctl` PostgreSQL database, in a form your team can replay (logical dump, physical backup, or other agreed method). Include matching configuration backups from `/etc/flightctl/` when your recovery plan requires them.
- Compatible versions: The Red Hat Edge Manager / `flightctl-services` release on the host should be compatible with the data in the backup (typically the same major release as when the backup was created).
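As one illustration of what a replayable logical dump can look like, the following sketch takes a custom-format dump through the running database container. The output path and dump options are assumptions for illustration; follow your team's backup procedure as the authoritative source:

```shell
# Sketch: custom-format logical dump of the flightctl database, taken
# through the running flightctl-db container. The output path is an example.
sudo podman exec flightctl-db pg_dump -U postgres -Fc flightctl \
  > "/var/backups/flightctl-$(date +%Y%m%d).dump"
```

A custom-format (`-Fc`) dump can later be replayed selectively with `pg_restore`, which is why many teams prefer it over plain SQL dumps.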
Optional tools for verification steps in this topic (install only what you use):
- `jq` — parse output from `podman secret inspect` when retrieving passwords.
- `pg_isready` — check PostgreSQL readiness on localhost after temporary port publishing.
- `redis-cli` (Red Hat Enterprise Linux 9) or `valkey-cli` (Red Hat Enterprise Linux 10) — check KV store connectivity on localhost; use the client that matches your host’s major Red Hat Enterprise Linux version and container image.
- `ss` or `netstat` — confirm that ports `5432` and `6379` listen during the restore window.
If Red Hat Edge Manager runs on Red Hat OpenShift Container Platform or another Kubernetes cluster rather than on a Red Hat Enterprise Linux quadlet host, prerequisites differ (for example, kubectl access and cluster networking). See Restoring the PostgreSQL database for Red Hat Edge Manager on Red Hat OpenShift Container Platform.
Perform full restores during a maintenance window. Restoring data requires stopping application services and can interrupt device management until the procedure completes.
2.2. Restore using quadlets and flightctl-restore
Typical RPM installations run Red Hat Edge Manager application components as separate systemd units backed by Podman quadlets, while the PostgreSQL and KV store containers keep running until you intentionally restart them. This sequence matches that layout: stop application services only, restore the flightctl database, supply credentials, expose ports locally for flightctl-restore, run the restore binary, remove temporary port publishing, then start application services again.
Verify that `flightctl-restore` matches the Red Hat Edge Manager server version:

```shell
flightctl version
flightctl-restore version
echo "Server version: $(flightctl version)"
echo "Restore version: $(flightctl-restore version)"
```

Important: The server and restore versions must match. Update the `flightctl-restore` binary before you continue if they differ.

Stop the Red Hat Edge Manager application services so they do not write to the database during restore. Do not stop `flightctl-db.service`, `flightctl-kv.service`, or other database or KV units; they must keep running while you restore data and run `flightctl-restore`.

```shell
# Stop only application services (keep database and KV store running)
sudo systemctl stop flightctl-api.service
sudo systemctl stop flightctl-worker.service
sudo systemctl stop flightctl-periodic.service
sudo systemctl stop flightctl-alert-exporter.service
sudo systemctl stop flightctl-alertmanager-proxy.service
sudo systemctl stop flightctl-telemetry-gateway.service
sudo systemctl stop flightctl-pam-issuer.service
sudo systemctl stop flightctl-cli-artifacts.service
sudo systemctl stop flightctl-alertmanager.service
sudo systemctl stop flightctl-imagebuilder-api.service
sudo systemctl stop flightctl-imagebuilder-worker.service
sudo systemctl stop flightctl-ui.service
```

Note: If `systemctl` reports `Unknown unit` for a service, your host might not ship that component (for example, image builder or UI). Skip that line. Do not run `systemctl stop flightctl.target` here; that would stop the database and KV services as well.

Confirm that the application units you stopped are inactive:

```shell
sudo systemctl status flightctl-api.service
systemctl is-active flightctl-api.service
```
Restore the `flightctl` PostgreSQL database using the method that matches your backup (for example, `pg_restore`, `psql` with a SQL dump, or storage-level recovery). Ensure the database is consistent and reachable from the deployment before you run `flightctl-restore`.

Retrieve the database application password from the Podman secret:

```shell
DB_APP_PASSWORD=$(sudo podman secret inspect flightctl-postgresql-user-password --showsecret | jq -r '.[0].SecretData')
echo "Database password retrieved successfully"
```

Retrieve the KV store password from the Podman secret:

```shell
KV_PASSWORD=$(sudo podman secret inspect flightctl-kv-password --showsecret | jq -r '.[0].SecretData')
echo "KV store credentials retrieved successfully"
```

Optional: verify database and KV connectivity from the host through the running containers:
Database: verify readiness:

```shell
sudo podman exec flightctl-db pg_isready -U postgres
```

To connect to the database for additional verification (optional):

```shell
sudo podman exec -it flightctl-db psql -U flightctl_app -d flightctl
```

KV store: on Red Hat Enterprise Linux 9 you can use `redis-cli`; on Red Hat Enterprise Linux 10 use `valkey-cli` inside the container, for example:

```shell
sudo podman exec flightctl-kv redis-cli ping
```

Publish the database and KV ports on `localhost` so `flightctl-restore` can reach them. The database and KV containers use a private network by default; use temporary systemd drop-in files to add port publishing, then reload and restart only those services:

```shell
DB_CONTAINER_FILE=$(systemctl show flightctl-db.service -p SourcePath --value)
KV_CONTAINER_FILE=$(systemctl show flightctl-kv.service -p SourcePath --value)
DB_DROPIN_DIR="${DB_CONTAINER_FILE}.d"
KV_DROPIN_DIR="${KV_CONTAINER_FILE}.d"
sudo mkdir -p "$DB_DROPIN_DIR" "$KV_DROPIN_DIR"
sudo tee "$DB_DROPIN_DIR/10-publish-port.conf" > /dev/null <<'EOF'
[Container]
PublishPort=5432:5432
EOF
sudo tee "$KV_DROPIN_DIR/10-publish-port.conf" > /dev/null <<'EOF'
[Container]
PublishPort=6379:6379
EOF
sudo systemctl daemon-reload
sudo systemctl restart flightctl-db.service flightctl-kv.service
```

Verify listening ports and basic connectivity (adjust the KV client for your Red Hat Enterprise Linux major version):

```shell
ss -tlnp | grep -E ':5432|:6379' || true
pg_isready -h localhost -p 5432
REDISCLI_AUTH="$KV_PASSWORD" redis-cli -h localhost -p 6379 ping
```

On Red Hat Enterprise Linux 10, use `VALKEYCLI_AUTH` with `valkey-cli` instead of `REDISCLI_AUTH` with `redis-cli` if that matches your environment.

Run `flightctl-restore` with the database and KV passwords (run from the directory that contains the binary if you do not use a full path):

```shell
DB_PASSWORD="$DB_APP_PASSWORD" KV_PASSWORD="$KV_PASSWORD" ./bin/flightctl-restore
```

Watch the output for errors or successful completion.
Remove the temporary port publishing drop-ins and restart the database and KV services:

```shell
DB_CONTAINER_FILE=$(systemctl show flightctl-db.service -p SourcePath --value)
KV_CONTAINER_FILE=$(systemctl show flightctl-kv.service -p SourcePath --value)
DB_DROPIN_DIR="${DB_CONTAINER_FILE}.d"
KV_DROPIN_DIR="${KV_CONTAINER_FILE}.d"
sudo rm -f "$DB_DROPIN_DIR/10-publish-port.conf" "$KV_DROPIN_DIR/10-publish-port.conf"
sudo rmdir "$DB_DROPIN_DIR" 2>/dev/null || true
sudo rmdir "$KV_DROPIN_DIR" 2>/dev/null || true
sudo systemctl daemon-reload
sudo systemctl restart flightctl-db.service flightctl-kv.service
```
Start the application services again (same set you stopped; omit units your host does not use):

```shell
sudo systemctl start flightctl-api.service
sudo systemctl start flightctl-worker.service
sudo systemctl start flightctl-periodic.service
sudo systemctl start flightctl-alert-exporter.service
sudo systemctl start flightctl-alertmanager-proxy.service
sudo systemctl start flightctl-telemetry-gateway.service
sudo systemctl start flightctl-pam-issuer.service
sudo systemctl start flightctl-cli-artifacts.service
sudo systemctl start flightctl-alertmanager.service
sudo systemctl start flightctl-imagebuilder-api.service
sudo systemctl start flightctl-imagebuilder-worker.service
sudo systemctl start flightctl-ui.service
```

Verify with `sudo systemctl status` on each unit and `sudo podman ps --filter "name=flightctl-"`.
Confirm that the API responds and that inventory looks correct in the Red Hat Edge Manager web console or with `flightctl` CLI commands.
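As a sketch of that final confirmation, the following commands list restored inventory and check that the API unit is active; they assume a logged-in `flightctl` CLI session on the host:

```shell
# Post-restore spot checks: list restored resources and confirm the API unit.
flightctl get devices
flightctl get fleets
sudo systemctl is-active flightctl-api.service
```

If `flightctl get devices` returns the expected inventory without errors, the API and database are talking to each other again.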
2.3. Alternative: Database-only restore without flightctl-restore
If your operations team restores the PostgreSQL data without running `flightctl-restore` (for example, a DBA-led replay into the live database), use the same rule as the main procedure: stop only application services so the database and KV store keep running. Do not use `systemctl stop flightctl.target`, which stops `flightctl-db`, `flightctl-kv`, and everything else.
- Stop the application services only (same list as the stop step in Restore using quadlets and flightctl-restore). Skip units that do not exist on your host.
- Restore the `flightctl` database using the procedure that matches your backup while PostgreSQL remains available:
  - Logical backups (dump files): Use `pg_restore`, `psql`, or equivalent clients per your DBA standards.
  - Storage or snapshot restores: Restore the data directory or volume following your infrastructure playbook.

  Ensure database name, roles, and grants match what Red Hat Edge Manager expects.
- Verify that PostgreSQL accepts connections and that the `flightctl` database is present (for example, `pg_isready` or a short test query).
- Start the application services again with the same `systemctl start` sequence as in the Start the application services again step of Restore using quadlets and flightctl-restore. Do not rely on `systemctl start flightctl.target` unless your runbook confirms it does not disrupt the database or KV units you left running.
- Confirm health (for example, `sudo systemctl status flightctl-api.service`) and validate data in the web console or with `flightctl` CLI commands.
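For a logical dump, the restore-and-verify steps above might look like the following sketch. The dump path is an assumption for illustration; the container, database, and role names follow this topic:

```shell
# Sketch: replay a custom-format dump into the running flightctl-db container,
# then confirm that the flightctl database answers a short test query.
# /var/backups/flightctl.dump is an example path, not a product default.
sudo podman exec -i flightctl-db pg_restore -U postgres -d flightctl \
  --clean --if-exists < /var/backups/flightctl.dump
sudo podman exec flightctl-db pg_isready -U postgres
sudo podman exec flightctl-db psql -U flightctl_app -d flightctl -c 'SELECT 1;'
```

Using `--clean --if-exists` drops restored objects before recreating them, which suits a replay into a live but quiesced database; confirm with your DBA standards before relying on it.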
2.4. After you restore
When the restore commands finish and application services are healthy again, validate the control plane and plan for device reconciliation. Restoring the database changes what the service knows about devices; edge devices must reconnect and compare their live state to the restored specifications.
2.4.1. Operational follow-up
- Re-run checks from Testing backups in the backup topic if you need a structured validation checklist.
- Record commands, secret handling, and timing in your runbooks so the next restore repeats cleanly.
2.4.2. Post-restore device status changes
After a successful restore, devices move through automatic status transitions while they reconnect and reconcile with the restored control plane data.
- AwaitingReconnect: Devices are always placed in `AwaitingReconnect` first. The service waits for each device to report its current state again. Spec reconciliation for those devices remains paused until they reconnect.
- Enrollment requests and post-restore approval: Devices approved after the restored backup was taken do not exist after the restore and must be approved again. After restore:
  - Devices created from a restored enrollment request are placed in `AwaitingReconnect` and follow the normal `AwaitingReconnect` behavior.
  - Devices without an enrollment request before backup, with a non-zero deployed specification version, are placed in `AwaitingReconnect` and follow the normal `AwaitingReconnect` behavior.
  - Devices without an enrollment request before backup, with a zero specification version, move to normal status.
- ConflictPaused: After a device reconnects and reports its current state, the service compares the specification stored in the restored backup with the device-reported version. If the restored backup specification is older (for example, the device had moved forward while backups lagged), the device can enter `ConflictPaused`. Rendering of new specifications stops for that device until an operator resolves the mismatch. Human review is required before you force configuration forward.
- Normal operation: When the restored specification and the device-reported state are compatible, the device returns to normal operational statuses (for example, online or updating) and usual reconciliation resumes.
2.4.2.1. Monitor device status
Use the flightctl CLI to see which devices need attention:
```shell
flightctl get devices
flightctl get devices --field-selector=status.summary.status=AwaitingReconnect
flightctl get devices --field-selector=status.summary.status=ConflictPaused
```
2.4.2.2. Resolve ConflictPaused devices
- Review the specification source: if the device belongs to a fleet, inspect the fleet template and selector; if not, inspect the device spec directly. Review labels and ownership to confirm how the restored specification applies to the device.
- When you are confident the restored specification is what you want, resume the device or a group of devices. Replace `example-device` with your device resource name and adjust selectors to match your environment:

  ```shell
  flightctl resume device example-device
  flightctl resume device --selector="environment=production"
  ```

  Use additional `flightctl resume device` options your deployment supports (for example, field selectors) if you need to resume many devices in bulk.
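The review step above (inspecting the specification source before resuming) can be sketched with the CLI. `example-device` and `example-fleet` are placeholder names, and the exact `get` syntax may vary by release; check `flightctl get --help` on your host:

```shell
# Hypothetical resource names; adjust to your inventory.
# Inspect the device spec, labels, and ownership directly:
flightctl get device/example-device -o yaml
# If the device belongs to a fleet, inspect the fleet template and selector:
flightctl get fleet/example-fleet -o yaml
```

Comparing the fleet template (or device spec) against what the device reports is what tells you whether resuming will roll the device backward or forward.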