Chapter 8. 3scale backup and restore
This section provides you, as the administrator of a Red Hat 3scale API Management installation, with the information needed to:
- Set up the backup procedures for persistent data.
- Perform a restore from backup of the persistent data.
In case of issues with one or more of the MySQL databases, you will be able to restore 3scale correctly to its previous operational state.
8.1. Prerequisites
- A 3scale 2.10 instance. For more information about how to install 3scale, see Installing 3scale on OpenShift.
- jq: For extraction or transformation of JSON data.
- An OpenShift Container Platform 4.x user account with one of the following roles in the OpenShift cluster:
- cluster-admin
- admin
- edit
A user with an edit cluster role bound locally in the namespace of a 3scale installation can perform backup and restore procedures.
The following sections contain information about persistent volumes, using data sets, setting up the backup procedures for persistent data, as well as restoring system databases and OpenShift secrets:
8.2. Persistent volumes and considerations
Persistent volumes
In a 3scale deployment on OpenShift, persistent data can reside in:
- A persistent volume (PV) provided to the cluster by the underlying infrastructure.
- A storage service external to the cluster. This can be in the same data center or elsewhere.
Considerations
The backup and restore procedures for persistent data vary depending on the storage type in use. To ensure that backups and restores preserve data consistency, it is not sufficient to back up the underlying PVs for a database: doing so can capture only partial writes and partial transactions. Use the database's backup mechanisms instead.
Some parts of the data are synchronized between different components. One copy is considered the source of truth for the data set. The other is a copy that is not modified locally, but synchronized from the source of truth. In these cases, upon completion, the source of truth should be restored, and copies in other components synchronized from it.
8.3. Using data sets
This section describes the different data sets in the different persistent stores, their purpose, the storage type used, and whether or not each is the source of truth.
The full state of a 3scale deployment is stored across the following DeploymentConfig objects and their PVs:

Name | Description
---|---
system-mysql | MySQL database (PV)
system-storage | Volume for Files (PV)
backend-redis | Redis database (PV)
system-redis | Redis database (PV)
8.3.1. Defining system-mysql
system-mysql is a relational database which stores information about users, accounts, APIs, plans, and more, in the 3scale Admin Console.
A subset of this information related to services is synchronized to the Backend component and stored in backend-redis. system-mysql is the source of truth for this information.
8.3.2. Defining system-storage
system-storage stores files to be read and written by the System component.
They fall into two categories:
- Configuration files read by the System component at run-time
- Static files, for example, HTML, CSS, JS, uploaded to system by its CMS feature, for the purpose of creating a Developer Portal
System can be scaled horizontally with multiple pods uploading and reading said static files, hence the need for a ReadWriteMany (RWX) PersistentVolume.
8.3.3. Defining backend-redis
backend-redis contains multiple data sets used by the Backend component:
- Usages: This is API usage information aggregated by Backend. It is used by Backend for rate-limiting decisions and by System to display analytics information in the UI or via the API.
- Config: This is configuration information about services, rate limits, and more, that is synchronized from System via an internal API. This is not the source of truth of this information; System and system-mysql are.
- Queues: These are queues of background jobs to be executed by worker processes. They are ephemeral and are deleted once processed.
8.3.4. Defining system-redis
system-redis contains queues for jobs to be processed in background. These are ephemeral and are deleted once processed.
8.4. Backing up system databases
The following commands can be used in any order, as needed, to back up and archive the system databases.
8.4.1. Backing up system-mysql
Execute MySQL Backup Command:
oc rsh $(oc get pods -l 'deploymentConfig=system-mysql' -o json | jq -r '.items[0].metadata.name') bash -c 'export MYSQL_PWD=${MYSQL_ROOT_PASSWORD}; mysqldump --single-transaction -hsystem-mysql -uroot system' | gzip > system-mysql-backup.gz
8.4.2. Backing up system-storage
Archive the system-storage files to another storage:
oc rsync $(oc get pods -l 'deploymentConfig=system-app' -o json | jq '.items[0].metadata.name' -r):/opt/system/public/system ./local/dir
8.4.3. Backing up backend-redis
Back up the dump.rdb file from Redis:
oc cp $(oc get pods -l 'deploymentConfig=backend-redis' -o json | jq '.items[0].metadata.name' -r):/var/lib/redis/data/dump.rdb ./backend-redis-dump.rdb
8.4.4. Backing up system-redis
Back up the dump.rdb file from Redis:
oc cp $(oc get pods -l 'deploymentConfig=system-redis' -o json | jq '.items[0].metadata.name' -r):/var/lib/redis/data/dump.rdb ./system-redis-dump.rdb
8.4.5. Backing up zync-database
Back up the zync_production database:
oc rsh $(oc get pods -l 'deploymentConfig=zync-database' -o json | jq -r '.items[0].metadata.name') bash -c 'pg_dump zync_production' | gzip > zync-database-backup.gz
8.4.6. Backing up OpenShift secrets and ConfigMaps
The following commands back up the OpenShift secrets and ConfigMaps:
8.4.6.1. OpenShift secrets
oc get secrets system-smtp -o json > system-smtp.json
oc get secrets system-seed -o json > system-seed.json
oc get secrets system-database -o json > system-database.json
oc get secrets backend-internal-api -o json > backend-internal-api.json
oc get secrets system-events-hook -o json > system-events-hook.json
oc get secrets system-app -o json > system-app.json
oc get secrets system-recaptcha -o json > system-recaptcha.json
oc get secrets system-redis -o json > system-redis.json
oc get secrets zync -o json > zync.json
oc get secrets system-master-apicast -o json > system-master-apicast.json
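The values stored under .data in the dumped secret files are base64-encoded, which is why the restore commands in Section 8.5 pipe them through base64 -d. A minimal local sketch of that decoding step, using a hypothetical sample value rather than a real secret:

```shell
# Secret .data values are base64-encoded; decode them before use.
# "M3NjYWxl" is a hypothetical sample value, not taken from a real secret.
encoded="M3NjYWxl"
decoded=$(printf '%s' "$encoded" | base64 -d)
echo "$decoded"   # prints "3scale"
```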
8.4.6.2. ConfigMaps
oc get configmaps system-environment -o json > system-environment.json
oc get configmaps apicast-environment -o json > apicast-environment.json
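Since the export commands differ only in the object name, they can be wrapped in a loop. The following is a sketch only: get_json is a hypothetical stand-in that you would replace with oc get secrets "$name" -o json (or oc get configmaps "$name" -o json) when connected to the cluster:

```shell
# Hypothetical stand-in for "oc get <kind> <name> -o json", so the loop
# can run outside a cluster for illustration.
get_json() { printf '{"metadata":{"name":"%s"}}\n' "$1"; }

# Dump each listed object to its own <name>.json file.
for name in system-environment apicast-environment; do
  get_json "$name" > "$name.json"
done

cat system-environment.json
```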
8.5. Restoring system databases
Prevent record creation by scaling down pods, such as system-app, or disabling routes.
Use the following procedures to restore OpenShift secrets and system databases:
8.5.1. Restoring a template-based deployment
Use the following steps to restore a template-based deployment.
Procedure
- Restore secrets before deploying the template:
oc apply -f system-smtp.json
Template parameters are read from the copied secrets and ConfigMaps:
oc new-app --file /opt/amp/templates/amp.yml \
  --param APP_LABEL=$(cat system-environment.json | jq -r '.metadata.labels.app') \
  --param TENANT_NAME=$(cat system-seed.json | jq -r '.data.TENANT_NAME' | base64 -d) \
  --param SYSTEM_DATABASE_USER=$(cat system-database.json | jq -r '.data.DB_USER' | base64 -d) \
  --param SYSTEM_DATABASE_PASSWORD=$(cat system-database.json | jq -r '.data.DB_PASSWORD' | base64 -d) \
  --param SYSTEM_DATABASE=$(cat system-database.json | jq -r '.data.URL' | base64 -d | cut -d '/' -f4) \
  --param SYSTEM_DATABASE_ROOT_PASSWORD=$(cat system-database.json | jq -r '.data.URL' | base64 -d | awk -F '[:@]' '{print $3}') \
  --param WILDCARD_DOMAIN=$(cat system-environment.json | jq -r '.data.THREESCALE_SUPERDOMAIN') \
  --param SYSTEM_BACKEND_USERNAME=$(cat backend-internal-api.json | jq '.data.username' -r | base64 -d) \
  --param SYSTEM_BACKEND_PASSWORD=$(cat backend-internal-api.json | jq '.data.password' -r | base64 -d) \
  --param SYSTEM_BACKEND_SHARED_SECRET=$(cat system-events-hook.json | jq -r '.data.PASSWORD' | base64 -d) \
  --param SYSTEM_APP_SECRET_KEY_BASE=$(cat system-app.json | jq -r '.data.SECRET_KEY_BASE' | base64 -d) \
  --param ADMIN_PASSWORD=$(cat system-seed.json | jq -r '.data.ADMIN_PASSWORD' | base64 -d) \
  --param ADMIN_USERNAME=$(cat system-seed.json | jq -r '.data.ADMIN_USER' | base64 -d) \
  --param ADMIN_EMAIL=$(cat system-seed.json | jq -r '.data.ADMIN_EMAIL' | base64 -d) \
  --param ADMIN_ACCESS_TOKEN=$(cat system-seed.json | jq -r '.data.ADMIN_ACCESS_TOKEN' | base64 -d) \
  --param MASTER_NAME=$(cat system-seed.json | jq -r '.data.MASTER_DOMAIN' | base64 -d) \
  --param MASTER_USER=$(cat system-seed.json | jq -r '.data.MASTER_USER' | base64 -d) \
  --param MASTER_PASSWORD=$(cat system-seed.json | jq -r '.data.MASTER_PASSWORD' | base64 -d) \
  --param MASTER_ACCESS_TOKEN=$(cat system-seed.json | jq -r '.data.MASTER_ACCESS_TOKEN' | base64 -d) \
  --param RECAPTCHA_PUBLIC_KEY="$(cat system-recaptcha.json | jq -r '.data.PUBLIC_KEY' | base64 -d)" \
  --param RECAPTCHA_PRIVATE_KEY="$(cat system-recaptcha.json | jq -r '.data.PRIVATE_KEY' | base64 -d)" \
  --param SYSTEM_REDIS_URL=$(cat system-redis.json | jq -r '.data.URL' | base64 -d) \
  --param SYSTEM_MESSAGE_BUS_REDIS_URL="$(cat system-redis.json | jq -r '.data.MESSAGE_BUS_URL' | base64 -d)" \
  --param SYSTEM_REDIS_NAMESPACE="$(cat system-redis.json | jq -r '.data.NAMESPACE' | base64 -d)" \
  --param SYSTEM_MESSAGE_BUS_REDIS_NAMESPACE="$(cat system-redis.json | jq -r '.data.MESSAGE_BUS_NAMESPACE' | base64 -d)" \
  --param ZYNC_DATABASE_PASSWORD=$(cat zync.json | jq -r '.data.ZYNC_DATABASE_PASSWORD' | base64 -d) \
  --param ZYNC_SECRET_KEY_BASE=$(cat zync.json | jq -r '.data.SECRET_KEY_BASE' | base64 -d) \
  --param ZYNC_AUTHENTICATION_TOKEN=$(cat zync.json | jq -r '.data.ZYNC_AUTHENTICATION_TOKEN' | base64 -d) \
  --param APICAST_ACCESS_TOKEN=$(cat system-master-apicast.json | jq -r '.data.ACCESS_TOKEN' | base64 -d) \
  --param APICAST_MANAGEMENT_API=$(cat apicast-environment.json | jq -r '.data.APICAST_MANAGEMENT_API') \
  --param APICAST_OPENSSL_VERIFY=$(cat apicast-environment.json | jq -r '.data.OPENSSL_VERIFY') \
  --param APICAST_RESPONSE_CODES=$(cat apicast-environment.json | jq -r '.data.APICAST_RESPONSE_CODES') \
  --param APICAST_REGISTRY_URL=$(cat system-environment.json | jq -r '.data.APICAST_REGISTRY_URL')
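Two of the parameters above, SYSTEM_DATABASE and SYSTEM_DATABASE_ROOT_PASSWORD, are sliced out of the decoded database URL with cut and awk. A local sketch of that extraction, using a made-up sample URL in place of the real secret value:

```shell
# Hypothetical sample of the decoded .data.URL value from system-database.json.
url='mysql2://root:s3cr3t@system-mysql/system'

# The 4th '/'-separated field is the database name (SYSTEM_DATABASE).
db_name=$(printf '%s' "$url" | cut -d '/' -f4)

# The 3rd field when splitting on ':' or '@' is the root password
# (SYSTEM_DATABASE_ROOT_PASSWORD).
root_pw=$(printf '%s' "$url" | awk -F '[:@]' '{print $3}')

echo "$db_name"   # prints "system"
echo "$root_pw"   # prints "s3cr3t"
```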
8.5.2. Restoring an operator-based deployment
Use the following steps to restore operator-based deployments.
Procedure
- Install the 3scale operator on OpenShift.
- Restore secrets before creating an APIManager resource:
oc apply -f system-smtp.json
oc apply -f system-seed.json
oc apply -f system-database.json
oc apply -f backend-internal-api.json
oc apply -f system-events-hook.json
oc apply -f system-app.json
oc apply -f system-recaptcha.json
oc apply -f system-redis.json
oc apply -f zync.json
oc apply -f system-master-apicast.json
- Restore ConfigMaps before creating an APIManager resource:
oc apply -f system-environment.json
oc apply -f apicast-environment.json
- Deploy 3scale with the operator using the APIManager custom resource.
8.5.3. Restoring system-mysql
Procedure
Copy the MySQL dump to the system-mysql pod:
oc cp ./system-mysql-backup.gz $(oc get pods -l 'deploymentConfig=system-mysql' -o json | jq '.items[0].metadata.name' -r):/var/lib/mysql
Decompress the backup file:
oc rsh $(oc get pods -l 'deploymentConfig=system-mysql' -o json | jq -r '.items[0].metadata.name') bash -c 'gzip -d ${HOME}/system-mysql-backup.gz'
Restore the MySQL DB backup file:
oc rsh $(oc get pods -l 'deploymentConfig=system-mysql' -o json | jq -r '.items[0].metadata.name') bash -c 'export MYSQL_PWD=${MYSQL_ROOT_PASSWORD}; mysql -hsystem-mysql -uroot system < ${HOME}/system-mysql-backup'
8.5.4. Restoring system-storage
Restore the backup file to system-storage:
oc rsync ./local/dir/system/ $(oc get pods -l 'deploymentConfig=system-app' -o json | jq '.items[0].metadata.name' -r):/opt/system/public/system
8.5.5. Restoring zync-database
Instructions to restore zync-database depend on the deployment type applied for 3scale.
8.5.5.1. Template-based deployments
Procedure
Scale down the zync DeploymentConfig to 0 pods:
oc scale dc zync --replicas=0
oc scale dc zync-que --replicas=0
Copy the Zync database dump to the zync-database pod:
oc cp ./zync-database-backup.gz $(oc get pods -l 'deploymentConfig=zync-database' -o json | jq '.items[0].metadata.name' -r):/var/lib/pgsql/
Decompress the backup file:
oc rsh $(oc get pods -l 'deploymentConfig=zync-database' -o json | jq -r '.items[0].metadata.name') bash -c 'gzip -d ${HOME}/zync-database-backup.gz'
Restore the PostgreSQL DB backup file:
oc rsh $(oc get pods -l 'deploymentConfig=zync-database' -o json | jq -r '.items[0].metadata.name') bash -c 'psql -f ${HOME}/zync-database-backup'
Restore to the original count of replicas, replacing ${ZYNC_REPLICAS} with the number of replicas in the commands below:
oc scale dc zync --replicas=${ZYNC_REPLICAS}
oc scale dc zync-que --replicas=${ZYNC_REPLICAS}
8.5.5.2. Operator-based deployments
Follow the instructions under Deploying 3scale using the operator, in particular Deploying the APIManager custom resource, to redeploy your 3scale instance.
Procedure
Store the number of replicas, replacing ${DEPLOYMENT_NAME} with the name you defined when you created your 3scale deployment:
ZYNC_SPEC=$(oc get APIManager/${DEPLOYMENT_NAME} -o json | jq -r '.spec.zync')
Scale down the zync DeploymentConfig to 0 pods:
oc patch APIManager/${DEPLOYMENT_NAME} --type merge -p '{"spec": {"zync": {"appSpec": {"replicas": 0}, "queSpec": {"replicas": 0}}}}'
Copy the Zync database dump to the zync-database pod:
oc cp ./zync-database-backup.gz $(oc get pods -l 'deploymentConfig=zync-database' -o json | jq '.items[0].metadata.name' -r):/var/lib/pgsql/
Decompress the backup file:
oc rsh $(oc get pods -l 'deploymentConfig=zync-database' -o json | jq -r '.items[0].metadata.name') bash -c 'gzip -d ${HOME}/zync-database-backup.gz'
Restore the PostgreSQL DB backup file:
oc rsh $(oc get pods -l 'deploymentConfig=zync-database' -o json | jq -r '.items[0].metadata.name') bash -c 'psql -f ${HOME}/zync-database-backup'
Restore to the original count of replicas:
oc patch APIManager ${DEPLOYMENT_NAME} --type merge -p '{"spec": {"zync":'"${ZYNC_SPEC}"'}}'
8.5.5.3. Restoring 3scale options with backend-redis and system-redis
By restoring 3scale, you will restore backend-redis and system-redis. These components have the following functions:
- backend-redis: The database that supports application authentication and rate limiting in 3scale. It is also used for statistics storage and temporary job storage.
- system-redis: Provides temporary storage for background jobs for 3scale and is also used as a message bus for the Ruby processes of the system-app pods.
The backend-redis component
The backend-redis component has two databases, data and queues. In a default 3scale deployment, data and queues are deployed in the same Redis database, but in different logical database indexes, /0 and /1. Restoring the data database runs without any issues; however, restoring the queues database can lead to duplicated jobs.
Regarding duplication of jobs, in 3scale the backend workers process background jobs in a matter of milliseconds. If backend-redis fails 30 seconds after the last database snapshot and you try to restore it, the background jobs that happened during those 30 seconds are performed twice, because backend does not have a system in place to avoid duplication.
In this scenario, you must restore the backup, as the /0 database index contains data that is not saved anywhere else. Restoring the /0 database index means that you must also restore the /1 database index, since one cannot be stored without the other. When you choose to deploy the databases on different servers rather than one database with different indexes, the size of the queue will be approximately zero, so it is preferable not to restore backups and to lose a few background jobs instead. This is the case in a 3scale Hosted setup, so you will need to apply different backup and restore strategies for the two databases.
The system-redis component
The majority of the 3scale system background jobs are idempotent, that is, identical requests return an identical result no matter how many times you run them.
The following is a list of examples of events handled by background jobs in system:
- Notification jobs such as plan trials about to expire, credit cards about to expire, activation reminders, plan changes, invoice state changes, PDF reports.
- Billing such as invoicing and charging.
- Deletion of complex objects.
- Backend synchronization jobs.
- Indexation jobs, for example with sphinx.
- Sanitization jobs, for example, invoice IDs.
- Janitorial tasks such as purging audits, user sessions, expired tokens, log entries, suspending inactive accounts.
- Traffic updates.
- Proxy configuration change monitoring and proxy deployments.
- Background signup jobs.
- Zync jobs such as Single sign-on (SSO) synchronization, routes creation.
When you restore the background jobs listed above, the 3scale system maintains the state of each restored job. It is important to check the integrity of the system after the restore is complete.
8.5.6. Ensuring information consistency between Backend and System
After restoring backend-redis, force a sync of the Config information from System to ensure that the information in Backend is consistent with that in System, which is the source of truth.
8.5.6.1. Managing the deployment configuration for backend-redis
These steps are intended for running instances of backend-redis.
Procedure
Edit the redis-config configmap:
oc edit configmap redis-config
Comment the SAVE commands in the redis-config configmap:
#save 900 1
#save 300 10
#save 60 10000
Set appendonly to no in the redis-config configmap:
appendonly no
Redeploy backend-redis to load the new configuration:
oc rollout latest dc/backend-redis
Check the status of the rollout to ensure it has finished:
oc rollout status dc/backend-redis
Rename the dump.rdb file:
oc rsh $(oc get pods -l 'deploymentConfig=backend-redis' -o json | jq '.items[0].metadata.name' -r) bash -c 'mv ${HOME}/data/dump.rdb ${HOME}/data/dump.rdb-old'
Rename the appendonly.aof file:
oc rsh $(oc get pods -l 'deploymentConfig=backend-redis' -o json | jq '.items[0].metadata.name' -r) bash -c 'mv ${HOME}/data/appendonly.aof ${HOME}/data/appendonly.aof-old'
Move the backup file to the pod:
oc cp ./backend-redis-dump.rdb $(oc get pods -l 'deploymentConfig=backend-redis' -o json | jq '.items[0].metadata.name' -r):/var/lib/redis/data/dump.rdb
Redeploy backend-redis to load the backup:
oc rollout latest dc/backend-redis
Check the status of the rollout to ensure it has finished:
oc rollout status dc/backend-redis
Create the appendonly file:
oc rsh $(oc get pods -l 'deploymentConfig=backend-redis' -o json | jq '.items[0].metadata.name' -r) bash -c 'redis-cli BGREWRITEAOF'
After a while, ensure that the AOF rewrite is complete:
oc rsh $(oc get pods -l 'deploymentConfig=backend-redis' -o json | jq '.items[0].metadata.name' -r) bash -c 'redis-cli info' | grep aof_rewrite_in_progress
- While aof_rewrite_in_progress = 1, the execution is in progress.
- Check periodically until aof_rewrite_in_progress = 0. Zero indicates that the execution is complete.
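The INFO output is a list of key:value lines, so this check can be scripted with grep. A local sketch using a hypothetical INFO excerpt in place of live redis-cli output:

```shell
# Hypothetical excerpt of "redis-cli info" output; in the cluster this
# would come from the oc rsh command above.
info='aof_enabled:1
aof_rewrite_in_progress:0
aof_last_bgrewrite_status:ok'

# Succeeds only once the rewrite counter has reached 0.
if printf '%s\n' "$info" | grep -q '^aof_rewrite_in_progress:0'; then
  status="rewrite complete"
fi
echo "$status"
```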
Edit the redis-config configmap:
oc edit configmap redis-config
Uncomment the SAVE commands in the redis-config configmap:
save 900 1
save 300 10
save 60 10000
Set appendonly to yes in the redis-config configmap:
appendonly yes
Redeploy backend-redis to reload the default configuration:
oc rollout latest dc/backend-redis
Check the status of the rollout to ensure it has finished:
oc rollout status dc/backend-redis
8.5.6.2. Managing the deployment configuration for system-redis
These steps are intended for running instances of system-redis.
Procedure
Edit the redis-config configmap:
oc edit configmap redis-config
Comment the SAVE commands in the redis-config configmap:
#save 900 1
#save 300 10
#save 60 10000
Set appendonly to no in the redis-config configmap:
appendonly no
Redeploy system-redis to load the new configuration:
oc rollout latest dc/system-redis
Check the status of the rollout to ensure it has finished:
oc rollout status dc/system-redis
Rename the dump.rdb file:
oc rsh $(oc get pods -l 'deploymentConfig=system-redis' -o json | jq '.items[0].metadata.name' -r) bash -c 'mv ${HOME}/data/dump.rdb ${HOME}/data/dump.rdb-old'
Rename the appendonly.aof file:
oc rsh $(oc get pods -l 'deploymentConfig=system-redis' -o json | jq '.items[0].metadata.name' -r) bash -c 'mv ${HOME}/data/appendonly.aof ${HOME}/data/appendonly.aof-old'
Move the backup file to the pod:
oc cp ./system-redis-dump.rdb $(oc get pods -l 'deploymentConfig=system-redis' -o json | jq '.items[0].metadata.name' -r):/var/lib/redis/data/dump.rdb
Redeploy system-redis to load the backup:
oc rollout latest dc/system-redis
Check the status of the rollout to ensure it has finished:
oc rollout status dc/system-redis
Create the appendonly file:
oc rsh $(oc get pods -l 'deploymentConfig=system-redis' -o json | jq '.items[0].metadata.name' -r) bash -c 'redis-cli BGREWRITEAOF'
After a while, ensure that the AOF rewrite is complete:
oc rsh $(oc get pods -l 'deploymentConfig=system-redis' -o json | jq '.items[0].metadata.name' -r) bash -c 'redis-cli info' | grep aof_rewrite_in_progress
- While aof_rewrite_in_progress = 1, the execution is in progress.
- Check periodically until aof_rewrite_in_progress = 0. Zero indicates that the execution is complete.
Edit the redis-config configmap:
oc edit configmap redis-config
Uncomment the SAVE commands in the redis-config configmap:
save 900 1
save 300 10
save 60 10000
Set appendonly to yes in the redis-config configmap:
appendonly yes
Redeploy system-redis to reload the default configuration:
oc rollout latest dc/system-redis
Check the status of the rollout to ensure it has finished:
oc rollout status dc/system-redis
8.5.7. Restoring backend-worker
Restore to the latest version of backend-worker:
oc rollout latest dc/backend-worker
Check the status of the rollout to ensure it has finished:
oc rollout status dc/backend-worker
8.5.8. Restoring system-app
Restore to the latest version of system-app:
oc rollout latest dc/system-app
Check the status of the rollout to ensure it has finished:
oc rollout status dc/system-app
8.5.9. Restoring system-sidekiq
Restore to the latest version of system-sidekiq:
oc rollout latest dc/system-sidekiq
Check the status of the rollout to ensure it has finished:
oc rollout status dc/system-sidekiq
8.5.9.1. Restoring system-sphinx
Restore to the latest version of system-sphinx:
oc rollout latest dc/system-sphinx
Check the status of the rollout to ensure it has finished:
oc rollout status dc/system-sphinx
8.5.9.2. Restoring OpenShift routes managed by Zync
Force Zync to recreate missing OpenShift routes:
oc rsh $(oc get pods -l 'deploymentConfig=system-sidekiq' -o json | jq '.items[0].metadata.name' -r) bash -c 'bundle exec rake zync:resync:domains'