Migrating Red Hat 3scale API Management
Migrate or upgrade 3scale API Management and its components
Abstract
Preface
DO NOT ATTEMPT TO INSTALL OR UPGRADE TO 3scale 2.16 IF YOUR DEPLOYMENT USES ORACLE DATABASE. 3scale 2.16 is currently not compatible with Oracle DB. Upgrading from 2.15 to 2.16 in such environments will lead to severe issues preventing the system from operating correctly. Deployments using Oracle DB must stay on version 2.15 until compatibility is added in a future maintenance release (planned for 2.16.1).
This guide provides the information to upgrade Red Hat 3scale API Management to the latest version via the 3scale operator. You will find details required to upgrade your 3scale installation from 2.15 to 2.16, as well as the steps to upgrade APIcast in an operator-based deployment.
To upgrade your 3scale on-premises deployment, see Chapter 1, 3scale API Management operator-based upgrade guide: from 2.15 to 2.16.
To upgrade APIcast in an operator-based deployment, see Chapter 2, APIcast operator-based upgrade guide: from 2.15 to 2.16.
Providing feedback on Red Hat documentation
We appreciate your feedback on our documentation.
To propose improvements, open a Jira issue and describe your suggested changes. Provide as much detail as possible to enable us to address your request quickly.
Prerequisite
- You have a Red Hat Customer Portal account. This account enables you to log in to the Red Hat Jira Software instance. If you do not have an account, you will be prompted to create one.
Procedure
- Click the following link: Create issue.
- In the Summary text box, enter a brief description of the issue.
In the Description text box, provide the following information:
- The URL of the page where you found the issue.
- A detailed description of the issue.

You can leave the information in any other fields at their default values.
- Click Create to submit the Jira issue to the documentation team.
Thank you for taking the time to provide feedback.
Chapter 1. 3scale API Management operator-based upgrade guide: from 2.15 to 2.16
Upgrade Red Hat 3scale API Management from version 2.15 to 2.16 in an operator-based installation to manage 3scale on OpenShift 4.x.
To automatically obtain a micro-release of 3scale, make sure automatic updates are enabled. Do not enable automatic updates if you are using an external Oracle database. To check this setting, see Configuring automated application of micro releases.
In order to understand the required conditions and procedure, read the entire upgrade guide before applying the listed steps. The upgrade process disrupts the provision of the service until the procedure finishes. Due to this disruption, make sure to have a maintenance window.
1.1. Prerequisites to perform the upgrade
This section describes the required configurations to upgrade 3scale from 2.15 to 2.16 in an operator-based installation.

To resolve certificate verification failures with the 3scale operator, add the annotation that skips certificate verification to the affected custom resource (CR). This annotation can be applied to a CR during creation or added to an existing CR. Once applied, the errors are reconciled.
- An OpenShift Container Platform (OCP) 4.12, 4.14, 4.16, 4.17, 4.18, 4.19 or 4.20 cluster with administrator access. Ensure that your OCP environment is upgraded to at least version 4.12, which is the minimal requirement for proceeding with a 3scale update.
- 3scale 2.15 previously deployed via the 3scale operator.
Make sure the latest CSV of the threescale-2.15 channel is in use. To check it:

- If the approval setting for the subscription is automatic, you should already be on the latest CSV version of the channel.
- If the approval setting for the subscription is manual, make sure you approve all pending InstallPlans and have the latest CSV version.
- Keep in mind that if there is a pending install plan, there might be more pending install plans, which will only be shown after the existing pending plan has been installed.
1.1.1. External databases requirement
In 3scale 2.16, internal databases are not supported and are not managed by the operator. The only exception is the Zync database, which can still be used as an internal component (the zync-database deployment).
Before upgrading to 2.16, ensure that all databases used by your 3scale installation are not managed by the operator:

- System database: MySQL, PostgreSQL, or Oracle (configured by the system-database secret)
- Backend Redis (configured by the backend-redis secret)
- System Redis (configured by the system-redis secret)
Ensure that the versions of your databases are supported in 3scale 2.16 before proceeding with the upgrade. Refer to Components and minimum version requirements for more information.
If you are using internal databases, first migrate them to the external databases, following the instructions in Externalizing databases for 2.16.
1.1.2. 3scale API Management 2.16 pre-flight checks
Before installing 3scale 2.16 via the operator, ensure your database components meet the required minimum versions. This pre-flight check is critical to avoid breaking your 3scale instance during the upgrade.
- If the databases are not upgraded, the 3scale instance will not be upgraded to 2.16.
- You can upgrade your databases with or without the 3scale 2.16 operator running. If the operator is running, it checks database versions every 10 minutes and automatically triggers the upgrade process. If the operator was not running during the database upgrade, scale it back up afterward so that it can verify the requirements and continue with the installation.
1.1.2.1. Components and minimum version requirements
Note:

- The Oracle Database is not checked.
- Zync with external databases is not checked.
Ensure the following components are at or above the specified versions:
System-app component:
- MySQL: 8.0.0
- PostgreSQL: 15.0
Backend component:
- Redis: 7.2 (two instances required)
Version verification
Verify MySQL version:

mysql --version

Verify PostgreSQL version:

psql --version

Verify Redis version:

redis-server --version
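When scripting these checks, version strings can be compared with sort -V. The following is a minimal sketch, not part of the official procedure; the helper name version_ge and the sample version numbers are illustrative.

```shell
# Sketch (assumption, not from the official guide): compare a detected version
# against a required minimum using sort -V, which orders version strings.
version_ge() {
  # Succeeds (exit 0) if $1 >= $2.
  [ "$(printf '%s\n%s\n' "$2" "$1" | sort -V | head -n1)" = "$2" ]
}

version_ge "15.6" "15.0" && echo "PostgreSQL OK"
version_ge "7.0.12" "7.2" || echo "Redis too old"
```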
1.1.2.2. Upgrading databases not meeting requirements
If your database versions do not meet the minimum requirements, follow these steps:
Install the 3scale 2.16 operator:
- The 2.16 operator is installed regardless of the database versions.
Upgrade databases:
- Upgrade MySQL, PostgreSQL, or Redis to meet the minimum required versions.
- Note: Follow the official documentation for the upgrade procedures of each database.
Resume 2.16 upgrade:
- Once the databases are upgraded, the 3scale 2.16 operator detects the new versions.
- The upgrade process for 3scale 2.16 will then proceed automatically.
By following these pre-flight checks and ensuring your database components are up-to-date, you can transition to 3scale 2.16.
1.2. Upgrading from 2.15 to 2.16 in an operator-based installation
To upgrade 3scale from version 2.15 to 2.16 in an operator-based deployment:
- Log in to the OCP console using the account with administrator privileges.
- Select the project where the 3scale-operator has been deployed.
- Click Operators > Installed Operators.
- Select Red Hat Integration - 3scale > Subscription > Channel.
Edit the channel of the subscription by selecting threescale-2.16 and save the changes.
This will start the upgrade process.
Query the pods' status on the project until you see all the new versions are running and ready without errors:
oc get pods -n <3scale_namespace>

Note:

- The pods might have temporary errors during the upgrade process.
- The time required to upgrade pods can vary from 5-10 minutes.
- After new pod versions are running, confirm a successful upgrade by logging in to the 3scale Admin Portal and checking that it works as expected.
Check the status of the APIManager objects and get the YAML content by running the following command. <myapimanager> represents the name of your APIManager:
oc get apimanager <myapimanager> -n <3scale_namespace> -o yaml

The new annotations and their values should be as follows:

apps.3scale.net/apimanager-threescale-version: "2.16"
apps.3scale.net/threescale-operator-version: "0.13.x"
After you have performed all steps, the 3scale upgrade from 2.15 to 2.16 in an operator-based deployment is complete.
1.3. Upgrading from 2.15 to 2.16 in an operator-based installation with an external Oracle database
Follow this procedure to update your 3scale operator-based installation with an external Oracle database.
Procedure
- Follow the steps in the Installing Red Hat 3scale API Management guide to create a new system-oracle-3scale-2.16.0-1 image.
- Follow the steps in Upgrading from 2.15 to 2.16 in an operator-based installation to upgrade the 3scale operator.
- Once the upgrade is completed, update the APIManager custom resource with the new image created in the first step of this procedure as described in Installing 3scale API Management with Oracle using the operator.
Chapter 2. APIcast operator-based upgrade guide: from 2.15 to 2.16
Upgrading APIcast from 2.15 to 2.16 in an operator-based installation helps you use the APIcast API gateway to integrate your internal and external application programming interface (API) services with 3scale.
In order to understand the required conditions and procedure, read the entire upgrade guide before applying the listed steps. The upgrade process disrupts the provision of the service until the procedure finishes. Due to this disruption, make sure to have a maintenance window.
2.1. Prerequisites to perform the upgrade
To perform the upgrade of APIcast from 2.15 to 2.16 in an operator-based installation, the following prerequisites must be in place:
- An OpenShift Container Platform (OCP) 4.12, 4.14, 4.16, 4.17, 4.18, 4.19 or 4.20 cluster with administrator access. Ensure that your OCP environment is upgraded to at least version 4.12, which is the minimal requirement for proceeding with an APIcast update.
- APIcast 2.15 previously deployed via the APIcast operator.
Make sure the latest CSV of the threescale-2.15 channel is in use. To check it:

- If the approval setting for the subscription is automatic, you should already be on the latest CSV version of the channel.
- If the approval setting for the subscription is manual, make sure you approve all pending InstallPlans and have the latest CSV version.
- Keep in mind that if there is a pending install plan, there might be more pending install plans, which will only be shown after the existing pending plan has been installed.
2.2. Upgrading APIcast from 2.15 to 2.16 in an operator-based installation
Upgrade APIcast from 2.15 to 2.16 in an operator-based installation so that APIcast can function as the API gateway in your 3scale installation.
Procedure
- Log in to the OCP console using the account with administrator privileges.
- Select the project where the APIcast operator has been deployed.
- Click Operators > Installed Operators.
- Select Red Hat Integration - 3scale APIcast gateway > Subscription > Channel.
Edit the channel of the subscription by selecting the threescale-2.16 channel and save the changes.
This will start the upgrade process.
Query the pods' status on the project until you see all the new versions are running and ready without errors:
oc get pods -n <apicast_namespace>

Note:

- The pods might have temporary errors during the upgrade process.
- The time required to upgrade pods can vary from 5-10 minutes.
Check the status of the APIcast objects and get the YAML content by running the following command:
oc get apicast <myapicast> -n <apicast_namespace> -o yaml

The new annotation and its value should be as follows:

apicast.apps.3scale.net/operator-version: "0.13.x"
After you have performed all steps, the APIcast upgrade from 2.15 to 2.16 in an operator-based deployment is complete.
Chapter 3. Externalizing databases for 2.16
The procedure of externalizing databases consists of migrating the internal databases used by 3scale to external databases. In this context, the term "external" means that the databases are not part of the 3scale installation and are not managed by the 3scale operator. The term does not indicate whether the database is hosted inside or outside the OpenShift cluster where 3scale is installed, or whether it resides in the same namespace as the 3scale installation.
To avoid data corruption and inconsistencies, the process must be performed while 3scale is not running. Therefore, schedule a maintenance window for the procedure. It is recommended to perform the procedure on a test environment before attempting the migration on a production environment.
3.1. Prerequisites
- Red Hat 3scale API Management 2.15 installed and running successfully on a supported version of Red Hat OpenShift Container Platform.
- Enough permissions to create, update, and delete OpenShift resources: deployments, persistent volumes and persistent volume claims, config maps, secrets, and the APIManager resource.
- If the databases will be deployed on the OpenShift cluster, the cluster must have enough resources to create a new PostgreSQL database (if PostgreSQL is currently used) and two additional Redis instances.
- Consider your post migration verification steps before proceeding. This may require you to take a snapshot of data using the API or portal in order to verify a successful migration.
- You have the oc CLI installed on the machine where the procedure will be performed.
This guide provides steps for creating the databases within the OpenShift cluster, using the Deployment resource, in a way similar to how the 3scale operator created the internal databases in versions 2.15 and older. This setup is not recommended for production environments.
3.2. PostgreSQL 10 upgrade - On-cluster
This section covers the steps necessary to upgrade the internal PostgreSQL database from version 10 to version 15 while keeping the database on the cluster.
A high-level overview of the migration:
- Scale down the 3scale instance, keeping the system database running.
- Export the database data into a dump file.
- Deploy the new database on the cluster.
- Restore the database data into the newly created database.
- Change the database connection string for the system database to point to the new database.
- Mark the system database as external in APIManager.
- Start the 3scale instance.
- In the event of failure, scale down the 3scale instance, point it back to the old database, remove the external database setting in APIManager, and start 3scale again.
3.2.1. Preliminary steps
Log in to the OpenShift cluster where your 3scale On-premises instance is installed:
oc login <url> <authentication-parameters>

Replace <url> and <authentication-parameters> with your own OpenShift server URL and authentication parameters. The authentication parameters can be either -u <username> or --token=<token>.

Expose the following environment variables:

export THREESCALE_NAMESPACE=<3scale-namespace>
export OPERATOR_NAMESPACE=<3scale-operator-namespace>

Replace <3scale-namespace> and <3scale-operator-namespace> with the names of the namespaces where 3scale and the 3scale operator are installed, respectively.

Switch to the 3scale installation namespace. The commands below assume that the current namespace is the one where 3scale is installed, unless another namespace is specified explicitly with the -n option:

oc project $THREESCALE_NAMESPACE

Export the values of the replica counts for each deployment before scaling them down, to ensure that when the 3scale instance is scaled back up, the deployments are restored to their original replica counts.
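One possible way to record the counts can be sketched as follows. This is an assumption, not the official procedure: the helper name save_replicas and its variable-naming scheme are illustrative, and the default 3scale deployment names are assumed.

```shell
# Hedged sketch: record each deployment's current replica count in a shell
# variable named REPLICAS_<name> ('-' mapped to '_') before scaling down.
# Assumes the default 3scale deployment names and a logged-in oc session.
save_replicas() {
  for d in "$@"; do
    var="REPLICAS_$(printf '%s' "$d" | tr '-' '_')"
    count="$(oc get deployment "$d" -o jsonpath='{.spec.replicas}')"
    export "$var=$count"
    echo "$var=$count"
  done
}

# Example invocation (against a live cluster):
# save_replicas apicast-production apicast-staging backend-cron backend-listener \
#   backend-worker system-app system-sidekiq system-searchd zync zync-que
```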
3.2.2. Scale 3scale Operator and 3scale instance down
Before scaling down the 3scale instance and the 3scale operator, ensure you are aware of the resources created in your 3scale instance. This knowledge is required to confirm that all the content remains in the 3scale system after the migration.
Scale down the deployment of the 3scale operator controller to prevent it from interfering with the scaling down of other pods.
oc scale deployment threescale-operator-controller-manager-v2 -n $OPERATOR_NAMESPACE --replicas=0
Scale down all the pods of the 3scale deployment, except system-postgresql:
oc scale deployment/{system-memcache,zync-database,apicast-production,apicast-staging,backend-cron,backend-listener,backend-worker,backend-redis,system-app,system-redis,system-sidekiq,system-searchd,zync,zync-que} --replicas=0
Verify that all pods have been scaled down with the following command:
oc get deployment/{system-memcache,zync-database,apicast-production,apicast-staging,backend-cron,backend-listener,backend-worker,backend-redis,system-app,system-redis,system-sidekiq,system-searchd,zync,zync-que}
The column READY should show 0/0 for all the deployments listed above.
3.2.3. Prepare a PostgreSQL dump
Save the name of the system-postgresql pod in an environment variable:
POSTGRES_POD=$(oc get pods -l deployment=system-postgresql -o jsonpath='{.items[0].metadata.name}')
Export database data into a dump file on the pod:
oc exec $POSTGRES_POD -- pg_dump -U system -d system -F c -b -v -f /tmp/db_dump.backup
Ensure the command is executed successfully.
Copy the dump from the pod to the host machine:
oc cp $POSTGRES_POD:/tmp/db_dump.backup ./db_dump.backup
During the copy command execution, you might encounter a message:
tar: Removing leading `/' from member names
This is expected and does not mean your data is corrupted or was unsuccessfully pulled from the database.
3.2.4. Prepare the required resources for the new database
Create or switch to the namespace where the database will be deployed.
Export the name of the namespace where the new PostgreSQL database will be installed into a variable. You can use an existing namespace, or create a new one. It is also possible to deploy the new database in the same namespace where 3scale is installed, but it is not recommended.
DB_NAMESPACE=<database-target-namespace>

oc project $DB_NAMESPACE

or

oc new-project $DB_NAMESPACE
Export the existing OpenShift secret system-database from the 3scale namespace:
oc get secret system-database -n $THREESCALE_NAMESPACE -o yaml > system-database-secret.yml
Create a secret in the database namespace with the same username and password as in the original database (only needed if the database is installed in a namespace different from where 3scale is installed):
DB_USER=$(oc get secret system-database -n $THREESCALE_NAMESPACE -o jsonpath='{.data.DB_USER}' | base64 -d)
DB_PASSWORD=$(oc get secret system-database -n $THREESCALE_NAMESPACE -o jsonpath='{.data.DB_PASSWORD}' | base64 -d)
oc create secret generic system-database --from-literal=DB_USER=$DB_USER --from-literal=DB_PASSWORD=$DB_PASSWORD --namespace=$DB_NAMESPACE
Create a YAML file system-postgresql-deployment-external.yaml locally with the specification of the new deployment for PostgreSQL 15:
Consider updating the following:

- Labels, annotations, and environment variables, if you had or require custom values.
- Limits and requests, if you had or require custom values.
- Other values as needed for your requirements.
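The exact deployment specification is not reproduced here, so the following is an illustrative sketch only. The image reference, labels, and resource layout are assumptions chosen to match the names used elsewhere in this guide (the system-postgresql-external deployment, the postgresql-db-external service account, and the postgresql-data-external PVC); adapt it to your environment.

```yaml
# Illustrative sketch only; image, labels, and values are assumptions.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: system-postgresql-external
  labels:
    app: system-postgresql-external
spec:
  replicas: 1
  selector:
    matchLabels:
      deployment: system-postgresql-external
  template:
    metadata:
      labels:
        deployment: system-postgresql-external
        app: system-postgresql-external
    spec:
      serviceAccountName: postgresql-db-external
      containers:
        - name: system-postgresql
          image: registry.redhat.io/rhel8/postgresql-15   # assumed image
          ports:
            - containerPort: 5432
          env:
            - name: POSTGRESQL_USER
              valueFrom:
                secretKeyRef: { name: system-database, key: DB_USER }
            - name: POSTGRESQL_PASSWORD
              valueFrom:
                secretKeyRef: { name: system-database, key: DB_PASSWORD }
            - name: POSTGRESQL_DATABASE
              value: system
          volumeMounts:
            - name: postgresql-data
              mountPath: /var/lib/pgsql/data
      volumes:
        - name: postgresql-data
          persistentVolumeClaim:
            claimName: postgresql-data-external
```

The deployment label deployment=system-postgresql-external matches the selector used by the oc get pods commands later in this guide.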
Create a service account for the PostgreSQL deployment and label it:
oc create serviceaccount postgresql-db-external
oc label serviceaccount postgresql-db-external app=system-postgresql-external
Create a YAML file postgresql-data-external-pvc.yaml locally with the specification of the Persistent Volume Claim for the new database:
Adjust the storage request as required.
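As an illustration only, a minimal PVC matching the postgresql-data-external name used elsewhere in this guide might look as follows; the storage size and omitted storage class are assumptions.

```yaml
# Illustrative sketch; storage size is an assumption.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: postgresql-data-external
  labels:
    app: system-postgresql-external
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
```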
Create a YAML file system-postgresql-service.yaml locally with the specification of the Service for the new database:
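As an illustration only, a minimal Service matching the system-postgresql-service name used in the connection string later in this guide might look as follows; the selector label deployment=system-postgresql-external matches the label queried by the oc get pods commands in this guide, and the port is the PostgreSQL default.

```yaml
# Illustrative sketch; selector and port are assumptions.
apiVersion: v1
kind: Service
metadata:
  name: system-postgresql-service
  labels:
    app: system-postgresql-external
spec:
  selector:
    deployment: system-postgresql-external
  ports:
    - port: 5432
      targetPort: 5432
```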
3.2.5. Create the new PostgreSQL resources in the namespace
Create the resources from the YAML files created in the previous steps using oc apply:
oc apply -f postgresql-data-external-pvc.yaml
oc apply -f system-postgresql-deployment-external.yaml
oc apply -f system-postgresql-service.yaml
Verify that all the resources have been created properly with the following command:
oc get deployment,pvc,svc,serviceaccount -l app=system-postgresql-external
Specifically, check that the column READY for the system-postgresql-external deployment shows 1/1.
3.2.6. Upload database dump to PostgreSQL
Once the new PostgreSQL deployment pod is ready, copy the database dump file from the local machine to the pod:
oc cp ./db_dump.backup $(oc get pods -l 'deployment=system-postgresql-external' -o json | jq '.items[0].metadata.name' -r):/tmp
Restore the database using the dump file:
oc rsh $(oc get pods -l 'deployment=system-postgresql-external' -o json | jq -r '.items[0].metadata.name') \
bash -c 'pg_restore -v -h localhost -U postgres -d system /tmp/db_dump.backup'
You may see the following warnings:
pg_restore: error: could not execute query: ERROR: schema "public" already exists
Command was: CREATE SCHEMA public;
and
pg_restore: warning: errors ignored on restore: 1
command terminated with exit code 1
These warning messages can be ignored; they do not mean that the process has failed.
To verify that the restore process has completed successfully, run the following command to show some data in the database:
oc rsh $(oc get pods -l 'deployment=system-postgresql-external' -o json | jq -r '.items[0].metadata.name') \
psql -U postgres -d system -c 'SELECT org_name FROM accounts LIMIT 20;'
The output should show the org names of the accounts. Verify that the names are as expected.
3.2.7. Update 3scale secret
Switch back to the 3scale installation namespace:
oc project $THREESCALE_NAMESPACE
The system-database secret needs to be updated to point to the new external database. Before updating the secret, make a backup of the existing resource:
oc get secret system-database -o yaml > system-database-secret.yaml
Verify that the variables $DB_NAMESPACE, $DB_USER, and $DB_PASSWORD are set. Then assign the new database connection string to the DB_URL variable:
DB_URL=postgresql://$DB_USER:$DB_PASSWORD@system-postgresql-service.$DB_NAMESPACE.svc.cluster.local/system
If the new PostgreSQL database was created in a different way, not following the exact steps described in this guide, modify the connection string accordingly.
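As a concrete illustration, the string is assembled as follows. The values here are placeholders, not real credentials; note also that a password containing URL-special characters (such as @ or :) must be percent-encoded in a connection URL.

```shell
# Illustrative values only; real credentials come from the system-database secret.
DB_USER=system
DB_PASSWORD=example-password
DB_NAMESPACE=threescale-db
DB_URL="postgresql://$DB_USER:$DB_PASSWORD@system-postgresql-service.$DB_NAMESPACE.svc.cluster.local/system"
echo "$DB_URL"
```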
Patch the system-database secret under the 3scale installation namespace to point to the new PostgreSQL instance.
oc patch secret system-database -p "{\"stringData\":{\"URL\":\"$DB_URL\"}}"
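The -p payload is plain JSON; the backslash-escaped quotes survive shell interpolation of $DB_URL. A quick sketch with illustrative values:

```shell
# Illustrative values; the escaped quotes produce a valid JSON document.
DB_URL='postgresql://system:secret@db.example.svc/system'
PATCH="{\"stringData\":{\"URL\":\"$DB_URL\"}}"
echo "$PATCH"
```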
3.2.8. Mark the system database as external in APIManager resource
Update the APIManager custom resource to indicate that the database of the system component is external. Run the following commands:
APIMANAGER_NAME=$(oc get apimanager -o jsonpath='{.items[0].metadata.name}')
oc patch apimanager $APIMANAGER_NAME --type=merge -p '{"spec": {"externalComponents": {"system": {"database": true}}}}'
This will "disconnect" the operator from the database, meaning that the 3scale operator will no longer reconcile the database deployment or deployment configuration and the associated persistent volume claim.
3.2.9. Scale up the 3scale pods
Scale up the following pods to the replica counts stored in environment variables earlier.
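As a hedged sketch, not the official procedure: counts saved earlier in REPLICAS_<name>-style variables could be restored like this. The helper name restore_replicas and the variable-naming scheme are illustrative, and the default deployment names are assumed.

```shell
# Hedged sketch: restore replica counts previously saved in REPLICAS_<name>
# variables (names with '-' mapped to '_'). Defaults to 1 when no value saved.
restore_replicas() {
  for d in "$@"; do
    var="REPLICAS_$(printf '%s' "$d" | tr '-' '_')"
    eval "count=\${$var:-1}"
    oc scale deployment "$d" --replicas="$count"
  done
}

# Example invocation (against a live cluster):
# restore_replicas apicast-production apicast-staging backend-listener \
#   backend-worker system-app system-sidekiq system-searchd zync zync-que
```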
Scale up the deployment of the 3scale operator controller back to 1 replica.
oc scale deployment/threescale-operator-controller-manager-v2 -n $OPERATOR_NAMESPACE --replicas=1
Once the threescale-operator-controller-manager-v2 pod is up and running, the 3scale operator reconciles the resources to ensure that the state of the cluster matches the desired state defined in the APIManager custom resource. For example, if a replica count is specified for any of the components in the APIManager custom resource, the operator scales the corresponding deployment to match that number.
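For example, a hypothetical APIManager excerpt with explicit replica counts that the operator would enforce during reconciliation; field names follow the APIManager CRD, and the values are illustrative:

```yaml
# Hypothetical excerpt; resource name and counts are illustrative.
apiVersion: apps.3scale.net/v1alpha1
kind: APIManager
metadata:
  name: apimanager-sample
spec:
  apicast:
    productionSpec:
      replicas: 1
    stagingSpec:
      replicas: 1
  backend:
    listenerSpec:
      replicas: 1
```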
Verify that all the pods are up and running with the following command:
oc get deployments
All deployments should show matching numbers of ready and desired replicas in the READY column, for example, 1/1 or 2/2, except system-postgresql.
Verify that everything is working properly by logging in to the Admin Portal and the Developer Portal and checking that the APIs work as expected.
In case you observe any errors in the system-app or system-sidekiq pods, you can follow the instructions in Rolling back to revert the changes and restore the system to use the internal PostgreSQL database.
3.2.10. Delete the internal PostgreSQL deployment
At this point, the 3scale instance should have fully recovered. Run suitable tests against the installation to confirm that the data was correctly migrated.
Once confirmed that the database is fully functional and data is correct, remove the previous PostgreSQL deployment and the associated PVC.
Only perform the steps below once you are 100% sure the data is correct, as the commands will remove the previous PostgreSQL data irreversibly.
oc delete deployment system-postgresql -n $THREESCALE_NAMESPACE
oc delete pvc postgresql-data -n $THREESCALE_NAMESPACE
oc delete service system-postgresql -n $THREESCALE_NAMESPACE
3.2.11. Rolling back to internal PostgreSQL
In the event of a failed migration of the database engine version, it is recommended to restore your database to PostgreSQL 10 and retry. To do this, point 3scale back to the previous database (the default 3scale database).
Ensure the current namespace is the one where 3scale is installed:
oc project $THREESCALE_NAMESPACE
Scale down 3scale instance and the operator:
oc scale deployment threescale-operator-controller-manager-v2 -n $OPERATOR_NAMESPACE --replicas=0

oc scale deployment/{system-memcache,zync-database,apicast-production,apicast-staging,backend-cron,backend-listener,backend-worker,backend-redis,system-app,system-redis,system-sidekiq,system-searchd,zync,zync-que} --replicas=0
Update the APIManager custom resource to indicate that the database of the system component is internal. Run the following commands:
APIMANAGER_NAME=$(oc get apimanager -o jsonpath='{.items[0].metadata.name}')
APIMANAGER_NAME=$(oc get apimanager -o jsonpath='{.items[0].metadata.name}')
oc patch apimanager $APIMANAGER_NAME --type=merge -p '{"spec": {"externalComponents": {"system": {"database": false}}}}'
Re-apply the PostgreSQL database specification to APIManager:
oc patch APIManager $APIMANAGER_NAME --type=merge -p '{"spec": {"system": {"database": {"postgresql": {}}}}}'
Restore the system-database secret from your local backup:
oc delete secret system-database
oc apply -f system-database-secret.yaml
Follow the steps in Scaling up pods to scale up the pods.
After a short while, your instance should be restored to its previous, pre-migration state.
Remove the external database deployment, service, PVC, and service account:
oc delete deployment system-postgresql-external -n $DB_NAMESPACE
oc delete service system-postgresql-service -n $DB_NAMESPACE
oc delete sa postgresql-db-external -n $DB_NAMESPACE
oc delete pvc postgresql-data-external -n $DB_NAMESPACE
3.3. MySQL migration to external - On-cluster
The internal MySQL database is the database used by the System component of 3scale. In 3scale 2.15, it is present in the namespace as the system-mysql deployment.
To externalize the MySQL database, there are two options you can choose from:
- Keep the existing system-mysql deployment, but disconnect it from the operator.
- Migrate the data to a new MySQL server. For this option, follow the instructions in Configuring an external MySQL database.
This section covers the first option, which keeps the existing system-mysql deployment. The approach consists of removing any 3scale references from the resources related to the internal MySQL database and marking the database as an external component in the APIManager custom resource.
Although the steps necessary to migrate to external MySQL are minimal, the migration is service-affecting because of database restarts. However, no dump or backup files are required.
3.3.1. Preliminary steps
Log in to the OpenShift cluster where your 3scale On-premises instance is installed:
oc login <url> <authentication-parameters>
Replace <url> and <authentication-parameters> with your own OpenShift server URL and authentication parameters. Authentication parameters can be either -u <username> or --token=<token>.
Expose the following environment variables:
export THREESCALE_NAMESPACE=<3scale-namespace> export OPERATOR_NAMESPACE=<3scale-operator-namespace>
Replace <3scale-namespace> and <3scale-operator-namespace> with the names of the namespaces where 3scale and the 3scale operator are installed, respectively.
Switch to the 3scale installation namespace. The commands below assume that the current namespace is the one where 3scale is installed, unless another namespace is specified explicitly with the -n option.
oc project $THREESCALE_NAMESPACE
Export the values of the replica counts for each deployment before scaling them down, so that when the 3scale instance is scaled back up, the deployments can be restored to their original replica counts.
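The replica counts can be captured with a short loop. The following is a minimal sketch; the deployment list and the variable naming scheme are assumptions, so adjust them to your installation:

```shell
# Record each deployment's replica count in a variable such as
# REPLICAS_SYSTEM_APP, so the counts can be restored after the migration.
for d in apicast-production apicast-staging backend-cron backend-listener \
         backend-worker system-app system-memcache system-sidekiq \
         system-searchd zync zync-database zync-que; do
  var="REPLICAS_$(echo "$d" | tr 'a-z-' 'A-Z_')"
  export "$var=$(oc get deployment "$d" -o jsonpath='{.spec.replicas}')"
done
```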
3.3.2. Scale 3scale Operator and 3scale instance down
Scale down the deployment of the 3scale operator controller to prevent it from interfering with the scaling down of other pods.
oc scale deployment threescale-operator-controller-manager-v2 -n $OPERATOR_NAMESPACE --replicas=0
Scale down all the pods of the 3scale deployment, except the databases:
oc scale deployment/{system-memcache,zync-database,apicast-production,apicast-staging,backend-cron,backend-listener,backend-worker,system-app,system-sidekiq,system-searchd,zync,zync-que} --replicas=0
Verify that all pods have been scaled down with the following command:
oc get deployment/{system-memcache,zync-database,apicast-production,apicast-staging,backend-cron,backend-listener,backend-worker,system-app,system-sidekiq,system-searchd,zync,zync-que}
The column READY should show 0/0 for all the deployments listed above.
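To avoid reading the table by eye, the check can be scripted. This is a sketch; the deployment names are assumed to match the list above:

```shell
# Print the ready replica count of each scaled-down deployment.
# An empty value or 0 means the deployment has been scaled down.
for d in apicast-production apicast-staging backend-cron backend-listener \
         backend-worker system-app system-memcache system-sidekiq \
         system-searchd zync zync-database zync-que; do
  echo "$d ready=$(oc get deployment "$d" -o jsonpath='{.status.readyReplicas}')"
done
```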
3.3.3. Mark the system database as external in APIManager resource
Update the APIManager custom resource to indicate that the database of the system component is external. This will detach the system-mysql deployment and the related PersistentVolumeClaim and ConfigMap resources from the 3scale operator. Run the following commands:
APIMANAGER_NAME=$(oc get apimanager -o jsonpath='{.items[0].metadata.name}')
oc patch apimanager $APIMANAGER_NAME --type=merge -p '{"spec": {"externalComponents": {"system": {"database": true}}}}'
3.3.4. Remove 3scale operator references from the MySQL resources
Removing ownerReferences is required to ensure that even if the APIManager CR is removed, the database pod and PVCs associated with it are not removed.
Remove the metadata.ownerReferences from the system-mysql deployment:
oc patch deployment system-mysql --type=json -p='[{"op": "remove", "path": "/metadata/ownerReferences"}]'
Remove the metadata.ownerReferences from the mysql-storage PVC:
oc patch pvc mysql-storage --type=json -p='[{"op": "remove", "path": "/metadata/ownerReferences"}]'
Remove the 3scale-related metadata.labels and spec.template.metadata.labels:
oc patch deployment system-mysql --type=json -p='[
  {"op": "remove", "path": "/metadata/labels/app"},
  {"op": "remove", "path": "/metadata/labels/threescale_component"},
  {"op": "remove", "path": "/metadata/labels/threescale_component_element"},
  {"op": "remove", "path": "/spec/template/metadata/labels/app"},
  {"op": "remove", "path": "/spec/template/metadata/labels/rht.comp_ver"},
  {"op": "remove", "path": "/spec/template/metadata/labels/rht.prod_name"},
  {"op": "remove", "path": "/spec/template/metadata/labels/threescale_component_element"},
  {"op": "remove", "path": "/spec/template/metadata/labels/threescale_component"},
  {"op": "remove", "path": "/spec/template/metadata/labels/rht.prod_ver"},
  {"op": "remove", "path": "/spec/template/metadata/labels/com.company"},
  {"op": "remove", "path": "/spec/template/metadata/labels/rht.subcomp_t"},
  {"op": "remove", "path": "/spec/template/metadata/labels/rht.subcomp"},
  {"op": "remove", "path": "/spec/template/metadata/labels/rht.comp"}]'
Remove metadata.labels from the mysql-storage PVC:
oc patch pvc mysql-storage --type=json -p='[ {"op": "remove", "path": "/metadata/labels/app"}, {"op": "remove", "path": "/metadata/labels/threescale_component"}, {"op": "remove", "path": "/metadata/labels/threescale_component_element"} ]'
Remove ownerReferences and 3scale labels from mysql-extra-conf:
oc patch configmap mysql-extra-conf --type=json -p='[{"op": "remove", "path": "/metadata/ownerReferences"}]'
oc label configmap mysql-extra-conf app- threescale_component- threescale_component_element-
Remove image triggers to ensure that the new version of the operator will not trigger an image change on the deployment.
oc set triggers deployment system-mysql --remove-all
oc set triggers deployment system-mysql --from-config
3.3.5. Scale up the 3scale pods
Scale up the following pods to the replica counts stored in environment variables earlier.
Scale up the deployment of the 3scale operator controller back to 1 replica.
oc scale deployment/threescale-operator-controller-manager-v2 -n $OPERATOR_NAMESPACE --replicas=1
Once the threescale-operator-controller-manager-v2 pod is up and running, the 3scale operator reconciles the resources to ensure that the state of the cluster matches the desired state defined in the APIManager custom resource. For example, if a replica count is specified for any of the components in the APIManager custom resource, the operator scales the corresponding deployment to match that number.
Verify that all the pods are up and running with the following command:
oc get deployments
All deployments should show matching numbers of ready and desired replicas in the READY column, for example, 1/1 or 2/2.
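Instead of polling oc get deployments manually, you can block until the rollout completes. The following sketch uses oc wait; the 600-second timeout is an arbitrary choice:

```shell
# Block until every deployment in the current namespace reports the
# Available condition, or fail after 10 minutes.
oc wait deployment --all --for=condition=Available --timeout=600s
```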
Verify that everything is working properly by logging in to the Admin Portal and the Developer Portal and checking that the APIs are working as expected.
3.4. Redis 6 upgrade - On-cluster
This section covers the steps necessary to upgrade the internal backend and system Redis databases from version 6 to version 7 while keeping the databases on the cluster but external to the 3scale instance. Ensure that all the prerequisite steps from this document are considered before starting.
A high-level overview of the migration:
- Scale down 3scale instance but let the databases run
- Take a copy of the databases data (dump file)
- Create deployments for a new database
- Create PVC for the new databases deployment
- Copy the databases data to the newly created databases
- Connect the new databases to the existing 3scale instance
- Mark both databases as external in APIManager
- Start 3scale instance
- In the event of failure, scale down the 3scale instance, point it to an old database, remove the external database in APIManager, and start 3scale again
3.4.1. Preliminary steps
Log in to the OpenShift cluster where your 3scale On-premises instance is installed:
oc login <url> <authentication-parameters>
Replace <url> and <authentication-parameters> with your own OpenShift server URL and authentication parameters. Authentication parameters can be either -u <username> or --token=<token>.
Expose the following environment variables:
export THREESCALE_NAMESPACE=<3scale-namespace> export OPERATOR_NAMESPACE=<3scale-operator-namespace>
Replace <3scale-namespace> and <3scale-operator-namespace> with the names of the namespaces where 3scale and the 3scale operator are installed, respectively.
Switch to the 3scale installation namespace. The commands below assume that the current namespace is the one where 3scale is installed, unless another namespace is specified explicitly with the -n option.
oc project $THREESCALE_NAMESPACE
Export the values of the replica counts for each deployment before scaling them down, so that when the 3scale instance is scaled back up, the deployments can be restored to their original replica counts.
3.4.2. Scale 3scale Operator and 3scale instance down
Scale down the deployment of the 3scale operator controller to prevent it from interfering with the scaling down of other pods.
oc scale deployment threescale-operator-controller-manager-v2 -n $OPERATOR_NAMESPACE --replicas=0
Scale down all the pods of the 3scale deployment, except the Redis instances:
oc scale deployment/{system-memcache,zync-database,apicast-production,apicast-staging,backend-cron,backend-listener,backend-worker,system-app,system-sidekiq,system-searchd,zync,zync-que} --replicas=0
Verify that all pods have been scaled down with the following command:
oc get deployment/{system-memcache,zync-database,apicast-production,apicast-staging,backend-cron,backend-listener,backend-worker,system-app,system-sidekiq,system-searchd,zync,zync-que}
The column READY should show 0/0 for all the deployments listed above.
3.4.3. Back up Redis data
Give Redis a few minutes to process all the keys before taking the dumps; it can take a few seconds for Redis SAVE to trigger and write to the persistent volume.
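If you prefer not to rely on timing, you can force a save explicitly. This is an optional sketch, not part of the documented procedure; the pod selection mirrors the backup commands below:

```shell
# Force a synchronous SAVE and read the timestamp of the last successful
# save before copying the dump file.
BACKEND_POD=$(oc get pods -l 'deployment=backend-redis' -o jsonpath='{.items[0].metadata.name}')
oc rsh "$BACKEND_POD" redis-cli SAVE
oc rsh "$BACKEND_POD" redis-cli LASTSAVE   # Unix timestamp of the last save
```

Repeat with the system-redis selector for the second instance.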
Back up data of the backend-redis deployment:
oc cp $(oc get pods -l 'deployment=backend-redis' -o json | jq '.items[0].metadata.name' -r):/var/lib/redis/data/dump.rdb ./backend-redis-dump.rdb
Back up data of the system-redis deployment:
oc cp $(oc get pods -l 'deployment=system-redis' -o json | jq '.items[0].metadata.name' -r):/var/lib/redis/data/dump.rdb ./system-redis-dump.rdb
During the copy command execution, you might see the following message:
tar: Removing leading `/' from member names
This is expected and does not mean that the operation has failed.
3.4.4. Scale down the deployments backend-redis and system-redis
oc scale deployment/system-redis --replicas=0
oc scale deployment/backend-redis --replicas=0
3.4.5. Prepare the required resources for the new databases
Create or switch to the namespace where the Redis databases will be deployed.
Export the name of the namespace where the new Redis databases will be installed into a variable. You can use an existing namespace, or create a new one. It is also possible to deploy the new database in the same namespace where 3scale is installed, but it is not recommended.
REDIS_NAMESPACE=<database-target-namespace>
oc project $REDIS_NAMESPACE
or
oc new-project $REDIS_NAMESPACE
3.4.5.1. Backend Redis resources
Create a YAML file backend-redis-external.yaml locally with the specification of the new deployment for the Redis instance for the Backend component:
Consider updating the following:
- Labels, annotations and environment variables in case you had or require some custom values
- Limits and Requests in case you had or require some custom values
- Other values can be modified according to the requirements
Create a service account for the Backend Redis instance and label it:
oc create serviceaccount backend-redis-external
oc label serviceaccount backend-redis-external app=backend-redis-external
Create a YAML file backend-redis-storage-external-pvc.yaml locally with the specification of the Persistent Volume Claim for the new Redis instance:
Adjust the storage request as required.
Create a YAML file backend-redis-service.yaml locally with the specification of the Service for the new Redis instance:
3.4.5.2. System Redis resources
Create a YAML file system-redis-external.yaml locally with the specification of the new deployment for the Redis instance for the System component:
Consider updating the following:
- Labels, annotations and environment variables in case you had or require some custom values
- Limits and Requests in case you had or require some custom values
- Other values can be modified according to the requirements
Create a service account for the System Redis instance and label it:
oc create serviceaccount system-redis-external
oc label serviceaccount system-redis-external app=system-redis-external
Create a YAML file system-redis-storage-external-pvc.yaml locally with the specification of the Persistent Volume Claim for the new Redis instance:
Adjust the storage request as required.
Create a YAML file system-redis-service.yaml locally with the specification of the Service for the new Redis instance:
Create a YAML file redis-config-external.yaml locally with the specification of the ConfigMap for the new Redis instances configuration:
The above Redis configuration closely matches the configuration provided by the 3scale operator, with the slight difference that it is prepared to restore data from the dump file.
3.4.6. Create the new Redis resources in the namespace
At this point, you should have the following files in the local working directory:
- backend-redis-dump.rdb
- backend-redis-external.yaml
- backend-redis-secret.yaml
- backend-redis-service.yaml
- backend-redis-storage-external-pvc.yaml
- redis-config-external.yaml
- system-redis-dump.rdb
- system-redis-external.yaml
- system-redis-secret.yaml
- system-redis-service.yaml
- system-redis-storage-external-pvc.yaml
Ensure your current namespace is $REDIS_NAMESPACE:
oc project $REDIS_NAMESPACE
Create the resources from the YAML files created in the previous steps using oc apply:
Config map:
oc apply -f redis-config-external.yaml
oc apply -f redis-config-external.yaml
Backend:
oc apply -f backend-redis-external.yaml
oc apply -f backend-redis-storage-external-pvc.yaml
oc apply -f backend-redis-service.yaml
System:
oc apply -f system-redis-external.yaml
oc apply -f system-redis-storage-external-pvc.yaml
oc apply -f system-redis-service.yaml
Verify that all the resources have been created properly with the following command:
oc get deployment,pvc,svc,serviceaccount -l app=backend-redis-external
oc get deployment,pvc,svc,serviceaccount -l app=system-redis-external
Specifically, check that the READY column for the backend-redis-external and system-redis-external deployments shows 1/1.
3.4.7. Restore database dumps on the new Redis instances
Restore the data in the new Redis instances using the previously created dump files:
oc cp ./backend-redis-dump.rdb $(oc get pods -l 'deployment=backend-redis-external' -o json | jq '.items[0].metadata.name' -r):/var/lib/redis/data/dump.rdb
oc cp ./system-redis-dump.rdb $(oc get pods -l 'deployment=system-redis-external' -o json | jq '.items[0].metadata.name' -r):/var/lib/redis/data/dump.rdb
Restart the Redis deployments:
oc rollout restart deployment/backend-redis-external
oc rollout restart deployment/system-redis-external
Once the Redis instances are in a ready state, create an append-only file:
oc rsh $(oc get pods -l 'deployment=backend-redis-external' -o json | jq '.items[0].metadata.name' -r) bash -c 'redis-cli BGREWRITEAOF'
oc rsh $(oc get pods -l 'deployment=system-redis-external' -o json | jq '.items[0].metadata.name' -r) bash -c 'redis-cli BGREWRITEAOF'
After a few minutes, confirm that the AOF rewrite is complete:
oc rsh $(oc get pods -l 'deployment=backend-redis-external' -o json | jq '.items[0].metadata.name' -r) bash -c 'redis-cli info' | grep aof_rewrite_in_progress
oc rsh $(oc get pods -l 'deployment=system-redis-external' -o json | jq '.items[0].metadata.name' -r) bash -c 'redis-cli info' | grep aof_rewrite_in_progress
While aof_rewrite_in_progress is 1, the rewrite is still running. Check periodically until aof_rewrite_in_progress is 0, which indicates that the rewrite is complete.
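The periodic check can be folded into a polling loop. This is a sketch for the backend instance; repeat it with the system-redis-external selector:

```shell
# Poll until the INFO persistence section reports aof_rewrite_in_progress:0.
POD=$(oc get pods -l 'deployment=backend-redis-external' -o jsonpath='{.items[0].metadata.name}')
until oc rsh "$POD" redis-cli info persistence | grep -q 'aof_rewrite_in_progress:0'; do
  echo "AOF rewrite still running, waiting..."
  sleep 10
done
```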
Edit the redis-config-external ConfigMap using oc edit configmap/redis-config-external and apply the following changes:
Uncomment the lines:
save 900 1
save 300 10
save 60 10000
Remove the line save "" and set the appendonly value to yes:
appendonly yes
Restart the Redis deployments:
oc rollout restart deployment/backend-redis-external
oc rollout restart deployment/system-redis-external
3.4.8. Update 3scale secrets
Switch back to the 3scale installation namespace:
oc project $THREESCALE_NAMESPACE
Before updating the 3scale secrets so that the 3scale instance starts using the new external databases, back them up:
oc get secret system-redis -o yaml > system-redis-secret.yaml
oc get secret backend-redis -o yaml > backend-redis-secret.yaml
Patch the backend-redis secret under the 3scale installation namespace to point to the new Redis instance for Backend.
BACKEND_REDIS_URL=redis://backend-redis-service.$REDIS_NAMESPACE.svc.cluster.local:6379
oc patch secret backend-redis -p "{\"stringData\":{\"REDIS_STORAGE_URL\":\"$BACKEND_REDIS_URL/0\"}}"
oc patch secret backend-redis -p "{\"stringData\":{\"REDIS_QUEUES_URL\":\"$BACKEND_REDIS_URL/1\"}}"
Patch the system-redis secret under the 3scale installation namespace to point to the new Redis instance for System.
SYSTEM_REDIS_URL=redis://system-redis-service.$REDIS_NAMESPACE.svc.cluster.local:6379/1
oc patch secret system-redis -p "{\"stringData\":{\"URL\":\"$SYSTEM_REDIS_URL\"}}"
3.4.9. Mark the Redis databases as external in APIManager resource
Update the APIManager custom resource to indicate that the Redis databases are external. Run the following commands:
APIMANAGER_NAME=$(oc get apimanager -o jsonpath='{.items[0].metadata.name}')
oc patch apimanager $APIMANAGER_NAME --type=merge -p '{"spec": {"externalComponents": {"system": {"redis": true}}}}'
oc patch apimanager $APIMANAGER_NAME --type=merge -p '{"spec": {"externalComponents": {"backend": {"redis": true}}}}'
3.4.10. Scale up the 3scale pods
Scale up the following pods to the replica counts stored in environment variables earlier.
Scale up the deployment of the 3scale operator controller back to 1 replica.
oc scale deployment/threescale-operator-controller-manager-v2 -n $OPERATOR_NAMESPACE --replicas=1
Once the threescale-operator-controller-manager-v2 pod is up and running, the 3scale operator reconciles the resources to ensure that the state of the cluster matches the desired state defined in the APIManager custom resource. For example, if a replica count is specified for any of the components in the APIManager custom resource, the operator scales the corresponding deployment to match that number.
Verify that all the pods are up and running with the following command:
oc get deployments
All deployments should show matching numbers of ready and desired replicas in the READY column, for example, 1/1 or 2/2, except system-redis and backend-redis.
Verify that everything is working properly by logging in to the Admin Portal and the Developer Portal and checking that the APIs are working as expected.
3.4.11. Confirm Redis data
Confirm that the 3scale instance has fully recovered and that the data is correct. For example, verify that the analytics data for API usage appears correctly in the Admin Portal.
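A quick additional check is to look at the key counts on the new instances. This sketch assumes $REDIS_NAMESPACE is still exported and that logical databases 0 and 1 are in use, as in the secret URLs above:

```shell
# Report the number of keys in each logical database of the new backend
# Redis instance; non-empty counts indicate the dump was restored.
POD=$(oc get pods -n "$REDIS_NAMESPACE" -l 'deployment=backend-redis-external' -o jsonpath='{.items[0].metadata.name}')
oc rsh -n "$REDIS_NAMESPACE" "$POD" redis-cli -n 0 DBSIZE
oc rsh -n "$REDIS_NAMESPACE" "$POD" redis-cli -n 1 DBSIZE
```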
Once you have confirmed that the databases are fully functional and the data is correct, remove the previous Redis deployments or deployment configs and the PVCs associated with the initial 3scale Redis databases:
Consider your post-migration verification steps before proceeding. Performing the steps below permanently removes the Redis instances managed by the operator, and restoration will not be possible.
Delete Deployment resources:
oc delete deployment system-redis -n $THREESCALE_NAMESPACE
oc delete deployment backend-redis -n $THREESCALE_NAMESPACE
Delete PVC:
oc delete pvc system-redis-storage -n $THREESCALE_NAMESPACE
oc delete pvc backend-redis-storage -n $THREESCALE_NAMESPACE
Delete Redis configuration ConfigMap:
oc delete configmap redis-config -n $THREESCALE_NAMESPACE
3.4.12. Restoration in the event of a failed upgrade attempt
The migration can fail only if the new image cannot be pulled for some reason or if the data is corrupted. In either case, follow the steps below to restore Redis to its previous state.
Scale the entire instance down:
oc scale deployment threescale-operator-controller-manager-v2 -n $OPERATOR_NAMESPACE --replicas=0
oc scale deployment/{system-memcache,zync-database,apicast-production,apicast-staging,backend-cron,backend-listener,backend-worker,system-app,system-sidekiq,system-searchd,zync,zync-que} -n $THREESCALE_NAMESPACE --replicas=0
Set Redis back to being managed by the operator:
APIMANAGER_NAME=$(oc get apimanager -n $THREESCALE_NAMESPACE -o jsonpath='{.items[0].metadata.name}')
oc patch APIManager $APIMANAGER_NAME -n $THREESCALE_NAMESPACE --type=merge -p '{"spec": {"externalComponents": {"system": {"redis": false}}}}'
oc patch APIManager $APIMANAGER_NAME -n $THREESCALE_NAMESPACE --type=merge -p '{"spec": {"externalComponents": {"backend": {"redis": false}}}}'
At this point, the Redis deployments are again managed by the 3scale operator, which re-applies all of the removed labels and reverts the image to the previous Redis version.
Recreate the backend-redis and system-redis secrets from the local backup:
oc delete secret backend-redis -n $THREESCALE_NAMESPACE
oc apply -f backend-redis-secret.yaml
oc delete secret system-redis -n $THREESCALE_NAMESPACE
oc apply -f system-redis-secret.yaml
Scale up the deployment of the 3scale operator controller back to 1 replica.
oc scale deployment/threescale-operator-controller-manager-v2 -n $OPERATOR_NAMESPACE --replicas=1
The operator will scale all the 3scale pods back up.
Remove the Redis external resources created during the migration attempt:
Chapter 4. Migration of On-Cluster MySQL to AWS Copy linkLink copied to clipboard!
4.1. Prerequisites Copy linkLink copied to clipboard!
You must have AWS access to create the MySQL RDS instance and the networking components (VPC, Subnets, Security Groups, etc.) that are necessary to access the MySQL DB.
4.2. Overview Copy linkLink copied to clipboard!
This section covers the steps necessary to migrate the on-cluster MySQL to AWS. A high-level overview of the migration is as follows:
- Follow the AWS guide to create a MySQL RDS instance
- Scale down the 3scale operator and the 3scale instance
- Create the MySQL dump file
- Seed the new database with the MySQL dump file
- Backup and update the system-database secret
- Scale up the 3scale operator and the 3scale instance
- Verify that 3scale is healthy:
- If it’s not healthy, restore the secrets and retry the migration
- If it is healthy, clean up the old system-mysql component
Export the following environment variables:
export MYSQL_ON_CLUSTER_NAMESPACE=<namespace where the MySQL pod is running>
export OPERATOR_NAMESPACE=<namespace where the 3scale operator is running>
export THREESCALE_NAMESPACE=<namespace where the 3scale instance is running>
Additionally, assign the following variables to record the replica counts. This ensures that you can later scale the 3scale instance back to the replica values it had before the scale-down:
For pre-2.15 operator versions:
For 2.15+ operator versions:
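As a sketch for 2.15+ (Deployment-based) installs, the current replica counts can be captured like this; pre-2.15 installs would use `oc get dc` instead. The component list is taken from the scale-down commands later in this chapter:

```shell
# Print the current replica count of each 3scale component so the
# values can be recorded before scaling the instance down.
for d in apicast-production apicast-staging backend-cron backend-listener \
         backend-redis backend-worker system-app system-memcache \
         system-redis system-searchd system-sidekiq zync zync-database zync-que; do
  echo "${d}=$(oc get deployment "$d" -n "$THREESCALE_NAMESPACE" \
    -o jsonpath='{.spec.replicas}')"
done
```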
Optional - If you want to write the replica counts to a file as a backup, follow these steps:
touch replica_counts.txt
Then run the following command for each Deployment/DeploymentConfig listed above; apicast-production is used as an example:
echo "APICAST_PRODUCTION_REPLICA_COUNT=$(oc get deployment -n $THREESCALE_NAMESPACE apicast-production -o=jsonpath='{.spec.replicas}')" >> replica_counts.txt
4.2.1. Follow the AWS Guide to Create a MySQL RDS Instance Copy linkLink copied to clipboard!
Follow the Create a MySQL DB instance guide (https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/CHAP_GettingStarted.CreatingConnecting.MySQL.html#CHAP_GettingStarted.Creating.MySQL) or the Create and Connect to a MySQL Database with Amazon RDS guide (https://aws.amazon.com/getting-started/hands-on/create-mysql-db/) to create a MySQL RDS instance.
Many configuration options must be considered when creating a MySQL RDS instance, for example: storage autoscaling, automated backups, etc. Review your options when creating the RDS instance.
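If you prefer the AWS CLI over the console, a minimal sketch looks like the following. The instance identifier, instance class, storage size, and security group ID are placeholders, not recommendations; adapt them to your environment:

```shell
# Minimal MySQL RDS instance creation sketch; all values are examples.
aws rds create-db-instance \
  --db-instance-identifier threescale-system \
  --engine mysql \
  --db-instance-class db.t3.medium \
  --allocated-storage 20 \
  --master-username admin \
  --master-user-password '<AWS RDS Admin Password>' \
  --vpc-security-group-ids sg-0123456789abcdef0
```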
4.2.2. Scale Down the 3scale Operator and 3scale Instance Copy linkLink copied to clipboard!
Scale down all resources except for the system-mysql instance:
For pre-2.15 operator versions:
oc scale deployment threescale-operator-controller-manager-v2 -n $OPERATOR_NAMESPACE --replicas=0
oc scale dc/{apicast-production,apicast-staging,backend-cron,backend-listener,backend-redis,backend-worker,system-app,system-memcache,system-redis,system-searchd,system-sidekiq,zync,zync-database,zync-que} -n $THREESCALE_NAMESPACE --replicas=0
For 2.15+ operator versions:
oc scale deployment threescale-operator-controller-manager-v2 -n $OPERATOR_NAMESPACE --replicas=0
oc scale deployment/{apicast-production,apicast-staging,backend-cron,backend-listener,backend-redis,backend-worker,system-app,system-memcache,system-redis,system-searchd,system-sidekiq,zync,zync-database,zync-que} -n $THREESCALE_NAMESPACE --replicas=0
4.2.3. Create the MySQL Dump File Copy linkLink copied to clipboard!
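No commands are shown here; as a sketch, a dump can be taken with `mysqldump` through `oc exec`. The `deployment=system-mysql` label selector, the root credentials, and the `threescale` database name (taken from the DB_URL example later in this chapter) are assumptions; adjust them to your deployment:

```shell
# Pod selector, credentials, and database name are assumptions; adjust as needed.
MYSQL_POD=$(oc get pods -l deployment=system-mysql \
  -n "$MYSQL_ON_CLUSTER_NAMESPACE" -o jsonpath='{.items[0].metadata.name}')
oc exec -n "$MYSQL_ON_CLUSTER_NAMESPACE" "$MYSQL_POD" -- \
  mysqldump --single-transaction -u root -p"$MYSQL_ROOT_PASSWORD" \
  threescale > system-mysql-dump.sql
```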
4.2.4. Seed the New MySQL RDS Instance with the MySQL Dump File Copy linkLink copied to clipboard!
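As a sketch, the dump file created in the previous step can be loaded into the new RDS instance with the `mysql` client from a host that can reach the RDS endpoint. The database name `threescale` is an assumption based on the DB_URL example later in this chapter:

```shell
# Hostname and credentials come from the RDS instance you created.
mysql -h <AWS RDS Hostname> -u <AWS RDS Admin Username> -p \
  threescale < system-mysql-dump.sql
```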
4.2.5. Backup and Update the system-database Secret Copy linkLink copied to clipboard!
Before updating the 3scale system-database Secret so that the 3scale instance starts using the new external MySQL instance, back up the Secret:
oc get secret system-database -n $THREESCALE_NAMESPACE -o yaml > system-database-secret.yaml
Change the value of the system-database Secret DB_PASSWORD on the cluster to:
<AWS RDS Admin Password>
Change the value of the system-database Secret DB_USER on the cluster to:
<AWS RDS Admin Username>
Change the value of the system-database Secret URL on the cluster to:
mysql2://<AWS RDS Admin Username>:<AWS RDS Admin Password>@<AWS RDS Hostname>/threescale
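The URL value is a standard `mysql2://user:password@host/database` connection string. A quick format sanity check, using hypothetical values in place of your real credentials:

```shell
# Hypothetical values; substitute your real RDS credentials and hostname.
DB_USER="admin"
DB_PASSWORD="s3cret"
DB_HOST="threescale-system.abc123.us-east-1.rds.amazonaws.com"
DB_URL="mysql2://${DB_USER}:${DB_PASSWORD}@${DB_HOST}/threescale"
echo "$DB_URL"
```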
4.2.6. Add External Components to APIManager Copy linkLink copied to clipboard!
For the operator to stop reconciling the internal MySQL database, you must mark the MySQL database as an external component in the APIManager. This is done by setting spec.externalComponents.system.database to true:
oc patch APIManager <APIManager Name> -n $THREESCALE_NAMESPACE --type=merge -p '{"spec": {"externalComponents": {"system": {"database": true}}}}'
This disconnects the operator from the component, meaning the 3scale operator will no longer reconcile the database Deployment/DeploymentConfig and the associated PersistentVolumeClaim.
4.2.7. Scale Up the 3scale operator and 3scale Instance Copy linkLink copied to clipboard!
For pre-2.15 operator versions:
For 2.15+ operator versions:
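The scale-up commands are not shown above; as a sketch for 2.15+ installs (pre-2.15 installs use `dc/` instead of `deployment/`), scale the operator back up and restore each component to the replica count recorded earlier, for example:

```shell
# Scale the operator back up, then restore a component's recorded count.
oc scale deployment threescale-operator-controller-manager-v2 \
  -n "$OPERATOR_NAMESPACE" --replicas=1
oc scale deployment/apicast-production -n "$THREESCALE_NAMESPACE" \
  --replicas="$APICAST_PRODUCTION_REPLICA_COUNT"
```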
Wait for the 3scale instance to fully recover and confirm that the data migration was successful.
Once the 3scale instance state is confirmed to be correct, the on-cluster MySQL instance can be deleted.
4.2.8. Restoration in Case of Failure Copy linkLink copied to clipboard!
If the migration is unsuccessful and you need to revert to the previous configuration:
Scale down all resources except for the system-mysql instance:
For pre-2.15 operator versions:
oc scale deployment threescale-operator-controller-manager-v2 -n $OPERATOR_NAMESPACE --replicas=0
oc scale dc/{apicast-production,apicast-staging,backend-cron,backend-listener,backend-redis,backend-worker,system-app,system-memcache,system-redis,system-searchd,system-sidekiq,zync,zync-database,zync-que} -n $THREESCALE_NAMESPACE --replicas=0
For 2.15+ operator versions:
oc scale deployment threescale-operator-controller-manager-v2 -n $OPERATOR_NAMESPACE --replicas=0
oc scale deployment/{apicast-production,apicast-staging,backend-cron,backend-listener,backend-redis,backend-worker,system-app,system-memcache,system-redis,system-searchd,system-sidekiq,zync,zync-database,zync-que} -n $THREESCALE_NAMESPACE --replicas=0
Remove the system-database Secret:
oc delete secret system-database -n $THREESCALE_NAMESPACE
Re-create the system-database Secret from the backup:
oc apply -f system-database-secret.yaml
For pre-2.15 operator versions:
For 2.15+ operator versions:
Wait for 3scale instances to fully recover and retry the migration from the start.
Chapter 5. Migration of On-Cluster Postgres to AWS Copy linkLink copied to clipboard!
5.1. Prerequisites Copy linkLink copied to clipboard!
You must have AWS access to create the Postgres RDS instance and the networking components (VPC, Subnets, Security Groups, etc.) that are necessary to access the Postgres DB. Your on-cluster Postgres instance also needs to be running version 13.
5.2. Overview Copy linkLink copied to clipboard!
- Follow the AWS guide to create a Postgres RDS instance
- Scale down the 3scale operator and the 3scale instance
- Create the Postgres dump file
- Seed the new database with the Postgres dump file
- Backup and update the system-database secret
- Scale up the 3scale operator and the 3scale instance
- Verify that 3scale is healthy:
- If it is not healthy, restore the secrets and retry the migration
- If it is healthy, clean up the old system-postgresql component
Export the following environment variables:
export POSTGRES_ON_CLUSTER_NAMESPACE=<namespace where the Postgres pod is running>
export OPERATOR_NAMESPACE=<namespace where the 3scale operator is running>
export THREESCALE_NAMESPACE=<namespace where the 3scale instance is running>
Additionally, assign the following variables to record the replica counts. This ensures that you can later scale the 3scale instance back to the replica values it had before the scale-down:
For pre-2.15 operator versions:
For 2.15+ operator versions:
Optional - If you want to write the replica counts to a file as a backup, follow these steps:
touch replica_counts.txt
Then run the following command for each Deployment/DeploymentConfig listed above; apicast-production is used as an example:
echo "APICAST_PRODUCTION_REPLICA_COUNT=$(oc get deployment -n $THREESCALE_NAMESPACE apicast-production -o=jsonpath='{.spec.replicas}')" >> replica_counts.txt
5.2.1. Follow the AWS Guide to Create a Postgres RDS Instance Copy linkLink copied to clipboard!
Follow the Creating and connecting to a PostgreSQL DB instance guide (https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/CHAP_GettingStarted.CreatingConnecting.PostgreSQL.html) to create a Postgres RDS instance.
Many configuration options must be considered when creating a Postgres RDS instance, for example: storage autoscaling, automated backups, etc. Review your options when creating the RDS instance.
5.2.2. Scale Down the 3scale Operator and 3scale Instance Copy linkLink copied to clipboard!
Scale down all resources except for the system-postgresql instance:
For pre-2.15 operator versions:
oc scale deployment threescale-operator-controller-manager-v2 -n $OPERATOR_NAMESPACE --replicas=0
oc scale dc/{apicast-production,apicast-staging,backend-cron,backend-listener,backend-redis,backend-worker,system-app,system-memcache,system-redis,system-searchd,system-sidekiq,zync,zync-database,zync-que} -n $THREESCALE_NAMESPACE --replicas=0
For 2.15+ operator versions:
oc scale deployment threescale-operator-controller-manager-v2 -n $OPERATOR_NAMESPACE --replicas=0
oc scale deployment/{apicast-production,apicast-staging,backend-cron,backend-listener,backend-redis,backend-worker,system-app,system-memcache,system-redis,system-searchd,system-sidekiq,zync,zync-database,zync-que} -n $THREESCALE_NAMESPACE --replicas=0
5.2.3. Create the Postgres Dump File Copy linkLink copied to clipboard!
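No commands are shown here; as a sketch, a dump can be taken with `pg_dump` through `oc exec`. The `deployment=system-postgresql` label selector and the `system` user are assumptions; the `system` database name is taken from the connection URL example later in this chapter. Adjust all of them to your deployment:

```shell
# Pod selector, user, and database name are assumptions; adjust as needed.
PG_POD=$(oc get pods -l deployment=system-postgresql \
  -n "$POSTGRES_ON_CLUSTER_NAMESPACE" -o jsonpath='{.items[0].metadata.name}')
oc exec -n "$POSTGRES_ON_CLUSTER_NAMESPACE" "$PG_POD" -- \
  pg_dump -U system -d system > system-postgres-dump.sql
```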
5.2.4. Seed the New Postgres RDS Instance with the Postgres Dump File Copy linkLink copied to clipboard!
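As a sketch, the dump file created in the previous step can be loaded into the new RDS instance with `psql` from a host that can reach the RDS endpoint. The `system` database name is an assumption based on the connection URL example later in this chapter:

```shell
# Hostname and credentials come from the RDS instance you created.
psql -h <AWS RDS Hostname> -U <AWS RDS Admin Username> \
  -d system -f system-postgres-dump.sql
```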
5.2.5. Backup and Update the system-database Secret Copy linkLink copied to clipboard!
Before updating the 3scale system-database Secret so that the 3scale instance starts using the new external Postgres instance, back up the Secret:
oc get secret system-database -n $THREESCALE_NAMESPACE -o yaml > system-database-secret.yaml
Change the value of the system-database Secret DB_PASSWORD on the cluster to:
<AWS RDS Admin Password>
Change the value of the system-database Secret DB_USER on the cluster to:
<AWS RDS Admin Username>
Change the value of the system-database Secret URL on the cluster to:
postgresql://<AWS RDS Admin Username>:<AWS RDS Admin Password>@<AWS RDS Hostname>/system
5.2.6. Add External Components to APIManager Copy linkLink copied to clipboard!
For the operator to stop reconciling the internal Postgres database, you must mark the Postgres database as an external component in the APIManager. This is done by setting spec.externalComponents.system.database to true:
oc patch APIManager <APIManager Name> -n $THREESCALE_NAMESPACE --type=merge -p '{"spec": {"externalComponents": {"system": {"database": true}}}}'
This disconnects the operator from the component, meaning the 3scale operator will no longer reconcile the database Deployment/DeploymentConfig and the associated PersistentVolumeClaim.
5.2.7. Scale Up the 3scale operator and 3scale Instance Copy linkLink copied to clipboard!
For pre-2.15 operator versions:
For 2.15+ operator versions:
Wait for the 3scale instance to fully recover and confirm that the data migration was successful.
Once the 3scale instance state is confirmed to be correct, the on-cluster Postgres instance can be deleted.
5.2.8. Restoration in Case of Failure Copy linkLink copied to clipboard!
If the migration is unsuccessful and you need to revert to the previous configuration, scale down all resources.
For pre-2.15 operator versions:
oc scale deployment threescale-operator-controller-manager-v2 -n $OPERATOR_NAMESPACE --replicas=0
oc scale dc/{apicast-production,apicast-staging,backend-cron,backend-listener,backend-redis,backend-worker,system-app,system-memcache,system-redis,system-searchd,system-sidekiq,zync,zync-database,zync-que} -n $THREESCALE_NAMESPACE --replicas=0
For 2.15+ operator versions:
oc scale deployment threescale-operator-controller-manager-v2 -n $OPERATOR_NAMESPACE --replicas=0
oc scale deployment/{apicast-production,apicast-staging,backend-cron,backend-listener,backend-redis,backend-worker,system-app,system-memcache,system-redis,system-searchd,system-sidekiq,zync,zync-database,zync-que} -n $THREESCALE_NAMESPACE --replicas=0
Remove the system-database Secret:
oc delete secret system-database -n $THREESCALE_NAMESPACE
Re-create the system-database Secret from the backup:
oc apply -f system-database-secret.yaml
Scale the 3scale operator and 3scale instance back up.
For pre-2.15 operator versions:
For 2.15+ operator versions:
Wait for 3scale instances to fully recover and retry the migration from the start.
Chapter 6. Migration of Redis on-cluster to AWS Copy linkLink copied to clipboard!
6.1. Prerequisites Copy linkLink copied to clipboard!
- Complete the Redis upgrade by referring to Externalizing databases for 2.16
- AWS access and relevant ElastiCache permissions
- Consider your post-migration verification steps before proceeding. This may require you to take a snapshot of data using the API or portal and define a test plan to verify a successful migration
- Consider performing the migration at the time of lowest traffic as the migration will be service-affecting
- Consider letting 3scale components process all the jobs in the background before fully scaling the instance
6.2. Overview and considerations Copy linkLink copied to clipboard!
Consider the following:
- Is the Redis instance going to be AWS-managed (Amazon ElastiCache for Redis OSS) or self-managed (EC2)?
- Review permissions around access to AWS resources
- Review cluster configuration to ensure AWS resources are reachable
- Consider connectivity security, firewalls, etc.
- Consider Redis configuration, including instance type, security groups, maintenance, backups, etc.
- Consider creating 2 databases for the backend instead of relying on logical databases (for queues and storage)
- Consider whether you want to restore system-redis from the dump or create a new database
- Consider the Redis versions and ensure that the version you are migrating from matches the version you are migrating to
- Familiarize yourself with 3scale supported configurations. AWS might provide more options when configuring Redis than the on-cluster configuration, but not all of the options are fully supported (for example, ACL and TLS)
- Scale 3scale instance and the operator down
- Retrieve Redis dump files
Follow AWS guidelines on how to create and seed Redis instances from the dump files. This includes:
- Creating an S3 bucket
- Adding appropriate permissions to the S3
- Creating a Redis instance from the S3 backup
- Performing pre-connection checks
- Backup and update Redis secrets - backend-redis and system-redis
- Scale up 3scale instance and the operator
- Restore the secrets in case of failed migration
Export the following environment variables:
export REDIS_ON_CLUSTER_NAMESPACE=<namespace where the Redis pod is running>
export OPERATOR_NAMESPACE=<namespace where the 3scale operator is running>
export THREESCALE_NAMESPACE=<namespace where the 3scale instance is running>
Additionally, export the following replica counts. This ensures that you can later scale the 3scale instance back to the replica values it had before the scale-down.
For pre-2.15 operator versions:
For 2.15+ operator versions:
6.2.1. Scale 3scale Operator and 3scale instance down Copy linkLink copied to clipboard!
Scale down all resources apart from the database instances:
For pre-2.15 operator version:
oc scale deployment threescale-operator-controller-manager-v2 -n $OPERATOR_NAMESPACE --replicas=0
oc scale dc/{apicast-production,apicast-staging,backend-cron,backend-listener,backend-worker,system-app,system-memcache,system-sidekiq,system-searchd,zync,zync-database,zync-que} -n $THREESCALE_NAMESPACE --replicas=0
For 2.15 + operator version:
oc scale deployment threescale-operator-controller-manager-v2 -n $OPERATOR_NAMESPACE --replicas=0
oc scale deployment/{apicast-production,apicast-staging,backend-cron,backend-listener,backend-worker,system-app,system-memcache,system-sidekiq,system-searchd,zync,zync-database,zync-que} -n $THREESCALE_NAMESPACE --replicas=0
6.2.2. Retrieve the Redis dump file Copy linkLink copied to clipboard!
For deploymentConfig:
oc cp $(oc get pods -l 'deploymentConfig=backend-redis' -n $REDIS_ON_CLUSTER_NAMESPACE -o json | jq '.items[0].metadata.name' -r):/var/lib/redis/data/dump.rdb ./backend-redis-dump.rdb
oc cp $(oc get pods -l 'deploymentConfig=system-redis' -n $REDIS_ON_CLUSTER_NAMESPACE -o json | jq '.items[0].metadata.name' -r):/var/lib/redis/data/dump.rdb ./system-redis-dump.rdb
For deployment:
oc cp $(oc get pods -l 'deployment=backend-redis' -n $REDIS_ON_CLUSTER_NAMESPACE -o json | jq '.items[0].metadata.name' -r):/var/lib/redis/data/dump.rdb ./backend-redis-dump.rdb
oc cp $(oc get pods -l 'deployment=system-redis' -n $REDIS_ON_CLUSTER_NAMESPACE -o json | jq '.items[0].metadata.name' -r):/var/lib/redis/data/dump.rdb ./system-redis-dump.rdb
Adjust the values in these commands to match the location and names of your Redis instances.
6.2.3. Follow the AWS guide on how to create a Redis instance from Redis .rdb file Copy linkLink copied to clipboard!
Follow the AWS tutorial Seeding a new node-based cluster with an externally created backup, starting from Step 2: Create an Amazon S3 bucket and folder.
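Uploading the dump files to S3 before creating the ElastiCache instance might look like the following sketch; the bucket name is a placeholder:

```shell
# Create a bucket and upload both Redis dump files to it.
aws s3 mb s3://my-3scale-redis-backups
aws s3 cp backend-redis-dump.rdb s3://my-3scale-redis-backups/
aws s3 cp system-redis-dump.rdb s3://my-3scale-redis-backups/
```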
Many configuration options must be considered when creating a Redis instance.
Currently, the on-cluster Redis version is 7.0, so ensure that the Redis instance on AWS is created with the same major.minor version as is currently used on the cluster.
6.2.4. Pre-connection check Copy linkLink copied to clipboard!
After successful Redis creation, double-check that the communication between your cluster and your Redis instance on AWS is possible:
oc exec -it $(oc get pods -l 'deploymentConfig=backend-redis' -o jsonpath='{.items[0].metadata.name}') -- redis-cli -h <Redis host from AWS without port or with -p <port> if custom port is defined> KEYS "liveness-probe"
oc exec -it $(oc get pods -l 'deployment=backend-redis' -o jsonpath='{.items[0].metadata.name}') -- redis-cli -h <Redis host from AWS without port or with -p <port> if custom port is defined> KEYS "liveness-probe"
Adjust the DeploymentConfig or Deployment name in these commands to match your installation.
If the connection is successful, the command returns the liveness-probe key: liveness-probe
If the connection is successful but data is missing, the command returns: (empty array)
This means that Redis was not successfully restored from the dump file, or that the dump file is corrupted.
If the connection is unsuccessful, the command hangs. If this happens, review your configuration on AWS as it might mean that the AWS resources are inaccessible from your cluster.
Run the commands for both the system and backend Redis instances.
6.2.5. Update Redis secret Copy linkLink copied to clipboard!
Before updating the 3scale secrets so that the 3scale instance starts using the new external databases, back them up:
oc get secret backend-redis -n $THREESCALE_NAMESPACE -o yaml > backend-redis-secret.yaml
oc get secret system-redis -n $THREESCALE_NAMESPACE -o yaml > system-redis-secret.yaml
Change the value of the backend-redis secret REDIS_QUEUES_URL on the cluster to:
redis://<AWS Redis endpoint>:6379/1
Change the value of the backend-redis secret REDIS_STORAGE_URL on the cluster to:
redis://<AWS Redis endpoint>:6379/0
Change the value of the system-redis secret URL on the cluster to:
redis://<AWS Redis endpoint>:6379/0
6.2.6. Scale up 3scale operator and 3scale instance Copy linkLink copied to clipboard!
For pre-2.15 operator version:
For 2.15 + operator version:
Wait for the 3scale instance to fully recover and confirm that the data migration was successful.
Once the 3scale instance state is confirmed to be correct, the on-cluster Redis instances can be deleted.
6.2.7. Restoration in case of failure Copy linkLink copied to clipboard!
If the migration is unsuccessful and you need to revert to the previous configuration:
Scale the operator and 3scale instance down:
Scale entire instance down:
For pre-2.15 operator version:
oc scale deployment threescale-operator-controller-manager-v2 -n $OPERATOR_NAMESPACE --replicas=0
oc scale dc/{apicast-production,apicast-staging,backend-cron,backend-listener,backend-worker,system-app,system-memcache,system-sidekiq,system-searchd,zync,zync-database,zync-que} -n $THREESCALE_NAMESPACE --replicas=0
For 2.15 + operator version:
oc scale deployment threescale-operator-controller-manager-v2 -n $OPERATOR_NAMESPACE --replicas=0
oc scale deployment/{apicast-production,apicast-staging,backend-cron,backend-listener,backend-worker,system-app,system-memcache,system-sidekiq,system-searchd,zync,zync-database,zync-que} -n $THREESCALE_NAMESPACE --replicas=0
Remove the backend-redis and system-redis secrets:
oc delete secret system-redis -n $THREESCALE_NAMESPACE
oc delete secret backend-redis -n $THREESCALE_NAMESPACE
Re-create the secrets from the backed-up secrets:
oc apply -f backend-redis-secret.yaml
oc apply -f system-redis-secret.yaml
Scale up 3scale instance
For pre-2.15 operator version:
For 2.15 + operator version:
Chapter 7. 3scale troubleshooting guides for external databases Copy linkLink copied to clipboard!
This guide assists in diagnosing and troubleshooting some of the possible issues that arise from using databases external to 3scale. These can be databases on a cloud provider (for example, AWS or GCP) or databases external to 3scale but running on the cluster (customer-managed on-cluster databases).
7.1. Resources Copy linkLink copied to clipboard!
- Set up and review 3scale monitoring
In all cases, to get a better understanding of the root cause of an issue, you must enable 3scale monitoring. For more details, see Enabling 3scale monitoring stack. Choose the installation process based on the 3scale and OpenShift version in use.
- Review OpenShift monitoring
OpenShift monitoring provides insights into the state of the cluster and its workloads. It can be extremely beneficial for understanding and diagnosing the root cause of an issue. Familiarize yourself with the OpenShift documentation: About OpenShift Container Platform monitoring
- Review cloud provider-specific monitoring
Depending on the cloud provider, the monitoring might vary. However, the general concept remains the same: monitoring should provide sufficient information about the state of the running instance, its connections, memory, and so on. Familiarize yourself with the cloud provider’s monitoring documentation.
- Enable additional logs on 3scale components
APIcast - see APIcast parameters
System - see Reducing the log level of system-app component in 3scale
Zync - set the following environment variables on the zync deployment or deployment configuration:
oc set env dc/zync DEBUG=1 -n <3SCALE_NAMESPACE>
oc set env dc/zync-que DEBUG=1 -n <3SCALE_NAMESPACE>
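The `dc/` form above applies to DeploymentConfig-based (pre-2.15) installs. For operator-based 2.15+ deployments, where zync runs as a Deployment, the equivalent commands would presumably be:

```shell
# Assumed Deployment-based equivalents for 2.15+ installs; replace
# <3SCALE_NAMESPACE> with your 3scale project name.
oc set env deployment/zync DEBUG=1 -n <3SCALE_NAMESPACE>
oc set env deployment/zync-que DEBUG=1 -n <3SCALE_NAMESPACE>
```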
Debug logs are beneficial in understanding the issue.
7.1.1. System database
MySQL, PostgreSQL, or Oracle can be used as the system database.
This part of the document covers possible issues, their manifestations, diagnosis steps, and possible solutions. The issues and manifestations are generic to all types of databases; the diagnosis steps and solutions can differ depending on the database provider.
7.1.1.1. System database limitations
- 3scale currently does not support TLS with the system database
7.1.1.2. Connectivity issue
This is the most basic system database issue and can occur when a database is unreachable.
7.1.1.2.1. Manifestation
The issue manifests in the system component failing to become ready.
7.1.1.2.2. Diagnosis
When this happens, the system-app pre-hook pod fails with the following error:
ActiveRecord::ConnectionNotEstablished: Unknown <your DB provider> server host '<host from the system-database secret>'
If the system pod can connect to the database (the pre-hook pod passes) but the system app crashes, you might see the following error in the log:
Unknown database '<Database name>' (ActiveRecord::NoDatabaseError)
To confirm that the issue is on the connectivity side, navigate to the system pods and check the logs of the system-pre-hook and system-app pods.
7.1.1.2.3. Solution
Connectivity issues can happen because of the following reasons:
- Incorrectly configured system-database secret - ensure that all the fields are set in the correct format. For more information, see the externalizing MySQL database documentation: Configuring an external MySQL database
- Incorrectly configured connectivity settings on the cloud provider or OpenShift side - in the case of cloud provider databases, ensure that your cluster can reach the cloud provider resources. For more information about setting up OpenShift with AWS, see: Installing a cluster quickly on AWS
- Incorrectly configured secret with a wrong database name - this issue happens when the system connects to the database server correctly but cannot find the provided database name. To solve this problem, update the database name in the secret, or ensure that the database name provided in the secret exists on the database server. For more information, see the externalizing MySQL database documentation: Configuring an external MySQL database
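To double-check the secret contents quickly, you can decode the connection URL in place. This sketch assumes the default system-database secret name and its URL field, with $THREESCALE_NAMESPACE set to your 3scale project:

```shell
# Print the decoded connection URL from the system-database secret
# (expected format, for example: mysql2://user:password@host/database):
oc get secret system-database -n "$THREESCALE_NAMESPACE" \
  -o jsonpath='{.data.URL}' | base64 -d; echo
```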
7.1.1.3. Performance bottleneck issues
This part of the document describes potential database bottleneck problems.
7.1.1.3.1. Manifestation
7.1.1.3.1.1. 1. Slow Query Performance
- Long Response Times: Queries take longer to execute than expected, leading to delays in application responses.
- Increased Execution Time: Simple queries that previously ran quickly now take significantly more time.
- Frequent Timeouts: Queries may timeout or fail due to exceeding maximum execution time limits.
7.1.1.3.1.2. 2. High CPU Utilization
- Excessive CPU Load: The database server exhibits high CPU usage, often near 100%, causing slow performance for all database operations.
- CPU Spikes: Frequent spikes in CPU usage during peak query execution times.
7.1.1.3.1.3. 3. High Memory Usage
- Memory Swapping: The database server might start using swap memory, leading to a significant decrease in performance.
- Insufficient Cache: Inefficient use of memory, leading to frequent cache misses and more disk I/O operations.
7.1.1.3.1.4. 4. Disk I/O Bottlenecks
- High I/O Wait Times: The system shows high I/O wait times, indicating that processes are frequently waiting for disk operations to complete.
- Slow Disk Access: Increased latency in reading from or writing to the disk, causing overall slow database operations.
- Log File Saturation: Write-heavy operations (for example: transaction logs) may saturate disk bandwidth, slowing down other operations.
7.1.1.3.1.5. 5. Locking and Concurrency Issues
- Lock Wait Timeouts: Frequent lock waits or deadlocks occur, causing queries to be delayed or aborted.
- Transaction Contention: Multiple transactions contend for the same resources, leading to increased wait times and slower processing.
7.1.1.3.1.6. 6. Increased Connection Latency
- Delayed Connections: Establishing new connections to the database becomes slower, potentially causing timeouts.
- Connection Pool Saturation: Connection pools reach their maximum limits, causing delays or failures in acquiring a database connection.
7.1.1.3.1.7. 7. Query Contention
- Deadlocks: Increased frequency of deadlocks, where two or more queries are waiting for each other to release locks.
- Blocking Queries: Queries block each other, leading to cascading delays and slow performance for dependent queries.
7.1.1.3.1.8. 8. Increased Error Rates
- Timeouts and Failures: Higher rates of query timeouts, transaction rollbacks, or failed queries.
- Resource Exhaustion: Errors related to running out of critical resources like memory, disk space, or file descriptors.
7.1.1.3.1.9. 9. Unresponsive Database
- Database Crashes or Hangs: The database server becomes unresponsive, requiring a restart to restore normal operation.
- Long Recovery Times: After a crash or failure, the database takes a long time to recover, indicating underlying performance issues.
7.1.1.3.1.10. 10. Poor Application Performance
- Slow Application Responses: Applications dependent on the database exhibit poor performance, with slower page loads, longer processing times, and delayed transactions.
- User Complaints: End users report slowness or unresponsiveness in applications, often pointing to database-related issues.
7.1.1.3.1.11. 11. Increased Network Latency
- Network Saturation: If the database is remote, high network latency or saturation can cause delays in query execution and data retrieval.
- Slow Data Transfer: Large queries or data retrieval operations take longer than usual due to network-related bottlenecks.
7.1.1.3.1.12. 12. Inefficient Index Usage
- Table Scans: Queries that should be using indexes are instead performing full table scans, leading to increased execution times.
- Fragmented Indexes: Index fragmentation causes slower query performance and increased disk I/O.
7.1.1.3.1.13. 13. High Replication Lag
- Delayed Replication: In a replicated setup, the lag between the primary and replica servers increases, causing stale data to be served from replicas.
- Replication Conflicts: Replication errors or conflicts slow down the entire replication process, leading to data inconsistency.
7.1.1.3.2. Diagnosis
Enable debug logging on the system app to understand the issue better. The logs provide useful information, such as the query and the response times. For example:
Settings Load (0.5ms)  SELECT settings.* FROM settings WHERE settings.account_id = 1 LIMIT 1
By investigating the logs and comparing them to previous logs (if available), you should be able to identify the increase in response times.
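To scan a saved log for slow queries mechanically, a minimal sketch is shown below; the sample log lines and the 100 ms threshold are hypothetical, not taken from the product:

```shell
# Hypothetical sample of system-app debug log lines (Rails-style timings):
log='Settings Load (0.5ms)  SELECT settings.* FROM settings WHERE settings.account_id = 1 LIMIT 1
Account Load (230.1ms)  SELECT accounts.* FROM accounts WHERE accounts.id = 42 LIMIT 1'

# Keep only lines whose "(N.Nms)" timing exceeds the 100 ms threshold:
slow=$(printf '%s\n' "$log" | awk '{
  if (match($0, /\([0-9.]+ms\)/)) {
    t = substr($0, RSTART + 1, RLENGTH - 4) + 0
    if (t > 100) print
  }
}')
printf '%s\n' "$slow"
```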
Resource Monitoring
Track CPU, memory, and I/O usage using monitoring tools like Prometheus, Grafana, or native database monitoring solutions. The exact procedure can vary depending on the database provider.
If the database runs on a cluster, 3scale Monitoring can be beneficial to investigate the database resource usage and network performance.
If the database runs on and is managed by the cloud provider, looking into the provider-specific monitoring stack might help in tracing down the root cause. For example, AWS CloudWatch metrics.
Optimize Configuration
Adjust database configuration settings (for example, buffer sizes and cache limits) to better handle the workload. Increasing, for example, the buffer or cache size in MySQL is necessary when your database is experiencing performance issues related to memory management, such as excessive disk I/O, slow query performance, or high contention for resources.
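As an illustration only (the values below are placeholders, not recommendations), a my.cnf fragment raising the InnoDB buffer pool might look like:

```ini
[mysqld]
# Illustrative values -- size the buffer pool to the memory actually
# available to the database server, and tune to your workload.
innodb_buffer_pool_size = 4G
max_connections         = 500
```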
7.1.1.3.3. Solution
Resource issues
If you have encountered resource issues and know which resource is lacking, a possible solution is to increase the resource limits. For example, when running the database on a cluster, adjusting the deployment resource limits might be beneficial. If running on AWS or another cloud provider, consider moving to another instance type with more resources available.
Optimize Configuration
Tweak the database configuration accordingly.
7.1.2. Redis databases
Redis databases are used for the system and backend components of 3scale.
This part of the document covers possible issues and their manifestations, diagnosis steps, and possible solutions.
7.1.2.1. Redis databases limitations
- 3scale currently does not support Redis with ACL
- 3scale currently does not support TLS
7.1.2.2. Connectivity issue
This is the most basic Redis database issue and can occur when a database is unreachable.
7.1.2.2.1. Manifestation
The issue manifests in the system and/or backend components failing to become ready.
7.1.2.2.2. Diagnosis
When this happens, the system-app pre-hook pod fails with the following error:
Redis::CannotConnectError: Bad file descriptor (redis://<Redis host>:<Redis port>/1)
When the same issue affects the workers, the following error can be found in the backend-worker pod, svc container:
Error connecting to Redis queue storage: Bad file descriptor (redis://<Redis host>:<Redis port>/1)
7.1.2.2.3. Solution
Connectivity issues can happen because of the following reasons:
- Incorrectly configured backend-redis secret - ensure that all the fields are set in the correct format. For more information, see the External Redis Database configuration document.
- Incorrectly configured connectivity settings on the cloud provider or OpenShift side - in the case of cloud provider databases, ensure that your cluster can reach the cloud provider resources. For more information, see Setting up OpenShift with AWS.
7.1.2.3. Performance bottleneck issues
This part of the document describes potential database bottleneck problems.
7.1.2.3.1. Manifestation
7.1.2.3.1.1. 1. Memory Management Issues
- Out of Memory (OOM) Errors: Redis throws errors when it runs out of memory, typically with the message OOM command not allowed when used memory > 'maxmemory'.
- Increased Latency: As memory usage approaches the limit, Redis might experience increased latency due to more frequent garbage collection or swapping.
- Evicted Keys: When Redis is configured with a maxmemory limit and eviction policy, keys may be evicted (deleted) before they should be, leading to data loss.
7.1.2.3.1.2. 2. Latency Issues
- Increased Latency Spikes: Periodic spikes in latency, particularly during large operations or when Redis is persistently writing data to disk.
7.1.2.3.2. Diagnosis
7.1.2.3.2.1. Backend Redis
-
Memory Management Issues
The backend worker currently does not support debug-level logs; however, backend-worker issues are usually written to the pod logs.
For example:
bundler: failed to load command: bin/3scale_backend_worker (bin/3scale_backend_worker)
/opt/ruby/deps/rubygems/github.com/3scale/redis-rb/redis-rb-external-gitcommit-7210a9d6cf733fe5a1ad0dd20f5f613167743810/app/lib/redis/client.rb:126:in `call': OOM command not allowed when used memory > 'maxmemory'.
This indicates memory issues on the backend worker.
If Redis runs on the cluster, running INFO memory in the Redis pod is useful to check current memory usage.
Use tools like Grafana dashboards from 3scale Monitoring or CloudWatch from AWS to investigate the memory usage.
You can also check the eviction policy (volatile-lru, allkeys-lru, etc.) and track key evictions with INFO stats.
-
Latency issues
If running Redis on the cluster, use redis-cli --latency or redis-cli --latency-history to monitor and log latency over time.
If using a cloud provider database, metrics such as AWS CloudWatch might help you understand the issue better.
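To quantify memory pressure from the INFO memory output, the following is a minimal sketch that parses used_memory against maxmemory from a saved excerpt; the sample values are hypothetical:

```shell
# Hypothetical excerpt of `redis-cli INFO memory` output. Real output uses
# CRLF line endings, so pipe it through `tr -d '\r'` before parsing.
info='used_memory:943718400
maxmemory:1073741824'

used=$(printf '%s\n' "$info" | awk -F: '/^used_memory:/ {print $2}')
max=$(printf '%s\n' "$info" | awk -F: '/^maxmemory:/ {print $2}')
# Integer percentage of maxmemory currently in use:
pct=$(( used * 100 / max ))
echo "used ${pct}% of maxmemory"
```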
7.1.2.3.2.2. System Redis
-
Memory Management Issues
For system, the Redis instance might become overloaded with jobs to be processed; this might be caused by system-sidekiq issues. Going through the system-sidekiq logs might indicate what the issue is.
If sidekiq is functional, navigating to https://master.<domain>/sidekiq can also help diagnose the jobs being processed.
7.1.2.3.3. Solution
7.1.2.3.3.1. Backend Redis
- Memory Management Issues Consider increasing memory and configuring the eviction policies. Alternatively, look into distributing the load across multiple Redis instances to balance the memory usage (Sentinels). For more information, see how to configure Redis with sentinels.
-
Latency issues Consider Redis persistence tuning, which can be done by adjusting the SAVE or AOF settings to reduce the impact of disk I/O on performance.
Look into distributing the load across multiple Redis instances to balance the memory usage (Sentinels). See more information on how to configure Redis with sentinels.
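The eviction and persistence tuning described above can be sketched as a redis.conf fragment; all values are illustrative placeholders, not recommendations:

```conf
# Illustrative values -- adjust to your memory budget and durability needs.
maxmemory 1gb
maxmemory-policy allkeys-lru
# Persistence tuning: snapshot after 900 s if at least 1 key changed,
# and fsync the AOF once per second instead of on every write.
save 900 1
appendonly yes
appendfsync everysec
```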
7.1.2.3.3.2. System Redis
- Memory Management Issues
Ensure sidekiq jobs are being processed promptly.