Migrating Red Hat 3scale API Management


Red Hat 3scale API Management 2.16

Migrate or upgrade 3scale API Management and its components

Red Hat Customer Content Services

Abstract

Upgrade Red Hat 3scale API Management to the latest version by using the 3scale operator, and find information about upgrading APIcast in an operator-based deployment.

Preface

Warning

DO NOT ATTEMPT TO INSTALL OR UPGRADE TO 3scale 2.16 IF YOUR DEPLOYMENT USES ORACLE DATABASE. 3scale 2.16 is currently not compatible with Oracle DB. Upgrading from 2.15 to 2.16 in such environments will lead to severe issues preventing the system from operating correctly. Deployments using Oracle DB must stay on version 2.15 until compatibility is added in a future maintenance release (planned for 2.16.1).

This guide provides the information to upgrade Red Hat 3scale API Management to the latest version via the 3scale operator. You will find details required to upgrade your 3scale installation from 2.15 to 2.16, as well as the steps to upgrade APIcast in an operator-based deployment.

To upgrade your 3scale On-premises deployment, refer to the following guide:

To upgrade APIcast in an operator-based deployment, refer to the following guide:

Providing feedback on Red Hat documentation

We appreciate your feedback on our documentation.

To propose improvements, open a Jira issue and describe your suggested changes. Provide as much detail as possible to enable us to address your request quickly.

Prerequisite

  • You have a Red Hat Customer Portal account. This account enables you to log in to the Red Hat Jira Software instance. If you do not have an account, you will be prompted to create one.

Procedure

  1. Click the following link: Create issue.
  2. In the Summary text box, enter a brief description of the issue.
  3. In the Description text box, provide the following information:

    • The URL of the page where you found the issue.
    • A detailed description of the issue.
      You can leave the information in any other fields at their default values.
  4. Click Create to submit the Jira issue to the documentation team.

Thank you for taking the time to provide feedback.

Upgrade Red Hat 3scale API Management from version 2.15 to 2.16 in an operator-based installation to manage 3scale on OpenShift 4.x.

To automatically obtain micro-releases of 3scale, make sure automatic updates are on. Do not enable automatic updates if you are using an external Oracle database. To check this, see Configuring automated application of micro releases.
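One way to check the approval mode from the CLI is to inspect the operator subscription; this is a sketch, and the subscription name threescale-operator is an assumption — list yours with oc get subscriptions:

```shell
# "Automatic" means micro-releases are applied automatically;
# "Manual" means InstallPlans must be approved by hand.
oc get subscription threescale-operator -n "$OPERATOR_NAMESPACE" \
  -o jsonpath='{.spec.installPlanApproval}{"\n"}'
```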

Important

To understand the required conditions and procedure, read the entire upgrade guide before applying the listed steps. The upgrade process disrupts service provision until the procedure finishes. Because of this disruption, schedule a maintenance window.

1.1. Prerequisites to perform the upgrade

Important

To resolve certificate verification failures with the 3scale operator, add the annotation to skip certificate verification to the affected Custom Resource (CR). This annotation can be applied to a CR during creation or added to an existing CR. Once applied, the errors are reconciled.

This section describes the required configurations to upgrade 3scale from 2.15 to 2.16 in an operator-based installation.

  • An OpenShift Container Platform (OCP) 4.12, 4.14, 4.16, 4.17, 4.18, 4.19 or 4.20 cluster with administrator access. Ensure that your OCP environment is upgraded to at least version 4.12, which is the minimal requirement for proceeding with a 3scale update.
  • 3scale 2.15 previously deployed via the 3scale operator.
  • Make sure the latest CSV of the threescale-2.15 channel is in use. To check it:

    • If the approval setting for the subscription is automatic, you should already be in the latest CSV version of the channel.
    • If the approval setting for the subscription is manual, make sure you approve all pending InstallPlans and have the latest CSV version.
    • Keep in mind that if there is a pending install plan, there might be more pending install plans, which are only shown after the existing pending plan has been installed.
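The checks in the bullet points above can also be performed from the CLI; this is a sketch, and the subscription name threescale-operator is an assumption — verify with oc get subscriptions:

```shell
# Show the CSV currently in use for the subscription:
oc get subscription threescale-operator -n "$OPERATOR_NAMESPACE" \
  -o jsonpath='{.status.currentCSV}{"\n"}'
# List InstallPlans; approve any that are pending (manual approval mode only):
oc get installplan -n "$OPERATOR_NAMESPACE"
```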

1.1.1. External databases requirement

In 3scale 2.16, internal databases are not supported and are not managed by the operator. The only exception is the Zync database, which can still be used as an internal component (the zync-database deployment).

Before upgrading to 2.16, ensure that all databases used by your 3scale installation are not managed by the operator:

  • System database: MySQL, PostgreSQL, or Oracle (configured by the system-database secret)
  • Backend Redis (configured by the backend-redis secret)
  • System Redis (configured by the system-redis secret)

Important

Ensure that the versions of your databases are supported in 3scale 2.16 before proceeding with the upgrade. Refer to Components and minimum version requirements for more information.

If you are using internal databases, first migrate them to the external databases, following the instructions in Externalizing databases for 2.16.

Before installing 3scale 2.16 via the operator, ensure your database components meet the required minimum versions. This pre-flight check is critical to avoid breaking your 3scale instance during the upgrade.

Important
  • If the databases are not upgraded, the 3scale instance will not be upgraded to 2.16.
  • You can upgrade your databases with or without the 3scale 2.16 operator running. If the operator is running, it checks database versions every 10 minutes and automatically triggers the upgrade process when the requirements are met. If the operator was not running during the database upgrade, scale it back up so that it can verify the requirements and continue with the installation.
Note
  • The Oracle Database is not checked.
  • Zync with external databases is not checked.
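If you scaled the operator down while upgrading the databases, a sketch of scaling it back up so it re-checks the database versions ($OPERATOR_NAMESPACE is the namespace where the operator is installed, and the deployment name matches the one used later in this guide):

```shell
# Scale the 3scale operator controller back to one replica:
oc scale deployment threescale-operator-controller-manager-v2 \
  -n "$OPERATOR_NAMESPACE" --replicas=1
```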

Ensure the following components are at or above the specified versions:

System-app component:

  • MySQL: 8.0.0
  • PostgreSQL: 15.0

Backend component:

  • Redis: 7.2 (two instances required)
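A minimal sketch of the version comparison these requirements imply, using sort -V; the helper function and sample versions are illustrative, not part of 3scale tooling:

```shell
# Hypothetical pre-flight helper: succeeds when $1 (detected version)
# is greater than or equal to $2 (documented minimum).
version_ge() {
  [ "$(printf '%s\n%s\n' "$2" "$1" | sort -V | head -n1)" = "$2" ]
}

# Sample checks against the minimums listed above:
version_ge "8.0.36" "8.0.0" && echo "MySQL OK"
version_ge "15.4"   "15.0"  && echo "PostgreSQL OK"
version_ge "7.2.4"  "7.2"   && echo "Redis OK"
```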

Version verification

  • Verify MySQL version:

    $ mysql --version
  • Verify PostgreSQL version:

    $ psql --version
  • Verify Redis version:

    $ redis-server --version

If your database versions do not meet the minimum requirements, follow these steps:

  1. Install the 3scale 2.16 operator:

    • The 2.16 operator is installed regardless of the database versions.
  2. Upgrade databases:

    • Upgrade MySQL, PostgreSQL, or Redis to meet the minimum required versions.
    • Note: Follow the official documentation for the upgrade procedures of each database.
  3. Resume 2.16 upgrade:

    • Once the databases are upgraded, the 3scale 2.16 operator detects the new versions.
    • The upgrade process for 3scale 2.16 will then proceed automatically.

By following these pre-flight checks and ensuring your database components are up to date, you can transition to 3scale 2.16.

To upgrade 3scale from version 2.15 to 2.16 in an operator-based deployment:

  1. Log in to the OCP console using the account with administrator privileges.
  2. Select the project where the 3scale-operator has been deployed.
  3. Click Operators > Installed Operators.
  4. Select Red Hat Integration - 3scale > Subscription > Channel.
  5. Edit the channel of the subscription by selecting threescale-2.16 and save the changes.

    This will start the upgrade process.
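The console steps above can also be performed with a single CLI patch; this is a sketch, and the subscription name threescale-operator is an assumption — check yours with oc get subscriptions:

```shell
# Switch the subscription to the threescale-2.16 channel, which starts the upgrade:
oc patch subscription threescale-operator -n "$OPERATOR_NAMESPACE" \
  --type=merge -p '{"spec":{"channel":"threescale-2.16"}}'
```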

  6. Query the pods' status on the project until you see all the new versions are running and ready without errors:

    $ oc get pods -n <3scale_namespace>
    Note
    • The pods might have temporary errors during the upgrade process.
    • The time required to upgrade pods can vary from 5 to 10 minutes.
  7. After new pod versions are running, confirm a successful upgrade by logging in to the 3scale Admin Portal and checking that it works as expected.
  8. Check the status of the APIManager objects and get the YAML content by running the following command. <myapimanager> represents the name of your APIManager:

    $ oc get apimanager <myapimanager> -n <3scale_namespace> -o yaml
    • The new annotations with the values should be as follows:

      apps.3scale.net/apimanager-threescale-version: "2.16"
      apps.3scale.net/threescale-operator-version: "0.13.x"

After you have performed all steps, the 3scale upgrade from 2.15 to 2.16 in an operator-based deployment is complete.

Follow this procedure to update your 3scale operator-based installation with an external Oracle database.

Procedure

  1. Follow the steps in the Installing Red Hat 3scale API Management guide to create a new system-oracle-3scale-2.16.0-1 image.
  2. Follow the steps in Upgrading from 2.15 to 2.16 in an operator-based installation to upgrade the 3scale operator.
  3. Once the upgrade is completed, update the APIManager custom resource with the new image created in the first step of this procedure as described in Installing 3scale API Management with Oracle using the operator.

Upgrading APIcast from 2.15 to 2.16 in an operator-based installation helps you use the APIcast API gateway to integrate your internal and external application programming interfaces (APIs) services with 3scale.

Important

To understand the required conditions and procedure, read the entire upgrade guide before applying the listed steps. The upgrade process disrupts service provision until the procedure finishes. Because of this disruption, schedule a maintenance window.

2.1. Prerequisites to perform the upgrade

To perform the upgrade of APIcast from 2.15 to 2.16 in an operator-based installation, the following prerequisites must be in place:

  • An OpenShift Container Platform (OCP) 4.12, 4.14, 4.16, 4.17, 4.18, 4.19 or 4.20 cluster with administrator access. Ensure that your OCP environment is upgraded to at least version 4.12, which is the minimal requirement for proceeding with an APIcast update.
  • APIcast 2.15 previously deployed via the APIcast operator.
  • Make sure the latest CSV of the threescale-2.15 channel is in use. To check it:

    • If the approval setting for the subscription is automatic, you should already be in the latest CSV version of the channel.
    • If the approval setting for the subscription is manual, make sure you approve all pending InstallPlans and have the latest CSV version.
    • Keep in mind that if there is a pending install plan, there might be more pending install plans, which are only shown after the existing pending plan has been installed.

Upgrade APIcast from 2.15 to 2.16 in an operator-based installation so that APIcast can function as the API gateway in your 3scale installation.

Procedure

  1. Log in to the OCP console using the account with administrator privileges.
  2. Select the project where the APIcast operator has been deployed.
  3. Click Operators > Installed Operators.
  4. Select Red Hat Integration - 3scale APIcast gateway > Subscription > Channel.
  5. Edit the channel of the subscription by selecting the threescale-2.16 channel and save the changes.

    This will start the upgrade process.

  6. Query the pods' status on the project until you see all the new versions are running and ready without errors:

    $ oc get pods -n <apicast_namespace>
    Note
    • The pods might have temporary errors during the upgrade process.
    • The time required to upgrade pods can vary from 5 to 10 minutes.
  7. Check the status of the APIcast objects and get the YAML content by running the following command:

    $ oc get apicast <myapicast> -n <apicast_namespace> -o yaml
    • The new annotations with the values should be as follows:

      apicast.apps.3scale.net/operator-version: "0.13.x"

After you have performed all steps, the APIcast upgrade from 2.15 to 2.16 in an operator-based deployment is complete.

Chapter 3. Externalizing databases for 2.16

The procedure of externalizing databases consists of migrating the internal databases used by 3scale to external databases. In this context, the term "external" means that the databases are not part of the 3scale installation and are not managed by the 3scale operator. The term does not indicate whether the database is hosted inside or outside of the OpenShift cluster where 3scale is installed, or whether it resides in the same namespace as the 3scale installation.

Important

To avoid data corruption and inconsistencies, the process must be performed when 3scale is not running. Therefore, schedule a maintenance window for the procedure. It is recommended to perform the procedure in a test environment before attempting the migration in a production environment.

3.1. Prerequisites

  1. Red Hat 3scale API Management 2.15 installed and running successfully on a supported version of Red Hat OpenShift Container Platform.
  2. Enough permissions to be able to create, update and delete OpenShift resources: deployments, persistent volumes and persistent volume claims, config maps, secrets, and APIManager resource.
  3. If the databases will be deployed on the OpenShift cluster, the cluster must have enough resources to create a new PostgreSQL database (if PostgreSQL is currently used) and two additional Redis instances.
  4. Consider your post migration verification steps before proceeding. This may require you to take a snapshot of data using the API or portal in order to verify a successful migration.
  5. You have the oc CLI installed on the machine where the procedure will be performed.
Note

This guide provides steps for creating the databases within the OpenShift cluster, using the Deployment resource, in a similar way to how the 3scale operator created the internal databases in versions 2.15 and older. This setup is not recommended for production environments.
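For the data-snapshot verification step in the prerequisites above, one option is the 3scale Account Management API; this is a sketch, and the admin portal host and access token are placeholders for your environment:

```shell
# Placeholders -- substitute your own Admin Portal URL and a valid access token:
ADMIN_PORTAL=https://3scale-admin.example.com
ACCESS_TOKEN=<access-token>

# Save the current account list so it can be compared after the migration:
curl -s "$ADMIN_PORTAL/admin/api/accounts.xml?access_token=$ACCESS_TOKEN" \
  > accounts-before-migration.xml
```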

3.2. PostgreSQL 10 upgrade - On-cluster

This section covers the steps necessary to upgrade the internal PostgreSQL database from version 10 to version 15 while keeping the database on the cluster.

A high-level overview of the migration:

  • Scale down the 3scale instance, keeping the system database running.
  • Export the database data into a dump file.
  • Deploy the new database on the cluster.
  • Restore the database data onto the newly created database.
  • Change the database connection string for the system database to point to the new database.
  • Mark the system database as external in APIManager.
  • Start the 3scale instance.
  • In the event of failure, scale down the 3scale instance, point it back to the old database, remove the external database setting in APIManager, and start 3scale again.

3.2.1. Preliminary steps

  1. Log in to the OpenShift cluster where your 3scale On-premises instance is installed:

    oc login <url> <authentication-parameters>

    Replace <url> and <authentication-parameters> with your own OpenShift server URL and authentication parameters. Authentication parameters can be either -u <username> or --token=<token>.

  2. Expose following environment variables:

    export THREESCALE_NAMESPACE=<3scale-namespace>
    export OPERATOR_NAMESPACE=<3scale-operator-namespace>

    Replace <3scale-namespace> and <3scale-operator-namespace> with the names of the namespaces where 3scale and the 3scale operator are installed, respectively.

  3. Switch to the 3scale installation namespace. The commands below assume that the current namespace is the one where 3scale is installed, unless another namespace is specified explicitly with the -n option.

    oc project $THREESCALE_NAMESPACE
  4. Export the values of replica counts for each deployment before scaling them down to ensure that when the 3scale instance is scaled back up, the deployments are restored to their original replica counts.

    SYSTEM_MEMCACHE_REPLICA_COUNT=$(oc get deployment system-memcache -o=jsonpath='{.spec.replicas}')
    ZYNC_DATABASE_REPLICA_COUNT=$(oc get deployment zync-database -o=jsonpath='{.spec.replicas}')
    APICAST_PRODUCTION_REPLICA_COUNT=$(oc get deployment apicast-production -o=jsonpath='{.spec.replicas}')
    APICAST_STAGING_REPLICA_COUNT=$(oc get deployment apicast-staging -o=jsonpath='{.spec.replicas}')
    BACKEND_CRON_REPLICA_COUNT=$(oc get deployment backend-cron -o=jsonpath='{.spec.replicas}')
    BACKEND_LISTENER_REPLICA_COUNT=$(oc get deployment backend-listener -o=jsonpath='{.spec.replicas}')
    BACKEND_WORKER_REPLICA_COUNT=$(oc get deployment backend-worker -o=jsonpath='{.spec.replicas}')
    SYSTEM_APP_REPLICA_COUNT=$(oc get deployment system-app -o=jsonpath='{.spec.replicas}')
    SYSTEM_SIDEKIQ_REPLICA_COUNT=$(oc get deployment system-sidekiq -o=jsonpath='{.spec.replicas}')
    SYSTEM_SEARCHD_REPLICA_COUNT=$(oc get deployment system-searchd -o=jsonpath='{.spec.replicas}')
    ZYNC_REPLICA_COUNT=$(oc get deployment zync -o=jsonpath='{.spec.replicas}')
    ZYNC_QUE_REPLICA_COUNT=$(oc get deployment zync-que -o=jsonpath='{.spec.replicas}')

Before scaling down the 3scale instance and the 3scale operator, ensure you are aware of the resources created in your 3scale instance. This knowledge is required to confirm that all the contents remain in the 3scale system after the migration.

Scale down the deployment of the 3scale operator controller to prevent it from interfering with the scaling down of other pods.

oc scale deployment threescale-operator-controller-manager-v2 -n $OPERATOR_NAMESPACE --replicas=0

Scale down all the pods of the 3scale deployment, except system-postgresql:

oc scale deployment/{system-memcache,zync-database,apicast-production,apicast-staging,backend-cron,backend-listener,backend-worker,backend-redis,system-app,system-redis,system-sidekiq,system-searchd,zync,zync-que} --replicas=0

Verify that all pods have been scaled down with the following command:

oc get deployment/{system-memcache,zync-database,apicast-production,apicast-staging,backend-cron,backend-listener,backend-worker,backend-redis,system-app,system-redis,system-sidekiq,system-searchd,zync,zync-que}

The column READY should show 0/0 for all the deployments listed above.

3.2.3. Prepare a PostgreSQL dump

Save the name of the system-postgresql pod in an environment variable:

POSTGRES_POD=$(oc get pods -l deployment=system-postgresql -o jsonpath='{.items[0].metadata.name}')

Export database data into a dump file on the pod:

oc exec $POSTGRES_POD -- pg_dump -U system -d system -F c -b -v -f /tmp/db_dump.backup

Ensure the command is executed successfully.

Copy the dump from the pod to the host machine:

oc cp $POSTGRES_POD:/tmp/db_dump.backup ./db_dump.backup

During the copy command execution, you might encounter a message:

tar: Removing leading `/' from member names

This is expected and does not mean your data is corrupted or was unsuccessfully pulled from the database.

Create or switch to the namespace where the database will be deployed.

Export the name of the namespace where the new PostgreSQL database will be installed into a variable. You can use an existing namespace, or create a new one. It is also possible to deploy the new database in the same namespace where 3scale is installed, but it is not recommended.

DB_NAMESPACE=<database-target-namespace>
oc project $DB_NAMESPACE

or

oc new-project $DB_NAMESPACE

Export the existing OpenShift secret system-database from the 3scale namespace:

oc get secret system-database -n $THREESCALE_NAMESPACE -o yaml > system-database-secret.yml

Create a secret in the database namespace with the same username and password as in the original database (only needed if the database is installed in a namespace different from where 3scale is installed):

DB_USER=$(oc get secret system-database -n $THREESCALE_NAMESPACE -o jsonpath='{.data.DB_USER}' | base64 -d)
DB_PASSWORD=$(oc get secret system-database -n $THREESCALE_NAMESPACE -o jsonpath='{.data.DB_PASSWORD}' | base64 -d)
oc create secret generic system-database --from-literal=DB_USER=$DB_USER --from-literal=DB_PASSWORD=$DB_PASSWORD --namespace=$DB_NAMESPACE

Create a YAML file system-postgresql-deployment-external.yaml locally with the specification of the new deployment for PostgreSQL 15:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: system-postgresql-external
  labels:
    app: system-postgresql-external
spec:
  progressDeadlineSeconds: 600
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      deployment: system-postgresql-external
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        deployment: system-postgresql-external
    spec:
      containers:
      - name: system-postgresql-external
        image: registry.redhat.io/rhel9/postgresql-15
        imagePullPolicy: IfNotPresent
        env:
        - name: POSTGRESQL_USER
          valueFrom:
            secretKeyRef:
              key: DB_USER
              name: system-database
        - name: POSTGRESQL_PASSWORD
          valueFrom:
            secretKeyRef:
              key: DB_PASSWORD
              name: system-database
        - name: POSTGRESQL_DATABASE
          value: system
        ports:
        - containerPort: 5432
          protocol: TCP
        livenessProbe:
          failureThreshold: 3
          initialDelaySeconds: 30
          periodSeconds: 10
          successThreshold: 1
          tcpSocket:
            port: 5432
          timeoutSeconds: 1
        readinessProbe:
          exec:
            command:
            - /bin/sh
            - -i
            - -c
            - psql -h 127.0.0.1 -U $POSTGRESQL_USER -q -d $POSTGRESQL_DATABASE -c 'SELECT 1'
          failureThreshold: 3
          initialDelaySeconds: 10
          periodSeconds: 30
          successThreshold: 1
          timeoutSeconds: 5
        resources:
          limits:
            memory: 2Gi
          requests:
            cpu: 250m
            memory: 512Mi
        volumeMounts:
        - mountPath: /var/lib/pgsql/data
          name: postgresql-data-external
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      serviceAccountName: postgresql-db-external
      terminationGracePeriodSeconds: 30
      volumes:
      - name: postgresql-data-external
        persistentVolumeClaim:
          claimName: postgresql-data-external

Consider updating the following:

  • Labels, annotations, and environment variables if you have or require custom values
  • Limits and requests if you have or require custom values
  • Other values, according to your requirements

Create a service account for the PostgreSQL deployment and label it:

oc create serviceaccount postgresql-db-external
oc label serviceaccount postgresql-db-external app=system-postgresql-external

Create a YAML file postgresql-data-external-pvc.yaml locally with the specification of the Persistent Volume Claim for the new database:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: postgresql-data-external
  labels:
    app: system-postgresql-external
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  volumeMode: Filesystem

Adjust the storage request as required.

Create a YAML file system-postgresql-service.yaml locally with the specification of the Service for the new database:

apiVersion: v1
kind: Service
metadata:
  name: system-postgresql-service
  labels:
    app: system-postgresql-external
spec:
  type: ClusterIP
  ports:
  - port: 5432
    targetPort: 5432
    protocol: TCP
  selector:
    deployment: system-postgresql-external

Create the resources from the YAML files created in the previous steps using oc apply:

oc apply -f postgresql-data-external-pvc.yaml
oc apply -f system-postgresql-deployment-external.yaml
oc apply -f system-postgresql-service.yaml

Verify that all the resources have been created properly with the following command:

oc get deployment,pvc,svc,serviceaccount -l app=system-postgresql-external

Specifically, check that the column READY for the system-postgresql-external deployment shows 1/1.

3.2.6. Upload database dump to PostgreSQL

Once the new PostgreSQL deployment pod is ready, copy the DB dump file from the local machine to the pod:

oc cp ./db_dump.backup $(oc get pods -l 'deployment=system-postgresql-external' -o json | jq '.items[0].metadata.name' -r):/tmp

Restore the database using the dump file:

oc rsh $(oc get pods -l 'deployment=system-postgresql-external' -o json | jq -r '.items[0].metadata.name') \
bash -c 'pg_restore -v -h localhost -U postgres -d system /tmp/db_dump.backup'

You may see the following warnings:

pg_restore: error: could not execute query: ERROR:  schema "public" already exists
Command was: CREATE SCHEMA public;

and

pg_restore: warning: errors ignored on restore: 1
command terminated with exit code 1

These warning messages can be ignored; they do not mean that the process has failed.

To verify that the restore process has completed successfully, run the following command to show some data in the database:

oc rsh $(oc get pods -l 'deployment=system-postgresql-external' -o json | jq -r '.items[0].metadata.name') \
psql -U postgres -d system -c 'SELECT org_name FROM accounts LIMIT 20;'

The example output returned by this command:

    org_name
----------------
 Master Account
 Developer
 Provider Name
 Test
(4 rows)

The data should show the org names of the accounts. Verify that the names are as expected.

3.2.7. Update 3scale secret

Switch back to the 3scale installation namespace:

oc project $THREESCALE_NAMESPACE

The system-database secret needs to be updated to point to the new external database. Before updating the secret, make a backup of the existing resource:

oc get secret system-database -o yaml > system-database-secret.yaml

Verify that the variables $DB_NAMESPACE, $DB_USER and $DB_PASSWORD are set. Set the new database connection string to the variable DB_URL:

DB_URL=postgresql://$DB_USER:$DB_PASSWORD@system-postgresql-service.$DB_NAMESPACE.svc.cluster.local/system
Note

If the new PostgreSQL database was created in a different way, not following the exact steps described in this guide, modify the connection string accordingly.
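For reference, the connection string follows the standard PostgreSQL URL format of postgresql://<user>:<password>@<host>/<database>; the values in this sketch are placeholders used only to illustrate the shape:

```shell
# Placeholder values -- substitute your own credentials and service host:
DB_USER=system
DB_PASSWORD=changeme
DB_HOST=system-postgresql-service.databases.svc.cluster.local
DB_NAME=system

# Assemble the connection string in the shape this guide uses:
DB_URL="postgresql://${DB_USER}:${DB_PASSWORD}@${DB_HOST}/${DB_NAME}"
echo "$DB_URL"
```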

Patch the system-database secret under the 3scale installation namespace to point to the new PostgreSQL instance.

oc patch secret system-database -p "{\"stringData\":{\"URL\":\"$DB_URL\"}}"

Update the APIManager custom resource to indicate that the database of the system component is external. Run the following commands:

APIMANAGER_NAME=$(oc get apimanager -o jsonpath='{.items[0].metadata.name}')
oc patch apimanager $APIMANAGER_NAME --type=merge -p '{"spec": {"externalComponents": {"system": {"database": true}}}}'
Note

This will “disconnect” the operator from reconciling the deployment, meaning that 3scale Operator will no longer reconcile the database deployment or deployment configuration and the associated persistent volume claim.

3.2.9. Scale up the 3scale pods

Scale up the following pods to the replica counts stored in environment variables earlier.

oc scale deployment backend-redis --replicas=1
oc scale deployment system-redis --replicas=1
oc scale deployment system-memcache --replicas=$SYSTEM_MEMCACHE_REPLICA_COUNT
oc scale deployment zync-database --replicas=$ZYNC_DATABASE_REPLICA_COUNT
oc scale deployment backend-cron --replicas=$BACKEND_CRON_REPLICA_COUNT
oc scale deployment backend-listener --replicas=$BACKEND_LISTENER_REPLICA_COUNT
oc scale deployment backend-worker --replicas=$BACKEND_WORKER_REPLICA_COUNT
oc scale deployment system-searchd --replicas=$SYSTEM_SEARCHD_REPLICA_COUNT
oc scale deployment zync --replicas=$ZYNC_REPLICA_COUNT
oc scale deployment zync-que --replicas=$ZYNC_QUE_REPLICA_COUNT
oc scale deployment system-app --replicas=$SYSTEM_APP_REPLICA_COUNT
oc scale deployment system-sidekiq --replicas=$SYSTEM_SIDEKIQ_REPLICA_COUNT
oc scale deployment apicast-staging --replicas=$APICAST_STAGING_REPLICA_COUNT
oc scale deployment apicast-production --replicas=$APICAST_PRODUCTION_REPLICA_COUNT

Scale up the deployment of the 3scale operator controller back to 1 replica.

oc scale deployment/threescale-operator-controller-manager-v2 -n $OPERATOR_NAMESPACE --replicas=1

Once the threescale-operator-controller-manager-v2 pod is up and running, the 3scale operator reconciles the resources to ensure that the state of the cluster matches the desired state defined in the APIManager custom resource. For example, if a replica count is specified for any of the components in the APIManager custom resource, the operator scales the corresponding deployment to match that number.

Verify that all the pods are up and running with the following command:

oc get deployments

All deployments should show matching numbers of ready and desired replicas in the READY column, for example, 1/1 or 2/2, except system-postgresql.

Verify that everything is working properly by logging in to the Admin Portal and the Developer Portal and checking that the APIs are working as expected.

In case you observe any errors in the system-app or system-sidekiq pods, you can follow the instructions in Rolling back to revert the changes and restore the system to use the internal PostgreSQL database.

3.2.10. Delete the internal PostgreSQL deployment

At this point, the 3scale instance should have fully recovered. Run suitable tests against the installation to confirm the data was correctly migrated.

Once you have confirmed that the database is fully functional and the data is correct, remove the previous PostgreSQL deployment and the associated PVC.

Warning

Only perform the steps below once you are 100% sure the data is correct, as the commands will remove the previous PostgreSQL data irreversibly.

oc delete deployment system-postgresql -n $THREESCALE_NAMESPACE
oc delete pvc postgresql-data -n $THREESCALE_NAMESPACE
oc delete service system-postgresql -n $THREESCALE_NAMESPACE

3.2.11. Rolling back to internal PostgreSQL

In the event of a failed migration of the database engine version, it is recommended to restore your database to PostgreSQL 10 and retry. To do this, point 3scale back to the previous database (the default 3scale database).

Ensure the current namespace is the one where 3scale is installed:

oc project $THREESCALE_NAMESPACE

Scale down 3scale instance and the operator:

oc scale deployment threescale-operator-controller-manager-v2 -n $OPERATOR_NAMESPACE --replicas=0
oc scale deployment/{system-memcache,zync-database,apicast-production,apicast-staging,backend-cron,backend-listener,backend-worker,backend-redis,system-app,system-redis,system-sidekiq,system-searchd,zync,zync-que} --replicas=0

Update the APIManager custom resource to indicate that the database of the system component is internal. Run the following commands:

APIMANAGER_NAME=$(oc get apimanager -o jsonpath='{.items[0].metadata.name}')
oc patch apimanager $APIMANAGER_NAME --type=merge -p '{"spec": {"externalComponents": {"system": {"database": false}}}}'

Re-apply the PostgreSQL database specification to APIManager:

oc patch APIManager $APIMANAGER_NAME --type=merge -p '{"spec": {"system": {"database": {"postgresql": {}}}}}'

Restore the system-database secret from your local backup:

oc delete secret system-database
oc apply -f system-database-secret.yaml

Follow the steps in Scaling up pods to scale up the pods.

At this point, after a while, your instance should be recovered to its previous, pre-migration state.

Remove the external database deployment, service, PVC, and service account.

oc delete deployment system-postgresql-external -n $DB_NAMESPACE
oc delete service system-postgresql-service -n $DB_NAMESPACE
oc delete sa postgresql-db-external -n $DB_NAMESPACE
oc delete pvc postgresql-data-external -n $DB_NAMESPACE

3.3. MySQL migration to external - On-cluster

The internal MySQL database is the database used by the System component of 3scale. In 3scale 2.15, it is present in the namespace as the system-mysql deployment.

To externalize the MySQL database, there are two options you can choose from:

  1. Keep the existing system-mysql deployment, but disconnect it from the operator.
  2. Migrate the data to a new MySQL server. For this option, follow the instructions in Configuring an external MySQL database.

This section covers the first option, which keeps the existing system-mysql deployment. The approach consists of removing any 3scale references from the resources related to the internal MySQL database and marking the database as an external component in the APIManager custom resource.

Note

Although the steps necessary to migrate to external MySQL are minimal, the migration is service-affecting due to database restarts. However, no dump or backup files are required.

3.3.1. Preliminary steps

  1. Log in to the OpenShift cluster where your 3scale On-premises instance is installed:

    oc login <url> <authentication-parameters>

    Replace <url> and <authentication-parameters> with your OpenShift server URL and authentication parameters. The authentication parameters can be either -u <username> or --token=<token>.

  2. Export the following environment variables:

    export THREESCALE_NAMESPACE=<3scale-namespace>
    export OPERATOR_NAMESPACE=<3scale-operator-namespace>

    Replace <3scale-namespace> and <3scale-operator-namespace> with the names of the namespaces where 3scale and the 3scale operator are installed, respectively.

  3. Switch to the 3scale installation namespace. The commands below assume that the current namespace is the one where 3scale is installed, unless another namespace is specified explicitly with the -n option.

    oc project $THREESCALE_NAMESPACE
  4. Export the values of replica counts for each deployment before scaling them down to ensure that when the 3scale instance is scaled back up, the deployments are restored to their original replica counts.

    SYSTEM_MEMCACHE_REPLICA_COUNT=$(oc get deployment system-memcache -o=jsonpath='{.spec.replicas}')
    ZYNC_DATABASE_REPLICA_COUNT=$(oc get deployment zync-database -o=jsonpath='{.spec.replicas}')
    APICAST_PRODUCTION_REPLICA_COUNT=$(oc get deployment apicast-production -o=jsonpath='{.spec.replicas}')
    APICAST_STAGING_REPLICA_COUNT=$(oc get deployment apicast-staging -o=jsonpath='{.spec.replicas}')
    BACKEND_CRON_REPLICA_COUNT=$(oc get deployment backend-cron -o=jsonpath='{.spec.replicas}')
    BACKEND_LISTENER_REPLICA_COUNT=$(oc get deployment backend-listener -o=jsonpath='{.spec.replicas}')
    BACKEND_WORKER_REPLICA_COUNT=$(oc get deployment backend-worker -o=jsonpath='{.spec.replicas}')
    SYSTEM_APP_REPLICA_COUNT=$(oc get deployment system-app -o=jsonpath='{.spec.replicas}')
    SYSTEM_SIDEKIQ_REPLICA_COUNT=$(oc get deployment system-sidekiq -o=jsonpath='{.spec.replicas}')
    SYSTEM_SEARCHD_REPLICA_COUNT=$(oc get deployment system-searchd -o=jsonpath='{.spec.replicas}')
    ZYNC_REPLICA_COUNT=$(oc get deployment zync -o=jsonpath='{.spec.replicas}')
    ZYNC_QUE_REPLICA_COUNT=$(oc get deployment zync-que -o=jsonpath='{.spec.replicas}')
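The per-deployment exports above can also be captured in a single loop. The following sketch (the `capture_replica_counts` helper is an illustration, not part of the product) derives the same variable names by uppercasing the deployment name and replacing dashes with underscores:

```shell
# Sketch: capture the current replica count of each deployment into a
# matching <NAME>_REPLICA_COUNT variable (dashes become underscores).
capture_replica_counts() {
  for d in system-memcache zync-database apicast-production apicast-staging \
           backend-cron backend-listener backend-worker system-app \
           system-sidekiq system-searchd zync zync-que; do
    var=$(printf '%s' "$d" | tr 'a-z-' 'A-Z_')_REPLICA_COUNT
    eval "$var=\$(oc get deployment $d -o jsonpath='{.spec.replicas}')"
  done
}
```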

Scale down the deployment of the 3scale operator controller to prevent it from interfering with the scaling down of other pods.

oc scale deployment threescale-operator-controller-manager-v2 -n $OPERATOR_NAMESPACE --replicas=0

Scale down all the pods of the 3scale deployment, except the databases:

oc scale deployment/{system-memcache,zync-database,apicast-production,apicast-staging,backend-cron,backend-listener,backend-worker,system-app,system-sidekiq,system-searchd,zync,zync-que} --replicas=0

Verify that all pods have been scaled down with the following command:

oc get deployment/{system-memcache,zync-database,apicast-production,apicast-staging,backend-cron,backend-listener,backend-worker,system-app,system-sidekiq,system-searchd,zync,zync-que}

The column READY should show 0/0 for all the deployments listed above.

Update the APIManager custom resource to indicate that the database of the system component is external. This will detach the system-mysql deployment and the related PersistentVolumeClaim and ConfigMap resources from the 3scale operator. Run the following commands:

APIMANAGER_NAME=$(oc get apimanager -o jsonpath='{.items[0].metadata.name}')
oc patch apimanager $APIMANAGER_NAME --type=merge -p '{"spec": {"externalComponents": {"system": {"database": true}}}}'

Removing ownerReferences is required to ensure that even if the APIManager CR is removed, the database pod and PVCs associated with it are not removed.

Remove the metadata.ownerReferences from the system-mysql deployment:

oc patch deployment system-mysql --type=json -p='[{"op": "remove", "path": "/metadata/ownerReferences"}]'

Remove the metadata.ownerReferences from the mysql-storage PVC:

oc patch pvc mysql-storage --type=json -p='[{"op": "remove", "path": "/metadata/ownerReferences"}]'

Remove the 3scale-related labels from metadata.labels and from spec.template.metadata.labels:

oc patch deployment system-mysql --type=json -p='[ {"op": "remove", "path": "/metadata/labels/app"}, {"op": "remove", "path": "/metadata/labels/threescale_component"}, {"op": "remove", "path": "/metadata/labels/threescale_component_element"}, {"op": "remove", "path": "/spec/template/metadata/labels/app"}, {"op": "remove", "path": "/spec/template/metadata/labels/rht.comp_ver"}, {"op": "remove", "path": "/spec/template/metadata/labels/rht.prod_name"}, {"op": "remove", "path": "/spec/template/metadata/labels/threescale_component_element"}, {"op": "remove", "path": "/spec/template/metadata/labels/threescale_component"}, {"op": "remove", "path": "/spec/template/metadata/labels/rht.prod_ver"}, {"op": "remove", "path": "/spec/template/metadata/labels/com.company"}, {"op": "remove", "path": "/spec/template/metadata/labels/rht.subcomp_t"}, {"op": "remove", "path": "/spec/template/metadata/labels/rht.subcomp"}, {"op": "remove", "path": "/spec/template/metadata/labels/rht.comp"}]'

Remove the 3scale-related metadata.labels from the mysql-storage PVC:

oc patch pvc mysql-storage --type=json -p='[ {"op": "remove", "path": "/metadata/labels/app"}, {"op": "remove", "path": "/metadata/labels/threescale_component"}, {"op": "remove", "path": "/metadata/labels/threescale_component_element"} ]'

Remove ownerReferences and 3scale labels from mysql-extra-conf:

oc patch configmap mysql-extra-conf --type=json -p='[{"op": "remove", "path": "/metadata/ownerReferences"}]'
oc label configmap mysql-extra-conf app- threescale_component- threescale_component_element-

Remove image triggers to ensure that the new version of the operator will not trigger an image change on the deployment.

oc set triggers deployment system-mysql --remove-all
oc set triggers deployment system-mysql --from-config
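To confirm that the detachment took effect, you can check that the ownerReferences are gone from both resources. This is a minimal sketch with a hypothetical helper:

```shell
# Hypothetical check: after detachment, system-mysql and its PVC should
# carry no ownerReferences, so deleting the APIManager CR cannot cascade
# to them.
verify_mysql_detached() {
  for res in deployment/system-mysql pvc/mysql-storage; do
    refs=$(oc get "$res" -o jsonpath='{.metadata.ownerReferences}')
    if [ -z "$refs" ]; then
      echo "$res: detached"
    else
      echo "$res: still owned"
    fi
  done
}
```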

3.3.5. Scale up the 3scale pods

Scale up the following deployments to the replica counts stored in environment variables earlier.

oc scale deployment system-memcache --replicas=$SYSTEM_MEMCACHE_REPLICA_COUNT
oc scale deployment zync-database --replicas=$ZYNC_DATABASE_REPLICA_COUNT
oc scale deployment apicast-production --replicas=$APICAST_PRODUCTION_REPLICA_COUNT
oc scale deployment apicast-staging --replicas=$APICAST_STAGING_REPLICA_COUNT
oc scale deployment backend-cron --replicas=$BACKEND_CRON_REPLICA_COUNT
oc scale deployment backend-listener --replicas=$BACKEND_LISTENER_REPLICA_COUNT
oc scale deployment backend-worker --replicas=$BACKEND_WORKER_REPLICA_COUNT
oc scale deployment system-app --replicas=$SYSTEM_APP_REPLICA_COUNT
oc scale deployment system-sidekiq --replicas=$SYSTEM_SIDEKIQ_REPLICA_COUNT
oc scale deployment system-searchd --replicas=$SYSTEM_SEARCHD_REPLICA_COUNT
oc scale deployment zync --replicas=$ZYNC_REPLICA_COUNT
oc scale deployment zync-que --replicas=$ZYNC_QUE_REPLICA_COUNT
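Rather than checking the deployments manually afterwards, the rollouts can be awaited. The following sketch (a hypothetical helper with an assumed five-minute timeout per deployment) blocks until each deployment finishes rolling out:

```shell
# Sketch: wait for each scaled-up deployment to finish rolling out,
# failing fast if any rollout does not complete within the timeout.
wait_for_rollouts() {
  for d in system-memcache zync-database apicast-production apicast-staging \
           backend-cron backend-listener backend-worker system-app \
           system-sidekiq system-searchd zync zync-que; do
    oc rollout status "deployment/$d" --timeout=300s || return 1
  done
  echo "all rollouts complete"
}
```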

Scale up the deployment of the 3scale operator controller back to 1 replica.

oc scale deployment/threescale-operator-controller-manager-v2 -n $OPERATOR_NAMESPACE --replicas=1

Once the threescale-operator-controller-manager-v2 pod is up and running, the 3scale operator reconciles the resources to ensure that the state of the cluster matches the desired state defined in the APIManager custom resource. For example, if a replica count is specified for any of the components in the APIManager custom resource, the operator scales the corresponding deployment to match that number.

Verify that all the pods are up and running with the following command:

oc get deployments

All deployments should show matching numbers of ready and desired replicas in the READY column, for example, 1/1 or 2/2.

Verify that everything is working properly by logging in to the Admin Portal and the Developer Portal and checking that the APIs are working as expected.

3.4. Redis 6 upgrade - On-cluster

This section covers the steps necessary to upgrade the internal backend and system Redis databases from version 6 to version 7 while keeping the databases on the cluster but external to the 3scale instance. Ensure that all the prerequisite steps from this document are considered before starting.

A high-level overview of the migration:

  • Scale down 3scale instance but let the databases run
  • Take a copy of the databases data (dump file)
  • Create deployments for a new database
  • Create PVC for the new databases deployment
  • Copy the databases data to the newly created databases
  • Connect the new databases to the existing 3scale instance
  • Mark both databases as external in APIManager
  • Start 3scale instance
  • In the event of failure, scale down the 3scale instance, point it back to the old databases, remove the external database markers in APIManager, and start 3scale again

3.4.1. Preliminary steps

  1. Log in to the OpenShift cluster where your 3scale On-premises instance is installed:

    oc login <url> <authentication-parameters>

    Replace <url> and <authentication-parameters> with your OpenShift server URL and authentication parameters. The authentication parameters can be either -u <username> or --token=<token>.

  2. Export the following environment variables:

    export THREESCALE_NAMESPACE=<3scale-namespace>
    export OPERATOR_NAMESPACE=<3scale-operator-namespace>

    Replace <3scale-namespace> and <3scale-operator-namespace> with the names of the namespaces where 3scale and the 3scale operator are installed, respectively.

  3. Switch to the 3scale installation namespace. The commands below assume that the current namespace is the one where 3scale is installed, unless another namespace is specified explicitly with the -n option.

    oc project $THREESCALE_NAMESPACE
  4. Export the values of replica counts for each deployment before scaling them down to ensure that when the 3scale instance is scaled back up, the deployments are restored to their original replica counts.

    SYSTEM_MEMCACHE_REPLICA_COUNT=$(oc get deployment system-memcache -o=jsonpath='{.spec.replicas}')
    ZYNC_DATABASE_REPLICA_COUNT=$(oc get deployment zync-database -o=jsonpath='{.spec.replicas}')
    APICAST_PRODUCTION_REPLICA_COUNT=$(oc get deployment apicast-production -o=jsonpath='{.spec.replicas}')
    APICAST_STAGING_REPLICA_COUNT=$(oc get deployment apicast-staging -o=jsonpath='{.spec.replicas}')
    BACKEND_CRON_REPLICA_COUNT=$(oc get deployment backend-cron -o=jsonpath='{.spec.replicas}')
    BACKEND_LISTENER_REPLICA_COUNT=$(oc get deployment backend-listener -o=jsonpath='{.spec.replicas}')
    BACKEND_WORKER_REPLICA_COUNT=$(oc get deployment backend-worker -o=jsonpath='{.spec.replicas}')
    SYSTEM_APP_REPLICA_COUNT=$(oc get deployment system-app -o=jsonpath='{.spec.replicas}')
    SYSTEM_SIDEKIQ_REPLICA_COUNT=$(oc get deployment system-sidekiq -o=jsonpath='{.spec.replicas}')
    SYSTEM_SEARCHD_REPLICA_COUNT=$(oc get deployment system-searchd -o=jsonpath='{.spec.replicas}')
    ZYNC_REPLICA_COUNT=$(oc get deployment zync -o=jsonpath='{.spec.replicas}')
    ZYNC_QUE_REPLICA_COUNT=$(oc get deployment zync-que -o=jsonpath='{.spec.replicas}')

Scale down the deployment of the 3scale operator controller to prevent it from interfering with the scaling down of other pods.

oc scale deployment threescale-operator-controller-manager-v2 -n $OPERATOR_NAMESPACE --replicas=0

Scale down all the pods of the 3scale deployment, except the Redis instances:

oc scale deployment/{system-memcache,zync-database,apicast-production,apicast-staging,backend-cron,backend-listener,backend-worker,system-app,system-sidekiq,system-searchd,zync,zync-que} --replicas=0

Verify that all pods have been scaled down with the following command:

oc get deployment/{system-memcache,zync-database,apicast-production,apicast-staging,backend-cron,backend-listener,backend-worker,system-app,system-sidekiq,system-searchd,zync,zync-que}

The column READY should show 0/0 for all the deployments listed above.

3.4.3. Back up Redis data

Note

Give Redis a few minutes to process all the keys before taking the dumps, because it can take a few seconds for a Redis SAVE to trigger and write to the persistent volume.
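Instead of waiting an arbitrary amount of time, a blocking SAVE can be issued and confirmed before copying the dump file. This is a sketch with a hypothetical helper; the pod name lookup mirrors the oc cp commands in this section:

```shell
# Sketch: issue a synchronous SAVE (blocks until the RDB file is written)
# and report the resulting snapshot timestamp, so the dump on disk is
# known to be fresh before it is copied out of the pod.
force_redis_save() {
  pod=$1
  oc rsh "$pod" bash -c 'redis-cli SAVE' >/dev/null
  ts=$(oc rsh "$pod" bash -c 'redis-cli LASTSAVE' | tr -d '[:space:]')
  echo "snapshot written at unix time $ts"
}
```

For example: `force_redis_save "$(oc get pods -l 'deployment=backend-redis' -o jsonpath='{.items[0].metadata.name}')"`.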

Back up data of the backend-redis deployment:

oc cp $(oc get pods -l 'deployment=backend-redis' -o json | jq '.items[0].metadata.name' -r):/var/lib/redis/data/dump.rdb ./backend-redis-dump.rdb

Back up data of the system-redis deployment:

oc cp $(oc get pods -l 'deployment=system-redis' -o json | jq '.items[0].metadata.name' -r):/var/lib/redis/data/dump.rdb ./system-redis-dump.rdb

During the copy command execution, you might see the following message:

tar: Removing leading `/' from member names

This is expected and does not mean that the operation has failed.

Scale down the Redis deployments:

oc scale deployment/system-redis --replicas=0
oc scale deployment/backend-redis --replicas=0

Create or switch to the namespace where the Redis databases will be deployed.

Export the name of the namespace where the new Redis databases will be installed into a variable. You can use an existing namespace, or create a new one. It is also possible to deploy the new databases in the same namespace where 3scale is installed, but this is not recommended.

REDIS_NAMESPACE=<database-target-namespace>
oc project $REDIS_NAMESPACE

or

oc new-project $REDIS_NAMESPACE
3.4.5.1. Backend Redis resources

Create a YAML file backend-redis-external.yaml locally with the specification of the new Redis Deployment for the Backend component:

kind: Deployment
apiVersion: apps/v1
metadata:
  name: backend-redis-external
  labels:
    app: backend-redis-external
spec:
  replicas: 1
  selector:
    matchLabels:
      deployment: backend-redis-external
  template:
    metadata:
      labels:
        deployment: backend-redis-external
    spec:
      restartPolicy: Always
      serviceAccountName: backend-redis-external
      schedulerName: default-scheduler
      terminationGracePeriodSeconds: 30
      securityContext: {}
      containers:
        - resources:
            limits:
              cpu: '2'
              memory: 32Gi
            requests:
              cpu: '1'
              memory: 1Gi
          readinessProbe:
            exec:
              command:
                - container-entrypoint
                - bash
                - '-c'
                - redis-cli set liveness-probe "`date`" | grep OK
            initialDelaySeconds: 10
            timeoutSeconds: 1
            periodSeconds: 30
            successThreshold: 1
            failureThreshold: 3
          terminationMessagePath: /dev/termination-log
          name: backend-redis-external
          livenessProbe:
            tcpSocket:
              port: 6379
            initialDelaySeconds: 10
            timeoutSeconds: 1
            periodSeconds: 10
            successThreshold: 1
            failureThreshold: 3
          env:
            - name: REDIS_CONF
              value: /etc/redis.d/redis.conf
          imagePullPolicy: IfNotPresent
          volumeMounts:
            - name: backend-redis-storage-external
              mountPath: /var/lib/redis/data
            - name: redis-config-external
              mountPath: /etc/redis.d/
          terminationMessagePolicy: File
          image: 'registry.redhat.io/rhel9/redis-7'
      serviceAccount: backend-redis-external
      volumes:
        - name: backend-redis-storage-external
          persistentVolumeClaim:
            claimName: backend-redis-storage-external
        - name: redis-config-external
          configMap:
            name: redis-config-external
            items:
              - key: redis.conf
                path: redis.conf
            defaultMode: 420
      dnsPolicy: ClusterFirst
  strategy:
    type: Recreate
  revisionHistoryLimit: 10
  progressDeadlineSeconds: 600

Consider updating the following:

  • Labels, annotations, and environment variables, if you have or require custom values
  • Limits and requests, if you have or require custom values
  • Other values, according to your requirements

Create a service account for the Backend Redis instance and label it:

oc create serviceaccount backend-redis-external
oc label serviceaccount backend-redis-external app=backend-redis-external

Create a YAML file backend-redis-storage-external-pvc.yaml locally with the specification of the Persistent Volume Claim for the new Redis instance:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: backend-redis-storage-external
  labels:
    app: backend-redis-external
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  volumeMode: Filesystem

Adjust the storage request as required.

Create a YAML file backend-redis-service.yaml locally with the specification of the Service for the new Redis instance:

apiVersion: v1
kind: Service
metadata:
  name: backend-redis-service
  labels:
    app: backend-redis-external
spec:
  type: ClusterIP
  ports:
  - port: 6379
    targetPort: 6379
    protocol: TCP
  selector:
    deployment: backend-redis-external
3.4.5.2. System Redis resources

Create a YAML file system-redis-external.yaml locally with the specification of the new Redis Deployment for the System component:

kind: Deployment
apiVersion: apps/v1
metadata:
  name: system-redis-external
  labels:
    app: system-redis-external
spec:
  replicas: 1
  selector:
    matchLabels:
      deployment: system-redis-external
  template:
    metadata:
      labels:
        deployment: system-redis-external
    spec:
      restartPolicy: Always
      serviceAccountName: system-redis-external
      schedulerName: default-scheduler
      terminationGracePeriodSeconds: 30
      securityContext: {}
      containers:
        - resources:
            limits:
              cpu: '2'
              memory: 32Gi
            requests:
              cpu: '1'
              memory: 1Gi
          readinessProbe:
            exec:
              command:
                - container-entrypoint
                - bash
                - '-c'
                - redis-cli set liveness-probe "`date`" | grep OK
            initialDelaySeconds: 10
            timeoutSeconds: 1
            periodSeconds: 30
            successThreshold: 1
            failureThreshold: 3
          terminationMessagePath: /dev/termination-log
          name: system-redis-external
          livenessProbe:
            tcpSocket:
              port: 6379
            initialDelaySeconds: 10
            timeoutSeconds: 1
            periodSeconds: 10
            successThreshold: 1
            failureThreshold: 3
          env:
            - name: REDIS_CONF
              value: /etc/redis.d/redis.conf
          imagePullPolicy: IfNotPresent
          volumeMounts:
            - name: system-redis-storage-external
              mountPath: /var/lib/redis/data
            - name: redis-config-external
              mountPath: /etc/redis.d/
          terminationMessagePolicy: File
          image: 'registry.redhat.io/rhel9/redis-7'
      serviceAccount: system-redis-external
      volumes:
        - name: system-redis-storage-external
          persistentVolumeClaim:
            claimName: system-redis-storage-external
        - name: redis-config-external
          configMap:
            name: redis-config-external
            items:
              - key: redis.conf
                path: redis.conf
            defaultMode: 420
      dnsPolicy: ClusterFirst
  strategy:
    type: Recreate
  revisionHistoryLimit: 10
  progressDeadlineSeconds: 600

Consider updating the following:

  • Labels, annotations, and environment variables, if you have or require custom values
  • Limits and requests, if you have or require custom values
  • Other values, according to your requirements

Create a service account for the System Redis instance and label it:

oc create serviceaccount system-redis-external
oc label serviceaccount system-redis-external app=system-redis-external

Create a YAML file system-redis-storage-external-pvc.yaml locally with the specification of the Persistent Volume Claim for the new Redis instance:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: system-redis-storage-external
  labels:
    app: system-redis-external
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  volumeMode: Filesystem

Adjust the storage request as required.

Create a YAML file system-redis-service.yaml locally with the specification of the Service for the new Redis instance:

apiVersion: v1
kind: Service
metadata:
  name: system-redis-service
  labels:
    app: system-redis-external
spec:
  type: ClusterIP
  ports:
  - port: 6379
    targetPort: 6379
    protocol: TCP
  selector:
    deployment: system-redis-external

Create a YAML file redis-config-external.yaml locally with the specification of the ConfigMap for the new Redis instances configuration:

kind: ConfigMap
apiVersion: v1
metadata:
  name: redis-config-external
data:
  redis.conf: |
    protected-mode no

    port 6379

    timeout 0
    tcp-keepalive 300

    daemonize no
    supervised no

    loglevel notice

    databases 16

    #save 900 1
    #save 300 10
    #save 60 10000
    save ""

    stop-writes-on-bgsave-error yes

    rdbcompression yes
    rdbchecksum yes

    dbfilename dump.rdb

    slave-serve-stale-data yes
    slave-read-only yes

    repl-diskless-sync no
    repl-disable-tcp-nodelay no

    appendonly no
    appendfilename "appendonly.aof"
    appendfsync everysec
    no-appendfsync-on-rewrite no
    auto-aof-rewrite-percentage 100
    auto-aof-rewrite-min-size 64mb
    aof-load-truncated yes

    lua-time-limit 5000

    activerehashing no

    aof-rewrite-incremental-fsync yes
    dir /var/lib/redis/data

    rename-command REPLICAOF ""
    rename-command SLAVEOF ""
Note

The Redis configuration above closely matches the configuration provided by the 3scale operator, with a slight difference: it is prepared to restore data from the dump file.

At this point, you should have the following files in the local working directory:

  • backend-redis-dump.rdb
  • backend-redis-external.yaml
  • backend-redis-secret.yaml
  • backend-redis-service.yaml
  • backend-redis-storage-external-pvc.yaml
  • redis-config-external.yaml
  • system-redis-dump.rdb
  • system-redis-external.yaml
  • system-redis-secret.yaml
  • system-redis-service.yaml
  • system-redis-storage-external-pvc.yaml

Ensure your current namespace is $REDIS_NAMESPACE:

oc project $REDIS_NAMESPACE

Create the resources from the YAML files created in the previous steps using oc apply:

Config map:

oc apply -f redis-config-external.yaml

Backend:

oc apply -f backend-redis-external.yaml
oc apply -f backend-redis-storage-external-pvc.yaml
oc apply -f backend-redis-service.yaml

System:

oc apply -f system-redis-external.yaml
oc apply -f system-redis-storage-external-pvc.yaml
oc apply -f system-redis-service.yaml

Verify that all the resources have been created properly with the following command:

oc get deployment,pvc,svc,serviceaccount -l app=backend-redis-external
oc get deployment,pvc,svc,serviceaccount -l app=system-redis-external

Specifically, check that the READY column for the backend-redis-external and system-redis-external deployments shows 1/1.

Restore the data in the new Redis instances using the previously created dump files:

oc cp ./backend-redis-dump.rdb $(oc get pods -l 'deployment=backend-redis-external' -o json | jq '.items[0].metadata.name' -r):/var/lib/redis/data/dump.rdb
oc cp ./system-redis-dump.rdb $(oc get pods -l 'deployment=system-redis-external' -o json | jq '.items[0].metadata.name' -r):/var/lib/redis/data/dump.rdb

Restart the Redis deployments:

oc rollout restart deployment/backend-redis-external
oc rollout restart deployment/system-redis-external

Once the Redis instances are in a ready state, create an append-only file:

oc rsh $(oc get pods -l 'deployment=backend-redis-external' -o json | jq '.items[0].metadata.name' -r) bash -c 'redis-cli BGREWRITEAOF'
oc rsh $(oc get pods -l 'deployment=system-redis-external' -o json | jq '.items[0].metadata.name' -r) bash -c 'redis-cli BGREWRITEAOF'

After a few minutes, confirm that the AOF rewrite is complete:

oc rsh $(oc get pods -l 'deployment=backend-redis-external' -o json | jq '.items[0].metadata.name' -r) bash -c 'redis-cli info' | grep aof_rewrite_in_progress
oc rsh $(oc get pods -l 'deployment=system-redis-external' -o json | jq '.items[0].metadata.name' -r) bash -c 'redis-cli info' | grep aof_rewrite_in_progress

While aof_rewrite_in_progress = 1, the rewrite is in progress. Check periodically until aof_rewrite_in_progress = 0, which indicates that the rewrite is complete.
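The periodic check can be scripted as a polling loop. The following is a minimal sketch (the helper name and 10-second interval are illustrative):

```shell
# Sketch: poll a Redis pod until its AOF rewrite finishes.
wait_for_aof_rewrite() {
  pod=$1
  while :; do
    # redis-cli output uses CRLF line endings; strip the carriage return.
    flag=$(oc rsh "$pod" bash -c 'redis-cli info persistence' |
           awk -F: '/^aof_rewrite_in_progress/ {gsub(/\r/, ""); print $2}')
    [ "$flag" = "1" ] || break
    echo "rewrite in progress on $pod; retrying in 10s"
    sleep 10
  done
  echo "AOF rewrite complete on $pod"
}
```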

Edit the redis-config-external ConfigMap resource using oc edit configmap/redis-config-external and apply the following changes:

  • Uncomment the lines:

    save 900 1
    save 300 10
    save 60 10000
  • Remove the line save "" and set the appendonly value to yes:

    appendonly yes

Restart the Redis deployments:

oc rollout restart deployment/backend-redis-external
oc rollout restart deployment/system-redis-external

3.4.8. Update 3scale secrets

Switch back to the 3scale installation namespace:

oc project $THREESCALE_NAMESPACE

Before updating the 3scale secrets so that the 3scale instance starts using the new external databases, back them up:

oc get secret system-redis -o yaml > system-redis-secret.yaml
oc get secret backend-redis -o yaml > backend-redis-secret.yaml

Patch the backend-redis secret under the 3scale installation namespace to point to the new Redis instance for Backend:

BACKEND_REDIS_URL=redis://backend-redis-service.$REDIS_NAMESPACE.svc.cluster.local:6379
oc patch secret backend-redis -p "{\"stringData\":{\"REDIS_STORAGE_URL\":\"$BACKEND_REDIS_URL/0\"}}"
oc patch secret backend-redis -p "{\"stringData\":{\"REDIS_QUEUES_URL\":\"$BACKEND_REDIS_URL/1\"}}"

Patch the system-redis secret under the 3scale installation namespace to point to the new Redis instance for System:

SYSTEM_REDIS_URL=redis://system-redis-service.$REDIS_NAMESPACE.svc.cluster.local:6379/1
oc patch secret system-redis -p "{\"stringData\":{\"URL\":\"$SYSTEM_REDIS_URL\"}}"
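After applying the patches, the decoded secret values can be printed to confirm they point at the new Services. This is a minimal sketch with a hypothetical helper:

```shell
# Sketch: decode and print the patched Redis connection URLs from the
# backend-redis and system-redis secrets.
show_redis_urls() {
  for key in REDIS_STORAGE_URL REDIS_QUEUES_URL; do
    printf '%s=' "$key"
    oc get secret backend-redis -o jsonpath="{.data.$key}" | base64 -d
    echo
  done
  printf 'URL='
  oc get secret system-redis -o jsonpath='{.data.URL}' | base64 -d
  echo
}
```

Each printed URL should use the redis://<service>.<namespace>.svc.cluster.local:6379 form set above.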

Update the APIManager custom resource to indicate that the Redis databases are external. Run the following commands:

APIMANAGER_NAME=$(oc get apimanager -o jsonpath='{.items[0].metadata.name}')
oc patch apimanager $APIMANAGER_NAME --type=merge -p '{"spec": {"externalComponents": {"system": {"redis": true}}}}'
oc patch apimanager $APIMANAGER_NAME --type=merge -p '{"spec": {"externalComponents": {"backend": {"redis": true}}}}'

3.4.10. Scale up the 3scale pods

Scale up the following deployments to the replica counts stored in environment variables earlier.

oc scale deployment system-memcache --replicas=$SYSTEM_MEMCACHE_REPLICA_COUNT
oc scale deployment zync-database --replicas=$ZYNC_DATABASE_REPLICA_COUNT
oc scale deployment apicast-production --replicas=$APICAST_PRODUCTION_REPLICA_COUNT
oc scale deployment apicast-staging --replicas=$APICAST_STAGING_REPLICA_COUNT
oc scale deployment backend-cron --replicas=$BACKEND_CRON_REPLICA_COUNT
oc scale deployment backend-listener --replicas=$BACKEND_LISTENER_REPLICA_COUNT
oc scale deployment backend-worker --replicas=$BACKEND_WORKER_REPLICA_COUNT
oc scale deployment system-app --replicas=$SYSTEM_APP_REPLICA_COUNT
oc scale deployment system-sidekiq --replicas=$SYSTEM_SIDEKIQ_REPLICA_COUNT
oc scale deployment system-searchd --replicas=$SYSTEM_SEARCHD_REPLICA_COUNT
oc scale deployment zync --replicas=$ZYNC_REPLICA_COUNT
oc scale deployment zync-que --replicas=$ZYNC_QUE_REPLICA_COUNT

Scale up the deployment of the 3scale operator controller back to 1 replica.

oc scale deployment/threescale-operator-controller-manager-v2 -n $OPERATOR_NAMESPACE --replicas=1

Once the threescale-operator-controller-manager-v2 pod is up and running, the 3scale operator reconciles the resources to ensure that the state of the cluster matches the desired state defined in the APIManager custom resource. For example, if a replica count is specified for any of the components in the APIManager custom resource, the operator scales the corresponding deployment to match that number.

Verify that all the pods are up and running with the following command:

oc get deployments

All deployments should show matching ready and desired replica counts in the READY column, for example, 1/1 or 2/2, with the exception of system-redis and backend-redis.

Verify that everything is working properly by logging in to the Admin Portal and the Developer Portal and checking that the APIs work as expected.

3.4.11. Confirm Redis data

Confirm that the 3scale instance has fully recovered and that the data is correct. For example, verify that the analytics data for API usage appears correctly in the Admin Portal.

Once you have confirmed that the databases are fully functional and the data is correct, remove the previous Redis Deployments or DeploymentConfigs and the PersistentVolumeClaims associated with the initial 3scale Redis databases:

Warning

Complete your post-migration verification steps before proceeding. The steps below permanently remove the Redis instances managed by the operator, and restoration will not be possible.

Delete Deployment resources:

oc delete deployment system-redis -n $THREESCALE_NAMESPACE
oc delete deployment backend-redis -n $THREESCALE_NAMESPACE

Delete PVC:

oc delete pvc system-redis-storage -n $THREESCALE_NAMESPACE
oc delete pvc backend-redis-storage -n $THREESCALE_NAMESPACE

Delete Redis configuration ConfigMap:

oc delete configmap redis-config -n $THREESCALE_NAMESPACE

The migration can fail only if the new image cannot be pulled for some reason or the data is corrupted. In the event of a failed migration, follow the steps below to restore Redis to its previous state.

Scale the entire instance down:

oc scale deployment threescale-operator-controller-manager-v2 -n $OPERATOR_NAMESPACE --replicas=0
oc scale deployment/{system-memcache,zync-database,apicast-production,apicast-staging,backend-cron,backend-listener,backend-worker,system-app,system-sidekiq,system-searchd,zync,zync-que} -n $THREESCALE_NAMESPACE --replicas=0
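
The command above relies on bash brace expansion to address several deployments in one oc call. A quick local preview of the expansion, without touching the cluster (a shortened list is used for illustration):

```shell
# bash expands deployment/{a,b} into separate resource names before oc sees them
bash -c 'echo deployment/{zync,zync-que}'
```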

Set Redis back to being managed by the operator:

APIMANAGER_NAME=$(oc get apimanager -n $THREESCALE_NAMESPACE -o jsonpath='{.items[0].metadata.name}')
oc patch APIManager $APIMANAGER_NAME -n $THREESCALE_NAMESPACE --type=merge -p '{"spec": {"externalComponents": {"system": {"redis": false}}}}'
oc patch APIManager $APIMANAGER_NAME -n $THREESCALE_NAMESPACE --type=merge -p '{"spec": {"externalComponents": {"backend": {"redis": false}}}}'

At this point, the Redis deployments will again be managed by the 3scale operator, which will re-apply all of the removed labels and revert the image to the previous version of Redis.

Recreate the backend-redis and system-redis secrets from the local backup:

oc delete secret backend-redis -n $THREESCALE_NAMESPACE
oc apply -f backend-redis-secret.yaml

oc delete secret system-redis -n $THREESCALE_NAMESPACE
oc apply -f system-redis-secret.yaml

Scale up the deployment of the 3scale operator controller back to 1 replica.

oc scale deployment/threescale-operator-controller-manager-v2 -n $OPERATOR_NAMESPACE --replicas=1

The operator will scale all the 3scale pods back up.

Remove the Redis external resources created during the migration attempt:

oc project $REDIS_NAMESPACE
oc delete deployment backend-redis-external
oc delete deployment system-redis-external
oc delete service backend-redis-service
oc delete service system-redis-service
oc delete pvc system-redis-storage-external
oc delete pvc backend-redis-storage-external
oc delete configmap redis-config-external
oc delete sa system-redis-external
oc delete sa backend-redis-external

Chapter 4. Migration of On-Cluster MySQL to AWS

4.1. Prerequisites

You must have AWS access to create the MySQL RDS instance and the networking components (VPC, Subnets, Security Groups, etc.) that are necessary to access the MySQL DB.

4.2. Overview

This section covers the steps necessary to migrate the on-cluster MySQL to AWS. A high-level overview of the migration is as follows:

  • Follow the AWS guide to create a MySQL RDS instance
  • Scale down the 3scale operator and the 3scale instance
  • Create the MySQL dump file
  • Seed the new database with the MySQL dump file
  • Back up and update the system-database secret
  • Scale up the 3scale operator and the 3scale instance
  • Verify that 3scale is healthy

    • If it’s not healthy, restore the secrets and retry the migration
    • If it is healthy, clean up the old system-mysql component

Export the following environment variables:

export MYSQL_ON_CLUSTER_NAMESPACE=<namespace where the MySQL pod is running>

export OPERATOR_NAMESPACE=<namespace where the 3scale operator is running>

export THREESCALE_NAMESPACE=<namespace where the 3scale instance is running>

Additionally, assign the following variables for the replica counts. This ensures that the 3scale instance is later scaled back to the replica counts it had before being scaled down:

For pre-2.15 operator versions:

APICAST_PRODUCTION_REPLICA_COUNT=$(oc get dc apicast-production -n $THREESCALE_NAMESPACE -o=jsonpath='{.spec.replicas}')

APICAST_STAGING_REPLICA_COUNT=$(oc get dc apicast-staging -n $THREESCALE_NAMESPACE -o=jsonpath='{.spec.replicas}')

BACKEND_CRON_REPLICA_COUNT=$(oc get dc backend-cron -n $THREESCALE_NAMESPACE -o=jsonpath='{.spec.replicas}')

BACKEND_LISTENER_REPLICA_COUNT=$(oc get dc backend-listener -n $THREESCALE_NAMESPACE -o=jsonpath='{.spec.replicas}')

BACKEND_REDIS_REPLICA_COUNT=$(oc get dc backend-redis -n $THREESCALE_NAMESPACE -o=jsonpath='{.spec.replicas}')

BACKEND_WORKER_REPLICA_COUNT=$(oc get dc backend-worker -n $THREESCALE_NAMESPACE -o=jsonpath='{.spec.replicas}')

SYSTEM_APP_REPLICA_COUNT=$(oc get dc system-app -n $THREESCALE_NAMESPACE -o=jsonpath='{.spec.replicas}')

SYSTEM_MEMCACHE_REPLICA_COUNT=$(oc get dc system-memcache -n $THREESCALE_NAMESPACE -o=jsonpath='{.spec.replicas}')

SYSTEM_REDIS_REPLICA_COUNT=$(oc get dc system-redis -n $THREESCALE_NAMESPACE -o=jsonpath='{.spec.replicas}')

SYSTEM_SEARCHD_REPLICA_COUNT=$(oc get dc system-searchd -n $THREESCALE_NAMESPACE -o=jsonpath='{.spec.replicas}')

SYSTEM_SIDEKIQ_REPLICA_COUNT=$(oc get dc system-sidekiq -n $THREESCALE_NAMESPACE -o=jsonpath='{.spec.replicas}')

ZYNC_REPLICA_COUNT=$(oc get dc zync -n $THREESCALE_NAMESPACE -o=jsonpath='{.spec.replicas}')

ZYNC_DATABASE_REPLICA_COUNT=$(oc get dc zync-database -n $THREESCALE_NAMESPACE -o=jsonpath='{.spec.replicas}')

ZYNC_QUE_REPLICA_COUNT=$(oc get dc zync-que -n $THREESCALE_NAMESPACE -o=jsonpath='{.spec.replicas}')

For 2.15+ operator versions:

APICAST_PRODUCTION_REPLICA_COUNT=$(oc get deployment -n $THREESCALE_NAMESPACE apicast-production -o=jsonpath='{.spec.replicas}')

APICAST_STAGING_REPLICA_COUNT=$(oc get deployment apicast-staging -n $THREESCALE_NAMESPACE -o=jsonpath='{.spec.replicas}')

BACKEND_CRON_REPLICA_COUNT=$(oc get deployment backend-cron -n $THREESCALE_NAMESPACE -o=jsonpath='{.spec.replicas}')

BACKEND_LISTENER_REPLICA_COUNT=$(oc get deployment backend-listener -n $THREESCALE_NAMESPACE -o=jsonpath='{.spec.replicas}')

BACKEND_REDIS_REPLICA_COUNT=$(oc get deployment backend-redis -n $THREESCALE_NAMESPACE -o=jsonpath='{.spec.replicas}')

BACKEND_WORKER_REPLICA_COUNT=$(oc get deployment backend-worker -n $THREESCALE_NAMESPACE -o=jsonpath='{.spec.replicas}')

SYSTEM_APP_REPLICA_COUNT=$(oc get deployment system-app -n $THREESCALE_NAMESPACE -o=jsonpath='{.spec.replicas}')

SYSTEM_MEMCACHE_REPLICA_COUNT=$(oc get deployment system-memcache -n $THREESCALE_NAMESPACE -o=jsonpath='{.spec.replicas}')

SYSTEM_REDIS_REPLICA_COUNT=$(oc get deployment system-redis -n $THREESCALE_NAMESPACE -o=jsonpath='{.spec.replicas}')

SYSTEM_SEARCHD_REPLICA_COUNT=$(oc get deployment system-searchd -n $THREESCALE_NAMESPACE -o=jsonpath='{.spec.replicas}')

SYSTEM_SIDEKIQ_REPLICA_COUNT=$(oc get deployment system-sidekiq -n $THREESCALE_NAMESPACE -o=jsonpath='{.spec.replicas}')

ZYNC_REPLICA_COUNT=$(oc get deployment zync -n $THREESCALE_NAMESPACE -o=jsonpath='{.spec.replicas}')

ZYNC_DATABASE_REPLICA_COUNT=$(oc get deployment zync-database -n $THREESCALE_NAMESPACE -o=jsonpath='{.spec.replicas}')

ZYNC_QUE_REPLICA_COUNT=$(oc get deployment zync-que -n $THREESCALE_NAMESPACE -o=jsonpath='{.spec.replicas}')

Optional - If you want to write the replica counts to a file as a backup, follow these steps:

touch replica_counts.txt

Then run the following command for each Deployment or DeploymentConfig listed above; apicast-production is used as an example:

echo "APICAST_PRODUCTION_REPLICA_COUNT=$(oc get deployment -n $THREESCALE_NAMESPACE apicast-production -o=jsonpath='{.spec.replicas}')" >> replica_counts.txt
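
The backup file can later be replayed to restore the variables in a new shell session. A local sketch with a sample value (no cluster required):

```shell
# Write a sample replica count, then re-load it with the POSIX "." (source) command
echo 'APICAST_PRODUCTION_REPLICA_COUNT=3' > replica_counts.txt
. ./replica_counts.txt
echo "$APICAST_PRODUCTION_REPLICA_COUNT"
```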

Follow the Create a MySQL DB instance guide (https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/CHAP_GettingStarted.CreatingConnecting.MySQL.html#CHAP_GettingStarted.Creating.MySQL) or the Create and Connect to a MySQL Database with Amazon RDS guide (https://aws.amazon.com/getting-started/hands-on/create-mysql-db/) to create a MySQL RDS instance.

Many configuration options must be considered when creating a MySQL RDS instance, for example, storage autoscaling and automated backups. Review your options when creating the RDS instance.

Scale down all resources except for the system-mysql instance

For pre-2.15 operator versions:

oc scale deployment threescale-operator-controller-manager-v2 -n $OPERATOR_NAMESPACE --replicas=0

oc scale dc/{apicast-production,apicast-staging,backend-cron,backend-listener,backend-redis,backend-worker,system-app,system-memcache,system-redis,system-searchd,system-sidekiq,zync,zync-database,zync-que} -n $THREESCALE_NAMESPACE --replicas=0

For 2.15+ operator versions:

oc scale deployment threescale-operator-controller-manager-v2 -n $OPERATOR_NAMESPACE --replicas=0

oc scale deployment/{apicast-production,apicast-staging,backend-cron,backend-listener,backend-redis,backend-worker,system-app,system-memcache,system-redis,system-searchd,system-sidekiq,zync,zync-database,zync-que} -n $THREESCALE_NAMESPACE --replicas=0

4.2.3. Create the MySQL Dump File

# Connect to the system-mysql Pod
export MYSQL_POD_NAME=$(oc get pods -n $THREESCALE_NAMESPACE | grep mysql | awk '{print $1}')
oc rsh -t $MYSQL_POD_NAME

# Create the MySQL dump file
mysqldump -h system-mysql -u root -p$MYSQL_ROOT_PASSWORD system > system_mysql_backup.sql

# Remove DEFINER commands from the backup file - this is needed because RDS doesn't provide superuser privileges
sed 's/\sDEFINER=`[^`]*`@`[^`]*`//g' system_mysql_backup.sql > cleaned_system_mysql_backup.sql
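
To see what the sed expression does, here is the same substitution applied to a sample mysqldump line (illustrative input only; the \s character class assumes GNU sed):

```shell
# A typical mysqldump line carrying a DEFINER clause
line='/*!50013 DEFINER=`admin`@`%` SQL SECURITY DEFINER */'
# The expression removes DEFINER=`user`@`host` tokens, which require
# superuser privileges that RDS does not grant
printf '%s\n' "$line" | sed 's/\sDEFINER=`[^`]*`@`[^`]*`//g'
```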
# These steps assume you are still connected to the system-mysql Pod from the previous section

# Create env vars with the connection details
export TARGET_MYSQL_HOSTNAME="<AWS RDS Hostname>"
export TARGET_MYSQL_USER="<AWS RDS Admin Username>"
export TARGET_MYSQL_PASSWORD="<AWS RDS Admin Password>"

# Create a fresh database on the target MySQL
mysql -h $TARGET_MYSQL_HOSTNAME -u $TARGET_MYSQL_USER -p$TARGET_MYSQL_PASSWORD -e "CREATE DATABASE IF NOT EXISTS threescale;"

# Source the dump file to the target database
mysql -h $TARGET_MYSQL_HOSTNAME -u $TARGET_MYSQL_USER -p$TARGET_MYSQL_PASSWORD threescale < cleaned_system_mysql_backup.sql

# Once the sourcing is complete, exit the system-mysql Pod
exit

Before updating the 3scale system-database Secret so that the 3scale instance starts using the new external MySQL instance, back up the Secret:

oc get secret system-database -n $THREESCALE_NAMESPACE -o yaml > system-database-secret.yaml

Change the value of the system-database Secret DB_PASSWORD on the cluster to:

<AWS RDS Admin Password>

Change the value of the system-database Secret DB_USER on the cluster to:

<AWS RDS Admin Username>

Change the value of the system-database Secret DATABASE_URL on the cluster to:

mysql2://<AWS RDS Admin Username>:<AWS RDS Admin Password>@<AWS RDS Hostname>/threescale
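
With hypothetical RDS values, the resulting DATABASE_URL has the following shape (the values shown are placeholders, not real credentials):

```shell
# Placeholder connection details, for illustration only
RDS_USER=admin
RDS_PASSWORD=changeme
RDS_HOST=threescale-db.abc123.us-east-1.rds.amazonaws.com
echo "mysql2://$RDS_USER:$RDS_PASSWORD@$RDS_HOST/threescale"
```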

4.2.6. Add External Components to APIManager

For the operator to stop reconciling the internal MySQL database, you must mark the MySQL database as an external component in the APIManager custom resource. Do this by setting spec.externalComponents.system.database to true:

oc patch APIManager <APIManager Name> -n $THREESCALE_NAMESPACE --type=merge -p '{"spec": {"externalComponents": {"system": {"database": true}}}}'
Note
This disconnects the operator from reconciling the component, which means the 3scale operator will no longer reconcile the database Deployment/DeploymentConfig and the associated PersistentVolumeClaim.

For pre-2.15 operator versions:

oc scale deployment threescale-operator-controller-manager-v2 -n $OPERATOR_NAMESPACE --replicas=1

oc scale dc apicast-production -n $THREESCALE_NAMESPACE --replicas=$APICAST_PRODUCTION_REPLICA_COUNT

oc scale dc apicast-staging -n $THREESCALE_NAMESPACE --replicas=$APICAST_STAGING_REPLICA_COUNT

oc scale dc backend-cron -n $THREESCALE_NAMESPACE --replicas=$BACKEND_CRON_REPLICA_COUNT

oc scale dc backend-listener -n $THREESCALE_NAMESPACE --replicas=$BACKEND_LISTENER_REPLICA_COUNT

oc scale dc backend-redis -n $THREESCALE_NAMESPACE --replicas=$BACKEND_REDIS_REPLICA_COUNT

oc scale dc backend-worker -n $THREESCALE_NAMESPACE --replicas=$BACKEND_WORKER_REPLICA_COUNT

oc scale dc system-app -n $THREESCALE_NAMESPACE --replicas=$SYSTEM_APP_REPLICA_COUNT

oc scale dc system-memcache -n $THREESCALE_NAMESPACE --replicas=$SYSTEM_MEMCACHE_REPLICA_COUNT

oc scale dc system-redis -n $THREESCALE_NAMESPACE --replicas=$SYSTEM_REDIS_REPLICA_COUNT

oc scale dc system-searchd -n $THREESCALE_NAMESPACE --replicas=$SYSTEM_SEARCHD_REPLICA_COUNT

oc scale dc system-sidekiq -n $THREESCALE_NAMESPACE --replicas=$SYSTEM_SIDEKIQ_REPLICA_COUNT

oc scale dc zync -n $THREESCALE_NAMESPACE --replicas=$ZYNC_REPLICA_COUNT

oc scale dc zync-database -n $THREESCALE_NAMESPACE --replicas=$ZYNC_DATABASE_REPLICA_COUNT

oc scale dc zync-que -n $THREESCALE_NAMESPACE --replicas=$ZYNC_QUE_REPLICA_COUNT

For 2.15+ operator versions:

oc scale deployment threescale-operator-controller-manager-v2 -n $OPERATOR_NAMESPACE --replicas=1

oc scale deployment apicast-production -n $THREESCALE_NAMESPACE --replicas=$APICAST_PRODUCTION_REPLICA_COUNT

oc scale deployment apicast-staging -n $THREESCALE_NAMESPACE --replicas=$APICAST_STAGING_REPLICA_COUNT

oc scale deployment backend-cron -n $THREESCALE_NAMESPACE --replicas=$BACKEND_CRON_REPLICA_COUNT

oc scale deployment backend-listener -n $THREESCALE_NAMESPACE --replicas=$BACKEND_LISTENER_REPLICA_COUNT

oc scale deployment backend-redis -n $THREESCALE_NAMESPACE --replicas=$BACKEND_REDIS_REPLICA_COUNT

oc scale deployment backend-worker -n $THREESCALE_NAMESPACE --replicas=$BACKEND_WORKER_REPLICA_COUNT

oc scale deployment system-app -n $THREESCALE_NAMESPACE --replicas=$SYSTEM_APP_REPLICA_COUNT

oc scale deployment system-memcache -n $THREESCALE_NAMESPACE --replicas=$SYSTEM_MEMCACHE_REPLICA_COUNT

oc scale deployment system-redis -n $THREESCALE_NAMESPACE --replicas=$SYSTEM_REDIS_REPLICA_COUNT

oc scale deployment system-searchd -n $THREESCALE_NAMESPACE --replicas=$SYSTEM_SEARCHD_REPLICA_COUNT

oc scale deployment system-sidekiq -n $THREESCALE_NAMESPACE --replicas=$SYSTEM_SIDEKIQ_REPLICA_COUNT

oc scale deployment zync -n $THREESCALE_NAMESPACE --replicas=$ZYNC_REPLICA_COUNT

oc scale deployment zync-database -n $THREESCALE_NAMESPACE --replicas=$ZYNC_DATABASE_REPLICA_COUNT

oc scale deployment zync-que -n $THREESCALE_NAMESPACE --replicas=$ZYNC_QUE_REPLICA_COUNT

Wait for the 3scale instance to fully recover and confirm that the data migration was successful.
Once the 3scale instance state is confirmed to be correct, the on-cluster MySQL instance can be deleted.

4.2.8. Restoration in Case of Failure

If the migration is unsuccessful and you need to revert to the previous configuration:

Scale down all resources except for the system-mysql instance

For pre-2.15 operator versions:

oc scale deployment threescale-operator-controller-manager-v2 -n $OPERATOR_NAMESPACE --replicas=0

oc scale dc/{apicast-production,apicast-staging,backend-cron,backend-listener,backend-redis,backend-worker,system-app,system-memcache,system-redis,system-searchd,system-sidekiq,zync,zync-database,zync-que} -n $THREESCALE_NAMESPACE --replicas=0

For 2.15+ operator versions:

oc scale deployment threescale-operator-controller-manager-v2 -n $OPERATOR_NAMESPACE --replicas=0

oc scale deployment/{apicast-production,apicast-staging,backend-cron,backend-listener,backend-redis,backend-worker,system-app,system-memcache,system-redis,system-searchd,system-sidekiq,zync,zync-database,zync-que} -n $THREESCALE_NAMESPACE --replicas=0

Remove the system-database Secret:

oc delete secret system-database -n $THREESCALE_NAMESPACE

Re-create the system-database Secret from the backup:

oc apply -f system-database-secret.yaml

For pre-2.15 operator versions:

oc scale deployment threescale-operator-controller-manager-v2 -n $OPERATOR_NAMESPACE --replicas=1

oc scale dc apicast-production -n $THREESCALE_NAMESPACE --replicas=$APICAST_PRODUCTION_REPLICA_COUNT

oc scale dc apicast-staging -n $THREESCALE_NAMESPACE --replicas=$APICAST_STAGING_REPLICA_COUNT

oc scale dc backend-cron -n $THREESCALE_NAMESPACE --replicas=$BACKEND_CRON_REPLICA_COUNT

oc scale dc backend-listener -n $THREESCALE_NAMESPACE --replicas=$BACKEND_LISTENER_REPLICA_COUNT

oc scale dc backend-redis -n $THREESCALE_NAMESPACE --replicas=$BACKEND_REDIS_REPLICA_COUNT

oc scale dc backend-worker -n $THREESCALE_NAMESPACE --replicas=$BACKEND_WORKER_REPLICA_COUNT

oc scale dc system-app -n $THREESCALE_NAMESPACE --replicas=$SYSTEM_APP_REPLICA_COUNT

oc scale dc system-memcache -n $THREESCALE_NAMESPACE --replicas=$SYSTEM_MEMCACHE_REPLICA_COUNT

oc scale dc system-redis -n $THREESCALE_NAMESPACE --replicas=$SYSTEM_REDIS_REPLICA_COUNT

oc scale dc system-searchd -n $THREESCALE_NAMESPACE --replicas=$SYSTEM_SEARCHD_REPLICA_COUNT

oc scale dc system-sidekiq -n $THREESCALE_NAMESPACE --replicas=$SYSTEM_SIDEKIQ_REPLICA_COUNT

oc scale dc zync -n $THREESCALE_NAMESPACE --replicas=$ZYNC_REPLICA_COUNT

oc scale dc zync-database -n $THREESCALE_NAMESPACE --replicas=$ZYNC_DATABASE_REPLICA_COUNT

oc scale dc zync-que -n $THREESCALE_NAMESPACE --replicas=$ZYNC_QUE_REPLICA_COUNT

For 2.15+ operator versions:

oc scale deployment threescale-operator-controller-manager-v2 -n $OPERATOR_NAMESPACE --replicas=1

oc scale deployment apicast-production -n $THREESCALE_NAMESPACE --replicas=$APICAST_PRODUCTION_REPLICA_COUNT

oc scale deployment apicast-staging -n $THREESCALE_NAMESPACE --replicas=$APICAST_STAGING_REPLICA_COUNT

oc scale deployment backend-cron -n $THREESCALE_NAMESPACE --replicas=$BACKEND_CRON_REPLICA_COUNT

oc scale deployment backend-listener -n $THREESCALE_NAMESPACE --replicas=$BACKEND_LISTENER_REPLICA_COUNT

oc scale deployment backend-redis -n $THREESCALE_NAMESPACE --replicas=$BACKEND_REDIS_REPLICA_COUNT

oc scale deployment backend-worker -n $THREESCALE_NAMESPACE --replicas=$BACKEND_WORKER_REPLICA_COUNT

oc scale deployment system-app -n $THREESCALE_NAMESPACE --replicas=$SYSTEM_APP_REPLICA_COUNT

oc scale deployment system-memcache -n $THREESCALE_NAMESPACE --replicas=$SYSTEM_MEMCACHE_REPLICA_COUNT

oc scale deployment system-redis -n $THREESCALE_NAMESPACE --replicas=$SYSTEM_REDIS_REPLICA_COUNT

oc scale deployment system-searchd -n $THREESCALE_NAMESPACE --replicas=$SYSTEM_SEARCHD_REPLICA_COUNT

oc scale deployment system-sidekiq -n $THREESCALE_NAMESPACE --replicas=$SYSTEM_SIDEKIQ_REPLICA_COUNT

oc scale deployment zync -n $THREESCALE_NAMESPACE --replicas=$ZYNC_REPLICA_COUNT

oc scale deployment zync-database -n $THREESCALE_NAMESPACE --replicas=$ZYNC_DATABASE_REPLICA_COUNT

oc scale deployment zync-que -n $THREESCALE_NAMESPACE --replicas=$ZYNC_QUE_REPLICA_COUNT

Wait for the 3scale instance to fully recover, then retry the migration from the start.

Chapter 5. Migration of On-Cluster PostgreSQL to AWS

5.1. Prerequisites

You must have AWS access to create the Postgres RDS instance and the networking components (VPC, Subnets, Security Groups, etc.) that are necessary to access the Postgres DB. Your on-cluster Postgres instance also needs to be running version 13.

5.2. Overview

This section covers the steps necessary to migrate the on-cluster Postgres to AWS. A high-level overview of the migration is as follows:

  • Follow the AWS guide to create a Postgres RDS instance
  • Scale down the 3scale operator and the 3scale instance
  • Create the Postgres dump file
  • Seed the new database with the Postgres dump file
  • Back up and update the system-database secret
  • Scale up the 3scale operator and the 3scale instance
  • Verify that 3scale is healthy

    • If it is not healthy, restore the secrets and retry the migration
    • If it is healthy, clean up the old system-postgresql component

Export the following environment variables:

export POSTGRES_ON_CLUSTER_NAMESPACE=<namespace where the Postgres pod is running>

export OPERATOR_NAMESPACE=<namespace where the 3scale operator is running>

export THREESCALE_NAMESPACE=<namespace where the 3scale instance is running>

Additionally, assign the following variables for the replica counts. This ensures that the 3scale instance is later scaled back to the replica counts it had before being scaled down:

For pre-2.15 operator versions:

APICAST_PRODUCTION_REPLICA_COUNT=$(oc get dc apicast-production -n $THREESCALE_NAMESPACE -o=jsonpath='{.spec.replicas}')

APICAST_STAGING_REPLICA_COUNT=$(oc get dc apicast-staging -n $THREESCALE_NAMESPACE -o=jsonpath='{.spec.replicas}')

BACKEND_CRON_REPLICA_COUNT=$(oc get dc backend-cron -n $THREESCALE_NAMESPACE -o=jsonpath='{.spec.replicas}')

BACKEND_LISTENER_REPLICA_COUNT=$(oc get dc backend-listener -n $THREESCALE_NAMESPACE -o=jsonpath='{.spec.replicas}')

BACKEND_REDIS_REPLICA_COUNT=$(oc get dc backend-redis -n $THREESCALE_NAMESPACE -o=jsonpath='{.spec.replicas}')

BACKEND_WORKER_REPLICA_COUNT=$(oc get dc backend-worker -n $THREESCALE_NAMESPACE -o=jsonpath='{.spec.replicas}')

SYSTEM_APP_REPLICA_COUNT=$(oc get dc system-app -n $THREESCALE_NAMESPACE -o=jsonpath='{.spec.replicas}')

SYSTEM_MEMCACHE_REPLICA_COUNT=$(oc get dc system-memcache -n $THREESCALE_NAMESPACE -o=jsonpath='{.spec.replicas}')

SYSTEM_REDIS_REPLICA_COUNT=$(oc get dc system-redis -n $THREESCALE_NAMESPACE -o=jsonpath='{.spec.replicas}')

SYSTEM_SEARCHD_REPLICA_COUNT=$(oc get dc system-searchd -n $THREESCALE_NAMESPACE -o=jsonpath='{.spec.replicas}')

SYSTEM_SIDEKIQ_REPLICA_COUNT=$(oc get dc system-sidekiq -n $THREESCALE_NAMESPACE -o=jsonpath='{.spec.replicas}')

ZYNC_REPLICA_COUNT=$(oc get dc zync -n $THREESCALE_NAMESPACE -o=jsonpath='{.spec.replicas}')

ZYNC_DATABASE_REPLICA_COUNT=$(oc get dc zync-database -n $THREESCALE_NAMESPACE -o=jsonpath='{.spec.replicas}')

ZYNC_QUE_REPLICA_COUNT=$(oc get dc zync-que -n $THREESCALE_NAMESPACE -o=jsonpath='{.spec.replicas}')

For 2.15+ operator versions:

APICAST_PRODUCTION_REPLICA_COUNT=$(oc get deployment -n $THREESCALE_NAMESPACE apicast-production -o=jsonpath='{.spec.replicas}')

APICAST_STAGING_REPLICA_COUNT=$(oc get deployment apicast-staging -n $THREESCALE_NAMESPACE -o=jsonpath='{.spec.replicas}')

BACKEND_CRON_REPLICA_COUNT=$(oc get deployment backend-cron -n $THREESCALE_NAMESPACE -o=jsonpath='{.spec.replicas}')

BACKEND_LISTENER_REPLICA_COUNT=$(oc get deployment backend-listener -n $THREESCALE_NAMESPACE -o=jsonpath='{.spec.replicas}')

BACKEND_REDIS_REPLICA_COUNT=$(oc get deployment backend-redis -n $THREESCALE_NAMESPACE -o=jsonpath='{.spec.replicas}')

BACKEND_WORKER_REPLICA_COUNT=$(oc get deployment backend-worker -n $THREESCALE_NAMESPACE -o=jsonpath='{.spec.replicas}')

SYSTEM_APP_REPLICA_COUNT=$(oc get deployment system-app -n $THREESCALE_NAMESPACE -o=jsonpath='{.spec.replicas}')

SYSTEM_MEMCACHE_REPLICA_COUNT=$(oc get deployment system-memcache -n $THREESCALE_NAMESPACE -o=jsonpath='{.spec.replicas}')

SYSTEM_REDIS_REPLICA_COUNT=$(oc get deployment system-redis -n $THREESCALE_NAMESPACE -o=jsonpath='{.spec.replicas}')

SYSTEM_SEARCHD_REPLICA_COUNT=$(oc get deployment system-searchd -n $THREESCALE_NAMESPACE -o=jsonpath='{.spec.replicas}')

SYSTEM_SIDEKIQ_REPLICA_COUNT=$(oc get deployment system-sidekiq -n $THREESCALE_NAMESPACE -o=jsonpath='{.spec.replicas}')

ZYNC_REPLICA_COUNT=$(oc get deployment zync -n $THREESCALE_NAMESPACE -o=jsonpath='{.spec.replicas}')

ZYNC_DATABASE_REPLICA_COUNT=$(oc get deployment zync-database -n $THREESCALE_NAMESPACE -o=jsonpath='{.spec.replicas}')

ZYNC_QUE_REPLICA_COUNT=$(oc get deployment zync-que -n $THREESCALE_NAMESPACE -o=jsonpath='{.spec.replicas}')

Optional - If you want to write the replica counts to a file as a backup, follow these steps:

touch replica_counts.txt

Then run the following command for each Deployment or DeploymentConfig listed above; apicast-production is used as an example:

echo "APICAST_PRODUCTION_REPLICA_COUNT=$(oc get deployment -n $THREESCALE_NAMESPACE apicast-production -o=jsonpath='{.spec.replicas}')" >> replica_counts.txt

Follow the Creating and connecting to a PostgreSQL DB instance guide (https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/CHAP_GettingStarted.CreatingConnecting.PostgreSQL.html) to create a Postgres RDS instance.

Many configuration options must be considered when creating a Postgres RDS instance, for example, storage autoscaling and automated backups. Review your options when creating the RDS instance.

Scale down all resources except for the system-postgresql instance

For pre-2.15 operator versions:

oc scale deployment threescale-operator-controller-manager-v2 -n $OPERATOR_NAMESPACE --replicas=0

oc scale dc/{apicast-production,apicast-staging,backend-cron,backend-listener,backend-redis,backend-worker,system-app,system-memcache,system-redis,system-searchd,system-sidekiq,zync,zync-database,zync-que} -n $THREESCALE_NAMESPACE --replicas=0

For 2.15+ operator versions:

oc scale deployment threescale-operator-controller-manager-v2 -n $OPERATOR_NAMESPACE --replicas=0

oc scale deployment/{apicast-production,apicast-staging,backend-cron,backend-listener,backend-redis,backend-worker,system-app,system-memcache,system-redis,system-searchd,system-sidekiq,zync,zync-database,zync-que} -n $THREESCALE_NAMESPACE --replicas=0

5.2.3. Create the Postgres Dump File

# Connect to the system-postgresql Pod
export POSTGRES_POD_NAME=$(oc get pods -n $THREESCALE_NAMESPACE | grep postgres | awk '{print $1}')
oc rsh -t $POSTGRES_POD_NAME

# Create the Postgres dump file
PGPASSWORD=$POSTGRESQL_PASSWORD pg_dump -U $POSTGRESQL_USER -c $POSTGRESQL_DATABASE > /tmp/system_postgres_backup.psql
# These steps assume you are still connected to the system-postgresql Pod from the previous section

# Create env vars with the connection details
export TARGET_POSTGRES_HOSTNAME="<AWS RDS Hostname>"
export TARGET_POSTGRES_USER="<AWS RDS Admin Username>"
export TARGET_POSTGRES_PASSWORD="<AWS RDS Admin Password>"

# Create a fresh database on the target Postgres (CREATE DATABASE errors if the database already exists; that error can be ignored)
PGPASSWORD=$TARGET_POSTGRES_PASSWORD psql -h $TARGET_POSTGRES_HOSTNAME -U $TARGET_POSTGRES_USER -c "CREATE DATABASE system;"

# Source the dump file to the target database
PGPASSWORD=$TARGET_POSTGRES_PASSWORD psql -h $TARGET_POSTGRES_HOSTNAME -U $TARGET_POSTGRES_USER -d system < /tmp/system_postgres_backup.psql

# Once the sourcing is complete, exit the system-postgresql Pod
exit

Before updating the 3scale system-database Secret so that the 3scale instance starts using the new external Postgres instance, back up the Secret:

oc get secret system-database -n $THREESCALE_NAMESPACE -o yaml > system-database-secret.yaml

Change the value of the system-database Secret DB_PASSWORD on the cluster to:

<AWS RDS Admin Password>

Change the value of the system-database Secret DB_USER on the cluster to:

<AWS RDS Admin Username>

Change the value of the system-database Secret DATABASE_URL on the cluster to:

postgresql://<AWS RDS Admin Username>:<AWS RDS Admin Password>@<AWS RDS Hostname>/system
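
With hypothetical RDS values, the resulting DATABASE_URL has the following shape (the values shown are placeholders, not real credentials):

```shell
# Placeholder connection details, for illustration only
RDS_USER=admin
RDS_PASSWORD=changeme
RDS_HOST=threescale-db.abc123.us-east-1.rds.amazonaws.com
echo "postgresql://$RDS_USER:$RDS_PASSWORD@$RDS_HOST/system"
```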

5.2.6. Add External Components to APIManager

For the operator to stop reconciling the internal Postgres database, you must mark the Postgres database as an external component in the APIManager custom resource. Do this by setting spec.externalComponents.system.database to true:

oc patch APIManager <APIManager Name> -n $THREESCALE_NAMESPACE --type=merge -p '{"spec": {"externalComponents": {"system": {"database": true}}}}'
Note
This disconnects the operator from reconciling the component, which means the 3scale operator will no longer reconcile the database Deployment/DeploymentConfig and the associated PersistentVolumeClaim.

For pre-2.15 operator versions:

oc scale deployment threescale-operator-controller-manager-v2 -n $OPERATOR_NAMESPACE --replicas=1

oc scale dc apicast-production -n $THREESCALE_NAMESPACE --replicas=$APICAST_PRODUCTION_REPLICA_COUNT

oc scale dc apicast-staging -n $THREESCALE_NAMESPACE --replicas=$APICAST_STAGING_REPLICA_COUNT

oc scale dc backend-cron -n $THREESCALE_NAMESPACE --replicas=$BACKEND_CRON_REPLICA_COUNT

oc scale dc backend-listener -n $THREESCALE_NAMESPACE --replicas=$BACKEND_LISTENER_REPLICA_COUNT

oc scale dc backend-redis -n $THREESCALE_NAMESPACE --replicas=$BACKEND_REDIS_REPLICA_COUNT

oc scale dc backend-worker -n $THREESCALE_NAMESPACE --replicas=$BACKEND_WORKER_REPLICA_COUNT

oc scale dc system-app -n $THREESCALE_NAMESPACE --replicas=$SYSTEM_APP_REPLICA_COUNT

oc scale dc system-memcache -n $THREESCALE_NAMESPACE --replicas=$SYSTEM_MEMCACHE_REPLICA_COUNT

oc scale dc system-redis -n $THREESCALE_NAMESPACE --replicas=$SYSTEM_REDIS_REPLICA_COUNT

oc scale dc system-searchd -n $THREESCALE_NAMESPACE --replicas=$SYSTEM_SEARCHD_REPLICA_COUNT

oc scale dc system-sidekiq -n $THREESCALE_NAMESPACE --replicas=$SYSTEM_SIDEKIQ_REPLICA_COUNT

oc scale dc zync -n $THREESCALE_NAMESPACE --replicas=$ZYNC_REPLICA_COUNT

oc scale dc zync-database -n $THREESCALE_NAMESPACE --replicas=$ZYNC_DATABASE_REPLICA_COUNT

oc scale dc zync-que -n $THREESCALE_NAMESPACE --replicas=$ZYNC_QUE_REPLICA_COUNT

For 2.15+ operator versions:

oc scale deployment threescale-operator-controller-manager-v2 -n $OPERATOR_NAMESPACE --replicas=1

oc scale deployment apicast-production -n $THREESCALE_NAMESPACE --replicas=$APICAST_PRODUCTION_REPLICA_COUNT

oc scale deployment apicast-staging -n $THREESCALE_NAMESPACE --replicas=$APICAST_STAGING_REPLICA_COUNT

oc scale deployment backend-cron -n $THREESCALE_NAMESPACE --replicas=$BACKEND_CRON_REPLICA_COUNT

oc scale deployment backend-listener -n $THREESCALE_NAMESPACE --replicas=$BACKEND_LISTENER_REPLICA_COUNT

oc scale deployment backend-redis -n $THREESCALE_NAMESPACE --replicas=$BACKEND_REDIS_REPLICA_COUNT

oc scale deployment backend-worker -n $THREESCALE_NAMESPACE --replicas=$BACKEND_WORKER_REPLICA_COUNT

oc scale deployment system-app -n $THREESCALE_NAMESPACE --replicas=$SYSTEM_APP_REPLICA_COUNT

oc scale deployment system-memcache -n $THREESCALE_NAMESPACE --replicas=$SYSTEM_MEMCACHE_REPLICA_COUNT

oc scale deployment system-redis -n $THREESCALE_NAMESPACE --replicas=$SYSTEM_REDIS_REPLICA_COUNT

oc scale deployment system-searchd -n $THREESCALE_NAMESPACE --replicas=$SYSTEM_SEARCHD_REPLICA_COUNT

oc scale deployment system-sidekiq -n $THREESCALE_NAMESPACE --replicas=$SYSTEM_SIDEKIQ_REPLICA_COUNT

oc scale deployment zync -n $THREESCALE_NAMESPACE --replicas=$ZYNC_REPLICA_COUNT

oc scale deployment zync-database -n $THREESCALE_NAMESPACE --replicas=$ZYNC_DATABASE_REPLICA_COUNT

oc scale deployment zync-que -n $THREESCALE_NAMESPACE --replicas=$ZYNC_QUE_REPLICA_COUNT
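The per-component commands above can also be driven from a single list. A hedged sketch (the component names and replica values are illustrative; in practice use the *_REPLICA_COUNT variables captured before scaling down, and note the commands are only echoed here as a dry run for review):

```shell
THREESCALE_NAMESPACE="3scale"   # example namespace

# Illustrative component -> replica-count map (extend to match your deployment).
declare -A REPLICAS=(
  [apicast-production]=1
  [backend-worker]=2
  [system-app]=1
)

for name in "${!REPLICAS[@]}"; do
  # Dry run: echo instead of executing, so the commands can be reviewed first.
  echo "oc scale deployment ${name} -n ${THREESCALE_NAMESPACE} --replicas=${REPLICAS[$name]}"
done
```

Remove the `echo` once the generated commands look correct.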

Wait for the 3scale instance to fully recover and confirm that the data migration was successful.
Once the 3scale instance state is confirmed to be correct, the on-cluster PostgreSQL instance can be deleted.

5.2.8. Restoration in Case of Failure

If the migration is unsuccessful and you need to revert to the previous configuration, scale down all resources.

For pre-2.15 operator versions:

oc scale deployment threescale-operator-controller-manager-v2 -n $OPERATOR_NAMESPACE --replicas=0

oc scale dc/{apicast-production,apicast-staging,backend-cron,backend-listener,backend-redis,backend-worker,system-app,system-memcache,system-redis,system-searchd,system-sidekiq,zync,zync-database,zync-que} -n $THREESCALE_NAMESPACE --replicas=0

For 2.15+ operator versions:

oc scale deployment threescale-operator-controller-manager-v2 -n $OPERATOR_NAMESPACE --replicas=0

oc scale deployment/{apicast-production,apicast-staging,backend-cron,backend-listener,backend-redis,backend-worker,system-app,system-memcache,system-redis,system-searchd,system-sidekiq,zync,zync-database,zync-que} -n $THREESCALE_NAMESPACE --replicas=0

Remove the system-database Secret:

oc delete secret system-database -n $THREESCALE_NAMESPACE

Re-create the system-database Secret from the backup:

oc apply -f system-database-secret.yaml
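A Secret exported with `oc get -o yaml` carries server-managed metadata (resourceVersion, uid, creationTimestamp) that is best stripped before re-applying. A crude sketch using grep (a YAML-aware tool such as yq would be cleaner; the file written here is a stand-in for a real backup):

```shell
# Stand-in for the real backup file, with server-managed fields included.
cat > system-database-secret.yaml <<'EOF'
apiVersion: v1
kind: Secret
metadata:
  name: system-database
  resourceVersion: "12345"
  uid: 0000-aaaa
  creationTimestamp: "2024-01-01T00:00:00Z"
EOF

# Drop the server-managed metadata lines before oc apply:
grep -vE '^[[:space:]]*(resourceVersion|uid|creationTimestamp):' \
  system-database-secret.yaml > system-database-secret.clean.yaml
```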

Scale the operator and 3scale instance back up.

For pre-2.15 operator versions:

oc scale deployment threescale-operator-controller-manager-v2 -n $OPERATOR_NAMESPACE --replicas=1

oc scale dc apicast-production -n $THREESCALE_NAMESPACE --replicas=$APICAST_PRODUCTION_REPLICA_COUNT

oc scale dc apicast-staging -n $THREESCALE_NAMESPACE --replicas=$APICAST_STAGING_REPLICA_COUNT

oc scale dc backend-cron -n $THREESCALE_NAMESPACE --replicas=$BACKEND_CRON_REPLICA_COUNT

oc scale dc backend-listener -n $THREESCALE_NAMESPACE --replicas=$BACKEND_LISTENER_REPLICA_COUNT

oc scale dc backend-redis -n $THREESCALE_NAMESPACE --replicas=$BACKEND_REDIS_REPLICA_COUNT

oc scale dc backend-worker -n $THREESCALE_NAMESPACE --replicas=$BACKEND_WORKER_REPLICA_COUNT

oc scale dc system-app -n $THREESCALE_NAMESPACE --replicas=$SYSTEM_APP_REPLICA_COUNT

oc scale dc system-memcache -n $THREESCALE_NAMESPACE --replicas=$SYSTEM_MEMCACHE_REPLICA_COUNT

oc scale dc system-redis -n $THREESCALE_NAMESPACE --replicas=$SYSTEM_REDIS_REPLICA_COUNT

oc scale dc system-searchd -n $THREESCALE_NAMESPACE --replicas=$SYSTEM_SEARCHD_REPLICA_COUNT

oc scale dc system-sidekiq -n $THREESCALE_NAMESPACE --replicas=$SYSTEM_SIDEKIQ_REPLICA_COUNT

oc scale dc zync -n $THREESCALE_NAMESPACE --replicas=$ZYNC_REPLICA_COUNT

oc scale dc zync-database -n $THREESCALE_NAMESPACE --replicas=$ZYNC_DATABASE_REPLICA_COUNT

oc scale dc zync-que -n $THREESCALE_NAMESPACE --replicas=$ZYNC_QUE_REPLICA_COUNT

For 2.15+ operator versions:

oc scale deployment threescale-operator-controller-manager-v2 -n $OPERATOR_NAMESPACE --replicas=1

oc scale deployment apicast-production -n $THREESCALE_NAMESPACE --replicas=$APICAST_PRODUCTION_REPLICA_COUNT

oc scale deployment apicast-staging -n $THREESCALE_NAMESPACE --replicas=$APICAST_STAGING_REPLICA_COUNT

oc scale deployment backend-cron -n $THREESCALE_NAMESPACE --replicas=$BACKEND_CRON_REPLICA_COUNT

oc scale deployment backend-listener -n $THREESCALE_NAMESPACE --replicas=$BACKEND_LISTENER_REPLICA_COUNT

oc scale deployment backend-redis -n $THREESCALE_NAMESPACE --replicas=$BACKEND_REDIS_REPLICA_COUNT

oc scale deployment backend-worker -n $THREESCALE_NAMESPACE --replicas=$BACKEND_WORKER_REPLICA_COUNT

oc scale deployment system-app -n $THREESCALE_NAMESPACE --replicas=$SYSTEM_APP_REPLICA_COUNT

oc scale deployment system-memcache -n $THREESCALE_NAMESPACE --replicas=$SYSTEM_MEMCACHE_REPLICA_COUNT

oc scale deployment system-redis -n $THREESCALE_NAMESPACE --replicas=$SYSTEM_REDIS_REPLICA_COUNT

oc scale deployment system-searchd -n $THREESCALE_NAMESPACE --replicas=$SYSTEM_SEARCHD_REPLICA_COUNT

oc scale deployment system-sidekiq -n $THREESCALE_NAMESPACE --replicas=$SYSTEM_SIDEKIQ_REPLICA_COUNT

oc scale deployment zync -n $THREESCALE_NAMESPACE --replicas=$ZYNC_REPLICA_COUNT

oc scale deployment zync-database -n $THREESCALE_NAMESPACE --replicas=$ZYNC_DATABASE_REPLICA_COUNT

oc scale deployment zync-que -n $THREESCALE_NAMESPACE --replicas=$ZYNC_QUE_REPLICA_COUNT

Wait for 3scale instances to fully recover and retry the migration from the start.

Chapter 6. Migration of Redis on-cluster to AWS

6.1. Prerequisites

  • Complete the Redis upgrade by referring to Externalizing databases for 2.16
  • AWS access and the relevant ElastiCache permissions
  • Consider your post-migration verification steps before proceeding. This may require you to take a snapshot of data using the API or Admin Portal and define a test plan to verify a successful migration
  • Consider performing the migration at the time of lowest traffic, as the migration will be service-affecting
  • Consider letting 3scale components process all the jobs in the background before fully scaling the instance

6.2. Overview and considerations

  • Consider the following:

    • Is the Redis instance going to be AWS-managed (Amazon ElastiCache for Redis OSS) or self-managed (EC2)?
    • Review permissions around access to AWS resources
    • Review cluster configuration to ensure AWS resources are reachable
    • Consider connectivity security, firewalls, etc.
    • Consider Redis configuration, including instance type, security groups, maintenance, backups, etc.
    • Consider creating two databases for the backend (for queues and storage) instead of relying on logical databases
    • Consider whether you want to restore system-redis from the dump or create a new database
    • Consider the Redis versions and ensure that the version you are migrating from matches the version you are migrating to
    • Familiarize yourself with 3scale supported configurations; AWS might provide more options when configuring Redis than the on-cluster configuration, but not all options are fully supported (for example, ACL and TLS)
  • Scale 3scale instance and the operator down
  • Retrieve Redis dump files
  • Follow AWS guidelines on how to create/seed Redis instances from the dump files. This includes:

    • Creating an S3 bucket
    • Adding appropriate permissions to the S3
    • Creating a Redis instance from the S3 backup
  • Perform pre-connection checks
  • Back up and update the Redis secrets: backend-redis and system-redis
  • Scale up the 3scale instance and the operator
  • Restore the secrets in case of a failed migration

6.2.1. Scale down the operator and 3scale instance

Export the following environment variables:

export REDIS_ON_CLUSTER_NAMESPACE=<namespace where the Redis pod is running>
export OPERATOR_NAMESPACE=<namespace where the 3scale operator is running>
export THREESCALE_NAMESPACE=<namespace where the 3scale instance is running>

Additionally, export the following replica counts. This ensures that the 3scale instance is later scaled back up to the same replica values it was scaled down from.

For pre-2.15 Operator versions:

SYSTEM_MEMCACHE_REPLICA_COUNT=$(oc get dc system-memcache -n $THREESCALE_NAMESPACE -o=jsonpath='{.spec.replicas}')
ZYNC_DATABASE_REPLICA_COUNT=$(oc get dc zync-database -n $THREESCALE_NAMESPACE -o=jsonpath='{.spec.replicas}')
APICAST_PRODUCTION_REPLICA_COUNT=$(oc get dc apicast-production -n $THREESCALE_NAMESPACE -o=jsonpath='{.spec.replicas}')
APICAST_STAGING_REPLICA_COUNT=$(oc get dc apicast-staging -n $THREESCALE_NAMESPACE -o=jsonpath='{.spec.replicas}')
BACKEND_CRON_REPLICA_COUNT=$(oc get dc backend-cron -n $THREESCALE_NAMESPACE -o=jsonpath='{.spec.replicas}')
BACKEND_LISTENER_REPLICA_COUNT=$(oc get dc backend-listener -n $THREESCALE_NAMESPACE -o=jsonpath='{.spec.replicas}')
BACKEND_WORKER_REPLICA_COUNT=$(oc get dc backend-worker -n $THREESCALE_NAMESPACE -o=jsonpath='{.spec.replicas}')
SYSTEM_APP_REPLICA_COUNT=$(oc get dc system-app -n $THREESCALE_NAMESPACE -o=jsonpath='{.spec.replicas}')
SYSTEM_SIDEKIQ_REPLICA_COUNT=$(oc get dc system-sidekiq -n $THREESCALE_NAMESPACE -o=jsonpath='{.spec.replicas}')
SYSTEM_SEARCHD_REPLICA_COUNT=$(oc get dc system-searchd -n $THREESCALE_NAMESPACE -o=jsonpath='{.spec.replicas}')
ZYNC_REPLICA_COUNT=$(oc get dc zync -n $THREESCALE_NAMESPACE -o=jsonpath='{.spec.replicas}')
ZYNC_QUE_REPLICA_COUNT=$(oc get dc zync-que -n $THREESCALE_NAMESPACE -o=jsonpath='{.spec.replicas}')

For 2.15+ operator versions:

SYSTEM_MEMCACHE_REPLICA_COUNT=$(oc get deployment system-memcache -n $THREESCALE_NAMESPACE -o=jsonpath='{.spec.replicas}')
ZYNC_DATABASE_REPLICA_COUNT=$(oc get deployment zync-database -n $THREESCALE_NAMESPACE -o=jsonpath='{.spec.replicas}')
APICAST_PRODUCTION_REPLICA_COUNT=$(oc get deployment apicast-production -n $THREESCALE_NAMESPACE -o=jsonpath='{.spec.replicas}')
APICAST_STAGING_REPLICA_COUNT=$(oc get deployment apicast-staging -n $THREESCALE_NAMESPACE -o=jsonpath='{.spec.replicas}')
BACKEND_CRON_REPLICA_COUNT=$(oc get deployment backend-cron -n $THREESCALE_NAMESPACE -o=jsonpath='{.spec.replicas}')
BACKEND_LISTENER_REPLICA_COUNT=$(oc get deployment backend-listener -n $THREESCALE_NAMESPACE -o=jsonpath='{.spec.replicas}')
BACKEND_WORKER_REPLICA_COUNT=$(oc get deployment backend-worker -n $THREESCALE_NAMESPACE -o=jsonpath='{.spec.replicas}')
SYSTEM_APP_REPLICA_COUNT=$(oc get deployment system-app -n $THREESCALE_NAMESPACE -o=jsonpath='{.spec.replicas}')
SYSTEM_SIDEKIQ_REPLICA_COUNT=$(oc get deployment system-sidekiq -n $THREESCALE_NAMESPACE -o=jsonpath='{.spec.replicas}')
SYSTEM_SEARCHD_REPLICA_COUNT=$(oc get deployment system-searchd -n $THREESCALE_NAMESPACE -o=jsonpath='{.spec.replicas}')
ZYNC_REPLICA_COUNT=$(oc get deployment zync -n $THREESCALE_NAMESPACE -o=jsonpath='{.spec.replicas}')
ZYNC_QUE_REPLICA_COUNT=$(oc get deployment zync-que -n $THREESCALE_NAMESPACE -o=jsonpath='{.spec.replicas}')
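If a component does not exist in your deployment, its captured replica count will be empty, and a later `oc scale --replicas=` call would receive an empty argument. A small guard that defaults any empty or unset count (the default of 1 is an assumption; adjust it to your sizing):

```shell
# Simulate a capture that returned nothing:
SYSTEM_APP_REPLICA_COUNT=""

# Default empty or unset counts to 1 before scaling back up.
: "${SYSTEM_APP_REPLICA_COUNT:=1}"
echo "$SYSTEM_APP_REPLICA_COUNT"
```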

Scale down all resources apart from the database instances.

For pre-2.15 operator versions:

oc scale deployment threescale-operator-controller-manager-v2 -n $OPERATOR_NAMESPACE --replicas=0
oc scale dc/{apicast-production,apicast-staging,backend-cron,backend-listener,backend-worker,system-app,system-memcache,system-sidekiq,system-searchd,zync,zync-database,zync-que} -n $THREESCALE_NAMESPACE --replicas=0

For 2.15+ operator versions:

oc scale deployment threescale-operator-controller-manager-v2 -n $OPERATOR_NAMESPACE --replicas=0
oc scale deployment/{apicast-production,apicast-staging,backend-cron,backend-listener,backend-worker,system-app,system-memcache,system-sidekiq,system-searchd,zync,zync-database,zync-que} -n $THREESCALE_NAMESPACE --replicas=0

6.2.2. Retrieve the Redis dump file

For deploymentConfig:

oc cp $(oc get pods -l 'deploymentConfig=backend-redis' -n $REDIS_ON_CLUSTER_NAMESPACE -o json | jq '.items[0].metadata.name' -r):/var/lib/redis/data/dump.rdb ./backend-redis-dump.rdb

oc cp $(oc get pods -l 'deploymentConfig=system-redis' -n $REDIS_ON_CLUSTER_NAMESPACE -o json | jq '.items[0].metadata.name' -r):/var/lib/redis/data/dump.rdb ./system-redis-dump.rdb

For deployment:

oc cp $(oc get pods -l 'deployment=backend-redis' -n $REDIS_ON_CLUSTER_NAMESPACE -o json | jq '.items[0].metadata.name' -r):/var/lib/redis/data/dump.rdb ./backend-redis-dump.rdb

oc cp $(oc get pods -l 'deployment=system-redis' -n $REDIS_ON_CLUSTER_NAMESPACE -o json | jq '.items[0].metadata.name' -r):/var/lib/redis/data/dump.rdb ./system-redis-dump.rdb
Note

Adjust the values according to where and how your Redis instances are deployed.
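Before uploading the dumps to S3, it is worth a quick sanity check that each copied file really is an RDB snapshot: valid dumps begin with the ASCII magic string REDIS. A sketch (the file written here is a stand-in for a real dump):

```shell
# Stand-in for a real dump file -- real RDB files start with "REDIS"
# followed by a four-digit version number.
printf 'REDIS0011' > backend-redis-dump.rdb

# Inspect the first five bytes of the file:
magic=$(head -c 5 backend-redis-dump.rdb)
if [ "$magic" = "REDIS" ]; then
  echo "backend-redis-dump.rdb looks like an RDB file"
else
  echo "backend-redis-dump.rdb is not an RDB file" >&2
fi
```

Repeat the check for system-redis-dump.rdb.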

6.2.3. Create the Redis instance on AWS

Follow the Tutorial: Seeding a new node-based cluster with an externally created backup, from Step 2: Create an Amazon S3 bucket and folder.

Many configuration options must be considered when creating a Redis instance.

Note
Currently, the on-cluster Redis version is 7.0, so ensure that the Redis instance on AWS is created with the same Major.Minor version as is currently used on the cluster.

6.2.4. Pre-connection check

After successful Redis creation, double-check that the communication between your cluster and your Redis instance on AWS is possible:

For deploymentConfig:

oc exec -it $(oc get pods -l 'deploymentConfig=backend-redis' -o jsonpath='{.items[0].metadata.name}') -- redis-cli -h <Redis host from AWS without port, or with -p <port> if a custom port is defined> KEYS "liveness-probe"

For deployment:

oc exec -it $(oc get pods -l 'deployment=backend-redis' -o jsonpath='{.items[0].metadata.name}') -- redis-cli -h <Redis host from AWS without port, or with -p <port> if a custom port is defined> KEYS "liveness-probe"

Adjust the deployment configuration or deployment name as needed.

If the connection is successful, you receive the liveness-probe key: liveness-probe

If the connection is successful but data is missing, you receive the following: (empty array)

This means that Redis was not successfully restored from the dump file, or that the dump file is corrupted.

If the connection is unsuccessful, the command hangs. If this happens, review your configuration on AWS, as it might mean that the AWS resources are inaccessible from your cluster.

Run the commands for both the system and the backend Redis instances.
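Because an unreachable endpoint makes the check hang, consider wrapping it in `timeout` so it fails fast instead. A sketch (the inner `echo` stands in for the real redis-cli invocation shown above, so the sketch runs anywhere):

```shell
# Replace the inner command with the real redis-cli check; a plain echo
# stands in here so the sketch is runnable without a cluster.
if out=$(timeout 5 bash -c 'echo liveness-probe'); then
  echo "check completed: $out"
else
  echo "check timed out -- review AWS network configuration" >&2
fi
```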

6.2.5. Update Redis secret

Before updating the 3scale secrets so that the 3scale instance starts using the new external databases, back them up:

oc get secret backend-redis -n $THREESCALE_NAMESPACE -o yaml > backend-redis-secret.yaml

oc get secret system-redis -n $THREESCALE_NAMESPACE -o yaml > system-redis-secret.yaml

Change the value of the backend-redis secret REDIS_QUEUES_URL on the cluster to:

redis://<AWS Redis endpoint>:6379/1

Change the value of the backend-redis secret REDIS_STORAGE_URL on the cluster to:

redis://<AWS Redis endpoint>:6379/0

Change the value of the system-redis secret URL on the cluster to:

redis://<AWS Redis endpoint>:6379/0
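The three URLs above differ only in the logical-database suffix: backend queues use database 1 and backend storage database 0. A sketch composing them from an example endpoint (the endpoint value is a placeholder):

```shell
AWS_REDIS_ENDPOINT="my-redis.abc123.use1.cache.amazonaws.com"  # placeholder

REDIS_QUEUES_URL="redis://${AWS_REDIS_ENDPOINT}:6379/1"   # backend-redis REDIS_QUEUES_URL
REDIS_STORAGE_URL="redis://${AWS_REDIS_ENDPOINT}:6379/0"  # backend-redis REDIS_STORAGE_URL
SYSTEM_REDIS_URL="redis://${AWS_REDIS_ENDPOINT}:6379/0"   # system-redis URL

echo "$REDIS_QUEUES_URL"
```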

6.2.6. Scale up the operator and 3scale instance

For pre-2.15 operator versions:

oc scale deployment threescale-operator-controller-manager-v2 -n $OPERATOR_NAMESPACE --replicas=1
oc scale dc system-memcache -n $THREESCALE_NAMESPACE --replicas=$SYSTEM_MEMCACHE_REPLICA_COUNT
oc scale dc zync-database -n $THREESCALE_NAMESPACE --replicas=$ZYNC_DATABASE_REPLICA_COUNT
oc scale dc apicast-production -n $THREESCALE_NAMESPACE --replicas=$APICAST_PRODUCTION_REPLICA_COUNT
oc scale dc apicast-staging -n $THREESCALE_NAMESPACE --replicas=$APICAST_STAGING_REPLICA_COUNT
oc scale dc backend-cron -n $THREESCALE_NAMESPACE --replicas=$BACKEND_CRON_REPLICA_COUNT
oc scale dc backend-listener -n $THREESCALE_NAMESPACE --replicas=$BACKEND_LISTENER_REPLICA_COUNT
oc scale dc backend-worker -n $THREESCALE_NAMESPACE --replicas=$BACKEND_WORKER_REPLICA_COUNT
oc scale dc system-app -n $THREESCALE_NAMESPACE --replicas=$SYSTEM_APP_REPLICA_COUNT
oc scale dc system-sidekiq -n $THREESCALE_NAMESPACE --replicas=$SYSTEM_SIDEKIQ_REPLICA_COUNT
oc scale dc system-searchd -n $THREESCALE_NAMESPACE --replicas=$SYSTEM_SEARCHD_REPLICA_COUNT
oc scale dc zync -n $THREESCALE_NAMESPACE --replicas=$ZYNC_REPLICA_COUNT
oc scale dc zync-que -n $THREESCALE_NAMESPACE --replicas=$ZYNC_QUE_REPLICA_COUNT

For 2.15+ operator versions:

oc scale deployment threescale-operator-controller-manager-v2 -n $OPERATOR_NAMESPACE --replicas=1
oc scale deployment system-memcache -n $THREESCALE_NAMESPACE --replicas=$SYSTEM_MEMCACHE_REPLICA_COUNT
oc scale deployment zync-database -n $THREESCALE_NAMESPACE --replicas=$ZYNC_DATABASE_REPLICA_COUNT
oc scale deployment apicast-production -n $THREESCALE_NAMESPACE --replicas=$APICAST_PRODUCTION_REPLICA_COUNT
oc scale deployment apicast-staging -n $THREESCALE_NAMESPACE --replicas=$APICAST_STAGING_REPLICA_COUNT
oc scale deployment backend-cron -n $THREESCALE_NAMESPACE --replicas=$BACKEND_CRON_REPLICA_COUNT
oc scale deployment backend-listener -n $THREESCALE_NAMESPACE --replicas=$BACKEND_LISTENER_REPLICA_COUNT
oc scale deployment backend-worker -n $THREESCALE_NAMESPACE --replicas=$BACKEND_WORKER_REPLICA_COUNT
oc scale deployment system-app -n $THREESCALE_NAMESPACE --replicas=$SYSTEM_APP_REPLICA_COUNT
oc scale deployment system-sidekiq -n $THREESCALE_NAMESPACE --replicas=$SYSTEM_SIDEKIQ_REPLICA_COUNT
oc scale deployment system-searchd -n $THREESCALE_NAMESPACE --replicas=$SYSTEM_SEARCHD_REPLICA_COUNT
oc scale deployment zync -n $THREESCALE_NAMESPACE --replicas=$ZYNC_REPLICA_COUNT
oc scale deployment zync-que -n $THREESCALE_NAMESPACE --replicas=$ZYNC_QUE_REPLICA_COUNT

Wait for the 3scale instance to fully recover and confirm that the data migration was successful.
Once the 3scale instance state is confirmed to be correct, the on-cluster Redis instances can be deleted.

6.2.7. Restoration in case of failure

If the migration is unsuccessful and you need to revert to the previous configuration, scale the operator and the entire 3scale instance down.

For pre-2.15 operator versions:

oc scale deployment threescale-operator-controller-manager-v2 -n $OPERATOR_NAMESPACE --replicas=0
oc scale dc/{apicast-production,apicast-staging,backend-cron,backend-listener,backend-worker,system-app,system-memcache,system-sidekiq,system-searchd,zync,zync-database,zync-que} -n $THREESCALE_NAMESPACE --replicas=0

For 2.15+ operator versions:

oc scale deployment threescale-operator-controller-manager-v2 -n $OPERATOR_NAMESPACE --replicas=0
oc scale deployment/{apicast-production,apicast-staging,backend-cron,backend-listener,backend-worker,system-app,system-memcache,system-sidekiq,system-searchd,zync,zync-database,zync-que} -n $THREESCALE_NAMESPACE --replicas=0

Remove the backend-redis and system-redis secrets:

oc delete secret system-redis -n $THREESCALE_NAMESPACE
oc delete secret backend-redis -n $THREESCALE_NAMESPACE

Re-create the secrets from the backups:

oc apply -f backend-redis-secret.yaml
oc apply -f system-redis-secret.yaml

Scale up the operator and 3scale instance.

For pre-2.15 operator versions:

oc scale deployment threescale-operator-controller-manager-v2 -n $OPERATOR_NAMESPACE --replicas=1
oc scale dc system-memcache -n $THREESCALE_NAMESPACE --replicas=$SYSTEM_MEMCACHE_REPLICA_COUNT
oc scale dc zync-database -n $THREESCALE_NAMESPACE --replicas=$ZYNC_DATABASE_REPLICA_COUNT
oc scale dc apicast-production -n $THREESCALE_NAMESPACE --replicas=$APICAST_PRODUCTION_REPLICA_COUNT
oc scale dc apicast-staging -n $THREESCALE_NAMESPACE --replicas=$APICAST_STAGING_REPLICA_COUNT
oc scale dc backend-cron -n $THREESCALE_NAMESPACE --replicas=$BACKEND_CRON_REPLICA_COUNT
oc scale dc backend-listener -n $THREESCALE_NAMESPACE --replicas=$BACKEND_LISTENER_REPLICA_COUNT
oc scale dc backend-worker -n $THREESCALE_NAMESPACE --replicas=$BACKEND_WORKER_REPLICA_COUNT
oc scale dc system-app -n $THREESCALE_NAMESPACE --replicas=$SYSTEM_APP_REPLICA_COUNT
oc scale dc system-sidekiq -n $THREESCALE_NAMESPACE --replicas=$SYSTEM_SIDEKIQ_REPLICA_COUNT
oc scale dc system-searchd -n $THREESCALE_NAMESPACE --replicas=$SYSTEM_SEARCHD_REPLICA_COUNT
oc scale dc zync -n $THREESCALE_NAMESPACE --replicas=$ZYNC_REPLICA_COUNT
oc scale dc zync-que -n $THREESCALE_NAMESPACE --replicas=$ZYNC_QUE_REPLICA_COUNT

For 2.15+ operator versions:

oc scale deployment threescale-operator-controller-manager-v2 -n $OPERATOR_NAMESPACE --replicas=1
oc scale deployment system-memcache -n $THREESCALE_NAMESPACE --replicas=$SYSTEM_MEMCACHE_REPLICA_COUNT
oc scale deployment zync-database -n $THREESCALE_NAMESPACE --replicas=$ZYNC_DATABASE_REPLICA_COUNT
oc scale deployment apicast-production -n $THREESCALE_NAMESPACE --replicas=$APICAST_PRODUCTION_REPLICA_COUNT
oc scale deployment apicast-staging -n $THREESCALE_NAMESPACE --replicas=$APICAST_STAGING_REPLICA_COUNT
oc scale deployment backend-cron -n $THREESCALE_NAMESPACE --replicas=$BACKEND_CRON_REPLICA_COUNT
oc scale deployment backend-listener -n $THREESCALE_NAMESPACE --replicas=$BACKEND_LISTENER_REPLICA_COUNT
oc scale deployment backend-worker -n $THREESCALE_NAMESPACE --replicas=$BACKEND_WORKER_REPLICA_COUNT
oc scale deployment system-app -n $THREESCALE_NAMESPACE --replicas=$SYSTEM_APP_REPLICA_COUNT
oc scale deployment system-sidekiq -n $THREESCALE_NAMESPACE --replicas=$SYSTEM_SIDEKIQ_REPLICA_COUNT
oc scale deployment system-searchd -n $THREESCALE_NAMESPACE --replicas=$SYSTEM_SEARCHD_REPLICA_COUNT
oc scale deployment zync -n $THREESCALE_NAMESPACE --replicas=$ZYNC_REPLICA_COUNT
oc scale deployment zync-que -n $THREESCALE_NAMESPACE --replicas=$ZYNC_QUE_REPLICA_COUNT

Chapter 7. Troubleshooting external databases

This guide assists in diagnosing and troubleshooting some of the possible issues that arise from using databases that are external to 3scale. These can be either databases on a cloud provider (for example, AWS or GCP) or customer-managed databases that run on a cluster but are external to 3scale.

7.1. Resources

  • Set up and review 3scale monitoring

In all cases, to get a better understanding of the root cause of an issue, you must enable 3scale monitoring. For more details, see Enabling 3scale monitoring stack. Choose the installation process based on the 3scale and OpenShift version in use.

  • Review OpenShift monitoring

OpenShift monitoring provides insights into the state of the cluster and its workloads. It can be extremely beneficial for understanding and diagnosing the root cause of an issue. Familiarize yourself with the OpenShift documentation: About OpenShift Container Platform monitoring

  • Review cloud provider-specific monitoring

Depending on the cloud provider, the monitoring might vary. However, the general concept remains the same. Monitoring should provide sufficient information about the state of the running instance, its connections, memory, and so on. Familiarize yourself with the cloud provider’s monitoring documentation.

  • Enable additional logs on 3scale components

APIcast - see APIcast parameters
System - see Reducing the log level of system-app component in 3scale

Zync - set the following environment variables on the Zync deployment or deployment configuration:

For pre-2.15 operator versions:

oc set env dc/zync DEBUG=1 -n <3SCALE_NAMESPACE>
oc set env dc/zync-que DEBUG=1 -n <3SCALE_NAMESPACE>

For 2.15+ operator versions:

oc set env deployment/zync DEBUG=1 -n <3SCALE_NAMESPACE>
oc set env deployment/zync-que DEBUG=1 -n <3SCALE_NAMESPACE>

Debug logs are beneficial in understanding the issue.

7.1.1. System database

MySQL, PostgreSQL, or Oracle can be used as the system database.
This part of the document covers possible issues and their manifestations, diagnosis steps, and possible solutions. Issues and manifestations are generic to all types of databases; diagnosis and solutions can differ depending on the database provider.

7.1.1.1. System database limitations
  • 3scale currently does not support TLS with the system database
7.1.1.2. Connectivity issue

This is the most basic system database issue and can occur when a database is unreachable.

7.1.1.2.1. Manifestation

The issue manifests as the system component not being able to become ready.

7.1.1.2.2. Diagnosis

When this happens the system-app-pre hook pod will fail with the following error:

ActiveRecord::ConnectionNotEstablished: Unknown <your DB provider> server host '<host from the system-database secret>'

In case the system pod can connect to the database (pre-hook pod passes) but the system app crashes, you might see the following error in the log:

Unknown database '<Database name>' (ActiveRecord::NoDatabaseError)

To confirm that the issue is on the connectivity side, navigate to the system pods and check the logs of the system-app-pre hook pod and the system-app pod.

7.1.1.2.3. Solution

Connectivity issues can happen because of the following reasons:

  • Incorrectly configured system database secret - ensure that all the fields are set in the correct format - more information can be found in Externalizing MySQL database doc: Configuring an external MySQL database
  • Incorrectly configured connectivity setting on the cloud provider or OpenShift side - in the case of cloud provider databases, ensure that your cluster can reach the cloud provider resources, more information about setting up OpenShift with AWS can be found here: Installing a cluster quickly on AWS
  • Incorrectly configured secret with wrong database name - this issue happens when the system connects to the database itself correctly but cannot find the provided database name. To solve this problem, update the secret database name or ensure that the database name provided in the secret exists in the database itself - more information can be found in Externalizing MySQL database doc: Configuring an external MySQL database
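
A quick reachability test run from any pod with bash helps to separate network or DNS problems from credential problems. A sketch using bash's /dev/tcp redirection (the host and port below are deliberately unroutable RFC 5737 documentation values, so this sketch prints the failure branch; substitute the host and port from your system-database secret):

```shell
DB_HOST="192.0.2.1"   # placeholder (RFC 5737 documentation address)
DB_PORT="5432"        # placeholder; use your real database port

# Attempt a raw TCP connect with a 3-second cap so the check never hangs.
if timeout 3 bash -c "</dev/tcp/${DB_HOST}/${DB_PORT}" 2>/dev/null; then
  echo "TCP connect to ${DB_HOST}:${DB_PORT} succeeded"
else
  echo "cannot reach ${DB_HOST}:${DB_PORT} -- check network configuration"
fi
```
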
7.1.1.3. Performance bottleneck issues

This part of the document describes potential database bottleneck problems.

7.1.1.3.1. Manifestation
7.1.1.3.1.1. 1. Slow Query Performance
  • Long Response Times: Queries take longer to execute than expected, leading to delays in application responses.
  • Increased Execution Time: Simple queries that previously ran quickly now take significantly more time.
  • Frequent Timeouts: Queries may timeout or fail due to exceeding maximum execution time limits.
7.1.1.3.1.2. 2. High CPU Utilization
  • Excessive CPU Load: The database server exhibits high CPU usage, often near 100%, causing slow performance for all database operations.
  • CPU Spikes: Frequent spikes in CPU usage during peak query execution times.
7.1.1.3.1.3. 3. High Memory Usage
  • Memory Swapping: The database server might start using swap memory, leading to a significant decrease in performance.
  • Insufficient Cache: Inefficient use of memory, leading to frequent cache misses and more disk I/O operations.
7.1.1.3.1.4. 4. Disk I/O Bottlenecks
  • High I/O Wait Times: The system shows high I/O wait times, indicating that processes are frequently waiting for disk operations to complete.
  • Slow Disk Access: Increased latency in reading from or writing to the disk, causing overall slow database operations.
  • Log File Saturation: Write-heavy operations (for example: transaction logs) may saturate disk bandwidth, slowing down other operations.
7.1.1.3.1.5. 5. Locking and Concurrency Issues
  • Lock Wait Timeouts: Frequent lock waits or deadlocks occur, causing queries to be delayed or aborted.
  • Transaction Contention: Multiple transactions contend for the same resources, leading to increased wait times and slower processing.
7.1.1.3.1.6. 6. Increased Connection Latency
  • Delayed Connections: Establishing new connections to the database becomes slower, potentially causing timeouts.
  • Connection Pool Saturation: Connection pools reach their maximum limits, causing delays or failures in acquiring a database connection.
7.1.1.3.1.7. 7. Query Contention
  • Deadlocks: Increased frequency of deadlocks, where two or more queries are waiting for each other to release locks.
  • Blocking Queries: Queries block each other, leading to cascading delays and slow performance for dependent queries.
7.1.1.3.1.8. 8. Increased Error Rates
  • Timeouts and Failures: Higher rates of query timeouts, transaction rollbacks, or failed queries.
  • Resource Exhaustion: Errors related to running out of critical resources like memory, disk space, or file descriptors.
7.1.1.3.1.9. 9. Unresponsive Database
  • Database Crashes or Hangs: The database server becomes unresponsive, requiring a restart to restore normal operation.
  • Long Recovery Times: After a crash or failure, the database takes a long time to recover, indicating underlying performance issues.
7.1.1.3.1.10. 10. Poor Application Performance
  • Slow Application Responses: Applications dependent on the database exhibit poor performance, with slower page loads, longer processing times, and delayed transactions.
  • User Complaints: End users report slowness or unresponsiveness in applications, often pointing to database-related issues.
7.1.1.3.1.11. 11. Increased Network Latency
  • Network Saturation: If the database is remote, high network latency or saturation can cause delays in query execution and data retrieval.
  • Slow Data Transfer: Large queries or data retrieval operations take longer than usual due to network-related bottlenecks.
7.1.1.3.1.12. 12. Inefficient Index Usage
  • Table Scans: Queries that should be using indexes are instead performing full table scans, leading to increased execution times.
  • Fragmented Indexes: Index fragmentation causes slower query performance and increased disk I/O.
7.1.1.3.1.13. 13. High Replication Lag
  • Delayed Replication: In a replicated setup, the lag between the primary and replica servers increases, causing stale data to be served from replicas.
  • Replication Conflicts: Replication errors or conflicts slow down the entire replication process, leading to data inconsistency.
7.1.1.3.2. Diagnosis

It is recommended to enable debug logging on the system app to understand the issue better. The logs provide useful information, like the QUERY and the response times. For example:

Settings Load (0.5ms) SELECT settings.* FROM settings WHERE settings.account_id = 1 LIMIT 1

By investigating the logs and comparing them to previous logs (if available), you should be able to identify the increase in response times.

Resource Monitoring
Track CPU, memory, and I/O usage using monitoring tools like Prometheus, Grafana, or native database monitoring solutions. The specifics can vary depending on the database provider.

If the database runs on a cluster, 3scale Monitoring can help investigate database resource usage and network performance.

If the database is hosted and managed by a cloud provider, the provider-specific monitoring stack, for example AWS CloudWatch metrics, might help trace the root cause.

Optimize Configuration
Adjust database configuration settings (for example, buffer sizes and cache limits) to better handle the workload. Increasing the buffer size or cache size in MySQL, for example, is appropriate when the database shows memory-related performance problems such as excessive disk I/O, slow queries, or high contention for resources.
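For a MySQL system database, such settings live in my.cnf. The fragment below shows where these knobs are set; the values are placeholders for illustration, not recommendations, and must be sized to your workload and available RAM:

```ini
[mysqld]
# Illustrative values only -- size these to your workload and available RAM.
innodb_buffer_pool_size = 2G    # main InnoDB data and index cache
innodb_log_file_size    = 512M  # larger redo log smooths write bursts
max_connections         = 500   # cap concurrent client connections
```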

7.1.1.3.3. Solution

Resource issues
If you have encountered resource issues and know which resource is lacking, a possible solution is to increase the resource limits. For example, when running the database on a cluster, adjusting the deployment resource limits might help. If running on AWS or another cloud provider, consider moving to an instance type with more resources available.

Optimize Configuration
Tweak the database configuration accordingly.

7.1.2. Redis databases

Redis databases are used for the system and backend components of 3scale.
This part of the document covers possible issues and their manifestations, diagnosis steps, and possible solutions.

7.1.2.1. Redis databases limitations
  • 3scale currently does not support Redis with ACLs
  • 3scale currently does not support TLS connections to Redis
7.1.2.2. Connectivity issue

This is the most basic Redis database issue and can occur when a database is unreachable.

7.1.2.2.1. Manifestation

The issue manifests as the system and/or backend components failing to become ready.

7.1.2.2.2. Diagnosis

When this happens, the system-app-pre hook pod fails with the following error:

Redis::CannotConnectError: Bad file descriptor (redis://<Redis host>:<Redis port>/1)

When the same issue affects workers, the following error can be found in the backend worker pod, in the svc container:

Error connecting to Redis queue storage: Bad file descriptor (redis://<Redis host>:<Redis port>/1)
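Before digging into secrets or provider settings, a quick TCP reachability probe can rule out basic network problems. The sketch below is a generic helper, not a 3scale tool, and it checks only that the socket opens, not AUTH or the Redis protocol itself:

```python
import socket

def can_connect(host, port, timeout=3.0):
    """Basic TCP reachability probe for a Redis endpoint."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # refused, unreachable, or timed out
        return False

# Example: probe the host/port taken from the backend Redis secret.
# can_connect("my-redis.example.com", 6379)
```

If the probe fails from inside the cluster but succeeds elsewhere, the problem is likely network policy or cloud provider connectivity rather than the secret's contents.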

7.1.2.2.3. Solution

Connectivity issues can happen because of the following reasons:

  • Incorrectly configured backend Redis secret - ensure that all the fields are set in the correct format. For more information, see the External Redis Database configuration document.
  • Incorrectly configured connectivity setting on the cloud provider or OpenShift side - in the case of cloud provider databases, ensure that your cluster can reach the cloud provider resources. For more information, see Setting up OpenShift with AWS.
7.1.2.3. Performance bottlenecks issues

This part of the document describes potential problems with database bottlenecks.

7.1.2.3.1. Manifestation
7.1.2.3.1.1. 1. Memory Management Issues
  • Out of Memory (OOM) Errors: Redis throws errors when it runs out of memory, typically with the message OOM command not allowed when used memory > 'maxmemory'.
  • Increased Latency: As memory usage approaches the limit, Redis might experience increased latency due to more frequent garbage collection or swapping.
  • Evicted Keys: When Redis is configured with a maxmemory limit and eviction policy, keys may be evicted (deleted) before they should be, leading to data loss.
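The symptoms above can be quantified from `redis-cli INFO` output: `used_memory` against `maxmemory` shows how close the instance is to the limit, and `evicted_keys` shows whether data is already being dropped. A minimal sketch (the sample values are illustrative):

```python
def parse_info(info_text):
    """Parse redis-cli INFO output ('key:value' lines) into a dict."""
    out = {}
    for line in info_text.splitlines():
        if ":" in line and not line.startswith("#"):
            key, _, value = line.partition(":")
            out[key] = value.strip()
    return out

def memory_pressure(info):
    """Return (used/maxmemory ratio or None if unlimited, evicted key count)."""
    used = int(info.get("used_memory", 0))
    maxmem = int(info.get("maxmemory", 0))
    ratio = used / maxmem if maxmem else None
    return ratio, int(info.get("evicted_keys", 0))

sample = """# Memory
used_memory:900000000
maxmemory:1000000000
maxmemory_policy:volatile-lru
# Stats
evicted_keys:1234
"""
ratio, evicted = memory_pressure(parse_info(sample))
print(ratio, evicted)  # 0.9 1234
```

A ratio approaching 1.0 combined with a growing `evicted_keys` count is the numeric signature of the OOM and eviction symptoms listed above.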
7.1.2.3.1.2. 2. Latency Issues
  • Increased Latency Spikes: Periodic spikes in latency, particularly during large operations or when Redis is persistently writing data to disk.
7.1.2.3.2. Diagnosis
7.1.2.3.2.1. Backend Redis
  1. Memory Management Issues
    The backend worker currently does not support debug-level logs; however, backend worker issues are usually written to the pod logs.
    For example:
    bundler: failed to load command: bin/3scale_backend_worker (bin/3scale_backend_worker)
    /opt/ruby/deps/rubygems/github.com/3scale/redis-rb/redis-rb-external-gitcommit-7210a9d6cf733fe5a1ad0dd20f5f613167743810/app/lib/redis/client.rb:126:in `call': OOM command not allowed when used memory > 'maxmemory'.
    This indicates memory issues on the backend worker.
    If Redis runs on a cluster, running INFO memory on the Redis pod is useful to check current memory usage.
    Use tools like Grafana dashboards from 3scale Monitoring or CloudWatch from AWS to investigate the memory usage.
    You can also check the eviction policy (volatile-lru, allkeys-lru, and so on) and track key evictions with INFO stats.
  2. Latency issues
    If running Redis on a cluster, use redis-cli --latency or redis-cli --latency-history to monitor and log latency over time.
    If using a cloud provider database, metrics such as AWS CloudWatch might help you understand the issue better.
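When comparing latency captures over time, it helps to reduce each run to its worst window. The sketch below assumes the line format printed by redis-cli --latency-history (one summary line per sampling window):

```python
import re

# One line per window, e.g.:
#   "min: 0, max: 5, avg: 0.42 (1365 samples) -- 15.01 seconds range"
HIST_RE = re.compile(r"min: (\d+), max: (\d+), avg: ([\d.]+)")

def worst_window(lines):
    """Return (max_ms, avg_ms) for the worst latency window observed."""
    worst = (0, 0.0)
    for line in lines:
        m = HIST_RE.search(line)
        if m and int(m.group(2)) > worst[0]:
            worst = (int(m.group(2)), float(m.group(3)))
    return worst

lines = [
    "min: 0, max: 1, avg: 0.11 (1380 samples) -- 15.00 seconds range",
    "min: 0, max: 27, avg: 0.83 (1290 samples) -- 15.01 seconds range",
]
print(worst_window(lines))  # (27, 0.83)
```

Periodic spikes in the worst window, for example aligned with RDB snapshot or AOF rewrite times, point to persistence-related disk I/O as the cause.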
7.1.2.3.2.2. System Redis
  1. Memory Management Issues
    For system Redis, the instance might become overloaded with jobs to be processed; this can be caused by system-sidekiq issues. Going through the system-sidekiq logs might indicate what the issue is.

If sidekiq is functional, navigating to https://master.<domain>/sidekiq can also help diagnose the jobs being processed.

7.1.2.3.3. Solution
7.1.2.3.3.1. Backend Redis
  1. Memory Management Issues
    Consider increasing memory and configuring the eviction policies. Alternatively, look into distributing the load across multiple Redis instances to balance the memory usage (Sentinels). For more information, see how to configure Redis with sentinels.
  2. Latency Issues
    Consider Redis persistence tuning by adjusting the SAVE (RDB) or AOF settings to reduce the impact of disk I/O on performance.
    Look into distributing the load across multiple Redis instances (Sentinels). For more information, see how to configure Redis with sentinels.
7.1.2.3.3.2. System Redis
  • Memory Management Issues
    Ensure sidekiq jobs are being processed promptly.

Legal Notice

Copyright © 2025 Red Hat, Inc.
The text of and illustrations in this document are licensed by Red Hat under a Creative Commons Attribution–Share Alike 3.0 Unported license ("CC-BY-SA"). An explanation of CC-BY-SA is available at http://creativecommons.org/licenses/by-sa/3.0/. In accordance with CC-BY-SA, if you distribute this document or an adaptation of it, you must provide the URL for the original version.
Red Hat, as the licensor of this document, waives the right to enforce, and agrees not to assert, Section 4d of CC-BY-SA to the fullest extent permitted by applicable law.
Red Hat, Red Hat Enterprise Linux, the Shadowman logo, the Red Hat logo, JBoss, OpenShift, Fedora, the Infinity logo, and RHCE are trademarks of Red Hat, Inc., registered in the United States and other countries.
Linux® is the registered trademark of Linus Torvalds in the United States and other countries.
Java® is a registered trademark of Oracle and/or its affiliates.
XFS® is a trademark of Silicon Graphics International Corp. or its subsidiaries in the United States and/or other countries.
MySQL® is a registered trademark of MySQL AB in the United States, the European Union and other countries.
Node.js® is an official trademark of Joyent. Red Hat is not formally related to or endorsed by the official Joyent Node.js open source or commercial project.
The OpenStack® Word Mark and OpenStack logo are either registered trademarks/service marks or trademarks/service marks of the OpenStack Foundation, in the United States and other countries and are used with the OpenStack Foundation's permission. We are not affiliated with, endorsed or sponsored by the OpenStack Foundation, or the OpenStack community.
All other trademarks are the property of their respective owners.