Chapter 2. Upgrading 3scale version 2.11 to version 2.12 using templates


You can upgrade Red Hat 3scale API Management from version 2.11 to version 2.12 using a template-based deployment on OpenShift 3.11.

Important

This is the last release that supports upgrades of 3scale deployed with templates on OpenShift 3.11. To stay supported in future releases, follow the 3scale migration guide: from template to operator-based deployments.

Important

To understand the required conditions and procedures, read the entire upgrade guide before applying the listed steps. The upgrade process disrupts service until the procedure finishes, so be sure to schedule a maintenance window.

2.1. Prerequisites to perform the upgrade

This section describes the required configurations, tasks, and tools to upgrade 3scale from 2.11 to 2.12 in a template-based installation.

2.1.1. Configurations

  • 3scale supports upgrade paths from 2.11 to 2.12 with templates on OpenShift 3.11.

2.1.2. Preliminary tasks

  • Ensure your OpenShift CLI tool is configured in the same project where 3scale is deployed.
  • Perform a backup of the database you are using with 3scale. The backup procedure is specific to each database type and setup.

2.1.3. Tools

You need these tools to perform the upgrade:

  • 3scale 2.11 deployed with templates in an OpenShift 3.11 project.
  • Bash shell: To run the commands detailed in the upgrade procedure.
  • base64: To encode and decode secret information.
  • jq: For JSON transformation purposes.
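The tools above are used throughout the procedure, for example to inspect secret data. The following optional sketch (with a made-up value, not taken from a real secret) round-trips a string through base64 and, when jq is available, extracts a field from a JSON document the way later steps do:

```shell
# Illustrative secret value (not from a real 3scale deployment)
SECRET_VALUE=$(echo -n "redis://system-redis:6379/1" | base64)

# Decode it back, as you would when inspecting a secret's data field
echo "${SECRET_VALUE}" | base64 --decode

# jq extracts a single field from JSON, as the upgrade steps do with oc output
if command -v jq >/dev/null 2>&1; then
  echo '{"data": {"URL": "'"${SECRET_VALUE}"'"}}' | jq -r '.data.URL' | base64 --decode
fi
```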

2.2. Upgrading from 2.11 to 2.12 in a template-based installation

Follow the procedure described in this section to upgrade 3scale 2.11 to 2.12 in a template-based installation.

To start with the upgrade, go to the project where 3scale is deployed.

$ oc project <3scale-project>

Then, follow these steps in this order:

2.2.1. Creating a backup of the 3scale project

Previous step

None.

Current step

This step lists the actions necessary to create a backup of the 3scale project.

Procedure

  1. Depending on the database used with 3scale, set ${SYSTEM_DB} with one of the following values:

    • If the database is MySQL, SYSTEM_DB=system-mysql.
    • If the database is PostgreSQL, SYSTEM_DB=system-postgresql.
  2. Create a backup file with the existing DeploymentConfigs:

    $ THREESCALE_DC_NAMES="apicast-production apicast-staging backend-cron backend-listener backend-redis backend-worker system-app system-memcache ${SYSTEM_DB} system-redis system-sidekiq system-sphinx zync zync-database zync-que"
    
    for component in ${THREESCALE_DC_NAMES}; do oc get --export -o yaml dc ${component} > ${component}_dc.yml ; done
  3. Back up all existing OpenShift resources in the project that are exported by the oc get --export all command:

    $ oc get -o yaml --export all > threescale-project-elements.yaml
  4. Create a backup file with the additional elements that are not exported by the oc get --export all command:

    $ for object in rolebindings serviceaccounts secrets imagestreamtags cm rolebindingrestrictions limitranges resourcequotas pvc templates cronjobs statefulsets hpa deployments replicasets poddisruptionbudget endpoints
    do
      oc get -o yaml --export $object > $object.yaml
    done
  5. Verify that all of the generated files are not empty, and that all of them have the expected content.
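Step 5 can be scripted. The following optional sketch (the helper name check_nonempty is made up) flags any backup file that is missing or zero-sized:

```shell
# check_nonempty: warn about every file that is missing or empty
check_nonempty() {
  status=0
  for f in "$@"; do
    if [ ! -s "$f" ]; then
      echo "WARNING: $f is missing or empty"
      status=1
    fi
  done
  return "$status"
}

# Check the per-component DeploymentConfig backups and the project export
check_nonempty ./*_dc.yml threescale-project-elements.yaml || echo "Review the flagged files"
```

This only checks file sizes; still open the files to confirm they contain the expected resources.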

2.2.2. Removing unused AMP_RELEASE variables

Current step

This step removes the unused AMP_RELEASE variable from the system-app containers and the system-app pre hook, and then verifies that AMP_RELEASE no longer exists.

Procedure

  1. Remove the variable from the system-app containers:

    • Note the trailing dash character (-) after the variable name.

      $ oc set env dc/system-app AMP_RELEASE-
  2. Remove the variable from the system-app pre hook:

    $ INDEX=$(oc get dc system-app -o json | jq '.spec.strategy.rollingParams.pre.execNewPod.env | map(.name == "AMP_RELEASE") | index(true)')
    oc patch dc/system-app --type=json -p "[{'op': 'remove', 'path': '/spec/strategy/rollingParams/pre/execNewPod/env/$INDEX'}]"
  3. Verify that AMP_RELEASE no longer exists; the command should return no output:

    $ oc get dc system-app -o yaml | grep AMP_RELEASE
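The INDEX lookup in step 2 works by turning the env array into booleans with map and locating the first match with index(true). The following self-contained illustration uses a made-up env list. Note that index(true) returns null when the variable is absent, in which case the subsequent oc patch would fail and should be skipped:

```shell
# Env array shaped like .spec.strategy.rollingParams.pre.execNewPod.env
ENV_JSON='[{"name":"RAILS_ENV","value":"production"},{"name":"AMP_RELEASE","value":"2.11"}]'

if command -v jq >/dev/null 2>&1; then
  # map(.name == "AMP_RELEASE") yields [false, true]; index(true) yields 1
  echo "${ENV_JSON}" | jq 'map(.name == "AMP_RELEASE") | index(true)'

  # A variable that is not present yields null
  echo "${ENV_JSON}" | jq 'map(.name == "MISSING_VAR") | index(true)'
fi
```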

2.2.3. Upgrading your MySQL configuration

Note

If your 3scale deployment has the external databases mode enabled and uses MySQL 8.0, set the authentication plugin to mysql_native_password for 3scale 2.12.

Add the following to the MySQL configuration file:

[mysqld]
default_authentication_plugin=mysql_native_password

Current step

This step patches the MySQL configuration configmap to enable the upgrade to MySQL 8.0.

Note

Only follow this procedure if a system-mysql deployment exists in your current 3scale installation.

Procedure

  1. Patch the configmap:

    $ oc patch configmap/mysql-extra-conf --type merge -p '{"data": {"mysql-default-authentication-plugin.cnf": "[mysqld]\ndefault_authentication_plugin=mysql_native_password"}}'
  2. Verify the configmap is updated:

    $ oc get cm mysql-extra-conf -o jsonpath='{.data.mysql-default-authentication-plugin\.cnf}'
    • The command should return:

      [mysqld]
      default_authentication_plugin=mysql_native_password

2.2.4. Upgrading 3scale images

Current step

This step updates the 3scale images required for the upgrade process.

2.2.4.1. Patch the system image

  1. Create the new image stream tag:

    $ oc patch imagestream/amp-system --type=json -p '[{"op": "add", "path": "/spec/tags/-", "value": {"annotations": {"openshift.io/display-name": "AMP system 2.12"}, "from": { "kind": "DockerImage", "name": "registry.redhat.io/3scale-amp2/system-rhel7:3scale2.12"}, "name": "2.12", "referencePolicy": {"type": "Source"}}}]'
  2. To continue the procedure, consider the database used with your 3scale deployment:

2.2.4.1.1. Patching the system image: 3scale with Oracle Database
  1. To start patching the system image of 3scale with an Oracle Database, you must build the system image:

    • Download 3scale OpenShift templates from the GitHub repository and extract the archive:

      $ tar -xzf 3scale-amp-openshift-templates-3scale-2.12.0-GA.tar.gz
    • Place your Oracle Database Instant Client Package files into the 3scale-amp-openshift-templates-3scale-2.12.0-GA/amp/system-oracle/oracle-client-files directory.
    • Run the oc process command with the -f option to specify the build.yml OpenShift template, and pipe its output to the oc apply command with the -f option to override the existing build:

      $ oc process -f build.yml | oc apply -f -
    • Enter the oc start-build command to build the new system image:

      $ oc start-build 3scale-amp-system-oracle --from-dir=.
  2. Patch the system-app ImageChangeTrigger:

    1. Remove the old 2.11-oracle trigger:

      $ oc set triggers dc/system-app --from-image=amp-system:2.11-oracle --containers=system-master,system-developer,system-provider --remove
    2. Add the new version-specific trigger:

      $ oc set triggers dc/system-app --from-image=amp-system:2.12-oracle --containers=system-master,system-developer,system-provider

      This triggers a redeployment of system-app. Wait until it is redeployed, its corresponding new pods are ready, and the old ones terminated.

  3. Patch the system-sidekiq ImageChange trigger:

    1. Remove the old 2.11-oracle trigger:

      $ oc set triggers dc/system-sidekiq --from-image=amp-system:2.11-oracle --containers=system-sidekiq,check-svc --remove
    2. Add the new version-specific trigger:

      $ oc set triggers dc/system-sidekiq --from-image=amp-system:2.12-oracle --containers=system-sidekiq,check-svc

      This triggers a redeployment of system-sidekiq. Wait until it is redeployed, its corresponding new pods are ready, and the old ones terminated.

  4. Patch the system-sphinx ImageChange trigger:

    1. Remove the old 2.11-oracle trigger:

      $ oc set triggers dc/system-sphinx --from-image=amp-system:2.11-oracle --containers=system-sphinx,system-master-svc --remove
    2. Add the new version-specific trigger:

      $ oc set triggers dc/system-sphinx --from-image=amp-system:2.12-oracle --containers=system-sphinx,system-master-svc

      This triggers a redeployment of system-sphinx. Wait until it is redeployed, its corresponding new pods are ready, and the old ones terminated.

  5. If you scaled 3scale down earlier, scale it back up.
2.2.4.1.2. Patching the system image: 3scale with other databases
  1. Patch the system-app ImageChange trigger:

    1. Remove the old 2.11 trigger:

      $ oc set triggers dc/system-app --from-image=amp-system:2.11 --containers=system-master,system-developer,system-provider --remove
    2. Add the new version-specific trigger:

      $ oc set triggers dc/system-app --from-image=amp-system:2.12 --containers=system-master,system-developer,system-provider

      This triggers a redeployment of system-app. Wait until it is redeployed, its corresponding new pods are ready, and the old ones terminated.

  2. Patch the system-sidekiq ImageChange trigger:

    1. Remove the old 2.11 trigger:

      $ oc set triggers dc/system-sidekiq --from-image=amp-system:2.11 --containers=system-sidekiq,check-svc --remove
    2. Add the new version-specific trigger:

      $ oc set triggers dc/system-sidekiq --from-image=amp-system:2.12 --containers=system-sidekiq,check-svc

      This triggers a redeployment of system-sidekiq. Wait until it is redeployed, its corresponding new pods are ready, and the old ones terminated.

  3. Patch the system-sphinx ImageChange trigger:

    1. Remove the old 2.11 trigger:

      $ oc set triggers dc/system-sphinx --from-image=amp-system:2.11 --containers=system-sphinx,system-master-svc --remove
    2. Add the new version-specific trigger:

      $ oc set triggers dc/system-sphinx --from-image=amp-system:2.12 --containers=system-sphinx,system-master-svc

      This triggers a redeployment of system-sphinx. Wait until it is redeployed, its corresponding new pods are ready, and the old ones terminated.

2.2.4.2. Patch the apicast image

  1. Patch the amp-apicast image stream:

    $ oc patch imagestream/amp-apicast --type=json -p '[{"op": "add", "path": "/spec/tags/-", "value": {"annotations": {"openshift.io/display-name": "AMP APIcast 2.12"}, "from": { "kind": "DockerImage", "name": "registry.redhat.io/3scale-amp2/apicast-gateway-rhel8:3scale2.12"}, "name": "2.12", "referencePolicy": {"type": "Source"}}}]'
  2. Patch the apicast-staging ImageChange trigger:

    1. Remove the old 2.11 trigger:

      $ oc set triggers dc/apicast-staging --from-image=amp-apicast:2.11 --containers=apicast-staging --remove
    2. Add the new version-specific trigger:

      $ oc set triggers dc/apicast-staging --from-image=amp-apicast:2.12 --containers=apicast-staging

      This triggers a redeployment of apicast-staging. Wait until it is redeployed, its corresponding new pods are ready, and the old ones terminated.

  3. Patch the apicast-production ImageChange trigger:

    1. Remove the old 2.11 trigger:

      $ oc set triggers dc/apicast-production --from-image=amp-apicast:2.11 --containers=apicast-production,system-master-svc --remove
    2. Add the new version-specific trigger:

      $ oc set triggers dc/apicast-production --from-image=amp-apicast:2.12 --containers=apicast-production,system-master-svc

      This triggers a redeployment of apicast-production. Wait until it is redeployed, its corresponding new pods are ready, and the old ones terminated.

2.2.4.3. Patch the backend image

  1. Patch the amp-backend image stream:

    $ oc patch imagestream/amp-backend --type=json -p '[{"op": "add", "path": "/spec/tags/-", "value": {"annotations": {"openshift.io/display-name": "AMP Backend 2.12"}, "from": { "kind": "DockerImage", "name": "registry.redhat.io/3scale-amp2/backend-rhel8:3scale2.12"}, "name": "2.12", "referencePolicy": {"type": "Source"}}}]'
  2. Patch the backend-listener ImageChange trigger:

    1. Remove the old 2.11 trigger:

      $ oc set triggers dc/backend-listener --from-image=amp-backend:2.11 --containers=backend-listener --remove
    2. Add the new version-specific trigger:

      $ oc set triggers dc/backend-listener --from-image=amp-backend:2.12 --containers=backend-listener

      This triggers a redeployment of backend-listener. Wait until it is redeployed, its corresponding new pods are ready, and the old ones terminated.

  3. Patch the backend-worker ImageChange trigger:

    1. Remove the old 2.11 trigger:

      $ oc set triggers dc/backend-worker --from-image=amp-backend:2.11 --containers=backend-worker,backend-redis-svc --remove
    2. Add the new version-specific trigger:

      $ oc set triggers dc/backend-worker --from-image=amp-backend:2.12 --containers=backend-worker,backend-redis-svc

      This triggers a redeployment of backend-worker. Wait until it is redeployed, its corresponding new pods are ready, and the old ones terminated.

  4. Patch the backend-cron ImageChange trigger:

    1. Remove the old 2.11 trigger:

      $ oc set triggers dc/backend-cron --from-image=amp-backend:2.11 --containers=backend-cron,backend-redis-svc --remove
    2. Add the new version-specific trigger:

      $ oc set triggers dc/backend-cron --from-image=amp-backend:2.12 --containers=backend-cron,backend-redis-svc

      This command triggers a redeployment of backend-cron. Wait until it is redeployed, its corresponding new pods are ready, and the previous pods are terminated.

2.2.4.4. Patch the zync image

  1. Patch the amp-zync image stream:

    $ oc patch imagestream/amp-zync --type=json -p '[{"op": "add", "path": "/spec/tags/-", "value": {"annotations": {"openshift.io/display-name": "AMP Zync 2.12"}, "from": { "kind": "DockerImage", "name": "registry.redhat.io/3scale-amp2/zync-rhel8:3scale2.12"}, "name": "2.12", "referencePolicy": {"type": "Source"}}}]'
  2. Patch the zync ImageChange trigger:

    1. Remove the old 2.11 trigger:

      $ oc set triggers dc/zync --from-image=amp-zync:2.11 --containers=zync,zync-db-svc --remove
    2. Add the new version-specific trigger:

      $ oc set triggers dc/zync --from-image=amp-zync:2.12 --containers=zync,zync-db-svc

      This triggers a redeployment of zync. Wait until it is redeployed, its corresponding new pods are ready, and the old ones terminated.

  3. Patch the zync-que ImageChange trigger:

    1. Remove the old 2.11 trigger:

      $ oc set triggers dc/zync-que --from-image=amp-zync:2.11 --containers=que --remove
    2. Add the new version-specific trigger:

      $ oc set triggers dc/zync-que --from-image=amp-zync:2.12 --containers=que

      This triggers a redeployment of zync-que. Wait until it is redeployed, its corresponding new pods are ready, and the old ones terminated.

2.2.4.5. Patch the system-memcached image

  1. Patch the system-memcached image stream:

    $ oc patch imagestream/system-memcached --type=json -p '[{"op": "add", "path": "/spec/tags/-", "value": {"annotations": {"openshift.io/display-name": "System 2.12 Memcached"}, "from": { "kind": "DockerImage", "name": "registry.redhat.io/3scale-amp2/memcached-rhel7:3scale2.12"}, "name": "2.12", "referencePolicy": {"type": "Source"}}}]'
  2. Patch the system-memcache ImageChange trigger:

    1. Remove the old 2.11 trigger:

      $ oc set triggers dc/system-memcache --from-image=system-memcached:2.11 --containers=memcache --remove
    2. Add the new version-specific trigger:

      $ oc set triggers dc/system-memcache --from-image=system-memcached:2.12 --containers=memcache

      This triggers a redeployment of the system-memcache DeploymentConfig. Wait until it is redeployed, its corresponding new pods are ready, and the old ones terminated.

2.2.4.6. Patch the zync-database-postgresql image

  1. Patch the zync-database-postgresql image stream:

    $ oc patch imagestream/zync-database-postgresql --type=json -p '[{"op": "add", "path": "/spec/tags/-", "value": {"annotations": {"openshift.io/display-name": "Zync 2.12 PostgreSQL"}, "from": { "kind": "DockerImage", "name": "registry.redhat.io/rhscl/postgresql-10-rhel7"}, "name": "2.12", "referencePolicy": {"type": "Source"}}}]'
    • This patch command updates the zync-database-postgresql image stream to contain the 2.12 tag. You can verify that the 2.12 tag has been created with these steps:

      1. Run this command:

        $ oc get is zync-database-postgresql
      2. Check that the Tags column shows the 2.12 tag.
  2. Patch the zync-database ImageChange trigger:

    1. Remove the old 2.11 trigger:

      $ oc set triggers dc/zync-database --from-image=zync-database-postgresql:2.11 --containers=postgresql --remove
    2. Add the new version-specific trigger:

      $ oc set triggers dc/zync-database --from-image=zync-database-postgresql:2.12 --containers=postgresql

      If there are new updates to the image, this patch might also trigger a redeployment of the zync-database DeploymentConfig. If this happens, wait until the new pods are ready and the old pods are terminated.

2.2.4.7. Additional image changes

If one or more of the following DeploymentConfigs are available in your 3scale 2.11 installation, click the links that apply to obtain more information on how to proceed:

backend-redis DeploymentConfig

If the backend-redis DeploymentConfig exists in your current 3scale installation, patch the redis image for backend-redis:

  1. Patch the backend-redis image stream:

    $ oc patch imagestream/backend-redis --type=json -p '[{"op": "add", "path": "/spec/tags/-", "value": {"annotations": {"openshift.io/display-name": "Backend 2.12 Redis"}, "from": { "kind": "DockerImage", "name": "registry.redhat.io/rhscl/redis-5-rhel7:5"}, "name": "2.12", "referencePolicy": {"type": "Source"}}}]'

    This patch updates the backend-redis image stream to contain the 2.12 tag. To confirm that the tag has been created, run the command below and check that the Tags column shows 2.12:

    $ oc get is backend-redis
  2. Patch the backend-redis ImageChange trigger:

    1. Remove the old 2.11 trigger:

      $ oc set triggers dc/backend-redis --from-image=backend-redis:2.11 --containers=backend-redis --remove
    2. Add the new version-specific trigger:

      $ oc set triggers dc/backend-redis --from-image=backend-redis:2.12 --containers=backend-redis

      If there are new updates to the image, this patch might also trigger a redeployment of the backend-redis DeploymentConfig. If this happens, wait until the new pods are ready and the old pods are terminated.

system-redis DeploymentConfig

If the system-redis DeploymentConfig exists in your current 3scale installation, patch the redis image for system-redis.

  1. Patch the system-redis image stream:

    $ oc patch imagestream/system-redis --type=json -p '[{"op": "add", "path": "/spec/tags/-", "value": {"annotations": {"openshift.io/display-name": "System 2.12 Redis"}, "from": { "kind": "DockerImage", "name": "registry.redhat.io/rhscl/redis-5-rhel7:5"}, "name": "2.12", "referencePolicy": {"type": "Source"}}}]'

    This patch updates the system-redis image stream to contain the 2.12 tag. To confirm that the tag has been created, run the command below and check that the Tags column shows 2.12:

    $ oc get is system-redis
  2. Patch the system-redis ImageChange trigger:

    1. Remove the old 2.11 trigger:

      $ oc set triggers dc/system-redis --from-image=system-redis:2.11 --containers=system-redis --remove
    2. Add the new version-specific trigger:

      $ oc set triggers dc/system-redis --from-image=system-redis:2.12 --containers=system-redis

      If there are new updates to the image, this patch might also trigger a redeployment of the system-redis DeploymentConfig. If this happens, wait until the new pods are ready and the old pods are terminated.

system-mysql DeploymentConfig

If the system-mysql DeploymentConfig exists in your current 3scale installation, patch the MySQL image for system-mysql.

  1. Patch the system-mysql image stream:

    $ oc patch imagestream/system-mysql --type=json -p '[{"op": "add", "path": "/spec/tags/-", "value": {"annotations": {"openshift.io/display-name": "System 2.12 MySQL"}, "from": { "kind": "DockerImage", "name": "registry.redhat.io/rhel8/mysql-80:1"}, "name": "2.12", "referencePolicy": {"type": "Source"}}}]'

    This patch updates the system-mysql image stream to contain the 2.12 tag. To confirm that the tag has been created, run the command below and check that the Tags column shows 2.12:

    $ oc get is system-mysql
  2. Patch the system-mysql ImageChange trigger:

    1. Remove the old 2.11 trigger:

      $ oc set triggers dc/system-mysql --from-image=system-mysql:2.11 --containers=system-mysql --remove
    2. Add the new version-specific trigger:

      $ oc set triggers dc/system-mysql --from-image=system-mysql:2.12 --containers=system-mysql

      If there are new updates to the image, this patch might also trigger a redeployment of the system-mysql DeploymentConfig. If this happens, wait until the new pods are ready and the old pods are terminated.

system-postgresql DeploymentConfig

If the system-postgresql DeploymentConfig exists in your current 3scale installation, patch the PostgreSQL image for system-postgresql.

  1. Patch the system-postgresql image stream:

    $ oc patch imagestream/system-postgresql --type=json -p '[{"op": "add", "path": "/spec/tags/-", "value": {"annotations": {"openshift.io/display-name": "System 2.12 PostgreSQL"}, "from": { "kind": "DockerImage", "name": "registry.redhat.io/rhscl/postgresql-10-rhel7"}, "name": "2.12", "referencePolicy": {"type": "Source"}}}]'

    This patch updates the system-postgresql image stream to contain the 2.12 tag. To confirm that the tag has been created, run the command below and check that the Tags column shows 2.12:

    $ oc get is system-postgresql
  2. Patch the system-postgresql ImageChange trigger:

    1. Remove the old 2.11 trigger:

      $ oc set triggers dc/system-postgresql --from-image=system-postgresql:2.11 --containers=system-postgresql --remove
    2. Add the new version-specific trigger:

      $ oc set triggers dc/system-postgresql --from-image=system-postgresql:2.12 --containers=system-postgresql

      If there are new updates to the image, this patch might also trigger a redeployment of the system-postgresql DeploymentConfig. If this happens, wait until the new pods are ready and the old pods are terminated.

2.2.4.8. Confirm image URLs

Confirm that the image URL of each DeploymentConfig points to the new image registry, with a hash appended at the end of each URL:

THREESCALE_DC_NAMES="apicast-production apicast-staging backend-cron backend-listener backend-redis backend-worker system-app system-memcache system-mysql system-redis system-sidekiq system-sphinx zync zync-database zync-que"
for component in ${THREESCALE_DC_NAMES}; do echo -n "${component} image: " && oc get dc $component -o json | jq .spec.template.spec.containers[0].image ; done
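The hash at the end of each URL is an image digest (an @sha256:... suffix). A minimal helper (the name is_digest_pinned is made up) to check one reference:

```shell
# is_digest_pinned: succeed only when an image reference is pinned by digest
is_digest_pinned() {
  case "$1" in
    *@sha256:*) return 0 ;;
    *) return 1 ;;
  esac
}

# Illustrative references (the digest below is made up):
is_digest_pinned "registry.redhat.io/3scale-amp2/backend-rhel8@sha256:0000aaaa" && echo "pinned"
is_digest_pinned "registry.redhat.io/3scale-amp2/backend-rhel8:3scale2.12" || echo "tag only"
```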

2.2.5. Removing unused MessageBus variables

Current step

This step removes unused MESSAGE_BUS_REDIS_* variables.

2.2.5.1. Remove MESSAGE_BUS_REDIS_* variables from the system-app DeploymentConfig

  1. Remove MESSAGE_BUS_REDIS_* variables from the system-app containers:

    • Note the trailing dash character (-) after each variable name.

      $ oc set env dc/system-app MESSAGE_BUS_REDIS_URL-
      $ oc set env dc/system-app MESSAGE_BUS_REDIS_NAMESPACE-
      $ oc set env dc/system-app MESSAGE_BUS_REDIS_SENTINEL_HOSTS-
      $ oc set env dc/system-app MESSAGE_BUS_REDIS_SENTINEL_ROLE-
  2. Remove MESSAGE_BUS_REDIS_* variables from the system-app pre hook:

    $ INDEX=$(oc get dc system-app -o json | jq '.spec.strategy.rollingParams.pre.execNewPod.env | map(.name == "MESSAGE_BUS_REDIS_URL") | index(true)')
    oc patch dc/system-app --type=json -p "[{'op': 'remove', 'path': '/spec/strategy/rollingParams/pre/execNewPod/env/$INDEX'}]"
    
    $ INDEX=$(oc get dc system-app -o json | jq '.spec.strategy.rollingParams.pre.execNewPod.env | map(.name == "MESSAGE_BUS_REDIS_NAMESPACE") | index(true)')
    oc patch dc/system-app --type=json -p "[{'op': 'remove', 'path': '/spec/strategy/rollingParams/pre/execNewPod/env/$INDEX'}]"
    
    $ INDEX=$(oc get dc system-app -o json | jq '.spec.strategy.rollingParams.pre.execNewPod.env | map(.name == "MESSAGE_BUS_REDIS_SENTINEL_HOSTS") | index(true)')
    oc patch dc/system-app --type=json -p "[{'op': 'remove', 'path': '/spec/strategy/rollingParams/pre/execNewPod/env/$INDEX'}]"
    
    $ INDEX=$(oc get dc system-app -o json | jq '.spec.strategy.rollingParams.pre.execNewPod.env | map(.name == "MESSAGE_BUS_REDIS_SENTINEL_ROLE") | index(true)')
    oc patch dc/system-app --type=json -p "[{'op': 'remove', 'path': '/spec/strategy/rollingParams/pre/execNewPod/env/$INDEX'}]"
  3. Verify that the MESSAGE_BUS_REDIS_* environment variables no longer exist; the command should return no output:

    $ oc get dc system-app -o yaml | grep MESSAGE_BUS_REDIS

2.2.5.2. Remove MESSAGE_BUS_REDIS_* variables from the system-sidekiq DeploymentConfig

  1. Remove MESSAGE_BUS_REDIS_* variables from the system-sidekiq containers:

    • Note the trailing dash character (-) after each variable name.

      $ oc set env dc/system-sidekiq MESSAGE_BUS_REDIS_URL-
      $ oc set env dc/system-sidekiq MESSAGE_BUS_REDIS_NAMESPACE-
      $ oc set env dc/system-sidekiq MESSAGE_BUS_REDIS_SENTINEL_HOSTS-
      $ oc set env dc/system-sidekiq MESSAGE_BUS_REDIS_SENTINEL_ROLE-
  2. Remove MESSAGE_BUS_REDIS_* variables from the system-sidekiq init-container:

    $ INDEX=$(oc get dc system-sidekiq -o json | jq '.spec.template.spec.initContainers[].env | map(.name == "MESSAGE_BUS_REDIS_URL") | index(true)')
    oc patch dc/system-sidekiq --type=json -p "[{'op': 'remove', 'path': '/spec/template/spec/initContainers/0/env/$INDEX'}]"
    
    $ INDEX=$(oc get dc system-sidekiq -o json | jq '.spec.template.spec.initContainers[].env | map(.name == "MESSAGE_BUS_REDIS_NAMESPACE") | index(true)')
    oc patch dc/system-sidekiq --type=json -p "[{'op': 'remove', 'path': '/spec/template/spec/initContainers/0/env/$INDEX'}]"
    
    $ INDEX=$(oc get dc system-sidekiq -o json | jq '.spec.template.spec.initContainers[].env | map(.name == "MESSAGE_BUS_REDIS_SENTINEL_HOSTS") | index(true)')
    oc patch dc/system-sidekiq --type=json -p "[{'op': 'remove', 'path': '/spec/template/spec/initContainers/0/env/$INDEX'}]"
    
    $ INDEX=$(oc get dc system-sidekiq -o json | jq '.spec.template.spec.initContainers[].env | map(.name == "MESSAGE_BUS_REDIS_SENTINEL_ROLE") | index(true)')
    oc patch dc/system-sidekiq --type=json -p "[{'op': 'remove', 'path': '/spec/template/spec/initContainers/0/env/$INDEX'}]"
  3. Verify that the MESSAGE_BUS_REDIS_* environment variables no longer exist; the command should return no output:

    $ oc get dc system-sidekiq -o yaml | grep MESSAGE_BUS_REDIS

2.2.5.3. Remove MESSAGE_BUS_REDIS_* variables from the system-sphinx DeploymentConfig

  1. Remove MESSAGE_BUS_REDIS_* variables from the system-sphinx containers:

    • Note the trailing dash character (-) after each variable name.

      $ oc set env dc/system-sphinx MESSAGE_BUS_REDIS_URL-
      $ oc set env dc/system-sphinx MESSAGE_BUS_REDIS_NAMESPACE-
      $ oc set env dc/system-sphinx MESSAGE_BUS_REDIS_SENTINEL_HOSTS-
      $ oc set env dc/system-sphinx MESSAGE_BUS_REDIS_SENTINEL_ROLE-
  2. Verify that the MESSAGE_BUS_REDIS_* environment variables no longer exist; the command should return no output:

    $ oc get dc system-sphinx -o yaml | grep MESSAGE_BUS_REDIS

2.2.5.4. Remove MESSAGE_BUS_* variables from the system-redis secret

  1. Remove the MESSAGE_BUS_* keys from the system-redis secret:

    $ oc patch secret/system-redis --type=json -p "[{'op': 'remove', 'path': '/data/MESSAGE_BUS_URL'}]"
    $ oc patch secret/system-redis --type=json -p "[{'op': 'remove', 'path': '/data/MESSAGE_BUS_NAMESPACE'}]"
    $ oc patch secret/system-redis --type=json -p "[{'op': 'remove', 'path': '/data/MESSAGE_BUS_SENTINEL_HOSTS'}]"
    $ oc patch secret/system-redis --type=json -p "[{'op': 'remove', 'path': '/data/MESSAGE_BUS_SENTINEL_ROLE'}]"
  2. Verify that the MESSAGE_BUS_* keys no longer exist; the command should return no output:

    $ oc get secret system-redis -o yaml | grep MESSAGE_BUS

2.2.5.5. Upgrading with external system-database using PostgreSQL 10 and PostgreSQL 13

This upgrade supports an external system-database running PostgreSQL 10. Complete your 3scale upgrade first, and then upgrade to PostgreSQL 13.

Next step

None. After you have performed all the listed steps, the 3scale upgrade from 2.11 to 2.12 in a template-based deployment is complete.

2.3. Upgrading 3scale with an Oracle Database in a template-based installation

This section explains how to update Red Hat 3scale API Management when you are using a 3scale system image with an Oracle Database, in a template-based installation with OpenShift 3.11.

Prerequisites

A 3scale installation with an Oracle Database. See Setting up your 3scale system image with an Oracle Database.

To upgrade your 3scale system image with an Oracle Database in a template-based installation, perform the procedure below:

2.3.1. Upgrading 3scale with Oracle 19c

This procedure guides you through an Oracle Database 19c update for 3scale 2.12 from an existing 3scale 2.11 installation.

IMPORTANT: Loss of connection to the database can corrupt 3scale. Make a backup before you perform the upgrade. For more information, see the Oracle Database documentation: Oracle Database Backup and Recovery User’s Guide.

Prerequisites

  • A 3scale 2.11 installation.
  • An Oracle Database 19c installation.

Procedure

  1. Download 3scale OpenShift templates from the GitHub repository and extract the archive:

    $ tar -xzf 3scale-amp-openshift-templates-3scale-2.12.0-GA.tar.gz
  2. Place your Oracle Database Instant Client Package files into the 3scale-amp-openshift-templates-3scale-2.12.0-GA/amp/system-oracle/oracle-client-files directory.
  3. Run the oc process command with the -f option and specify the build.yml OpenShift template:

    $ oc process -f build.yml | oc apply -f -
  4. Run the oc new-app command with the -f option to indicate the amp.yml OpenShift template, and the -p option to specify the WILDCARD_DOMAIN parameter with the domain of your OpenShift cluster:

    $ oc new-app -f amp.yml -p WILDCARD_DOMAIN=mydomain.com
    Note

    The following steps are optional. Use them if you removed ORACLE_SYSTEM_PASSWORD after the installation or after a system upgrade.

  5. Enter the following oc patch commands, replacing SYSTEM_PASSWORD with the Oracle Database system password you set up in Preparing the Oracle Database:

    $ oc patch dc/system-app -p '[{"op": "add", "path": "/spec/strategy/rollingParams/pre/execNewPod/env/-", "value": {"name": "ORACLE_SYSTEM_PASSWORD", "value": "SYSTEM_PASSWORD"}}]' --type=json
    
    $ oc patch dc/system-app -p '{"spec": {"strategy": {"rollingParams": {"post":{"execNewPod": {"env": [{"name": "ORACLE_SYSTEM_PASSWORD", "value": "SYSTEM_PASSWORD"}]}}}}}}'
  6. Enter the following command, replacing DATABASE_URL to point to your Oracle Database, specified in Preparing the Oracle Database:

    $ oc patch secret/system-database -p '{"stringData": {"URL": "DATABASE_URL"}}'
  7. Enter the oc start-build command to build the new system image:

    $ oc start-build 3scale-amp-system-oracle --from-dir=.
  8. Wait until the build completes. To see the state of the build, run the following command:

    $ oc get build <build-name> -o jsonpath="{.status.phase}"
    Wait until the build is in a Complete state.
  9. Once you have set up your 3scale system image with your Oracle Database, remove ORACLE_SYSTEM_PASSWORD from the system-app DeploymentConfig. It is not needed again until you upgrade to a new version of 3scale.

    $ oc set env dc/system-app ORACLE_SYSTEM_PASSWORD-
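The wait in step 8 can be automated with a small polling loop. This is an optional sketch (the function name wait_for_output is made up); note that a build can also end in a Failed or Cancelled phase, which a production script should detect to avoid looping forever:

```shell
# wait_for_output: rerun a command every 5 seconds until it prints the wanted value
wait_for_output() {
  want="$1"; shift
  until [ "$("$@")" = "$want" ]; do
    sleep 5
  done
}

# Example usage (replace <build-name> as in step 8):
# wait_for_output Complete oc get build <build-name> -o jsonpath='{.status.phase}'
```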

Verification for the open_cursors parameter setting

You must confirm that the open_cursors parameter in this database is set to a value that is greater than 1000.

To do this, log in to your Oracle Database as SYSTEM user and run the following command:

show parameter open_cursors;

The returned value must be greater than 1000. If it is not, change the parameter to a value greater than 1000 by following Oracle’s documentation on open cursors.

If the open_cursors parameter was previously configured to some limit less than 1000, and you do not increase the value, you might see the following error in one of the OpenShift system-app pod logs:

ORA-01000: maximum open cursors exceeded
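The open_cursors check can also be scripted by parsing the show parameter output. The helper below is a hypothetical sketch: it assumes the usual three-column listing (name, type, value) and compares the last field against the 1000 threshold:

```shell
# open_cursors_ok: read "show parameter open_cursors" output on stdin and
# succeed only when the configured value is greater than 1000
open_cursors_ok() {
  value=$(awk '$1 == "open_cursors" {print $NF}')
  [ "${value:-0}" -gt 1000 ]
}

# Illustrative (made-up) listing:
printf 'NAME          TYPE     VALUE\nopen_cursors  integer  2000\n' | open_cursors_ok \
  && echo "open_cursors is sufficient"
```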

Additional resources

For more information about 3scale and Oracle Database support, see Red Hat 3scale API Management Supported Configurations.
