Chapter 2. Upgrading 3scale version 2.10 to version 2.11 using templates
You can upgrade Red Hat 3scale API Management from version 2.10 to version 2.11 using a template-based deployment on OpenShift 3.11.
To understand the required conditions and procedures, be sure to read the entire upgrade guide before applying the listed steps. The upgrade process disrupts the provision of the service until the procedure finishes. Due to this disruption, be sure to have a maintenance window.
2.1. Prerequisites to perform the upgrade
This section describes the required configurations, tasks, and tools to upgrade 3scale from 2.10 to 2.11 in a template-based installation.
2.1.1. Configurations
- 3scale supports upgrade paths from 2.10 to 2.11 with templates on OpenShift 3.11.
2.1.2. Preliminary tasks
- Ensure your OpenShift CLI tool is configured in the same project where 3scale is deployed.
- Perform a backup of the database you are using with 3scale. The procedure of the backup is specific to each database type and setup.
2.1.3. Tools
You need these tools to perform the upgrade:
- 3scale 2.10 deployed with templates in an OpenShift 3.11 project.
- Bash shell: To run the commands detailed in the upgrade procedure.
- base64: To encode and decode secret information.
- jq: For JSON transformation purposes.
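As a concrete illustration of why base64 is needed: data stored in OpenShift secrets is base64-encoded, so values read with oc get secret -o json must be decoded before use. This is a minimal sketch with a made-up example value, not a real secret:

```shell
# Hypothetical example value; real secret data comes from "oc get secret -o json".
encoded=$(printf 'my-password' | base64)
decoded=$(printf '%s' "$encoded" | base64 --decode)
echo "encoded: $encoded"   # bXktcGFzc3dvcmQ=
echo "decoded: $decoded"   # my-password
```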
2.2. Upgrading from 2.10 to 2.11 in a template-based installation
Follow the procedure described in this section to upgrade 3scale 2.10 to 2.11 in a template-based installation.
To start with the upgrade, go to the project where 3scale is deployed.
$ oc project <3scale-project>
Then, follow these steps in this order:
- Creating a backup of the 3scale project
- Updating 3scale version number
- Updating BACKEND_ROUTE environment variable
- Moving ‘zync’ DeploymentConfig monitoring annotations from DeploymentConfig annotations to PodTemplate annotations
- Increasing backend-cron DeploymentConfig resource requirements
- Upgrading 3scale images
2.2.1. Creating a backup of the 3scale project
This step lists the actions necessary to create a backup of the 3scale project.
Procedure
Depending on the database used with 3scale, set ${SYSTEM_DB} with one of the following values:
- If the database is MySQL: SYSTEM_DB=system-mysql
- If the database is PostgreSQL: SYSTEM_DB=system-postgresql
Create a backup file with the existing DeploymentConfigs:
$ THREESCALE_DC_NAMES="apicast-production apicast-staging backend-cron backend-listener backend-redis backend-worker system-app system-memcache ${SYSTEM_DB} system-redis system-sidekiq system-sphinx zync zync-database zync-que"
$ for component in ${THREESCALE_DC_NAMES}; do oc get --export -o yaml dc ${component} > ${component}_dc.yml ; done

Back up all existing OpenShift resources in the project that are exported through the export all command:

$ oc get -o yaml --export all > threescale-project-elements.yaml

Create a backup file with the additional elements that are not exported with the export all command:

$ for object in rolebindings serviceaccounts secrets imagestreamtags cm rolebindingrestrictions limitranges resourcequotas pvc templates cronjobs statefulsets hpa deployments replicasets poddisruptionbudget endpoints; do oc get -o yaml --export $object > $object.yaml; done

Verify that none of the generated files are empty, and that all of them have the expected content.
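The final verification can be partly automated with a small shell check that flags empty backup files. This is an illustrative sketch run against a scratch directory with hypothetical file names; in practice, run the loop in the directory where the backup YAML files were written:

```shell
# Build a scratch directory with one valid and one deliberately empty backup
# file to demonstrate the check; the file names are hypothetical examples.
workdir=$(mktemp -d)
printf 'kind: DeploymentConfig\n' > "$workdir/system-app_dc.yml"
: > "$workdir/zync_dc.yml"   # empty file, to show it gets flagged

empty_files=""
for f in "$workdir"/*; do
  # -s is true only when the file exists and has a size greater than zero
  [ -s "$f" ] || empty_files="$empty_files $(basename "$f")"
done
echo "Empty backup files:${empty_files:- none}"
rm -r "$workdir"
```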
2.2.2. Updating 3scale version number
This step updates the 3scale release version number from 2.10 to 2.11 in the system-environment ConfigMap. AMP_RELEASE is a ConfigMap entry referenced in some DeploymentConfig container environments.
Procedure
To patch AMP_RELEASE, run this command:
$ oc patch cm system-environment --patch '{"data": {"AMP_RELEASE": "2.11"}}'

Confirm that the AMP_RELEASE key in the system-environment ConfigMap has the 2.11 value:

$ oc get cm system-environment -o json | jq '.data["AMP_RELEASE"]'
2.2.3. Updating BACKEND_ROUTE environment variable
This step updates the BACKEND_ROUTE environment variable from system-app and system-sidekiq pods to use the backend-listener Kubernetes service instead of the OpenShift route.
Procedure
Update the variable in the system-app pre-hook pod by editing the system-app DeploymentConfig:

$ oc edit dc system-app

You will enter an interactive editor session. Find the BACKEND_ROUTE environment variable in the .spec.strategy.rollingParams.pre.execNewPod.env array section.

Replace the following entry:

- name: BACKEND_ROUTE
  valueFrom:
    secretKeyRef:
      key: route_endpoint
      name: backend-listener

With this entry:

- name: BACKEND_ROUTE
  value: http://backend-listener:3000/internal/

Save your changes and exit the interactive editor session.
Update the entry on the system-app containers:

$ oc set env dc/system-app BACKEND_ROUTE="http://backend-listener:3000/internal/"

This command triggers a redeployment of system-app. Wait until it is redeployed, its corresponding new pods are ready, and the previous pods are terminated.

Update it on the system-sidekiq container:

$ oc set env dc/system-sidekiq BACKEND_ROUTE="http://backend-listener:3000/internal/"

This command triggers a redeployment of system-sidekiq. Wait until it is redeployed, its corresponding new pods are ready, and the previous pods are terminated.
2.2.4. Moving ‘zync’ DeploymentConfig monitoring annotations from DeploymentConfig annotations to PodTemplate annotations
This step moves the prometheus.io/port and prometheus.io/scrape annotations from the zync DeploymentConfig annotations to the PodTemplate annotations.
Procedure
Take note of the current values of the prometheus.io/port and prometheus.io/scrape annotations by running:

$ oc get dc zync -o json | jq .metadata.annotations

Add the annotations to the zync DeploymentConfig's PodTemplate annotations. If the prometheus.io/port and prometheus.io/scrape annotation values differ from the ones shown in the command below, replace them with the values currently set in the zync DeploymentConfig, as shown by the previous command:

$ oc patch dc zync --patch '{"spec":{"template":{"metadata":{"annotations":{"prometheus.io/port":"9393","prometheus.io/scrape":"true"}}}}}'

Remove the original annotations from the zync DeploymentConfig annotations:

$ oc annotate dc zync prometheus.io/scrape-
$ oc annotate dc zync prometheus.io/port-

These commands trigger a redeployment of zync. Wait until it is redeployed, its corresponding new pods are ready, and the previous pods are terminated.
2.2.5. Increasing backend-cron DeploymentConfig resource requirements
As of 3scale 2.11, the backend-cron DeploymentConfig might consume more memory than in earlier versions. Use this procedure to increase the maximum memory limits from the currently set values.

The required backend-cron resource requirements in 3scale 2.11 are:
{
"limits": {
"cpu": "500m",
"memory": "500Mi"
},
"requests": {
"cpu": "100m",
"memory": "100Mi"
}
}
If the current backend-cron deployment has no memory limits or the resource requirements are higher, you do not need to complete the following procedure.
Procedure
Check the current resource requirements set for backend-cron with the following command:

$ oc get dc backend-cron -o json | jq .spec.template.spec.containers[0].resources

If the output is empty or null, no resource requirements are set.

To increase the current backend-cron resource requirements, run the following command:

$ oc patch dc backend-cron --patch '{"spec":{"template":{"spec":{"containers":[{"name":"backend-cron","resources":{"limits":{"memory":"500Mi", "cpu": "500m"}, "requests":{"memory":"100Mi", "cpu": "100m"}}}]}}}}'

This command triggers a redeployment of backend-cron. Wait until it is redeployed, its corresponding new pods are ready, and the previous pods are terminated.
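To decide whether the patch is needed, the memory quantities reported by the oc get command above can be compared numerically. A minimal sketch, assuming the values use the Mi and Gi suffixes shown in this section; the to_mib helper and the 250Mi sample value are illustrative, not part of the product:

```shell
# Convert a Kubernetes memory quantity (Mi/Gi suffix) to MiB for comparison.
to_mib() {
  case "$1" in
    *Gi) echo $(( ${1%Gi} * 1024 )) ;;
    *Mi) echo "${1%Mi}" ;;
    *)   echo 0 ;;   # unhandled suffix: treat as "patch needed"
  esac
}

current=$(to_mib "250Mi")    # hypothetical value taken from the oc get output
required=$(to_mib "500Mi")   # the 2.11 memory limit listed above
if [ "$current" -lt "$required" ]; then
  echo "patch needed: ${current}Mi < ${required}Mi"
else
  echo "limits already sufficient"
fi
```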
2.2.6. Upgrading 3scale images
This step updates the 3scale images required for the upgrade process.
2.2.6.1. Patch the system image
Create the new image stream tag:
$ oc patch imagestream/amp-system --type=json -p '[{"op": "add", "path": "/spec/tags/-", "value": {"annotations": {"openshift.io/display-name": "AMP system 2.11"}, "from": { "kind": "DockerImage", "name": "registry.redhat.io/3scale-amp2/system-rhel7:3scale2.11"}, "name": "2.11", "referencePolicy": {"type": "Source"}}}]'

To continue the procedure, consider the database used with your 3scale deployment:
- If the database is Oracle DB, follow the steps listed in Patching the system image: 3scale with Oracle Database
- If the database is different from Oracle DB, follow the steps listed in Patching the system image: 3scale with other databases
2.2.6.1.1. Patching the system image: 3scale with Oracle Database
- To start patching the system image of 3scale with an Oracle Database, perform steps 1, 2, 4, and 8 in Building the system image.
Patch the system-app ImageChange trigger:

Remove the old 2.10-oracle trigger:

$ oc set triggers dc/system-app --from-image=amp-system:2.10-oracle --containers=system-master,system-developer,system-provider --remove

Add the new version-specific trigger:

$ oc set triggers dc/system-app --from-image=amp-system:2.11-oracle --containers=system-master,system-developer,system-provider

This triggers a redeployment of system-app. Wait until it is redeployed, its corresponding new pods are ready, and the old ones are terminated.

Patch the system-sidekiq ImageChange trigger:

Remove the old 2.10-oracle trigger:

$ oc set triggers dc/system-sidekiq --from-image=amp-system:2.10-oracle --containers=system-sidekiq,check-svc --remove

Add the new version-specific trigger:

$ oc set triggers dc/system-sidekiq --from-image=amp-system:2.11-oracle --containers=system-sidekiq,check-svc

This triggers a redeployment of system-sidekiq. Wait until it is redeployed, its corresponding new pods are ready, and the old ones are terminated.

Patch the system-sphinx ImageChange trigger:

Remove the old 2.10-oracle trigger:

$ oc set triggers dc/system-sphinx --from-image=amp-system:2.10-oracle --containers=system-sphinx,system-master-svc --remove

Add the new version-specific trigger:

$ oc set triggers dc/system-sphinx --from-image=amp-system:2.11-oracle --containers=system-sphinx,system-master-svc

This triggers a redeployment of system-sphinx. Wait until it is redeployed, its corresponding new pods are ready, and the old ones are terminated.
- If you scaled 3scale down earlier, scale it back up.
2.2.6.1.2. Patching the system image: 3scale with other databases
Patch the system-app ImageChange trigger:

Remove the old 2.10 trigger:

$ oc set triggers dc/system-app --from-image=amp-system:2.10 --containers=system-master,system-developer,system-provider --remove

Add the new version-specific trigger:

$ oc set triggers dc/system-app --from-image=amp-system:2.11 --containers=system-master,system-developer,system-provider

This triggers a redeployment of system-app. Wait until it is redeployed, its corresponding new pods are ready, and the old ones are terminated.

Patch the system-sidekiq ImageChange trigger:

Remove the old 2.10 trigger:

$ oc set triggers dc/system-sidekiq --from-image=amp-system:2.10 --containers=system-sidekiq,check-svc --remove

Add the new version-specific trigger:

$ oc set triggers dc/system-sidekiq --from-image=amp-system:2.11 --containers=system-sidekiq,check-svc

This triggers a redeployment of system-sidekiq. Wait until it is redeployed, its corresponding new pods are ready, and the old ones are terminated.

Patch the system-sphinx ImageChange trigger:

Remove the old 2.10 trigger:

$ oc set triggers dc/system-sphinx --from-image=amp-system:2.10 --containers=system-sphinx,system-master-svc --remove

Add the new version-specific trigger:

$ oc set triggers dc/system-sphinx --from-image=amp-system:2.11 --containers=system-sphinx,system-master-svc

This triggers a redeployment of system-sphinx. Wait until it is redeployed, its corresponding new pods are ready, and the old ones are terminated.
2.2.6.2. Patch the apicast image
Patch the amp-apicast image stream:

$ oc patch imagestream/amp-apicast --type=json -p '[{"op": "add", "path": "/spec/tags/-", "value": {"annotations": {"openshift.io/display-name": "AMP APIcast 2.11"}, "from": {"kind": "DockerImage", "name": "registry.redhat.io/3scale-amp2/apicast-gateway-rhel8:3scale2.11"}, "name": "2.11", "referencePolicy": {"type": "Source"}}}]'

Patch the apicast-staging ImageChange trigger:

Remove the old 2.10 trigger:

$ oc set triggers dc/apicast-staging --from-image=amp-apicast:2.10 --containers=apicast-staging --remove

Add the new version-specific trigger:

$ oc set triggers dc/apicast-staging --from-image=amp-apicast:2.11 --containers=apicast-staging

This triggers a redeployment of apicast-staging. Wait until it is redeployed, its corresponding new pods are ready, and the old ones are terminated.

Patch the apicast-production ImageChange trigger:

Remove the old 2.10 trigger:

$ oc set triggers dc/apicast-production --from-image=amp-apicast:2.10 --containers=apicast-production,system-master-svc --remove

Add the new version-specific trigger:

$ oc set triggers dc/apicast-production --from-image=amp-apicast:2.11 --containers=apicast-production,system-master-svc

This triggers a redeployment of apicast-production. Wait until it is redeployed, its corresponding new pods are ready, and the old ones are terminated.
2.2.6.3. Patch the backend image
Patch the amp-backend image stream:

$ oc patch imagestream/amp-backend --type=json -p '[{"op": "add", "path": "/spec/tags/-", "value": {"annotations": {"openshift.io/display-name": "AMP Backend 2.11"}, "from": {"kind": "DockerImage", "name": "registry.redhat.io/3scale-amp2/backend-rhel8:3scale2.11"}, "name": "2.11", "referencePolicy": {"type": "Source"}}}]'

Patch the backend-listener ImageChange trigger:

Remove the old 2.10 trigger:

$ oc set triggers dc/backend-listener --from-image=amp-backend:2.10 --containers=backend-listener --remove

Add the new version-specific trigger:

$ oc set triggers dc/backend-listener --from-image=amp-backend:2.11 --containers=backend-listener

This triggers a redeployment of backend-listener. Wait until it is redeployed, its corresponding new pods are ready, and the old ones are terminated.

Patch the backend-worker ImageChange trigger:

Remove the old 2.10 trigger:

$ oc set triggers dc/backend-worker --from-image=amp-backend:2.10 --containers=backend-worker,backend-redis-svc --remove

Add the new version-specific trigger:

$ oc set triggers dc/backend-worker --from-image=amp-backend:2.11 --containers=backend-worker,backend-redis-svc

This triggers a redeployment of backend-worker. Wait until it is redeployed, its corresponding new pods are ready, and the old ones are terminated.

Patch the backend-cron ImageChange trigger:

Remove the old 2.10 trigger:

$ oc set triggers dc/backend-cron --from-image=amp-backend:2.10 --containers=backend-cron,backend-redis-svc --remove

Add the new version-specific trigger:

$ oc set triggers dc/backend-cron --from-image=amp-backend:2.11 --containers=backend-cron,backend-redis-svc

This command triggers a redeployment of backend-cron. Wait until it is redeployed, its corresponding new pods are ready, and the previous pods are terminated.
2.2.6.4. Patch the zync image
Patch the amp-zync image stream:

$ oc patch imagestream/amp-zync --type=json -p '[{"op": "add", "path": "/spec/tags/-", "value": {"annotations": {"openshift.io/display-name": "AMP Zync 2.11"}, "from": { "kind": "DockerImage", "name": "registry.redhat.io/3scale-amp2/zync-rhel8:3scale2.11"}, "name": "2.11", "referencePolicy": {"type": "Source"}}}]'

Patch the zync ImageChange trigger:

Remove the old 2.10 trigger:

$ oc set triggers dc/zync --from-image=amp-zync:2.10 --containers=zync,zync-db-svc --remove

Add the new version-specific trigger:

$ oc set triggers dc/zync --from-image=amp-zync:2.11 --containers=zync,zync-db-svc

This triggers a redeployment of zync. Wait until it is redeployed, its corresponding new pods are ready, and the old ones are terminated.

Patch the zync-que ImageChange trigger:

Remove the old 2.10 trigger:

$ oc set triggers dc/zync-que --from-image=amp-zync:2.10 --containers=que --remove

Add the new version-specific trigger:

$ oc set triggers dc/zync-que --from-image=amp-zync:2.11 --containers=que

This triggers a redeployment of zync-que. Wait until it is redeployed, its corresponding new pods are ready, and the old ones are terminated.
2.2.6.5. Patch the system-memcached image
Patch the system-memcached image stream:

$ oc patch imagestream/system-memcached --type=json -p '[{"op": "add", "path": "/spec/tags/-", "value": {"annotations": {"openshift.io/display-name": "System 2.11 Memcached"}, "from": { "kind": "DockerImage", "name": "registry.redhat.io/3scale-amp2/memcached-rhel7:3scale2.11"}, "name": "2.11", "referencePolicy": {"type": "Source"}}}]'

Patch the system-memcache ImageChange trigger:

Remove the old 2.10 trigger:

$ oc set triggers dc/system-memcache --from-image=system-memcached:2.10 --containers=memcache --remove

Add the new version-specific trigger:

$ oc set triggers dc/system-memcache --from-image=system-memcached:2.11 --containers=memcache

This triggers a redeployment of the system-memcache DeploymentConfig. Wait until it is redeployed, its corresponding new pods are ready, and the old ones are terminated.
2.2.6.6. Patch the zync-database-postgresql image
Patch the zync-database-postgresql image stream:

$ oc patch imagestream/zync-database-postgresql --type=json -p '[{"op": "add", "path": "/spec/tags/-", "value": {"annotations": {"openshift.io/display-name": "Zync 2.11 PostgreSQL"}, "from": { "kind": "DockerImage", "name": "registry.redhat.io/rhscl/postgresql-10-rhel7"}, "name": "2.11", "referencePolicy": {"type": "Source"}}}]'

This patch command updates the zync-database-postgresql image stream to contain the 2.11 tag. You can verify that the 2.11 tag has been created with these steps:

Run this command:

$ oc get is zync-database-postgresql

Check that the Tags column shows the 2.11 tag.

Patch the zync-database ImageChange trigger:

Remove the old 2.10 trigger:

$ oc set triggers dc/zync-database --from-image=zync-database-postgresql:2.10 --containers=postgresql --remove

Add the new version-specific trigger:

$ oc set triggers dc/zync-database --from-image=zync-database-postgresql:2.11 --containers=postgresql

In case there are new updates on the image, this patch might also trigger a redeployment of the zync-database DeploymentConfig. If this happens, wait until the new pods are redeployed and ready, and the old pods are terminated.
2.2.6.7. Additional image changes
If one or more of the following DeploymentConfigs are present in your 3scale 2.10 installation, follow the instructions in the subsections that apply:
backend-redis DeploymentConfig
If the backend-redis DeploymentConfig exists in your current 3scale installation, patch the redis image for backend-redis:
Patch the backend-redis image stream:

$ oc patch imagestream/backend-redis --type=json -p '[{"op": "add", "path": "/spec/tags/-", "value": {"annotations": {"openshift.io/display-name": "Backend 2.11 Redis"}, "from": { "kind": "DockerImage", "name": "registry.redhat.io/rhscl/redis-5-rhel7:5"}, "name": "2.11", "referencePolicy": {"type": "Source"}}}]'

This patch updates the backend-redis image stream to contain the 2.11 tag. With the command below, you can confirm that the tag has been created if the Tags column shows 2.11:

$ oc get is backend-redis

Patch the backend-redis ImageChange trigger:

Remove the old 2.10 trigger:

$ oc set triggers dc/backend-redis --from-image=backend-redis:2.10 --containers=backend-redis --remove

In 3scale 2.11, the Redis image is upgraded from Redis 3 to Redis 5, which uses a different binary path for the Redis server. The backend-redis deployment container command must be updated to use the new path. Note: applying this change temporarily leaves the backend-redis deployment in an error state until you add the new version-specific trigger in the next substep:

$ oc patch dc backend-redis --patch '{"spec":{"template":{"spec":{"containers":[{"name":"backend-redis","command":["/opt/rh/rh-redis5/root/usr/bin/redis-server"]}]}}}}'

Add the new version-specific trigger:

$ oc set triggers dc/backend-redis --from-image=backend-redis:2.11 --containers=backend-redis

In case there are new updates on the image, this patch might also trigger a redeployment of the backend-redis DeploymentConfig. If this happens, wait until the new pods are redeployed and ready, and the old pods are terminated.
system-redis DeploymentConfig
If the system-redis DeploymentConfig exists in your current 3scale installation, patch the redis image for system-redis.
Patch the system-redis image stream:

$ oc patch imagestream/system-redis --type=json -p '[{"op": "add", "path": "/spec/tags/-", "value": {"annotations": {"openshift.io/display-name": "System 2.11 Redis"}, "from": { "kind": "DockerImage", "name": "registry.redhat.io/rhscl/redis-5-rhel7:5"}, "name": "2.11", "referencePolicy": {"type": "Source"}}}]'

This patch updates the system-redis image stream to contain the 2.11 tag. With the command below, you can confirm that the tag has been created if the Tags column shows 2.11:

$ oc get is system-redis

Patch the system-redis ImageChange trigger:

Remove the old 2.10 trigger:

$ oc set triggers dc/system-redis --from-image=system-redis:2.10 --containers=system-redis --remove

In 3scale 2.11, the Redis image is upgraded from Redis 3 to Redis 5, which uses a different binary path for the Redis server. The system-redis deployment container command must be updated to use the new path. Note: applying this change temporarily leaves the system-redis deployment in an error state until you add the new version-specific trigger in the next substep:

$ oc patch dc system-redis --patch '{"spec":{"template":{"spec":{"containers":[{"name":"system-redis","command":["/opt/rh/rh-redis5/root/usr/bin/redis-server"]}]}}}}'

Add the new version-specific trigger:

$ oc set triggers dc/system-redis --from-image=system-redis:2.11 --containers=system-redis

In case there are new updates on the image, this patch might also trigger a redeployment of the system-redis DeploymentConfig. If this happens, wait until the new pods are redeployed and ready, and the old pods are terminated.
system-mysql DeploymentConfig
If the system-mysql DeploymentConfig exists in your current 3scale installation, patch the MySQL image for system-mysql.
Patch the system-mysql image stream:

$ oc patch imagestream/system-mysql --type=json -p '[{"op": "add", "path": "/spec/tags/-", "value": {"annotations": {"openshift.io/display-name": "System 2.11 MySQL"}, "from": { "kind": "DockerImage", "name": "registry.redhat.io/rhscl/mysql-57-rhel7:5.7"}, "name": "2.11", "referencePolicy": {"type": "Source"}}}]'

This patch updates the system-mysql image stream to contain the 2.11 tag. With the command below, you can confirm that the tag has been created if the Tags column shows 2.11:

$ oc get is system-mysql

Patch the system-mysql ImageChange trigger:

Remove the old 2.10 trigger:

$ oc set triggers dc/system-mysql --from-image=system-mysql:2.10 --containers=system-mysql --remove

Add the new version-specific trigger:

$ oc set triggers dc/system-mysql --from-image=system-mysql:2.11 --containers=system-mysql

In case there are new updates on the image, this patch might also trigger a redeployment of the system-mysql DeploymentConfig. If this happens, wait until the new pods are redeployed and ready, and the old pods are terminated.
system-postgresql DeploymentConfig
If the system-postgresql DeploymentConfig exists in your current 3scale installation, patch the PostgreSQL image for system-postgresql.
Patch the system-postgresql image stream:

$ oc patch imagestream/system-postgresql --type=json -p '[{"op": "add", "path": "/spec/tags/-", "value": {"annotations": {"openshift.io/display-name": "System 2.11 PostgreSQL"}, "from": { "kind": "DockerImage", "name": "registry.redhat.io/rhscl/postgresql-10-rhel7"}, "name": "2.11", "referencePolicy": {"type": "Source"}}}]'

This patch updates the system-postgresql image stream to contain the 2.11 tag. With the command below, you can confirm that the tag has been created if the Tags column shows 2.11:

$ oc get is system-postgresql

Patch the system-postgresql ImageChange trigger:

Remove the old 2.10 trigger:

$ oc set triggers dc/system-postgresql --from-image=system-postgresql:2.10 --containers=system-postgresql --remove

Add the new version-specific trigger:

$ oc set triggers dc/system-postgresql --from-image=system-postgresql:2.11 --containers=system-postgresql

In case there are new updates on the image, this patch might also trigger a redeployment of the system-postgresql DeploymentConfig. If this happens, wait until the new pods are redeployed and ready, and the old pods are terminated.
2.2.6.8. Confirm image URLs
Confirm that all the image URLs of the DeploymentConfigs contain the new image registry URLs with a hash added at the end of each URL address:
$ THREESCALE_DC_NAMES="apicast-production apicast-staging backend-cron backend-listener backend-redis backend-worker system-app system-memcache system-mysql system-redis system-sidekiq system-sphinx zync zync-database zync-que"
$ for component in ${THREESCALE_DC_NAMES}; do echo -n "${component} image: " && oc get dc $component -o json | jq .spec.template.spec.containers[0].image ; done
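The "contains a hash" condition can also be checked mechanically: an image URL resolved from an image stream pins a sha256 digest rather than a plain tag. A minimal sketch; the has_digest helper and the sample URLs are illustrative, not part of the product:

```shell
# Report whether an image URL pins a sha256 digest (upgraded) or a plain tag.
has_digest() {
  case "$1" in
    *@sha256:*) echo yes ;;
    *)          echo no ;;
  esac
}

has_digest "registry.redhat.io/3scale-amp2/backend-rhel8@sha256:0123abcd"   # yes
has_digest "registry.redhat.io/3scale-amp2/backend-rhel8:3scale2.10"        # no
```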
After you complete all the listed steps, the 3scale upgrade from 2.10 to 2.11 in a template-based deployment is complete.
2.3. Upgrading 3scale with an Oracle Database in a template-based installation
This section explains how to update Red Hat 3scale API Management when you are using a 3scale system image with an Oracle Database, in a template-based installation with OpenShift 3.11.
Prerequisites
A 3scale installation with the Oracle Database. See Setting up your 3scale system image with an Oracle Database.
To upgrade your 3scale system image with an Oracle Database in a template-based installation, perform the procedure below:
2.3.1. Upgrading 3scale with Oracle 19c
This procedure guides you through an Oracle Database 19c update for 3scale 2.11 from an existing 3scale 2.10 installation.
IMPORTANT: Loss of connection to the database can potentially corrupt 3scale. Make a backup before performing the upgrade. For more information, see the Oracle Database documentation: Oracle Database Backup and Recovery User’s Guide.
Prerequisites
- A 3scale 2.10 installation.
- An Oracle Database 19c installation.
- For more information about configuring 3scale with Oracle, see Preparing the Oracle Database.
Procedure
Download 3scale OpenShift templates from the GitHub repository and extract the archive:

$ tar -xzf 3scale-amp-openshift-templates-3scale-2.11.1-GA.tar.gz

Place your Oracle Database Instant Client Package files into the 3scale-amp-openshift-templates-3scale-2.11.1-GA/amp/system-oracle/oracle-client-files directory.

Run the oc process command with the -f option and specify the build.yml OpenShift template:

$ oc process -f build.yml | oc apply -f -

Run the oc new-app command with the -f option to indicate the amp.yml OpenShift template, and the -p option to specify the WILDCARD_DOMAIN parameter with the domain of your OpenShift cluster:

$ oc new-app -f amp.yml -p WILDCARD_DOMAIN=mydomain.com

Note: The following steps are optional. Use them if you removed ORACLE_SYSTEM_PASSWORD after the installation or a system upgrade.

Enter the following oc patch commands, replacing SYSTEM_PASSWORD with the Oracle Database system password you set up in Preparing the Oracle Database:

$ oc patch dc/system-app -p '[{"op": "add", "path": "/spec/strategy/rollingParams/pre/execNewPod/env/-", "value": {"name": "ORACLE_SYSTEM_PASSWORD", "value": "SYSTEM_PASSWORD"}}]' --type=json
$ oc patch dc/system-app -p '{"spec": {"strategy": {"rollingParams": {"post":{"execNewPod": {"env": [{"name": "ORACLE_SYSTEM_PASSWORD", "value": "SYSTEM_PASSWORD"}]}}}}}}'

Enter the following command, replacing DATABASE_URL to point to your Oracle Database, as specified in Preparing the Oracle Database:

$ oc patch secret/system-database -p '{"stringData": {"URL": "DATABASE_URL"}}'

Enter the oc start-build command to build the new system image:

$ oc start-build 3scale-amp-system-oracle --from-dir=.

Wait until the build completes. To see the state of the build, run the following command, and wait until the build is in a Complete state:

$ oc get build <build-name> -o jsonpath="{.status.phase}"

Once you have set up your 3scale system image with your Oracle Database, remove ORACLE_SYSTEM_PASSWORD from the system-app DeploymentConfig. It is not needed again until you upgrade to a new version of 3scale:

$ oc set env dc/system-app ORACLE_SYSTEM_PASSWORD-
Additional resources
For more information about 3scale and Oracle Database support, see Red Hat 3scale API Management Supported Configurations.