Chapter 1. Upgrade 3scale API Management 2.2 to 2.3
3scale API Management 2.3 updates only the APIcast component of the product.
Perform the steps in this document to upgrade APIcast to version 2.3.
1.1. Prerequisites
- You must be running 3scale On-Premises 2.2.
- You must have the OpenShift CLI (oc) installed.
Red Hat recommends that you establish a maintenance window when performing the upgrade because this process may cause a disruption in service.
1.2. Select the Project
From a terminal session, log in to your OpenShift cluster using the following command. Here, <YOUR_OPENSHIFT_CLUSTER> is the address of your OpenShift cluster.
oc login https://<YOUR_OPENSHIFT_CLUSTER>:8443
Select the project you want to upgrade using the following command. Here, <3scale-22-project> is the name of your project.
oc project <3scale-22-project>
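For example, with a hypothetical cluster address and project name (substitute your own values), the two commands might look like this:
oc login https://openshift.example.com:8443
oc project my-3scale-22-project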
1.3. Gather the Needed Values
Gather the following values from the APIcast component of your current 2.2 deployment:
- APICAST_MANAGEMENT_API
- OPENSSL_VERIFY
- APICAST_RESPONSE_CODES
Export these values from the current deployment into the active shell.
export `oc env dc/apicast-production --list | grep -E '^(APICAST_MANAGEMENT_API|OPENSSL_VERIFY|APICAST_RESPONSE_CODES)=' | tr "\n" ' ' `
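For reference, the filtered listing that feeds this export typically looks similar to the following; the values shown here are placeholders and may differ in your deployment:
APICAST_MANAGEMENT_API=status
OPENSSL_VERIFY=false
APICAST_RESPONSE_CODES=true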
Optionally, to query individual values from the OpenShift CLI, run the following oc get command, where <variable_name> is the name of the variable you want to query.
oc get "-o=custom-columns=NAMES:.spec.template.spec.containers[0].env[?(.name==\"<variable_name>\")].value" dc/apicast-production
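For example, to query the current APICAST_MANAGEMENT_API value, substitute the variable name into the command:
oc get "-o=custom-columns=NAMES:.spec.template.spec.containers[0].env[?(.name==\"APICAST_MANAGEMENT_API\")].value" dc/apicast-production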
Set the value for the new version of the 3scale API Management release.
export AMP_RELEASE=2.3.0
Set the values for the new environment variables.
export AMP_APICAST_IMAGE=registry.access.redhat.com/3scale-amp23/apicast-gateway
Set the APICAST_ACCESS_TOKEN environment variable to a valid Access Token for the Account Management API. You can extract it from the THREESCALE_PORTAL_ENDPOINT environment variable.
oc env dc/apicast-production --list | grep THREESCALE_PORTAL_ENDPOINT
This returns output similar to the following:
THREESCALE_PORTAL_ENDPOINT=http://<ACCESS_TOKEN>@system-master:3000/master/api/proxy/configs
Export the <ACCESS_TOKEN> value to the APICAST_ACCESS_TOKEN environment variable.
export APICAST_ACCESS_TOKEN=<ACCESS_TOKEN>
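If you prefer not to copy the token manually, the following one-liner is one possible way to extract it, assuming the endpoint has the http://<ACCESS_TOKEN>@system-master:3000/... form shown above:
export APICAST_ACCESS_TOKEN=$(oc env dc/apicast-production --list | grep THREESCALE_PORTAL_ENDPOINT | sed 's#.*http://\([^@]*\)@.*#\1#')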
Confirm that the necessary values are exported to the active shell.
echo AMP_RELEASE=$AMP_RELEASE
echo AMP_APICAST_IMAGE=$AMP_APICAST_IMAGE
echo APICAST_ACCESS_TOKEN=$APICAST_ACCESS_TOKEN
echo APICAST_MANAGEMENT_API=$APICAST_MANAGEMENT_API
echo OPENSSL_VERIFY=$OPENSSL_VERIFY
echo APICAST_RESPONSE_CODES=$APICAST_RESPONSE_CODES
1.4. Patch APIcast
To patch the apicast-staging deployment configuration, run the following oc patch command.
oc patch dc/apicast-staging -p "
metadata:
  name: apicast-staging
  labels:
    app: APIcast
    3scale.component: apicast
    3scale.component-element: staging
spec:
  replicas: 1
  selector:
    deploymentConfig: apicast-staging
  strategy:
    rollingParams:
      intervalSeconds: 1
      maxSurge: 25%
      maxUnavailable: 25%
      timeoutSeconds: 1800
      updatePeriodSeconds: 1
    type: Rolling
  template:
    metadata:
      labels:
        deploymentConfig: apicast-staging
        app: APIcast
        3scale.component: apicast
        3scale.component-element: staging
      annotations:
        prometheus.io/scrape: 'true'
        prometheus.io/port: '9421'
    spec:
      containers:
      - env:
        - name: THREESCALE_PORTAL_ENDPOINT
          value: \"http://${APICAST_ACCESS_TOKEN}@system-master:3000/master/api/proxy/configs\"
        - name: APICAST_CONFIGURATION_LOADER
          value: \"lazy\"
        - name: APICAST_CONFIGURATION_CACHE
          value: \"0\"
        - name: THREESCALE_DEPLOYMENT_ENV
          value: \"sandbox\"
        - name: APICAST_MANAGEMENT_API
          value: \"${APICAST_MANAGEMENT_API}\"
        - name: BACKEND_ENDPOINT_OVERRIDE
          value: http://backend-listener:3000
        - name: OPENSSL_VERIFY
          value: '${OPENSSL_VERIFY}'
        - name: APICAST_RESPONSE_CODES
          value: '${APICAST_RESPONSE_CODES}'
        - name: REDIS_URL
          value: \"redis://system-redis:6379/2\"
        image: amp-apicast:latest
        imagePullPolicy: IfNotPresent
        name: apicast-staging
        resources:
          limits:
            cpu: 100m
            memory: 128Mi
          requests:
            cpu: 50m
            memory: 64Mi
        livenessProbe:
          httpGet:
            path: /status/live
            port: 8090
          initialDelaySeconds: 10
          timeoutSeconds: 5
          periodSeconds: 10
        readinessProbe:
          httpGet:
            path: /status/ready
            port: 8090
          initialDelaySeconds: 15
          timeoutSeconds: 5
          periodSeconds: 30
        ports:
        - containerPort: 8080
          protocol: TCP
        - containerPort: 8090
          protocol: TCP
        - name: metrics
          containerPort: 9421
          protocol: TCP
  triggers:
  - type: ConfigChange
  - type: ImageChange
    imageChangeParams:
      automatic: true
      containerNames:
      - apicast-staging
      from:
        kind: ImageStreamTag
        name: amp-apicast:latest
"
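The patch triggers a redeployment of apicast-staging. You can optionally wait for it to finish before continuing, for example:
oc rollout status dc/apicast-staging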
To patch the apicast-production deployment configuration, run the following oc patch command.
oc patch dc/apicast-production -p "
metadata:
  name: apicast-production
  labels:
    app: APIcast
    3scale.component: apicast
    3scale.component-element: production
spec:
  replicas: 1
  selector:
    deploymentConfig: apicast-production
  strategy:
    rollingParams:
      intervalSeconds: 1
      maxSurge: 25%
      maxUnavailable: 25%
      timeoutSeconds: 1800
      updatePeriodSeconds: 1
    type: Rolling
  template:
    metadata:
      labels:
        deploymentConfig: apicast-production
        app: APIcast
        3scale.component: apicast
        3scale.component-element: production
      annotations:
        prometheus.io/scrape: 'true'
        prometheus.io/port: '9421'
    spec:
      initContainers:
      - name: system-master-svc
        image: amp-apicast:latest
        command: ['sh', '-c', 'until \$(curl --output /dev/null --silent --fail --head http://system-master:3000/status); do sleep \$SLEEP_SECONDS; done']
        activeDeadlineSeconds: 1200
        env:
        - name: SLEEP_SECONDS
          value: \"1\"
      containers:
      - env:
        - name: THREESCALE_PORTAL_ENDPOINT
          value: \"http://${APICAST_ACCESS_TOKEN}@system-master:3000/master/api/proxy/configs\"
        - name: APICAST_CONFIGURATION_LOADER
          value: \"boot\"
        - name: APICAST_CONFIGURATION_CACHE
          value: \"300\"
        - name: THREESCALE_DEPLOYMENT_ENV
          value: \"production\"
        - name: APICAST_MANAGEMENT_API
          value: \"${APICAST_MANAGEMENT_API}\"
        - name: BACKEND_ENDPOINT_OVERRIDE
          value: http://backend-listener:3000
        - name: OPENSSL_VERIFY
          value: '${OPENSSL_VERIFY}'
        - name: APICAST_RESPONSE_CODES
          value: '${APICAST_RESPONSE_CODES}'
        - name: REDIS_URL
          value: \"redis://system-redis:6379/1\"
        image: amp-apicast:latest
        imagePullPolicy: IfNotPresent
        name: apicast-production
        resources:
          limits:
            cpu: 1000m
            memory: 128Mi
          requests:
            cpu: 500m
            memory: 64Mi
        livenessProbe:
          httpGet:
            path: /status/live
            port: 8090
          initialDelaySeconds: 10
          timeoutSeconds: 5
          periodSeconds: 10
        readinessProbe:
          httpGet:
            path: /status/ready
            port: 8090
          initialDelaySeconds: 15
          timeoutSeconds: 5
          periodSeconds: 30
        ports:
        - containerPort: 8080
          protocol: TCP
        - containerPort: 8090
          protocol: TCP
        - name: metrics
          containerPort: 9421
          protocol: TCP
  triggers:
  - type: ConfigChange
  - type: ImageChange
    imageChangeParams:
      automatic: true
      containerNames:
      - system-master-svc
      - apicast-production
      from:
        kind: ImageStreamTag
        name: amp-apicast:latest
"
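As with staging, you can optionally wait for the production redeployment to complete:
oc rollout status dc/apicast-production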
To patch the amp-apicast image stream, run the following oc patch command.
oc patch is/amp-apicast -p "
metadata:
  name: amp-apicast
  labels:
    app: APIcast
    3scale.component: apicast
  annotations:
    openshift.io/display-name: AMP APIcast
spec:
  tags:
  - name: latest
    annotations:
      openshift.io/display-name: AMP APIcast (latest)
    from:
      kind: ImageStreamTag
      name: "${AMP_RELEASE}"
  - name: "${AMP_RELEASE}"
    annotations:
      openshift.io/display-name: AMP APIcast ${AMP_RELEASE}
    from:
      kind: DockerImage
      name: ${AMP_APICAST_IMAGE}
    importPolicy:
      insecure: false
"
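To check that the image stream now references the 2.3 tags, you can inspect it, for example:
oc describe is/amp-apicast
The output should list the latest and ${AMP_RELEASE} tags pointing at the new image.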
- Set importPolicy.insecure to true if the server is allowed to bypass certificate verification or connect directly over HTTP during image import.
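For illustration only, assuming you do need the insecure import, the importPolicy block inside the corresponding tag entry of the patch above would read:
importPolicy:
  insecure: true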
1.5. Verify Upgrade
After you have performed the upgrade procedure, verify the success of the upgrade by making test API calls to the updated APIcast.
It may take some time for the redeployment operations to complete in OpenShift.
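For example, a hypothetical smoke test against the production gateway, where <APICAST_PRODUCTION_ROUTE> and <USER_KEY> are placeholders for your own route and credentials and the service uses the default user_key authentication mode:
curl -i "https://<APICAST_PRODUCTION_ROUTE>/?user_key=<USER_KEY>"
A successful response from your backend indicates that the upgraded APIcast is routing traffic.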
1.6. Upgrade APIcast in OpenShift
If you deployed APIcast outside of the complete 3scale API Management on-premises installation using the apicast.yml OpenShift template, take the following steps to upgrade your deployment. The steps assume that the name of the deployment configuration is apicast, which is the default value in the apicast.yml template of 3scale version 2.2. If you used a different name, adjust the commands accordingly.
Update the container image
oc patch dc/apicast --patch='{"spec":{"template":{"spec":{"containers":[{"name": "apicast", "image":"registry.access.redhat.com/3scale-amp23/apicast-gateway"}]}}}}'
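You can optionally confirm that the deployment configuration now references the new image:
oc get dc/apicast -o jsonpath='{.spec.template.spec.containers[0].image}'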
Add the port definition for port 9421, used for Prometheus metrics.
oc patch dc/apicast --patch='{"spec": {"template": {"spec": {"containers": [{"name": "apicast","ports": [{"name": "metrics", "containerPort": 9421, "protocol": "TCP"}]}]}}}}'
Add Prometheus annotations
oc patch dc/apicast --patch='{"spec": {"template": {"metadata": {"annotations": {"prometheus.io/scrape": "true", "prometheus.io/port": "9421"}}}}}'
Remove the APICAST_WORKERS environment variable.
oc env dc/apicast APICAST_WORKERS-
APICAST_WORKERS allows specifying the value for the worker_processes directive. By default, APIcast uses the value auto, which triggers auto-detection of the best number of workers when running in OpenShift or Kubernetes environments. It is therefore recommended not to set the APICAST_WORKERS value explicitly and to let APIcast perform auto-detection.
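To confirm that the variable is gone after the change, you can list the environment again; the following command should produce no output:
oc env dc/apicast --list | grep APICAST_WORKERS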
The APICAST_WORKERS parameter is no longer present in the apicast.yml OpenShift template. If you use scripts that deploy the template with the APICAST_WORKERS parameter, remove the parameter from those scripts; otherwise the deployment will fail with the following error:
error: unexpected parameter name "APICAST_WORKERS"