Chapter 1. Upgrade 3scale 2.0 to 2.1


Perform the steps in this document to upgrade your on-premises AMP deployment from version 2.0 to 2.1.

1.1. Prerequisites

  • You must be running 3scale On-Premises 2.0
  • OpenShift CLI
  • 3scale AMP 2.1 templates
  • Access and permissions to your OpenShift server and project
Warning

This process may cause a disruption in service. Red Hat recommends that you establish a maintenance window when performing your upgrade.

1.2. Select the Project

  1. Make backups of your current 2.0 deployment
  2. From a terminal session, log in to your OpenShift cluster:

    oc login https://<YOUR_OPENSHIFT_CLUSTER>:8443
  3. Select the project you want to upgrade:

    oc project <YOUR_AMP_20_PROJECT>
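
Before you make any changes, you can optionally confirm that the correct project is active; this is a suggested check rather than part of the original procedure:

oc project

Running the command with no arguments prints the project currently in use, which should be your AMP 2.0 project.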

1.3. Patch System Components

Once you have selected your project, continue your in-place upgrade by using the oc patch command to update your deployment configurations.
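
If you want to confirm that the deployment configurations exist before you patch them, you can optionally list them first; this is a suggested check, not a required step:

oc get dc

The output should include the four system deployment configurations listed below.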

In this section of the upgrade, you must patch deployment configurations for the following pods:

  • system-app
  • system-resque
  • system-sidekiq
  • system-sphinx

Follow these steps to patch deployment configurations:

  1. Patch the system-app deployment configuration

    1. Enter the following oc patch command:

      oc patch dc/system-app -p '
      spec:
        strategy:
          rollingParams:
            pre:
              execNewPod:
                containerName: system-provider
                env:
                - name: SSL_CERT_DIR
                  value: "/etc/pki/tls/certs"
                - name: ZYNC_AUTHENTICATION_TOKEN
                  valueFrom:
                    secretKeyRef:
                      name: zync
                      key: ZYNC_AUTHENTICATION_TOKEN
        template:
          spec:
            containers:
            - name: system-provider
              env:
              - name: SSL_CERT_DIR
                value: "/etc/pki/tls/certs"
              - name: AMP_RELEASE
                value: "2.1.0-CR2-redhat-1"
              - name: ZYNC_AUTHENTICATION_TOKEN
                valueFrom:
                  secretKeyRef:
                    name: zync
                    key: ZYNC_AUTHENTICATION_TOKEN
              volumeMounts:
                - name: system-config
                  mountPath: /opt/system/config/zync.yml
                  subPath: zync.yml
                - name: system-config
                  mountPath: /opt/system/config/rolling_updates.yml
                  subPath: rolling_updates.yml
            - name: system-developer
              env:
              - name: SSL_CERT_DIR
                value: "/etc/pki/tls/certs"
              - name: AMP_RELEASE
                value: "2.1.0-CR2-redhat-1"
              - name: ZYNC_AUTHENTICATION_TOKEN
                valueFrom:
                  secretKeyRef:
                    name: zync
                    key: ZYNC_AUTHENTICATION_TOKEN
              volumeMounts:
                - name: system-config
                  mountPath: /opt/system/config/zync.yml
                  subPath: zync.yml
                - name: system-config
                  mountPath: /opt/system/config/rolling_updates.yml
                  subPath: rolling_updates.yml
            volumes:
              - name: system-config
                configMap:
                  name: system
                  items:
                  - key: zync.yml
                    path: zync.yml
                  - key: rolling_updates.yml
                    path: rolling_updates.yml
      '
  2. Patch the system-resque deployment configuration

    1. Enter the following oc patch command:

      oc patch dc/system-resque -p '
      spec:
        template:
          spec:
            containers:
            - name: system-resque
              env:
              - name: SSL_CERT_DIR
                value: "/etc/pki/tls/certs"
              - name: AMP_RELEASE
                value: "2.1.0-CR2-redhat-1"
              - name: ZYNC_AUTHENTICATION_TOKEN
                valueFrom:
                  secretKeyRef:
                    name: zync
                    key: ZYNC_AUTHENTICATION_TOKEN
              volumeMounts:
                - name: system-config
                  mountPath: /opt/system/config/zync.yml
                  subPath: zync.yml
                - name: system-config
                  mountPath: /opt/system/config/rolling_updates.yml
                  subPath: rolling_updates.yml
            - name: system-scheduler
              env:
              - name: SSL_CERT_DIR
                value: "/etc/pki/tls/certs"
              - name: AMP_RELEASE
                value: "2.1.0-CR2-redhat-1"
              - name: ZYNC_AUTHENTICATION_TOKEN
                valueFrom:
                  secretKeyRef:
                    name: zync
                    key: ZYNC_AUTHENTICATION_TOKEN
              volumeMounts:
                - name: system-config
                  mountPath: /opt/system/config/zync.yml
                  subPath: zync.yml
                - name: system-config
                  mountPath: /opt/system/config/rolling_updates.yml
                  subPath: rolling_updates.yml
            volumes:
              - name: system-config
                configMap:
                  name: system
                  items:
                  - key: zync.yml
                    path: zync.yml
                  - key: rolling_updates.yml
                    path: rolling_updates.yml
      '
  3. Patch the system-sidekiq deployment configuration

    1. Enter the following oc patch command:

      oc patch dc/system-sidekiq -p '
      spec:
        template:
          spec:
            containers:
            - name: system-sidekiq
              env:
              - name: SSL_CERT_DIR
                value: "/etc/pki/tls/certs"
              - name: AMP_RELEASE
                value: "2.1.0-CR2-redhat-1"
              - name: ZYNC_AUTHENTICATION_TOKEN
                valueFrom:
                  secretKeyRef:
                    name: zync
                    key: ZYNC_AUTHENTICATION_TOKEN
              volumeMounts:
                - name: system-config
                  mountPath: /opt/system/config/zync.yml
                  subPath: zync.yml
                - name: system-config
                  mountPath: /opt/system/config/rolling_updates.yml
                  subPath: rolling_updates.yml
            volumes:
              - name: system-config
                configMap:
                  name: system
                  items:
                  - key: zync.yml
                    path: zync.yml
                  - key: rolling_updates.yml
                    path: rolling_updates.yml
      '
  4. Patch the system-sphinx deployment configuration

    1. Enter the following oc patch command:

      oc patch dc/system-sphinx -p '
      spec:
        template:
          spec:
            containers:
            - name: system-sphinx
              env:
              - name: SSL_CERT_DIR
                value: "/etc/pki/tls/certs"
              - name: AMP_RELEASE
                value: "2.1.0-CR2-redhat-1"
              - name: ZYNC_AUTHENTICATION_TOKEN
                valueFrom:
                  secretKeyRef:
                    name: zync
                    key: ZYNC_AUTHENTICATION_TOKEN
              volumeMounts:
                - name: system-config
                  mountPath: /opt/system/config/zync.yml
                  subPath: zync.yml
                - name: system-config
                  mountPath: /opt/system/config/rolling_updates.yml
                  subPath: rolling_updates.yml
            volumes:
              - name: system-config
                configMap:
                  name: system
                  items:
                  - key: zync.yml
                    path: zync.yml
                  - key: rolling_updates.yml
                    path: rolling_updates.yml
      '
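
Because each patch changes the pod template of its deployment configuration, a new rollout typically starts for each one if a configuration change trigger is enabled. You can optionally wait for these rollouts to finish before continuing; the following commands are a suggested check, not part of the original procedure:

oc rollout status dc/system-app
oc rollout status dc/system-resque
oc rollout status dc/system-sidekiq
oc rollout status dc/system-sphinx

Each command blocks until the corresponding rollout completes or fails.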

1.4. Set ImageChange Triggers

Once you have selected your project and patched the system components, continue your in-place upgrade by using the oc set triggers command to update the image change triggers on your deployment configurations.

Follow these steps to set up image change triggers:

  1. Enter the following oc set triggers commands for Backend:

    oc set triggers dc/backend-cron --containers='backend-cron' --from-image=amp-backend:latest
    oc set triggers dc/backend-listener --containers='backend-listener' --from-image=amp-backend:latest
    oc set triggers dc/backend-worker --containers='backend-worker' --from-image=amp-backend:latest
  2. Enter the following oc set triggers commands for System:

    oc set triggers dc/system-sphinx --containers='system-sphinx' --from-image=amp-system:latest
    oc set triggers dc/system-app --containers='system-developer,system-provider' --from-image=amp-system:latest
    oc set triggers dc/system-sidekiq --containers='system-sidekiq' --from-image=amp-system:latest
    oc set triggers dc/system-resque --containers='system-scheduler,system-resque' --from-image=amp-system:latest
  3. Enter the following oc set triggers commands for APIcast:

    oc set triggers dc/apicast-staging --containers='apicast-staging' --from-image=amp-apicast:latest
    oc set triggers dc/apicast-production --containers='apicast-production' --from-image=amp-apicast:latest
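
You can optionally confirm that the triggers were registered by running oc set triggers against a deployment configuration with no additional flags, which prints its current triggers; for example:

oc set triggers dc/system-app

This is a suggested check and is not required for the upgrade to continue.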

1.5. Deploy the 2.1 Template

Once you have patched the system components and set the ImageChange triggers, you must deploy the 2.1 AMP template over your 2.0 deployment.

Using the existing wildcard domain of your current deployment, deploy the 2.1 template on top of your 2.0 project:

oc new-app -f amp.yml --param WILDCARD_DOMAIN=<YOUR_DOMAIN>
Note

If you do not know the wildcard domain of your current deployment, you can find it with the following command:

oc get dc/system-app -o jsonpath='{.spec.template.spec.containers[?(@.name == "system-provider")].env[?(@.name == "THREESCALE_SUPERDOMAIN")].value}'

The 2.1 template will deploy on top of your 2.0 deployment. This deployment will result in a set of errors; these are expected and were resolved in the Patch System Components section.
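
If you want to monitor the redeployment as it progresses, you can optionally watch the pods in the project; this is a suggested check rather than a required step:

oc get pods -w

Pods restart as the updated images and configuration roll out; wait until they report a Running status before you verify the upgrade.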

1.6. Verify Upgrade

Once you have performed the upgrade procedure, verify the success of your upgrade operation by checking the version number in the lower-right corner of your 3scale Admin Portal.

Note

It may take some time for your redeployment operations to complete in OpenShift.
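
If you prefer to verify the release from the command line, one option is to check the AMP_RELEASE environment variable that the earlier patch commands set on the system-app deployment configuration; this assumes those patches completed successfully:

oc set env dc/system-app --list | grep AMP_RELEASE

The value should match the 2.1 release string applied in the Patch System Components section.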
