Chapter 5. Downgrading a cluster


After an OpenShift Container Platform upgrade, you might need to downgrade your cluster to an earlier version. You can downgrade from OpenShift Container Platform version 3.11 to version 3.10.

Warning

In the initial release of OpenShift Container Platform version 3.11, downgrading does not completely restore your cluster to version 3.10. Do not downgrade.

If you need to downgrade, contact Red Hat support so they can help you determine the best course of action.

Important

Downgrading a cluster to version 3.10 is supported only for RPM-based installations of OpenShift Container Platform, and you must take your entire cluster offline to downgrade.

5.1. Verifying backups

  1. Ensure that backups of the master-config.yaml file, the master.env file, the scheduler.json file, and the etcd data directory exist on your masters:

    /etc/origin/master/master-config.yaml.<timestamp>
    /etc/origin/master/master.env
    /etc/origin/master/scheduler.json
    /var/lib/etcd/openshift-backup-xxxx

    You saved these files during the upgrade process.

  2. Locate the copies of the following files that you created when you prepared for the upgrade.

    On node and master hosts:

    /etc/origin/node/node-config.yaml

    On etcd hosts, including masters that have etcd co-located on them:

    /etc/etcd/etcd.conf
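
As a quick sanity check, you can confirm on each master that all of the expected backup files are present. This is a minimal sketch that assumes the default paths listed above; the etcd backup directory name includes a timestamp, so a glob is used:

    # for f in /etc/origin/master/master-config.yaml.* \
        /etc/origin/master/master.env \
        /etc/origin/master/scheduler.json \
        /etc/origin/node/node-config.yaml; do \
          ls -l "$f" || echo "MISSING: $f"; \
      done
    # ls -ld /var/lib/etcd/openshift-backup-*

If any file is reported as missing, locate its backup before you continue with the downgrade.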

5.2. Shutting down the cluster

  1. On all master and node hosts, stop the master and node services by removing the pod definitions and rebooting the host (an optional post-reboot check follows this step):

    # mkdir -p /etc/origin/node/pods-stopped
    # mv /etc/origin/node/pods/* /etc/origin/node/pods-stopped/
    # reboot
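
After the host comes back up, you can confirm that the pod definitions were moved aside. This is a minimal check, not part of the official procedure; the first command should print nothing, and the second should list the moved definitions:

    # ls -A /etc/origin/node/pods/
    # ls /etc/origin/node/pods-stopped/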

5.3. Removing RPMs and static pods

  1. On all masters, nodes, and etcd members (if using a dedicated etcd cluster), remove the following packages:

    # yum remove atomic-openshift \
       atomic-openshift-excluder \
       atomic-openshift-hyperkube \
       atomic-openshift-node \
       atomic-openshift-docker-excluder \
       atomic-openshift-clients
  2. Verify that the packages were removed successfully; the command should return no output:

    # rpm -qa | grep atomic-openshift
  3. On control plane hosts (master and etcd hosts), move the static pod definitions:

    # mkdir /etc/origin/node/pods-backup
    # mv /etc/origin/node/pods/* /etc/origin/node/pods-backup/
  4. Reboot each host:

    # reboot
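
After the reboot, you can re-run the package check from step 2. As a minimal convenience sketch, because grep exits non-zero when it finds no matches, the echo below runs only on a clean host:

    # rpm -qa | grep atomic-openshift || echo "OK: no atomic-openshift packages installed"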

5.4. Reinstalling RPMs

  1. Disable the OpenShift Container Platform 3.11 repositories, and re-enable the 3.10 repositories:

    # subscription-manager repos \
        --disable=rhel-7-server-ose-3.11-rpms \
        --enable=rhel-7-server-ose-3.10-rpms
  2. On each master and node host, install the following packages:

    # yum install atomic-openshift \
        atomic-openshift-node \
        atomic-openshift-docker-excluder \
        atomic-openshift-excluder \
        atomic-openshift-clients \
        atomic-openshift-hyperkube
  3. On each host, verify that the packages were installed successfully (a version check follows this step):

    # rpm -qa | grep atomic-openshift
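
To confirm that the 3.10 packages were installed rather than 3.11 leftovers, you can also print the package versions. This is a minimal sketch; the exact version strings depend on the 3.10 errata available in your repositories:

    # rpm -qa --qf '%{NAME}-%{VERSION}\n' | grep atomic-openshift

Each line of output should report a 3.10 version.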

5.5. Bringing OpenShift Container Platform services back online

After you finish your changes, bring OpenShift Container Platform back online.

Procedure

  1. On each OpenShift Container Platform master, restore your master and node configuration from backup and enable and restart all relevant services:

    # cp ${MYBACKUPDIR}/etc/origin/node/pods/* /etc/origin/node/pods/
    # cp ${MYBACKUPDIR}/etc/origin/master/master.env /etc/origin/master/master.env
    # cp ${MYBACKUPDIR}/etc/origin/master/master-config.yaml.<timestamp> /etc/origin/master/master-config.yaml
    # cp ${MYBACKUPDIR}/etc/origin/node/node-config.yaml.<timestamp> /etc/origin/node/node-config.yaml
    # cp ${MYBACKUPDIR}/etc/origin/master/scheduler.json.<timestamp> /etc/origin/master/scheduler.json
    # master-restart api
    # master-restart controllers
  2. On each OpenShift Container Platform node, update the node configuration maps as needed, and enable and restart the atomic-openshift-node service:

    # cp /etc/origin/node/node-config.yaml.<timestamp> /etc/origin/node/node-config.yaml
    # systemctl enable atomic-openshift-node
    # systemctl start atomic-openshift-node
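
After the services restart, you can spot-check that the control plane and the node service are healthy. This is a minimal sketch that assumes the default master API port 8443; adjust the port if your cluster uses a different one:

    # curl -k https://localhost:8443/healthz
    ok
    # systemctl is-active atomic-openshift-node
    active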

5.6. Verifying the downgrade

To verify the downgrade:

  1. Check that all nodes are marked as Ready:

    # oc get nodes
    NAME                        STATUS                     AGE
    master.example.com          Ready,SchedulingDisabled   165d
    node1.example.com           Ready                      165d
    node2.example.com           Ready                      165d
  2. Verify the successful downgrade of the registry and router, if deployed:

    1. Verify that you are running the v3.10 versions of the docker-registry and router images:

      # oc get -n default dc/docker-registry -o json | grep \"image\"
          "image": "openshift3/ose-docker-registry:v3.10",
      # oc get -n default dc/router -o json | grep \"image\"
          "image": "openshift3/ose-haproxy-router:v3.10",
    2. Verify that the docker-registry and router pods are running and in the Ready state:

      # oc get pods -n default
      
      NAME                       READY     STATUS    RESTARTS   AGE
      docker-registry-2-b7xbn    1/1       Running   0          18m
      router-2-mvq6p             1/1       Running   0          6m
  3. Use the diagnostics tool on the master to look for common issues and provide suggestions:

    # oc adm diagnostics
    ...
    [Note] Summary of diagnostics execution:
    [Note] Completed with no errors or warnings seen.
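
You can also confirm the cluster version directly; both the oc client and the server should report a 3.10 version, with the exact version strings varying by errata:

    # oc version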