Chapter 6. Downgrading OpenShift

6.1. Overview

Following an OpenShift Container Platform upgrade, it may be desirable in extreme cases to downgrade your cluster to a previous version. The following sections outline the steps required on each system in a cluster to perform such a downgrade, covering the OpenShift Container Platform 3.4 to 3.3 downgrade path.

Warning

These steps are currently only supported for RPM-based installations of OpenShift Container Platform and assume downtime of the entire cluster.

6.2. Verifying Backups

The Ansible playbook used during the upgrade process should have created a backup of the master-config.yaml file and the etcd data directory. Ensure these exist on your masters and etcd members:

/etc/origin/master/master-config.yaml.<timestamp>
/var/lib/origin/etcd-backup-<timestamp>
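
To confirm quickly that these backups exist on each master and etcd member, you can list them, for example:

# ls /etc/origin/master/master-config.yaml.*
# ls -d /var/lib/origin/etcd-backup-*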

Also, back up the node-config.yaml file on each node (including masters, which have the node component on them) with a timestamp:

/etc/origin/node/node-config.yaml.<timestamp>
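
If this backup does not already exist, one way to create it is to copy the file with a timestamp suffix (the timestamp format below is only illustrative):

# cp /etc/origin/node/node-config.yaml \
    /etc/origin/node/node-config.yaml.$(date +%Y%m%d%H%M%S)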

If you use a separate etcd cluster instead of a single embedded etcd instance, the backup is likely created on all etcd members, though only one is required for the recovery process. You can run a separate etcd instance that is co-located with your master nodes.

The RPM downgrade process in a later step should create .rpmsave backups of the following files, but it may be a good idea to keep a separate copy regardless:

/etc/sysconfig/atomic-openshift-master
/etc/etcd/etcd.conf (only required if you use a separate etcd cluster)
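
For example, a minimal sketch that copies these files into a hypothetical /root/downgrade-backup directory (copy etcd.conf only on hosts that are part of a separate etcd cluster):

# mkdir -p /root/downgrade-backup
# cp /etc/sysconfig/atomic-openshift-master /root/downgrade-backup/
# cp /etc/etcd/etcd.conf /root/downgrade-backup/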

6.3. Shutting Down the Cluster

On all masters, nodes, and etcd members (if you use a separate etcd cluster), ensure that the relevant services are stopped.

On the master in a single master cluster:

# systemctl stop atomic-openshift-master

On each master in a multi-master cluster:

# systemctl stop atomic-openshift-master-api
# systemctl stop atomic-openshift-master-controllers

On all master and node hosts:

# systemctl stop atomic-openshift-node

On any etcd hosts for a separate etcd cluster:

# systemctl stop etcd

6.4. Removing RPMs

  1. The *-excluder packages add entries to the exclude directive in the host’s /etc/yum.conf file when installed. Run the following command on each host to remove the atomic-openshift-* and docker packages from the exclude list:

    # atomic-openshift-excluder unexclude
    # atomic-openshift-docker-excluder unexclude
  2. On all masters, nodes, and etcd members (if you use a separate etcd cluster), remove the following packages:

    # yum remove atomic-openshift \
        atomic-openshift-clients \
        atomic-openshift-node \
        atomic-openshift-master \
        openvswitch \
        atomic-openshift-sdn-ovs \
        tuned-profiles-atomic-openshift-node \
        atomic-openshift-excluder \
        atomic-openshift-docker-excluder
  3. If you use a separate etcd cluster, also remove the etcd package:

    # yum remove etcd

    If using the embedded etcd, leave the etcd package installed. It is required for running the etcdctl command to issue operations in later steps.
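
    As a quick sanity check, you can confirm that the etcdctl binary remains available after the removal; the version reported depends on your etcd package:

    # etcdctl --version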

6.5. Downgrading Docker

OpenShift Container Platform 3.3 requires Docker 1.10.3.

Downgrade to Docker 1.10.3 on each host using the following steps:

  1. Remove all local containers and images on the host. Any pods backed by a replication controller will be recreated.

    Warning

    The following commands are destructive and should be used with caution.

    Delete all containers:

    # docker rm $(docker ps -a -q)

    Delete all images:

    # docker rmi $(docker images -q)
  2. Use yum swap (instead of yum downgrade) to install Docker 1.10.3:

    # yum swap docker-* docker-*1.10.3
    # sed -i 's/--storage-opt dm.use_deferred_deletion=true//' /etc/sysconfig/docker-storage
    # systemctl restart docker
  3. You should now have Docker 1.10.3 installed and running on the host. Verify with the following:

    # docker version
    Client:
     Version:      1.10.3-el7
     API version:  1.20
     Package Version: docker-1.10.3.el7.x86_64
    [...]
    
    # systemctl status docker
    ● docker.service - Docker Application Container Engine
       Loaded: loaded (/usr/lib/systemd/system/docker.service; enabled; vendor preset: disabled)
       Active: active (running) since Wed 2017-06-21 15:44:20 EDT; 30min ago
    [...]

6.6. Reinstalling RPMs

  1. Disable the OpenShift Container Platform 3.4 repositories, and re-enable the 3.3 repositories:

    # subscription-manager repos \
        --disable=rhel-7-server-ose-3.4-rpms \
        --enable=rhel-7-server-ose-3.3-rpms
  2. On each master, install the following packages:

    # yum install atomic-openshift \
        atomic-openshift-clients \
        atomic-openshift-node \
        atomic-openshift-master \
        openvswitch \
        atomic-openshift-sdn-ovs \
        tuned-profiles-atomic-openshift-node \
        atomic-openshift-excluder \
        atomic-openshift-docker-excluder
  3. On each node, install the following packages:

    # yum install atomic-openshift \
        atomic-openshift-node \
        openvswitch \
        atomic-openshift-sdn-ovs \
        tuned-profiles-atomic-openshift-node \
        atomic-openshift-excluder \
        atomic-openshift-docker-excluder
  4. If you use a separate etcd cluster, install the following package on each etcd member:

    # yum install etcd

6.7. Restoring etcd

See Backup and Restore.

6.8. Bringing OpenShift Container Platform Services Back Online

See Backup and Restore.

6.9. Verifying the Downgrade

  1. To verify the downgrade, first check that all nodes are marked as Ready:

    # oc get nodes
    NAME                        STATUS                     AGE
    master.example.com          Ready,SchedulingDisabled   165d
    node1.example.com           Ready                      165d
    node2.example.com           Ready                      165d
  2. Then, verify that you are running the expected versions of the docker-registry and router images, if deployed:

    # oc get -n default dc/docker-registry -o json | grep \"image\"
        "image": "openshift3/ose-docker-registry:v3.3.1.38-2",
    # oc get -n default dc/router -o json | grep \"image\"
        "image": "openshift3/ose-haproxy-router:v3.3.1.38-2",
  3. You can use the diagnostics tool on the master to look for common issues and provide suggestions:

    # oc adm diagnostics
    ...
    [Note] Summary of diagnostics execution:
    [Note] Completed with no errors or warnings seen.