Chapter 5. Downgrading a cluster
After an OpenShift Container Platform upgrade, you might need to downgrade your cluster to an earlier version. You can downgrade from OpenShift Container Platform version 3.11 to version 3.10.
In the initial release of OpenShift Container Platform version 3.11, downgrading does not completely restore your cluster to version 3.10. Do not downgrade.
If you need to downgrade, contact Red Hat support so they can help you determine the best course of action.
Downgrading a cluster to version 3.10 is supported only for RPM-based installations of OpenShift Container Platform, and you must take your entire cluster offline to downgrade.
5.1. Verifying backups
Ensure that backups of the master-config.yaml file, the master.env file, the scheduler.json file, and the etcd data directory exist on your masters:
/etc/origin/master/master-config.yaml.<timestamp>
/etc/origin/master/master.env
/etc/origin/master/scheduler.json
/var/lib/etcd/openshift-backup-xxxx
You save these files during the upgrade process.
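A quick way to confirm that these backups are present is to list them on each master. This is only a sketch using the default paths shown above; the timestamp and backup directory suffixes will differ on your hosts:
# ls /etc/origin/master/master-config.yaml.* /etc/origin/master/master.env /etc/origin/master/scheduler.json
# ls -d /var/lib/etcd/openshift-backup-*    # backup directory name varies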
Locate the copies of the following files that you created when you prepared for an upgrade.
On node and master hosts:
/etc/origin/node/node-config.yaml
On etcd hosts, including masters that have etcd co-located on them:
/etc/etcd/etcd.conf
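As a minimal check, assuming the copies were saved alongside the originals with a suffix such as a timestamp (adjust the paths if you stored them elsewhere), list them to confirm they exist:
# ls /etc/origin/node/node-config.yaml*    # on node and master hosts
# ls /etc/etcd/etcd.conf*                  # on etcd hosts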
5.2. Shutting down the cluster
On all master and node hosts, stop the master and node services by removing the pod definition and rebooting the host:
# mkdir -p /etc/origin/node/pods-stopped
# mv /etc/origin/node/pods/* /etc/origin/node/pods-stopped/
# reboot
5.3. Removing RPMs and static pods
On all masters, nodes, and etcd members (if using a dedicated etcd cluster), remove the following packages:
# yum remove atomic-openshift \
    atomic-openshift-excluder \
    atomic-openshift-hyperkube \
    atomic-openshift-node \
    atomic-openshift-docker-excluder \
    atomic-openshift-clients
Verify the packages were removed successfully:
# rpm -qa | grep atomic-openshift
On control plane hosts (master and etcd hosts), move the static pod definitions:
# mkdir /etc/origin/node/pods-backup
# mv /etc/origin/node/pods/* /etc/origin/node/pods-backup/
Reboot each host:
# reboot
5.4. Reinstalling RPMs
Disable the OpenShift Container Platform 3.11 repositories, and re-enable the 3.10 repositories:
# subscription-manager repos \
    --disable=rhel-7-server-ose-3.11-rpms \
    --enable=rhel-7-server-ose-3.10-rpms
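If you want to confirm the repository change before reinstalling, one optional check (not part of the documented procedure) is to list the enabled repositories and make sure only the 3.10 repository appears:
# yum repolist enabled | grep ose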
On each master and node host, install the following packages:
# yum install atomic-openshift \
    atomic-openshift-node \
    atomic-openshift-docker-excluder \
    atomic-openshift-excluder \
    atomic-openshift-clients \
    atomic-openshift-hyperkube
On each host, verify the packages were installed successfully:
# rpm -qa | grep atomic-openshift
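To be sure the installed packages are 3.10 builds rather than leftover 3.11 packages, you can also query the reported versions; for example:
# rpm -q atomic-openshift atomic-openshift-node    # version strings should start with 3.10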
5.5. Bringing OpenShift Container Platform services back online
After you finish your changes, bring OpenShift Container Platform back online.
Procedure
On each OpenShift Container Platform master, restore your master and node configuration from backup and enable and restart all relevant services:
# cp ${MYBACKUPDIR}/etc/origin/node/pods/* /etc/origin/node/pods/
# cp ${MYBACKUPDIR}/etc/origin/master/master.env /etc/origin/master/master.env
# cp ${MYBACKUPDIR}/etc/origin/master/master-config.yaml.<timestamp> /etc/origin/master/master-config.yaml
# cp ${MYBACKUPDIR}/etc/origin/node/node-config.yaml.<timestamp> /etc/origin/node/node-config.yaml
# cp ${MYBACKUPDIR}/etc/origin/master/scheduler.json.<timestamp> /etc/origin/master/scheduler.json
# master-restart api
# master-restart controllers
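Optionally, before moving on to the nodes, confirm that the control plane static pods restarted cleanly. In a default 3.10 installation they run in the kube-system namespace; adjust the namespace if your layout differs:
# oc get pods -n kube-system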
On each OpenShift Container Platform node, update the node configuration maps as needed, and enable and restart the atomic-openshift-node service:
# cp /etc/origin/node/node-config.yaml.<timestamp> /etc/origin/node/node-config.yaml
# systemctl enable atomic-openshift-node
# systemctl start atomic-openshift-node
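Optionally, confirm that the node service came up cleanly on each host, for example:
# systemctl is-active atomic-openshift-node
# systemctl status atomic-openshift-node --no-pager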
5.6. Verifying the downgrade
To verify the downgrade:
Check that all nodes are marked as Ready:
# oc get nodes
NAME                      STATUS                     AGE
master.example.com        Ready,SchedulingDisabled   165d
node1.example.com         Ready                      165d
node2.example.com         Ready                      165d
Verify the successful downgrade of the registry and router, if deployed:
Verify that you are running the v3.10 versions of the docker-registry and router images:
# oc get -n default dc/docker-registry -o json | grep \"image\"
    "image": "openshift3/ose-docker-registry:v3.10",
# oc get -n default dc/router -o json | grep \"image\"
    "image": "openshift3/ose-haproxy-router:v3.10",
Verify that docker-registry and router pods are running and in ready state:
# oc get pods -n default
NAME                       READY     STATUS    RESTARTS   AGE
docker-registry-2-b7xbn    1/1       Running   0          18m
router-2-mvq6p             1/1       Running   0          6m
Use the diagnostics tool on the master to look for common issues and provide suggestions:
# oc adm diagnostics
...
[Note] Summary of diagnostics execution:
[Note] Completed with no errors or warnings seen.