Chapter 4. Downgrading OpenShift
4.1. Overview
Following an OpenShift Enterprise upgrade, it may be desirable in extreme cases to downgrade your cluster to a previous version. The following sections outline the required steps for each system in a cluster to perform such a downgrade for the OpenShift Enterprise 3.2 to 3.1 downgrade path.
These steps are currently only supported for RPM-based installations of OpenShift Enterprise and assume downtime of the entire cluster.
For the OpenShift Enterprise 3.1 to 3.0 downgrade path, see the OpenShift Enterprise 3.1 documentation, which has modified steps.
4.2. Verifying Backups
The Ansible playbook used during the upgrade process should have created a backup of the master-config.yaml file and the etcd data directory. Ensure these exist on your masters and etcd members:
/etc/origin/master/master-config.yaml.<timestamp>
/var/lib/origin/etcd-backup-<timestamp>
Also, back up the node-config.yaml file on each node (including masters, which have the node component on them) with a timestamp:
/etc/origin/node/node-config.yaml.<timestamp>
If you use a separate etcd cluster instead of a single embedded etcd instance, the backup is likely created on all etcd members, though only one is required for the recovery process. You can run a separate etcd instance that is co-located with your master nodes.
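As a quick check, you can list the backups described above on each host (the timestamp suffixes will vary per host):
## on masters and etcd members
# ls /etc/origin/master/master-config.yaml.*
# ls -d /var/lib/origin/etcd-backup-*
## on all hosts running the node component
# ls /etc/origin/node/node-config.yaml.*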
The RPM downgrade process in a later step should create .rpmsave backups of the following files, but it may be a good idea to keep a separate copy regardless:
/etc/sysconfig/atomic-openshift-master
/etc/etcd/etcd.conf
The /etc/etcd/etcd.conf file is only required if you use a separate etcd cluster.
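For example, a simple way to keep a separate copy is to duplicate each file with a date suffix before removing any packages (the suffix format shown here is only an illustration):
# cp -a /etc/sysconfig/atomic-openshift-master /etc/sysconfig/atomic-openshift-master.backup-$(date +%Y%m%d)
# cp -a /etc/etcd/etcd.conf /etc/etcd/etcd.conf.backup-$(date +%Y%m%d)   ## only if using a separate etcd cluster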
4.3. Shutting Down the Cluster
On all masters, nodes, and etcd members (if you use a separate etcd cluster running on its own hosts), ensure the relevant services are stopped.
On the master in a single master cluster:
# systemctl stop atomic-openshift-master
On each master in a multi-master cluster:
# systemctl stop atomic-openshift-master-api
# systemctl stop atomic-openshift-master-controllers
On all master and node hosts:
# systemctl stop atomic-openshift-node
On any etcd hosts for a separate etcd cluster:
# systemctl stop etcd
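Optionally, you can confirm on each host that the relevant services are no longer running before moving on; adjust the unit names to the services you stopped on that host, for example:
## check only the units that exist on the given host (master, node, or etcd)
# systemctl is-active atomic-openshift-master atomic-openshift-node
# systemctl is-active etcd   ## only on separate etcd cluster members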
4.4. Removing RPMs
On all masters, nodes, and etcd members (if you use a separate etcd cluster running on its own hosts), remove the following packages:
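A minimal sketch of the removal command is shown below, assuming the same OpenShift Enterprise packages that are reinstalled in Section 4.6 plus the atomic-openshift-master package on masters; adjust the list to match what is actually installed on each host:
## remove only the packages that are installed on the given host
# yum remove atomic-openshift \
    atomic-openshift-master \
    atomic-openshift-node \
    atomic-openshift-sdn-ovs \
    tuned-profiles-atomic-openshift-node \
    openvswitch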
If you use a separate etcd cluster, also remove the etcd package:
# yum remove etcd
If using the embedded etcd, leave the etcd package installed. It is required for running the etcdctl command to issue operations in later steps.
4.5. Downgrading Docker
OpenShift Enterprise 3.2 requires Docker 1.9.1 and also supports Docker 1.10.3; however, OpenShift Enterprise 3.1 requires Docker 1.8.2.
Downgrade to Docker 1.8.2 on each host using the following steps:
Remove all local containers and images on the host. Any pods backed by a replication controller will be recreated.
Warning: The following commands are destructive and should be used with caution.
Delete all containers:
# docker rm $(docker ps -a -q)
Delete all images:
# docker rmi $(docker images -q)
Use yum swap (instead of yum downgrade) to install Docker 1.8.2:
# yum swap docker-* docker-*1.8.2
# sed -i 's/--storage-opt dm.use_deferred_deletion=true//' /etc/sysconfig/docker-storage
# systemctl restart docker
You should now have Docker 1.8.2 installed and running on the host. Verify with the following:
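For example, you can check the reported version and the service state; confirm that the client and server report 1.8.2 and that the service is active:
# docker version
# systemctl status docker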
4.6. Reinstalling RPMs
Disable the OpenShift Enterprise 3.2 repositories, and re-enable the 3.1 repositories:
# subscription-manager repos \
    --disable=rhel-7-server-ose-3.2-rpms \
    --enable=rhel-7-server-ose-3.1-rpms
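You can optionally confirm which repositories are now enabled on the host, for example:
# subscription-manager repos --list-enabled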
On each master, install the following packages:
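A minimal sketch of the master installation command, assuming the node packages listed below plus the atomic-openshift-master package; verify the exact package set for your environment:
## atomic-openshift-master is an assumed addition for master hosts
# yum install atomic-openshift \
    atomic-openshift-master \
    atomic-openshift-node \
    openvswitch \
    atomic-openshift-sdn-ovs \
    tuned-profiles-atomic-openshift-node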
On each node, install the following packages:
# yum install atomic-openshift \
atomic-openshift-node \
openvswitch \
atomic-openshift-sdn-ovs \
tuned-profiles-atomic-openshift-node
If you use a separate etcd cluster, install the following package on each etcd member:
# yum install etcd
4.7. Restoring etcd
See Backup and Restore.
4.8. Bringing OpenShift Enterprise Services Back Online
See Backup and Restore.
4.9. Verifying the Downgrade
To verify the downgrade, first check that all nodes are marked as Ready:
# oc get nodes
NAME STATUS AGE
master.example.com Ready,SchedulingDisabled 165d
node1.example.com Ready 165d
node2.example.com Ready 165d
Then, verify that you are running the expected versions of the docker-registry and router images, if deployed:
# oc get -n default dc/docker-registry -o json | grep \"image\"
"image": "openshift3/ose-docker-registry:v3.1.1.6",
# oc get -n default dc/router -o json | grep \"image\"
"image": "openshift3/ose-haproxy-router:v3.1.1.6",
You can use the diagnostics tool on the master to look for common issues and provide suggestions. In OpenShift Enterprise 3.1, the oc adm diagnostics tool is available as openshift ex diagnostics:
# openshift ex diagnostics
...
[Note] Summary of diagnostics execution:
[Note] Completed with no errors or warnings seen.