Chapter 6. Downgrading OpenShift
6.1. Overview
Following an OpenShift Container Platform upgrade, it may be desirable in extreme cases to downgrade your cluster to a previous version. The following sections outline the required steps for each system in a cluster to perform such a downgrade for the OpenShift Container Platform 3.4 to 3.3 downgrade path.
These steps are currently only supported for RPM-based installations of OpenShift Container Platform and assume downtime of the entire cluster.
6.2. Verifying Backups
The Ansible playbook used during the upgrade process should have created a backup of the master-config.yaml file and the etcd data directory. Ensure these exist on your masters and etcd members:
/etc/origin/master/master-config.yaml.<timestamp>
/var/lib/origin/etcd-backup-<timestamp>
Also, back up the node-config.yaml file on each node (including masters, which have the node component on them) with a timestamp:
/etc/origin/node/node-config.yaml.<timestamp>
If you use a separate etcd cluster instead of a single embedded etcd instance, the backup is likely created on all etcd members, though only one is required for the recovery process. You can run a separate etcd instance that is co-located with your master nodes.
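Before proceeding, it can be worth confirming on every host that these backups actually exist. A minimal bash sketch, assuming the default paths written by the upgrade playbook (the check_backup helper is illustrative, not part of OpenShift):

```shell
# check_backup PATTERN — succeed and report if any file matches the glob.
check_backup() {
  # compgen -G expands the glob without erroring on zero matches (bash-specific).
  if compgen -G "$1" > /dev/null; then
    echo "found: $1"
  else
    echo "MISSING: $1"
    return 1
  fi
}

# Run on each master and etcd member, for example:
#   check_backup '/etc/origin/master/master-config.yaml.*'
#   check_backup '/var/lib/origin/etcd-backup-*'
#   check_backup '/etc/origin/node/node-config.yaml.*'
```

Adjust the globs if your installation customized these locations.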
The RPM downgrade process in a later step should create .rpmsave backups of the following files, but it may be a good idea to keep a separate copy regardless:
/etc/sysconfig/atomic-openshift-master
/etc/etcd/etcd.conf (only required if using a separate etcd cluster)
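Making those separate copies can be scripted; a sketch using a hypothetical backup_file helper, where cp -p preserves mode, ownership, and timestamps:

```shell
# backup_file FILE — copy FILE next to itself with a timestamp suffix.
# Skips files that do not exist (e.g. /etc/etcd/etcd.conf on hosts
# without a separate etcd cluster).
backup_file() {
  ts=$(date +%Y%m%d%H%M%S)
  if [ -e "$1" ]; then
    cp -p "$1" "$1.$ts"   # -p preserves mode, ownership, timestamps
    echo "backed up: $1.$ts"
  fi
}

# On each master (and each etcd member, if separate):
#   backup_file /etc/sysconfig/atomic-openshift-master
#   backup_file /etc/etcd/etcd.conf
```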
6.3. Shutting Down the Cluster
On all masters, nodes, and etcd members (if you use a separate etcd cluster), ensure the relevant services are stopped.
On the master in a single master cluster:
# systemctl stop atomic-openshift-master
On each master in a multi-master cluster:
# systemctl stop atomic-openshift-master-api
# systemctl stop atomic-openshift-master-controllers
On all master and node hosts:
# systemctl stop atomic-openshift-node
On any etcd hosts for a separate etcd cluster:
# systemctl stop etcd
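For ad hoc loops over an inventory, the role-to-unit mapping above can be captured in a small helper (services_for is an illustrative name, not an OpenShift tool; the single-master unit is shown for masters):

```shell
# services_for ROLE — print the systemd units to stop for a host role.
# Single-master shown for "master"; on multi-master clusters substitute
# atomic-openshift-master-api and atomic-openshift-master-controllers.
services_for() {
  case "$1" in
    master) echo "atomic-openshift-master atomic-openshift-node" ;;
    node)   echo "atomic-openshift-node" ;;
    etcd)   echo "etcd" ;;
    *)      return 1 ;;
  esac
}

# From an admin host, for example (hostnames illustrative):
#   ssh master.example.com "systemctl stop $(services_for master)"
#   ssh node1.example.com  "systemctl stop $(services_for node)"
```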
6.4. Removing RPMs
The *-excluder packages add entries to the exclude directive in the host’s /etc/yum.conf file when installed. Run the following commands on each host to remove the atomic-openshift-* and docker packages from the exclude list:
# atomic-openshift-excluder unexclude
# atomic-openshift-docker-excluder unexclude
On all masters, nodes, and etcd members (if you use a separate etcd cluster), remove the following packages:
# yum remove atomic-openshift \
    atomic-openshift-clients \
    atomic-openshift-node \
    atomic-openshift-master \
    openvswitch \
    atomic-openshift-sdn-ovs \
    tuned-profiles-atomic-openshift-node \
    atomic-openshift-excluder \
    atomic-openshift-docker-excluder
If you use a separate etcd cluster, also remove the etcd package:
# yum remove etcd
If using the embedded etcd, leave the etcd package installed. It is required for running the etcdctl command to issue operations in later steps.
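After unexcluding, you can confirm that no atomic-openshift entries remain in the exclude directive. A grep-based sketch (still_excluded is an illustrative name, not an official tool):

```shell
# still_excluded FILE — succeed (and print the line) if FILE still carries an
# exclude directive mentioning atomic-openshift; useful as a sanity check on
# /etc/yum.conf after running the unexclude commands.
still_excluded() {
  grep '^exclude=.*atomic-openshift' "$1"
}

# e.g.:  still_excluded /etc/yum.conf && echo "unexclude did not take effect"
```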
6.5. Downgrading Docker
OpenShift Container Platform 3.3 requires Docker 1.10.3.
Downgrade to Docker 1.10.3 on each host using the following steps:
Remove all local containers and images on the host. Any pods backed by a replication controller will be recreated.
Warning: The following commands are destructive and should be used with caution.
Delete all containers:
# docker rm $(docker ps -a -q)
Delete all images:
# docker rmi $(docker images -q)
Use yum swap (instead of yum downgrade) to install Docker 1.10.3:
# yum swap docker-* docker-*1.10.3
# sed -i 's/--storage-opt dm.use_deferred_deletion=true//' /etc/sysconfig/docker-storage
# systemctl restart docker
You should now have Docker 1.10.3 installed and running on the host. Verify with the following:
# docker version
Client:
 Version:         1.10.3-el7
 API version:     1.20
 Package Version: docker-1.10.3.el7.x86_64
[...]
# systemctl status docker
● docker.service - Docker Application Container Engine
   Loaded: loaded (/usr/lib/systemd/system/docker.service; enabled; vendor preset: disabled)
   Active: active (running) since Wed 2017-06-21 15:44:20 EDT; 30min ago
[...]
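The version string can also be checked programmatically, for example when verifying many hosts. A sketch, assuming docker version --format (available in Docker 1.8 and later) and an illustrative expect_version helper:

```shell
# expect_version EXPECTED ACTUAL — succeed only when ACTUAL begins with
# EXPECTED, so a packaged version like 1.10.3-el7 matches 1.10.3.
expect_version() {
  case "$2" in
    "$1"*) return 0 ;;
    *)     return 1 ;;
  esac
}

# On the host, for example:
#   actual=$(docker version --format '{{.Client.Version}}')
#   expect_version 1.10.3 "$actual" && echo OK || echo "unexpected: $actual"
```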
6.6. Reinstalling RPMs
Disable the OpenShift Container Platform 3.4 repositories, and re-enable the 3.3 repositories:
# subscription-manager repos \
    --disable=rhel-7-server-ose-3.4-rpms \
    --enable=rhel-7-server-ose-3.3-rpms
On each master, install the following packages:
# yum install atomic-openshift \
    atomic-openshift-clients \
    atomic-openshift-node \
    atomic-openshift-master \
    openvswitch \
    atomic-openshift-sdn-ovs \
    tuned-profiles-atomic-openshift-node \
    atomic-openshift-excluder \
    atomic-openshift-docker-excluder
On each node, install the following packages:
# yum install atomic-openshift \
    atomic-openshift-node \
    openvswitch \
    atomic-openshift-sdn-ovs \
    tuned-profiles-atomic-openshift-node \
    atomic-openshift-excluder \
    atomic-openshift-docker-excluder
If you use a separate etcd cluster, install the following package on each etcd member:
# yum install etcd
6.7. Restoring etcd
See Backup and Restore.
6.8. Bringing OpenShift Container Platform Services Back Online
See Backup and Restore.
6.9. Verifying the Downgrade
To verify the downgrade, first check that all nodes are marked as Ready:
# oc get nodes
NAME                 STATUS                     AGE
master.example.com   Ready,SchedulingDisabled   165d
node1.example.com    Ready                      165d
node2.example.com    Ready                      165d
Then, verify that you are running the expected versions of the docker-registry and router images, if deployed:
# oc get -n default dc/docker-registry -o json | grep \"image\"
    "image": "openshift3/ose-docker-registry:v3.3.1.38-2",
# oc get -n default dc/router -o json | grep \"image\"
    "image": "openshift3/ose-haproxy-router:v3.3.1.38-2",
You can use the diagnostics tool on the master to look for common issues and provide suggestions:
# oc adm diagnostics
...
[Note] Summary of diagnostics execution:
[Note] Completed with no errors or warnings seen.
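On larger clusters, the node check can be filtered rather than read by eye. A small awk sketch (not_ready is an illustrative name) that prints any node whose STATUS column does not begin with Ready:

```shell
# not_ready — read `oc get nodes` table output on stdin and print the names of
# nodes whose STATUS does not begin with Ready (so Ready,SchedulingDisabled
# passes but NotReady is reported).
not_ready() {
  awk 'NR > 1 && $2 !~ /^Ready/ { print $1 }'
}

# e.g.:  oc get nodes | not_ready    # no output means every node is Ready
```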