Chapter 4. Updating operating systems
Updating the operating system (OS) on a host, by either upgrading across major releases or updating the system software for a minor release, can impact the OpenShift Container Platform software running on those machines. In particular, these updates can affect the iptables rules or ovs flows that OpenShift Container Platform requires to operate.
4.1. Updating the operating system on a host
To safely upgrade the OS on a host:
Drain the node in preparation for maintenance:

$ oc adm drain <node_name> --force --delete-local-data --ignore-daemonsets

To protect sensitive packages that do not need to be updated, apply the exclude rules to the host:
# atomic-openshift-docker-excluder exclude
# atomic-openshift-excluder exclude

Update the host packages and reboot the host. A reboot ensures that the host is running the newest versions and that the docker and OpenShift Container Platform processes have been restarted, which forces them to check that all of the rules in other services are correct:

# yum update
# reboot

Alternatively, instead of rebooting the node host, you can restart the affected services or preserve the iptables state. Both processes are described in the OpenShift Container Platform iptables topic. The ovs flow rules do not need to be saved; restarting the OpenShift Container Platform node software fixes the flow rules.

Configure the host to be schedulable again:
$ oc adm uncordon <node_name>
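The steps above can be sketched as a single maintenance script. This is an illustrative sketch only, not an official tool: the run helper defaults to a dry run that prints each command instead of executing it (set DRY_RUN=0 for real execution), and the node name is a placeholder.

```shell
#!/usr/bin/env bash
# Sketch of the per-host OS update flow described above (not an official tool).
# DRY_RUN=1 (the default here) prints each command instead of executing it.
set -euo pipefail

run() {
    if [ "${DRY_RUN:-1}" = "1" ]; then
        echo "+ $*"   # preview mode: show the command only
    else
        "$@"          # real mode: execute it
    fi
}

update_host() {
    local node="$1"

    # 1. Drain the node in preparation for maintenance.
    run oc adm drain "$node" --force --delete-local-data --ignore-daemonsets

    # 2. Apply the exclude rules on the host to protect sensitive packages.
    run atomic-openshift-docker-excluder exclude
    run atomic-openshift-excluder exclude

    # 3. Update the host packages and reboot (-y added here for unattended use).
    run yum update -y
    run reboot

    # 4. Once the host is back up, make it schedulable again.
    run oc adm uncordon "$node"
}

update_host "<node_name>"
```

In a real maintenance window the drain and uncordon steps run wherever `oc` is configured, while the excluder, `yum`, and `reboot` steps run on the host itself (for example over SSH), so the script would typically be split accordingly.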
4.1.1. Upgrading Nodes Running OpenShift Container Storage
If using OpenShift Container Storage, upgrade the OpenShift Container Platform nodes running OpenShift Container Storage one at a time.
Run oc get daemonset -n <project_name> to verify the label found under NODE-SELECTOR. The default value is glusterfs=storage-host. To determine what the pod is, run oc get pods -n <project_name> --selector=glusterfs=.

Remove the daemonset label from the node:

$ oc label node <node_name> <daemonset_label> -n <project_name>

This causes the OpenShift Container Storage pod to terminate on that node. To overwrite the existing label, use the --overwrite flag.
To run the upgrade playbook on the single node where you terminated OpenShift Container Storage, use -e openshift_upgrade_nodes_label="type=upgrade". When the upgrade completes, relabel the node with the daemonset label:
$ oc label node <node_name> <daemonset_label> -n <project_name>

Wait for the OpenShift Container Storage pod to respawn and appear.
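One way to wait for the pod is a small polling loop. This is a sketch under assumptions: get_phase is a hypothetical wrapper around the oc query (using the default glusterfs=storage-host selector mentioned above), and the project name and retry count are placeholders.

```shell
# Sketch: poll until the OpenShift Container Storage pod reports Running.
# get_phase is a hypothetical wrapper around the oc query; the selector is
# the default glusterfs=storage-host label discussed above.
get_phase() {
    oc get pods -n "$1" --selector=glusterfs=storage-host \
        -o jsonpath='{.items[0].status.phase}' 2>/dev/null
}

wait_for_pod() {
    local project="$1" tries="${2:-60}" i=0
    while [ "$i" -lt "$tries" ]; do
        if [ "$(get_phase "$project")" = "Running" ]; then
            echo "pod is Running"
            return 0
        fi
        sleep 10              # poll every 10 seconds
        i=$((i + 1))
    done
    echo "timed out waiting for pod" >&2
    return 1
}
```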
Use oc rsh to enter the gluster pod and check the volume heal:

$ oc rsh <pod_name>
$ for vol in `gluster volume list`; do gluster volume heal $vol info; done
$ exit

Ensure all of the volumes are healed and there are no outstanding tasks. The heal info command lists all pending entries for a given volume's heal process. A volume is considered healed when Number of entries for that volume is 0. Use gluster volume status <volume_name> for additional details about the volume. The Online state should be marked Y for all bricks.
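As a convenience, the heal check above can be wrapped in a small helper that scans the heal info output and reports whether every Number of entries count is 0. The helper and the sample output in its test are illustrative sketches, not part of the product.

```shell
# Sketch: read `gluster volume heal <vol> info` output on stdin and print
# "healed" if every "Number of entries:" line reports 0, else "pending".
is_healed() {
    if awk '/Number of entries:/ { if ($NF != 0) pending = 1 }
            END { exit pending }'; then
        echo "healed"
    else
        echo "pending"
    fi
}
```

Inside the rsh session this could be used as, for example, `gluster volume heal $vol info | is_healed`.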