Chapter 4. Updating operating systems
Updating the operating system (OS) on a host, by either upgrading across major releases or updating the system software for a minor release, can impact the OpenShift Container Platform software running on those machines. In particular, these updates can affect the `iptables` rules or Open vSwitch (OVS) flows that OpenShift Container Platform requires to operate.
4.1. Updating the operating system on a host
To safely upgrade the OS on a host:
- Drain the node in preparation for maintenance:

  ```
  $ oc adm drain <node_name> --force --delete-local-data --ignore-daemonsets
  ```
- In order to protect sensitive packages that do not need to be updated, apply the exclude rules to the host:

  ```
  # atomic-openshift-docker-excluder exclude
  # atomic-openshift-excluder exclude
  ```

- Update the host and reboot it. A reboot ensures that the host is running the newest versions and means that the container engine and OpenShift Container Platform processes have been restarted, which forces them to check that all of the rules in other services are correct:

  ```
  # yum update
  # reboot
  ```

  However, instead of rebooting a node host, you can restart the services that are affected or preserve the `iptables` state. Both processes are described in the OpenShift Container Platform iptables topic. The `ovs` flow rules do not need to be saved, but restarting the OpenShift Container Platform node software fixes the flow rules.
- Configure the host to be schedulable again:

  ```
  $ oc adm uncordon <node_name>
  ```
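The steps above can be tied together in a small script run from an administrator workstation. This is a minimal sketch, not official tooling: it assumes `oc` is logged in with cluster-admin rights, passwordless root `ssh` access to the node, and the 3.x excluder package names shown above. The `update_host` and `node_is_ready` helper names are illustrative.

```shell
#!/bin/bash
# Sketch of the host-update flow described above (assumptions noted in the
# lead-in; adapt node access and package names to your environment).

# Succeeds if `oc get node <name>` output on stdin reports a Ready status
# (including "Ready,SchedulingDisabled" while the node is still cordoned).
node_is_ready() {
  awk 'NR > 1 && $2 ~ /^Ready(,|$)/ { ok = 1 } END { exit !ok }'
}

update_host() {
  local node="${1:?usage: update_host <node_name>}"

  # 1. Drain the node in preparation for maintenance.
  oc adm drain "$node" --force --delete-local-data --ignore-daemonsets

  # 2. Apply the excluders, update, and reboot on the host itself.
  #    (The ssh connection drops at reboot; that is expected.)
  ssh "root@$node" 'atomic-openshift-docker-excluder exclude &&
                    atomic-openshift-excluder exclude &&
                    yum -y update && reboot'

  # 3. Wait until the node reports Ready again after the reboot.
  until oc get node "$node" | node_is_ready; do sleep 10; done

  # 4. Make the node schedulable again.
  oc adm uncordon "$node"
}
```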
4.1.1. Upgrading nodes running OpenShift Container Storage
If using OpenShift Container Storage, upgrade the OpenShift Container Platform nodes running OpenShift Container Storage one at a time.
- To begin, recall the project in which OpenShift Container Storage was deployed.
- Confirm the node and pod selectors configured on the service’s daemonset:

  ```
  $ oc get daemonset -n <project_name> -o wide
  ```

  Note: Use `-o wide` to include the pod selector in the output.

  These selectors are found under `NODE-SELECTOR` and `SELECTOR`, respectively. The example commands below will use `glusterfs=storage-host` and `glusterfs=storage-pod`, respectively.
- Given the daemonset’s node selector, confirm which hosts have the label, and hence are running pods from the daemonset:

  ```
  $ oc get nodes --selector=glusterfs=storage-host
  ```

- Choose a node which will have its operating system upgraded.
- Remove the daemonset label from the node:

  ```
  $ oc label node <node_name> glusterfs-
  ```

  This will cause the OpenShift Container Storage pod to terminate on that node.
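Before upgrading the OS, you can confirm that no storage pod remains on the node. The following is a minimal sketch: the `glusterfs=storage-pod` selector follows the example above, and the node name is assumed to be the last column of 3.x `oc get pod -o wide` output. The `pods_on_node` helper name is illustrative.

```shell
#!/bin/bash
# Hypothetical helper: read `oc get pod -o wide` output on stdin and print
# the names of pods scheduled on the given node. Empty output means no
# matching pod remains there.
pods_on_node() {
  awk -v node="$1" 'NR > 1 && $NF == node { print $1 }'
}

# Usage sketch: poll until the storage pod is gone from the node.
#   while oc get pod -n <project_name> --selector=glusterfs=storage-pod -o wide \
#       | pods_on_node <node_name> | grep -q .; do
#     sleep 5
#   done
```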
The node can now have its OS upgraded as described above.
- To restart an OpenShift Container Storage pod on the node, relabel the node with the daemonset label:

  ```
  $ oc label node <node_name> glusterfs=storage-host
  ```
- Wait for the OpenShift Container Storage pod to respawn and appear.
- Given the daemonset’s pod selector, determine the name of the newly spawned pod by searching for a pod running on the node whose OS you upgraded:

  ```
  $ oc get pod -n <project_name> --selector=glusterfs=storage-pod -o wide
  ```

  Note: Use `-o wide` to include in the output which host the pod is running on.
- Use `oc rsh` to enter the gluster pod and check the volume heal:

  ```
  $ oc rsh <pod_name>
  $ for vol in `gluster volume list`; do gluster volume heal $vol info; done
  $ exit
  ```

  Ensure all of the volumes are healed and there are no outstanding tasks. The `heal info` command lists all pending entries for a given volume’s heal process. A volume is considered healed when `Number of entries` for that volume is `0`. Use `gluster volume status <volume_name>` for additional details about the volume. The `Online` state should be marked `Y` for all bricks.
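The healed-volume check described above can be scripted. This is a minimal sketch that assumes the `gluster volume heal <vol> info` output format quoted above, where each brick reports a `Number of entries:` line; the `all_healed` helper name is illustrative.

```shell
#!/bin/bash
# Hypothetical helper: read `gluster volume heal <vol> info` output on stdin
# and succeed only if every "Number of entries" line reports 0 pending
# entries (i.e. the volume is considered healed). Output with no such lines
# also passes, so check that the heal info command itself succeeded.
all_healed() {
  awk -F: '/Number of entries/ { if ($2 + 0 != 0) pending = 1 } END { exit pending }'
}

# Usage sketch, run inside the gluster pod after `oc rsh <pod_name>`:
#   for vol in $(gluster volume list); do
#     gluster volume heal "$vol" info | all_healed || echo "$vol: heal pending"
#   done
```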