Chapter 11. Upgrading
The upgrade of the OpenShift sandboxed containers components consists of the following steps:
- Upgrade OpenShift Container Platform to update the Kata runtime and its dependencies.
- Upgrade the OpenShift sandboxed containers Operator to update the Operator subscription.
You can upgrade OpenShift Container Platform before or after the OpenShift sandboxed containers Operator upgrade, with the one exception noted below. Always apply the KataConfig patch immediately after upgrading the OpenShift sandboxed containers Operator.
11.1. Upgrading resources
Red Hat Enterprise Linux CoreOS (RHCOS) extensions deploy the OpenShift sandboxed containers resources onto the cluster.
The RHCOS extension sandboxed containers contains the required components to run OpenShift sandboxed containers, such as the Kata containers runtime, the hypervisor QEMU, and other dependencies. You upgrade the extension by upgrading the cluster to a new release of OpenShift Container Platform.
For more information about upgrading OpenShift Container Platform, see Updating Clusters.
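For example, you can review and start a cluster update from the command line. The following is a minimal sketch of one possible flow; the Updating Clusters documentation remains the authoritative procedure, and --to-latest=true can be replaced with --to=<version> if you need a specific release:

    $ oc adm upgrade

    $ oc adm upgrade --to-latest=true

The first command lists the update versions available in the current channel, and the second requests an update to the latest available version.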
11.2. Upgrading the Operator
Use Operator Lifecycle Manager (OLM) to upgrade the OpenShift sandboxed containers Operator either manually or automatically. Whether you select manual or automatic updates during the initial deployment determines the future upgrade mode. For manual upgrades, the OpenShift Container Platform web console shows the available updates that the cluster administrator can install.
For more information about upgrading the OpenShift sandboxed containers Operator in Operator Lifecycle Manager (OLM), see Updating installed Operators.
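If your subscription uses the manual approval strategy, you can also review and approve a pending update from the command line. This sketch assumes the Operator is installed in the openshift-sandboxed-containers-operator namespace used elsewhere in this chapter; <install_plan_name> is a placeholder for the install plan reported by the second command:

    $ oc get subscription -n openshift-sandboxed-containers-operator

    $ oc get installplan -n openshift-sandboxed-containers-operator

    $ oc patch installplan <install_plan_name> -n openshift-sandboxed-containers-operator --type merge -p '{"spec":{"approved":true}}'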
11.3. Updating the pod VM image
For AWS, Azure, and IBM deployments, you must update the pod VM image. Upgrading the OpenShift sandboxed containers Operator does not automatically update the existing pod VM image when the enablePeerPods parameter is set to true. To update the pod VM image after an upgrade, you must delete and re-create the KataConfig CR.
You can also check the peer pod config map for AWS and Azure deployments to ensure that the image ID is empty before re-creating the KataConfig CR.
11.3.1. Deleting the KataConfig custom resource
You can delete the KataConfig custom resource (CR) by using the command line.
Prerequisites
- You have installed the OpenShift CLI (oc).
- You have access to the cluster as a user with the cluster-admin role.
Procedure
1. Delete the KataConfig CR by running the following command:

    $ oc delete kataconfig example-kataconfig

2. Verify that the custom resource was deleted by running the following command:

    $ oc get kataconfig example-kataconfig

    Example output

    No example-kataconfig instances exist
When uninstalling OpenShift sandboxed containers deployed using a cloud provider, you must delete all of the pods. Any remaining pod resources might result in an unexpected bill from your cloud provider.
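One way to find any remaining workloads is to list the pods that reference the kata-remote runtime class across all namespaces. The jsonpath filter below is a sketch; any equivalent query works:

    $ oc get pods -A -o jsonpath='{range .items[?(@.spec.runtimeClassName=="kata-remote")]}{.metadata.namespace}{"/"}{.metadata.name}{"\n"}{end}'

Delete each pod that the command reports before proceeding.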
11.3.2. Ensure peer pods CM image ID is empty
When you delete the KataConfig CR, the image ID in the peer pods config map should also be deleted. For AWS and Azure deployments, verify that the peer pods config map image ID is empty.
Procedure
1. Obtain the image ID from the config map that you created for the peer pods:

    $ oc get cm -n openshift-sandboxed-containers-operator peer-pods-cm -o jsonpath="{.data.AZURE_IMAGE_ID}"

    Use PODVM_AMI_ID for AWS. Use AZURE_IMAGE_ID for Azure.

2. Check the status stanza of the YAML file.

3. If the PODVM_AMI_ID parameter for AWS or the AZURE_IMAGE_ID parameter for Azure contains a value, set the value to "" by patching the peer pods config map:

    $ oc patch configmap peer-pods-cm -n openshift-sandboxed-containers-operator -p '{"data":{"AZURE_IMAGE_ID":""}}'

    Use PODVM_AMI_ID for AWS. Use AZURE_IMAGE_ID for Azure.
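For an AWS deployment, the equivalent commands target the PODVM_AMI_ID key, following the substitution noted above (a sketch):

    $ oc get cm -n openshift-sandboxed-containers-operator peer-pods-cm -o jsonpath="{.data.PODVM_AMI_ID}"

    $ oc patch configmap peer-pods-cm -n openshift-sandboxed-containers-operator -p '{"data":{"PODVM_AMI_ID":""}}'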
11.3.3. Creating the KataConfig custom resource
You must create the KataConfig custom resource (CR) to install kata-remote as a runtime class on your worker nodes.
Creating the KataConfig CR triggers the OpenShift sandboxed containers Operator to do the following:
- Create a RuntimeClass CR named kata-remote with a default configuration. This enables users to configure workloads to use kata-remote as the runtime by referencing the CR in the RuntimeClassName field. This CR also specifies the resource overhead for the runtime.
OpenShift sandboxed containers installs kata-remote as a secondary, optional runtime on the cluster and not as the primary runtime.
Creating the KataConfig CR automatically reboots the worker nodes. The reboot can take from 10 to more than 60 minutes. The following factors can increase the reboot time:
- A larger OpenShift Container Platform deployment with a greater number of worker nodes.
- Activation of the BIOS and Diagnostics utility.
- Deployment on a hard disk drive rather than an SSD.
- Deployment on physical nodes such as bare metal, rather than on virtual nodes.
- A slow CPU and network.
Prerequisites
- You have access to the cluster as a user with the cluster-admin role.
Procedure
1. Create an example-kataconfig.yaml manifest file according to the following example:
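    The exact manifest depends on your deployment. The following is a minimal sketch for a peer pods deployment, assuming the kataconfiguration.openshift.io/v1 API group; the enablePeerPods field corresponds to the parameter described in Section 11.3, and any pool selector or log level settings from your previous KataConfig CR should be carried over:

    apiVersion: kataconfiguration.openshift.io/v1   # assumed API group for the KataConfig CRD
    kind: KataConfig
    metadata:
      name: example-kataconfig
    spec:
      enablePeerPods: true   # enables peer pods, as described in Section 11.3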
2. Create the KataConfig CR by running the following command:

    $ oc apply -f example-kataconfig.yaml

    The new KataConfig CR is created and installs kata-remote as a runtime class on the worker nodes.

    Wait for the kata-remote installation to complete and the worker nodes to reboot before verifying the installation.

3. Monitor the installation progress by running the following command:

    $ watch "oc describe kataconfig | sed -n /^Status:/,/^Events/p"

    When the status of all workers under kataNodes is installed and the condition InProgress is False without a specified reason, kata-remote is installed on the cluster.

4. Verify the daemon set by running the following command:

    $ oc get -n openshift-sandboxed-containers-operator ds/osc-caa-ds

5. Verify the runtime classes by running the following command:

    $ oc get runtimeclass

    Example output

    NAME          HANDLER       AGE
    kata          kata          152m
    kata-remote   kata-remote   152m
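After the runtime classes are available, a workload opts in to kata-remote by setting runtimeClassName in its pod spec. The following pod definition is a sketch; the name and image are placeholders for illustration:

    apiVersion: v1
    kind: Pod
    metadata:
      name: sandboxed-example              # hypothetical name
    spec:
      runtimeClassName: kata-remote        # run this pod with the kata-remote runtime class
      containers:
      - name: app
        image: registry.access.redhat.com/ubi9/ubi-minimal   # example image; substitute your workload image
        command: ["sleep", "infinity"]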