Chapter 8. Upgrading
The upgrade of the OpenShift sandboxed containers components consists of the following steps:
- Upgrade OpenShift Container Platform to update the Kata runtime and its dependencies.
- Upgrade the OpenShift sandboxed containers Operator to update the Operator subscription.
You can upgrade OpenShift Container Platform before or after the OpenShift sandboxed containers Operator upgrade, with one exception: always apply the KataConfig patch immediately after upgrading the OpenShift sandboxed containers Operator.
8.1. Upgrading resources
Red Hat Enterprise Linux CoreOS (RHCOS) extensions deploy the OpenShift sandboxed containers resources onto the cluster.
The RHCOS extension sandboxed containers contains the required components to run OpenShift sandboxed containers, such as the Kata containers runtime, the hypervisor QEMU, and other dependencies. You upgrade the extension by upgrading the cluster to a new release of OpenShift Container Platform.
For more information about upgrading OpenShift Container Platform, see Updating Clusters.
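For example, you can review and apply cluster updates from the command line. This is a minimal sketch using standard oc commands; updating through the web console works equally well:
$ oc adm upgrade
$ oc adm upgrade --to-latest=true
The first command lists the current cluster version and the updates available in your channel; the second applies the latest available update, which in turn upgrades the sandboxed containers RHCOS extension on the nodes.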
8.2. Upgrading the Operator
Use Operator Lifecycle Manager (OLM) to upgrade the OpenShift sandboxed containers Operator either manually or automatically. The approval strategy that you select during the initial deployment, manual or automatic, determines the future upgrade mode. For manual upgrades, the OpenShift Container Platform web console shows the available updates that a cluster administrator can install.
For more information about upgrading the OpenShift sandboxed containers Operator in Operator Lifecycle Manager (OLM), see Updating installed Operators.
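If you selected manual upgrades, you can also review and approve the pending install plan from the command line. This is a sketch that assumes the Operator is installed in the openshift-sandboxed-containers-operator namespace; the install plan name is a placeholder:
$ oc get subscription,installplan -n openshift-sandboxed-containers-operator
$ oc patch installplan <install_plan_name> -n openshift-sandboxed-containers-operator --type merge -p '{"spec":{"approved":true}}'
The first command shows the subscription state and any install plans awaiting approval; setting spec.approved to true in the pending install plan lets OLM proceed with the upgrade.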
8.3. Updating the pod VM image
For AWS, Azure, and IBM deployments, you must update the pod VM image. Upgrading the OpenShift sandboxed containers Operator when the enablePeerPods parameter is set to true does not automatically update the existing pod VM image. To update the pod VM image after an upgrade, you must delete and re-create the KataConfig CR.
For AWS and Azure deployments, you can also check the peer pods config map to ensure that the image ID is empty before re-creating the KataConfig CR.
8.3.1. Deleting the KataConfig custom resource
You can delete the KataConfig custom resource (CR) by using the command line.
Prerequisites
- You have installed the OpenShift CLI (oc).
- You have access to the cluster as a user with the cluster-admin role.
Procedure
Delete the KataConfig CR by running the following command:
$ oc delete kataconfig example-kataconfig
Verify that the custom resource was deleted by running the following command:
$ oc get kataconfig example-kataconfig
Example output
No example-kataconfig instances exist
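Deleting the KataConfig CR can trigger an update of the worker nodes before the resource is fully removed. If you want to follow that progress, one approach, assuming the default worker machine config pool, is:
$ watch oc get mcp worker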
8.3.2. Ensuring that the peer pods config map image ID is empty
When you delete the KataConfig CR, the peer pods config map image ID should also be deleted. For AWS and Azure deployments, check to ensure that the peer pods config map image ID is empty.
Procedure
Obtain the image ID from the config map you created for the peer pods:
$ oc get cm -n openshift-sandboxed-containers-operator peer-pods-cm -o jsonpath="{.data.AZURE_IMAGE_ID}"
Use PODVM_AMI_ID for AWS (see the equivalent AWS command below). Use AZURE_IMAGE_ID for Azure.
Check the status stanza of the YAML file.
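For example, the equivalent check for an AWS deployment:
$ oc get cm -n openshift-sandboxed-containers-operator peer-pods-cm -o jsonpath="{.data.PODVM_AMI_ID}"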
If the PODVM_AMI_ID parameter for AWS or the AZURE_IMAGE_ID parameter for Azure contains a value, set the value to "" by patching the peer pods config map:
$ oc patch configmap peer-pods-cm -n openshift-sandboxed-containers-operator -p '{"data":{"AZURE_IMAGE_ID":""}}'
Use PODVM_AMI_ID for AWS. Use AZURE_IMAGE_ID for Azure.
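For example, the equivalent patch for an AWS deployment:
$ oc patch configmap peer-pods-cm -n openshift-sandboxed-containers-operator -p '{"data":{"PODVM_AMI_ID":""}}'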
8.3.3. Creating the KataConfig custom resource
You must create the KataConfig custom resource (CR) to install kata-remote as a runtime class on your worker nodes.
Creating the KataConfig CR triggers the OpenShift sandboxed containers Operator to do the following:
- Create a RuntimeClass CR named kata-remote with a default configuration. This enables users to configure workloads to use kata-remote as the runtime by referencing the CR in the RuntimeClassName field, as shown in the example pod specification after the following note. This CR also specifies the resource overhead for the runtime.
OpenShift sandboxed containers installs kata-remote as a secondary, optional runtime on the cluster and not as the primary runtime.
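For example, a workload opts in to the sandboxed runtime by setting runtimeClassName in its pod specification. This is a minimal sketch; the pod name and container image are placeholders:
apiVersion: v1
kind: Pod
metadata:
  name: example-kata-remote-pod
spec:
  runtimeClassName: kata-remote   # run this pod with the kata-remote runtime class
  containers:
  - name: example
    image: registry.access.redhat.com/ubi9/ubi-minimal
    command: ["sleep", "3600"]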
Creating the KataConfig CR automatically reboots the worker nodes. The reboot can take from 10 to more than 60 minutes. Factors that can increase the reboot time are as follows:
- A larger OpenShift Container Platform deployment with a greater number of worker nodes.
- Activation of the BIOS and Diagnostics utility.
- Deployment on a hard disk drive rather than an SSD.
- Deployment on physical nodes such as bare metal, rather than on virtual nodes.
- A slow CPU and network.
Prerequisites
- You have access to the cluster as a user with the cluster-admin role.
Procedure
Create an example-kataconfig.yaml manifest file according to the following example:
apiVersion: kataconfiguration.openshift.io/v1
kind: KataConfig
metadata:
  name: example-kataconfig
spec:
  enablePeerPods: true
  logLevel: info
#  kataConfigPoolSelector:
#    matchLabels:
#      <label_key>: '<label_value>'
Optional: If you have applied node labels to install kata-remote on specific nodes, uncomment kataConfigPoolSelector and specify the key and value.
Create the KataConfig CR by running the following command:
$ oc apply -f example-kataconfig.yaml
The new KataConfig CR is created and installs kata-remote as a runtime class on the worker nodes. Wait for the kata-remote installation to complete and the worker nodes to reboot before verifying the installation.
Monitor the installation progress by running the following command:
$ watch "oc describe kataconfig | sed -n /^Status:/,/^Events/p"
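If you prefer a scripted check, you can read the same condition directly from the CR status. This is a sketch that assumes the CR is named example-kataconfig and that the InProgress condition follows the structure described in the next step:
$ oc get kataconfig example-kataconfig -o jsonpath='{.status.conditions[?(@.type=="InProgress")].status}'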
When the status of all workers under kataNodes is installed and the condition InProgress is False without specifying a reason, kata-remote is installed on the cluster.
Verify the daemon set by running the following command:
$ oc get -n openshift-sandboxed-containers-operator ds/peerpodconfig-ctrl-caa-daemon
Verify the runtime classes by running the following command:
$ oc get runtimeclass
Example output
NAME          HANDLER       AGE
kata          kata          152m
kata-remote   kata-remote   152m