Chapter 1. OpenShift sandboxed containers 1.4 release notes
1.1. About this release
These release notes track the development of OpenShift sandboxed containers 1.4 alongside Red Hat OpenShift 4.13.
This product is fully supported and enabled by default as of Red Hat OpenShift 4.10.
1.2. New features and enhancements
1.2.1. Peer pods support for OpenShift sandboxed containers (Technology Preview)
Users can now deploy OpenShift sandboxed containers workloads using peer pods on AWS or Microsoft Azure. Peer pods eliminate the need for nested virtualization. This feature is in Technology Preview and is not fully supported. For more information, see Deploying OpenShift sandboxed containers workloads using peer pods.
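As a minimal sketch, peer pods are enabled through the enablePeerPods field of the KataConfig custom resource, which is described in the known issues section below. The apiVersion and resource name shown here are assumptions and may differ in your cluster:

apiVersion: kataconfiguration.openshift.io/v1   # assumed apiVersion
kind: KataConfig
metadata:
  name: example-kataconfig                      # hypothetical name
spec:
  enablePeerPods: true                          # field described in these release notes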
1.2.2. QEMU error log collection
QEMU warning and error logs are now written to the node journal, the Kata runtime logs, and the CRI-O logs. For more information, see Viewing debug logs for OpenShift sandboxed containers.
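One hypothetical way to inspect these messages in the node journal is to open a debug shell on the worker node and filter the CRI-O journal entries; the exact filtering is an assumption and not part of the documented procedure:

$ oc debug node/<node_name>
sh-4.4# chroot /host
sh-4.4# journalctl -u crio | grep -i qemu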
1.2.3. Updated channel for installing OpenShift sandboxed containers Operator
The subscription channel for installing the OpenShift sandboxed containers Operator is now stable, instead of stable-<version>, to ensure consistency across releases.
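For illustration, a Subscription object that uses the stable channel might look like the following sketch; the package name and namespace shown are assumptions and are not taken from these release notes:

apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: sandboxed-containers-operator            # assumed package name
  namespace: openshift-sandboxed-containers-operator   # assumed namespace
spec:
  channel: stable                                # fixed channel name described above
  name: sandboxed-containers-operator
  source: redhat-operators
  sourceNamespace: openshift-marketplace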
1.3. Bug fixes
- Previously, upgrading OpenShift sandboxed containers did not automatically update the existing KataConfig CR. As a result, monitor pods from previous deployments were not restarted and continued to run with an outdated kataMonitor image. Starting from release 1.3.2, the kataMonitorImage was removed from the KataConfig CR, and the upgrade for all monitor pods is handled internally by the Operator.
- Previously, users could not install OpenShift sandboxed containers on a disconnected cluster. The pull specification of the kata-monitor container image used a tag instead of a digest. This prevented the image from being mirrored with the ImageContentSourcePolicy resource. With this release, the CSV spec.relatedImages section has been updated to ensure that all container images in the OpenShift sandboxed containers Operator are included. As a result, all container pull specifications now use digests instead of tags, enabling the installation of OpenShift sandboxed containers in disconnected environments.
- Previously, metrics were not available for OpenShift sandboxed containers running on a tainted node. With this release, a toleration, similar to the sketch after this list, has been added to the kata-monitor pods, enabling the pods to run and collect metrics on any node, including tainted nodes. (KATA-2121)
- Previously, the base images for the OpenShift sandboxed containers Operator used ubi8/ubi-minimal images. With this release, to ensure compatibility with RHEL 9 clusters and Red Hat OpenShift 4.13, the base images have been updated to use ubi9/ubi images. (KATA-2212)
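The release notes do not publish the exact toleration added to the kata-monitor pods. As a generic sketch, a blanket toleration of the following form in a pod template allows scheduling on nodes that carry any taint:

tolerations:
- operator: Exists   # tolerates every taint; the actual shipped toleration may be narrower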
1.4. Known issues
- If you are using OpenShift sandboxed containers, you might receive SELinux denials when accessing files or directories mounted from the hostPath volume in a Red Hat OpenShift cluster. These denials can occur even when running privileged sandboxed containers, because privileged sandboxed containers do not disable SELinux checks.
  Following the SELinux policy on the host guarantees full isolation of the host file system from the sandboxed workload by default. This also provides stronger protection against potential security flaws in the virtiofsd daemon or QEMU.
  If the mounted files or directories do not have specific SELinux requirements on the host, you can use local persistent volumes as an alternative. Files are automatically relabeled to container_file_t, following the SELinux policy for container runtimes. See Persistent storage using local volumes, and the example persistent volume after this list.
  Automatic relabeling is not an option when mounted files or directories are expected to have specific SELinux labels on the host. Instead, you can set custom SELinux rules on the host to allow the virtiofsd daemon to access these specific labels. (KATA-469)
- Some OpenShift sandboxed containers Operator pods use container CPU resource limits to increase the number of available CPUs for the pod. These pods might receive fewer CPUs than requested. If the functionality is available inside the container, you can diagnose CPU resource issues by using oc rsh <pod> to access a pod and running the lscpu command:

  $ lscpu

  Example output:

  CPU(s):                16
  On-line CPU(s) list:   0-12,14,15
  Off-line CPU(s) list:  13

  The list of offline CPUs is likely to change unpredictably from run to run.
  As a workaround, you can use a pod annotation to request additional CPUs rather than setting a CPU limit. CPU requests that use a pod annotation are not affected by this issue, because the processor allocation method is different. Rather than setting a CPU limit, add the following annotation to the metadata of the pod:

  metadata:
    annotations:
      io.katacontainers.config.hypervisor.default_vcpus: "16"

- The progress of the runtime installation is shown in the status section of the KataConfig custom resource (CR). However, the progress is not shown if all of the following conditions are true:
  - There are no worker nodes defined. You can run oc get machineconfigpool to check the number of worker nodes in the machine config pool.
  - No kataConfigPoolSelector is specified to select nodes for installation.
  In this case, the installation starts on the control plane nodes because the Operator assumes it is a converged cluster where nodes have both control plane and worker roles. The status section of the KataConfig CR is not updated during the installation. (KATA-1017) See the kataConfigPoolSelector example after this list.
- In the KataConfig tab in the web console, if you click Create KataConfig while in the YAML view, the KataConfig YAML is missing the spec fields. Toggling to the Form view and then back to the YAML view fixes this issue and displays the full YAML. (KATA-1372)
- In the KataConfig tab in the web console, a 404: Not found error message appears whether a KataConfig CR already exists or not. To access an existing KataConfig CR, go to Home > Search. From the Resources list, select KataConfig. (KATA-1605)
- During the installation of the KataConfig CR, the node status is incorrect if the KataConfig CR deletion is initiated before the first node reboots. When this happens, the Operator is stuck in a state where it attempts to delete and install the KataConfig CR simultaneously. The expected behavior is that the installation stops and the KataConfig CR is deleted. (KATA-1851)
- When you set SELinux Multi-Category Security (MCS) labels in the security context of a container, the pod does not start and throws the following error:

  Error: CreateContainer failed: EACCES: Permission denied: unknown

  The runtime does not have access to the security context of the containers when the sandboxed container is created. This means that virtiofsd does not run with the appropriate SELinux label and cannot access host files for the container. As a result, you cannot rely on MCS labels to isolate files in the sandboxed container on a per-container basis, and all containers can access all files within the sandboxed container. Currently, there is no workaround for this issue. See the example security context after this list.
- When stopping a sandboxed container workload, the following QEMU error messages are logged to the worker node system journal:

  qemu-kvm: Failed to write msg.
  qemu-kvm: Failed to set msg fds.
  qemu-kvm: vhost VQ 0 ring restore failed
  qemu-kvm: vhost_set_vring_call failed

  These errors are harmless and can be ignored.
  For more information on how to access the system journal logs, see Collecting OpenShift sandboxed containers data for Red Hat Support.
- When installing the OpenShift sandboxed containers Operator by using the web console, the UI might display an incorrect Operator version after you click Install. The incorrect version appears in gray text in the installation window and reads: <Version number> provided by Red Hat.
  The correct Operator is installed. You can navigate to Operators > Installed Operators to see the correct version listed beneath the OpenShift sandboxed containers Operator.
- When using peer pods with OpenShift sandboxed containers, the kata-remote-cc runtime class is created when you create the KataConfig CR and set the enablePeerPods field to true. As a result, users should see the kata-remote-cc runtime class in the KataConfig CR, in addition to the kata runtime class, and should be able to run both standard Kata pods and peer-pod Kata pods on the same cluster.
  However, as a cluster administrator, when you examine the KataConfig CR, you only find kata in the Status.runtimeClass field. The kata-remote-cc runtime class does not appear. Currently, there is no workaround for this issue (see the inspection commands after this list).
- FIPS compliance for OpenShift sandboxed containers only applies to the kata runtime class. The new peer pods runtime class, kata-remote-cc, is not yet fully supported and has not been tested for FIPS compliance. (KATA-2166)
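For the SELinux hostPath issue (KATA-469), local persistent volumes are suggested as an alternative. The following is a minimal, hypothetical sketch of a local persistent volume; the name, storage class, path, and node value are placeholders and are not taken from the product documentation:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: example-local-pv            # hypothetical name
spec:
  capacity:
    storage: 10Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Delete
  storageClassName: local-sc        # hypothetical storage class
  local:
    path: /mnt/local-storage/data   # hypothetical path on the node
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - <node_name>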
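For the installation progress issue (KATA-1017), specifying a kataConfigPoolSelector avoids the converged-cluster assumption. The sketch below assumes the field accepts a standard label selector; the apiVersion and node label shown are assumptions:

apiVersion: kataconfiguration.openshift.io/v1   # assumed apiVersion
kind: KataConfig
metadata:
  name: example-kataconfig
spec:
  kataConfigPoolSelector:
    matchLabels:
      custom-kata-pool: "true"   # hypothetical label applied to the target nodes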
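For the MCS label issue, the following hypothetical security context illustrates the kind of setting that triggers the CreateContainer error in a sandboxed container; the level value is only an example:

securityContext:
  seLinuxOptions:
    level: "s0:c123,c456"   # example MCS label; setting categories like this triggers the error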
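For the kata-remote-cc issue, one way to inspect what the cluster reports is sketched below. The KataConfig CR name and the jsonpath expression are assumptions based on the Status.runtimeClass field mentioned above:

$ oc get runtimeclass
$ oc get kataconfig example-kataconfig -o jsonpath='{.status.runtimeClass}'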
1.5. Limitations
When using older versions of the Buildah tool in OpenShift sandboxed containers, the build fails with the following error:
process exited with error: fork/exec /bin/sh: no such file or directory
subprocess exited with status 1

You must use the latest version of Buildah, available at quay.io.
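As a sketch, one way to obtain a current Buildah image is to pull it from its upstream repository on quay.io; the repository path shown is an assumption and is not taken from these release notes:

$ podman pull quay.io/buildah/stable:latest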
1.6. Asynchronous errata updates
Security, bug fix, and enhancement updates for OpenShift sandboxed containers 1.4 are released as asynchronous errata through the Red Hat Network. All Red Hat OpenShift 4.13 errata are available on the Red Hat Customer Portal. For more information about asynchronous errata, see the Red Hat OpenShift Life Cycle.
Red Hat Customer Portal users can enable errata notifications in the account settings for Red Hat Subscription Management (RHSM). When errata notifications are enabled, users are notified by email whenever new errata relevant to their registered systems are released.
Red Hat Customer Portal user accounts must have systems registered and consuming Red Hat OpenShift entitlements for Red Hat OpenShift errata notification emails to be generated.
This section will continue to be updated over time to provide notes on enhancements and bug fixes for future asynchronous errata releases of OpenShift sandboxed containers 1.4.
1.6.1. RHBA-2023:3529 - OpenShift sandboxed containers 1.4.0 image release, bug fix, and enhancement advisory
Issued: 2023-06-08
OpenShift sandboxed containers release 1.4.0 is now available. This advisory contains an update for OpenShift sandboxed containers with enhancements and bug fixes.
The list of bug fixes included in the update is documented in the RHBA-2023:3529 advisory.
1.6.2. RHSA-2023:4290 - OpenShift sandboxed containers 1.4.1 image release, bug fix, and security advisory
Issued: 2023-07-27
OpenShift sandboxed containers release 1.4.1 is now available. This advisory contains an update for OpenShift sandboxed containers with security and bug fixes.
The list of bug fixes included in the update is documented in the RHSA-2023:4290 advisory.