Chapter 1. OpenShift sandboxed containers 1.3 release notes
1.1. About this release
These release notes track the development of OpenShift sandboxed containers 1.3 alongside Red Hat OpenShift Container Platform 4.11.
This product is fully supported and enabled by default as of OpenShift Container Platform 4.10.
1.2. New features and enhancements
1.2.1. Container ID in metrics list
The sandbox_id label, which contains the ID of the relevant sandboxed container, now appears in the metrics list on the Metrics page in the web console.
In addition, the kata-monitor process now adds three new labels to kata-specific metrics: cri_uid, cri_name, and cri_namespace. These labels relate kata-specific metrics to their corresponding Kubernetes workloads.
For more information about kata-specific metrics, see About OpenShift sandboxed containers metrics.
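For example, you can use the new labels to group kata-specific metrics by the workload they belong to. The following PromQL query is a minimal sketch that sums all series whose names start with kata_ by workload namespace and name; the exact metric names available depend on your cluster, so check the Metrics page first:

sum by (cri_namespace, cri_name) ({__name__=~"kata_.+"})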
1.2.2. OpenShift sandboxed containers availability on AWS bare metal
Previously, OpenShift sandboxed containers availability on AWS bare metal was in Technology Preview. With this release, installing OpenShift sandboxed containers on AWS bare-metal clusters is fully supported.
1.2.3. Support for OpenShift sandboxed containers on single-node OpenShift
OpenShift sandboxed containers now work on single-node OpenShift clusters when the OpenShift sandboxed containers Operator is installed by Red Hat Advanced Cluster Management (RHACM).
1.3. Bug fixes
- Previously, when creating the KataConfig CR and observing the pod status under the openshift-sandboxed-containers-operator namespace, a huge number of restarts for monitor pods was shown. The monitor pods use a specific SELinux policy that is installed as part of the sandboxed-containers extension installation. The monitor pod was created immediately, but the SELinux policy was not yet available, which resulted in a pod creation error followed by a pod restart. With this release, the SELinux policy is available when the monitor pod is created, and the monitor pod transitions to a Running state immediately. (KATA-1338)
- Previously, OpenShift sandboxed containers deployed a security context constraint (SCC) on startup that enforced a custom SELinux policy that was not available on Machine Config Operator (MCO) pods. This caused the MCO pod to change to a CrashLoopBackOff state and cluster upgrades to fail. With this release, OpenShift sandboxed containers deploys the SCC when creating the KataConfig CR and no longer enforces the custom SELinux policy. (KATA-1373)
- Previously, when uninstalling the OpenShift sandboxed containers Operator, the sandboxed-containers-operator-scc custom resource was not deleted. With this release, the sandboxed-containers-operator-scc custom resource is deleted when uninstalling the OpenShift sandboxed containers Operator (see the verification example after this list). (KATA-1569)
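For example, after uninstalling the Operator you can verify that the resource from KATA-1569 is gone. This is a minimal check with a standard oc command, assuming the resource is a SecurityContextConstraints object, as its name suggests; a NotFound error indicates that it was deleted as expected:

$ oc get scc sandboxed-containers-operator-scc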
1.4. Known issues
If you are using OpenShift sandboxed containers, you might receive SELinux denials when accessing files or directories mounted from the hostPath volume in an OpenShift Container Platform cluster. These denials can occur even when running privileged sandboxed containers, because privileged sandboxed containers do not disable SELinux checks.

Following SELinux policy on the host guarantees full isolation of the host file system from the sandboxed workload by default. This also provides stronger protection against potential security flaws in the virtiofsd daemon or QEMU.

If the mounted files or directories do not have specific SELinux requirements on the host, you can use local persistent volumes as an alternative. Files are automatically relabeled to container_file_t, following SELinux policy for container runtimes. See Persistent storage using local volumes for more information.

Automatic relabeling is not an option when mounted files or directories are expected to have specific SELinux labels on the host. Instead, you can set custom SELinux rules on the host to allow the virtiofsd daemon to access these specific labels. (BZ#1904609)
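If you choose the local persistent volume alternative described above, the manifest is ordinary Kubernetes YAML. The following is a minimal sketch; the volume name, capacity, storage class, path, and node hostname are all illustrative and must match your cluster:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: example-local-pv            # illustrative name
spec:
  capacity:
    storage: 10Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Delete
  storageClassName: local-storage   # illustrative storage class
  local:
    path: /mnt/local-storage/data   # directory on the node to expose
  nodeAffinity:                     # required for local volumes
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - worker-0                # illustrative node name

Files in the volume are then relabeled to container_file_t when the volume is mounted into a pod.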
Some OpenShift sandboxed containers Operator pods use container CPU resource limits to increase the number of available CPUs for the pod. These pods might receive fewer CPUs than requested. If the functionality is available inside the container, you can diagnose CPU resource issues by using oc rsh <pod> to access a pod and running the lscpu command:

$ lscpu

Example output:

CPU(s):                16
On-line CPU(s) list:   0-12,14,15
Off-line CPU(s) list:  13

The list of offline CPUs will likely change unpredictably from run to run.
As a workaround, you can use a pod annotation to request additional CPUs rather than setting a CPU limit. CPU requests that use a pod annotation are not affected by this issue, because the processor allocation method is different. Rather than setting a CPU limit, add the following annotation to the metadata of the pod:

metadata:
  annotations:
    io.katacontainers.config.hypervisor.default_vcpus: "16"
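For context, the annotation goes in the pod metadata next to the sandboxed runtime class. The following pod manifest is a minimal sketch; the pod name and image are illustrative, and runtimeClassName: kata refers to the runtime class installed by the Operator:

apiVersion: v1
kind: Pod
metadata:
  name: example-kata-pod                                     # illustrative name
  annotations:
    io.katacontainers.config.hypervisor.default_vcpus: "16"  # request 16 vCPUs
spec:
  runtimeClassName: kata                                     # run as a sandboxed container
  containers:
  - name: app
    image: registry.access.redhat.com/ubi8/ubi-minimal       # illustrative image
    command: ["sleep", "infinity"]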
The progress of the runtime installation is shown in the status section of the kataConfig custom resource (CR). However, the progress is not shown if all of the following conditions are true:

- There are no worker nodes defined. You can run oc get machineconfigpool to check the number of worker nodes in the machine config pool.
- No kataConfigPoolSelector is specified to select nodes for installation.

In this case, the installation starts on the control plane nodes because the Operator assumes it is a converged cluster where nodes have both control plane and worker roles. The status section of the kataConfig CR is not updated during the installation. (KATA-1017)
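To inspect the status section directly, you can read the CR back with a standard oc command. This assumes the CR is named example-kataconfig, matching the upgrade example later in these notes:

$ oc get kataconfig example-kataconfig -o yaml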
When using older versions of the Buildah tool in OpenShift sandboxed containers, the build fails with the following error:

process exited with error: fork/exec /bin/sh: no such file or directory
subprocess exited with status 1

You must use the latest version of Buildah, available at quay.io.
In the KataConfig tab in the web console, if you click Create KataConfig while in the YAML view, the KataConfig YAML is missing the spec fields. Toggling to the Form view and then back to the YAML view fixes this issue and displays the full YAML. (KATA-1372)
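For reference, a KataConfig created from YAML carries its settings under spec. The following is a minimal sketch; the CR name and the pool selector label are illustrative, and you should verify the apiVersion against the CRD installed in your cluster:

apiVersion: kataconfiguration.openshift.io/v1
kind: KataConfig
metadata:
  name: example-kataconfig
spec:
  kataConfigPoolSelector:       # optional; selects the nodes for installation
    matchLabels:
      custom-kata: "true"       # illustrative label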
In the KataConfig tab in the web console, a 404: Not found error message appears whether a KataConfig CR already exists or not. To access an existing KataConfig CR, go to Home > Search. From the Resources list, select KataConfig. (KATA-1605)

Upgrading OpenShift sandboxed containers does not automatically update the existing KataConfig CR. As a result, monitor pods from previous deployments are not restarted and continue to run with an outdated kataMonitor image. Upgrade the kataMonitor image with the following command:

$ oc patch kataconfig example-kataconfig --type merge --patch '{"spec":{"kataMonitorImage":"registry.redhat.io/openshift-sandboxed-containers/osc-monitor-rhel8:1.3.0"}}'

You can also upgrade the kataMonitor image by editing the KataConfig YAML in the web console.
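To confirm that the patch was applied, you can read the field back with standard oc output options:

$ oc get kataconfig example-kataconfig -o jsonpath='{.spec.kataMonitorImage}'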
1.5. Asynchronous errata updates
Security, bug fix, and enhancement updates for OpenShift sandboxed containers 1.3 are released as asynchronous errata through the Red Hat Network. All OpenShift Container Platform 4.11 errata are available on the Red Hat Customer Portal. For more information about asynchronous errata, see the OpenShift Container Platform Life Cycle.
Red Hat Customer Portal users can enable errata notifications in the account settings for Red Hat Subscription Management (RHSM). When errata notifications are enabled, users are notified by email whenever new errata relevant to their registered systems are released.
Red Hat Customer Portal user accounts must have systems registered and consuming OpenShift Container Platform entitlements in order to receive OpenShift Container Platform errata notification emails.
This section will continue to be updated over time to provide notes on enhancements and bug fixes for future asynchronous errata releases of OpenShift sandboxed containers 1.3.
1.5.1. RHSA-2022:6072 - OpenShift sandboxed containers 1.3.0 image release, bug fix, and enhancement advisory
Issued: 2022-08-17
OpenShift sandboxed containers release 1.3.0 is now available. This advisory contains an update for OpenShift sandboxed containers with enhancements and bug fixes.
The list of bug fixes included in the update is documented in the RHSA-2022:6072 advisory.
1.5.2. RHSA-2022:7058 - OpenShift sandboxed containers 1.3.1 security fix and bug fix advisory
Issued: 2022-10-19
OpenShift sandboxed containers release 1.3.1 is now available. This advisory contains an update for OpenShift sandboxed containers with security fixes and a bug fix.
The list of bug fixes included in the update is documented in the RHSA-2022:7058 advisory.