Chapter 1. OpenShift sandboxed containers 1.2 release notes

1.1. About this release

These release notes track the development of OpenShift sandboxed containers 1.2 alongside Red Hat OpenShift Container Platform 4.10.

This product was previously released as a Technology Preview product in OpenShift Container Platform 4.9 and is now generally available and enabled by default in OpenShift Container Platform 4.10.

1.2. New features and enhancements

This release adds the following features to OpenShift sandboxed containers.

1.2.1. Kata specific metrics and dashboard

The OpenShift sandboxed containers Operator now deploys an osc-monitor daemon set. This enables the collection of metrics specific to workloads running in sandboxed containers, including metrics about the hypervisor and the Guest OS instances. In addition, a pre-configured dashboard provides insights into OpenShift sandboxed containers components, such as the total number of lightweight VMs enabled in a cluster and the CPU and memory consumption per VM. All metrics, as well as the dashboard, are available in the web console. For more information, see Monitoring OpenShift sandboxed containers.
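
As a quick check that metrics collection is running, you can list the daemon sets in the Operator namespace. This is a minimal sketch, assuming the default openshift-sandboxed-containers-operator namespace:

$ oc get daemonset -n openshift-sandboxed-containers-operator

The osc-monitor daemon set typically reports one ready pod for each node where the Kata runtime is installed.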

1.2.2. Enhanced logging

Administrators can now collect enhanced logs for OpenShift sandboxed containers runtime components. Enhanced logs are available when the CRI-O log level is set to debug. These logs are collected by the must-gather tool, or can be viewed in the node journal. For more information, see Enabling debug logs for OpenShift sandboxed containers.
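
One way to raise the CRI-O log level on a set of nodes is a ContainerRuntimeConfig custom resource. The following is a hedged sketch, not the documented procedure: the resource name is a placeholder and the pool selector label is an assumption that must match the machine config pool containing your Kata nodes. Refer to the linked procedure for the supported steps.

apiVersion: machineconfiguration.openshift.io/v1
kind: ContainerRuntimeConfig
metadata:
  name: crio-debug                      # placeholder name
spec:
  machineConfigPoolSelector:
    matchLabels:
      custom-crio: debug-logs           # placeholder: a label you add to the target machine config pool
  containerRuntimeConfig:
    logLevel: debug                     # raises the CRI-O log level

After the change rolls out, the debug logs appear in the node journal and are collected by must-gather, for example:

$ oc adm node-logs <node_name> -u crio
$ oc adm must-gather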

1.2.3. Check node eligibility to run OpenShift sandboxed containers

Administrators can now check the eligibility of cluster nodes to run OpenShift sandboxed containers. This feature uses the Node Feature Discovery (NFD) Operator to detect node capabilities. Eligible nodes are labeled with feature.node.kubernetes.io/runtime.kata, and the OpenShift sandboxed containers Operator uses this label to select candidate nodes for installation.

To use this feature, the administrator must deploy the NFD Operator, create a specific NodeFeatureDiscovery custom resource, and enable checkNodeEligibility when creating the KataConfig custom resource. For more information, see Checking the eligibility of cluster nodes to run OpenShift sandboxed containers.
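
For example, after the NFD Operator is deployed and the NodeFeatureDiscovery CR is created, node eligibility checking is turned on through the KataConfig CR. This is a minimal sketch; the resource name is a placeholder:

apiVersion: kataconfiguration.openshift.io/v1
kind: KataConfig
metadata:
  name: example-kataconfig              # placeholder name
spec:
  checkNodeEligibility: true            # install only on nodes labeled as eligible by NFD

Labeled nodes can then be listed with a label selector, for example:

$ oc get nodes -l feature.node.kubernetes.io/runtime.kata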

1.2.4. OpenShift sandboxed containers compatibility with OpenShift Virtualization

Users can now run OpenShift sandboxed containers on clusters with OpenShift Virtualization when VMs are configured correctly. For more information, see Using OpenShift sandboxed containers with OpenShift Virtualization.

1.2.5. OpenShift sandboxed containers availability on AWS bare metal (Technology Preview)

Users can now install OpenShift sandboxed containers on AWS bare-metal clusters. This feature is in Technology Preview and not fully supported. For more information, see Understanding OpenShift sandboxed containers.

1.3. Bug fixes

  • Previously, using loop devices in a sandboxed container was not possible due to missing kernel modules. With this release, these kernel modules are included in the package. (KATA-1334)
  • Previously, the MachineConfigPool (MCP) object created by the Operator to track nodes with the Kata runtime installed was not automatically removed on deletion of the KataConfig custom resource (CR). With this release, the deletion of the KataConfig CR results in the removal of the kata-oc MCP object. (KATA-1184)
  • Previously, when you set the kataConfigPoolSelector field and later changed it, the OpenShift sandboxed containers Operator did not apply the change. With this release, the Operator acts on changes to the kataConfigPoolSelector field in the KataConfig custom resource and adapts installations of the runtime on nodes accordingly. (KATA-1190)
  • Previously, the SourceImage field was displayed in the web console even though setting it had no effect on the installation. With this release, the unused SourceImage field is no longer displayed when creating the KataConfig CR from the web console. (KATA-1015)

1.4. Known issues

  • If you are using OpenShift sandboxed containers, you might receive SELinux denials when accessing files or directories mounted from the hostPath volume in an OpenShift Container Platform cluster. These denials can occur even when running privileged sandboxed containers because privileged sandboxed containers do not disable SELinux checks.

    Following SELinux policy on the host guarantees full isolation of the host file system from the sandboxed workload by default, and provides stronger protection against potential security flaws in the virtiofsd daemon or QEMU.

    If the mounted files or directories do not have specific SELinux requirements on the host, you can use local persistent volumes as an alternative. Files are automatically relabeled to container_file_t, following SELinux policy for container runtimes. See Persistent storage using local volumes for more information.
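
    If local persistent volumes fit your use case, a minimal PersistentVolume sketch could look like the following; the name, storage class, path, and node name are placeholders, and the linked documentation describes the full procedure:

    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: example-local-pv                     # placeholder name
    spec:
      capacity:
        storage: 10Gi
      accessModes:
        - ReadWriteOnce
      storageClassName: local-sc                 # placeholder storage class
      local:
        path: /mnt/local-storage/disk1           # placeholder path on the node
      nodeAffinity:                               # local volumes must be pinned to a node
        required:
          nodeSelectorTerms:
            - matchExpressions:
                - key: kubernetes.io/hostname
                  operator: In
                  values:
                    - worker-0                    # placeholder node name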

    Automatic relabeling is not an option when mounted files or directories are expected to have specific SELinux labels on the host. Instead, you can set custom SELinux rules on the host in order to allow the virtiofsd daemon to access these specific labels. (BZ#1904609)

  • You might encounter an issue where the Machine Config Operator (MCO) pod changes to a CrashLoopBackOff state and the openshift.io/scc annotation of the pod displays sandboxed-containers-operator-scc instead of the default hostmount-anyuid value.

    If this happens, temporarily change the seLinuxContext strategy in the sandboxed-containers-operator-scc SCC to the less restrictive RunAsAny, so that the admission process does not prefer it over the hostmount-anyuid SCC.

    1. Change the seLinuxContext strategy by running the following command:

      $ oc patch scc sandboxed-containers-operator-scc --type=merge --patch '{"seLinuxContext":{"type": "RunAsAny"}}'
    2. Restart the MCO pod by running the following commands:

      $ oc scale deployments/machine-config-operator -n openshift-machine-config-operator --replicas=0
      $ oc scale deployments/machine-config-operator -n openshift-machine-config-operator --replicas=1
    3. Revert the seLinuxContext strategy of the sandboxed-containers-operator-scc to its original value of MustRunAs by running the following command:

      $ oc patch scc sandboxed-containers-operator-scc --type=merge --patch '{"seLinuxContext":{"type": "MustRunAs"}}'
    4. Verify that the hostmount-anyuid SCC is applied to the MCO pod by running the following command:

      $ oc get pods -n openshift-machine-config-operator -l k8s-app=machine-config-operator -o yaml | grep scc
      openshift.io/scc: hostmount-anyuid

      (BZ#2057545)

  • OpenShift sandboxed containers pods that use container CPU resource limits to increase the number of available CPUs might receive fewer CPUs than requested. If the lscpu command is available inside the container, you can diagnose the CPU resources by accessing the container with oc rsh <pod> and running lscpu:

    $ lscpu

    Example output

    CPU(s):                          16
    On-line CPU(s) list:             0-12,14,15
    Off-line CPU(s) list:            13

    The list of offline CPUs is likely to change unpredictably from run to run.

    As a workaround, you can use a pod annotation to request additional CPUs rather than setting a CPU limit. CPUs requested through the annotation are allocated by a different method and are not affected by this issue. Instead of setting a CPU limit, add the following annotation to the pod metadata:

    metadata:
      annotations:
        io.katacontainers.config.hypervisor.default_vcpus: "16"
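
    For example, a complete pod that requests extra vCPUs through the annotation instead of a CPU limit could look like the following sketch; the pod name and image are placeholders, and runtimeClassName: kata runs the pod as a sandboxed container:

    apiVersion: v1
    kind: Pod
    metadata:
      name: example-sandboxed-pod                # placeholder name
      annotations:
        io.katacontainers.config.hypervisor.default_vcpus: "16"
    spec:
      runtimeClassName: kata
      containers:
      - name: app
        image: registry.example.com/app:latest   # placeholder image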

    (KATA-1376)

  • The progress of the runtime installation is shown in the status section of the KataConfig CR. However, the progress is not shown if all of the following conditions are true:

    • The cluster has a machine config pool worker without any members (machinecount=0).
    • No kataConfigPoolSelector is specified to select nodes for installation.

    In this case, the installation starts on the master nodes because the Operator assumes it is a converged cluster where nodes have both master and worker roles. The status section of the KataConfig CR is not updated during the installation. (KATA-1017)
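
    To check whether these conditions apply, you can inspect the worker machine config pool and the CR status; a quick sketch, assuming the CR is named example-kataconfig:

    $ oc get machineconfigpool worker
    $ oc describe kataconfig example-kataconfig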

  • When you create the KataConfig CR and observe the pod status in the openshift-sandboxed-containers-operator namespace, the monitor pods show a large number of restarts. The monitor pods use a specific SELinux policy that is installed as part of the sandboxed-containers extension installation. The monitor pod is created immediately; however, the SELinux policy is not yet available, which results in a pod creation error followed by a pod restart. When the extension installation succeeds, the SELinux policy becomes available and the monitor pod transitions to the Running state. This does not affect any OpenShift sandboxed containers Operator functionality. (KATA-1338)
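
    You can watch the monitor pods until the SELinux policy becomes available and they reach the Running state, for example:

    $ oc get pods -n openshift-sandboxed-containers-operator --watch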

1.5. Asynchronous errata updates

Security, bug fix, and enhancement updates for OpenShift sandboxed containers 1.2 are released as asynchronous errata through the Red Hat Network. All OpenShift Container Platform 4.10 errata is available on the Red Hat Customer Portal. See the OpenShift Container Platform Life Cycle for more information about asynchronous errata.

Red Hat Customer Portal users can enable errata notifications in the account settings for Red Hat Subscription Management (RHSM). When errata notifications are enabled, users are notified via email whenever new errata relevant to their registered systems are released.

Note

Red Hat Customer Portal user accounts must have systems registered and consuming OpenShift Container Platform entitlements for OpenShift Container Platform errata notification emails to generate.

This section will continue to be updated over time to provide notes on enhancements and bug fixes for future asynchronous errata releases of OpenShift sandboxed containers 1.2.

1.5.1. RHSA-2022:5725 - OpenShift sandboxed containers 1.2.2 bug fix advisory.

Issued: 2022-07-26

OpenShift sandboxed containers release 1.2.2 is now available. This advisory contains an update for OpenShift sandboxed containers with bug fixes.

The list of bug fixes included in the update is documented in the RHSA-2022:5725 advisory.

1.5.2. RHSA-2022:1508 - OpenShift sandboxed containers 1.2.1 bug fix advisory.

Issued: 2022-04-21

OpenShift sandboxed containers release 1.2.1 is now available. This advisory contains an update for OpenShift sandboxed containers with bug fixes.

The list of bug fixes included in the update is documented in the RHSA-2022:1508 advisory.

1.5.3. RHSA-2022:0855 - OpenShift sandboxed containers 1.2.0 image release, security update, bug fix, and enhancement advisory.

Issued: 2022-03-14

OpenShift sandboxed containers release 1.2.0 is now available. This advisory contains an update for OpenShift sandboxed containers with enhancements, security updates, and bug fixes.

The list of bug fixes included in the update is documented in the RHSA-2022:0855 advisory.
