
Chapter 1. About OpenShift sandboxed containers


OpenShift sandboxed containers for OpenShift Container Platform integrates Kata Containers as an optional runtime, providing enhanced security and isolation by running containerized applications in lightweight virtual machines. This integration provides a more secure runtime environment for sensitive workloads without significant changes to existing OpenShift workflows. This runtime supports containers in dedicated virtual machines (VMs), providing improved workload isolation.

1.1. Features

OpenShift sandboxed containers provides the following features:

Run privileged or untrusted workloads

You can safely run workloads that require specific privileges, without the risk of compromising cluster nodes by running privileged containers. Workloads that require special privileges include the following:

  • Workloads that require special capabilities from the kernel, beyond the default ones granted by standard container runtimes such as CRI-O, for example, to access low-level networking features.
  • Workloads that need elevated root privileges, for example, to access a specific physical device. With OpenShift sandboxed containers, it is possible to pass only a specific device through to the virtual machine (VM), ensuring that the workload cannot access or misconfigure the rest of the system.
  • Workloads for installing or using set-uid root binaries. These binaries grant special privileges and, as such, can present a security risk. With OpenShift sandboxed containers, additional privileges are restricted to the virtual machines, and grant no special access to the cluster nodes.

    Some workloads require privileges specifically for configuring the cluster nodes. Such workloads should still use privileged containers, because running on a virtual machine would prevent them from functioning.

Ensure isolation for sensitive workloads
You can run sensitive workloads in dedicated lightweight virtual machines, isolating them from other workloads and from the host, without significant changes to your existing OpenShift workflows.
Ensure kernel isolation for each workload
You can run workloads that require custom kernel tuning (such as sysctl, scheduler changes, or cache tuning) and the creation of custom kernel modules (such as out-of-tree modules or modules loaded with special arguments).
Share the same workload across tenants
You can run workloads that support many users (tenants) from different organizations sharing the same OpenShift Container Platform cluster. The system also supports running third-party workloads from multiple vendors, such as container network functions (CNFs) and enterprise applications. Third-party CNFs, for example, may not want their custom settings interfering with packet tuning or with sysctl variables set by other applications. Running inside a completely isolated kernel is helpful in preventing "noisy neighbor" configuration problems.
Ensure proper isolation and sandboxing for testing software
You can run containerized workloads with known vulnerabilities or handle issues in an existing application. This isolation enables administrators to give developers administrative control over pods, which is useful when the developer wants to test or validate configurations beyond those an administrator would typically grant. Administrators can, for example, safely and securely delegate kernel packet filtering (eBPF) to developers. eBPF requires CAP_SYS_ADMIN or CAP_BPF privileges and is therefore not allowed under a standard CRI-O configuration, because this would grant access to every process on the container host worker node. Similarly, administrators can grant access to intrusive tools such as SystemTap, or support the loading of custom kernel modules during their development.
Ensure default resource containment through VM boundaries
By default, OpenShift sandboxed containers manages resources such as CPU, memory, storage, and networking in a robust and secure way. Because OpenShift sandboxed containers deploys workloads on VMs, the additional layers of isolation and security provide finer-grained access control over resources. For example, an errant container cannot allocate more memory than is available to its VM. Conversely, a container that needs dedicated access to a network card or to a disk can take complete control over that device without gaining any access to other devices.
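For example, a workload opts into this VM-level isolation by requesting the kata runtime class in its pod specification. The following is a minimal sketch; the pod name, image, and resource values are placeholders, and the memory limit is enforced at the VM boundary rather than only by the host kernel:

apiVersion: v1
kind: Pod
metadata:
  name: sandboxed-example            # placeholder name
spec:
  runtimeClassName: kata             # runtime class installed by the OpenShift sandboxed containers Operator
  containers:
  - name: app
    image: registry.example.com/my-app:latest   # placeholder image
    resources:
      limits:
        memory: "512Mi"              # the container cannot exceed the memory available to its VM
        cpu: "500m"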

1.2. Compatibility with OpenShift Container Platform

The required functionality for OpenShift Container Platform is supported by two main components:

  • Kata runtime: This is included in Red Hat Enterprise Linux CoreOS (RHCOS) and is updated with every OpenShift Container Platform release.
  • OpenShift sandboxed containers Operator: Install the Operator by using either the web console or the OpenShift CLI (oc); a minimal CLI sketch follows this list.
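The following manifests are a minimal sketch of a CLI-based installation, assuming the openshift-sandboxed-containers-operator namespace, the redhat-operators catalog source, and a stable channel; verify the channel and catalog entry for your cluster before applying them with oc apply -f:

apiVersion: v1
kind: Namespace
metadata:
  name: openshift-sandboxed-containers-operator
---
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: sandboxed-containers-operator-group
  namespace: openshift-sandboxed-containers-operator
spec:
  targetNamespaces:
  - openshift-sandboxed-containers-operator
---
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: sandboxed-containers-operator
  namespace: openshift-sandboxed-containers-operator
spec:
  channel: stable                    # assumed channel; confirm in OperatorHub
  name: sandboxed-containers-operator
  source: redhat-operators           # assumed catalog source
  sourceNamespace: openshift-marketplace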

The OpenShift sandboxed containers Operator is a Rolling Stream Operator, which means that the latest version is the only supported version. It works with all currently supported versions of OpenShift Container Platform. For more information, see OpenShift Container Platform Life Cycle Policy.

The Operator depends on the features that come with the RHCOS host and the environment it runs in.

Note

You must install Red Hat Enterprise Linux CoreOS (RHCOS) on the worker nodes. RHEL nodes are not supported.

The following compatibility matrix for OpenShift sandboxed containers and OpenShift Container Platform releases identifies compatible features and environments.

Table 1.1. Supported architectures

  Architecture    OpenShift Container Platform version
  x86_64          4.8 or later
  s390x           4.14 or later

There are two ways to deploy the Kata Containers runtime:

  • Bare metal
  • Peer pods

Peer pods technology for the deployment of OpenShift sandboxed containers in public clouds was available as Developer Preview in OpenShift sandboxed containers 1.5 and OpenShift Container Platform 4.14.

With the release of OpenShift sandboxed containers 1.7, the Operator requires OpenShift Container Platform version 4.15 or later.

Table 1.2. Feature availability by OpenShift version

  Feature                    Deployment method    OpenShift Container Platform 4.15    OpenShift Container Platform 4.16
  Confidential Containers    Bare metal           -                                    -
  Confidential Containers    Peer pods            Technology Preview                   Technology Preview [1]
  GPU support [2]            Bare metal           -                                    -
  GPU support [2]            Peer pods            Developer Preview                    Developer Preview

  1. Technology Preview of Confidential Containers has been available since OpenShift sandboxed containers 1.7.0.
  2. GPU functionality is not available on IBM Z.
Table 1.3. Supported cloud platforms for OpenShift sandboxed containers

  Platform                                    GPU                  Confidential Containers
  AWS Cloud Computing Services                Developer Preview    -
  Microsoft Azure Cloud Computing Services    Developer Preview    Technology Preview [1]

  1. Technology Preview of Confidential Containers has been available since OpenShift sandboxed containers 1.7.0.

1.3. Node eligibility checks

You can verify that your bare-metal cluster nodes support OpenShift sandboxed containers by running a node eligibility check. The most common reason for node ineligibility is lack of virtualization support. If you run sandboxed workloads on ineligible nodes, you will experience errors.

High-level workflow

  1. Install the Node Feature Discovery Operator.
  2. Create the NodeFeatureDiscovery custom resource (CR).
  3. Enable node eligibility checks when you create the KataConfig CR, as shown in the sketch after this list. You can run node eligibility checks on all worker nodes or on selected nodes.
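The following KataConfig sketch shows the shape of such a CR, assuming the checkNodeEligibility field and an optional node selector; the CR name and node label are placeholders:

apiVersion: kataconfiguration.openshift.io/v1
kind: KataConfig
metadata:
  name: example-kataconfig           # placeholder name
spec:
  checkNodeEligibility: true         # run the node eligibility check before enabling the runtime
  kataConfigPoolSelector:            # optional: limit deployment to labeled worker nodes
    matchLabels:
      custom-kata-pool: "true"       # placeholder label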

1.4. Common terms

The following terms are used throughout the documentation.

Sandbox

A sandbox is an isolated environment where programs can run. In a sandbox, you can run untested or untrusted programs without risking harm to the host machine or the operating system.

In the context of OpenShift sandboxed containers, sandboxing is achieved by running workloads in a different kernel using virtualization, providing enhanced control over the interactions between multiple workloads that run on the same host.

Pod

A pod is a construct that is inherited from Kubernetes and OpenShift Container Platform. It represents resources where containers can be deployed. Containers run inside of pods, and pods are used to specify resources that can be shared between multiple containers.

In the context of OpenShift sandboxed containers, a pod is implemented as a virtual machine. Several containers can run in the same pod on the same virtual machine.

OpenShift sandboxed containers Operator
The OpenShift sandboxed containers Operator manages the lifecycle of sandboxed containers on a cluster. You can use the OpenShift sandboxed containers Operator to perform tasks such as the installation and removal of sandboxed containers, software updates, and status monitoring.
Kata Containers
Kata Containers is a core upstream project that is used to build OpenShift sandboxed containers. OpenShift sandboxed containers integrates Kata Containers with OpenShift Container Platform.
KataConfig
KataConfig objects represent configurations of sandboxed containers. They store information about the state of the cluster, such as the nodes on which the software is deployed.
Runtime class
A RuntimeClass object describes which runtime can be used to run a given workload. A runtime class that is named kata is installed and deployed by the OpenShift sandboxed containers Operator. The runtime class contains information about the runtime that describes resources that the runtime needs to operate, such as the pod overhead.
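As an illustration, the installed runtime class has roughly the following shape; the overhead values shown here are illustrative assumptions only, not the values the Operator actually sets:

apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: kata
handler: kata                        # CRI-O runtime handler that launches the sandbox VM
overhead:
  podFixed:                          # resources reserved per pod for the VM and guest kernel
    memory: "350Mi"                  # illustrative value
    cpu: "250m"                      # illustrative value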
Peer pod

A peer pod in OpenShift sandboxed containers extends the concept of a standard pod. Unlike a standard sandboxed container, where the virtual machine is created on the worker node itself, in a peer pod, the virtual machine is created through a remote hypervisor using any supported hypervisor or cloud provider API.

The peer pod acts as a regular pod on the worker node, with its corresponding VM running elsewhere. The remote location of the VM is transparent to the user and is specified by the runtime class in the pod specification. The peer pod design circumvents the need for nested virtualization.
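From the user's perspective, selecting a peer pod is only a matter of referencing the corresponding runtime class, kata-remote, in the pod specification. A minimal sketch, with placeholder name and image:

apiVersion: v1
kind: Pod
metadata:
  name: peer-pod-example             # placeholder name
spec:
  runtimeClassName: kata-remote      # the VM backing this pod is created through a remote hypervisor or cloud provider API
  containers:
  - name: app
    image: registry.example.com/my-app:latest   # placeholder image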

IBM Secure Execution
IBM Secure Execution for Linux is an advanced security feature introduced with IBM z15® and LinuxONE III. This feature extends the protection provided by pervasive encryption. IBM Secure Execution safeguards data at rest, in transit, and in use. It enables secure deployment of workloads and ensures data protection throughout its lifecycle. For more information, see Introducing IBM Secure Execution for Linux.
Confidential Containers

Confidential Containers protects containers and data by verifying that your workload is running in a Trusted Execution Environment (TEE). You can deploy this feature to safeguard the privacy of big data analytics and machine learning inferences.

Trustee is a component of Confidential Containers. Trustee is an attestation service that verifies the trustworthiness of the location where you plan to run your workload or where you plan to send confidential information. Trustee includes components deployed on a trusted side and used to verify whether the remote workload is running in a Trusted Execution Environment (TEE). Trustee is flexible and can be deployed in several different configurations to support a wide variety of applications and hardware platforms.

Confidential compute attestation Operator
The Confidential compute attestation Operator manages the installation, lifecycle, and configuration of Confidential Containers.

1.5. OpenShift sandboxed containers Operator

The OpenShift sandboxed containers Operator encapsulates all of the components from Kata Containers. It manages installation, lifecycle, and configuration tasks.

The OpenShift sandboxed containers Operator is packaged in the Operator bundle format as two container images:

  • The bundle image contains metadata and is required to make the Operator ready for Operator Lifecycle Manager (OLM).
  • The second container image contains the actual controller that monitors and manages the KataConfig resource.

The OpenShift sandboxed containers Operator is based on the Red Hat Enterprise Linux CoreOS (RHCOS) extensions concept. RHCOS extensions are a mechanism to install optional OpenShift Container Platform software. The OpenShift sandboxed containers Operator uses this mechanism to deploy sandboxed containers on a cluster.

The sandboxed containers RHCOS extension contains RPMs for Kata, QEMU, and their dependencies. You can enable the extension by using the MachineConfig resources that the Machine Config Operator provides.
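The Operator creates the required MachineConfig for you when you create the KataConfig CR; the following sketch only illustrates the underlying extension mechanism, and the MachineConfig name and extension name are assumptions:

apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  name: 50-enable-sandboxed-containers-extension   # assumed name
  labels:
    machineconfiguration.openshift.io/role: worker
spec:
  config:
    ignition:
      version: 3.2.0
  extensions:
  - sandboxed-containers             # assumed RHCOS extension name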


1.6. About Confidential Containers

Confidential Containers provides a confidential computing environment to protect containers and data by leveraging Trusted Execution Environments.

Important

Confidential Containers on Microsoft Azure Cloud Computing Services, IBM Z®, and IBM® LinuxONE is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.

You can sign container images by using a tool such as Red Hat Trusted Artifact Signer. Then, you create a container image signature verification policy.

The Trustee Operator verifies the signatures, ensuring that only trusted and authenticated container images are deployed in your environment.

For more information, see Exploring the OpenShift Confidential Containers solution.

1.7. OpenShift Virtualization

You can deploy OpenShift sandboxed containers on clusters with OpenShift Virtualization.

To run OpenShift Virtualization and OpenShift sandboxed containers at the same time, your virtual machines must be live migratable so that they do not block node reboots. See About live migration in the OpenShift Virtualization documentation for details.
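For example, a virtual machine can opt into live migration through its eviction strategy. This is a minimal sketch based on the KubeVirt VirtualMachine API used by OpenShift Virtualization; the VM name and sizing are placeholders, and your cluster defaults may already set this behavior:

apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: example-vm                   # placeholder name
spec:
  running: true
  template:
    spec:
      evictionStrategy: LiveMigrate  # allow the VM to be migrated instead of blocking node reboots
      domain:
        devices: {}
        resources:
          requests:
            memory: 2Gi              # placeholder sizing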

1.8. Block volume support

OpenShift Container Platform can statically provision raw block volumes. These volumes do not have a file system, and can provide performance benefits for applications that either write to the disk directly or implement their own storage service.

You can use a local block device as persistent volume (PV) storage with OpenShift sandboxed containers. This block device can be provisioned by using the Local Storage Operator (LSO).

The Local Storage Operator is not installed in OpenShift Container Platform by default. See Installing the Local Storage Operator for installation instructions.

You can provision raw block volumes for OpenShift sandboxed containers by specifying volumeMode: Block in the PV specification.

Block volume example

apiVersion: "local.storage.openshift.io/v1"
kind: "LocalVolume"
metadata:
  name: "local-disks"
  namespace: "openshift-local-storage"
spec:
  nodeSelector:
    nodeSelectorTerms:
    - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - worker-0
  storageClassDevices:
    - storageClassName: "local-sc"
      forceWipeDevicesAndDestroyAllData: false
      volumeMode: Block 1
      devicePaths:
        - /path/to/device 2

  1. Set volumeMode to Block to indicate that this PV is a raw block volume.
  2. Replace this value with the filepath to your LocalVolume resource by-id. PVs are created for these local disks when the provisioner is deployed successfully. You must also use this path to label the node that uses the block device when deploying OpenShift sandboxed containers.
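A workload then consumes the raw block volume through a claim and a volumeDevices entry rather than a volumeMounts entry. The following sketch assumes the local-sc storage class from the example above; the claim name, pod name, image, size, and device path are placeholders:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: sandboxed-block-pvc          # placeholder name
spec:
  accessModes:
  - ReadWriteOnce
  volumeMode: Block                  # must match the volumeMode of the PV
  storageClassName: local-sc
  resources:
    requests:
      storage: 10Gi                  # placeholder size
---
apiVersion: v1
kind: Pod
metadata:
  name: sandboxed-block-consumer     # placeholder name
spec:
  runtimeClassName: kata
  containers:
  - name: app
    image: registry.example.com/my-app:latest   # placeholder image
    volumeDevices:                   # block devices use volumeDevices instead of volumeMounts
    - name: data
      devicePath: /dev/xvda          # placeholder device path inside the container
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: sandboxed-block-pvc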

1.9. FIPS compliance

OpenShift Container Platform is designed for Federal Information Processing Standards (FIPS) 140-2 and 140-3. When running Red Hat Enterprise Linux (RHEL) or Red Hat Enterprise Linux CoreOS (RHCOS) booted in FIPS mode, OpenShift Container Platform core components use the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64, ppc64le, and s390x architectures.

For more information about the NIST validation program, see Cryptographic Module Validation Program. For the latest NIST status for the individual versions of RHEL cryptographic libraries that have been submitted for validation, see Compliance Activities and Government Standards.

OpenShift sandboxed containers can be used on FIPS-enabled clusters.

When running in FIPS mode, OpenShift sandboxed containers components, VMs, and VM images are adapted to comply with FIPS.

Note

FIPS compliance for OpenShift sandboxed containers only applies to the kata runtime class. The peer pod runtime class, kata-remote, is not yet fully supported and has not been tested for FIPS compliance.

FIPS compliance is one of the most critical components required in highly secure environments, to ensure that only supported cryptographic technologies are allowed on nodes.

Important

The use of FIPS validated or Modules In Process cryptographic libraries is only supported on OpenShift Container Platform deployments on the x86_64 architecture.

To understand Red Hat’s view of OpenShift Container Platform compliance frameworks, refer to the Risk Management and Regulatory Readiness chapter of the OpenShift Security Guide Book.
