Chapter 1. About confidential containers
Confidential containers provide a confidential computing environment that protects containers and data by leveraging Trusted Execution Environments (TEEs).
For more information, see Exploring the OpenShift confidential containers solution.
1.1. Compatibility with OpenShift Container Platform
The required functionality for Red Hat OpenShift Container Platform is supported by two main components:
- Kata runtime: The Kata runtime is included with Red Hat Enterprise Linux CoreOS (RHCOS) and receives updates with every OpenShift Container Platform release. When enabling peer pods with the Kata runtime, the OpenShift sandboxed containers Operator requires external network connectivity to pull the image components and helper utilities needed to create the pod virtual machine (VM) image.
- OpenShift sandboxed containers Operator: The OpenShift sandboxed containers Operator is a Rolling Stream Operator, which means the latest version is the only supported version. It works with all currently supported versions of OpenShift Container Platform.
The Operator depends on the features that come with the RHCOS host and the environment it runs in.
You must install RHCOS on the worker nodes. Red Hat Enterprise Linux (RHEL) nodes are not supported.
The following compatibility matrix for OpenShift sandboxed containers and OpenShift Container Platform releases identifies compatible features and environments.
| Architecture | OpenShift Container Platform version |
| --- | --- |
| x86_64 | 4.16 or later |
| s390x | 4.16 or later |
There are two ways to deploy the Kata containers runtime:
- Bare metal
- Peer pods
You can deploy OpenShift sandboxed containers by using peer pods on Microsoft Azure Cloud Computing Services, AWS Cloud Computing Services, or Google Cloud. With the release of OpenShift sandboxed containers 1.10, the OpenShift sandboxed containers Operator requires OpenShift Container Platform version 4.16 or later.
| Feature | Deployment method | OpenShift Container Platform 4.16 | OpenShift Container Platform 4.17 | OpenShift Container Platform 4.18 | OpenShift Container Platform 4.19 |
| --- | --- | --- | --- | --- | --- |
| Confidential containers | Bare metal | N/A | N/A | N/A | N/A |
| Confidential containers | Azure peer pods | GA | GA | GA | GA |
| GPU support | Bare metal | N/A | N/A | N/A | N/A |
| GPU support | IBM Z | N/A | N/A | N/A | N/A |
| GPU support | Azure | Developer Preview | Developer Preview | Developer Preview | Developer Preview |
| GPU support | AWS | Developer Preview | Developer Preview | Developer Preview | Developer Preview |
| GPU support | Google Cloud | Developer Preview | Developer Preview | Developer Preview | Developer Preview |
GPU support for peer pods is a Developer Preview feature only. Developer Preview features are not supported by Red Hat in any way and are not functionally complete or production-ready. Do not use Developer Preview features for production or business-critical workloads. Developer Preview features provide early access to upcoming product features in advance of their possible inclusion in a Red Hat product offering, enabling customers to test functionality and provide feedback during the development process. These features might not have any documentation, are subject to change or removal at any time, and testing is limited. Red Hat might provide ways to submit feedback on Developer Preview features without an associated SLA.
The following table shows the support status of GPU and confidential containers features for peer pods on each cloud platform:

| Platform | GPU | Confidential containers |
| --- | --- | --- |
| Azure | Developer Preview | GA |
| AWS | Developer Preview | N/A |
| Google Cloud | Developer Preview | N/A |
1.2. Peer pod resource requirements
You must ensure that your cluster has sufficient resources.
Peer pod virtual machines (VMs) require resources in two locations:
- The worker node. The worker node stores metadata, Kata shim resources (`containerd-shim-kata-v2`), remote hypervisor resources (`cloud-api-adaptor`), and the tunnel setup between the worker nodes and the peer pod VM.
- The cloud instance. This is the actual peer pod VM running in the cloud.
The CPU and memory resources used in the Kubernetes worker node are handled by the pod overhead included in the RuntimeClass (`kata-remote`) definition used for creating peer pods.
The total number of peer pod VMs running in the cloud is defined as a Kubernetes Node extended resource. This limit is per node and is set by the `PEERPODS_LIMIT_PER_NODE` attribute in the `peer-pods-cm` config map.
The extended resource is named `kata.peerpods.io/vm` and enables the Kubernetes scheduler to handle capacity tracking and accounting.
You can edit the limit per node based on the requirements for your environment after you install the OpenShift sandboxed containers Operator.
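For example, here is a minimal sketch of the `peer-pods-cm` config map entry, assuming the Operator's default `openshift-sandboxed-containers-operator` namespace and an illustrative limit value:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: peer-pods-cm
  namespace: openshift-sandboxed-containers-operator  # assumed Operator namespace
data:
  # Maximum number of peer pod VMs per worker node; adjust to match
  # your cloud quota and expected workload density (value illustrative).
  PEERPODS_LIMIT_PER_NODE: "10"
```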
A mutating webhook adds the extended resource `kata.peerpods.io/vm` to the pod specification. It also removes any resource-specific entries from the pod specification, if present. This enables the Kubernetes scheduler to account for these extended resources, ensuring the peer pod is only scheduled when resources are available.
The mutating webhook modifies a Kubernetes pod as follows:
- The mutating webhook checks the pod for the expected `RuntimeClassName` value, specified in the `TARGET_RUNTIME_CLASS` environment variable. If the value in the pod specification does not match the value in `TARGET_RUNTIME_CLASS`, the webhook exits without modifying the pod.
- If the `RuntimeClassName` values match, the webhook makes the following changes to the pod spec:
  - The webhook removes every resource specification from the `resources` field of all containers and init containers in the pod.
  - The webhook adds the extended resource (`kata.peerpods.io/vm`) to the spec by modifying the `resources` field of the first container in the pod. The extended resource `kata.peerpods.io/vm` is used by the Kubernetes scheduler for accounting purposes.
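To illustrate the result, here is a sketch of a pod spec after mutation. The pod name and image are hypothetical; the `kata-remote` runtime class matches `TARGET_RUNTIME_CLASS`, so the webhook has stripped the container's original CPU and memory entries and added the extended resource:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-peer-pod                       # hypothetical workload name
spec:
  runtimeClassName: kata-remote           # matches TARGET_RUNTIME_CLASS, so the pod is mutated
  containers:
  - name: app
    image: quay.io/example/app:latest     # hypothetical image
    resources:                            # original cpu/memory entries removed by the webhook
      requests:
        kata.peerpods.io/vm: "1"          # extended resource used for scheduler accounting
      limits:
        kata.peerpods.io/vm: "1"          # extended resources require matching requests and limits
```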
The mutating webhook excludes specific system namespaces in OpenShift Container Platform from mutation. If a peer pod is created in those system namespaces, then resource accounting using Kubernetes extended resources does not work unless the pod spec includes the extended resource.
As a best practice, define a cluster-wide policy to only allow peer pod creation in specific namespaces.
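One way to implement such a policy is a Kubernetes `ValidatingAdmissionPolicy` that rejects peer pods outside approved namespaces. The following sketch is an assumption, not part of the product: the policy name and allowed namespace are hypothetical, and a corresponding `ValidatingAdmissionPolicyBinding` is also required to enforce it.

```yaml
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingAdmissionPolicy
metadata:
  name: restrict-peer-pod-namespaces      # hypothetical policy name
spec:
  failurePolicy: Fail
  matchConstraints:
    resourceRules:
    - apiGroups: [""]
      apiVersions: ["v1"]
      operations: ["CREATE"]
      resources: ["pods"]
  validations:
  # Allow the pod unless it requests the kata-remote runtime class
  # from a namespace that is not in the approved list.
  - expression: >-
      !has(object.spec.runtimeClassName) ||
      object.spec.runtimeClassName != 'kata-remote' ||
      namespaceObject.metadata.name in ['peer-pods-allowed']
    message: Peer pods can only be created in approved namespaces.
```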
1.3. About initdata
The initdata specification provides a flexible way to initialize a peer pod with sensitive or workload-specific data at runtime, avoiding the need to embed such data in the virtual machine (VM) image. This enhances security by reducing the exposure of confidential information and improves flexibility by eliminating custom image builds. For example, initdata can include three configuration settings:
- An X.509 certificate for secure communication.
- A cryptographic key for authentication.
- An optional Kata Agent `policy.rego` file to enforce runtime behavior when overriding the default permissive Kata Agent policy.
The initdata content configures the following components:
- Attestation Agent (AA), which verifies the trustworthiness of the peer pod by sending evidence for attestation.
- Confidential Data Hub (CDH), which manages secrets and secure data access within the peer pod VM.
- Kata Agent, which enforces runtime policies and manages the lifecycle of the containers inside the pod VM.
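For illustration, here is a minimal `initdata.toml` sketch that configures the three components above. The schema follows the upstream Confidential Containers initdata layout as an assumption, and all URLs and embedded settings are placeholders rather than a verbatim product example:

```toml
# Digest algorithm and specification version (assumed defaults).
algorithm = "sha384"
version = "0.1.0"

[data]
# Attestation Agent (AA) configuration, for example the attestation service URL.
"aa.toml" = '''
[token_configs.kbs]
url = "https://kbs.example.com:8080"
'''

# Confidential Data Hub (CDH) configuration for secret retrieval.
"cdh.toml" = '''
[kbc]
name = "cc_kbc"
url = "https://kbs.example.com:8080"
'''

# Optional Kata Agent policy overriding the default permissive policy.
"policy.rego" = '''
package agent_policy

default CopyFileRequest := false
'''
```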
You create an `initdata.toml` file and convert it to a gzip-compressed, Base64-encoded string. You apply the initdata string to your workload by one of the following methods:
- Global configuration: Add the initdata string as the value of the `INITDATA` key in the peer pods config map to create a default configuration for all peer pods.
- Pod configuration: Add the initdata string as an annotation to a pod manifest, allowing customization for individual workloads.
Note: The initdata annotation in the pod manifest overrides the global `INITDATA` value in the peer pods config map for that specific pod. The Kata runtime handles this precedence automatically at pod creation time.
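For example, here is a sketch of both methods. The encoded string is a truncated placeholder (it could be produced with `gzip -c initdata.toml | base64 -w0`), and the pod annotation key `io.katacontainers.config.runtime.cc_init_data` is an assumption here, not confirmed by this chapter:

```yaml
# Method 1: global configuration in the peer pods config map.
apiVersion: v1
kind: ConfigMap
metadata:
  name: peer-pods-cm
  namespace: openshift-sandboxed-containers-operator  # assumed Operator namespace
data:
  INITDATA: "H4sIAAAA..."  # gzipped, Base64-encoded initdata.toml (placeholder)
---
# Method 2: per-pod override via an annotation; takes precedence over INITDATA.
apiVersion: v1
kind: Pod
metadata:
  name: my-confidential-pod                # hypothetical workload name
  annotations:
    io.katacontainers.config.runtime.cc_init_data: "H4sIAAAA..."  # assumed annotation key
spec:
  runtimeClassName: kata-remote
  containers:
  - name: app
    image: quay.io/example/app:latest      # hypothetical image
```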