
Chapter 5. Deploying confidential containers on IBM Z and IBM LinuxONE bare-metal servers


You can deploy confidential containers workloads on a Red Hat OpenShift Container Platform cluster running on IBM Z® and IBM® LinuxONE bare-metal servers.

In the bare metal approach, you launch confidential containers virtual machines (VMs) directly on a logical partition (LPAR) that is booted with Red Hat Enterprise Linux CoreOS (RHCOS). The LPAR acts as a compute node in the cluster, providing a dedicated environment for running confidential workloads.

This approach eliminates the need for intermediate peer pod components, resulting in faster boot times, quicker recovery from failures, and simpler storage integration. As a result, it is suitable for production workloads that require high performance, consistent storage behavior, and resource management that aligns with Kubernetes standards.

Important

Confidential containers on IBM Z® and IBM® LinuxONE bare-metal servers is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.

5.1. Preparation

Review these prerequisites and concepts before you deploy confidential containers on IBM Z® and IBM® LinuxONE bare-metal servers.

5.1.1. Prerequisites

  • You have installed the latest version of Red Hat OpenShift Container Platform on the cluster where you are running your confidential containers workload.
  • You have deployed Red Hat build of Trustee on an OpenShift Container Platform cluster in a trusted environment. For more information, see Deploying Red Hat build of Trustee.

5.1.2. Initdata

The initdata specification provides a flexible way to initialize a pod with workload-specific data at runtime, avoiding the need to embed such data in the virtual machine (VM) image.

This approach enhances security by reducing the exposure of confidential information and improves flexibility by eliminating custom image builds. For example, initdata can include three configuration settings:

  • An X.509 certificate for secure communication.
  • A cryptographic key for authentication.
  • An optional Kata Agent policy.rego file to enforce runtime behavior when overriding the default Kata Agent policy.

The initdata content configures the following components:

  • Attestation Agent (AA), which verifies the trustworthiness of the pod by sending evidence for attestation.
  • Confidential Data Hub (CDH), which manages secrets and secure data access within the pod VM.
  • Kata Agent, which enforces runtime policies and manages the lifecycle of the containers inside the pod VM.

You create an initdata.toml file and convert it to a Base64-encoded, gzip-format string. You add the initdata string as an annotation to a pod manifest, allowing customization for individual workloads.
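
The conversion step can be sketched as the following shell commands. The TOML content here is a placeholder for illustration, not a complete initdata specification; `base64 -w0` (GNU coreutils) disables line wrapping so the encoded string stays on a single line for the annotation:

```shell
# Create a minimal initdata.toml; the contents are illustrative only.
cat > initdata.toml <<'EOF'
algorithm = "sha384"
version = "0.1.0"
EOF

# gzip the file, then Base64-encode it without line wrapping.
gzip -c initdata.toml | base64 -w0 > initdata.txt

# Sanity check: the string must decode back to the original file.
base64 -d initdata.txt | gunzip | diff - initdata.toml && echo "round trip OK"
```

The resulting string in initdata.txt is the value you later place in the pod annotation.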

5.2. Deployment overview

You deploy confidential containers on IBM Z® and IBM® LinuxONE bare-metal servers by performing the following steps:

  1. Install the OpenShift sandboxed containers Operator.
  2. Configure auto-detection of TEEs.
  3. Enable the confidential containers feature gate.
  4. Upload a Secure Execution image to the container registry.
  5. Create the kata-addon-artifacts config map.
  6. Create initdata to initialize a pod with sensitive or workload-specific data at runtime.

    Important

    Do not use the default permissive Kata Agent policy in a production environment. You must configure a restrictive policy, preferably by creating initdata.

    As a minimum requirement, you must disable ExecProcessRequest to prevent a cluster administrator from accessing sensitive data by running the oc exec command on a confidential containers pod.

  7. Apply initdata to a pod.
  8. Create the KataConfig CR.
  9. Verify the attestation process.

5.3. Creating the MachineConfig object for TDX

If you use Intel Trust Domain Extensions (TDX), you must create a MachineConfig object before you install the Red Hat build of Trustee Operator.

Procedure

  1. Create a tdx-machine-config.yaml manifest file according to the following example:

    apiVersion: machineconfiguration.openshift.io/v1
    kind: MachineConfig
    metadata:
      labels:
        machineconfiguration.openshift.io/role: <role> 1
      name: 99-enable-intel-tdx
    spec:
      kernelArguments:
      - kvm_intel.tdx=1
      - nohibernate
      config:
        ignition:
          version: 3.2.0
        storage:
          files:
            - path: /etc/modules-load.d/vsock.conf
              mode: 0644
              contents:
                source: data:text/plain;charset=utf-8;base64,dnNvY2stbG9vcGJhY2sK
    1
    Specify master for single-node OpenShift or kata-oc for a multi-node cluster.
  2. Create the MachineConfig object by running the following command:

    $ oc create -f tdx-machine-config.yaml

5.4. Installing and upgrading the OpenShift sandboxed containers Operator

You can install or upgrade the OpenShift sandboxed containers Operator by using the command line interface (CLI).

Note

You must configure the OpenShift sandboxed containers Operator subscription for manual updates by setting the value of installPlanApproval to Manual. Automatic updates are not supported.

Prerequisites

  • You have access to the cluster as a user with the cluster-admin role.

Procedure

  1. Create an osc-namespace.yaml manifest file:

    apiVersion: v1
    kind: Namespace
    metadata:
      name: openshift-sandboxed-containers-operator
  2. Create the namespace by running the following command:

    $ oc apply -f osc-namespace.yaml
  3. Create an osc-operatorgroup.yaml manifest file:

    apiVersion: operators.coreos.com/v1
    kind: OperatorGroup
    metadata:
      name: sandboxed-containers-operator-group
      namespace: openshift-sandboxed-containers-operator
    spec:
      targetNamespaces:
      - openshift-sandboxed-containers-operator
  4. Create the operator group by running the following command:

    $ oc apply -f osc-operatorgroup.yaml
  5. Create an osc-subscription.yaml manifest file:

    apiVersion: operators.coreos.com/v1alpha1
    kind: Subscription
    metadata:
      name: sandboxed-containers-operator
      namespace: openshift-sandboxed-containers-operator
    spec:
      channel: stable
      installPlanApproval: Manual
      name: sandboxed-containers-operator
      source: redhat-operators
      sourceNamespace: openshift-marketplace
      startingCSV: sandboxed-containers-operator.v1.11.0
  6. Create the subscription by running the following command:

    $ oc create -f osc-subscription.yaml
  7. Get the InstallPlan CR for the OpenShift sandboxed containers Operator by running the following command:

    $ oc get installplan -n openshift-sandboxed-containers-operator
    • Installation example output

      NAME            CSV                                      APPROVAL  APPROVED
      install-bl4fl   sandboxed-containers-operator.v1.11.0    Manual    false
    • Upgrade example output

      NAME            CSV                                     APPROVAL   APPROVED
      install-jdzrb   sandboxed-containers-operator.v1.11.0   Manual     false
      install-pfk8l   sandboxed-containers-operator.v1.10.3   Manual     true
  8. Approve the manual installation by running the following command:

    $ oc patch installplan <installplan_name> -p '{"spec":{"approved":true}}' --type=merge -n openshift-sandboxed-containers-operator
    <installplan_name>
    Specify the InstallPlan resource. For example, install-jdzrb.
  9. Verify that the Operator is correctly installed by running the following command:

    $ oc get csv -n openshift-sandboxed-containers-operator

    This command can take several minutes to complete.

  10. Watch the process by running the following command:

    $ watch oc get csv -n openshift-sandboxed-containers-operator

    Example output

    NAME                             DISPLAY                                   VERSION   REPLACES   PHASE
    openshift-sandboxed-containers   openshift-sandboxed-containers-operator   1.11.0    1.10.3     Succeeded

5.5. Configuring auto-detection of TEEs

You must configure your nodes so that the OpenShift sandboxed containers Operator can detect the Trusted Execution Environments (TEEs).

You label the nodes by installing and configuring the Node Feature Discovery (NFD) Operator.

5.5.1. Creating a NodeFeatureDiscovery custom resource

Prerequisites

  • You have installed the Node Feature Discovery (NFD) Operator.

Procedure

  1. Create a my-nfd.yaml manifest file according to the following example:

    apiVersion: nfd.openshift.io/v1
    kind: NodeFeatureDiscovery
    metadata:
      name: nfd-instance
      namespace: openshift-nfd
    spec:
      operand:
        image: registry.redhat.io/openshift4/ose-node-feature-discovery-rhel9:v4.20
        imagePullPolicy: Always
        servicePort: 12000
      workerConfig:
        configData: |
  2. Create the NodeFeatureDiscovery CR:

    $ oc create -f my-nfd.yaml

5.5.2. Creating the NodeFeatureRule custom resource

Procedure

  1. Create a custom resource manifest named my-nodefeaturerule.yaml for your TEE:

    apiVersion: nfd.openshift.io/v1alpha1
    kind: NodeFeatureRule
    metadata:
      name: osc-rules
      namespace: openshift-nfd
    spec:
      rules:
        - name: "runtime.kata"
          labels:
            "feature.node.kubernetes.io/runtime.kata": "true"
          matchAny:
            - matchFeatures:
                - feature: cpu.cpuid
                  matchExpressions:
                    SSE42: {op: Exists}
                    VMX: {op: Exists}
                - feature: kernel.loadedmodule
                  matchExpressions:
                    kvm: {op: Exists}
                    kvm_intel: {op: Exists}
            - matchFeatures:
                - feature: cpu.cpuid
                  matchExpressions:
                    SSE42: {op: Exists}
                    SVM: {op: Exists}
                - feature: kernel.loadedmodule
                  matchExpressions:
                    kvm: {op: Exists}
                    kvm_amd: {op: Exists}
    The following example shows a NodeFeatureRule CR for IBM Secure Execution on IBM Z® and IBM® LinuxONE:

    apiVersion: nfd.openshift.io/v1alpha1
    kind: NodeFeatureRule
    metadata:
      name: ibm-se-rule
      namespace: openshift-nfd
    spec:
      rules:
        - name: "ibm.se.enabled"
          labels:
            ibm.feature.node.kubernetes.io/se: "true"
          matchFeatures:
            - feature: cpu.security
              matchExpressions:
                se.enabled: { op: IsTrue }
  2. Create the NodeFeatureRule CR by running the following command:

    $ oc create -f my-nodefeaturerule.yaml

Note

A relabeling delay of up to 1 minute might occur.

5.6. Creating the osc-feature-gates config map

You enable the confidential containers feature gate by creating the config map.

Bare-metal deployments on IBM Z® and IBM® LinuxONE support only the DaemonSet deployment approach. This method uses the prebuilt image pull process to ensure that the virtual machine (VM) runs a Secure Execution-enabled kernel image.

Procedure

  1. Create a my-feature-gate.yaml manifest file:

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: osc-feature-gates
      namespace: openshift-sandboxed-containers-operator
    data:
      confidential: "true"
      deploymentMode: daemonset

    where

    deploymentMode

    Specifies the strategy for installing and configuring the Kata runtime. On OpenShift Container Platform clusters with the Machine Config Operator (MCO), this field is optional and can be omitted. Specify one of the following values:

    • MachineConfig for clusters that always use the MCO
    • DaemonSet for clusters that never use the MCO
    • DaemonSetFallback for clusters that sometimes use the MCO
  2. Create the osc-feature-gates config map by running the following command:

    $ oc create -f my-feature-gate.yaml

5.7. Uploading a Secure Execution image to the container registry

You can use either a custom Secure Execution image or an IBM® Hyper Protect Confidential Container (HPCC) image to deploy confidential containers on IBM Z and IBM LinuxONE bare-metal servers.

You must build a Secure Execution image, create a Dockerfile, and push the image to your container registry.

Procedure

  1. Build a Secure Execution image.
  2. Create a Dockerfile file for the Secure Execution image:

    FROM alpine:3.20
    RUN mkdir -p /images
    COPY ./<image_name> /images/<image_name>
    RUN chmod 644 /images/<image_name>
    <image_name>
    Specify the custom Secure Execution image name. For example, se.img.
  3. Build a container image with a custom tag from the Dockerfile:

    $ docker build -t <registry_name>/<user_name>/kata-se-artifacts:<image_tag> .
  4. Push the container image to your registry:

    $ docker push <registry_name>/<user_name>/kata-se-artifacts:<image_tag>

5.8. Creating the kata-addon-artifacts config map

You must create the kata-addon-artifacts config map to enable the use of custom kernel artifacts from container images when deploying in daemon set mode.

Important

If you are using the IBM® Hyper Protect Confidential Container (HPCC) image, see the IBM HPCC documentation for more information about the procedure.

Procedure

  1. Create the kata-addon-artifacts.yaml manifest file:

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: kata-addon-artifacts
      namespace: openshift-sandboxed-containers-operator
    data:
      addonImage: "<container_image_path>"
      kernelPath: "<kernel_path>"
    <container_image_path>
    Specify the path to your container image in the registry. For example, quay.io/openshift_sandboxed_containers/kata-se-artifacts:v1.0.
    <kernel_path>
    Specify the kernel path inside the image. For example, /images/se.img.
  2. Create the kata-addon-artifacts config map by running the following command:

    $ oc create -f kata-addon-artifacts.yaml

5.9. Creating initdata

You create initdata to securely initialize a pod with sensitive or workload-specific data at runtime, thus avoiding the need to embed this data in a virtual machine image. This approach provides additional security by reducing the risk of exposure of confidential information and eliminates the need for custom image builds.

You can specify initdata in the pods config map, for global configuration, or in a pod manifest, for a specific pod. The initdata value in a pod manifest overrides the value set in the pods config map.

Important

In a production environment, you must create initdata to override the default permissive Kata agent policy.


Important

You must delete the kbs_cert setting if you configure insecure_http = true in the kbs-config config map for Red Hat build of Trustee.

Procedure

  1. Obtain the Red Hat build of Trustee IP address by running the following command:

    $ oc get node $(oc get pod -n trustee-operator-system \
      -o jsonpath='{.items[0].spec.nodeName}') \
      -o jsonpath='{.status.addresses[?(@.type=="InternalIP")].address}'

    Example output

    192.168.122.22

  2. Obtain the port by running the following command:

    $ oc get svc kbs-service -n trustee-operator-system

    Example output

    NAME         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
    kbs-service  NodePort    172.30.116.11   <none>        8080:32178/TCP   12d

  3. Create the initdata.toml file:

    algorithm = "sha384"
    version = "0.1.0"
    
    [data]
    "aa.toml" = '''
    [token_configs]
    [token_configs.coco_as]
    
    url = '<trustee_url>'
    
    [token_configs.kbs]
    url = '<trustee_url>'
    '''
    
    "cdh.toml" = '''
    socket = 'unix:///run/confidential-containers/cdh.sock'
    credentials = []
    
    [kbc]
    name = 'cc_kbc'
    url = '<trustee_url>'
    kbs_cert = """
    -----BEGIN CERTIFICATE-----
    <kbs_certificate>
    -----END CERTIFICATE-----
    """
    [image]
    image_security_policy_uri = 'kbs:///default/<secret-policy-name>/<key>'
    '''
    <trustee_url>
    Specify the Red Hat build of Trustee URL with the node IP address and port that you obtained in the previous steps. For example, http://192.168.122.22:32178.
    <kbs_certificate>
    Specify the Base64-encoded TLS certificate for the attestation agent.
    kbs_cert
    Delete the kbs_cert setting if you configure insecure_http = true in the kbs-config config map for Red Hat build of Trustee.
    image_security_policy_uri
    Optional: specify this setting only if you enabled the container image signature verification policy. Replace <secret-policy-name> and <key> with the secret name and key, respectively, as specified in Creating the KbsConfig custom resource.
  4. Convert the initdata.toml file to a Base64-encoded string in gzip format in a text file by running the following command:

    $ cat initdata.toml | gzip | base64 -w0 > initdata.txt

    Record this string to use in the pod manifest.
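
In a production environment, you must also override the default permissive Kata Agent policy, at minimum by disabling ExecProcessRequest. A policy can be delivered through initdata by adding a policy.rego entry to the same [data] table of initdata.toml. The following is a minimal sketch: the request names follow the Kata Agent policy conventions, and the allow list shown is illustrative, not complete.

```toml
# Appended to the [data] table of initdata.toml, alongside "aa.toml"
# and "cdh.toml".
"policy.rego" = '''
package agent_policy

# Deny oc exec and stream reads against the confidential pod.
default ExecProcessRequest := false
default ReadStreamRequest := false

# Allow the lifecycle requests that the runtime needs; extend this
# list to match the requests your workload actually uses.
default CreateSandboxRequest := true
default CreateContainerRequest := true
default StartContainerRequest := true
default RemoveContainerRequest := true
default DestroySandboxRequest := true
'''
```

After adding the policy, repeat step 4 to regenerate the Base64-encoded string.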

5.10. Applying initdata to a pod

You can override the global INITDATA setting by applying customized initdata to a specific pod for special use cases, such as development and testing with a relaxed policy, or when using different Red Hat build of Trustee configurations. You can customize initdata by adding an annotation to the workload pod YAML.

Prerequisites

  • You have created an initdata string.

Procedure

  1. Add the initdata string to the pod manifest:

    apiVersion: v1
    kind: Pod
    metadata:
      name: ocp-cc-pod
      labels:
        app: ocp-cc-pod
      annotations:
        io.katacontainers.config.hypervisor.cc_init_data: <initdata_string>
    spec:
      runtimeClassName: kata-cc
      containers:
      - name: <container_name>
        image: registry.access.redhat.com/ubi9/ubi:latest
        command:
        - sleep
        - "36000"
        securityContext:
          privileged: false
          seccompProfile:
            type: RuntimeDefault
  2. Create the pod by running the following command:

    $ oc create -f my-pod.yaml

5.11. Creating the KataConfig custom resource

You must create the KataConfig custom resource (CR) to install kata-cc as a runtime class on your worker nodes.

OpenShift sandboxed containers installs kata-cc as a secondary, optional runtime on the cluster and not as the primary runtime.

Creating the KataConfig CR automatically reboots the worker nodes. The reboot can take from 10 to more than 60 minutes. The following factors can increase the reboot time:

  • A large OpenShift Container Platform deployment with a greater number of worker nodes.
  • Activation of the BIOS and Diagnostics utility.
  • Deployment on a hard disk drive rather than an SSD.
  • Deployment on physical nodes such as bare metal, rather than on virtual nodes.
  • A slow CPU and network.

Procedure

  1. Create an example-kataconfig.yaml manifest file according to the following example:

    apiVersion: kataconfiguration.openshift.io/v1
    kind: KataConfig
    metadata:
      name: example-kataconfig
    spec:
    
      enablePeerPods: false
      checkNodeEligibility: true
    
      logLevel: info
    #  kataConfigPoolSelector:
    #    matchLabels:
    #      <label_key>: '<label_value>' 1
    1
    Optional: If you have applied node labels to install kata-cc on specific nodes, specify the key and value, for example, cc: 'true'.
  2. Create the KataConfig CR by running the following command:

    $ oc create -f example-kataconfig.yaml

    The new KataConfig CR is created and installs kata-cc as a runtime class on the worker nodes.

    Wait for the kata-cc installation to complete and the worker nodes to reboot before verifying the installation.

  3. Monitor the installation progress by running the following command:

    $ watch "oc describe kataconfig | sed -n /^Status:/,/^Events/p"

    When the status of all workers under kataNodes is installed and the condition InProgress is False without a specified reason, kata-cc is installed on the cluster.

  4. Verify the runtime classes by running the following command:

    $ oc get runtimeclass

    Example output

    NAME      HANDLER   AGE
    kata-cc   kata-se   152m

5.12. Verifying attestation

You can verify the attestation process by creating a test pod to retrieve a specific resource from Red Hat build of Trustee.

Important

This procedure is an example to verify that attestation is working. Do not write sensitive data to standard I/O, because the data can be captured by using a memory dump. Only data written to memory is encrypted.

Procedure

  1. Create a test-pod.yaml manifest file:

    apiVersion: v1
    kind: Pod
    metadata:
      name: ocp-cc-pod
      labels:
        app: ocp-cc-pod
      annotations:
        io.katacontainers.config.hypervisor.cc_init_data: <initdata_string> 1
    spec:
      runtimeClassName: kata-cc
      containers:
        - name: skr-openshift
          image: registry.access.redhat.com/ubi9/ubi:latest
          command:
            - sleep
            - "36000"
          securityContext:
            privileged: false
            seccompProfile:
              type: RuntimeDefault
    1
    Optional: Setting initdata in a pod annotation overrides the global initdata setting in the pods config map.
  2. Create the pod by running the following command:

    $ oc create -f test-pod.yaml
  3. Log in to the pod by running the following command:

    $ oc exec -it ocp-cc-pod -- bash
  4. Fetch the Red Hat build of Trustee resource by running the following command:

    $ curl http://127.0.0.1:8006/cdh/resource/default/attestation-status/status

    Example output

    success
