Chapter 5. Deploying on IBM Z and IBM LinuxONE
You can deploy OpenShift sandboxed containers or Confidential Containers on IBM Z® and IBM® LinuxONE.
OpenShift sandboxed containers on IBM Z® and IBM® LinuxONE is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
Cluster requirements
- You have installed Red Hat OpenShift Container Platform 4.14 or later on the cluster where you are installing the OpenShift sandboxed containers Operator.
- Your cluster has at least one worker node.
5.1. Peer pod resource requirements
You must ensure that your cluster has sufficient resources.
Peer pod virtual machines (VMs) require resources in two locations:
- The worker node. The worker node stores metadata, Kata shim resources (containerd-shim-kata-v2), remote hypervisor resources (cloud-api-adaptor), and the tunnel setup between the worker nodes and the peer pod VM.
- The libvirt virtual machine instance. This is the actual peer pod VM running in the LPAR (KVM host).

The CPU and memory resources used in the Kubernetes worker node are handled by the pod overhead included in the RuntimeClass (kata-remote) definition used for creating peer pods.
The total number of running peer pod VMs is defined as a Kubernetes Node extended resource. This limit is per node and is set by the limit attribute in the peerpodConfig custom resource (CR).

The peerpodConfig CR, named peerpodconfig-openshift, is created when you create the kataConfig CR and enable peer pods, and is located in the openshift-sandboxed-containers-operator namespace.

The following peerpodConfig CR example displays the default spec values:
apiVersion: confidentialcontainers.org/v1alpha1
kind: PeerPodConfig
metadata:
  name: peerpodconfig-openshift
  namespace: openshift-sandboxed-containers-operator
spec:
  cloudSecretName: peer-pods-secret
  configMapName: peer-pods-cm
  limit: "10" 1
  nodeSelector:
    node-role.kubernetes.io/kata-oc: ""

1 The default limit is 10 VMs per node.
The extended resource is named kata.peerpods.io/vm and enables the Kubernetes scheduler to handle capacity tracking and accounting.

You can edit the limit per node based on the requirements for your environment after you install the OpenShift sandboxed containers Operator.

A mutating webhook adds the extended resource kata.peerpods.io/vm to the pod specification. It also removes any resource-specific entries from the pod specification, if present. This enables the Kubernetes scheduler to account for these extended resources, ensuring the peer pod is only scheduled when resources are available.
The mutating webhook modifies a Kubernetes pod as follows:

- The mutating webhook checks the pod for the expected RuntimeClassName value, specified in the TARGET_RUNTIME_CLASS environment variable. If the value in the pod specification does not match the value in the TARGET_RUNTIME_CLASS, the webhook exits without modifying the pod.
- If the RuntimeClassName values match, the webhook makes the following changes to the pod spec:
  - The webhook removes every resource specification from the resources field of all containers and init containers in the pod.
  - The webhook adds the extended resource (kata.peerpods.io/vm) to the spec by modifying the resources field of the first container in the pod. The extended resource kata.peerpods.io/vm is used by the Kubernetes scheduler for accounting purposes.
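As an illustration, the following sketch shows what a pod spec might look like after mutation. The pod and container names are hypothetical, and the exact resource fields the webhook sets are an assumption here; only the extended resource name, kata.peerpods.io/vm, comes from the Operator:

apiVersion: v1
kind: Pod
metadata:
  name: example-peer-pod             # hypothetical name
spec:
  runtimeClassName: kata-remote      # must match TARGET_RUNTIME_CLASS
  containers:
  - name: app                        # hypothetical container
    image: registry.example.com/app:latest
    resources:
      limits:
        kata.peerpods.io/vm: "1"     # extended resource added by the webhook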
The mutating webhook excludes specific system namespaces in OpenShift Container Platform from mutation. If a peer pod is created in those system namespaces, then resource accounting using Kubernetes extended resources does not work unless the pod spec includes the extended resource.
As a best practice, define a cluster-wide policy to only allow peer pod creation in specific namespaces.
5.2. Deploying OpenShift sandboxed containers on IBM Z and IBM LinuxONE
You can deploy OpenShift sandboxed containers on IBM Z® and IBM® LinuxONE by using the command line interface (CLI) to perform the following tasks:
- Install the OpenShift sandboxed containers Operator.
- Optional: Change the number of virtual machines running on each worker node.
- Configure the libvirt volume.
- Optional: Create a custom peer pod VM image.
- Create the peer pods secret.
- Create the peer pods config map.
- Create the peer pod VM image config map.
- Create the KVM host secret.
- Create the KataConfig custom resource.
- Configure the OpenShift sandboxed containers workload objects.
5.2.1. Installing the OpenShift sandboxed containers Operator
You can install the OpenShift sandboxed containers Operator by using the CLI.
Prerequisites
- You have installed the OpenShift CLI (oc).
- You have access to the cluster as a user with the cluster-admin role.
Procedure
Create an osc-namespace.yaml manifest file:

apiVersion: v1
kind: Namespace
metadata:
  name: openshift-sandboxed-containers-operator
Create the namespace by running the following command:
$ oc apply -f osc-namespace.yaml
Create an osc-operatorgroup.yaml manifest file:

apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: sandboxed-containers-operator-group
  namespace: openshift-sandboxed-containers-operator
spec:
  targetNamespaces:
  - openshift-sandboxed-containers-operator
Create the operator group by running the following command:
$ oc apply -f osc-operatorgroup.yaml
Create an osc-subscription.yaml manifest file:

apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: sandboxed-containers-operator
  namespace: openshift-sandboxed-containers-operator
spec:
  channel: stable
  installPlanApproval: Automatic
  name: sandboxed-containers-operator
  source: redhat-operators
  sourceNamespace: openshift-marketplace
  startingCSV: sandboxed-containers-operator.v1.7.0
Create the subscription by running the following command:
$ oc apply -f osc-subscription.yaml
Verify that the Operator is correctly installed by running the following command:
$ oc get csv -n openshift-sandboxed-containers-operator
This command can take several minutes to complete.
Watch the process by running the following command:
$ watch oc get csv -n openshift-sandboxed-containers-operator
Example output
NAME                             DISPLAY                                    VERSION   REPLACES   PHASE
openshift-sandboxed-containers   openshift-sandboxed-containers-operator   1.7.0     1.6.0      Succeeded
Additional resources
- Using Operator Lifecycle Manager on restricted networks.
- Configuring proxy support in Operator Lifecycle Manager for disconnected environments.
5.2.2. Modifying the number of peer pod VMs per node
You can change the limit of peer pod virtual machines (VMs) per node by editing the peerpodConfig custom resource (CR).
Procedure
Check the current limit by running the following command:
$ oc get peerpodconfig peerpodconfig-openshift -n openshift-sandboxed-containers-operator \
  -o jsonpath='{.spec.limit}{"\n"}'
Modify the limit attribute of the peerpodConfig CR by running the following command:

$ oc patch peerpodconfig peerpodconfig-openshift -n openshift-sandboxed-containers-operator \
  --type merge --patch '{"spec":{"limit":"<value>"}}' 1

1 Replace <value> with the limit you want to define.
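After the change, the new limit is published on each node as the kata.peerpods.io/vm extended resource. As a quick check, assuming <node_name> is one of your worker nodes, you can inspect the node capacity:

$ oc describe node <node_name> | grep kata.peerpods.io/vm

The Capacity and Allocatable sections are expected to show the new per-node VM limit.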
5.2.3. Configuring the libvirt volume
You must configure the libvirt volume on your KVM host. Peer pods use the libvirt provider of the Cloud API Adaptor to create and manage virtual machines.
Prerequisites
- You have installed the OpenShift sandboxed containers Operator on your OpenShift Container Platform cluster by using the OpenShift Container Platform web console or the command line.
- You have administrator privileges for your KVM host.
- You have installed podman on your KVM host.
- You have installed virt-customize on your KVM host.
Procedure
- Log in to the KVM host.
Set the name of the libvirt pool by running the following command:

$ export LIBVIRT_POOL=<libvirt_pool>

You need the LIBVIRT_POOL value to create the secret for the libvirt provider.

Set the name of the libvirt volume by running the following command:

$ export LIBVIRT_VOL_NAME=<libvirt_volume>

You need the LIBVIRT_VOL_NAME value to create the secret for the libvirt provider.

Set the path of the default storage pool location by running the following command:

$ export LIBVIRT_POOL_DIRECTORY=<target_directory> 1

1 To ensure libvirt has read and write access permissions, use a subdirectory of the libvirt storage directory. The default is /var/lib/libvirt/images/.
Create a libvirt pool by running the following command:
$ virsh pool-define-as $LIBVIRT_POOL --type dir --target "$LIBVIRT_POOL_DIRECTORY"
Start the libvirt pool by running the following command:
$ virsh pool-start $LIBVIRT_POOL
Create a libvirt volume for the pool by running the following command:

$ virsh -c qemu:///system \
  vol-create-as --pool $LIBVIRT_POOL \
  --name $LIBVIRT_VOL_NAME \
  --capacity 20G \
  --allocation 2G \
  --prealloc-metadata \
  --format qcow2
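Optionally, confirm that the volume exists in the pool. This check assumes the LIBVIRT_POOL variable is still set:

$ virsh vol-list $LIBVIRT_POOL

The output is expected to list the volume name and its path in the storage pool directory.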
5.2.4. Creating a custom peer pod VM image
You can create a custom peer pod virtual machine (VM) image instead of using the default Operator-built image.
You build an Open Container Initiative (OCI) container with the peer pod QCOW2 image. Later, you add the container registry URL and the image path to the peer pod VM image config map.
Procedure
Create a Dockerfile.podvm-oci file:

FROM scratch

ARG PODVM_IMAGE_SRC
ENV PODVM_IMAGE_PATH="/image/podvm.qcow2"

COPY $PODVM_IMAGE_SRC $PODVM_IMAGE_PATH
Build a container with the pod VM QCOW2 image by running the following command:

$ docker build -t podvm-libvirt \
  --build-arg PODVM_IMAGE_SRC=<podvm_image_source> \ 1
  --build-arg PODVM_IMAGE_PATH=<podvm_image_path> \ 2
  -f Dockerfile.podvm-oci .

1 Specify the source path of the pod VM QCOW2 image on the host.
2 Optional: Specify the image path in the container. The default is /image/podvm.qcow2.
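To make the image available to your cluster, you can tag and push it to a container registry. This is a sketch; the registry URL and tag are placeholders that you supply:

$ docker tag podvm-libvirt <image_repo_url>:<image_tag>
$ docker push <image_repo_url>:<image_tag>

You specify the same <image_repo_url>, <image_tag>, and image path later in the PODVM_IMAGE_URI value of the peer pod VM image config map.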
5.2.5. Creating the peer pods secret
You must create the peer pods secret for OpenShift sandboxed containers.
The secret stores credentials for creating the pod virtual machine (VM) image and peer pod instances.
By default, the OpenShift sandboxed containers Operator creates the secret based on the credentials used to create the cluster. However, you can manually create a secret that uses different credentials.
Prerequisites
- LIBVIRT_POOL. Use the value you set when you configured libvirt on the KVM host.
- LIBVIRT_VOL_NAME. Use the value you set when you configured libvirt on the KVM host.
- LIBVIRT_URI. This value is the default gateway IP address of the libvirt network. Check your libvirt network setup to obtain this value.

  Note: If libvirt uses the default bridge virtual network, you can obtain the LIBVIRT_URI by running the following commands:

  $ virtint=$(bridge_line=$(virsh net-info default | grep Bridge); echo "${bridge_line//Bridge:/}" | tr -d [:blank:])
  $ LIBVIRT_URI=$( ip -4 addr show $virtint | grep -oP '(?<=inet\s)\d+(\.\d+){3}')
  $ LIBVIRT_GATEWAY_URI="qemu+ssh://root@${LIBVIRT_URI}/system?no_verify=1"

- REDHAT_OFFLINE_TOKEN. You have generated this token to download the RHEL image at Red Hat API Tokens.
Procedure
Create a peer-pods-secret.yaml manifest file according to the following example:

apiVersion: v1
kind: Secret
metadata:
  name: peer-pods-secret
  namespace: openshift-sandboxed-containers-operator
type: Opaque
stringData:
  CLOUD_PROVIDER: "libvirt"
  LIBVIRT_URI: "<libvirt_gateway_uri>" 1
  LIBVIRT_POOL: "<libvirt_pool>" 2
  LIBVIRT_VOL_NAME: "<libvirt_volume>" 3
  REDHAT_OFFLINE_TOKEN: "<rh_offline_token>" 4

1 Specify the libvirt gateway URI, the LIBVIRT_GATEWAY_URI value.
2 Specify the LIBVIRT_POOL value you set when you configured libvirt on the KVM host.
3 Specify the LIBVIRT_VOL_NAME value you set when you configured libvirt on the KVM host.
4 Specify your Red Hat offline token, which you generated at Red Hat API Tokens.
Create the secret by running the following command:
$ oc apply -f peer-pods-secret.yaml
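Optionally, confirm that the secret exists by running the following command:

$ oc get secret peer-pods-secret -n openshift-sandboxed-containers-operator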
5.2.6. Creating the peer pods config map
You must create the peer pods config map for OpenShift sandboxed containers.
Procedure
Create a peer-pods-cm.yaml manifest file according to the following example:

apiVersion: v1
kind: ConfigMap
metadata:
  name: peer-pods-cm
  namespace: openshift-sandboxed-containers-operator
data:
  CLOUD_PROVIDER: "libvirt"
  DISABLECVM: "true"
Create the config map by running the following command:
$ oc apply -f peer-pods-cm.yaml
5.2.7. Creating the peer pod VM image config map
You must create the config map for the peer pod VM image.
Procedure
Create a libvirt-podvm-image-cm.yaml manifest according to the following example:

apiVersion: v1
kind: ConfigMap
metadata:
  name: libvirt-podvm-image-cm
  namespace: openshift-sandboxed-containers-operator
data:
  PODVM_DISTRO: "rhel"
  CAA_SRC: "https://github.com/confidential-containers/cloud-api-adaptor"
  CAA_REF: "<cloud_api_adaptor_version>" 1
  DOWNLOAD_SOURCES: "no"
  CONFIDENTIAL_COMPUTE_ENABLED: "yes"
  UPDATE_PEERPODS_CM: "yes"
  ORG_ID: "<rhel_organization_id>"
  ACTIVATION_KEY: "<rhel_activation_key>" 2
  IMAGE_NAME: "<podvm_libvirt_image>"
  PODVM_IMAGE_URI: "oci::<image_repo_url>:<image_tag>::<image_path>" 3
  SE_BOOT: "true" 4
  BASE_OS_VERSION: "<rhel_image_os_version>" 5

1 Specify the latest version of the Cloud API Adaptor source.
2 Specify your RHEL activation key.
3 Optional: Specify the following values if you created a container image:
   - image_repo_url: Container registry URL.
   - image_tag: Image tag.
   - image_path: Image path. Default: /image/podvm.qcow2.
4 SE_BOOT: "true" enables IBM Secure Execution for an Operator-built image. Set to false if you created a container image.
5 Specify the RHEL image operating system version. IBM Z® Secure Execution supports RHEL 9.4 and later versions.
Create the config map by running the following command:
$ oc apply -f libvirt-podvm-image-cm.yaml
The libvirt pod VM image config map is created for your libvirt provider.
5.2.8. Creating the KVM host secret
You must create the secret for your KVM host.
Procedure
Generate an SSH key pair by running the following command:
$ ssh-keygen -f ./id_rsa -N ""
Copy the public SSH key to your KVM host:
$ ssh-copy-id -i ./id_rsa.pub <KVM_HOST_IP>
Create the Secret object by running the following command:

$ oc create secret generic ssh-key-secret \
  -n openshift-sandboxed-containers-operator \
  --from-file=id_rsa.pub=./id_rsa.pub \
  --from-file=id_rsa=./id_rsa
Delete the local SSH key files you created:
$ shred --remove id_rsa.pub id_rsa
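Optionally, verify that the key material is stored in the secret by running the following command:

$ oc describe secret ssh-key-secret -n openshift-sandboxed-containers-operator

The output lists the id_rsa and id_rsa.pub data keys with their sizes, without printing the key contents.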
5.2.9. Creating the KataConfig custom resource
You must create the KataConfig custom resource (CR) to install kata-remote as a runtime class on your worker nodes.

Creating the KataConfig CR triggers the OpenShift sandboxed containers Operator to do the following:

- Create a RuntimeClass CR named kata-remote with a default configuration. This enables users to configure workloads to use kata-remote as the runtime by referencing the CR in the RuntimeClassName field. This CR also specifies the resource overhead for the runtime.

OpenShift sandboxed containers installs kata-remote as a secondary, optional runtime on the cluster and not as the primary runtime.
Creating the KataConfig CR automatically reboots the worker nodes. The reboot can take from 10 to more than 60 minutes. The following factors can increase the reboot time:
- A larger OpenShift Container Platform deployment with a greater number of worker nodes.
- Activation of the BIOS and Diagnostics utility.
- Deployment on a hard disk drive rather than an SSD.
- Deployment on physical nodes such as bare metal, rather than on virtual nodes.
- A slow CPU and network.
Prerequisites
- You have access to the cluster as a user with the cluster-admin role.
Procedure
Create an example-kataconfig.yaml manifest file according to the following example:

apiVersion: kataconfiguration.openshift.io/v1
kind: KataConfig
metadata:
  name: example-kataconfig
spec:
  enablePeerPods: true
  logLevel: info
#  kataConfigPoolSelector:
#    matchLabels:
#      <label_key>: '<label_value>' 1

1 Optional: If you have applied node labels to install kata-remote on specific nodes, specify the key and value, for example, osc: 'true'.
Create the KataConfig CR by running the following command:

$ oc apply -f example-kataconfig.yaml

The new KataConfig CR is created and installs kata-remote as a runtime class on the worker nodes.

Wait for the kata-remote installation to complete and the worker nodes to reboot before verifying the installation.

Monitor the installation progress by running the following command:

$ watch "oc describe kataconfig | sed -n /^Status:/,/^Events/p"

When the status of all workers under kataNodes is installed and the condition InProgress is False without specifying a reason, kata-remote is installed on the cluster.

Verify that you have built the peer pod image and uploaded it to the libvirt volume by running the following command:
$ oc describe configmap peer-pods-cm -n openshift-sandboxed-containers-operator
Example output
Name:        peer-pods-cm
Namespace:   openshift-sandboxed-containers-operator
Labels:      <none>
Annotations: <none>

Data
====
CLOUD_PROVIDER: libvirt

BinaryData
====

Events: <none>
Monitor the kata-oc machine config pool progress to ensure that it is in the UPDATED state, when UPDATEDMACHINECOUNT equals MACHINECOUNT, by running the following command:

$ watch oc get mcp/kata-oc
Verify the daemon set by running the following command:
$ oc get -n openshift-sandboxed-containers-operator ds/peerpodconfig-ctrl-caa-daemon
Verify the runtime classes by running the following command:
$ oc get runtimeclass
Example output
NAME          HANDLER       AGE
kata          kata          152m
kata-remote   kata-remote   152m
5.2.10. Configuring workload objects
You must configure OpenShift sandboxed containers workload objects by setting kata-remote as the runtime class for the following pod-templated objects:

- Pod objects
- ReplicaSet objects
- ReplicationController objects
- StatefulSet objects
- Deployment objects
- DeploymentConfig objects
Do not deploy workloads in an Operator namespace. Create a dedicated namespace for these resources.
Prerequisites
- You have created the KataConfig custom resource (CR).
Procedure
Add spec.runtimeClassName: kata-remote to the manifest of each pod-templated workload object as in the following example:

apiVersion: v1
kind: <object>
# ...
spec:
  runtimeClassName: kata-remote
# ...
OpenShift Container Platform creates the workload object and begins scheduling it.
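For workload objects with a pod template, such as a Deployment, the runtimeClassName field belongs in the pod template spec rather than at the top level. The following manifest is a minimal sketch; the workload name, namespace, and image are hypothetical:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-app                    # hypothetical workload name
  namespace: sandboxed-workloads       # hypothetical dedicated namespace
spec:
  replicas: 1
  selector:
    matchLabels:
      app: example-app
  template:
    metadata:
      labels:
        app: example-app
    spec:
      runtimeClassName: kata-remote    # each replica runs as a peer pod
      containers:
      - name: app
        image: registry.access.redhat.com/ubi9/ubi-minimal:latest
        command: ["sleep", "infinity"]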
Verification
- Inspect the spec.runtimeClassName field of a pod-templated object. If the value is kata-remote, then the workload is running on OpenShift sandboxed containers, using peer pods.
5.3. Deploying Confidential Containers on IBM Z and IBM LinuxONE
You can deploy Confidential Containers on IBM Z® and IBM® LinuxONE after you deploy OpenShift sandboxed containers.
Confidential Containers on IBM Z® and IBM® LinuxONE is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
Cluster requirements
- You have installed Red Hat OpenShift Container Platform 4.15 or later on the cluster where you are installing the Confidential compute attestation Operator.
You deploy Confidential Containers by performing the following steps:
- Install the Confidential compute attestation Operator.
- Create the route for Trustee.
- Enable the Confidential Containers feature gate.
- Update the peer pods config map.
- Delete the KataConfig custom resource (CR).
- Update the peer pods secret.
- Re-create the KataConfig CR.
- Create the Trustee authentication secret.
- Create the Trustee config map.
- Obtain the IBM Secure Execution (SE) header.
- Configure the SE certificates and keys.
- Create the persistent storage components.
Configure attestation policies:
- Create reference values.
- Create secrets for attested clients.
- Create the resource access policy.
- Create the attestation policy for SE.
- Create the KbsConfig CR.
- Verify the attestation process.
5.3.1. Installing the Confidential compute attestation Operator
You can install the Confidential compute attestation Operator on IBM Z® and IBM® LinuxONE by using the CLI.
Prerequisites
- You have installed the OpenShift CLI (oc).
- You have access to the cluster as a user with the cluster-admin role.
Procedure
Create a trustee-namespace.yaml manifest file:

apiVersion: v1
kind: Namespace
metadata:
  name: trustee-operator-system

Create the trustee-operator-system namespace by running the following command:

$ oc apply -f trustee-namespace.yaml
Create a trustee-operatorgroup.yaml manifest file:

apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: trustee-operator-group
  namespace: trustee-operator-system
spec:
  targetNamespaces:
  - trustee-operator-system
Create the operator group by running the following command:
$ oc apply -f trustee-operatorgroup.yaml
Create a trustee-subscription.yaml manifest file:

apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: trustee-operator
  namespace: trustee-operator-system
spec:
  channel: stable
  installPlanApproval: Automatic
  name: trustee-operator
  source: redhat-operators
  sourceNamespace: openshift-marketplace
  startingCSV: trustee-operator.v0.1.0
Create the subscription by running the following command:
$ oc apply -f trustee-subscription.yaml
Verify that the Operator is correctly installed by running the following command:
$ oc get csv -n trustee-operator-system
This command can take several minutes to complete.
Watch the process by running the following command:
$ watch oc get csv -n trustee-operator-system
Example output
NAME                      DISPLAY            VERSION   PHASE
trustee-operator.v0.1.0   Trustee Operator   0.1.0     Succeeded
5.3.2. Creating the route for Trustee
You can create a secure route with edge TLS termination for Trustee. External ingress traffic reaches the router pods as HTTPS and passes on to the Trustee pods as HTTP.
Prerequisites
- You have enabled the Confidential Containers feature gate.
- You have installed the Confidential compute attestation Operator.
Procedure
Create an edge route by running the following command:

$ oc create route edge --service=kbs-service --port kbs-port \
  -n trustee-operator-system

Note: Currently, only a route with a valid CA-signed certificate is supported. You cannot use a route with a self-signed certificate.
Set the TRUSTEE_HOST variable by running the following command:

$ TRUSTEE_HOST=$(oc get route -n trustee-operator-system kbs-service \
  -o jsonpath={.spec.host})
Verify the route by running the following command:
$ echo $TRUSTEE_HOST
Example output
kbs-service-trustee-operator-system.apps.memvjias.eastus.aroapp.io
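Optionally, confirm that the route uses edge TLS termination by running the following command:

$ oc get route kbs-service -n trustee-operator-system \
  -o jsonpath='{.spec.tls.termination}{"\n"}'

The expected output is edge.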
5.3.3. Enabling the Confidential Containers feature gate
You must enable the Confidential Containers feature gate.
Procedure
Create a cc-feature-gate.yaml manifest file:

apiVersion: v1
kind: ConfigMap
metadata:
  name: osc-feature-gates
  namespace: openshift-sandboxed-containers-operator
data:
  confidential: "true"
Create the config map by running the following command:
$ oc apply -f cc-feature-gate.yaml
5.3.4. Updating the peer pods config map
You must update the peer pods config map for Confidential Containers.
Set Secure Boot to true to enable it by default. The default value is false, which presents a security risk.
Procedure
Create a peer-pods-cm.yaml manifest file according to the following example:

apiVersion: v1
kind: ConfigMap
metadata:
  name: peer-pods-cm
  namespace: openshift-sandboxed-containers-operator
data:
  CLOUD_PROVIDER: "libvirt"
  DISABLECVM: "false"
Create the config map by running the following command:
$ oc apply -f peer-pods-cm.yaml
Restart the peerpodconfig-ctrl-caa-daemon daemon set by running the following command:

$ oc set env ds/peerpodconfig-ctrl-caa-daemon \
  -n openshift-sandboxed-containers-operator REBOOT="$(date)"
5.3.5. Deleting the KataConfig custom resource
You can delete the KataConfig custom resource (CR) by using the command line.
Prerequisites
- You have installed the OpenShift CLI (oc).
- You have access to the cluster as a user with the cluster-admin role.
Procedure
Delete the KataConfig CR by running the following command:

$ oc delete kataconfig example-kataconfig
Verify that the custom resource was deleted by running the following command:
$ oc get kataconfig example-kataconfig
Example output
No example-kataconfig instances exist
5.3.6. Updating the peer pods secret
You must update the peer pods secret for Confidential Containers.
The secret stores credentials for creating the pod virtual machine (VM) image and peer pod instances.
By default, the OpenShift sandboxed containers Operator creates the secret based on the credentials used to create the cluster. However, you can manually create a secret that uses different credentials.
Prerequisites
- REDHAT_OFFLINE_TOKEN. You have generated this token to download the RHEL image at Red Hat API Tokens.
- HKD_CRT. The Host Key Document (HKD) certificate enables secure execution on IBM Z®. For more information, see Obtaining a host key document from Resource Link in the IBM documentation.
Procedure
Create a peer-pods-secret.yaml manifest file according to the following example:

apiVersion: v1
kind: Secret
metadata:
  name: peer-pods-secret
  namespace: openshift-sandboxed-containers-operator
type: Opaque
stringData:
  REDHAT_OFFLINE_TOKEN: "<rh_offline_token>" 1
  HKD_CRT: "<hkd_crt_value>" 2

1 Specify your Red Hat offline token.
2 Specify the content of your Host Key Document (HKD) certificate.
Create the secret by running the following command:
$ oc apply -f peer-pods-secret.yaml
5.3.7. Re-creating the KataConfig custom resource
You must re-create the KataConfig custom resource (CR) for Confidential Containers.

Creating the KataConfig CR automatically reboots the worker nodes. The reboot can take from 10 to more than 60 minutes. The following factors can increase the reboot time:
- A larger OpenShift Container Platform deployment with a greater number of worker nodes.
- Activation of the BIOS and Diagnostics utility.
- Deployment on a hard disk drive rather than an SSD.
- Deployment on physical nodes such as bare metal, rather than on virtual nodes.
- A slow CPU and network.
Prerequisites
- You have access to the cluster as a user with the cluster-admin role.
Procedure
Create an example-kataconfig.yaml manifest file according to the following example:

apiVersion: kataconfiguration.openshift.io/v1
kind: KataConfig
metadata:
  name: example-kataconfig
spec:
  enablePeerPods: true
  logLevel: info
#  kataConfigPoolSelector:
#    matchLabels:
#      <label_key>: '<label_value>' 1

1 Optional: If you have applied node labels to install kata-remote on specific nodes, specify the key and value, for example, cc: 'true'.
Create the KataConfig CR by running the following command:

$ oc apply -f example-kataconfig.yaml

The new KataConfig CR is created and installs kata-remote as a runtime class on the worker nodes.

Wait for the kata-remote installation to complete and the worker nodes to reboot before verifying the installation.

Monitor the installation progress by running the following command:

$ watch "oc describe kataconfig | sed -n /^Status:/,/^Events/p"

When the status of all workers under kataNodes is installed and the condition InProgress is False without specifying a reason, kata-remote is installed on the cluster.

Verify that you have built the peer pod image and uploaded it to the libvirt volume by running the following command:
$ oc describe configmap peer-pods-cm -n openshift-sandboxed-containers-operator
Example output
Name:        peer-pods-cm
Namespace:   openshift-sandboxed-containers-operator
Labels:      <none>
Annotations: <none>

Data
====
CLOUD_PROVIDER: libvirt
DISABLECVM: false 1
LIBVIRT_IMAGE_ID: fa-pp-vol 2

BinaryData
====

Events: <none>

1 DISABLECVM: false indicates that confidential VMs are enabled for peer pods.
2 LIBVIRT_IMAGE_ID is the ID of the pod VM image that was uploaded to the libvirt volume.
Monitor the kata-oc machine config pool progress to ensure that it is in the UPDATED state, when UPDATEDMACHINECOUNT equals MACHINECOUNT, by running the following command:

$ watch oc get mcp/kata-oc
Verify the daemon set by running the following command:
$ oc get -n openshift-sandboxed-containers-operator ds/peerpodconfig-ctrl-caa-daemon
Verify the runtime classes by running the following command:
$ oc get runtimeclass
Example output
NAME          HANDLER       AGE
kata          kata          152m
kata-remote   kata-remote   152m
5.3.8. Creating the Trustee authentication secret
You must create the authentication secret for Trustee.
Prerequisites
- You have installed the OpenShift CLI (oc).
- You have access to the cluster as a user with the cluster-admin role.
Procedure
Create a private key by running the following command:
$ openssl genpkey -algorithm ed25519 > privateKey
Create a public key by running the following command:
$ openssl pkey -in privateKey -pubout -out publicKey
Create a secret by running the following command:
$ oc create secret generic kbs-auth-public-key --from-file=publicKey -n trustee-operator-system
Verify the secret by running the following command:
$ oc get secret -n trustee-operator-system
5.3.9. Creating the Trustee config map
You must create the config map to configure the Trustee server.
Prerequisites
- You have created a route for Trustee.
Procedure
Create a kbs-config-cm.yaml manifest file:

apiVersion: v1
kind: ConfigMap
metadata:
  name: kbs-config-cm
  namespace: trustee-operator-system
data:
  kbs-config.json: |
    {
      "insecure_http" : true,
      "sockets": ["0.0.0.0:8080"],
      "auth_public_key": "/etc/auth-secret/publicKey",
      "attestation_token_config": {
        "attestation_token_type": "CoCo"
      },
      "repository_config": {
        "type": "LocalFs",
        "dir_path": "/opt/confidential-containers/kbs/repository"
      },
      "as_config": {
        "work_dir": "/opt/confidential-containers/attestation-service",
        "policy_engine": "opa",
        "attestation_token_broker": "Simple",
        "attestation_token_config": {
          "duration_min": 5
        },
        "rvps_config": {
          "store_type": "LocalJson",
          "store_config": {
            "file_path": "/opt/confidential-containers/rvps/reference-values/reference-values.json"
          }
        }
      },
      "policy_engine_config": {
        "policy_path": "/opt/confidential-containers/opa/policy.rego"
      }
    }
Create the config map by running the following command:
$ oc apply -f kbs-config-cm.yaml
5.3.10. Obtaining the IBM Secure Execution header
You must obtain the IBM Secure Execution (SE) header.
Prerequisites
- You have a network block storage device to store the SE header temporarily.
Procedure
Create a temporary folder for the SE header by running the following command:
$ mkdir -p /tmp/ibmse/hdr
Download the pvextract-hdr tool from the IBM s390 Linux repository by running the following command:

$ wget https://github.com/ibm-s390-linux/s390-tools/raw/v2.33.1/rust/pvattest/tools/pvextract-hdr -O /tmp/pvextract-hdr
Make the tool executable by running the following command:
$ chmod +x /tmp/pvextract-hdr
Set the $IMAGE variable to the path of the SE image in your image output directory by running the following command:

$ export IMAGE=$IMAGE_OUTPUT_DIR/se-podvm-commit-short-id.qcow2

For example:

$ export IMAGE=/root/rooo/se-podvm-d1fb986-dirty-s390x.qcow2
Enable the nbd kernel module by running the following command:

$ modprobe nbd
Connect the SE image as a network block device (NBD) by running the following command:
$ qemu-nbd --connect=/dev/nbd0 $IMAGE
Create a mount directory for the SE image by running the following command:
$ mkdir -p /mnt/se-image/
Wait briefly for the device to become available by running the following command:
$ sleep 1
List your block devices by running the following command:
$ lsblk
Example output
nbd0                                            43:0    0   100G  0 disk
├─nbd0p1                                        43:1    0   255M  0 part
├─nbd0p2                                        43:2    0     6G  0 part
│ └─luks-e23e15fa-9c2a-45a5-9275-aae9d8e709c3  253:2    0     6G  0 crypt
└─nbd0p3                                        43:3    0  12.4G  0 part
nbd1                                            43:32   0    20G  0 disk
├─nbd1p1                                        43:33   0   255M  0 part
├─nbd1p2                                        43:34   0     6G  0 part
│ └─luks-5a540f7c-c0cb-419b-95e0-487670d91525  253:3    0     6G  0 crypt
└─nbd1p3                                        43:35   0  86.9G  0 part
nbd2                                            43:64   0     0B  0 disk
nbd3                                            43:96   0     0B  0 disk
nbd4                                            43:128  0     0B  0 disk
nbd5                                            43:160  0     0B  0 disk
nbd6                                            43:192  0     0B  0 disk
nbd7                                            43:224  0     0B  0 disk
nbd8                                            43:256  0     0B  0 disk
nbd9                                            43:288  0     0B  0 disk
nbd10                                           43:320  0     0B  0 disk
Mount the SE image on an available NBD partition, and then extract the SE header, by running the following commands:

$ mount /dev/<nbdXp1> /mnt/se-image/
$ /tmp/pvextract-hdr -o /tmp/ibmse/hdr/hdr.bin /mnt/se-image/se.img
Example output
SE header found at offset 0x014000
SE header written to '/tmp/ibmse/hdr/hdr.bin' (640 bytes)
The following error is displayed if the NBD is unavailable:
mount: /mnt/se-image: can't read superblock on /dev/nbd0p1
Unmount the SE image directory by running the following command:
$ umount /mnt/se-image/
Disconnect the network block storage device by running the following command:
$ qemu-nbd --disconnect /dev/nbd0
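Optionally, confirm that the header was extracted by running the following command:

$ ls -l /tmp/ibmse/hdr/hdr.bin

Based on the example output above, the file size is expected to be 640 bytes.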
5.3.11. Configuring the IBM Secure Execution certificates and keys
You must configure the IBM Secure Execution (SE) certificates and keys for your worker nodes.
Prerequisites
- You have the IP address of the bastion node.
- You have the internal IP addresses of the worker nodes.
Procedure
Obtain the attestation policy fields by performing the following steps:
Download the se_parse_hdr.py script from the OpenShift Trustee repository by running the following command:

$ wget https://github.com/openshift/trustee/raw/main/attestation-service/verifier/src/se/se_parse_hdr.py -O /tmp/se_parse_hdr.py
Create a temporary directory for the SE Host Key Document (HKD) certificate by running the following command:
$ mkdir /tmp/ibmse/hkds/
Copy your Host Key Document (HKD) certificate to the temporary directory by running the following command:
$ cp ~/path/to/<hkd_cert.crt> /tmp/ibmse/hkds/<hkd_cert.crt>
Note: The HKD certificate must be the same certificate that you downloaded when you created the peer pods secret.
Obtain the attestation policy fields by running the se_parse_hdr.py script:

$ python3 /tmp/se_parse_hdr.py /tmp/ibmse/hdr/hdr.bin /tmp/ibmse/hkds/<hkd_cert.crt>
Example output
...
================================================
se.image_phkh: xxx
se.version: 256
se.tag: xxx
se.attestation_phkh: xxx
Record these values for the SE attestation policy config map.
Obtain the certificates and certificate revocation lists (CRLs) by performing the following steps:
Create a temporary directory for certificates by running the following command:
$ mkdir /tmp/ibmse/certs
Download the ibm-z-host-key-signing-gen2.crt certificate by running the following command:

$ wget https://www.ibm.com/support/resourcelink/api/content/public/ibm-z-host-key-signing-gen2.crt -O /tmp/ibmse/certs/ibm-z-host-key-signing-gen2.crt

Download the DigiCertCA.crt certificate by running the following command:

$ wget https://www.ibm.com/support/resourcelink/api/content/public/DigiCertCA.crt -O /tmp/ibmse/certs/DigiCertCA.crt
Create a temporary directory for the CRLs by running the following command:
$ mkdir /tmp/ibmse/crls
Download the DigiCertTrustedRootG4.crl file by running the following command:

$ wget http://crl3.digicert.com/DigiCertTrustedRootG4.crl -O /tmp/ibmse/crls/DigiCertTrustedRootG4.crl

Download the DigiCertTrustedG4CodeSigningRSA4096SHA3842021CA1.crl file by running the following command:

$ wget http://crl3.digicert.com/DigiCertTrustedG4CodeSigningRSA4096SHA3842021CA1.crl -O /tmp/ibmse/crls/DigiCertTrustedG4CodeSigningRSA4096SHA3842021CA1.crl
Generate the RSA keys:
Generate an RSA key pair by running the following command:

$ openssl genrsa -aes256 -passout pass:<password> -out /tmp/encrypt_key-psw.pem 4096 1

1 Specify the RSA key password.
Create a temporary directory for the RSA keys by running the following command:
$ mkdir /tmp/ibmse/rsa
Create an encrypt_key.pub key by running the following command:

$ openssl rsa -in /tmp/encrypt_key-psw.pem -passin pass:<password> -pubout -out /tmp/ibmse/rsa/encrypt_key.pub
Create an encrypt_key.pem key by running the following command:

$ openssl rsa -in /tmp/encrypt_key-psw.pem -passin pass:<password> -out /tmp/ibmse/rsa/encrypt_key.pem
Verify the structure of the /tmp/ibmse directory by running the following command:

$ tree /tmp/ibmse
Example output
/tmp/ibmse
├── certs
│   ├── ibm-z-host-key-signing-gen2.crt
│   └── DigiCertCA.crt
├── crls
│   ├── ibm-z-host-key-gen2.crl
│   ├── DigiCertTrustedRootG4.crl
│   └── DigiCertTrustedG4CodeSigningRSA4096SHA3842021CA1.crl
├── hdr
│   └── hdr.bin
├── hkds
│   └── <hkd_cert.crt>
└── rsa
    ├── encrypt_key.pem
    └── encrypt_key.pub
Copy these files to the OpenShift Container Platform worker nodes by performing the following steps:
Create a compressed file from the /tmp/ibmse directory by running the following command:

$ tar -czf /tmp/ibmse.tar.gz -C /tmp ibmse
Copy the .tar.gz file to the bastion node in your cluster by running the following command:

$ scp /tmp/ibmse.tar.gz root@<ocp_bastion_ip>:/tmp 1

1 Specify the IP address of the bastion node.
Connect to the bastion node over SSH by running the following command:
$ ssh root@<ocp_bastion_ip>
Copy the .tar.gz file to each worker node by running the following command:

$ scp /tmp/ibmse.tar.gz core@<worker_node_ip>:/tmp 1

1 Specify the IP address of the worker node.
Extract the .tar.gz file on each worker node by running the following command:

$ ssh core@<worker_node_ip> 'sudo mkdir -p /opt/confidential-containers/ && sudo tar -xzf /tmp/ibmse.tar.gz -C /opt/confidential-containers/'
Update the ibmse folder permissions by running the following command:

$ ssh core@<worker_node_ip> 'sudo chmod -R 755 /opt/confidential-containers/ibmse/'
5.3.12. Creating the persistent storage components
You must create the persistent storage components, a persistent volume (PV) and a persistent volume claim (PVC), to mount the ibmse folder to the Trustee pod.
Procedure
Create a persistent-volume.yaml manifest file:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: ibmse-pv
  namespace: trustee-operator-system
spec:
  capacity:
    storage: 100Mi
  accessModes:
  - ReadOnlyMany
  storageClassName: ""
  local:
    path: /opt/confidential-containers/ibmse
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: node-role.kubernetes.io/worker
          operator: Exists
Create the persistent volume by running the following command:
$ oc apply -f persistent-volume.yaml
Create a persistent-volume-claim.yaml manifest file:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: ibmse-pvc
  namespace: trustee-operator-system
spec:
  accessModes:
  - ReadOnlyMany
  storageClassName: ""
  resources:
    requests:
      storage: 100Mi
Create the persistent volume claim by running the following command:
$ oc apply -f persistent-volume-claim.yaml
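Optionally, verify that the claim is bound to the ibmse-pv volume by running the following command:

$ oc get pvc ibmse-pvc -n trustee-operator-system

The STATUS column is expected to show Bound once the claim is matched to the volume.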
5.3.13. Configuring attestation policies
You can configure the following attestation policy settings:

- Reference values

  You can configure reference values for the Reference Value Provider Service (RVPS) by specifying the trusted digests of your hardware platform.

  The client collects measurements from the running software, the Trusted Execution Environment (TEE) hardware, and the firmware, and submits a quote with the claims to the Attestation Server. These measurements must match the trusted digests registered to the Trustee. This process ensures that the confidential VM (CVM) is running the expected software stack and has not been tampered with.

- Secrets for clients

  You must create one or more secrets to share with attested clients.

- Resource access policy

  You must configure a policy for the Trustee policy engine to determine which resources to access.

  Do not confuse the Trustee policy engine with the Attestation Service policy engine, which determines the validity of TEE evidence.

- Attestation policy

  You must create an attestation policy for IBM Secure Execution.
Procedure
Create an rvps-configmap.yaml manifest file:

apiVersion: v1
kind: ConfigMap
metadata:
  name: rvps-reference-values
  namespace: trustee-operator-system
data:
  reference-values.json: |
    [ 1
    ]

1 Leave this value empty.
Create the RVPS config map by running the following command:
$ oc apply -f rvps-configmap.yaml
Create one or more secrets to share with attested clients according to the following example:

$ oc create secret generic kbsres1 --from-literal key1=<res1val1> \
  --from-literal key2=<res1val2> -n trustee-operator-system

In this example, the kbsres1 secret has two entries (key1, key2), which the Trustee clients retrieve. You can add more secrets according to your requirements.

Create a resourcepolicy-configmap.yaml manifest file:

apiVersion: v1
kind: ConfigMap
metadata:
  name: resource-policy
  namespace: trustee-operator-system
data:
  policy.rego: | 1
    package policy 2
    path := split(data["resource-path"], "/")
    default allow = false
    allow {
      count(path) == 3
      input["tee"] == "se"
    }

1 The name of the resource policy, policy.rego, must match the resource policy defined in the Trustee config map.
2 The resource policy follows the Open Policy Agent specification. This example allows the retrieval of resources only when the TEE is IBM Secure Execution (se).
Create the resource policy config map by running the following command:
$ oc apply -f resourcepolicy-configmap.yaml
Create an attestation-policy.yaml manifest file:

apiVersion: v1
kind: ConfigMap
metadata:
  name: attestation-policy
  namespace: trustee-operator-system
data:
  default.rego: | 1
    package policy
    import rego.v1
    default allow = false
    converted_version := sprintf("%v", [input["se.version"]])
    allow if {
      input["se.attestation_phkh"] == "<se.attestation_phkh>" 2
      input["se.image_phkh"] == "<se.image_phkh>"
      input["se.tag"] == "<se.tag>"
      converted_version == "256"
    }

1 The attestation policy follows the Open Policy Agent specification.
2 Specify the se.attestation_phkh, se.image_phkh, and se.tag values that you recorded from the se_parse_hdr.py script output.
Create the attestation policy config map by running the following command:
$ oc apply -f attestation-policy.yaml
5.3.14. Creating the KbsConfig custom resource
You must create the KbsConfig custom resource (CR) to launch Trustee.

Then, you check the Trustee pods and pod logs to verify the configuration.
Procedure
Create a kbsconfig-cr.yaml manifest file:

apiVersion: confidentialcontainers.org/v1alpha1
kind: KbsConfig
metadata:
  labels:
    app.kubernetes.io/name: kbsconfig
    app.kubernetes.io/instance: kbsconfig
    app.kubernetes.io/part-of: trustee-operator
    app.kubernetes.io/managed-by: kustomize
    app.kubernetes.io/created-by: trustee-operator
  name: kbsconfig
  namespace: trustee-operator-system
spec:
  kbsConfigMapName: kbs-config-cm
  kbsAuthSecretName: kbs-auth-public-key
  kbsDeploymentType: AllInOneDeployment
  kbsRvpsRefValuesConfigMapName: rvps-reference-values
  kbsSecretResources: ["kbsres1"]
  kbsResourcePolicyConfigMapName: resource-policy
  kbsAttestationPolicyConfigMapName: attestation-policy
  kbsServiceType: NodePort
  ibmSEConfigSpec:
    certStorePvc: ibmse-pvc
Create the KbsConfig CR by running the following command:

$ oc apply -f kbsconfig-cr.yaml
Verification
Set the default project by running the following command:
$ oc project trustee-operator-system
Check the pods by running the following command:
$ oc get pods -n trustee-operator-system
Example output
NAME                                                   READY   STATUS    RESTARTS   AGE
trustee-deployment-8585f98449-9bbgl                    1/1     Running   0          22m
trustee-operator-controller-manager-5fbd44cd97-55dlh   2/2     Running   0          59m
Set the POD_NAME environment variable by running the following command:

$ POD_NAME=$(oc get pods -l app=kbs -o jsonpath='{.items[0].metadata.name}' -n trustee-operator-system)
Check the pod logs by running the following command:
$ oc logs -n trustee-operator-system $POD_NAME
Example output
[2024-05-30T13:44:24Z INFO  kbs] Using config file /etc/kbs-config/kbs-config.json
[2024-05-30T13:44:24Z WARN  attestation_service::rvps] No RVPS address provided and will launch a built-in rvps
[2024-05-30T13:44:24Z INFO  attestation_service::token::simple] No Token Signer key in config file, create an ephemeral key and without CA pubkey cert
[2024-05-30T13:44:24Z INFO  api_server] Starting HTTPS server at [0.0.0.0:8080]
[2024-05-30T13:44:24Z INFO  actix_server::builder] starting 12 workers
[2024-05-30T13:44:24Z INFO  actix_server::server] Tokio runtime found; starting in existing Tokio runtime
Expose the ibmse-pvc persistent volume claim to the Trustee pods by running the following command:

$ oc patch deployment trustee-deployment --namespace=trustee-operator-system --type=json -p='[{"op": "remove", "path": "/spec/template/spec/volumes/5/persistentVolumeClaim/readOnly"}]'
Verify that the kbs-service is exposed on a node port by running the following command:

$ oc get svc kbs-service -n trustee-operator-system
Example output
NAME          TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
kbs-service   NodePort   198.51.100.54   <none>        8080:31862/TCP   23h
The kbs-service URL is https://<worker_node_ip>:<node_port>, for example, https://172.16.0.56:31862.
5.3.15. Verifying the attestation process
You can verify the attestation process by creating a test pod and retrieving its secret. The pod image deploys the KBS client, a tool for testing the Key Broker Service and basic attestation flows.
This procedure is an example to verify that attestation is working. Do not write sensitive data to standard I/O because the data can be captured by using a memory dump. Only data written to memory is encrypted.
Prerequisites
- You have created a route if the Trustee server and the test pod are not running in the same cluster.
Procedure
Create a verification-pod.yaml manifest file:

apiVersion: v1
kind: Pod
metadata:
  name: kbs-client
spec:
  containers:
  - name: kbs-client
    image: quay.io/confidential-containers/kbs-client:latest
    imagePullPolicy: IfNotPresent
    command:
    - sleep
    - "360000"
    env:
    - name: RUST_LOG
      value: none
Create the pod by running the following command:
$ oc create -f verification-pod.yaml
Copy the https.crt file to the kbs-client pod by running the following command:

$ oc cp https.crt kbs-client:/
Fetch the pod secret by running the following command:
$ oc exec -it kbs-client -- kbs-client --cert-file https.crt \
  --url https://kbs-service:8080 get-resource \
  --path default/kbsres1/key1
Example output
res1val1
The Trustee server returns the secret only if the attestation is successful.