Chapter 8. Deploying Confidential Containers on IBM Z and IBM LinuxONE
You can deploy Confidential Containers on IBM Z® and IBM® LinuxONE after you deploy OpenShift sandboxed containers.
Confidential Containers on IBM Z® and IBM® LinuxONE is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
IBM® Hyper Protect Confidential Container (HPCC) for Red Hat OpenShift Container Platform is now production-ready. HPCC enables Confidential Computing technology at the enterprise scale by providing a multiparty Hyper Protect Contract, deployment attestation, and validation of container runtime and OCI image integrity.
HPCC is supported by IBM Z17® and IBM® LinuxONE Emperor 5 and is compatible with OpenShift sandboxed containers 1.9 and later. For more information, see the IBM HPCC documentation.
Cluster requirements
- You have installed Red Hat OpenShift Container Platform 4.15 or later on the cluster where you are installing the Confidential compute attestation Operator.
LPAR requirements
- You have an IBM® LinuxONE Emperor 4 system.
- You have enabled the Secure Unpack Facility on your Logical Partition (LPAR), which is necessary for IBM Secure Execution. For more information, see Enabling the KVM host for IBM Secure Execution.
You deploy Confidential Containers by performing the following steps:
- Install the Confidential compute attestation Operator.
- Create the route for Trustee.
- Enable the Confidential Containers feature gate.
- Create initdata.
- Update the peer pods config map.
- Optional: Customize the Kata agent policy.
- Delete the KataConfig custom resource (CR).
- Update the peer pods secret.
- Optional: Select a custom peer pod VM image.
- Re-create the KataConfig CR.
- Create the Trustee authentication secret.
- Create the Trustee config map.
- Obtain the IBM Secure Execution (SE) header.
- Configure the SE certificates and keys.
- Create the persistent storage components.
- Configure Trustee values, policies, and secrets.
- Create the KbsConfig CR.
- Verify the Trustee configuration.
- Verify the attestation process.
8.1. Installing the Confidential compute attestation Operator
You can install the Confidential compute attestation Operator on IBM Z® and IBM® LinuxONE by using the CLI.
Prerequisites
- You have installed the OpenShift CLI (oc).
- You have access to the cluster as a user with the cluster-admin role.
Procedure
Create a trustee-namespace.yaml manifest file:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: trustee-operator-system
```

Create the trustee-operator-system namespace by running the following command:

```
$ oc apply -f trustee-namespace.yaml
```

Create a trustee-operatorgroup.yaml manifest file:

```yaml
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: trustee-operator-group
  namespace: trustee-operator-system
spec:
  targetNamespaces:
  - trustee-operator-system
```

Create the operator group by running the following command:

```
$ oc apply -f trustee-operatorgroup.yaml
```

Create a trustee-subscription.yaml manifest file:

```yaml
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: trustee-operator-system
  namespace: trustee-operator-system
spec:
  channel: stable
  installPlanApproval: Automatic
  name: trustee-operator
  source: trustee-operator-catalog
  sourceNamespace: openshift-marketplace
```

Create the subscription by running the following command:

```
$ oc apply -f trustee-subscription.yaml
```

Verify that the Operator is correctly installed by running the following command:

```
$ oc get csv -n trustee-operator-system
```

This command can take several minutes to complete.

Watch the process by running the following command:

```
$ watch oc get csv -n trustee-operator-system
```

Example output

```
NAME DISPLAY PHASE
trustee-operator.v0.3.0 Trustee Operator 0.3.0 Succeeded
```
8.2. Enabling the Confidential Containers feature gate
You must enable the Confidential Containers feature gate.
Prerequisites
- You have subscribed to the OpenShift sandboxed containers Operator.
Procedure
Create a cc-feature-gate.yaml manifest file:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: osc-feature-gates
  namespace: openshift-sandboxed-containers-operator
data:
  confidential: "true"
```

Create the config map by running the following command:

```
$ oc apply -f cc-feature-gate.yaml
```
8.3. Creating the route for Trustee
You can create a secure route with edge TLS termination for Trustee. External ingress traffic reaches the router pods as HTTPS and passes on to the Trustee pods as HTTP.
Prerequisites
- You have installed the Confidential compute attestation Operator.
Procedure
Create an edge route by running the following command:

```
$ oc create route edge --service=kbs-service --port kbs-port \
  -n trustee-operator-system
```

Note: Currently, only a route with a valid CA-signed certificate is supported. You cannot use a route with a self-signed certificate.

Set the TRUSTEE_HOST variable by running the following command:

```
$ TRUSTEE_HOST=$(oc get route -n trustee-operator-system kbs-service \
  -o jsonpath={.spec.host})
```

Verify the route by running the following command:

```
$ echo $TRUSTEE_HOST
```

Example output

```
kbs-service-trustee-operator-system.apps.memvjias.eastus.aroapp.io
```
8.4. About initdata
The initdata specification provides a flexible way to initialize a peer pod with sensitive or workload-specific data at runtime, avoiding the need to embed such data in the virtual machine (VM) image. This enhances security by reducing exposure of confidential information and improves flexibility by eliminating custom image builds. For example, initdata can include three configuration settings:
- An X.509 certificate for secure communication.
- A cryptographic key for authentication.
- An optional Kata Agent policy.rego file to enforce runtime behavior when overriding the default Kata Agent policy.
You can apply an initdata configuration by using one of the following methods:
- Globally by including it in the peer pods config map, setting a cluster-wide default for all pods.
- For a specific pod when configuring a pod workload object, allowing customization for individual workloads.
The io.katacontainers.config.runtime.cc_init_data annotation that you specify when configuring a pod workload object overrides the global INITDATA setting in the peer pods config map for that specific pod. The Kata runtime handles this precedence automatically at pod creation time.
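For example, a pod-level override might look like the following sketch. The pod name, container name, and image are illustrative, and the Base64 string is a placeholder for the encoded initdata; only the annotation key is defined by the product.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: initdata-demo                     # hypothetical pod name
  annotations:
    # Overrides the global INITDATA value for this pod only
    io.katacontainers.config.runtime.cc_init_data: "<base64_encoded_initdata>"
spec:
  runtimeClassName: kata-remote
  containers:
  - name: demo
    image: registry.access.redhat.com/ubi9/ubi:latest
    command: ["sleep", "36000"]
```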
The initdata content configures the following components:
- Attestation Agent (AA), which verifies the trustworthiness of the peer pod by sending evidence to the Trustee for attestation.
- Confidential Data Hub (CDH), which manages secrets and secure data access within the peer pod VM.
- Kata Agent, which enforces runtime policies and manages the lifecycle of the containers inside the pod VM.
8.5. Creating initdata
Create a TOML file with initdata and convert it to a Base64-encoded string. Use this string to specify the value in the peer pods config map, in the peer pod manifest, or in the busybox.yaml file.
You must delete the kbs_cert setting if you configure insecure_http = true in the Trustee config map.
Procedure
Obtain the Trustee IP address by running the following command:

```
$ oc get node $(oc get pod -n trustee-operator-system -o jsonpath='{.items[0].spec.nodeName}') -o jsonpath='{.status.addresses[?(@.type=="InternalIP")].address}'
```

Example output

```
192.168.122.22
```

Obtain the Trustee port by running the following command:

```
$ oc get svc kbs-service -n trustee-operator-system
```

Example output

```
NAME          TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
kbs-service   NodePort   172.30.116.11   <none>        8080:32178/TCP   12d
```

Create the initdata.toml configuration file:

```toml
algorithm = "sha384"
version = "0.1.0"

[data]
"aa.toml" = '''
[token_configs]
[token_configs.coco_as]
url = 'https://<worker_node_ip>:<node_port>' 1

[token_configs.kbs]
url = 'https://<worker_node_ip>:<node_port>'
cert = """
-----BEGIN CERTIFICATE-----
<kbs_certificate> 2
-----END CERTIFICATE-----
"""
'''

"cdh.toml" = '''
socket = 'unix:///run/confidential-containers/cdh.sock'
credentials = []

[kbc]
name = 'cc_kbc'
url = 'https://<worker_node_ip>:<node_port>'
kbs_cert = """ 3
-----BEGIN CERTIFICATE-----
<kbs_certificate> 4
-----END CERTIFICATE-----
"""
'''

"policy.rego" = ''' 5
package agent_policy

default AddARPNeighborsRequest := true
default AddSwapRequest := true
default CloseStdinRequest := true
default CopyFileRequest := true
default CreateContainerRequest := true
default CreateSandboxRequest := true
default DestroySandboxRequest := true
default ExecProcessRequest := true
default GetMetricsRequest := true
default GetOOMEventRequest := true
default GuestDetailsRequest := true
default ListInterfacesRequest := true
default ListRoutesRequest := true
default MemHotplugByProbeRequest := true
default OnlineCPUMemRequest := true
default PauseContainerRequest := true
default PullImageRequest := true
default ReadStreamRequest := true
default RemoveContainerRequest := true
default RemoveStaleVirtiofsShareMountsRequest := true
default ReseedRandomDevRequest := true
default ResumeContainerRequest := true
default SetGuestDateTimeRequest := true
default SetPolicyRequest := true
default SignalProcessRequest := true
default StartContainerRequest := true
default StartTracingRequest := true
default StatsContainerRequest := true
default StopTracingRequest := true
default TtyWinResizeRequest := true
default UpdateContainerRequest := true
default UpdateEphemeralMountsRequest := true
default UpdateInterfaceRequest := true
default UpdateRoutesRequest := true
default WaitProcessRequest := true
default WriteStreamRequest := true
'''
```

1. Specify the Trustee IP address and the port, for example, https://192.168.122.22:32178.
2. Specify the Base64-encoded TLS certificate for the attestation agent. This is not required for testing purposes, but it is recommended for production systems.
3. Delete the kbs_cert setting if you configure insecure_http = true in the Trustee config map.
4. Specify the Base64-encoded TLS certificate for the Trustee instance.
5. Optional: You can specify a custom Kata Agent policy.

Convert the initdata.toml file to a Base64-encoded string in a text file by running the following command:

```
$ base64 -w0 initdata.toml > initdata.txt
```
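The conversion step can be sanity-checked locally. This sketch, which is not part of the official procedure, re-decodes the generated initdata.txt and compares it with the original file:

```shell
# Encode the initdata file and verify the round trip locally.
# Assumes initdata.toml exists in the current directory.
base64 -w0 initdata.toml > initdata.txt
base64 -d initdata.txt | diff - initdata.toml \
  && echo "initdata.txt decodes back to initdata.toml"
```

If diff reports any difference, the encoded string was corrupted, for example by an editor adding line wraps.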
8.6. Updating the peer pods config map
You must update the peer pods config map for Confidential Containers.
Set Secure Boot to true to enable it by default. The default value is false, which presents a security risk.
Procedure
Create a peer-pods-cm.yaml manifest file according to the following example:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: peer-pods-cm
  namespace: openshift-sandboxed-containers-operator
data:
  CLOUD_PROVIDER: "libvirt"
  PEERPODS_LIMIT_PER_NODE: "10" 1
  LIBVIRT_POOL: "<libvirt_pool>" 2
  LIBVIRT_VOL_NAME: "<libvirt_volume>" 3
  LIBVIRT_DIR_NAME: "/var/lib/libvirt/images/<directory_name>" 4
  LIBVIRT_NET: "default" 5
  INITDATA: "<base64_encoded_initdata>" 6
  DISABLECVM: "false"
```

1. Specify the maximum number of peer pods that can be created per node. The default value is 10.
2. Specify the libvirt pool. If you have manually configured the libvirt pool, use the same name as in your KVM host configuration.
3. Specify the libvirt volume name. If you have manually configured the libvirt volume, use the same name as in your KVM host configuration.
4. Specify the libvirt directory for storing virtual machine disk images, such as .qcow2 or .raw files. To ensure libvirt has read and write access permissions, use a subdirectory of the libvirt storage directory. The default is /var/lib/libvirt/images/.
5. Optional: Specify a libvirt network if you do not want to use the default network.
6. Specify the Base64-encoded string you created in the initdata.txt file.

Create the config map by running the following command:

```
$ oc apply -f peer-pods-cm.yaml
```

Restart the osc-caa-ds daemon set by running the following command:

```
$ oc set env ds/osc-caa-ds \
  -n openshift-sandboxed-containers-operator REBOOT="$(date)"
```
8.7. Customizing the Kata agent policy
The Kata agent policy is a security mechanism that controls agent API requests for pods running with the Kata runtime. Written in Rego and enforced by the Kata agent within the pod virtual machine (VM), this policy determines which operations are allowed or denied.
By default, the Kata agent policy disables the exec and log APIs, as they might transmit or receive unencrypted data through the control plane, which is insecure.
You can override the default policy with a custom one for specific use cases, such as development and testing where security is not a concern. For example, you might run in an environment where the control plane can be trusted. You can apply a custom policy in several ways:
- Embedding it in the pod VM image.
- Patching the peer pods config map.
- Adding an annotation to the workload pod YAML.
For production systems, the preferred method is to use initdata to override the Kata agent policy. The following procedure applies a custom policy to an individual pod using the io.katacontainers.config.agent.policy annotation. The policy is provided in Base64-encoded Rego format. This approach overrides the default policy at pod creation without modifying the pod VM image.
Enabling the exec or log APIs in Confidential Containers workloads might expose sensitive information. Do not enable these APIs in production environments.
A custom policy replaces the default policy entirely. To modify only specific APIs, include the full policy and adjust the relevant rules.
Procedure
Create a policy.rego file with your custom policy. The following example shows all configurable APIs, with exec and log enabled for demonstration:

```rego
package agent_policy

import future.keywords.in
import input

default CopyFileRequest := false
default CreateContainerRequest := false
default CreateSandboxRequest := true
default DestroySandboxRequest := true
default ExecProcessRequest := true  # Enabled to allow exec API
default GetOOMEventRequest := true
default GuestDetailsRequest := true
default OnlineCPUMemRequest := true
default PullImageRequest := true
default ReadStreamRequest := true  # Enabled to allow log API
default RemoveContainerRequest := true
default RemoveStaleVirtiofsShareMountsRequest := true
default SignalProcessRequest := true
default StartContainerRequest := true
default StatsContainerRequest := true
default TtyWinResizeRequest := true
default UpdateEphemeralMountsRequest := true
default UpdateInterfaceRequest := true
default UpdateRoutesRequest := true
default WaitProcessRequest := true
default WriteStreamRequest := false
```

This policy enables the exec (ExecProcessRequest) and log (ReadStreamRequest) APIs. Adjust the true or false values to customize the policy further based on your needs.

Convert the policy.rego file to a Base64-encoded string by running the following command:

```
$ base64 -w0 policy.rego
```

Save the output for use in the YAML file.

Add the Base64-encoded policy to a my-pod.yaml pod specification file:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: <pod_name>
  annotations:
    io.katacontainers.config.agent.policy: <base64_encoded_policy>
spec:
  runtimeClassName: kata-remote
  containers:
  - name: <container_name>
    image: registry.access.redhat.com/ubi9/ubi:latest
    command:
    - sleep
    - "36000"
    securityContext:
      privileged: false
      seccompProfile:
        type: RuntimeDefault
```

Apply the pod manifest by running the following command:

```
$ oc apply -f my-pod.yaml
```
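Before applying the manifest, you can audit what the annotation actually contains by decoding it locally. This is a sketch, not part of the official procedure, and it assumes the my-pod.yaml layout shown in this section:

```shell
# Extract the Base64-encoded policy annotation from my-pod.yaml
# and decode it to confirm the intended rules are present.
policy=$(grep 'io.katacontainers.config.agent.policy:' my-pod.yaml \
  | awk '{print $2}')
echo "$policy" | base64 -d | grep ExecProcessRequest
```

This is a quick way to catch a stale or truncated Base64 string before the pod is created.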
8.8. Deleting the KataConfig custom resource
You can delete the KataConfig custom resource (CR) by using the command line.
Prerequisites
- You have installed the OpenShift CLI (oc).
- You have access to the cluster as a user with the cluster-admin role.
Procedure
Delete the KataConfig CR by running the following command:

```
$ oc delete kataconfig example-kataconfig
```

Verify that the custom resource was deleted by running the following command:

```
$ oc get kataconfig example-kataconfig
```

Example output

```
No example-kataconfig instances exist
```
When uninstalling OpenShift sandboxed containers deployed using a cloud provider, you must delete all of the pods. Any remaining pod resources might result in an unexpected bill from your cloud provider.
8.9. Updating the peer pods secret
You must update the peer pods secret.
The secret stores credentials for creating the pod virtual machine (VM) image and peer pod instances.
By default, the OpenShift sandboxed containers Operator creates the secret based on the credentials used to create the cluster. However, you can manually create a secret that uses different credentials.
Prerequisites
- REDHAT_OFFLINE_TOKEN. You have generated this token to download the RHEL image at Red Hat API Tokens.
- HOST_KEY_CERTS. The Host Key Document (HKD) certificate enables secure execution on IBM Z®. For more information, see Obtaining a host key document from Resource Link in the IBM documentation.
Procedure
Create a peer-pods-secret.yaml manifest file according to the following example:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: peer-pods-secret
  namespace: openshift-sandboxed-containers-operator
type: Opaque
stringData:
  CLOUD_PROVIDER: "libvirt"
  LIBVIRT_URI: "<libvirt_gateway_uri>" 1
  REDHAT_OFFLINE_TOKEN: "<rh_offline_token>" 2
  HOST_KEY_CERTS: "<host_key_crt_value>" 3
```

Create the secret by running the following command:

```
$ oc apply -f peer-pods-secret.yaml
```
8.10. Selecting a custom peer pod VM image
You can select a custom peer pod virtual machine (VM) image, tailored to your workload requirements, by adding an annotation to the pod manifest. The custom image overrides the default image specified in the peer pods config map. You create a new libvirt volume in your libvirt pool and upload the custom peer pod VM image to the new volume. Then, you update the pod manifest to use the custom peer pod VM image.
Prerequisites
- The ID of the custom pod VM image to use, compatible with the cloud provider or hypervisor, is available.
Procedure
Set the name of the libvirt pool by running the following command:

```
$ export LIBVIRT_POOL=<libvirt_pool> 1
```

1. Specify the existing libvirt pool name.

Set the name of the new libvirt volume by running the following command:

```
$ export LIBVIRT_VOL_NAME=<new_libvirt_volume>
```

Create a libvirt volume for the pool by running the following command:

```
$ virsh -c qemu:///system \
  vol-create-as --pool $LIBVIRT_POOL \
  --name $LIBVIRT_VOL_NAME \
  --capacity 20G \
  --allocation 2G \
  --prealloc-metadata \
  --format qcow2
```

Upload the custom peer pod VM image to the libvirt volume:

```
$ virsh -c qemu:///system vol-upload \
  --vol $LIBVIRT_VOL_NAME <custom_podvm_image.qcow2> \ 1
  --pool $LIBVIRT_POOL --sparse
```

1. Specify the custom peer pod VM image name.

Create a pod-manifest.yaml manifest file according to the following example:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-manifest
  annotations:
    io.katacontainers.config.hypervisor.image: "<new_libvirt_volume>" 1
spec:
  runtimeClassName: kata-remote 2
  containers:
  - name: <example_container> 3
    image: registry.access.redhat.com/ubi9/ubi:9.3
    command: ["sleep", "36000"]
```

Create the pod by running the following command:

```
$ oc apply -f pod-manifest.yaml
```
8.11. Re-creating the KataConfig custom resource
You must re-create the KataConfig custom resource (CR) for Confidential Containers.
Creating the KataConfig CR automatically reboots the worker nodes. The reboot can take from 10 to more than 60 minutes. The following factors can increase the reboot time:
- A larger OpenShift Container Platform deployment with a greater number of worker nodes.
- Activation of the BIOS and Diagnostics utility.
- Deployment on a hard disk drive rather than an SSD.
- Deployment on physical nodes such as bare metal, rather than on virtual nodes.
- A slow CPU and network.
Prerequisites
- You have access to the cluster as a user with the cluster-admin role.
Procedure
Create an example-kataconfig.yaml manifest file according to the following example:

```yaml
apiVersion: kataconfiguration.openshift.io/v1
kind: KataConfig
metadata:
  name: example-kataconfig
spec:
  enablePeerPods: true
  logLevel: info
#  kataConfigPoolSelector:
#    matchLabels:
#      <label_key>: '<label_value>' 1
```

1. Optional: If you have applied node labels to install kata-remote on specific nodes, specify the key and value, for example, cc: 'true'.

Create the KataConfig CR by running the following command:

```
$ oc apply -f example-kataconfig.yaml
```

The new KataConfig CR is created and installs kata-remote as a runtime class on the worker nodes.

Wait for the kata-remote installation to complete and the worker nodes to reboot before verifying the installation.

Monitor the installation progress by running the following command:

```
$ watch "oc describe kataconfig | sed -n /^Status:/,/^Events/p"
```

When the status of all workers under kataNodes is installed and the condition InProgress is False without a specified reason, kata-remote is installed on the cluster.

Verify that you have built the peer pod image and uploaded it to the libvirt volume by running the following command:

```
$ oc describe configmap peer-pods-cm -n openshift-sandboxed-containers-operator
```

Example output

```
Name:         peer-pods-cm
Namespace:    openshift-sandboxed-containers-operator
Labels:       <none>
Annotations:  <none>

Data
====
CLOUD_PROVIDER: libvirt
DISABLECVM: false 1
LIBVIRT_IMAGE_ID: fa-pp-vol 2

BinaryData
====

Events: <none>
```

Monitor the kata-oc machine config pool progress to ensure that it is in the UPDATED state, when UPDATEDMACHINECOUNT equals MACHINECOUNT, by running the following command:

```
$ watch oc get mcp/kata-oc
```

Verify the daemon set by running the following command:

```
$ oc get -n openshift-sandboxed-containers-operator ds/osc-caa-ds
```

Verify the runtime classes by running the following command:

```
$ oc get runtimeclass
```

Example output

```
NAME          HANDLER       AGE
kata          kata          152m
kata-remote   kata-remote   152m
```
8.12. Creating the Trustee authentication secret
You must create the authentication secret for Trustee.
Prerequisites
- You have installed the OpenShift CLI (oc).
- You have access to the cluster as a user with the cluster-admin role.
Procedure
Create a private key by running the following command:

```
$ openssl genpkey -algorithm ed25519 > privateKey
```

Create a public key by running the following command:

```
$ openssl pkey -in privateKey -pubout -out publicKey
```

Create a secret by running the following command:

```
$ oc create secret generic kbs-auth-public-key --from-file=publicKey -n trustee-operator-system
```

Verify the secret by running the following command:

```
$ oc get secret -n trustee-operator-system
```
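As an optional local check, which is a sketch rather than part of the official procedure, you can confirm that publicKey was derived from privateKey before creating the secret:

```shell
# Re-derive the public key from the private key and compare it
# with the publicKey file created above.
openssl pkey -in privateKey -pubout | diff - publicKey \
  && echo "public key matches private key"
```

If diff reports a difference, the two files do not form a key pair and Trustee admin authentication would fail.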
8.13. Creating the Trustee config map
You must create the config map to configure the Trustee server.
The following configuration example turns off security features to enable demonstration of Technology Preview features. It is not meant for a production environment.
Prerequisites
- You have created a route for Trustee.
Procedure
Create a kbs-config-cm.yaml manifest file:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: kbs-config-cm
  namespace: trustee-operator-system
data:
  kbs-config.toml: |
    [http_server]
    sockets = ["0.0.0.0:8080"]
    insecure_http = false
    private_key = "/etc/https-key/https.key"
    certificate = "/etc/https-cert/https.crt"

    [admin]
    insecure_api = false
    auth_public_key = "/etc/auth-secret/publicKey"

    [attestation_token]
    insecure_key = true
    attestation_token_type = "CoCo"

    [attestation_service]
    type = "coco_as_builtin"
    work_dir = "/opt/confidential-containers/attestation-service"
    policy_engine = "opa"

    [attestation_service.attestation_token_broker]
    type = "Simple"
    policy_dir = "/opt/confidential-containers/attestation-service/policies"

    [attestation_service.attestation_token_config]
    duration_min = 5

    [attestation_service.rvps_config]
    type = "BuiltIn"

    [attestation_service.rvps_config.storage]
    type = "LocalJson"
    file_path = "/opt/confidential-containers/rvps/reference-values/reference-values.json"

    [[plugins]]
    name = "resource"
    type = "LocalFs"
    dir_path = "/opt/confidential-containers/kbs/repository"

    [policy_engine]
    policy_path = "/opt/confidential-containers/opa/policy.rego"
```

Create the config map by running the following command:

```
$ oc apply -f kbs-config-cm.yaml
```
8.14. Configuring the IBM Secure Execution certificates and keys
You must configure the IBM Secure Execution (SE) certificates and keys for your worker nodes.
Prerequisites
- You have the IP address of the bastion node.
- You have the internal IP addresses of the worker nodes.
Procedure
Generate the Key Broker Service (KBS) certificate and key by performing the following steps:
Create the kbs.conf configuration file according to the following example:

```
[req]
default_bits = 2048
default_keyfile = localhost.key
distinguished_name = req_distinguished_name
req_extensions = req_ext
x509_extensions = v3_ca

[req_distinguished_name]
countryName = Country Name (2-letter code)
countryName_default = <country_name>
stateOrProvinceName = State or Province Name (full name)
stateOrProvinceName_default = <state_name>
localityName = Locality Name (eg, city)
localityName_default = <locality_name>
organizationName = Organization Name (eg, company)
organizationName_default = Red Hat
organizationalUnitName = organizationalunit
organizationalUnitName_default = Development
commonName = Common Name (e.g. server FQDN or YOUR name)
commonName_default = kbs-service
commonName_max = 64

[req_ext]
subjectAltName = @alt_names

[v3_ca]
subjectAltName = @alt_names

[alt_names]
IP.1 = <trustee_ip>
DNS.1 = localhost
DNS.2 = 127.0.0.1
```

Generate the KBS key and self-signed certificate by running the following command:

```
$ openssl req -x509 -nodes -days 365 \
  -newkey rsa:2048 \
  -keyout kbs.key \
  -out kbs.crt \
  -config kbs.conf \
  -passin pass:
```

Copy the KBS key to the ibmse directory by running the following command:

```
$ cp kbs.key /tmp/ibmse/kbs.key
```

Copy the KBS certificate to the ibmse directory by running the following command:

```
$ cp kbs.crt /tmp/ibmse/kbs.crt
```
Obtain the attestation policy fields by performing the following steps:
Create a directory to download the GetRvps.sh script by running the following command:

```
$ mkdir -p Rvps-Extraction/
```

Download the script by running the following command:

```
$ wget https://github.com/openshift/sandboxed-containers-operator/raw/devel/scripts/rvps-extraction/GetRvps.sh -O $PWD/GetRvps.sh
```

Create a subdirectory by running the following command:

```
$ mkdir -p Rvps-Extraction/static-files
```

Go to the static-files directory by running the following command:

```
$ cd Rvps-Extraction/static-files
```

Download the pvextract-hdr tool by running the following command:

```
$ wget https://github.com/openshift/sandboxed-containers-operator/raw/devel/scripts/rvps-extraction/static-files/pvextract-hdr -O $PWD/pvextract-hdr
```

Make the tool executable by running the following command:

```
$ chmod +x pvextract-hdr
```

Download the se_parse_hdr.py script by running the following command:

```
$ wget https://github.com/openshift/sandboxed-containers-operator/raw/devel/scripts/rvps-extraction/static-files/se_parse_hdr.py -O $PWD/se_parse_hdr.py
```

Copy your Host Key Document (HKD) certificate to the static-files directory by running the following command:

```
$ cp ~/path/to/<hkd_cert.crt> .
```

The static-files directory contains the following files:

- HKD.crt
- pvextract-hdr
- se_parse_hdr.py

Go to the Rvps-Extraction directory by running the following command:

```
$ cd ..
```

Make the GetRvps.sh script executable by running the following command:

```
$ chmod +x GetRvps.sh
```

Run the script:

```
$ ./GetRvps.sh
```

Example output

```
***Installing necessary packages for RVPS values extraction ***
Updating Subscription Management repositories.
Last metadata expiration check: 0:37:12 ago on Mon Nov 18 09:20:29 2024.
Package python3-3.9.19-8.el9_5.1.s390x is already installed.
Package python3-cryptography-36.0.1-4.el9.s390x is already installed.
Package kmod-28-10.el9.s390x is already installed.
Dependencies resolved.
Nothing to do.
Complete!
***Installation Finished ***
1) Generate the RVPS From Local Image from User pc
2) Generate RVPS from Volume
3) Quit
Please enter your choice:
```

Enter 2 to generate the Reference Value Provider Service from the volume:

```
Please enter your choice: 2
```

Enter fa-pp for the libvirt pool name:

```
Enter the Libvirt Pool Name: fa-pp
```

Enter the libvirt gateway URI:

```
Enter the Libvirt URI Name: <libvirt-uri> 1
```

1. Specify the LIBVIRT_URI value that you used to create the peer pods secret.

Enter fa-pp-vol for the libvirt volume name:

```
Enter the Libvirt Volume Name: fa-pp-vol
```

Example output

```
Downloading from PODVM Volume...
mount: /mnt/myvm: special device /dev/nbd3p1 does not exist.
Error: Failed to mount the image. Retrying...
Mounting on second attempt passed
/dev/nbd3 disconnected
SE header found at offset 0x014000
SE header written to '/root/Rvps-Extraction/output-files/hdr.bin' (640 bytes)
se.tag: 42f3fe61e8a7e859cab3bb033fd11c61
se.image_phkh: 92d0aff6eb86719b6b1ea0cb98d2c99ff2ec693df3efff2158f54112f6961508
provenance = ewogICAgInNlLmF0dGVzdGF0aW9uX3Boa2giOiBbCiAgICAgICAgIjkyZDBhZmY2ZWI4NjcxOWI2YjFlYTBjYjk4ZDJjOTlmZjJlYzY5M2RmM2VmZmYyMTU4ZjU0MTEyZjY5NjE1MDgiCiAgICBdLAogICAgInNlLnRhZyI6IFsKICAgICAgICAiNDJmM2ZlNjFlOGE3ZTg1OWNhYjNiYjAzM2ZkMTFjNjEiCiAgICBdLAogICAgInNlLmltYWdlX3Boa2giOiBbCiAgICAgICAgIjkyZDBhZmY2ZWI4NjcxOWI2YjFlYTBjYjk4ZDJjOTlmZjJlYzY5M2RmM2VmZmYyMTU4ZjU0MTEyZjY5NjE1MDgiCiAgICBdLAogICAgInNlLnVzZXJfZGF0YSI6IFsKICAgICAgICAiMDAiCiAgICBdLAogICAgInNlLnZlcnNpb24iOiBbCiAgICAgICAgIjI1NiIKICAgIF0KfQo=
-rw-r--r--. 1 root root 640 Dec 16 10:57 /root/Rvps-Extraction/output-files/hdr.bin
-rw-r--r--. 1 root root 446 Dec 16 10:57 /root/Rvps-Extraction/output-files/ibmse-policy.rego
-rw-r--r--. 1 root root 561 Dec 16 10:57 /root/Rvps-Extraction/output-files/se-message
```
Obtain the certificates and certificate revocation lists (CRLs) by performing the following steps:
Create a temporary directory for certificates by running the following command:

```
$ mkdir /tmp/ibmse/certs
```

Download the ibm-z-host-key-signing-gen2.crt certificate by running the following command:

```
$ wget https://www.ibm.com/support/resourcelink/api/content/public/ibm-z-host-key-signing-gen2.crt -O /tmp/ibmse/certs/ibm-z-host-key-signing-gen2.crt
```

Download the DigiCertCA.crt certificate by running the following command:

```
$ wget https://www.ibm.com/support/resourcelink/api/content/public/DigiCertCA.crt -O /tmp/ibmse/certs/DigiCertCA.crt
```

Create a temporary directory for the CRLs by running the following command:

```
$ mkdir /tmp/ibmse/crls
```

Download the ibm-z-host-key-gen2.crl file by running the following command:

```
$ wget https://www.ibm.com/support/resourcelink/api/content/public/ibm-z-host-key-gen2.crl -O /tmp/ibmse/crls/ibm-z-host-key-gen2.crl
```

Download the DigiCertTrustedRootG4.crl file by running the following command:

```
$ wget http://crl3.digicert.com/DigiCertTrustedRootG4.crl -O /tmp/ibmse/crls/DigiCertTrustedRootG4.crl
```

Download the DigiCertTrustedG4CodeSigningRSA4096SHA3842021CA1.crl file by running the following command:

```
$ wget http://crl3.digicert.com/DigiCertTrustedG4CodeSigningRSA4096SHA3842021CA1.crl -O /tmp/ibmse/crls/DigiCertTrustedG4CodeSigningRSA4096SHA3842021CA1.crl
```

Create a temporary directory for the hdr.bin file by running the following command:

```
$ mkdir -p /tmp/ibmse/hdr/
```

Copy the hdr.bin file to the hdr directory by running the following command:

```
$ cp /root/Rvps-Extraction/output-files/hdr.bin /tmp/ibmse/hdr/
```

Create a temporary directory for the Host Key Document (HKD) certificate by running the following command:

```
$ mkdir -p /tmp/ibmse/hkds
```

Copy your HKD certificate to the hkds directory by running the following command:

```
$ cp ~/path/to/<hkd_cert.crt> /tmp/ibmse/hkds/
```
Generate the RSA keys:
Generate an RSA key pair by running the following command:
$ openssl genrsa -aes256 -passout pass:<password> -out /tmp/encrypt_key-psw.pem 4096 [1]

[1] Specify the RSA key password.
Create a temporary directory for the RSA keys by running the following command:
$ mkdir -p /tmp/ibmse/rsa

Create an encrypt_key.pub key by running the following command:

$ openssl rsa -in /tmp/encrypt_key-psw.pem -passin pass:<password> -pubout -out /tmp/ibmse/rsa/encrypt_key.pub

Create an encrypt_key.pem key by running the following command:

$ openssl rsa -in /tmp/encrypt_key-psw.pem -passin pass:<password> -out /tmp/ibmse/rsa/encrypt_key.pem
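The key-generation steps can be rehearsed locally with throwaway values before using a real password. In this sketch, pass:demo123 is an assumed placeholder and a 2048-bit key is used only for speed; the /tmp/demo-* paths are illustrative so that nothing under /tmp/ibmse is overwritten:

```shell
# Generate an encrypted private key, then export the public key and the
# unencrypted private key, mirroring the commands above (throwaway password).
openssl genrsa -aes256 -passout pass:demo123 -out /tmp/demo-key-psw.pem 2048
openssl rsa -in /tmp/demo-key-psw.pem -passin pass:demo123 -pubout -out /tmp/demo-key.pub
openssl rsa -in /tmp/demo-key-psw.pem -passin pass:demo123 -out /tmp/demo-key.pem
grep -c 'BEGIN PUBLIC KEY' /tmp/demo-key.pub
```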
Verify the structure of the /tmp/ibmse directory by running the following command:

$ tree /tmp/ibmse

Example output

/tmp/ibmse
├── kbs.key
├── kbs.crt
├── certs
│   ├── ibm-z-host-key-signing-gen2.crt
│   └── DigiCertCA.crt
├── crls
│   ├── ibm-z-host-key-gen2.crl
│   ├── DigiCertTrustedRootG4.crl
│   └── DigiCertTrustedG4CodeSigningRSA4096SHA3842021CA1.crl
├── hdr
│   └── hdr.bin
├── hkds
│   └── <hkd_cert.crt>
└── rsa
    ├── encrypt_key.pem
    └── encrypt_key.pub

Copy these files to the OpenShift Container Platform worker nodes by performing the following steps:
Create a compressed file from the /tmp/ibmse directory by running the following command:

$ tar -czf /tmp/ibmse.tar.gz -C /tmp/ ibmse

Copy the .tar.gz file to the bastion node in your cluster by running the following command:

$ scp /tmp/ibmse.tar.gz root@<ocp_bastion_ip>:/tmp [1]

[1] Specify the IP address of the bastion node.

Connect to the bastion node over SSH by running the following command:

$ ssh root@<ocp_bastion_ip>

Copy the .tar.gz file to each worker node by running the following command:

$ scp /tmp/ibmse.tar.gz core@<worker_node_ip>:/tmp [1]

[1] Specify the IP address of the worker node.

Extract the .tar.gz file on each worker node by running the following command:

$ ssh core@<worker_node_ip> 'sudo mkdir -p /opt/confidential-containers/ && sudo tar -xzf /tmp/ibmse.tar.gz -C /opt/confidential-containers/'

Update the ibmse folder permissions by running the following command:

$ ssh core@<worker_node_ip> 'sudo chmod -R 755 /opt/confidential-containers/ibmse/'
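The package-and-extract round trip used above can be rehearsed locally before touching the cluster; the /tmp/demo paths below are illustrative:

```shell
# Package a directory with -C (so the archive contains 'ibmse', not the full path),
# then extract it under a target root, as done for the worker nodes above.
mkdir -p /tmp/demo/ibmse/rsa
echo placeholder > /tmp/demo/ibmse/rsa/encrypt_key.pub
tar -czf /tmp/demo/ibmse.tar.gz -C /tmp/demo ibmse
mkdir -p /tmp/demo/confidential-containers
tar -xzf /tmp/demo/ibmse.tar.gz -C /tmp/demo/confidential-containers
chmod -R 755 /tmp/demo/confidential-containers/ibmse
ls /tmp/demo/confidential-containers/ibmse/rsa
```

The -C flag matters in both directions: it keeps the archive rooted at ibmse so extraction lands at /opt/confidential-containers/ibmse, which is the path the persistent volume expects.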
Create the secrets in the cluster with the KBS key and certificate by performing the following steps:
Create a kbs-https-certificate.yaml manifest file according to the following example:

apiVersion: v1
kind: Secret
metadata:
  name: kbs-https-certificate
  namespace: trustee-operator-system
data:
  https.crt: $(cat /tmp/ibmse/kbs.crt | base64 -w 0)

Create the secret with the KBS certificate by running the following command:

$ oc apply -f kbs-https-certificate.yaml

Create a kbs-https-key.yaml manifest file according to the following example:

apiVersion: v1
kind: Secret
metadata:
  name: kbs-https-key
  namespace: trustee-operator-system
data:
  https.key: $(cat /tmp/ibmse/kbs.key | base64 -w 0)

Create the secret with the KBS key by running the following command:

$ oc apply -f kbs-https-key.yaml
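Note that the $(cat … | base64 -w 0) expression in the manifest only expands when the file is generated through a shell, for example with a heredoc; pasted verbatim into a static YAML file it stays literal. A sketch with a dummy certificate file (the /tmp/demo-* paths stand in for /tmp/ibmse/kbs.crt):

```shell
# Generate the Secret manifest through a heredoc so the command substitution
# actually runs and the certificate is base64-encoded into the data field.
echo 'dummy-cert' > /tmp/demo-kbs.crt
cat > /tmp/demo-kbs-https-certificate.yaml <<EOF
apiVersion: v1
kind: Secret
metadata:
  name: kbs-https-certificate
  namespace: trustee-operator-system
data:
  https.crt: $(base64 -w 0 < /tmp/demo-kbs.crt)
EOF
grep 'https.crt' /tmp/demo-kbs-https-certificate.yaml
```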
8.15. Creating the persistent storage components
You must create persistent storage components, a persistent volume (PV) and a persistent volume claim (PVC), to mount the ibmse folder to the Trustee pod.
Procedure
Create a persistent-volume.yaml manifest file:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: ibmse-pv
  namespace: trustee-operator-system
spec:
  capacity:
    storage: 100Mi
  accessModes:
  - ReadOnlyMany
  storageClassName: ""
  local:
    path: /opt/confidential-containers/ibmse
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: node-role.kubernetes.io/worker
          operator: Exists

Create the persistent volume by running the following command:

$ oc apply -f persistent-volume.yaml

Create a persistent-volume-claim.yaml manifest file:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: ibmse-pvc
  namespace: trustee-operator-system
spec:
  accessModes:
  - ReadOnlyMany
  storageClassName: ""
  resources:
    requests:
      storage: 100Mi

Create the persistent volume claim by running the following command:
$ oc apply -f persistent-volume-claim.yaml
8.16. Configuring Trustee values, policies, and secrets
You can configure the following values, policies, and secrets for Trustee:
- Reference values for the Reference Value Provider Service.
- Attestation policy for IBM Secure Execution.
- Secret for custom keys for Trustee clients.
- Secret for container image signature verification.
- Container image signature verification policy. This policy is mandatory. If you do not use container image signature verification, you must create a policy that does not verify signatures.
- Resource access policy.
8.16.1. Configuring reference values
You can configure reference values for the Reference Value Provider Service (RVPS) by specifying the trusted digests of your hardware platform.
The client collects measurements from the running software and from the Trusted Execution Environment (TEE) hardware and firmware, and submits a quote with the claims to the Attestation Server. These measurements must match the trusted digests registered to the Trustee. This process ensures that the confidential VM (CVM) is running the expected software stack and has not been tampered with.
Procedure
Create an rvps-configmap.yaml manifest file:

apiVersion: v1
kind: ConfigMap
metadata:
  name: rvps-reference-values
  namespace: trustee-operator-system
data:
  reference-values.json: | [1]

[1] Leave this value empty.
Create the RVPS config map by running the following command:
$ oc apply -f rvps-configmap.yaml
8.16.2. Creating the attestation policy for IBM Secure Execution
You must create the attestation policy for IBM Secure Execution.
Procedure
Create an attestation-policy.yaml manifest file:

apiVersion: v1
kind: ConfigMap
metadata:
  name: attestation-policy
  namespace: trustee-operator-system
data:
  default.rego: |
    package policy
    import rego.v1
    default allow = false
    converted_version := sprintf("%v", [input["se.version"]])

    allow if {
      input["se.attestation_phkh"] == "<se.attestation_phkh>"
      input["se.image_phkh"] == "<se.image_phkh>"
      input["se.tag"] == "<se.tag>"
      converted_version == "256"
    }

- default.rego: Do not modify the policy name.
- <se.attestation_phkh>, <se.image_phkh>, <se.tag>: Replace these placeholders with the attestation policy fields that you obtained by running the se_parse_hdr.py script.
Create the attestation policy config map by running the following command:
$ oc apply -f attestation-policy.yaml
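The <se.*> placeholders can be filled in from the se_parse_hdr.py output with a simple substitution instead of hand-editing. In this sketch, the digest values are the sample ones from the extraction output earlier in this chapter, not real digests, and /tmp/demo-policy.rego is an illustrative path:

```shell
# Substitute the extracted SE digests into the policy placeholders (sample values).
SE_TAG='42f3fe61e8a7e859cab3bb033fd11c61'
SE_IMAGE_PHKH='92d0aff6eb86719b6b1ea0cb98d2c99ff2ec693df3efff2158f54112f6961508'
printf 'input["se.tag"] == "<se.tag>"\ninput["se.image_phkh"] == "<se.image_phkh>"\n' > /tmp/demo-policy.rego
sed -i -e "s|<se.tag>|$SE_TAG|" -e "s|<se.image_phkh>|$SE_IMAGE_PHKH|" /tmp/demo-policy.rego
cat /tmp/demo-policy.rego
```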
8.16.3. Creating a secret with custom keys for clients
You can create a secret that contains one or more custom keys for Trustee clients.
In this example, the kbsres1 secret has two entries (key1, key2), which the clients retrieve. You can add additional secrets according to your requirements by using the same format.
Prerequisites
- You have created one or more custom keys.
Procedure
Create a secret for the custom keys according to the following example:
$ oc create secret generic kbsres1 \
    --from-literal key1=<custom_key1> \
    --from-literal key2=<custom_key2> \
    -n trustee-operator-system

where <custom_key1> and <custom_key2> specify the custom keys.

The kbsres1 secret is specified in the spec.kbsSecretResources key of the KbsConfig custom resource.
8.16.4. Creating a secret for container image signature verification
If you use container image signature verification, you must create a secret that contains the public container image signing key.
The Confidential compute attestation Operator uses the secret to verify the signature, ensuring that only trusted and authenticated container images are deployed in your environment.
You can use Red Hat Trusted Artifact Signer or other tools to sign container images.
Procedure
Create a secret for container image signature verification by running the following command:
$ oc create secret generic <type> \
    --from-file=<tag>=./<public_key_file> \
    -n trustee-operator-system

Record the <type> value. You must add this value to the spec.kbsSecretResources key when you create the KbsConfig custom resource.
8.16.5. Creating the container image signature verification policy
You create the container image signature verification policy because signature verification is always enabled. If this policy is missing, the pods will not start.
If you are not using container image signature verification, you create the policy without signature verification.
For more information, see the containers-policy.json(5) man page.
Procedure
Create a security-policy-config.json file according to the following examples:

Without signature verification:

{
    "default": [
        {
            "type": "insecureAcceptAnything"
        }
    ],
    "transports": {}
}

With signature verification:

{
    "default": [],
    "transports": {
        "docker": {
            "<container_registry_url>/<username>/busybox:latest": [
                {
                    "type": "sigstoreSigned",
                    "keyPath": "kbs:///default/img-sig/pub-key"
                }
            ]
        }
    }
}
Create the security policy by running the following command:
$ oc create secret generic security-policy \
    --from-file=osc=./security-policy-config.json \
    -n trustee-operator-system

Do not alter the secret name, security-policy, or the key, osc.

The security-policy secret is specified in the spec.kbsSecretResources key of the KbsConfig custom resource.
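Because a malformed policy prevents pods from starting, it is worth checking that the file parses as JSON before wrapping it in the secret. A sketch using the permissive example and python3 -m json.tool (any JSON validator works; the /tmp/demo-* path is illustrative):

```shell
# Write the permissive policy and verify that it is syntactically valid JSON.
cat > /tmp/demo-security-policy-config.json <<'EOF'
{
    "default": [
        {
            "type": "insecureAcceptAnything"
        }
    ],
    "transports": {}
}
EOF
python3 -m json.tool /tmp/demo-security-policy-config.json > /dev/null && echo "valid JSON"
```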
8.16.6. Creating the resource access policy
You configure the resource access policy for the Trustee policy engine. This policy determines which resources Trustee can access.
The Trustee policy engine is different from the Attestation Service policy engine, which determines the validity of TEE evidence.
Procedure
Create a resourcepolicy-configmap.yaml manifest file:

apiVersion: v1
kind: ConfigMap
metadata:
  name: resource-policy
  namespace: trustee-operator-system
data:
  policy.rego: |
    package policy
    default allow = true
    allow {
      input["tee"] == "se"
    }

- policy.rego: The name of the resource policy must match the resource policy name defined in the Trustee config map.
- package policy: The resource policy follows the Open Policy Agent specification.
Create the resource policy config map by running the following command:
$ oc apply -f resourcepolicy-configmap.yaml
8.17. Creating the KbsConfig custom resource
You create the KbsConfig custom resource (CR) to launch Trustee.
Then, you check the Trustee pods and pod logs to verify the configuration.
Procedure
Create a kbsconfig-cr.yaml manifest file:

apiVersion: confidentialcontainers.org/v1alpha1
kind: KbsConfig
metadata:
  labels:
    app.kubernetes.io/name: kbsconfig
    app.kubernetes.io/instance: kbsconfig
    app.kubernetes.io/part-of: trustee-operator
    app.kubernetes.io/managed-by: kustomize
    app.kubernetes.io/created-by: trustee-operator
  name: kbsconfig
  namespace: trustee-operator-system
spec:
  kbsConfigMapName: kbs-config-cm
  kbsAuthSecretName: kbs-auth-public-key
  kbsDeploymentType: AllInOneDeployment
  kbsRvpsRefValuesConfigMapName: rvps-reference-values
  kbsSecretResources: ["kbsres1", "security-policy", "<type>"] [1]
  kbsResourcePolicyConfigMapName: resource-policy
  kbsAttestationPolicyConfigMapName: attestation-policy
  kbsHttpsKeySecretName: kbs-https-key
  kbsHttpsCertSecretName: kbs-https-certificate
  kbsServiceType: NodePort
  ibmSEConfigSpec:
    certStorePvc: ibmse-pvc
  KbsEnvVars:
    SE_SKIP_CERTS_VERIFICATION: "false"

[1] Include the <type> value only if you created a secret for container image signature verification.

Create the KbsConfig CR by running the following command:

$ oc apply -f kbsconfig-cr.yaml
8.18. Verifying the Trustee configuration
You verify the Trustee configuration by checking the Trustee pods and logs.
Procedure
Set the default project by running the following command:
$ oc project trustee-operator-system

Check the Trustee pods by running the following command:

$ oc get pods -n trustee-operator-system

Example output

NAME                                                   READY   STATUS    RESTARTS   AGE
trustee-deployment-8585f98449-9bbgl                    1/1     Running   0          22m
trustee-operator-controller-manager-5fbd44cd97-55dlh   2/2     Running   0          59m

Set the POD_NAME environment variable by running the following command:

$ POD_NAME=$(oc get pods -l app=kbs -o jsonpath='{.items[0].metadata.name}' -n trustee-operator-system)

Check the pod logs by running the following command:

$ oc logs -n trustee-operator-system $POD_NAME

Example output

[2024-05-30T13:44:24Z INFO  kbs] Using config file /etc/kbs-config/kbs-config.json
[2024-05-30T13:44:24Z WARN  attestation_service::rvps] No RVPS address provided and will launch a built-in rvps
[2024-05-30T13:44:24Z INFO  attestation_service::token::simple] No Token Signer key in config file, create an ephemeral key and without CA pubkey cert
[2024-05-30T13:44:24Z INFO  api_server] Starting HTTPS server at [0.0.0.0:8080]
[2024-05-30T13:44:24Z INFO  actix_server::builder] starting 12 workers
[2024-05-30T13:44:24Z INFO  actix_server::server] Tokio runtime found; starting in existing Tokio runtime

Verify that the kbs-service is exposed on a node port by running the following command:

$ oc get svc kbs-service -n trustee-operator-system

Example output

NAME          TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
kbs-service   NodePort   198.51.100.54   <none>        8080:31862/TCP   23h

Obtain the Trustee deployment pod name by running the following command:

$ oc get pods -n trustee-operator-system | grep -i trustee-deployment

Example output

NAME                                 READY   STATUS    RESTARTS   AGE
trustee-deployment-d746679cd-plq82   1/1     Running   0          2m32s
8.19. Verifying the attestation process
You can verify the attestation process by creating a BusyBox pod. The pod image deploys the confidential workload where you can retrieve the key.
This procedure is an example to verify that attestation is working. Do not write sensitive data to standard I/O because the data can be captured by using a memory dump. Only data written to memory is encrypted.
Procedure
Create a busybox.yaml manifest file:

apiVersion: v1
kind: Pod
metadata:
  name: busybox
  namespace: default
  labels:
    run: busybox
spec:
  runtimeClassName: kata-remote
  restartPolicy: Never
  containers:
  - name: busybox
    image: quay.io/prometheus/busybox:latest
    imagePullPolicy: Always
    command:
    - "sleep"
    - "3600"

Create the pod by running the following command:

$ oc create -f busybox.yaml

Log in to the pod by running the following command:

$ oc exec -it busybox -n default -- /bin/sh

Get the secret key by running the following command:

$ wget http://127.0.0.1:8006/cdh/resource/default/kbsres1/key1

Example output

Connecting to 127.0.0.1:8006 (127.0.0.1:8006)
saving to 'key1'
key1                 100% |*******************************************|     8  0:00:00 ETA
'key1' saved

Display the key1 value by running the following command:

$ cat key1

Example output

res1val1