Chapter 3. Deploying confidential containers on IBM Z and IBM LinuxONE
You can deploy confidential containers for your workloads on a Red Hat OpenShift Container Platform cluster on IBM Z® and IBM® LinuxONE.
Confidential containers on IBM Z® and IBM® LinuxONE is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
You deploy confidential containers by performing the following steps:
- Install the OpenShift sandboxed containers Operator.
- Create the peer pods secret.
- Enable the confidential containers feature gate.
- Optional: If you pull a peer pod VM image from a private registry such as registry.access.redhat.com, configure the pull secret for peer pods.
- Create initdata to initialize a peer pod with sensitive or workload-specific data at runtime. See About initdata for details.
  Important: Do not use the default permissive Kata Agent policy in a production environment. You must configure a restrictive policy, preferably by creating initdata. As a minimum requirement, you must disable ExecProcessRequest to prevent a cluster administrator from accessing sensitive data by running the oc exec command on a confidential containers pod.
- Create the peer pods config map. You can add initdata to the config map to create a default global configuration for your peer pods.
- Optional: Add initdata to a pod manifest to override the global initdata configuration you set in the peer pods config map.
- Optional: Select a custom peer pod VM image.
- Create the KataConfig CR.
- Verify the attestation process.
IBM® Hyper Protect Confidential Container (HPCC) for Red Hat OpenShift Container Platform is now production-ready. HPCC enables Confidential Computing technology at the enterprise scale by providing a multiparty Hyper Protect Contract, deployment attestation, and validation of container runtime and OCI image integrity.
HPCC is supported by IBM Z17® and IBM® LinuxONE Emperor 5. For more information, see the IBM HPCC documentation.
3.1. Prerequisites
- You have installed the latest version of Red Hat OpenShift Container Platform on the cluster where you are running your confidential containers workload.
- You have deployed Red Hat build of Trustee on an OpenShift Container Platform cluster in a trusted environment. For more information, see Deploying Red Hat build of Trustee.
- You are using LinuxONE Emperor 4.
- You have enabled the Secure Unpack Facility on your Logical Partition (LPAR), which is necessary for IBM Secure Execution. For more information, see Enabling the KVM host for IBM Secure Execution.
3.2. Installing the OpenShift sandboxed containers Operator
You install the OpenShift sandboxed containers Operator by using the command line interface (CLI).
Prerequisites
- You have access to the cluster as a user with the cluster-admin role.
Procedure
Create an osc-namespace.yaml manifest file:

apiVersion: v1
kind: Namespace
metadata:
  name: openshift-sandboxed-containers-operator

Create the namespace by running the following command:

$ oc create -f osc-namespace.yaml

Create an osc-operatorgroup.yaml manifest file:

apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: sandboxed-containers-operator-group
  namespace: openshift-sandboxed-containers-operator
spec:
  targetNamespaces:
  - openshift-sandboxed-containers-operator

Create the operator group by running the following command:

$ oc create -f osc-operatorgroup.yaml

Create an osc-subscription.yaml manifest file:

apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: sandboxed-containers-operator
  namespace: openshift-sandboxed-containers-operator
spec:
  channel: stable
  installPlanApproval: Automatic
  name: sandboxed-containers-operator
  source: redhat-operators
  sourceNamespace: openshift-marketplace
  startingCSV: sandboxed-containers-operator.v1.10.3

Create the subscription by running the following command:

$ oc create -f osc-subscription.yaml

Verify that the Operator is correctly installed by running the following command:

$ oc get csv -n openshift-sandboxed-containers-operator

This command can take several minutes to complete.

Watch the process by running the following command:

$ watch oc get csv -n openshift-sandboxed-containers-operator

Example output:

NAME                             DISPLAY                                   VERSION   REPLACES   PHASE
openshift-sandboxed-containers   openshift-sandboxed-containers-operator   1.10.3    1.9.0      Succeeded
3.3. Creating the peer pods secret
You must create a peer pods secret. The secret stores credentials for creating the pod virtual machine (VM) image and peer pod instances.
Prerequisites
- LIBVIRT_URI. This value is the default gateway IP address of the libvirt network. Check your libvirt network setup to obtain this value.
  Note: If libvirt uses the default bridge virtual network, you can obtain the LIBVIRT_URI by running the following commands:

  $ virtint=$(bridge_line=$(virsh net-info default | grep Bridge); echo "${bridge_line//Bridge:/}" | tr -d [:blank:])
  $ LIBVIRT_URI=$( ip -4 addr show $virtint | grep -oP '(?<=inet\s)\d+(\.\d+){3}')
  $ LIBVIRT_GATEWAY_URI="qemu+ssh://root@${LIBVIRT_URI}/system?no_verify=1"

- REDHAT_OFFLINE_TOKEN. You have generated this token to download the RHEL image at Red Hat API Tokens.
- HOST_KEY_CERTS. The Host Key Document (HKD) certificate enables secure execution on IBM Z®. For more information, see Obtaining a host key document from Resource Link in the IBM documentation.
Procedure
Create a peer-pods-secret.yaml manifest file according to the following example:

apiVersion: v1
kind: Secret
metadata:
  name: peer-pods-secret
  namespace: openshift-sandboxed-containers-operator
type: Opaque
stringData:
  CLOUD_PROVIDER: "libvirt"
  LIBVIRT_URI: "<libvirt_gateway_uri>"
  REDHAT_OFFLINE_TOKEN: "<rh_offline_token>"
  HOST_KEY_CERTS: "<host_key_crt_value>"

Create the secret by running the following command:

$ oc create -f peer-pods-secret.yaml
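As an illustration, you can generate the manifest from shell variables instead of editing it by hand. This is a sketch only: every value below is a placeholder, not a working credential.

```shell
# Sketch: generate peer-pods-secret.yaml from shell variables.
# All values below are placeholders for illustration only.
set -eu
LIBVIRT_GATEWAY_URI='qemu+ssh://root@192.168.122.1/system?no_verify=1'
RH_OFFLINE_TOKEN='<rh_offline_token>'
HOST_KEY_CERT='<host_key_crt_value>'

# Expand the variables into the Secret manifest.
cat > peer-pods-secret.yaml <<EOF
apiVersion: v1
kind: Secret
metadata:
  name: peer-pods-secret
  namespace: openshift-sandboxed-containers-operator
type: Opaque
stringData:
  CLOUD_PROVIDER: "libvirt"
  LIBVIRT_URI: "${LIBVIRT_GATEWAY_URI}"
  REDHAT_OFFLINE_TOKEN: "${RH_OFFLINE_TOKEN}"
  HOST_KEY_CERTS: "${HOST_KEY_CERT}"
EOF
echo "wrote peer-pods-secret.yaml"
```

Templating the manifest this way keeps real tokens out of files you might commit to version control.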
3.4. Enabling the confidential containers feature gate
You enable the confidential containers feature gate by creating the osc-feature-gates config map.
Procedure
Create a cc-feature-gate.yaml manifest file:

apiVersion: v1
kind: ConfigMap
metadata:
  name: osc-feature-gates
  namespace: openshift-sandboxed-containers-operator
data:
  confidential: "true"

Create the osc-feature-gates config map by running the following command:

$ oc create -f cc-feature-gate.yaml
3.5. Configuring the pull secret for peer pods
To pull pod VM images from a private registry, you must configure the pull secret for peer pods.
Then, you can link the pull secret to the default service account or you can specify the pull secret in the peer pod manifest.
Procedure
Set the NS variable to the namespace where you deploy your peer pods:

$ NS=<namespace>

Copy the pull secret to the peer pod namespace:

$ oc get secret pull-secret -n openshift-config -o yaml \
  | sed "s/namespace: openshift-config/namespace: ${NS}/" \
  | oc apply -n "${NS}" -f -

You can use the cluster pull secret, as in this example, or a custom pull secret.

Optional: Link the pull secret to the default service account:

$ oc secrets link default pull-secret --for=pull -n ${NS}

Alternatively, add the pull secret to the peer pod manifest:

apiVersion: v1
kind: Pod
spec:
  containers:
  - name: <container_name>
    image: <image_name>
  imagePullSecrets:
  - name: pull-secret
# ...
3.6. Creating initdata
You create initdata to securely initialize a peer pod with sensitive or workload-specific data at runtime, thus avoiding the need to embed this data in a virtual machine image. This approach provides additional security by reducing the risk of exposure of confidential information and eliminates the need for custom image builds.
In a production environment, you must create initdata to override the default permissive Kata agent policy.
You can specify initdata in the peer pods config map, for global configuration, or in a peer pod manifest, for a specific pod. The initdata value in a peer pod manifest overrides the value set in the peer pods config map.
You must delete the kbs_cert setting if you configure insecure_http = true in the kbs-config config map for Red Hat build of Trustee.
Procedure
Obtain the Red Hat build of Trustee IP address by running the following command:

$ oc get node $(oc get pod -n trustee-operator-system \
  -o jsonpath='{.items[0].spec.nodeName}') \
  -o jsonpath='{.status.addresses[?(@.type=="InternalIP")].address}'

Example output:

192.168.122.22

Obtain the port by running the following command:

$ oc get svc kbs-service -n trustee-operator-system

Example output:

NAME          TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
kbs-service   NodePort   172.30.116.11   <none>        8080:32178/TCP   12d

Create the initdata.toml file:

algorithm = "sha384"
version = "0.1.0"

[data]
"aa.toml" = '''
[token_configs]
[token_configs.coco_as]
url = '<trustee_url>'

[token_configs.kbs]
url = '<trustee_url>'
cert = """
-----BEGIN CERTIFICATE-----
<kbs_certificate>
-----END CERTIFICATE-----
"""
'''

"cdh.toml" = '''
socket = 'unix:///run/confidential-containers/cdh.sock'
credentials = []

[kbc]
name = 'cc_kbc'
url = '<trustee_url>'
kbs_cert = """
-----BEGIN CERTIFICATE-----
<kbs_certificate>
-----END CERTIFICATE-----
"""
'''

"policy.rego" = '''
package agent_policy

default AddARPNeighborsRequest := true
default AddSwapRequest := true
default CloseStdinRequest := true
default CopyFileRequest := true
default CreateContainerRequest := true
default CreateSandboxRequest := true
default DestroySandboxRequest := true
default ExecProcessRequest := true
default GetMetricsRequest := true
default GetOOMEventRequest := true
default GuestDetailsRequest := true
default ListInterfacesRequest := true
default ListRoutesRequest := true
default MemHotplugByProbeRequest := true
default OnlineCPUMemRequest := true
default PauseContainerRequest := true
default PullImageRequest := true
default ReadStreamRequest := true
default RemoveContainerRequest := true
default RemoveStaleVirtiofsShareMountsRequest := true
default ReseedRandomDevRequest := true
default ResumeContainerRequest := true
default SetGuestDateTimeRequest := true
default SetPolicyRequest := true
default SignalProcessRequest := true
default StartContainerRequest := true
default StartTracingRequest := true
default StatsContainerRequest := true
default StopTracingRequest := true
default TtyWinResizeRequest := true
default UpdateContainerRequest := true
default UpdateEphemeralMountsRequest := true
default UpdateInterfaceRequest := true
default UpdateRoutesRequest := true
default WaitProcessRequest := true
default WriteStreamRequest := true
'''

- url: Specify the Red Hat build of Trustee IP address and the port, for example, https://192.168.122.22:32178.
- <kbs_certificate>: Specify the Base64-encoded TLS certificate for the attestation agent.
- kbs_cert: Delete the kbs_cert setting if you configure insecure_http = true in the kbs-config config map for Red Hat build of Trustee.
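The policy.rego entry above reproduces the permissive default, where every request is allowed. For production use, change at least the ExecProcessRequest line to false while keeping the remaining defaults in place, because a request with no matching rule is denied. A minimal sketch of the changed lines follows; the ReadStreamRequest change is an additional, optional hardening choice assumed here, not a stated requirement:

```rego
# Deny interactive access, so `oc exec` cannot reach the confidential pod.
default ExecProcessRequest := false

# Optional extra hardening (assumption): also block reading container
# output streams from the host.
default ReadStreamRequest := false
```

Keep all other default rules from the full policy above so that ordinary pod lifecycle operations continue to work.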
Convert the initdata.toml file to a Base64-encoded string in gzip format in a text file by running the following command:

$ cat initdata.toml | gzip | base64 -w0 > initdata.txt

Record this string for the peer pods config map or a peer pod manifest.

Calculate the SHA-256 hash of the initdata.toml file and assign its value to the hash variable by running the following command:

$ hash=$(sha256sum initdata.toml | cut -d' ' -f1)

Assign 32 bytes of 0s to the initial_pcr variable by running the following command:

$ initial_pcr=0000000000000000000000000000000000000000000000000000000000000000

Calculate the SHA-256 hash of hash and initial_pcr and assign its value to the PCR8_HASH variable by running the following command:

$ PCR8_HASH=$(echo -n "$initial_pcr$hash" | xxd -r -p | sha256sum | cut -d' ' -f1) && echo $PCR8_HASH

Record the PCR8_HASH value for the RVPS config map.
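The encoding and measurement steps above can be combined into one script. This is an illustrative sketch: the initdata.toml content is a two-line stand-in for your real file, and it assumes the same tools used in the commands above (gzip, base64, sha256sum, xxd) are available:

```shell
# Sketch: encode initdata.toml and compute the PCR8 reference value.
# The file content below is a minimal stand-in for illustration.
set -eu
printf 'algorithm = "sha384"\nversion = "0.1.0"\n' > initdata.toml

# Gzip and Base64-encode the file for the INITDATA key or pod annotation.
initdata_string=$(gzip -c < initdata.toml | base64 -w0)

# Verify the string round-trips before pasting it into a manifest.
echo "$initdata_string" | base64 -d | gunzip | cmp - initdata.toml

# PCR8 = SHA-256 over (32 zero bytes || SHA-256(initdata.toml)), hex-decoded.
hash=$(sha256sum initdata.toml | cut -d' ' -f1)
initial_pcr=0000000000000000000000000000000000000000000000000000000000000000
PCR8_HASH=$(echo -n "$initial_pcr$hash" | xxd -r -p | sha256sum | cut -d' ' -f1)
echo "$PCR8_HASH"
```

The round-trip check catches truncated or corrupted strings early, before a peer pod fails to start with an unhelpful error.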
3.7. Creating the peer pods config map
You must create the peer pods config map.
Optional: Add initdata to the peer pods config map to create a default configuration for all peer pods.
Procedure
Create a peer-pods-cm.yaml manifest file according to the following example:

apiVersion: v1
kind: ConfigMap
metadata:
  name: peer-pods-cm
  namespace: openshift-sandboxed-containers-operator
data:
  CLOUD_PROVIDER: "libvirt"
  LIBVIRT_POOL: "<libvirt_pool>"
  LIBVIRT_VOL_NAME: "<libvirt_volume>"
  LIBVIRT_DIR_NAME: "/var/lib/libvirt/images/<directory_name>"
  LIBVIRT_NET: "default"
  PEERPODS_LIMIT_PER_NODE: "10"
  ROOT_VOLUME_SIZE: "6"
  DISABLECVM: "false"
  INITDATA: "<initdata_string>"

- LIBVIRT_POOL: If you have manually configured the libvirt pool, use the same name as in your KVM host configuration.
- LIBVIRT_VOL_NAME: If you have manually configured the libvirt volume, use the same name as in your KVM host configuration.
- LIBVIRT_DIR_NAME: Specify the libvirt directory for storing virtual machine disk images, such as .qcow2 or .raw files. To ensure that libvirt has read and write access permissions, use a subdirectory of the libvirt storage directory. The default is /var/lib/libvirt/images/.
- LIBVIRT_NET: Specify a libvirt network if you do not want to use the default network.
- PEERPODS_LIMIT_PER_NODE: You can increase this value to run more peer pods on a node. The default value is 10.
- ROOT_VOLUME_SIZE: Specify the root volume size in gigabytes for the pod VM. You can increase this value for pods with larger container images. The default and minimum size is 6 GB.
- INITDATA: Specify the initdata string to create a default configuration for all peer pods. If you add initdata to a peer pod manifest, that setting overrides this global configuration.

Create the config map by running the following command:

$ oc create -f peer-pods-cm.yaml
3.8. Applying initdata to a pod
You can override the global INITDATA setting you applied in the peer pods config map by applying customized initdata to a specific pod for special use cases, such as development and testing with a relaxed policy, or when using different Red Hat build of Trustee configurations. You can customize initdata by adding an annotation to the workload pod YAML.
Prerequisite
- You have created an initdata string.
Procedure
Add the initdata string to the pod manifest:

apiVersion: v1
kind: Pod
metadata:
  name: ocp-cc-pod
  labels:
    app: ocp-cc-pod
  annotations:
    io.katacontainers.config.runtime.cc_init_data: <initdata_string>
spec:
  runtimeClassName: kata-remote
  containers:
  - name: <container_name>
    image: registry.access.redhat.com/ubi9/ubi:latest
    command:
    - sleep
    - "36000"
    securityContext:
      privileged: false
      seccompProfile:
        type: RuntimeDefault

Create the pod by running the following command:

$ oc create -f my-pod.yaml
3.9. Selecting a custom peer pod VM image
You can select a custom peer pod virtual machine (VM) image, tailored to your workload requirements, by adding an annotation to the pod manifest. The custom image overrides the default image specified in the peer pods config map.
You create a new libvirt volume in your libvirt pool and upload the custom peer pod VM image to the new volume. Then, you update the pod manifest to use the custom peer pod VM image.
Procedure
Set the LIBVIRT_POOL variable by running the following command:

$ export LIBVIRT_POOL=<libvirt_pool>

Set the LIBVIRT_VOL_NAME variable to a new libvirt volume by running the following command:

$ export LIBVIRT_VOL_NAME=<new_libvirt_volume>

Create a libvirt volume for the pool by running the following command:

$ virsh -c qemu:///system \
  vol-create-as --pool $LIBVIRT_POOL \
  --name $LIBVIRT_VOL_NAME \
  --capacity 20G \
  --allocation 2G \
  --prealloc-metadata \
  --format qcow2

Upload the custom peer pod VM image to the new libvirt volume:

$ virsh -c qemu:///system vol-upload \
  --vol $LIBVIRT_VOL_NAME <custom_podvm_image.qcow2> \
  --pool $LIBVIRT_POOL --sparse

Create a my-pod-manifest.yaml file according to the following example:

apiVersion: v1
kind: Pod
metadata:
  name: my-pod-manifest
  annotations:
    io.katacontainers.config.hypervisor.image: "<new_libvirt_volume>"
spec:
  runtimeClassName: kata-remote
  containers:
  - name: <example_container>
    image: registry.access.redhat.com/ubi9/ubi:9.3
    command: ["sleep", "36000"]

Create the pod by running the following command:

$ oc create -f my-pod-manifest.yaml
3.10. Creating the KataConfig custom resource
You must create the KataConfig custom resource (CR) to install kata-remote as a runtime class on your worker nodes.
OpenShift sandboxed containers installs kata-remote as a secondary, optional runtime on the cluster and not as the primary runtime.
Creating the KataConfig CR automatically reboots the worker nodes. The reboot can take from 10 to more than 60 minutes. The following factors can increase the reboot time:
- A large OpenShift Container Platform deployment with a greater number of worker nodes.
- Activation of the BIOS and Diagnostics utility.
- Deployment on a hard disk drive rather than an SSD.
- Deployment on physical nodes such as bare metal, rather than on virtual nodes.
- A slow CPU and network.
Procedure
Create an example-kataconfig.yaml manifest file according to the following example:

apiVersion: kataconfiguration.openshift.io/v1
kind: KataConfig
metadata:
  name: example-kataconfig
spec:
  enablePeerPods: true
  logLevel: info
#  kataConfigPoolSelector:
#    matchLabels:
#      <label_key>: '<label_value>'

Optional: If you have applied node labels to install kata-remote on specific nodes, uncomment the selector and specify the key and value, for example, cc: 'true'.

Create the KataConfig CR by running the following command:

$ oc create -f example-kataconfig.yaml

The new KataConfig CR is created and installs kata-remote as a runtime class on the worker nodes.

Wait for the kata-remote installation to complete and the worker nodes to reboot before verifying the installation.

Monitor the installation progress by running the following command:

$ watch "oc describe kataconfig | sed -n /^Status:/,/^Events/p"

When the status of all workers under kataNodes is installed, and the condition InProgress is False without a specified reason, kata-remote is installed on the cluster.

Verify that you have built the peer pod image and uploaded it to the libvirt volume by running the following command:

$ oc describe configmap peer-pods-cm -n openshift-sandboxed-containers-operator

Example output:

Name:         peer-pods-cm
Namespace:    openshift-sandboxed-containers-operator
Labels:       <none>
Annotations:  <none>

Data
====
CLOUD_PROVIDER:    libvirt
DISABLECVM:        false
LIBVIRT_IMAGE_ID:  fa-pp-vol

BinaryData
====

Events:  <none>

Monitor the kata-oc machine config pool progress to ensure that it reaches the UPDATED state, when UPDATEDMACHINECOUNT equals MACHINECOUNT, by running the following command:

$ watch oc get mcp/kata-oc

Verify the daemon set by running the following command:

$ oc get -n openshift-sandboxed-containers-operator ds/osc-caa-ds

Verify the runtime classes by running the following command:

$ oc get runtimeclass

Example output:

NAME          HANDLER       AGE
kata-remote   kata-remote   152m
3.11. Verifying attestation
You can verify the attestation process by creating a BusyBox pod. The pod image deploys the confidential workload where you can retrieve the key.
This procedure is an example to verify that attestation is working. Do not write sensitive data to standard I/O, because the data can be captured by using a memory dump. Only data written to memory is encrypted.
Procedure
Create a test-pod.yaml manifest file:

apiVersion: v1
kind: Pod
metadata:
  name: busybox
  namespace: default
  annotations:
    io.katacontainers.config.runtime.cc_init_data: <initdata_string>
  labels:
    run: busybox
spec:
  runtimeClassName: kata-remote
  restartPolicy: Never
  containers:
  - name: busybox
    image: quay.io/prometheus/busybox:latest
    imagePullPolicy: Always
    command:
    - "sleep"
    - "3600"

Create the pod by running the following command:

$ oc create -f test-pod.yaml

Log in to the pod by running the following command:

$ oc exec -it busybox -n default -- /bin/sh

Fetch the pod secret by running the following command:

$ wget http://127.0.0.1:8006/cdh/resource/default/kbsres1/key1

Example output:

Connecting to 127.0.0.1:8006 (127.0.0.1:8006)
saving to 'key1'
key1                 100% |*******************************************|     8  0:00:00 ETA
'key1' saved

Display the key1 value by running the following command:

$ cat key1

Example output:

res1val1
/ #