Deploying confidential containers
Protecting containers and data by leveraging trusted execution environments
Abstract
Preface
Providing feedback on Red Hat documentation
You can provide feedback or report an error by submitting the Create Issue form in Jira:
- Ensure that you are logged in to Jira. If you do not have a Jira account, you must create a Red Hat Jira account.
- Launch the Create Issue form.
- Complete the Summary, Description, and Reporter fields.
- In the Description field, include the documentation URL, chapter or section number, and a detailed description of the issue.
- Click Create.
Chapter 1. About confidential containers
Confidential containers provide a confidential computing environment that protects containers and data by leveraging Trusted Execution Environments.
For more information, see Exploring the OpenShift confidential containers solution.
1.1. Compatibility with OpenShift Container Platform
The required functionality for Red Hat OpenShift Container Platform is supported by two main components:
- Kata runtime
- The Kata runtime is included with Red Hat Enterprise Linux CoreOS (RHCOS) and receives updates with every OpenShift Container Platform release. When enabling peer pods with the Kata runtime, the OpenShift sandboxed containers Operator requires external network connectivity to pull the necessary image components and helper utilities to create the pod virtual machine (VM) image.
- OpenShift sandboxed containers Operator
- The OpenShift sandboxed containers Operator is a Rolling Stream Operator, which means the latest version is the only supported version. It works with all currently supported versions of OpenShift Container Platform.
The Operator depends on the features that come with the RHCOS host and the environment it runs in.
You must install RHCOS on the worker nodes. Red Hat Enterprise Linux (RHEL) nodes are not supported.
The following compatibility matrix for OpenShift sandboxed containers and OpenShift Container Platform releases identifies compatible features and environments.
| Architecture | OpenShift Container Platform version |
|---|---|
| x86_64 | 4.16 or later |
| s390x | 4.16 or later |
There are two ways to deploy the Kata containers runtime:
- Bare metal
- Peer pods
You can deploy OpenShift sandboxed containers by using peer pods on Microsoft Azure Cloud Computing Services, AWS Cloud Computing Services, or Google Cloud. With the release of OpenShift sandboxed containers 1.10, the OpenShift sandboxed containers Operator requires OpenShift Container Platform version 4.16 or later.
| Feature | Deployment method | OpenShift Container Platform 4.16 | OpenShift Container Platform 4.17 | OpenShift Container Platform 4.18 | OpenShift Container Platform 4.19 |
|---|---|---|---|---|---|
| Confidential containers | Bare metal | N/A | N/A | N/A | N/A |
| Confidential containers | Azure peer pods | GA | GA | GA | GA |
| GPU support | Bare metal | N/A | N/A | N/A | N/A |
| GPU support | IBM Z | N/A | N/A | N/A | N/A |
| GPU support | Azure | Developer Preview | Developer Preview | Developer Preview | Developer Preview |
| GPU support | AWS | Developer Preview | Developer Preview | Developer Preview | Developer Preview |
| GPU support | Google Cloud | Developer Preview | Developer Preview | Developer Preview | Developer Preview |
GPU support for peer pods is a Developer Preview feature only. Developer Preview features are not supported by Red Hat in any way and are not functionally complete or production-ready. Do not use Developer Preview features for production or business-critical workloads. Developer Preview features provide early access to upcoming product features in advance of their possible inclusion in a Red Hat product offering, enabling customers to test functionality and provide feedback during the development process. These features might not have any documentation, are subject to change or removal at any time, and testing is limited. Red Hat might provide ways to submit feedback on Developer Preview features without an associated SLA.
| Platform | GPU | Confidential containers |
|---|---|---|
| Azure | Developer Preview | GA |
| AWS | Developer Preview | N/A |
| Google Cloud | Developer Preview | N/A |
1.2. Peer pod resource requirements
You must ensure that your cluster has sufficient resources.
Peer pod virtual machines (VMs) require resources in two locations:
- The worker node. The worker node stores metadata, Kata shim resources (containerd-shim-kata-v2), remote-hypervisor resources (cloud-api-adaptor), and the tunnel setup between the worker nodes and the peer pod VM.
- The cloud instance. This is the actual peer pod VM running in the cloud.
The CPU and memory resources used in the Kubernetes worker node are handled by the pod overhead included in the RuntimeClass (kata-remote) definition used for creating peer pods.
The total number of peer pod VMs running in the cloud is defined as Kubernetes Node extended resources. This limit is per node and is set by the PEERPODS_LIMIT_PER_NODE attribute in the peer-pods-cm config map.
The extended resource is named kata.peerpods.io/vm, and enables the Kubernetes scheduler to handle capacity tracking and accounting.
You can edit the limit per node based on the requirements for your environment after you install the OpenShift sandboxed containers Operator.
A mutating webhook adds the extended resource kata.peerpods.io/vm to the pod specification. It also removes any resource-specific entries from the pod specification, if present. This enables the Kubernetes scheduler to account for these extended resources, ensuring the peer pod is only scheduled when resources are available.
The mutating webhook modifies a Kubernetes pod as follows:
- The mutating webhook checks the pod for the expected RuntimeClassName value, specified in the TARGET_RUNTIME_CLASS environment variable. If the value in the pod specification does not match the value in TARGET_RUNTIME_CLASS, the webhook exits without modifying the pod.
- If the RuntimeClassName values match, the webhook makes the following changes to the pod spec:
  - The webhook removes every resource specification from the resources field of all containers and init containers in the pod.
  - The webhook adds the extended resource (kata.peerpods.io/vm) to the spec by modifying the resources field of the first container in the pod. The extended resource kata.peerpods.io/vm is used by the Kubernetes scheduler for accounting purposes.
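For illustration, a peer pod specification after mutation might look like the following sketch. The pod name, container name, and image are placeholders; only the kata.peerpods.io/vm extended resource reflects the webhook behavior described above.

   apiVersion: v1
   kind: Pod
   metadata:
     name: example-peer-pod       # placeholder
   spec:
     runtimeClassName: kata-remote
     containers:
     - name: app                  # placeholder
       image: registry.access.redhat.com/ubi9/ubi   # illustrative image
       resources:
         # CPU and memory entries are removed by the webhook; capacity is
         # tracked through the extended resource instead.
         requests:
           kata.peerpods.io/vm: "1"
         limits:
           kata.peerpods.io/vm: "1"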
The mutating webhook excludes specific system namespaces in OpenShift Container Platform from mutation. If a peer pod is created in those system namespaces, then resource accounting using Kubernetes extended resources does not work unless the pod spec includes the extended resource.
As a best practice, define a cluster-wide policy to only allow peer pod creation in specific namespaces.
1.3. About initdata
The initdata specification provides a flexible way to initialize a peer pod with sensitive or workload-specific data at runtime, avoiding the need to embed such data in the virtual machine (VM) image. This approach enhances security by reducing the exposure of confidential information and improves flexibility by eliminating custom image builds. For example, initdata can include three configuration settings:
- An X.509 certificate for secure communication.
- A cryptographic key for authentication.
- An optional Kata Agent policy.rego file to enforce runtime behavior when overriding the default permissive Kata Agent policy.
The initdata content configures the following components:
- Attestation Agent (AA), which verifies the trustworthiness of the peer pod by sending evidence for attestation.
- Confidential Data Hub (CDH), which manages secrets and secure data access within the peer pod VM.
- Kata Agent, which enforces runtime policies and manages the lifecycle of the containers inside the pod VM.
You create an initdata.toml file and convert it to a Base64-encoded, gzip-format string. You apply the initdata string to your workload by one of the following methods:
- Global configuration: Add the initdata string as the value of the INITDATA key in the peer pods config map to create a default configuration for all peer pods.
- Pod configuration: Add the initdata string as an annotation to a pod manifest, allowing customization for individual workloads.

Note: The initdata annotation in the pod manifest overrides the global INITDATA value in the peer pods config map for that specific pod. The Kata runtime handles this precedence automatically at pod creation time.
Chapter 2. Deploying confidential containers on Azure
You deploy confidential containers on a Red Hat OpenShift Container Platform cluster on Microsoft Azure Cloud Computing Services for your workloads.
You deploy confidential containers by performing the following steps:
- Configure outbound connections.
- Install the OpenShift sandboxed containers Operator.
- Enable the confidential containers feature gate.
- Optional: If you pull a peer pod VM image from a private registry such as registry.access.redhat.com, configure the pull secret for peer pods.
- Create initdata to initialize a peer pod with sensitive or workload-specific data at runtime. See About initdata for details.

  Important: Do not use the default permissive Kata Agent policy in a production environment. You must configure a restrictive policy, preferably by creating initdata. As a minimum requirement, you must disable ExecProcessRequest to prevent a cluster administrator from accessing sensitive data by running the oc exec command on a confidential containers pod.
- Create the peer pods config map. You can add initdata to the config map to create a default global configuration for your peer pods.
- Optional: Add initdata to a pod manifest to override the global initdata configuration you set in the peer pods config map.
- Create the KataConfig CR.
- Verify the attestation process.
2.1. Prerequisites
- You have installed the latest version of Red Hat OpenShift Container Platform on the cluster where you are running your confidential containers workload.
- You have deployed Red Hat build of Trustee on an OpenShift Container Platform cluster in a trusted environment. For more information, see Deploying Red Hat build of Trustee.
- You have enabled ports 15150 and 9000 for communication in the subnet used for worker nodes and the pod virtual machine (VM). The ports enable communication between the Kata shim running on the worker node and the Kata agent running on the pod VM.
- You have configured outbound connectivity for the pod VM subnet.
2.2. Configuring outbound connections
To enable peer pods to communicate with external networks, such as the public internet, you must configure outbound connectivity for the pod virtual machine (VM) subnet. This involves setting up a NAT gateway and, optionally, defining how the subnet integrates with your cluster’s virtual network (VNet) in Azure.
- Peer pods and subnets
- Peer pods operate in a dedicated Azure subnet that requires explicit configuration for outbound access. This subnet can either be the default worker subnet used by OpenShift Container Platform nodes or a separate, custom subnet created specifically for peer pods.
- VNet peering
- When using a separate subnet, VNet peering connects the peer pod VNet to the cluster’s VNet, ensuring internal communication while maintaining isolation. This requires non-overlapping CIDR ranges between the VNets.
You can configure outbound connectivity in two ways:
- Default worker subnet: Modify the existing worker subnet to include a NAT gateway. This is simpler and reuses cluster resources, but it offers less isolation.
- Peer pod VNet: Set up a dedicated VNet and subnet for peer pods, attach a NAT gateway, and peer it with the cluster VNet. This provides greater isolation and flexibility at the cost of additional complexity.
2.2.1. Configuring the default worker subnet for outbound connections
You can configure the default worker subnet with a NAT gateway.
Prerequisites
- The Azure CLI (az) is installed and authenticated.
- You have administrator access to the Azure resource group and the VNet.
Procedure
1. Set the AZURE_RESOURCE_GROUP environment variable by running the following command:

   $ AZURE_RESOURCE_GROUP=$(oc get infrastructure/cluster \
       -o jsonpath='{.status.platformStatus.azure.resourceGroupName}')

2. Set the AZURE_REGION environment variable by running the following command:

   $ AZURE_REGION=$(az group show --resource-group ${AZURE_RESOURCE_GROUP} \
       --query "{Location:location}" --output tsv) && \
       echo "AZURE_REGION: \"$AZURE_REGION\""

3. Set the AZURE_VNET_NAME environment variable by running the following command:

   $ AZURE_VNET_NAME=$(az network vnet list \
       -g "${AZURE_RESOURCE_GROUP}" --query '[].name' -o tsv)

4. Set the AZURE_SUBNET_ID environment variable by running the following command:

   $ AZURE_SUBNET_ID=$(az network vnet subnet list \
       --resource-group "${AZURE_RESOURCE_GROUP}" \
       --vnet-name "${AZURE_VNET_NAME}" --query "[].{Id:id} \
       | [? contains(Id, 'worker')]" --output tsv)

5. Set the NAT gateway environment variables for the peer pod subnet by running the following commands:

   $ export PEERPOD_NAT_GW=peerpod-nat-gw
   $ export PEERPOD_NAT_GW_IP=peerpod-nat-gw-ip

6. Create a public IP address for the NAT gateway by running the following command:

   $ az network public-ip create -g "${AZURE_RESOURCE_GROUP}" \
       -n "${PEERPOD_NAT_GW_IP}" -l "${AZURE_REGION}" --sku Standard

7. Create the NAT gateway and associate it with the public IP address by running the following command:

   $ az network nat gateway create -g "${AZURE_RESOURCE_GROUP}" \
       -l "${AZURE_REGION}" --public-ip-addresses "${PEERPOD_NAT_GW_IP}" \
       -n "${PEERPOD_NAT_GW}"

8. Update the VNet subnet to use the NAT gateway by running the following command:

   $ az network vnet subnet update --nat-gateway "${PEERPOD_NAT_GW}" \
       --ids "${AZURE_SUBNET_ID}"
Verification
Confirm the NAT gateway is attached to the VNet subnet by running the following command:

   $ az network vnet subnet show --ids "${AZURE_SUBNET_ID}" \
       --query "natGateway.id" -o tsv

The output contains the NAT gateway resource ID. If no NAT gateway is attached, the output is empty.

Example output:

   /subscriptions/12345678-1234-1234-1234-1234567890ab/resourceGroups/myResourceGroup/providers/Microsoft.Network/natGateways/myNatGateway
2.2.2. Creating a peer pod VNet for outbound connections
To enable public internet access, you can create a dedicated virtual network (VNet) for peer pods, attach a network address translation (NAT) gateway, create a subnet, and enable VNet peering with non-overlapping address spaces.
Prerequisites
- The Azure CLI (az) is installed.
- You have signed in to Azure. See Authenticate to Azure using Azure CLI.
- You have administrator access to the Azure resource group and VNet hosting the cluster.
- You have verified the cluster VNet classless inter-domain routing (CIDR) address. The default value is 10.0.0.0/14. If you overrode the default value, you have ensured that you chose a non-overlapping CIDR address for the peer pod VNet, for example, 192.168.0.0/16.
Procedure
1. Set the environment variables for the peer pod network:
   a. Set the peer pod VNet environment variables by running the following commands:

      $ export PEERPOD_VNET_NAME="${PEERPOD_VNET_NAME:-peerpod-vnet}"
      $ export PEERPOD_VNET_CIDR="${PEERPOD_VNET_CIDR:-192.168.0.0/16}"

   b. Set the peer pod subnet environment variables by running the following commands:

      $ export PEERPOD_SUBNET_NAME="${PEERPOD_SUBNET_NAME:-peerpod-subnet}"
      $ export PEERPOD_SUBNET_CIDR="${PEERPOD_SUBNET_CIDR:-192.168.0.0/16}"

2. Set the environment variables for Azure:

   $ AZURE_RESOURCE_GROUP=$(oc get infrastructure/cluster \
       -o jsonpath='{.status.platformStatus.azure.resourceGroupName}')
   $ AZURE_REGION=$(az group show --resource-group ${AZURE_RESOURCE_GROUP} \
       --query "{Location:location}" --output tsv) && \
       echo "AZURE_REGION: \"$AZURE_REGION\""
   $ AZURE_VNET_NAME=$(az network vnet list \
       -g "${AZURE_RESOURCE_GROUP}" --query '[].name' -o tsv)

3. Set the peer pod NAT gateway environment variables by running the following commands:

   $ export PEERPOD_NAT_GW="${PEERPOD_NAT_GW:-peerpod-nat-gw}"
   $ export PEERPOD_NAT_GW_IP="${PEERPOD_NAT_PUBLIC_IP:-peerpod-nat-gw-ip}"

4. Configure the VNet:
   a. Create the peer pod VNet by running the following command:

      $ az network vnet create --resource-group "${AZURE_RESOURCE_GROUP}" \
          --name "${PEERPOD_VNET_NAME}" \
          --address-prefixes "${PEERPOD_VNET_CIDR}"

   b. Create a public IP address for the peer pod VNet by running the following command:

      $ az network public-ip create -g "${AZURE_RESOURCE_GROUP}" \
          -n "${PEERPOD_NAT_GW_IP}" -l "${AZURE_REGION}"

   c. Create a NAT gateway for the peer pod VNet by running the following command:

      $ az network nat gateway create -g "${AZURE_RESOURCE_GROUP}" \
          -l "${AZURE_REGION}" \
          --public-ip-addresses "${PEERPOD_NAT_GW_IP}" \
          -n "${PEERPOD_NAT_GW}"

   d. Create a subnet in the peer pod VNet and attach the NAT gateway by running the following command:
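      The subnet creation command is not shown on this page. The following sketch, based on the environment variables set earlier in this procedure, shows one way to create the subnet and attach the NAT gateway; verify the flags against your Azure CLI version:

      $ az network vnet subnet create -g "${AZURE_RESOURCE_GROUP}" \
          --vnet-name "${PEERPOD_VNET_NAME}" \
          --name "${PEERPOD_SUBNET_NAME}" \
          --address-prefixes "${PEERPOD_SUBNET_CIDR}" \
          --nat-gateway "${PEERPOD_NAT_GW}"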
5. Configure the virtual network peering connection:
   a. Create the peering connection by running the following command:

      $ az network vnet peering create -g "${AZURE_RESOURCE_GROUP}" \
          -n peerpod-azure-vnet-to-peerpod-vnet \
          --vnet-name "${AZURE_VNET_NAME}" \
          --remote-vnet "${PEERPOD_VNET_NAME}" --allow-vnet-access \
          --allow-forwarded-traffic

   b. Sync the peering connection by running the following command:

      $ az network vnet peering sync -g "${AZURE_RESOURCE_GROUP}" \
          -n peerpod-azure-vnet-to-peerpod-vnet \
          --vnet-name "${AZURE_VNET_NAME}"

   c. Complete the peering connection by running the following command:

      $ az network vnet peering create -g "${AZURE_RESOURCE_GROUP}" \
          -n peerpod-peerpod-vnet-to-azure-vnet \
          --vnet-name "${PEERPOD_VNET_NAME}" \
          --remote-vnet "${AZURE_VNET_NAME}" --allow-vnet-access \
          --allow-forwarded-traffic
Verification
1. Check the peering connection status from the cluster VNet by running the following command:

   $ az network vnet peering show -g "${AZURE_RESOURCE_GROUP}" \
       -n peerpod-azure-vnet-to-peerpod-vnet \
       --vnet-name "${AZURE_VNET_NAME}" \
       --query "peeringState" -o tsv

   The command should return Connected.

2. Verify that the NAT gateway is attached to the peer pod subnet by running the following command:

   $ az network vnet subnet show --resource-group "${AZURE_RESOURCE_GROUP}" \
       --vnet-name "${PEERPOD_VNET_NAME}" --name "${PEERPOD_SUBNET_NAME}" \
       --query "natGateway.id" -o tsv
2.3. Installing the OpenShift sandboxed containers Operator
You install the OpenShift sandboxed containers Operator by using the command line interface (CLI).
Prerequisites
- You have access to the cluster as a user with the cluster-admin role.
Procedure
1. Create an osc-namespace.yaml manifest file:

   apiVersion: v1
   kind: Namespace
   metadata:
     name: openshift-sandboxed-containers-operator

2. Create the namespace by running the following command:
   $ oc create -f osc-namespace.yaml

3. Create an osc-operatorgroup.yaml manifest file:
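   The manifest content is not shown on this page. A minimal OperatorGroup for this namespace typically looks like the following sketch; the metadata.name value is an assumption:

   apiVersion: operators.coreos.com/v1
   kind: OperatorGroup
   metadata:
     name: sandboxed-containers-operator-group   # assumed name
     namespace: openshift-sandboxed-containers-operator
   spec:
     targetNamespaces:
     - openshift-sandboxed-containers-operator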
4. Create the operator group by running the following command:

   $ oc create -f osc-operatorgroup.yaml

5. Create an osc-subscription.yaml manifest file:
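   The manifest content is not shown on this page. A Subscription for the Operator typically looks like the following sketch; the channel and source values are assumptions and should be checked against OperatorHub:

   apiVersion: operators.coreos.com/v1alpha1
   kind: Subscription
   metadata:
     name: sandboxed-containers-operator
     namespace: openshift-sandboxed-containers-operator
   spec:
     channel: stable                       # assumed channel
     installPlanApproval: Automatic
     name: sandboxed-containers-operator   # package name in OperatorHub
     source: redhat-operators
     sourceNamespace: openshift-marketplace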
6. Create the subscription by running the following command:

   $ oc create -f osc-subscription.yaml

7. Verify that the Operator is correctly installed by running the following command:

   $ oc get csv -n openshift-sandboxed-containers-operator

   This command can take several minutes to complete.
8. Watch the process by running the following command:

   $ watch oc get csv -n openshift-sandboxed-containers-operator

   Example output:

   NAME                             DISPLAY                                   VERSION   REPLACES   PHASE
   openshift-sandboxed-containers   openshift-sandboxed-containers-operator   1.10.3    1.9.0      Succeeded
2.4. Enabling the confidential containers feature gate
You enable the confidential containers feature gate by creating the osc-feature-gates config map.
Procedure
1. Create a cc-feature-gate.yaml manifest file:
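   The manifest content is not shown on this page. Based on the osc-feature-gates config map described below, a sketch might look like the following; the data key is an assumption:

   apiVersion: v1
   kind: ConfigMap
   metadata:
     name: osc-feature-gates
     namespace: openshift-sandboxed-containers-operator
   data:
     confidential: "true"   # assumed key that enables the confidential containers feature gate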
2. Create the osc-feature-gates config map by running the following command:

   $ oc create -f cc-feature-gate.yaml
2.5. Creating the Azure secret
You must create the SSH key secret, which is required by the Azure virtual machine (VM) creation API. Azure only requires the SSH public key. OpenShift sandboxed containers disables SSH in VMs, so the keys have no effect in the VMs.
Procedure
1. Generate an SSH key pair by running the following command:

   $ ssh-keygen -f ./id_rsa -N ""

2. Create the Secret object by running the following command:

   $ oc create secret generic ssh-key-secret \
       -n openshift-sandboxed-containers-operator \
       --from-file=id_rsa.pub=./id_rsa.pub \
       --from-file=id_rsa=./id_rsa

3. Delete the SSH keys you created:

   $ shred --remove id_rsa.pub id_rsa
2.6. Configuring the pull secret for peer pods
To pull pod VM images from a private registry, you must configure the pull secret for peer pods.
Then, you can link the pull secret to the default service account or you can specify the pull secret in the peer pod manifest.
Procedure
1. Set the NS variable to the namespace where you deploy your peer pods:

   $ NS=<namespace>

2. Copy the pull secret to the peer pod namespace:

   $ oc get secret pull-secret -n openshift-config -o yaml \
       | sed "s/namespace: openshift-config/namespace: ${NS}/" \
       | oc apply -n "${NS}" -f -

   You can use the cluster pull secret, as in this example, or a custom pull secret.

3. Optional: Link the pull secret to the default service account:

   $ oc secrets link default pull-secret --for=pull -n ${NS}

4. Alternatively, add the pull secret to the peer pod manifest:
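   The example manifest is not shown on this page. A sketch of a peer pod manifest that references the pull secret directly follows; the pod and container names are placeholders:

   apiVersion: v1
   kind: Pod
   metadata:
     name: my-pod                  # placeholder
     namespace: <namespace>
   spec:
     runtimeClassName: kata-remote
     imagePullSecrets:
     - name: pull-secret
     containers:
     - name: app                   # placeholder
       image: <private_registry_image>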
2.7. Creating initdata
You create initdata to securely initialize a peer pod with sensitive or workload-specific data at runtime, thus avoiding the need to embed this data in a virtual machine image. This approach provides additional security by reducing the risk of exposure of confidential information and eliminates the need for custom image builds.
In a production environment, you must create initdata to override the default permissive Kata agent policy.
You can specify initdata in the peer pods config map, for global configuration, or in a peer pod manifest, for a specific pod. The initdata value in a peer pod manifest overrides the value set in the peer pods config map.
Then, you generate a Platform Configuration Register (PCR) 8 hash from the initdata.toml file for the Reference Value Provider Service (RVPS) config map for Red Hat build of Trustee.
Red Hat build of Trustee uses the RVPS to validate attestation evidence sent by confidential workloads. The RVPS contains trusted reference values, such as file hashes, that are compared to the PCR measurements included in attestation requests. These hashes are not generated by Red Hat build of Trustee.
You must delete the kbs_cert setting if you configure insecure_http = true in the kbs-config config map for Red Hat build of Trustee.
Procedure
1. Obtain the Red Hat build of Trustee URL by running the following command:

   $ TRUSTEE_URL=$(oc get route kbs-service \
       -n trustee-operator-system -o jsonpath='{.spec.host}') \
       && echo $TRUSTEE_URL

2. Create the initdata.toml file:
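   The initdata.toml content is not shown on this page. The following sketch illustrates the general shape of a file that configures the Attestation Agent (aa.toml), the Confidential Data Hub (cdh.toml), and a Kata Agent policy (policy.rego). The exact keys and the abbreviated policy body are assumptions and must be adapted to your Red Hat build of Trustee deployment:

   algorithm = "sha256"
   version = "0.1.0"

   [data]
   "aa.toml" = '''
   [token_configs]
   [token_configs.kbs]
   url = "https://<trustee_url>"
   cert = """
   -----BEGIN CERTIFICATE-----
   <kbs_certificate>
   -----END CERTIFICATE-----
   """
   '''

   "cdh.toml" = '''
   [kbc]
   name = "cc_kbc"
   url = "https://<trustee_url>"
   kbs_cert = """
   -----BEGIN CERTIFICATE-----
   <kbs_certificate>
   -----END CERTIFICATE-----
   """
   '''

   "policy.rego" = '''
   package agent_policy
   # Abbreviated example policy: allow the requests your workload needs and
   # disable ExecProcessRequest as required for production.
   default CreateContainerRequest := true
   default ExecProcessRequest := false
   default ReadStreamRequest := false
   '''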
   - URL
   - Specify the Red Hat build of Trustee URL. If you configure the Red Hat build of Trustee with insecure_http for testing purposes, use HTTP. Otherwise, use HTTPS. For production systems, avoid using insecure_http unless you configure your environment to handle TLS externally, for example, with a proxy.
   - <kbs_certificate>
   - Specify the Base64-encoded TLS certificate for the attestation agent.
   - kbs_cert
   - Delete the kbs_cert setting if you configure insecure_http = true in the kbs-config config map for Red Hat build of Trustee.
3. Convert the initdata.toml file to a Base64-encoded string in gzip format in a text file by running the following command:

   $ cat initdata.toml | gzip | base64 -w0 > initdata.txt

   Record this string for the peer pods config map or a peer pod manifest.
4. Calculate the SHA-256 hash of the initdata.toml file and assign its value to the hash variable by running the following command:

   $ hash=$(sha256sum initdata.toml | cut -d' ' -f1)

5. Assign 32 bytes of 0s to the initial_pcr variable by running the following command:

   $ initial_pcr=0000000000000000000000000000000000000000000000000000000000000000

6. Calculate the SHA-256 hash of hash and initial_pcr and assign its value to the PCR8_HASH variable by running the following command:

   $ PCR8_HASH=$(echo -n "$initial_pcr$hash" | xxd -r -p | sha256sum | cut -d' ' -f1) && echo $PCR8_HASH

7. Record the PCR8_HASH value for the RVPS config map.
2.8. Creating the peer pods config map
You must create the peer pods config map.
Optional: Add initdata to the peer pods config map to create a default configuration for all peer pods.
Procedure
1. Obtain the following values from your Azure instance:
   a. Retrieve and record the Azure resource group:

      $ AZURE_RESOURCE_GROUP=$(oc get infrastructure/cluster \
          -o jsonpath='{.status.platformStatus.azure.resourceGroupName}') \
          && echo "AZURE_RESOURCE_GROUP: \"$AZURE_RESOURCE_GROUP\""

   b. Retrieve and record the Azure VNet name:

      $ AZURE_VNET_NAME=$(az network vnet list \
          --resource-group ${AZURE_RESOURCE_GROUP} \
          --query "[].{Name:name}" --output tsv)

      This value is used to retrieve the Azure subnet ID.

   c. Retrieve and record the Azure subnet ID:

      $ AZURE_SUBNET_ID=$(az network vnet subnet list \
          --resource-group ${AZURE_RESOURCE_GROUP} --vnet-name $AZURE_VNET_NAME \
          --query "[].{Id:id} | [? contains(Id, 'worker')]" --output tsv) \
          && echo "AZURE_SUBNET_ID: \"$AZURE_SUBNET_ID\""

   d. Retrieve and record the Azure network security group (NSG) ID:

      $ AZURE_NSG_ID=$(az network nsg list --resource-group ${AZURE_RESOURCE_GROUP} \
          --query "[].{Id:id}" --output tsv) && echo "AZURE_NSG_ID: \"$AZURE_NSG_ID\""

   e. Retrieve and record the Azure region:

      $ AZURE_REGION=$(az group show --resource-group ${AZURE_RESOURCE_GROUP} \
          --query "{Location:location}" --output tsv) \
          && echo "AZURE_REGION: \"$AZURE_REGION\""
2. Create a peer-pods-cm.yaml manifest file according to the following example:
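   The example manifest is not shown on this page. The following sketch shows the general shape of the config map, using the keys described in the callouts below and the values retrieved earlier in this procedure. Treat keys that are not described in the callouts, such as CLOUD_PROVIDER, VXLAN_PORT, PROXY_TIMEOUT, and DISABLECVM, as assumptions:

   apiVersion: v1
   kind: ConfigMap
   metadata:
     name: peer-pods-cm
     namespace: openshift-sandboxed-containers-operator
   data:
     CLOUD_PROVIDER: "azure"                 # assumed key
     VXLAN_PORT: "9000"                      # assumed key
     PROXY_TIMEOUT: "5m"                     # assumed key
     DISABLECVM: "false"                     # assumed key; keeps confidential VMs enabled
     AZURE_INSTANCE_SIZE: "Standard_DC2as_v5"
     AZURE_INSTANCE_SIZES: "Standard_DC2as_v5,Standard_DC4as_v5,Standard_DC8as_v5"
     AZURE_RESOURCE_GROUP: "<azure_resource_group>"
     AZURE_REGION: "<azure_region>"
     AZURE_SUBNET_ID: "<azure_subnet_id>"
     AZURE_NSG_ID: "<azure_nsg_id>"
     AZURE_IMAGE_ID: ""
     TAGS: "<key1:value1,key2:value2>"
     PEERPODS_LIMIT_PER_NODE: "10"
     ROOT_VOLUME_SIZE: "6"
     INITDATA: "<initdata_string>"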
   - AZURE_INSTANCE_SIZE
   - Defines the default instance size that is used if the instance size is not defined in the workload object. "Standard_DC2as_v5" is for AMD SEV-SNP. If your TEE is Intel TDX, specify Standard_EC4eds_v5.
   - AZURE_IMAGE_ID
   - Leave this value empty. When you install the Operator, a Job is scheduled to download the default pod VM image from the Red Hat Ecosystem Catalog and upload it to the Azure Image Gallery within the same Azure Resource Group as the OpenShift Container Platform cluster. This image provides root disk integrity protection (dm-verity) and encrypted container storage. See Confidential VMs: The core of confidential containers for details.
   - AZURE_INSTANCE_SIZES
   - Specify the allowed instance sizes, without spaces, for creating the pod. You can define smaller instance sizes for workloads that need less memory and fewer CPUs or larger instance sizes for larger workloads.
   - TAGS
   - You can configure custom tags as key:value pairs for pod VM instances to track peer pod costs or to identify peer pods in different clusters.
   - PEERPODS_LIMIT_PER_NODE
   - You can increase this value to run more peer pods on a node. The default value is 10.
   - ROOT_VOLUME_SIZE
   - You can increase this value for pods with larger container images. Specify the root volume size in gigabytes for the pod VM. The default and minimum size is 6 GB.
   - INITDATA
   - Specify the initdata string to create a default configuration for all peer pods. If you add initdata to a peer pod manifest, that setting overrides this global configuration.
3. Create the config map by running the following command:

   $ oc create -f peer-pods-cm.yaml
2.9. Applying initdata to a pod
You can override the global INITDATA setting you applied in the peer pods config map by applying customized initdata to a specific pod for special use cases, such as development and testing with a relaxed policy, or when using different Red Hat build of Trustee configurations. You can customize initdata by adding an annotation to the workload pod YAML.
Prerequisite
- You have created an initdata string.
Procedure
1. Add the initdata string to the pod manifest:
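   The example manifest is not shown on this page. A sketch of a pod with a pod-level initdata annotation follows; the annotation key shown is an assumption based on the upstream Kata runtime and should be verified for your release:

   apiVersion: v1
   kind: Pod
   metadata:
     name: my-pod
     annotations:
       io.katacontainers.config.runtime.cc_init_data: <initdata_string>   # assumed annotation key
   spec:
     runtimeClassName: kata-remote
     containers:
     - name: app
       image: registry.access.redhat.com/ubi9/ubi   # illustrative image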
2. Create the pod by running the following command:

   $ oc create -f my-pod.yaml
2.10. Selecting a custom peer pod VM image
You can select a custom peer pod virtual machine (VM) image, tailored to your workload requirements by adding an annotation to the pod manifest. The custom image overrides the default image specified in the peer pods config map.
Prerequisites
- You have the ID of a custom pod VM image, which is compatible with your cloud provider or hypervisor.
Procedure
1. Create a my-pod-manifest.yaml file according to the following example:
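   The example manifest is not shown on this page. A sketch follows; the annotation key is an assumption based on the upstream peer pods (cloud-api-adaptor) project and should be verified for your release:

   apiVersion: v1
   kind: Pod
   metadata:
     name: my-pod
     annotations:
       io.katacontainers.config.hypervisor.image: <custom_image_id>   # assumed annotation key
   spec:
     runtimeClassName: kata-remote
     containers:
     - name: app
       image: registry.access.redhat.com/ubi9/ubi   # illustrative workload image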
2. Create the pod by running the following command:

   $ oc create -f my-pod-manifest.yaml
2.11. Creating the KataConfig custom resource
You must create the KataConfig custom resource (CR) to install kata-remote as a runtime class on your worker nodes.
OpenShift sandboxed containers installs kata-remote as a secondary, optional runtime on the cluster and not as the primary runtime.
Creating the KataConfig CR automatically reboots the worker nodes. The reboot can take from 10 to more than 60 minutes. The following factors can increase the reboot time:
- A large OpenShift Container Platform deployment with a greater number of worker nodes.
- Activation of the BIOS and Diagnostics utility.
- Deployment on a hard disk drive rather than an SSD.
- Deployment on physical nodes such as bare metal, rather than on virtual nodes.
- A slow CPU and network.
Procedure
1. Create an example-kataconfig.yaml manifest file according to the following example:
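   The example manifest is not shown on this page. A sketch of a KataConfig CR with peer pods enabled follows; the node selector corresponds to callout 1 below:

   apiVersion: kataconfiguration.openshift.io/v1
   kind: KataConfig
   metadata:
     name: example-kataconfig
   spec:
     enablePeerPods: true
     kataConfigPoolSelector:
       matchLabels:
         cc: 'true'   # 1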
   - 1
   - Optional: If you have applied node labels to install kata-remote on specific nodes, specify the key and value, for example, cc: 'true'.
2. Create the KataConfig CR by running the following command:

   $ oc create -f example-kataconfig.yaml

   The new KataConfig CR is created and installs kata-remote as a runtime class on the worker nodes. Wait for the kata-remote installation to complete and the worker nodes to reboot before verifying the installation.

3. Monitor the installation progress by running the following command:

   $ watch "oc describe kataconfig | sed -n /^Status:/,/^Events/p"

   When the status of all workers under kataNodes is installed and the condition InProgress is False without specifying a reason, kata-remote is installed on the cluster.

4. Verify the daemon set by running the following command:

   $ oc get -n openshift-sandboxed-containers-operator ds/osc-caa-ds

5. Verify the runtime classes by running the following command:

   $ oc get runtimeclass

   Example output:

   NAME          HANDLER       AGE
   kata-remote   kata-remote   152m
2.12. Verifying attestation
You can verify the attestation process by creating a test pod with a relaxed Kata agent policy and retrieving its key.
This procedure is an example to verify that attestation is working. Do not write sensitive data to standard I/O, because the data can be captured by using a memory dump. Only data written to memory is encrypted.
Procedure
1. Create a test-pod.yaml manifest file:
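   The example manifest is not shown on this page. A sketch of a test pod follows; the pod name ocp-cc-pod matches the verification commands below, while the image, command, and annotation key are assumptions:

   apiVersion: v1
   kind: Pod
   metadata:
     name: ocp-cc-pod
     labels:
       app: ocp-cc-pod
     annotations:
       io.katacontainers.config.runtime.cc_init_data: <initdata_string>   # 1 (assumed annotation key)
   spec:
     runtimeClassName: kata-remote
     containers:
     - name: skr-openshift                                  # placeholder container name
       image: registry.access.redhat.com/ubi9/ubi           # illustrative image
       command: ["sleep", "36000"]
       securityContext:
         privileged: false
         seccompProfile:
           type: RuntimeDefault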
   - 1
   - Optional: Setting initdata in a pod annotation overrides the global INITDATA setting in the peer pods config map.
2. Create the pod by running the following command:

   $ oc create -f test-pod.yaml

3. Log in to the pod by running the following command:

   $ oc exec -it ocp-cc-pod -- bash

4. Fetch the pod secret by running the following command:

   $ curl http://127.0.0.1:8006/cdh/resource/default/kbsres1/key1

   Example output:

   res1val1
Chapter 3. Deploying confidential containers on IBM Z and IBM LinuxONE
You deploy confidential containers on a Red Hat OpenShift Container Platform cluster on IBM Z® and IBM® LinuxONE for your workloads.
Confidential containers on IBM Z® and IBM® LinuxONE is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
You deploy confidential containers by performing the following steps:
- Install the OpenShift sandboxed containers Operator.
- Create the peer pods secret.
- Enable the confidential containers feature gate.
- Optional: If you pull a peer pod VM image from a private registry such as registry.access.redhat.com, configure the pull secret for peer pods.
- Create initdata to initialize a peer pod with sensitive or workload-specific data at runtime. See About initdata for details.

  Important: Do not use the default permissive Kata Agent policy in a production environment. You must configure a restrictive policy, preferably by creating initdata. As a minimum requirement, you must disable ExecProcessRequest to prevent a cluster administrator from accessing sensitive data by running the oc exec command on a confidential containers pod.
- Create the peer pods config map. You can add initdata to the config map to create a default global configuration for your peer pods.
- Optional: Add initdata to a pod manifest to override the global initdata configuration you set in the peer pods config map.
- Optional: Select a custom peer pod VM image.
- Create the KataConfig CR.
- Verify the attestation process.
IBM® Hyper Protect Confidential Container (HPCC) for Red Hat OpenShift Container Platform is now production-ready. HPCC enables Confidential Computing technology at the enterprise scale by providing a multiparty Hyper Protect Contract, deployment attestation, and validation of container runtime and OCI image integrity.
HPCC is supported by IBM Z17® and IBM® LinuxONE Emperor 5. For more information, see the IBM HPCC documentation.
3.1. Prerequisites
- You have installed the latest version of Red Hat OpenShift Container Platform on the cluster where you are running your confidential containers workload.
- You have deployed Red Hat build of Trustee on an OpenShift Container Platform cluster in a trusted environment. For more information, see Deploying Red Hat build of Trustee.
- You are using LinuxONE Emperor 4.
- You have enabled Secure Unpack Facility on your Logical Partition (LPAR), which is necessary for the IBM Secure Execution. For more information, see Enabling the KVM host for IBM Secure Execution.
3.2. Installing the OpenShift sandboxed containers Operator
You install the OpenShift sandboxed containers Operator by using the command line interface (CLI).
Prerequisites
- You have access to the cluster as a user with the cluster-admin role.
Procedure
1. Create an osc-namespace.yaml manifest file:

   apiVersion: v1
   kind: Namespace
   metadata:
     name: openshift-sandboxed-containers-operator

2. Create the namespace by running the following command:
   $ oc create -f osc-namespace.yaml

3. Create an osc-operatorgroup.yaml manifest file:
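   As in the Azure chapter, the manifest content is not shown on this page. A minimal OperatorGroup sketch, with the metadata.name value as an assumption:

   apiVersion: operators.coreos.com/v1
   kind: OperatorGroup
   metadata:
     name: sandboxed-containers-operator-group   # assumed name
     namespace: openshift-sandboxed-containers-operator
   spec:
     targetNamespaces:
     - openshift-sandboxed-containers-operator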
4. Create the operator group by running the following command:

   $ oc create -f osc-operatorgroup.yaml

5. Create an osc-subscription.yaml manifest file:
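   The manifest content is not shown on this page. A Subscription sketch follows; the channel and source values are assumptions and should be checked against OperatorHub:

   apiVersion: operators.coreos.com/v1alpha1
   kind: Subscription
   metadata:
     name: sandboxed-containers-operator
     namespace: openshift-sandboxed-containers-operator
   spec:
     channel: stable                       # assumed channel
     installPlanApproval: Automatic
     name: sandboxed-containers-operator   # package name in OperatorHub
     source: redhat-operators
     sourceNamespace: openshift-marketplace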
6. Create the subscription by running the following command:

   $ oc create -f osc-subscription.yaml

7. Verify that the Operator is correctly installed by running the following command:

   $ oc get csv -n openshift-sandboxed-containers-operator

   This command can take several minutes to complete.
8. Watch the process by running the following command:

   $ watch oc get csv -n openshift-sandboxed-containers-operator

   Example output:

   NAME                             DISPLAY                                   VERSION   REPLACES   PHASE
   openshift-sandboxed-containers   openshift-sandboxed-containers-operator   1.10.3    1.9.0      Succeeded
3.3. Creating the peer pods secret
You must create a peer pods secret. The secret stores credentials for creating the pod virtual machine (VM) image and peer pod instances.
Prerequisites
- LIBVIRT_URI. This value is the default gateway IP address of the libvirt network. Check your libvirt network setup to obtain this value.

  Note: If libvirt uses the default bridge virtual network, you can obtain the LIBVIRT_URI by running the following commands:

   $ virtint=$(bridge_line=$(virsh net-info default | grep Bridge); echo "${bridge_line//Bridge:/}" | tr -d [:blank:])
   $ LIBVIRT_URI=$( ip -4 addr show $virtint | grep -oP '(?<=inet\s)\d+(\.\d+){3}')
   $ LIBVIRT_GATEWAY_URI="qemu+ssh://root@${LIBVIRT_URI}/system?no_verify=1"

- REDHAT_OFFLINE_TOKEN. You have generated this token to download the RHEL image at Red Hat API Tokens.
- HOST_KEY_CERTS. The Host Key Document (HKD) certificate enables secure execution on IBM Z®. For more information, see Obtaining a host key document from Resource Link in the IBM documentation.
Procedure
1. Create a peer-pods-secret.yaml manifest file according to the following example:
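   The example manifest is not shown on this page. Based on the credentials listed in the prerequisites, a sketch might look like the following; the key names are assumptions:

   apiVersion: v1
   kind: Secret
   metadata:
     name: peer-pods-secret
     namespace: openshift-sandboxed-containers-operator
   type: Opaque
   stringData:
     CLOUD_PROVIDER: "libvirt"                            # assumed key
     LIBVIRT_URI: "<libvirt_gateway_uri>"                 # for example, qemu+ssh://root@<ip>/system?no_verify=1
     REDHAT_OFFLINE_TOKEN: "<rh_offline_token>"
     HOST_KEY_CERTS: "<host_key_document_certificate>"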
2. Create the secret by running the following command:

   $ oc create -f peer-pods-secret.yaml
3.4. Enabling the confidential containers feature gate
You enable the confidential containers feature gate by creating the osc-feature-gates config map.
Procedure
1. Create a cc-feature-gate.yaml manifest file:
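   The manifest content is not shown on this page. As in the Azure chapter, a sketch with an assumed data key:

   apiVersion: v1
   kind: ConfigMap
   metadata:
     name: osc-feature-gates
     namespace: openshift-sandboxed-containers-operator
   data:
     confidential: "true"   # assumed key that enables the confidential containers feature gate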
2. Create the osc-feature-gates config map by running the following command:

   $ oc create -f cc-feature-gate.yaml
3.5. Configuring the pull secret for peer pods
To pull pod VM images from a private registry, you must configure the pull secret for peer pods.
Then, you can link the pull secret to the default service account or you can specify the pull secret in the peer pod manifest.
Procedure
1. Set the NS variable to the namespace where you deploy your peer pods:

   $ NS=<namespace>

2. Copy the pull secret to the peer pod namespace:

   $ oc get secret pull-secret -n openshift-config -o yaml \
       | sed "s/namespace: openshift-config/namespace: ${NS}/" \
       | oc apply -n "${NS}" -f -

   You can use the cluster pull secret, as in this example, or a custom pull secret.

3. Optional: Link the pull secret to the default service account:

   $ oc secrets link default pull-secret --for=pull -n ${NS}

4. Alternatively, add the pull secret to the peer pod manifest:
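   The example manifest is not shown on this page. A sketch of a peer pod manifest that references the pull secret directly follows; the pod and container names are placeholders:

   apiVersion: v1
   kind: Pod
   metadata:
     name: my-pod                  # placeholder
     namespace: <namespace>
   spec:
     runtimeClassName: kata-remote
     imagePullSecrets:
     - name: pull-secret
     containers:
     - name: app                   # placeholder
       image: <private_registry_image>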
3.6. Creating initdata
You create initdata to securely initialize a peer pod with sensitive or workload-specific data at runtime, thus avoiding the need to embed this data in a virtual machine image. This approach provides additional security by reducing the risk of exposure of confidential information and eliminates the need for custom image builds.
In a production environment, you must create initdata to override the default permissive Kata agent policy.
You can specify initdata in the peer pods config map, for global configuration, or in a peer pod manifest, for a specific pod. The initdata value in a peer pod manifest overrides the value set in the peer pods config map.
You must delete the kbs_cert setting if you configure insecure_http = true in the kbs-config config map for Red Hat build of Trustee.
Procedure
1. Obtain the Red Hat build of Trustee IP address by running the following command:

   $ oc get node $(oc get pod -n trustee-operator-system \
       -o jsonpath='{.items[0].spec.nodeName}') \
       -o jsonpath='{.status.addresses[?(@.type=="InternalIP")].address}'

   Example output:

   192.168.122.22

2. Obtain the port by running the following command:

   $ oc get svc kbs-service -n trustee-operator-system

   Example output:

   NAME          TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
   kbs-service   NodePort   172.30.116.11   <none>        8080:32178/TCP   12d

3. Create the initdata.toml file:
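   The initdata.toml content is not shown on this page. It has the same shape as the sketch in the Azure chapter; the main difference is that the URL is built from the IP address and port obtained above. A compact, assumption-based sketch, with the certificate entries and the full policy omitted:

   algorithm = "sha256"
   version = "0.1.0"

   [data]
   "aa.toml" = '''
   [token_configs]
   [token_configs.kbs]
   url = "https://192.168.122.22:32178"
   '''

   "cdh.toml" = '''
   [kbc]
   name = "cc_kbc"
   url = "https://192.168.122.22:32178"
   '''

   "policy.rego" = '''
   package agent_policy
   default CreateContainerRequest := true
   default ExecProcessRequest := false
   '''

   Add the cert and kbs_cert entries and a complete policy as described in the callouts below.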
   - URL
   - Specify the Red Hat build of Trustee IP address and the port, for example, https://192.168.122.22:32178.
   - <kbs_certificate>
   - Specify the Base64-encoded TLS certificate for the attestation agent.
   - kbs_cert
   - Delete the kbs_cert setting if you configure insecure_http = true in the kbs-config config map for Red Hat build of Trustee.
4. Convert the initdata.toml file to a Base64-encoded string in gzip format in a text file by running the following command:

   $ cat initdata.toml | gzip | base64 -w0 > initdata.txt

   Record this string for the peer pods config map or a peer pod manifest.
5. Calculate the SHA-256 hash of the initdata.toml file and assign its value to the hash variable by running the following command:

   $ hash=$(sha256sum initdata.toml | cut -d' ' -f1)

6. Assign 32 bytes of 0s to the initial_pcr variable by running the following command:

   $ initial_pcr=0000000000000000000000000000000000000000000000000000000000000000

7. Calculate the SHA-256 hash of hash and initial_pcr and assign its value to the PCR8_HASH variable by running the following command:

   $ PCR8_HASH=$(echo -n "$initial_pcr$hash" | xxd -r -p | sha256sum | cut -d' ' -f1) && echo $PCR8_HASH

8. Record the PCR8_HASH value for the RVPS config map.
3.7. Creating the peer pods config map
You must create the peer pods config map.
Optional: Add initdata to the peer pods config map to create a default configuration for all peer pods.
Procedure
1. Create a peer-pods-cm.yaml manifest file according to the following example:
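   The example manifest is not shown on this page. The following sketch uses the keys described in the callouts below; the CLOUD_PROVIDER key is an assumption:

   apiVersion: v1
   kind: ConfigMap
   metadata:
     name: peer-pods-cm
     namespace: openshift-sandboxed-containers-operator
   data:
     CLOUD_PROVIDER: "libvirt"                 # assumed key
     LIBVIRT_POOL: "<libvirt_pool>"
     LIBVIRT_VOL_NAME: "<libvirt_volume>"
     LIBVIRT_DIR_NAME: "/var/lib/libvirt/images/<subdirectory>"
     LIBVIRT_NET: "<libvirt_network>"
     PEERPODS_LIMIT_PER_NODE: "10"
     ROOT_VOLUME_SIZE: "6"
     INITDATA: "<initdata_string>"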
LIBVIRT_VOL_NAME- If you have manually configured the libvirt volume, use the same name as in your KVM host configuration.
LIBVIRT_DIR_NAME-
Specify the libvirt directory for storing virtual machine disk images, such as
.qcow2, or.rawfiles. To ensure libvirt has read and write access permissions, use a subdirectory of the libvirt storage directory. The default is/var/lib/libvirt/images/. LIBVIRT_NET- Specify a libvirt network if you do not want to use the default network.
PEERPODS_LIMIT_PER_NODE-
You can increase this value to run more peer pods on a node. The default value is
10. ROOT_VOLUME_SIZE- You can increase this value for pods with larger container images. Specify the root volume size in gigabytes for the pod VM. The default and minimum size is 6 GB.
- INITDATA
- Specify the initdata string to create a default configuration for all peer pods. If you add initdata to a peer pod manifest, that setting overrides this global configuration.
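As a starting point, the following is a minimal sketch of a libvirt peer-pods-cm config map that uses only the keys described above; the CLOUD_PROVIDER value is an assumption for a libvirt deployment, and you must replace the placeholder values with your own:

apiVersion: v1
kind: ConfigMap
metadata:
  name: peer-pods-cm
  namespace: openshift-sandboxed-containers-operator
data:
  CLOUD_PROVIDER: "libvirt"              # assumed provider key for a libvirt deployment
  LIBVIRT_POOL: "<libvirt_pool>"
  LIBVIRT_VOL_NAME: "<libvirt_volume>"
  LIBVIRT_DIR_NAME: "/var/lib/libvirt/images/"
  LIBVIRT_NET: "default"
  PEERPODS_LIMIT_PER_NODE: "10"
  ROOT_VOLUME_SIZE: "6"
  INITDATA: "<initdata_string>"          # gzip + Base64 string from initdata.txt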
Create the config map by running the following command:

$ oc create -f peer-pods-cm.yaml
3.8. Applying initdata to a pod
You can override the global INITDATA setting you applied in the peer pods config map by applying customized initdata to a specific pod for special use cases, such as development and testing with a relaxed policy, or when using different Red Hat build of Trustee configurations. You can customize initdata by adding an annotation to the workload pod YAML.
Prerequisite
- You have created an initdata string.
Procedure
Add the initdata string to the pod manifest as an annotation, as shown in the sketch below:
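The following is a hedged sketch of a pod manifest that carries the initdata string as an annotation. The annotation key shown is the upstream Kata Containers initdata key and the container image is illustrative; verify both against your Operator version:

apiVersion: v1
kind: Pod
metadata:
  name: my-pod
  annotations:
    io.katacontainers.config.runtime.cc_init_data: "<initdata_string>"   # assumed annotation key; value is the gzip + Base64 string
spec:
  runtimeClassName: kata-remote
  containers:
  - name: app
    image: registry.access.redhat.com/ubi9/ubi-minimal   # illustrative image
    command: ["sleep", "3600"]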
Create the pod by running the following command:

$ oc create -f my-pod.yaml
3.9. Selecting a custom peer pod VM image
You can select a custom peer pod virtual machine (VM) image tailored to your workload requirements by adding an annotation to the pod manifest. The custom image overrides the default image specified in the peer pods config map.
You create a new libvirt volume in your libvirt pool and upload the custom peer pod VM image to the new volume. Then, you update the pod manifest to use the custom peer pod VM image.
Procedure
Set the LIBVIRT_POOL variable by running the following command:

$ export LIBVIRT_POOL=<libvirt_pool>

Set the LIBVIRT_VOL_NAME variable to a new libvirt volume by running the following command:

$ export LIBVIRT_VOL_NAME=<new_libvirt_volume>

Create a libvirt volume for the pool. A hedged example command is shown below.
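The volume size and format depend on your environment. The following sketch uses virsh vol-create-as with an assumed 20 GiB qcow2 volume; adjust the capacity and format to match your custom image:

$ virsh -c qemu:///system vol-create-as \
    --pool $LIBVIRT_POOL \
    --name $LIBVIRT_VOL_NAME \
    --capacity 20G \
    --format qcow2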
Upload the custom peer pod VM image to the new libvirt volume:

$ virsh -c qemu:///system vol-upload \
    --vol $LIBVIRT_VOL_NAME <custom_podvm_image.qcow2> \
    --pool $LIBVIRT_POOL --sparse

Create a my-pod-manifest.yaml file that selects the custom image, as shown in the sketch below.
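The following is a hedged sketch of such a manifest, assuming the Kata Containers hypervisor image annotation is used to point at the new libvirt volume; the annotation key and the workload image are assumptions to verify against your Operator version:

apiVersion: v1
kind: Pod
metadata:
  name: my-pod
  annotations:
    io.katacontainers.config.hypervisor.image: "<new_libvirt_volume>"   # assumed annotation key for the custom peer pod VM image
spec:
  runtimeClassName: kata-remote
  containers:
  - name: app
    image: registry.access.redhat.com/ubi9/ubi-minimal   # illustrative workload image
    command: ["sleep", "3600"]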
Create the pod by running the following command:

$ oc create -f my-pod-manifest.yaml
3.10. Creating the KataConfig custom resource
You must create the KataConfig custom resource (CR) to install kata-remote as a runtime class on your worker nodes.
OpenShift sandboxed containers installs kata-remote as a secondary, optional runtime on the cluster and not as the primary runtime.
Creating the KataConfig CR automatically reboots the worker nodes. The reboot can take from 10 to more than 60 minutes. The following factors can increase the reboot time:
- A large OpenShift Container Platform deployment with a greater number of worker nodes.
- Activation of the BIOS and Diagnostics utility.
- Deployment on a hard disk drive rather than an SSD.
- Deployment on physical nodes such as bare metal, rather than on virtual nodes.
- A slow CPU and network.
Procedure
Create an example-kataconfig.yaml manifest file, as shown in the sketch below.

Optional: If you have applied node labels to install kata-remote on specific nodes, specify the key and value, for example, cc: 'true'.
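The following is a minimal sketch of the manifest, assuming peer pods are enabled and an optional node selector; the field names follow the KataConfig API but verify them against your Operator version:

apiVersion: kataconfiguration.openshift.io/v1
kind: KataConfig
metadata:
  name: example-kataconfig
spec:
  enablePeerPods: true
  logLevel: info
  # Optional: include only if you labeled specific nodes for kata-remote
  kataConfigPoolSelector:
    matchLabels:
      cc: 'true'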
Create the KataConfig CR by running the following command:

$ oc create -f example-kataconfig.yaml

The new KataConfig CR is created and installs kata-remote as a runtime class on the worker nodes.

Wait for the kata-remote installation to complete and the worker nodes to reboot before verifying the installation.

Monitor the installation progress by running the following command:

$ watch "oc describe kataconfig | sed -n /^Status:/,/^Events/p"

When the status of all workers under kataNodes is installed and the condition InProgress is False without specifying a reason, kata-remote is installed on the cluster.

Verify that you have built the peer pod image and uploaded it to the libvirt volume by running the following command:

$ oc describe configmap peer-pods-cm -n openshift-sandboxed-containers-operator

Monitor the kata-oc machine config pool progress to ensure that it reaches the UPDATED state, that is, when UPDATEDMACHINECOUNT equals MACHINECOUNT, by running the following command:

$ watch oc get mcp/kata-oc

Verify the daemon set by running the following command:

$ oc get -n openshift-sandboxed-containers-operator ds/osc-caa-ds

Verify the runtime classes by running the following command:

$ oc get runtimeclass

Example output

NAME          HANDLER       AGE
kata-remote   kata-remote   152m
3.11. Verifying attestation
You can verify the attestation process by creating a BusyBox pod. The pod image deploys the confidential workload where you can retrieve the key.
This procedure is an example of how to verify that attestation is working. Do not write sensitive data to standard I/O, because the data can be captured by using a memory dump. Only data written to memory is encrypted.
Procedure
Create a test-pod.yaml manifest file, as shown in the sketch below.
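The following is a hedged sketch of a BusyBox test pod that runs with the kata-remote runtime class; the image reference is illustrative:

apiVersion: v1
kind: Pod
metadata:
  name: busybox
  namespace: default
spec:
  runtimeClassName: kata-remote
  containers:
  - name: busybox
    image: quay.io/prometheus/busybox:latest   # illustrative BusyBox image
    command: ["sleep", "3600"]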
Create the pod by running the following command:

$ oc create -f test-pod.yaml

Log in to the pod by running the following command:
$ oc exec -it busybox -n default -- /bin/sh

Fetch the pod secret by running the following command:

$ wget http://127.0.0.1:8006/cdh/resource/default/kbsres1/key1

Example output

Connecting to 127.0.0.1:8006 (127.0.0.1:8006)
saving to 'key1'
key1                 100% |*******************************************|     8  0:00:00 ETA
'key1' saved

Display the key1 value by running the following command:

$ cat key1

Example output

res1val1/ #
Chapter 4. Uninstalling confidential containers
You uninstall confidential containers by uninstalling OpenShift sandboxed containers and its components on your workload cluster.
Then, you uninstall the Red Hat build of Trustee Operator and its components. See Uninstalling Red Hat build of Trustee for details.
You uninstall OpenShift sandboxed containers by performing the following tasks:
- Delete the workload pods.
- Delete the KataConfig custom resource (CR).
- Uninstall the OpenShift sandboxed containers Operator.
- Delete the KataConfig custom resource definition (CRD).
You must delete the workload pods before deleting the KataConfig CR. The pod names usually have the prefix podvm, plus custom tags if provided. If you deployed OpenShift sandboxed containers on a cloud provider and any resources remain after you follow these procedures, you might receive an unexpected bill for those resources from your cloud provider. After you finish uninstalling OpenShift sandboxed containers on a cloud provider, check the cloud provider console to ensure that all of the resources were deleted.
4.1. Deleting workload pods
You can delete the OpenShift sandboxed containers workload pods by using the CLI.
Prerequisites
- You have the JSON processor (jq) utility installed.
Procedure
Search for the pods by running the following command:
$ oc get pods -A -o json | jq -r '.items[] | select(.spec.runtimeClassName == "<runtime>").metadata.name'

Delete each pod by running the following command:

$ oc delete pod <pod>
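Because peer pods can run in any namespace, you need the namespace as well as the pod name when deleting them. As a convenience, the following hedged one-liner, assuming the runtime class name kata-remote, combines the search and delete steps:

$ oc get pods -A -o json | \
    jq -r '.items[] | select(.spec.runtimeClassName == "kata-remote") | "\(.metadata.namespace) \(.metadata.name)"' | \
    while read -r ns pod; do oc delete pod "$pod" -n "$ns"; done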
When uninstalling OpenShift sandboxed containers deployed using a cloud provider, you must delete all of the pods. Any remaining pod resources might result in an unexpected bill from your cloud provider.
4.2. Deleting the KataConfig custom resource
You delete the KataConfig custom resource (CR) by using the command line.
Procedure
Delete the KataConfig CR by running the following command:

$ oc delete kataconfig example-kataconfig

Verify the CR removal by running the following command:

$ oc get kataconfig example-kataconfig

Example output

No example-kataconfig instances exist
You must ensure that all pods are deleted. Any remaining pod resources might result in an unexpected bill from your cloud provider.
4.3. Uninstalling the OpenShift sandboxed containers Operator
You uninstall the OpenShift sandboxed containers Operator by using the command line.
Procedure
Delete the subscription by running the following command:
$ oc delete subscription sandboxed-containers-operator -n openshift-sandboxed-containers-operator

Delete the namespace by running the following command:

$ oc delete namespace openshift-sandboxed-containers-operator
4.4. Deleting the KataConfig CRD
You delete the KataConfig custom resource definition (CRD) by using the command line.
Prerequisites
- You have deleted the KataConfig custom resource.
- You have uninstalled the OpenShift sandboxed containers Operator.
Procedure
Delete the KataConfig CRD by running the following command:

$ oc delete crd kataconfigs.kataconfiguration.openshift.io

Verify that the CRD was deleted by running the following command:

$ oc get crd kataconfigs.kataconfiguration.openshift.io

Example output

Unknown CRD kataconfigs.kataconfiguration.openshift.io