Chapter 4. Deploying on Azure
You can deploy OpenShift sandboxed containers and Confidential Containers on Microsoft Azure Cloud Computing Services.
Cluster requirements
- You have installed Red Hat OpenShift Container Platform 4.14 or later on the cluster where you are installing the OpenShift sandboxed containers Operator.
- Your cluster has at least one worker node.
4.1. Peer pod resource requirements
You must ensure that your cluster has sufficient resources.
Peer pod virtual machines (VMs) require resources in two locations:
- The worker node. The worker node stores metadata, Kata shim resources (containerd-shim-kata-v2), remote-hypervisor resources (cloud-api-adaptor), and the tunnel setup between the worker nodes and the peer pod VM.
- The cloud instance. This is the actual peer pod VM running in the cloud.
The CPU and memory resources used in the Kubernetes worker node are handled by the pod overhead included in the RuntimeClass (kata-remote) definition used for creating peer pods.
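For illustration, pod overhead is expressed in the RuntimeClass definition itself. The following sketch shows the general shape of such a definition; the podFixed values are illustrative assumptions, not the values set by the Operator. Check the kata-remote RuntimeClass on your cluster for the actual overhead:

apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: kata-remote
handler: kata-remote
overhead:
  podFixed:
    cpu: "250m"      # illustrative assumption
    memory: "350Mi"  # illustrative assumption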
The total number of peer pod VMs that can run in the cloud is defined as a Kubernetes Node extended resource. This limit is per node and is set by the limit attribute in the peerpodConfig custom resource (CR).
The peerpodConfig CR, named peerpodconfig-openshift, is created when you create the kataConfig CR and enable peer pods, and is located in the openshift-sandboxed-containers-operator namespace.
The following peerpodConfig CR example displays the default spec values:

apiVersion: confidentialcontainers.org/v1alpha1
kind: PeerPodConfig
metadata:
  name: peerpodconfig-openshift
  namespace: openshift-sandboxed-containers-operator
spec:
  cloudSecretName: peer-pods-secret
  configMapName: peer-pods-cm
  limit: "10" 1
  nodeSelector:
    node-role.kubernetes.io/kata-oc: ""

1. The default limit is 10 VMs per node.
The extended resource is named kata.peerpods.io/vm, and enables the Kubernetes scheduler to handle capacity tracking and accounting.
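To see the extended resource capacity that a node advertises, you can query the node status. This is a sketch using a standard JSONPath query; replace <node_name> with the name of a worker node:

$ oc get node <node_name> -o jsonpath='{.status.allocatable.kata\.peerpods\.io/vm}'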
You can edit the limit per node based on the requirements for your environment after you install the OpenShift sandboxed containers Operator.
A mutating webhook adds the extended resource kata.peerpods.io/vm to the pod specification. It also removes any resource-specific entries from the pod specification, if present. This enables the Kubernetes scheduler to account for these extended resources, ensuring the peer pod is only scheduled when resources are available.
The mutating webhook modifies a Kubernetes pod as follows:
- The mutating webhook checks the pod for the expected RuntimeClassName value, specified in the TARGET_RUNTIME_CLASS environment variable. If the value in the pod specification does not match the value in TARGET_RUNTIME_CLASS, the webhook exits without modifying the pod.
- If the RuntimeClassName values match, the webhook makes the following changes to the pod spec:
  - The webhook removes every resource specification from the resources field of all containers and init containers in the pod.
  - The webhook adds the extended resource (kata.peerpods.io/vm) to the spec by modifying the resources field of the first container in the pod. The extended resource kata.peerpods.io/vm is used by the Kubernetes scheduler for accounting purposes.
The mutating webhook excludes specific system namespaces in OpenShift Container Platform from mutation. If a peer pod is created in those system namespaces, then resource accounting using Kubernetes extended resources does not work unless the pod spec includes the extended resource.
As a best practice, define a cluster-wide policy to only allow peer pod creation in specific namespaces.
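For illustration, the following sketch shows a container resources stanza before and after the webhook mutation. The request values are hypothetical:

# Before mutation: user-defined requests
resources:
  requests:
    cpu: "1"
    memory: 2Gi

# After mutation: user-defined entries are removed, and the extended
# resource is added to the first container for scheduler accounting
resources:
  requests:
    kata.peerpods.io/vm: "1"
  limits:
    kata.peerpods.io/vm: "1"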
4.2. Deploying OpenShift sandboxed containers by using the web console
You can deploy OpenShift sandboxed containers on Azure by using the OpenShift Container Platform web console to perform the following tasks:
- Install the OpenShift sandboxed containers Operator.
- Create the peer pods secret.
- Create the peer pods config map.
- Create the Azure secret.
- Create the KataConfig custom resource.
- Configure the OpenShift sandboxed containers workload objects.
4.2.1. Installing the OpenShift sandboxed containers Operator
You can install the OpenShift sandboxed containers Operator by using the OpenShift Container Platform web console.
Prerequisites
- You have access to the cluster as a user with the cluster-admin role.
Procedure
- In the web console, navigate to Operators → OperatorHub.
- In the Filter by keyword field, type OpenShift sandboxed containers.
- Select the OpenShift sandboxed containers Operator tile and click Install.
- On the Install Operator page, select stable from the list of available Update Channel options.
- Verify that Operator recommended Namespace is selected for Installed Namespace. This installs the Operator in the mandatory openshift-sandboxed-containers-operator namespace. If this namespace does not yet exist, it is automatically created.

  Note: Attempting to install the OpenShift sandboxed containers Operator in a namespace other than openshift-sandboxed-containers-operator causes the installation to fail.

- Verify that Automatic is selected for Approval Strategy. Automatic is the default value, and enables automatic updates to OpenShift sandboxed containers when a new z-stream release is available.
- Click Install.
- Navigate to Operators → Installed Operators to verify that the Operator is installed.
Additional resources
- Using Operator Lifecycle Manager on restricted networks.
- Configuring proxy support in Operator Lifecycle Manager for disconnected environments.
4.2.2. Creating the peer pods secret
You must create the peer pods secret for OpenShift sandboxed containers.
The secret stores credentials for creating the pod virtual machine (VM) image and peer pod instances.
By default, the OpenShift sandboxed containers Operator creates the secret based on the credentials used to create the cluster. However, you can manually create a secret that uses different credentials.
Prerequisites
- You have installed and configured the Azure CLI tool.
Procedure
Retrieve the Azure subscription ID by running the following command:
$ AZURE_SUBSCRIPTION_ID=$(az account list --query "[?isDefault].id" \
    -o tsv) && echo "AZURE_SUBSCRIPTION_ID: \"$AZURE_SUBSCRIPTION_ID\""
Generate the RBAC content by running the following command:
$ az ad sp create-for-rbac --role Contributor --scopes /subscriptions/$AZURE_SUBSCRIPTION_ID \
    --query "{ client_id: appId, client_secret: password, tenant_id: tenant }"
Example output
{ "client_id": `AZURE_CLIENT_ID`, "client_secret": `AZURE_CLIENT_SECRET`, "tenant_id": `AZURE_TENANT_ID` }
- Record the RBAC output to use in the secret object.
- In the OpenShift Container Platform web console, navigate to Operators → Installed Operators.
- Click the OpenShift sandboxed containers Operator tile.
- Click the Import icon (+) on the top right corner.
In the Import YAML window, paste the following YAML manifest:
apiVersion: v1
kind: Secret
metadata:
  name: peer-pods-secret
  namespace: openshift-sandboxed-containers-operator
type: Opaque
stringData:
  AZURE_CLIENT_ID: "<azure_client_id>" 1
  AZURE_CLIENT_SECRET: "<azure_client_secret>" 2
  AZURE_TENANT_ID: "<azure_tenant_id>" 3
  AZURE_SUBSCRIPTION_ID: "<azure_subscription_id>" 4

1. Specify the AZURE_CLIENT_ID value from the RBAC output.
2. Specify the AZURE_CLIENT_SECRET value from the RBAC output.
3. Specify the AZURE_TENANT_ID value from the RBAC output.
4. Specify the AZURE_SUBSCRIPTION_ID value that you retrieved.
- Click Save to apply the changes.
- Navigate to Workloads → Secrets to verify the peer pods secret.
4.2.3. Creating the peer pods config map
You must create the peer pods config map for OpenShift sandboxed containers.
Procedure
Obtain the following values from your Azure instance:
Retrieve and record the Azure resource group:
$ AZURE_RESOURCE_GROUP=$(oc get infrastructure/cluster -o jsonpath='{.status.platformStatus.azure.resourceGroupName}') && echo "AZURE_RESOURCE_GROUP: \"$AZURE_RESOURCE_GROUP\""
Retrieve and record the Azure VNet name:
$ AZURE_VNET_NAME=$(az network vnet list --resource-group ${AZURE_RESOURCE_GROUP} --query "[].{Name:name}" --output tsv)
This value is used to retrieve the Azure subnet ID.
Retrieve and record the Azure subnet ID:
$ AZURE_SUBNET_ID=$(az network vnet subnet list --resource-group ${AZURE_RESOURCE_GROUP} --vnet-name $AZURE_VNET_NAME --query "[].{Id:id} | [? contains(Id, 'worker')]" --output tsv) && echo "AZURE_SUBNET_ID: \"$AZURE_SUBNET_ID\""
Retrieve and record the Azure network security group (NSG) ID:
$ AZURE_NSG_ID=$(az network nsg list --resource-group ${AZURE_RESOURCE_GROUP} --query "[].{Id:id}" --output tsv) && echo "AZURE_NSG_ID: \"$AZURE_NSG_ID\""
Retrieve and record the Azure region:
$ AZURE_REGION=$(az group show --resource-group ${AZURE_RESOURCE_GROUP} --query "{Location:location}" --output tsv) && echo "AZURE_REGION: \"$AZURE_REGION\""
- In the OpenShift Container Platform web console, navigate to Operators → Installed Operators.
- Select the OpenShift sandboxed containers Operator from the list of operators.
- Click the Import icon (+) in the top right corner.
In the Import YAML window, paste the following YAML manifest:
apiVersion: v1
kind: ConfigMap
metadata:
  name: peer-pods-cm
  namespace: openshift-sandboxed-containers-operator
data:
  CLOUD_PROVIDER: "azure"
  VXLAN_PORT: "9000"
  AZURE_INSTANCE_SIZE: "Standard_B2als_v2" 1
  AZURE_INSTANCE_SIZES: "Standard_B2als_v2,Standard_D2as_v5,Standard_D4as_v5,Standard_D2ads_v5" 2
  AZURE_SUBNET_ID: "<azure_subnet_id>" 3
  AZURE_NSG_ID: "<azure_nsg_id>" 4
  PROXY_TIMEOUT: "5m"
  AZURE_IMAGE_ID: "<azure_image_id>" 5
  AZURE_REGION: "<azure_region>" 6
  AZURE_RESOURCE_GROUP: "<azure_resource_group>" 7
  DISABLECVM: "true"
1. This value is the default if an instance size is not defined in the workload.
2. Lists all of the instance sizes you can specify when creating the pod. This allows you to define smaller instance sizes for workloads that need less memory and fewer CPUs, or larger instance sizes for larger workloads.
3. Specify the AZURE_SUBNET_ID value that you retrieved.
4. Specify the AZURE_NSG_ID value that you retrieved.
5. Optional: By default, this value is populated when you run the KataConfig CR, using an Azure image ID based on your cluster credentials. If you create your own Azure image, specify the correct image ID.
6. Specify the AZURE_REGION value that you retrieved.
7. Specify the AZURE_RESOURCE_GROUP value that you retrieved.
- Click Save to apply the changes.
- Optional: If you update an existing peer pods config map, restart the peerpodconfig-ctrl-caa-daemon daemon set by running the following command:

  $ oc set env ds/peerpodconfig-ctrl-caa-daemon \
      -n openshift-sandboxed-containers-operator REBOOT="$(date)"
- Navigate to Workloads → ConfigMaps to view the new config map.
4.2.4. Creating the Azure secret
You must create the secret for Azure.
Procedure
- Log in to your OpenShift Container Platform cluster.
Generate an SSH key pair by running the following command:
$ ssh-keygen -f ./id_rsa -N ""
- In the OpenShift Container Platform web console, navigate to Workloads → Secrets.
- On the Secrets page, verify that you are in the openshift-sandboxed-containers-operator project.
- Click Create and select Key/value secret.
- In the Secret name field, enter ssh-key-secret.
- In the Key field, enter id_rsa.pub.
- In the Value field, paste your public SSH key.
- Click Create.
Delete the SSH keys you created:
$ shred --remove id_rsa.pub id_rsa
4.2.5. Creating the KataConfig custom resource
You must create the KataConfig custom resource (CR) to install kata-remote as a RuntimeClass on your worker nodes.
The kata-remote runtime class is installed on all worker nodes by default. If you want to install kata-remote on specific nodes, you can add labels to those nodes and then define the label in the KataConfig CR.
OpenShift sandboxed containers installs kata-remote as a secondary, optional runtime on the cluster and not as the primary runtime.
Creating the KataConfig CR automatically reboots the worker nodes. The reboot can take from 10 to more than 60 minutes. The following factors might increase the reboot time:
- A larger OpenShift Container Platform deployment with a greater number of worker nodes.
- Activation of the BIOS and Diagnostics utility.
- Deployment on a hard disk drive rather than an SSD.
- Deployment on physical nodes such as bare metal, rather than on virtual nodes.
- A slow CPU and network.
Prerequisites
- You have access to the cluster as a user with the cluster-admin role.
- Optional: You have installed the Node Feature Discovery Operator if you want to enable node eligibility checks.
Procedure
- In the OpenShift Container Platform web console, navigate to Operators → Installed Operators.
- Select the OpenShift sandboxed containers Operator.
- On the KataConfig tab, click Create KataConfig.
- Enter the following details:
  - Name: Optional: The default name is example-kataconfig.
  - Labels: Optional: Enter any relevant, identifying attributes for the KataConfig resource. Each label represents a key-value pair.
  - enablePeerPods: Select for public cloud, IBM Z®, and IBM® LinuxONE deployments.
  - kataConfigPoolSelector: Optional: To install kata-remote on selected nodes, add a match expression for the labels on the selected nodes:
    - Expand the kataConfigPoolSelector area.
    - In the kataConfigPoolSelector area, expand matchExpressions. This is a list of label selector requirements.
    - Click Add matchExpressions.
    - In the Key field, enter the label key the selector applies to.
    - In the Operator field, enter the key’s relationship to the label values. Valid operators are In, NotIn, Exists, and DoesNotExist.
    - Expand the Values area and then click Add value.
    - In the Value field, enter true or false for the key label value.
  - logLevel: Define the level of log data retrieved for nodes with the kata-remote runtime class.
- Click Create. The KataConfig CR is created and installs the kata-remote runtime class on the worker nodes.

  Wait for the kata-remote installation to complete and the worker nodes to reboot before verifying the installation.
Verification
- On the KataConfig tab, click the KataConfig CR to view its details.
- Click the YAML tab to view the status stanza.

  The status stanza contains the conditions and kataNodes keys. The value of status.kataNodes is an array of nodes, each of which lists nodes in a particular state of kata-remote installation. A message appears each time there is an update.
- Click Reload to refresh the YAML.

  When all workers in the status.kataNodes array display the values installed and conditions.InProgress: False with no specified reason, kata-remote is installed on the cluster.
Verifying the pod VM image
After kata-remote is installed on your cluster, the OpenShift sandboxed containers Operator creates a pod VM image, which is used to create peer pods. This process can take a long time because the image is created on the cloud instance. You can verify that the pod VM image was created successfully by checking the config map that you created for the cloud provider.
Procedure
- Navigate to Workloads → ConfigMaps.
- Click the provider config map to view its details.
- Click the YAML tab.
- Check the status stanza of the YAML file.

  If the AZURE_IMAGE_ID parameter is populated, the pod VM image was created successfully.
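If the parameter is not yet populated, you can watch the image creation job in the Operator namespace. The job name below matches the one referenced in the troubleshooting steps:

$ oc get job osc-podvm-image-creation -n openshift-sandboxed-containers-operator -w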
Troubleshooting
Retrieve the events log by running the following command:
$ oc get events -n openshift-sandboxed-containers-operator --field-selector involvedObject.name=osc-podvm-image-creation
Retrieve the job log by running the following command:
$ oc logs -n openshift-sandboxed-containers-operator jobs/osc-podvm-image-creation
If you cannot resolve the issue, submit a Red Hat Support case and attach the output of both logs.
4.2.6. Configuring workload objects
You must configure OpenShift sandboxed containers workload objects by setting kata-remote as the runtime class for the following pod-templated objects:
- Pod objects
- ReplicaSet objects
- ReplicationController objects
- StatefulSet objects
- Deployment objects
- DeploymentConfig objects
Do not deploy workloads in an Operator namespace. Create a dedicated namespace for these resources.
You can define whether the workload should be deployed using the default instance size, which you defined in the config map, by adding an annotation to the YAML file.
If you do not want to define the instance size manually, you can add an annotation to use an automatic instance size, based on the memory available.
Prerequisites
- You have created the KataConfig custom resource (CR).
Procedure
- In the OpenShift Container Platform web console, navigate to Workloads → workload type, for example, Pods.
- On the workload type page, click an object to view its details.
- Click the YAML tab.
- Add spec.runtimeClassName: kata-remote to the manifest of each pod-templated workload object as in the following example:

  apiVersion: v1
  kind: <object>
  # ...
  spec:
    runtimeClassName: kata-remote
  # ...
Add an annotation to the pod-templated object to use a manually defined instance size or an automatic instance size:
To use a manually defined instance size, add the following annotation:
apiVersion: v1
kind: <object>
metadata:
  annotations:
    io.katacontainers.config.hypervisor.machine_type: "Standard_B2als_v2" 1
# ...
1. Specify the instance size that you defined in the config map.
To use an automatic instance size, add the following annotations:
apiVersion: v1
kind: <Pod>
metadata:
  annotations:
    io.katacontainers.config.hypervisor.default_vcpus: <vcpus>
    io.katacontainers.config.hypervisor.default_memory: <memory>
# ...
Define the amount of memory available for the workload to use. The workload will run on an automatic instance size based on the amount of memory available.
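As a sketch, assuming the Kata Containers convention that default_memory is specified in MiB, a workload requesting 2 vCPUs and 8 GiB of memory might use the following annotations:

apiVersion: v1
kind: <Pod>
metadata:
  annotations:
    io.katacontainers.config.hypervisor.default_vcpus: "2"
    io.katacontainers.config.hypervisor.default_memory: "8192"  # 8 GiB expressed in MiB (assumption)
# ...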
Click Save to apply the changes.
OpenShift Container Platform creates the workload object and begins scheduling it.
Verification
- Inspect the spec.runtimeClassName field of a pod-templated object. If the value is kata-remote, then the workload is running on OpenShift sandboxed containers, using peer pods.
4.3. Deploying OpenShift sandboxed containers by using the command line
You can deploy OpenShift sandboxed containers on Azure by using the command line interface (CLI) to perform the following tasks:
- Install the OpenShift sandboxed containers Operator.
- Optional: Change the number of virtual machines running on each worker node.
- Create the peer pods secret.
- Create the peer pods config map.
- Create the Azure secret.
- Create the KataConfig custom resource.
- Configure the OpenShift sandboxed containers workload objects.
4.3.1. Installing the OpenShift sandboxed containers Operator
You can install the OpenShift sandboxed containers Operator by using the CLI.
Prerequisites
- You have installed the OpenShift CLI (oc).
- You have access to the cluster as a user with the cluster-admin role.
Procedure
Create an osc-namespace.yaml manifest file:

apiVersion: v1
kind: Namespace
metadata:
  name: openshift-sandboxed-containers-operator
Create the namespace by running the following command:
$ oc apply -f osc-namespace.yaml
Create an osc-operatorgroup.yaml manifest file:

apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: sandboxed-containers-operator-group
  namespace: openshift-sandboxed-containers-operator
spec:
  targetNamespaces:
  - openshift-sandboxed-containers-operator
Create the operator group by running the following command:
$ oc apply -f osc-operatorgroup.yaml
Create an osc-subscription.yaml manifest file:

apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: sandboxed-containers-operator
  namespace: openshift-sandboxed-containers-operator
spec:
  channel: stable
  installPlanApproval: Automatic
  name: sandboxed-containers-operator
  source: redhat-operators
  sourceNamespace: openshift-marketplace
  startingCSV: sandboxed-containers-operator.v1.7.0
Create the subscription by running the following command:
$ oc apply -f osc-subscription.yaml
Verify that the Operator is correctly installed by running the following command:
$ oc get csv -n openshift-sandboxed-containers-operator
This command can take several minutes to complete.
Watch the process by running the following command:
$ watch oc get csv -n openshift-sandboxed-containers-operator
Example output
NAME                             DISPLAY                                   VERSION   REPLACES   PHASE
openshift-sandboxed-containers   openshift-sandboxed-containers-operator   1.7.0     1.6.0      Succeeded
Additional resources
- Using Operator Lifecycle Manager on restricted networks.
- Configuring proxy support in Operator Lifecycle Manager for disconnected environments.
4.3.2. Modifying the number of peer pod VMs per node
You can change the limit of peer pod virtual machines (VMs) per node by editing the peerpodConfig custom resource (CR).
Procedure
Check the current limit by running the following command:
$ oc get peerpodconfig peerpodconfig-openshift -n openshift-sandboxed-containers-operator \
    -o jsonpath='{.spec.limit}{"\n"}'
Modify the limit attribute of the peerpodConfig CR by running the following command:

$ oc patch peerpodconfig peerpodconfig-openshift -n openshift-sandboxed-containers-operator \
    --type merge --patch '{"spec":{"limit":"<value>"}}' 1
1. Replace <value> with the limit you want to define.
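For example, to raise the limit to 20 VMs per node:

$ oc patch peerpodconfig peerpodconfig-openshift -n openshift-sandboxed-containers-operator \
    --type merge --patch '{"spec":{"limit":"20"}}'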
4.3.3. Creating the peer pods secret
You must create the peer pods secret for OpenShift sandboxed containers.
The secret stores credentials for creating the pod virtual machine (VM) image and peer pod instances.
By default, the OpenShift sandboxed containers Operator creates the secret based on the credentials used to create the cluster. However, you can manually create a secret that uses different credentials.
Prerequisites
- You have installed and configured the Azure CLI tool.
Procedure
Retrieve the Azure subscription ID by running the following command:
$ AZURE_SUBSCRIPTION_ID=$(az account list --query "[?isDefault].id" \
    -o tsv) && echo "AZURE_SUBSCRIPTION_ID: \"$AZURE_SUBSCRIPTION_ID\""
Generate the RBAC content by running the following command:
$ az ad sp create-for-rbac --role Contributor --scopes /subscriptions/$AZURE_SUBSCRIPTION_ID \
    --query "{ client_id: appId, client_secret: password, tenant_id: tenant }"
Example output
{ "client_id": `AZURE_CLIENT_ID`, "client_secret": `AZURE_CLIENT_SECRET`, "tenant_id": `AZURE_TENANT_ID` }
- Record the RBAC output to use in the secret object.
- Create a peer-pods-secret.yaml manifest file according to the following example:

  apiVersion: v1
  kind: Secret
  metadata:
    name: peer-pods-secret
    namespace: openshift-sandboxed-containers-operator
  type: Opaque
  stringData:
    AZURE_CLIENT_ID: "<azure_client_id>" 1
    AZURE_CLIENT_SECRET: "<azure_client_secret>" 2
    AZURE_TENANT_ID: "<azure_tenant_id>" 3
    AZURE_SUBSCRIPTION_ID: "<azure_subscription_id>" 4

  1. Specify the AZURE_CLIENT_ID value from the RBAC output.
  2. Specify the AZURE_CLIENT_SECRET value from the RBAC output.
  3. Specify the AZURE_TENANT_ID value from the RBAC output.
  4. Specify the AZURE_SUBSCRIPTION_ID value that you retrieved.
Create the secret by running the following command:
$ oc apply -f peer-pods-secret.yaml
Optional: To update an existing peer pods config map, restart the peerpodconfig-ctrl-caa-daemon daemon set by running the following command:

$ oc set env ds/peerpodconfig-ctrl-caa-daemon \
    -n openshift-sandboxed-containers-operator REBOOT="$(date)"
4.3.4. Creating the peer pods config map
You must create the peer pods config map for OpenShift sandboxed containers.
Procedure
Obtain the following values from your Azure instance:
Retrieve and record the Azure resource group:
$ AZURE_RESOURCE_GROUP=$(oc get infrastructure/cluster -o jsonpath='{.status.platformStatus.azure.resourceGroupName}') && echo "AZURE_RESOURCE_GROUP: \"$AZURE_RESOURCE_GROUP\""
Retrieve and record the Azure VNet name:
$ AZURE_VNET_NAME=$(az network vnet list --resource-group ${AZURE_RESOURCE_GROUP} --query "[].{Name:name}" --output tsv)
This value is used to retrieve the Azure subnet ID.
Retrieve and record the Azure subnet ID:
$ AZURE_SUBNET_ID=$(az network vnet subnet list --resource-group ${AZURE_RESOURCE_GROUP} --vnet-name $AZURE_VNET_NAME --query "[].{Id:id} | [? contains(Id, 'worker')]" --output tsv) && echo "AZURE_SUBNET_ID: \"$AZURE_SUBNET_ID\""
Retrieve and record the Azure network security group (NSG) ID:
$ AZURE_NSG_ID=$(az network nsg list --resource-group ${AZURE_RESOURCE_GROUP} --query "[].{Id:id}" --output tsv) && echo "AZURE_NSG_ID: \"$AZURE_NSG_ID\""
Retrieve and record the Azure region:
$ AZURE_REGION=$(az group show --resource-group ${AZURE_RESOURCE_GROUP} --query "{Location:location}" --output tsv) && echo "AZURE_REGION: \"$AZURE_REGION\""
Create a peer-pods-cm.yaml manifest file according to the following example:

apiVersion: v1
kind: ConfigMap
metadata:
  name: peer-pods-cm
  namespace: openshift-sandboxed-containers-operator
data:
  CLOUD_PROVIDER: "azure"
  VXLAN_PORT: "9000"
  AZURE_INSTANCE_SIZE: "Standard_B2als_v2" 1
  AZURE_INSTANCE_SIZES: "Standard_B2als_v2,Standard_D2as_v5,Standard_D4as_v5,Standard_D2ads_v5" 2
  AZURE_SUBNET_ID: "<azure_subnet_id>" 3
  AZURE_NSG_ID: "<azure_nsg_id>" 4
  PROXY_TIMEOUT: "5m"
  AZURE_IMAGE_ID: "<azure_image_id>" 5
  AZURE_REGION: "<azure_region>" 6
  AZURE_RESOURCE_GROUP: "<azure_resource_group>" 7
  DISABLECVM: "true"
1. This value is the default if an instance size is not defined in the workload.
2. Lists all of the instance sizes you can specify when creating the pod. This allows you to define smaller instance sizes for workloads that need less memory and fewer CPUs, or larger instance sizes for larger workloads.
3. Specify the AZURE_SUBNET_ID value that you retrieved.
4. Specify the AZURE_NSG_ID value that you retrieved.
5. Optional: By default, this value is populated when you run the KataConfig CR, using an Azure image ID based on your cluster credentials. If you create your own Azure image, specify the correct image ID.
6. Specify the AZURE_REGION value that you retrieved.
7. Specify the AZURE_RESOURCE_GROUP value that you retrieved.
Create the config map by running the following command:
$ oc apply -f peer-pods-cm.yaml
Optional: To update an existing peer pods config map, restart the peerpodconfig-ctrl-caa-daemon daemon set by running the following command:

$ oc set env ds/peerpodconfig-ctrl-caa-daemon \
    -n openshift-sandboxed-containers-operator REBOOT="$(date)"
4.3.5. Creating the Azure secret
You must create the secret for Azure.
Procedure
- Log in to your OpenShift Container Platform cluster.
Generate an SSH key pair by running the following command:
$ ssh-keygen -f ./id_rsa -N ""
Create the Secret object by running the following command:

$ oc create secret generic ssh-key-secret \
    -n openshift-sandboxed-containers-operator \
    --from-file=id_rsa.pub=./id_rsa.pub \
    --from-file=id_rsa=./id_rsa
Delete the SSH keys you created:
$ shred --remove id_rsa.pub id_rsa
4.3.6. Creating the KataConfig custom resource
You must create the KataConfig custom resource (CR) to install kata-remote as a runtime class on your worker nodes.
Creating the KataConfig CR triggers the OpenShift sandboxed containers Operator to create a RuntimeClass CR named kata-remote with a default configuration. This enables users to configure workloads to use kata-remote as the runtime by referencing the CR in the RuntimeClassName field. This CR also specifies the resource overhead for the runtime.
OpenShift sandboxed containers installs kata-remote as a secondary, optional runtime on the cluster and not as the primary runtime.
Creating the KataConfig CR automatically reboots the worker nodes. The reboot can take from 10 to more than 60 minutes. The following factors might increase the reboot time:
- A larger OpenShift Container Platform deployment with a greater number of worker nodes.
- Activation of the BIOS and Diagnostics utility.
- Deployment on a hard disk drive rather than an SSD.
- Deployment on physical nodes such as bare metal, rather than on virtual nodes.
- A slow CPU and network.
Prerequisites
- You have access to the cluster as a user with the cluster-admin role.
Procedure
Create an example-kataconfig.yaml manifest file according to the following example:

apiVersion: kataconfiguration.openshift.io/v1
kind: KataConfig
metadata:
  name: example-kataconfig
spec:
  enablePeerPods: true
  logLevel: info
#  kataConfigPoolSelector:
#    matchLabels:
#      <label_key>: '<label_value>' 1
1. Optional: If you have applied node labels to install kata-remote on specific nodes, specify the key and value, for example, osc: 'true'.
Create the KataConfig CR by running the following command:

$ oc apply -f example-kataconfig.yaml
The new KataConfig CR is created and installs kata-remote as a runtime class on the worker nodes.

Wait for the kata-remote installation to complete and the worker nodes to reboot before verifying the installation.

Monitor the installation progress by running the following command:
$ watch "oc describe kataconfig | sed -n /^Status:/,/^Events/p"
When the status of all workers under kataNodes is installed and the condition InProgress is False without a specified reason, kata-remote is installed on the cluster.

Verify the daemon set by running the following command:
$ oc get -n openshift-sandboxed-containers-operator ds/peerpodconfig-ctrl-caa-daemon
Verify the runtime classes by running the following command:
$ oc get runtimeclass
Example output
NAME          HANDLER       AGE
kata          kata          152m
kata-remote   kata-remote   152m
Verifying the pod VM image
After kata-remote is installed on your cluster, the OpenShift sandboxed containers Operator creates a pod VM image, which is used to create peer pods. This process can take a long time because the image is created on the cloud instance. You can verify that the pod VM image was created successfully by checking the config map that you created for the cloud provider.
Procedure
Obtain the config map you created for the peer pods:
$ oc get configmap peer-pods-cm -n openshift-sandboxed-containers-operator -o yaml
Check the status stanza of the YAML file.

If the AZURE_IMAGE_ID parameter is populated, the pod VM image was created successfully.
Troubleshooting
Retrieve the events log by running the following command:
$ oc get events -n openshift-sandboxed-containers-operator --field-selector involvedObject.name=osc-podvm-image-creation
Retrieve the job log by running the following command:
$ oc logs -n openshift-sandboxed-containers-operator jobs/osc-podvm-image-creation
If you cannot resolve the issue, submit a Red Hat Support case and attach the output of both logs.
4.3.7. Configuring workload objects
You must configure OpenShift sandboxed containers workload objects by setting kata-remote as the runtime class for the following pod-templated objects:
- Pod objects
- ReplicaSet objects
- ReplicationController objects
- StatefulSet objects
- Deployment objects
- DeploymentConfig objects
Do not deploy workloads in an Operator namespace. Create a dedicated namespace for these resources.
You can define whether the workload should be deployed using the default instance size, which you defined in the config map, by adding an annotation to the YAML file.
If you do not want to define the instance size manually, you can add an annotation to use an automatic instance size, based on the memory available.
Prerequisites
- You have created the KataConfig custom resource (CR).
Procedure
Add spec.runtimeClassName: kata-remote to the manifest of each pod-templated workload object as in the following example:

apiVersion: v1
kind: <object>
# ...
spec:
  runtimeClassName: kata-remote
# ...
Add an annotation to the pod-templated object to use a manually defined instance size or an automatic instance size:
To use a manually defined instance size, add the following annotation:
apiVersion: v1
kind: <object>
metadata:
  annotations:
    io.katacontainers.config.hypervisor.machine_type: "Standard_B2als_v2" 1
# ...
1. Specify the instance size that you defined in the config map.
To use an automatic instance size, add the following annotations:
apiVersion: v1
kind: <Pod>
metadata:
  annotations:
    io.katacontainers.config.hypervisor.default_vcpus: <vcpus>
    io.katacontainers.config.hypervisor.default_memory: <memory>
# ...
Define the amount of memory available for the workload to use. The workload will run on an automatic instance size based on the amount of memory available.
Apply the changes to the workload object by running the following command:
$ oc apply -f <object.yaml>
OpenShift Container Platform creates the workload object and begins scheduling it.
Verification
- Inspect the spec.runtimeClassName field of a pod-templated object. If the value is kata-remote, then the workload is running on OpenShift sandboxed containers, using peer pods.
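A minimal way to check this from the command line, assuming your workload pod is named <pod_name>, is a JSONPath query:

$ oc get pod <pod_name> -o jsonpath='{.spec.runtimeClassName}{"\n"}'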
4.4. Deploying Confidential Containers on Azure
You can deploy Confidential Containers on Microsoft Azure Cloud Computing Services after you deploy OpenShift sandboxed containers.
Confidential Containers on Azure is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
Cluster requirements
- You have installed Red Hat OpenShift Container Platform 4.15 or later on the cluster where you are installing the Confidential compute attestation Operator.
You deploy Confidential Containers by performing the following steps:
- Install the Confidential compute attestation Operator.
- Create the route for Trustee.
- Enable the Confidential Containers feature gate.
- Update the peer pods config map.
- Delete the KataConfig custom resource (CR).
- Re-create the KataConfig CR.
- Create the Trustee authentication secret.
- Create the Trustee config map.
Configure attestation policies:
- Create reference values.
- Create secrets for attested clients.
- Create the resource access policy.
- Optional: Create an attestation policy that overrides the default policy.
- If your TEE is Intel Trust Domain Extensions, configure the Provisioning Certificate Caching Service.
- Create the KbsConfig CR.
- Verify the attestation process.
4.4.1. Installing the Confidential compute attestation Operator
You can install the Confidential compute attestation Operator on Azure by using the CLI.
Prerequisites
- You have installed the OpenShift CLI (oc).
- You have access to the cluster as a user with the cluster-admin role.
Procedure
Create a trustee-namespace.yaml manifest file:

apiVersion: v1
kind: Namespace
metadata:
  name: trustee-operator-system
Create the trustee-operator-system namespace by running the following command:

$ oc apply -f trustee-namespace.yaml
Create a trustee-operatorgroup.yaml manifest file:

apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: trustee-operator-group
  namespace: trustee-operator-system
spec:
  targetNamespaces:
  - trustee-operator-system
Create the operator group by running the following command:
$ oc apply -f trustee-operatorgroup.yaml
Create a trustee-subscription.yaml manifest file:

apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: trustee-operator
  namespace: trustee-operator-system
spec:
  channel: stable
  installPlanApproval: Automatic
  name: trustee-operator
  source: redhat-operators
  sourceNamespace: openshift-marketplace
  startingCSV: trustee-operator.v0.1.0
Create the subscription by running the following command:
$ oc apply -f trustee-subscription.yaml
Verify that the Operator is correctly installed by running the following command:
$ oc get csv -n trustee-operator-system
This command can take several minutes to complete.
Watch the process by running the following command:
$ watch oc get csv -n trustee-operator-system
Example output
NAME                      DISPLAY            VERSION   PHASE
trustee-operator.v0.1.0   Trustee Operator   0.1.0     Succeeded
4.4.2. Creating the route for Trustee
You can create a secure route with edge TLS termination for Trustee. External ingress traffic reaches the router pods as HTTPS and passes on to the Trustee pods as HTTP.
Prerequisites
- You have enabled the Confidential Containers feature gate.
- You have installed the Confidential compute attestation Operator.
Procedure
Create an edge route by running the following command:
$ oc create route edge --service=kbs-service --port kbs-port \
    -n trustee-operator-system
Note: Currently, only a route with a valid CA-signed certificate is supported. You cannot use a route with a self-signed certificate.
Set the TRUSTEE_HOST variable by running the following command:

$ TRUSTEE_HOST=$(oc get route -n trustee-operator-system kbs-service \
    -o jsonpath={.spec.host})
Verify the route by running the following command:
$ echo $TRUSTEE_HOST
Example output
kbs-service-trustee-operator-system.apps.memvjias.eastus.aroapp.io
Record this value for the peer pods config map.
4.4.3. Enabling the Confidential Containers feature gate
You must enable the Confidential Containers feature gate.
Procedure
Create a cc-feature-gate.yaml manifest file:

apiVersion: v1
kind: ConfigMap
metadata:
  name: osc-feature-gates
  namespace: openshift-sandboxed-containers-operator
data:
  confidential: "true"
Create the config map by running the following command:
$ oc apply -f cc-feature-gate.yaml
4.4.4. Updating the peer pods config map
You must update the peer pods config map for Confidential Containers.
Set Secure Boot to true to enable it by default. The default value is false, which presents a security risk.
Procedure
Obtain the following values from your Azure instance:
Retrieve and record the Azure resource group:
$ AZURE_RESOURCE_GROUP=$(oc get infrastructure/cluster -o jsonpath='{.status.platformStatus.azure.resourceGroupName}') && echo "AZURE_RESOURCE_GROUP: \"$AZURE_RESOURCE_GROUP\""
Retrieve and record the Azure VNet name:
$ AZURE_VNET_NAME=$(az network vnet list --resource-group ${AZURE_RESOURCE_GROUP} --query "[].{Name:name}" --output tsv)
This value is used to retrieve the Azure subnet ID.
Retrieve and record the Azure subnet ID:
$ AZURE_SUBNET_ID=$(az network vnet subnet list --resource-group ${AZURE_RESOURCE_GROUP} --vnet-name $AZURE_VNET_NAME --query "[].{Id:id} | [? contains(Id, 'worker')]" --output tsv) && echo "AZURE_SUBNET_ID: \"$AZURE_SUBNET_ID\""
Retrieve and record the Azure network security group (NSG) ID:
$ AZURE_NSG_ID=$(az network nsg list --resource-group ${AZURE_RESOURCE_GROUP} --query "[].{Id:id}" --output tsv) && echo "AZURE_NSG_ID: \"$AZURE_NSG_ID\""
Retrieve and record the Azure region:
$ AZURE_REGION=$(az group show --resource-group ${AZURE_RESOURCE_GROUP} --query "{Location:location}" --output tsv) && echo "AZURE_REGION: \"$AZURE_REGION\""
Create a peer-pods-cm.yaml manifest file according to the following example:

apiVersion: v1
kind: ConfigMap
metadata:
  name: peer-pods-cm
  namespace: openshift-sandboxed-containers-operator
data:
  CLOUD_PROVIDER: "azure"
  VXLAN_PORT: "9000"
  AZURE_INSTANCE_SIZE: "Standard_DC2as_v5" 1
  AZURE_INSTANCE_SIZES: "Standard_DC2as_v5,Standard_DC4as_v5,Standard_DC8as_v5,Standard_DC16as_v5" 2
  AZURE_SUBNET_ID: "<azure_subnet_id>" 3
  AZURE_NSG_ID: "<azure_nsg_id>" 4
  PROXY_TIMEOUT: "5m"
  AZURE_IMAGE_ID: "<azure_image_id>" 5
  AZURE_REGION: "<azure_region>" 6
  AZURE_RESOURCE_GROUP: "<azure_resource_group>" 7
  DISABLECVM: "false"
  AA_KBC_PARAMS: "cc_kbc::https://${TRUSTEE_HOST}" 8
  ENABLE_SECURE_BOOT: "true" 9
1. This value is the default if an instance size is not defined in the workload.
2. Lists all of the instance sizes you can specify when creating the pod. This allows you to define smaller instance sizes for workloads that need less memory and fewer CPUs, or larger instance sizes for larger workloads.
3. Specify the AZURE_SUBNET_ID value that you retrieved.
4. Specify the AZURE_NSG_ID value that you retrieved.
5. Optional: By default, this value is populated when you run the KataConfig CR, using an Azure image ID based on your cluster credentials. If you create your own Azure image, specify the correct image ID.
6. Specify the AZURE_REGION value that you retrieved.
7. Specify the AZURE_RESOURCE_GROUP value that you retrieved.
8. Specify the host name of the Trustee route.
9. Specify true to enable Secure Boot by default.
Create the config map by running the following command:
$ oc apply -f peer-pods-cm.yaml
Restart the peerpodconfig-ctrl-caa-daemon daemon set by running the following command:

$ oc set env ds/peerpodconfig-ctrl-caa-daemon \
    -n openshift-sandboxed-containers-operator REBOOT="$(date)"
4.4.5. Deleting the KataConfig custom resource
You can delete the KataConfig custom resource (CR) by using the command line.

Deleting the KataConfig CR removes the runtime and its related resources from your cluster.
Deleting the KataConfig CR automatically reboots the worker nodes. The reboot can take from 10 to more than 60 minutes. The following factors might increase the reboot time:
- A larger OpenShift Container Platform deployment with a greater number of worker nodes.
- Activation of the BIOS and Diagnostics utility.
- Deployment on a hard drive rather than an SSD.
- Deployment on physical nodes such as bare metal, rather than on virtual nodes.
- A slow CPU and network.
Prerequisites
- You have installed the OpenShift CLI (oc).
- You have access to the cluster as a user with the cluster-admin role.
Procedure
Delete the KataConfig CR by running the following command:

$ oc delete kataconfig example-kataconfig
The OpenShift sandboxed containers Operator removes all resources that were initially created to enable the runtime on your cluster.
Important: When you delete the KataConfig CR, the CLI stops responding until all worker nodes reboot. You must wait for the deletion process to complete before performing the verification.

Verify that the custom resource was deleted by running the following command:
$ oc get kataconfig example-kataconfig
Example output
No example-kataconfig instances exist
4.4.6. Re-creating the KataConfig custom resource
You must re-create the KataConfig custom resource (CR) for Confidential Containers.

Creating the KataConfig CR automatically reboots the worker nodes. The reboot can take from 10 to more than 60 minutes. The following factors might increase the reboot time:
- A larger OpenShift Container Platform deployment with a greater number of worker nodes.
- Activation of the BIOS and Diagnostics utility.
- Deployment on a hard disk drive rather than an SSD.
- Deployment on physical nodes such as bare metal, rather than on virtual nodes.
- A slow CPU and network.
Prerequisites
- You have access to the cluster as a user with the cluster-admin role.
Procedure
Create an example-kataconfig.yaml manifest file according to the following example:

apiVersion: kataconfiguration.openshift.io/v1
kind: KataConfig
metadata:
  name: example-kataconfig
spec:
  enablePeerPods: true
  logLevel: info
#  kataConfigPoolSelector:
#    matchLabels:
#      <label_key>: '<label_value>' 1
1. Optional: If you have applied node labels to install kata-remote on specific nodes, specify the key and value, for example, cc: 'true'.
Create the KataConfig CR by running the following command:

$ oc apply -f example-kataconfig.yaml
The new KataConfig CR is created and installs kata-remote as a runtime class on the worker nodes.

Wait for the kata-remote installation to complete and the worker nodes to reboot before verifying the installation.

Monitor the installation progress by running the following command:
$ watch "oc describe kataconfig | sed -n /^Status:/,/^Events/p"
When the status of all workers under kataNodes is installed and the condition InProgress is False without a specified reason, kata-remote is installed on the cluster.

Verify the daemon set by running the following command:
$ oc get -n openshift-sandboxed-containers-operator ds/peerpodconfig-ctrl-caa-daemon
Verify the runtime classes by running the following command:
$ oc get runtimeclass
Example output
NAME          HANDLER       AGE
kata          kata          152m
kata-remote   kata-remote   152m
4.4.7. Creating the Trustee authentication secret
You must create the authentication secret for Trustee.
Prerequisites
- You have installed the OpenShift CLI (oc).
- You have access to the cluster as a user with the cluster-admin role.
Procedure
Create a private key by running the following command:
$ openssl genpkey -algorithm ed25519 > privateKey
Create a public key by running the following command:
$ openssl pkey -in privateKey -pubout -out publicKey
Create a secret by running the following command:
$ oc create secret generic kbs-auth-public-key --from-file=publicKey -n trustee-operator-system
Verify the secret by running the following command:
$ oc get secret -n trustee-operator-system
4.4.8. Creating the Trustee config map
You must create the config map to configure the Trustee server.
Prerequisites
- You have created a route for Trustee.
Procedure
Create a kbs-config-cm.yaml manifest file:

apiVersion: v1
kind: ConfigMap
metadata:
  name: kbs-config-cm
  namespace: trustee-operator-system
data:
  kbs-config.json: |
    {
      "insecure_http" : true,
      "sockets": ["0.0.0.0:8080"],
      "auth_public_key": "/etc/auth-secret/publicKey",
      "attestation_token_config": {
        "attestation_token_type": "CoCo"
      },
      "repository_config": {
        "type": "LocalFs",
        "dir_path": "/opt/confidential-containers/kbs/repository"
      },
      "as_config": {
        "work_dir": "/opt/confidential-containers/attestation-service",
        "policy_engine": "opa",
        "attestation_token_broker": "Simple",
        "attestation_token_config": {
          "duration_min": 5
        },
        "rvps_config": {
          "store_type": "LocalJson",
          "store_config": {
            "file_path": "/opt/confidential-containers/rvps/reference-values/reference-values.json"
          }
        }
      },
      "policy_engine_config": {
        "policy_path": "/opt/confidential-containers/opa/policy.rego"
      }
    }
Create the config map by running the following command:
$ oc apply -f kbs-config-cm.yaml
4.4.9. Configuring attestation policies
You can configure the following attestation policy settings:
- Reference values
You can configure reference values for the Reference Value Provider Service (RVPS) by specifying the trusted digests of your hardware platform.
The client collects measurements from the running software and from the Trusted Execution Environment (TEE) hardware and firmware, and submits a quote with the claims to the Attestation Server. These measurements must match the trusted digests registered to the Trustee. This process ensures that the confidential VM (CVM) is running the expected software stack and has not been tampered with.
- Secrets for clients
- You must create one or more secrets to share with attested clients.
- Resource access policy
You must configure a policy for the Trustee policy engine to determine which resources to access.
Do not confuse the Trustee policy engine with the Attestation Service policy engine, which determines the validity of TEE evidence.
- Attestation policy
- Optional: You can overwrite the default attestation policy by creating your own attestation policy.
- Provisioning Certificate Caching Service for TDX
If your TEE is Intel Trust Domain Extensions (TDX), you must configure the Provisioning Certificate Caching Service (PCCS). The PCCS retrieves Provisioning Certification Key (PCK) certificates and caches them in a local database.
Important: Do not use the public Intel PCCS service. Use a local caching service on premises or on the public cloud.
Procedure
Create an rvps-configmap.yaml manifest file:

apiVersion: v1
kind: ConfigMap
metadata:
  name: rvps-reference-values
  namespace: trustee-operator-system
data:
  reference-values.json: |
    [ 1
    ]
1. Specify the trusted digests for your hardware platform if required. Otherwise, leave it empty.
Create the RVPS config map by running the following command:
$ oc apply -f rvps-configmap.yaml
Create one or more secrets to share with attested clients according to the following example:
$ oc create secret generic kbsres1 --from-literal key1=<res1val1> \
    --from-literal key2=<res1val2> -n trustee-operator-system
In this example, the kbsres1 secret has two entries (key1, key2), which the Trustee clients retrieve. You can add more secrets according to your requirements.

Create a resourcepolicy-configmap.yaml manifest file:

apiVersion: v1
kind: ConfigMap
metadata:
  name: resource-policy
  namespace: trustee-operator-system
data:
  policy.rego: | 1
    package policy 2
    default allow = false
    allow {
      input["tee"] != "sample"
    }
1. The name of the resource policy, policy.rego, must match the resource policy defined in the Trustee config map.
2. The resource policy follows the Open Policy Agent specification. This example allows the retrieval of all resources when the TEE is not the sample attester.
Create the resource policy config map by running the following command:
$ oc apply -f resourcepolicy-configmap.yaml
Optional: Create an attestation-policy.yaml manifest file according to the following example:

apiVersion: v1
kind: ConfigMap
metadata:
  name: attestation-policy
  namespace: trustee-operator-system
data:
  default.rego: |
    package policy 1
    import future.keywords.every

    default allow = false

    allow {
      every k, v in input {
        judge_field(k, v)
      }
    }

    judge_field(input_key, input_value) {
      has_key(data.reference, input_key)
      reference_value := data.reference[input_key]
      match_value(reference_value, input_value)
    }

    judge_field(input_key, input_value) {
      not has_key(data.reference, input_key)
    }

    match_value(reference_value, input_value) {
      not is_array(reference_value)
      input_value == reference_value
    }

    match_value(reference_value, input_value) {
      is_array(reference_value)
      array_include(reference_value, input_value)
    }

    array_include(reference_value_array, input_value) {
      reference_value_array == []
    }

    array_include(reference_value_array, input_value) {
      reference_value_array != []
      some i
      reference_value_array[i] == input_value
    }

    has_key(m, k) {
      _ = m[k]
    }
1. The attestation policy follows the Open Policy Agent specification. In this example, the attestation policy compares the claims provided in the attestation report to the reference values registered in the RVPS database. The attestation process is successful only if all the values match.
Create the attestation policy config map by running the following command:
$ oc apply -f attestation-policy.yaml
If your TEE is Intel TDX, create a tdx-config.yaml manifest file:

apiVersion: v1
kind: ConfigMap
metadata:
  name: tdx-config
  namespace: trustee-operator-system
data:
  sgx_default_qcnl.conf: |
    {
      "collateral_service": "https://api.trustedservices.intel.com/sgx/certification/v4/",
      "pccs_url": "<pccs_url>" 1
    }
1. Specify the PCCS URL, for example, https://localhost:8081/sgx/certification/v4/.
Create the TDX config map by running the following command:
$ oc apply -f tdx-config.yaml
4.4.10. Creating the KbsConfig custom resource
You must create the KbsConfig custom resource (CR) to launch Trustee.
Then, you check the Trustee pods and pod logs to verify the configuration.
Procedure
Create a kbsconfig-cr.yaml manifest file:

apiVersion: confidentialcontainers.org/v1alpha1
kind: KbsConfig
metadata:
  labels:
    app.kubernetes.io/name: kbsconfig
    app.kubernetes.io/instance: kbsconfig
    app.kubernetes.io/part-of: trustee-operator
    app.kubernetes.io/managed-by: kustomize
    app.kubernetes.io/created-by: trustee-operator
  name: kbsconfig
  namespace: trustee-operator-system
spec:
  kbsConfigMapName: kbs-config-cm
  kbsAuthSecretName: kbs-auth-public-key
  kbsDeploymentType: AllInOneDeployment
  kbsRvpsRefValuesConfigMapName: rvps-reference-values
  kbsSecretResources: ["kbsres1"]
  kbsResourcePolicyConfigMapName: resource-policy
Create the KbsConfig CR by running the following command:

$ oc apply -f kbsconfig-cr.yaml
Verification
Set the default project by running the following command:
$ oc project trustee-operator-system
Check the pods by running the following command:
$ oc get pods -n trustee-operator-system
Example output
NAME                                                   READY   STATUS    RESTARTS   AGE
trustee-deployment-8585f98449-9bbgl                    1/1     Running   0          22m
trustee-operator-controller-manager-5fbd44cd97-55dlh   2/2     Running   0          59m
Set the POD_NAME environment variable by running the following command:

$ POD_NAME=$(oc get pods -l app=kbs -o jsonpath='{.items[0].metadata.name}' -n trustee-operator-system)
Check the pod logs by running the following command:
$ oc logs -n trustee-operator-system $POD_NAME
Example output
[2024-05-30T13:44:24Z INFO  kbs] Using config file /etc/kbs-config/kbs-config.json
[2024-05-30T13:44:24Z WARN  attestation_service::rvps] No RVPS address provided and will launch a built-in rvps
[2024-05-30T13:44:24Z INFO  attestation_service::token::simple] No Token Signer key in config file, create an ephemeral key and without CA pubkey cert
[2024-05-30T13:44:24Z INFO  api_server] Starting HTTPS server at [0.0.0.0:8080]
[2024-05-30T13:44:24Z INFO  actix_server::builder] starting 12 workers
[2024-05-30T13:44:24Z INFO  actix_server::server] Tokio runtime found; starting in existing Tokio runtime
4.4.11. Verifying the attestation process
You can verify the attestation process by creating a test pod and retrieving its secret.
This procedure is an example to verify that attestation is working. Do not write sensitive data to standard I/O because the data can be captured by using a memory dump. Only data written to memory is encrypted.
By default, an agent-side policy embedded in the pod VM image disables the exec and log APIs for a Confidential Containers pod. This policy ensures that sensitive data is not written to standard I/O.
In a test scenario, you can override the restriction at runtime by adding a policy annotation to the pod. For Technology Preview, runtime policy annotations are not verified by remote attestation.
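The policy annotation value shown in the following procedure is a base64-encoded Rego agent policy. As a sketch, assuming GNU coreutils base64 and a local policy.rego file, you can decode the annotation value to inspect the policy, or encode an edited policy for use in the annotation:

$ echo "<base64_policy_value>" | base64 -d > policy.rego  # decode for inspection
$ base64 -w0 policy.rego                                  # encode an edited policy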
Prerequisites
- You have created a route if the Trustee server and the test pod are not running in the same cluster.
Procedure
Create a verification-pod.yaml manifest file:

apiVersion: v1
kind: Pod
metadata:
  name: ocp-cc-pod
  labels:
    app: ocp-cc-pod
  annotations:
    io.katacontainers.config.agent.policy: cGFja2FnZSBhZ2VudF9wb2xpY3kKCmRlZmF1bHQgQWRkQVJQTmVpZ2hib3JzUmVxdWVzdCA6PSB0cnVlCmRlZmF1bHQgQWRkU3dhcFJlcXVlc3QgOj0gdHJ1ZQpkZWZhdWx0IENsb3NlU3RkaW5SZXF1ZXN0IDo9IHRydWUKZGVmYXVsdCBDb3B5RmlsZVJlcXVlc3QgOj0gdHJ1ZQpkZWZhdWx0IENyZWF0ZUNvbnRhaW5lclJlcXVlc3QgOj0gdHJ1ZQpkZWZhdWx0IENyZWF0ZVNhbmRib3hSZXF1ZXN0IDo9IHRydWUKZGVmYXVsdCBEZXN0cm95U2FuZGJveFJlcXVlc3QgOj0gdHJ1ZQpkZWZhdWx0IEV4ZWNQcm9jZXNzUmVxdWVzdCA6PSB0cnVlCmRlZmF1bHQgR2V0TWV0cmljc1JlcXVlc3QgOj0gdHJ1ZQpkZWZhdWx0IEdldE9PTUV2ZW50UmVxdWVzdCA6PSB0cnVlCmRlZmF1bHQgR3Vlc3REZXRhaWxzUmVxdWVzdCA6PSB0cnVlCmRlZmF1bHQgTGlzdEludGVyZmFjZXNSZXF1ZXN0IDo9IHRydWUKZGVmYXVsdCBMaXN0Um91dGVzUmVxdWVzdCA6PSB0cnVlCmRlZmF1bHQgTWVtSG90cGx1Z0J5UHJvYmVSZXF1ZXN0IDo9IHRydWUKZGVmYXVsdCBPbmxpbmVDUFVNZW1SZXF1ZXN0IDo9IHRydWUKZGVmYXVsdCBQYXVzZUNvbnRhaW5lclJlcXVlc3QgOj0gdHJ1ZQpkZWZhdWx0IFB1bGxJbWFnZVJlcXVlc3QgOj0gdHJ1ZQpkZWZhdWx0IFJlYWRTdHJlYW1SZXF1ZXN0IDo9IHRydWUKZGVmYXVsdCBSZW1vdmVDb250YWluZXJSZXF1ZXN0IDo9IHRydWUKZGVmYXVsdCBSZW1vdmVTdGFsZVZpcnRpb2ZzU2hhcmVNb3VudHNSZXF1ZXN0IDo9IHRydWUKZGVmYXVsdCBSZXNlZWRSYW5kb21EZXZSZXF1ZXN0IDo9IHRydWUKZGVmYXVsdCBSZXN1bWVDb250YWluZXJSZXF1ZXN0IDo9IHRydWUKZGVmYXVsdCBTZXRHdWVzdERhdGVUaW1lUmVxdWVzdCA6PSB0cnVlCmRlZmF1bHQgU2V0UG9saWN5UmVxdWVzdCA6PSB0cnVlCmRlZmF1bHQgU2lnbmFsUHJvY2Vzc1JlcXVlc3QgOj0gdHJ1ZQpkZWZhdWx0IFN0YXJ0Q29udGFpbmVyUmVxdWVzdCA6PSB0cnVlCmRlZmF1bHQgU3RhcnRUcmFjaW5nUmVxdWVzdCA6PSB0cnVlCmRlZmF1bHQgU3RhdHNDb250YWluZXJSZXF1ZXN0IDo9IHRydWUKZGVmYXVsdCBTdG9wVHJhY2luZ1JlcXVlc3QgOj0gdHJ1ZQpkZWZhdWx0IFR0eVdpblJlc2l6ZVJlcXVlc3QgOj0gdHJ1ZQpkZWZhdWx0IFVwZGF0ZUNvbnRhaW5lclJlcXVlc3QgOj0gdHJ1ZQpkZWZhdWx0IFVwZGF0ZUVwaGVtZXJhbE1vdW50c1JlcXVlc3QgOj0gdHJ1ZQpkZWZhdWx0IFVwZGF0ZUludGVyZmFjZVJlcXVlc3QgOj0gdHJ1ZQpkZWZhdWx0IFVwZGF0ZVJvdXRlc1JlcXVlc3QgOj0gdHJ1ZQpkZWZhdWx0IFdhaXRQcm9jZXNzUmVxdWVzdCA6PSB0cnVlCmRlZmF1bHQgV3JpdGVTdHJlYW1SZXF1ZXN0IDo9IHRydWUK 1
spec:
  runtimeClassName: kata-remote
  containers:
    - name: skr-openshift
      image: registry.access.redhat.com/ubi9/ubi:9.3
      command:
        - sleep
        - "36000"
      securityContext:
        privileged: false
        seccompProfile:
          type: RuntimeDefault
1. This pod annotation overrides the policy that prevents sensitive data from being written to standard I/O.
Create the pod by running the following command:
$ oc create -f verification-pod.yaml
Connect to the Bash shell of the ocp-cc-pod by running the following command:

$ oc exec -it ocp-cc-pod -- bash
Fetch the pod secret by running the following command:
$ curl http://127.0.0.1:8006/cdh/resource/default/kbsres1/key1
Example output
res1val1
The Trustee server returns the secret only if the attestation is successful.