Chapter 5. Deploying OpenShift sandboxed containers on Google Cloud
You can deploy OpenShift sandboxed containers on Google Cloud.
OpenShift sandboxed containers deploys peer pods. The peer pod design avoids the need for nested virtualization. For more information, see peer pod and Peer pods technical deep dive.
Cluster requirements
- You have installed OpenShift Container Platform 4.17 or later on the cluster where you are installing the OpenShift sandboxed containers Operator for Google Cloud.
- Your cluster has at least one worker node.
For more information, see Installing on Google Cloud in the OpenShift Container Platform documentation.
5.1. Peer pod resource requirements
You must ensure that your cluster has sufficient resources.
Peer pod virtual machines (VMs) require resources in two locations:
- The worker node. The worker node stores metadata, Kata shim resources (containerd-shim-kata-v2), remote-hypervisor resources (cloud-api-adaptor), and the tunnel setup between the worker nodes and the peer pod VM.
- The cloud instance. This is the actual peer pod VM running in the cloud.

The CPU and memory resources used in the Kubernetes worker node are handled by the pod overhead included in the RuntimeClass (kata-remote) definition used for creating peer pods.

The total number of peer pod VMs running in the cloud is defined as Kubernetes Node extended resources. This limit is per node and is set by the PEERPODS_LIMIT_PER_NODE attribute in the peer-pods-cm config map.

The extended resource is named kata.peerpods.io/vm, and enables the Kubernetes scheduler to handle capacity tracking and accounting.
You can edit the limit per node based on the requirements for your environment after you install the OpenShift sandboxed containers Operator.
A mutating webhook adds the extended resource kata.peerpods.io/vm to the pod specification. It also removes any resource-specific entries from the pod specification, if present. This enables the Kubernetes scheduler to account for these extended resources, ensuring the peer pod is only scheduled when resources are available.
The mutating webhook modifies a Kubernetes pod as follows:
- The mutating webhook checks the pod for the expected RuntimeClassName value, specified in the TARGET_RUNTIME_CLASS environment variable. If the value in the pod specification does not match the value in TARGET_RUNTIME_CLASS, the webhook exits without modifying the pod.
- If the RuntimeClassName values match, the webhook makes the following changes to the pod spec:
  - The webhook removes every resource specification from the resources field of all containers and init containers in the pod.
  - The webhook adds the extended resource (kata.peerpods.io/vm) to the spec by modifying the resources field of the first container in the pod. The extended resource kata.peerpods.io/vm is used by the Kubernetes scheduler for accounting purposes.
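For illustration, a sketch of how a peer pod container might look after mutation; the container name is a placeholder, and the exact output depends on your webhook version:

spec:
  runtimeClassName: kata-remote
  containers:
  - name: my-container
    resources:
      requests:
        kata.peerpods.io/vm: "1"
      limits:
        kata.peerpods.io/vm: "1"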
The mutating webhook excludes specific system namespaces in OpenShift Container Platform from mutation. If a peer pod is created in those system namespaces, then resource accounting using Kubernetes extended resources does not work unless the pod spec includes the extended resource.
As a best practice, define a cluster-wide policy to only allow peer pod creation in specific namespaces.
5.2. Deploying OpenShift sandboxed containers by using the web console
You can deploy OpenShift sandboxed containers on Google Cloud by using the OpenShift Container Platform web console to perform the following tasks:
- Install the OpenShift sandboxed containers Operator.
- Optional: Enable port 15150 to allow internal communication with peer pods.
- Optional: Create the peer pods secret if you uninstalled the Cloud Credential Operator, which is installed with the OpenShift sandboxed containers Operator.
- Optional: Customize the Kata agent policy.
- Create the peer pods config map.
- Optional: Create the peer pod virtual machine (VM) image and VM image config map.
- Create the KataConfig custom resource.
- Configure the OpenShift sandboxed containers workload objects.
5.2.1. Installing the OpenShift sandboxed containers Operator
You can install the OpenShift sandboxed containers Operator by using the OpenShift Container Platform web console.
Prerequisites
- You have access to the cluster as a user with the cluster-admin role.
Procedure
- In the web console, navigate to Operators → OperatorHub.
- In the Filter by keyword field, type OpenShift sandboxed containers.
- Select the OpenShift sandboxed containers Operator tile and click Install.
- On the Install Operator page, select stable from the list of available Update Channel options.
- Verify that Operator recommended Namespace is selected for Installed Namespace. This installs the Operator in the mandatory openshift-sandboxed-containers-operator namespace. If this namespace does not yet exist, it is automatically created.

  Note: Attempting to install the OpenShift sandboxed containers Operator in a namespace other than openshift-sandboxed-containers-operator causes the installation to fail.

- Verify that Automatic is selected for Approval Strategy. Automatic is the default value, and enables automatic updates to OpenShift sandboxed containers when a new z-stream release is available.
- Click Install.
- Navigate to Operators → Installed Operators to verify that the Operator is installed.
5.2.2. Enabling port 15150 for Google Cloud
You must enable port 15150 on your OpenShift Container Platform cluster to allow internal communication with peer pods running on Compute Engine.
Prerequisites
- You have installed the Google Cloud command line interface (CLI) tool.
- You have access to the OpenShift Container Platform cluster as a user with the roles/container.admin role.
Procedure
Set the project ID variable by running the following command:

$ export GCP_PROJECT_ID="<project_id>"

Log in to Google Cloud by running the following command:

$ gcloud auth login

Set the Google Cloud project ID by running the following command:

$ gcloud config set project ${GCP_PROJECT_ID}

Open port 15150 by running the following command:

$ gcloud compute firewall-rules create allow-port-15150-restricted \
    --project=${GCP_PROJECT_ID} \
    --network=default \
    --allow=tcp:15150 \
    --source-ranges=<external_ip_cidr-1>[,<external_ip_cidr-2>,...] 1

1 Specify one or more IP addresses or ranges in CIDR format, separated by commas. For example, 203.0.113.5/32,198.51.100.0/24.

Verification

Verify that port 15150 is open by running the following command:

$ gcloud compute firewall-rules list
5.2.3. Creating the peer pods secret
When the peer pods secret is empty and the Cloud Credential Operator (CCO) is installed, the OpenShift sandboxed containers Operator uses the CCO to retrieve the secret. If you have uninstalled the CCO, you must create the peer pods secret for OpenShift sandboxed containers manually; otherwise, the peer pods fail to operate.
The secret stores credentials for creating the pod virtual machine (VM) image and peer pod instances.
By default, the OpenShift sandboxed containers Operator creates the secret based on the credentials used to create the cluster. However, you can manually create a secret that uses different credentials.
Prerequisites
- You have created a Google Cloud service account with permissions such as roles/compute.instanceAdmin.v1 to manage Compute Engine resources.
Procedure
- In the Google Cloud console, navigate to IAM & Admin → Service Accounts → Keys and save your key as a JSON file.
- Convert the JSON file to a single-line string by running the following command:

  $ cat <key_file>.json | jq -c .

- In the OpenShift Container Platform web console, navigate to Operators → Installed Operators.
- Click the OpenShift sandboxed containers Operator tile.
- Click the Import icon (+) in the top right corner.
In the Import YAML window, paste the following YAML manifest:
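A minimal sketch of the manifest, assuming the cloud-api-adaptor convention of a GCP_CREDENTIALS key for the Google Cloud credentials (verify the key name against your Operator version):

apiVersion: v1
kind: Secret
metadata:
  name: peer-pods-secret
  namespace: openshift-sandboxed-containers-operator
type: Opaque
stringData:
  GCP_CREDENTIALS: '<gc_service_account_key_json>' 1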
1 Replace <gc_service_account_key_json> with the single-line string you created from the Google Cloud service account key JSON file.
- Click Save to apply the changes.
- Navigate to Workloads → Secrets to verify the peer pods secret.
5.2.4. Creating the peer pods config map
You must create the peer pods config map for OpenShift sandboxed containers.
Procedure
Log in to Google Cloud to set the following environment variables:
Get the project ID by running the following command:

$ GCP_PROJECT_ID=$(gcloud config get-value project)

Get the zone by running the following command:

$ GCP_ZONE=$(gcloud config get-value compute/zone)

Retrieve a list of network names by running the following command:

$ gcloud compute networks list --format="value(name)"

Specify the network by running the following command:

$ GCP_NETWORK=<network_name> 1

1 Replace <network_name> with the name of the network.
- In the OpenShift Container Platform web console, navigate to Operators → Installed Operators.
- Select the OpenShift sandboxed containers Operator from the list of operators.
- Click the Import icon (+) in the top right corner.
In the Import YAML window, paste the following YAML manifest:
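A minimal sketch of the manifest, assuming the cloud-api-adaptor key names for Google Cloud; the machine type shown is only an example, and the TAGS key name is an assumption:

apiVersion: v1
kind: ConfigMap
metadata:
  name: peer-pods-cm
  namespace: openshift-sandboxed-containers-operator
data:
  CLOUD_PROVIDER: "gcp"
  GCP_PROJECT_ID: "<project_id>" 1
  GCP_ZONE: "<zone>" 2
  GCP_MACHINE_TYPE: "e2-medium" 3
  GCP_NETWORK: "<network_name>" 4
  PEERPODS_LIMIT_PER_NODE: "10" 5
  TAGS: "<key>:<value>,<key>:<value>" 6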
1 Specify the project ID that you want to use.
2 Specify the GCP_ZONE value that you retrieved. This zone runs the workload.
3 Specify the machine type that matches the requirements of your workload.
4 Specify the GCP_NETWORK value that you retrieved.
5 Specify the maximum number of peer pods that can be created per node. The default value is 10.
6 You can configure custom tags as key:value pairs for pod VM instances to track peer pod costs or to identify peer pods in different clusters.
- Click Save to apply the changes.
- Navigate to Workloads → ConfigMaps to view the new config map.
5.2.5. Creating the peer pod VM image
You must create a QCOW2 peer pod virtual machine (VM) image.
Prerequisites
- You have installed podman.
- You have access to a container registry.
Procedure
Clone the OpenShift sandboxed containers repository by running the following command:

$ git clone https://github.com/openshift/sandboxed-containers-operator.git

Navigate to sandboxed-containers-operator/config/peerpods/podvm/bootc by running the following command:

$ cd sandboxed-containers-operator/config/peerpods/podvm/bootc

Log in to registry.redhat.io by running the following command:

$ podman login registry.redhat.io

Note: You must log in to registry.redhat.io, because the podman build process must access the container image referenced in Containerfile.rhel, which is hosted on the registry.

Set the image path for your container registry by running the following command:

$ IMG="<container_registry_url>/<username>/podvm-bootc:latest"

Build the pod VM bootc image by running the following command:

$ podman build -t ${IMG} -f Containerfile.rhel .

Log in to your container registry by running the following command:

$ podman login <container_registry_url>

Push the image to your container registry by running the following command:

$ podman push ${IMG}

For testing and development, you can make the image public.

Verify the podvm-bootc image by running the following command:

$ podman images

Example output

REPOSITORY                             TAG      IMAGE ID       CREATED         SIZE
example.com/example_user/podvm-bootc   latest   88ddab975a07   2 seconds ago   1.82 GB
5.2.6. Creating the peer pod VM image config map
Create a config map for the pod virtual machine (VM) image.
Procedure
- In the OpenShift Container Platform web console, navigate to Operators → Installed Operators.
- Select the OpenShift sandboxed containers Operator from the list of operators.
- Click the Import icon (+) in the top right corner.
In the Import YAML window, paste the following YAML manifest:
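The schema of this config map is Operator-specific; the following is a sketch only, with hypothetical key names that point at the podvm-bootc image you pushed to your registry:

apiVersion: v1
kind: ConfigMap
metadata:
  name: gc-podvm-image-cm
  namespace: openshift-sandboxed-containers-operator
data:
  # Hypothetical keys; check the keys your Operator version expects.
  IMAGE_TYPE: pre-built
  PODVM_IMAGE_URI: <container_registry_url>/<username>/podvm-bootc:latest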
- Click Save to apply the changes.
- Navigate to Workloads → ConfigMaps to view the new config map.
5.2.7. Customizing the Kata agent policy
The Kata agent policy is a security mechanism that controls agent API requests for pods running with the Kata runtime. Written in Rego and enforced by the Kata agent within the pod virtual machine (VM), this policy determines which operations are allowed or denied.
You can override the default policy with a custom one for specific use cases, such as development and testing where security is not a concern. For example, you might run in an environment where the control plane can be trusted. You can apply a custom policy in several ways:
- Embedding it in the pod VM image.
- Patching the peer pods config map.
- Adding an annotation to the workload pod YAML.
For production systems, the preferred method is to use initdata to override the Kata agent policy. The following procedure applies a custom policy to an individual pod by using the io.katacontainers.config.agent.policy annotation. The policy is provided in Base64-encoded Rego format. This approach overrides the default policy at pod creation without modifying the pod VM image.
A custom policy replaces the default policy entirely. To modify only specific APIs, include the full policy and adjust the relevant rules.
Procedure
Create a policy.rego file with your custom policy. The following example shows all configurable APIs, with exec and log enabled for demonstration:
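A sketch of such a policy, based on the upstream Kata agent policy API list (the exact set of request types can vary between releases):

package agent_policy

# Each rule allows (true) or denies (false) one agent API.
# exec (ExecProcessRequest) and log (ReadStreamRequest) are enabled
# for demonstration.
default AddARPNeighborsRequest := true
default AddSwapRequest := true
default CloseStdinRequest := false
default CopyFileRequest := true
default CreateContainerRequest := true
default CreateSandboxRequest := true
default DestroySandboxRequest := true
default ExecProcessRequest := true
default GetOOMEventRequest := true
default GuestDetailsRequest := true
default ListInterfacesRequest := false
default ListRoutesRequest := false
default MemHotplugByProbeRequest := false
default OnlineCPUMemRequest := true
default PauseContainerRequest := false
default PullImageRequest := true
default ReadStreamRequest := true
default RemoveContainerRequest := true
default RemoveStaleVirtiofsShareMountsRequest := true
default ReseedRandomDevRequest := false
default ResumeContainerRequest := false
default SetGuestDateTimeRequest := false
default SetPolicyRequest := false
default SignalProcessRequest := true
default StartContainerRequest := true
default StartTracingRequest := false
default StatsContainerRequest := true
default StopTracingRequest := false
default TtyWinResizeRequest := true
default UpdateContainerRequest := false
default UpdateEphemeralMountsRequest := false
default UpdateInterfaceRequest := true
default UpdateRoutesRequest := true
default WaitProcessRequest := true
default WriteStreamRequest := true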
This policy enables the exec (ExecProcessRequest) and log (ReadStreamRequest) APIs. Adjust the true or false values to customize the policy further based on your needs.

Convert the policy.rego file to a Base64-encoded string by running the following command:

$ base64 -w0 policy.rego

Save the output for use in the YAML file.
- In the OpenShift Container Platform web console, navigate to Operators → Installed Operators.
- Select the OpenShift sandboxed containers Operator from the list of operators.
- Click the Import icon (+) in the top right corner.
In the Import YAML window, paste the following YAML manifest and add the Base64-encoded policy to it:
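A sketch of a pod that carries the policy annotation; the pod name, image, and command are illustrative placeholders:

apiVersion: v1
kind: Pod
metadata:
  name: <pod_name>
  annotations:
    io.katacontainers.config.agent.policy: <base64_encoded_policy>
spec:
  runtimeClassName: kata-remote
  containers:
  - name: <container_name>
    image: registry.access.redhat.com/ubi9/ubi
    command: ["sleep", "36000"]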
- Click Save to apply the changes.
5.2.8. Creating the KataConfig custom resource
You must create the KataConfig custom resource (CR) to install kata-remote as a RuntimeClass on your worker nodes.

The kata-remote runtime class is installed on all worker nodes by default. If you want to install kata-remote on specific nodes, you can add labels to those nodes and then define the label in the KataConfig CR.

OpenShift sandboxed containers installs kata-remote as a secondary, optional runtime on the cluster and not as the primary runtime.
Creating the KataConfig CR automatically reboots the worker nodes. The reboot can take from 10 to more than 60 minutes. The following factors might increase the reboot time:
- A larger OpenShift Container Platform deployment with a greater number of worker nodes.
- Activation of the BIOS and Diagnostics utility.
- Deployment on a hard disk drive rather than an SSD.
- Deployment on physical nodes such as bare metal, rather than on virtual nodes.
- A slow CPU and network.
Prerequisites
- You have access to the cluster as a user with the cluster-admin role.
- Optional: You have installed the Node Feature Discovery Operator if you want to enable node eligibility checks.
Procedure
- In the OpenShift Container Platform web console, navigate to Operators → Installed Operators.
- Select the OpenShift sandboxed containers Operator.
- On the KataConfig tab, click Create KataConfig.
- Enter the following details:
  - Name: Optional: The default name is example-kataconfig.
  - Labels: Optional: Enter any relevant, identifying attributes to the KataConfig resource. Each label represents a key-value pair.
  - enablePeerPods: Select for public cloud, IBM Z®, and IBM® LinuxONE deployments.
  - kataConfigPoolSelector: Optional: To install kata-remote on selected nodes, add a match expression for the labels on the selected nodes:
    - Expand the kataConfigPoolSelector area.
    - In the kataConfigPoolSelector area, expand matchExpressions. This is a list of label selector requirements.
    - Click Add matchExpressions.
    - In the Key field, enter the label key the selector applies to.
    - In the Operator field, enter the key’s relationship to the label values. Valid operators are In, NotIn, Exists, and DoesNotExist.
    - Expand the Values area and then click Add value.
    - In the Value field, enter true or false for the key label value.
  - logLevel: Define the level of log data retrieved for nodes with the kata-remote runtime class.
- Click Create. The KataConfig CR is created and installs the kata-remote runtime class on the worker nodes.

Wait for the kata-remote installation to complete and the worker nodes to reboot before verifying the installation.
Verification
- On the KataConfig tab, click the KataConfig CR to view its details.
- Click the YAML tab to view the status stanza.

  The status stanza contains the conditions and kataNodes keys. The value of status.kataNodes is an array of nodes, each of which lists nodes in a particular state of kata-remote installation. A message appears each time there is an update.

- Click Reload to refresh the YAML.

  When all workers in the status.kataNodes array display the values installed and conditions.InProgress: False with no specified reason, kata-remote is installed on the cluster.
Verifying the pod VM image
After kata-remote is installed on your cluster, the OpenShift sandboxed containers Operator creates a pod VM image, which is used to create peer pods. This process can take a long time because the image is created on the cloud instance. You can verify that the pod VM image was created successfully by checking the config map that you created for the cloud provider.
Procedure
- Navigate to Workloads → ConfigMaps.
- Click the provider config map to view its details.
- Click the YAML tab.
- Check the status stanza of the YAML file.

  If the PODVM_IMAGE_NAME parameter is populated, the pod VM image was created successfully.
Troubleshooting
Retrieve the events log by running the following command:

$ oc get events -n openshift-sandboxed-containers-operator --field-selector involvedObject.name=osc-podvm-image-creation

Retrieve the job log by running the following command:

$ oc logs -n openshift-sandboxed-containers-operator jobs/osc-podvm-image-creation
If you cannot resolve the issue, submit a Red Hat Support case and attach the output of both logs.
5.2.9. Configuring workload objects
You must configure OpenShift sandboxed containers workload objects by setting kata-remote as the runtime class for the following pod-templated objects:
- Pod objects
- ReplicaSet objects
- ReplicationController objects
- StatefulSet objects
- Deployment objects
- DeploymentConfig objects
Do not deploy workloads in an Operator namespace. Create a dedicated namespace for these resources.
Prerequisites
- You have created the KataConfig custom resource (CR).
Procedure
Add spec.runtimeClassName: kata-remote to the manifest of each pod-templated workload object as in the following example:
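A sketch using a simple Pod object; the image is an illustrative placeholder. For templated controllers such as Deployment, set the field under spec.template.spec instead:

apiVersion: v1
kind: Pod
metadata:
  name: hello-openshift
spec:
  runtimeClassName: kata-remote
  containers:
  - name: hello-openshift
    image: quay.io/openshift/origin-hello-openshift
    ports:
    - containerPort: 8888

OpenShift Container Platform creates the workload object and begins scheduling it.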
Verification
- Inspect the spec.runtimeClassName field of a pod-templated object. If the value is kata-remote, then the workload is running on OpenShift sandboxed containers, using peer pods.
5.3. Deploying OpenShift sandboxed containers by using the command line
You can deploy OpenShift sandboxed containers on Google Cloud by using the command line interface (CLI) to perform the following tasks:
- Install the OpenShift sandboxed containers Operator.
- Optional: Enable port 15150 to allow internal communication with peer pods.
- Optional: Create the peer pods secret if you uninstalled the Cloud Credential Operator, which is installed with the OpenShift sandboxed containers Operator.
- Create the peer pods config map.
- Create the pod VM image config map.
- Optional: Customize the Kata agent policy.
- Create the KataConfig custom resource.
- Optional: Modify the number of virtual machines running on each worker node.
- Configure the OpenShift sandboxed containers workload objects.
5.3.1. Installing the OpenShift sandboxed containers Operator
You can install the OpenShift sandboxed containers Operator by using the CLI.
Prerequisites
- You have installed the OpenShift CLI (oc).
- You have access to the cluster as a user with the cluster-admin role.
Procedure
Create an osc-namespace.yaml manifest file:

apiVersion: v1
kind: Namespace
metadata:
  name: openshift-sandboxed-containers-operator

Create the namespace by running the following command:

$ oc apply -f osc-namespace.yaml
Create an osc-operatorgroup.yaml manifest file:
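A minimal sketch of the operator group, scoped to the Operator namespace; the metadata.name shown is an arbitrary choice:

apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: sandboxed-containers-operator-group
  namespace: openshift-sandboxed-containers-operator
spec:
  targetNamespaces:
  - openshift-sandboxed-containers-operator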
Create the operator group by running the following command:

$ oc apply -f osc-operatorgroup.yaml

Create an osc-subscription.yaml manifest file:
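A minimal sketch of the subscription. The stable channel and Automatic approval match the web console defaults described earlier; the catalog source and package name are assumptions based on the usual Red Hat operator catalog:

apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: openshift-sandboxed-containers-operator
  namespace: openshift-sandboxed-containers-operator
spec:
  channel: stable
  installPlanApproval: Automatic
  name: sandboxed-containers-operator
  source: redhat-operators
  sourceNamespace: openshift-marketplace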
Create the subscription by running the following command:

$ oc apply -f osc-subscription.yaml

Verify that the Operator is correctly installed by running the following command:
$ oc get csv -n openshift-sandboxed-containers-operator

This command can take several minutes to complete.
Watch the process by running the following command:

$ watch oc get csv -n openshift-sandboxed-containers-operator

Example output

NAME                             DISPLAY                                    VERSION   REPLACES   PHASE
openshift-sandboxed-containers   openshift-sandboxed-containers-operator    1.9.0     1.8.1      Succeeded
5.3.2. Enabling port 15150 for Google Cloud
You must enable port 15150 on your OpenShift Container Platform cluster to allow internal communication with peer pods running on Compute Engine.
Prerequisites
- You have installed the Google Cloud command line interface (CLI) tool.
- You have access to the OpenShift Container Platform cluster as a user with the roles/container.admin role.
Procedure
Set the project ID variable by running the following command:

$ export GCP_PROJECT_ID="<project_id>"

Log in to Google Cloud by running the following command:

$ gcloud auth login

Set the Google Cloud project ID by running the following command:

$ gcloud config set project ${GCP_PROJECT_ID}

Open port 15150 by running the following command:

$ gcloud compute firewall-rules create allow-port-15150-restricted \
    --project=${GCP_PROJECT_ID} \
    --network=default \
    --allow=tcp:15150 \
    --source-ranges=<external_ip_cidr-1>[,<external_ip_cidr-2>,...] 1

1 Specify one or more IP addresses or ranges in CIDR format, separated by commas. For example, 203.0.113.5/32,198.51.100.0/24.

Verification

Verify that port 15150 is open by running the following command:

$ gcloud compute firewall-rules list
5.3.3. Creating the peer pods secret
When the peer pods secret is empty and the Cloud Credential Operator (CCO) is installed, the OpenShift sandboxed containers Operator uses the CCO to retrieve the secret. If you have uninstalled the CCO, you must create the peer pods secret for OpenShift sandboxed containers manually; otherwise, the peer pods fail to operate.
The secret stores credentials for creating the pod virtual machine (VM) image and peer pod instances.
By default, the OpenShift sandboxed containers Operator creates the secret based on the credentials used to create the cluster. However, you can manually create a secret that uses different credentials.
Prerequisites
- You have created a Google Cloud service account with permissions such as roles/compute.instanceAdmin.v1 to manage Compute Engine resources.
- You have installed the Google Cloud SDK (gcloud) and authenticated it with your service account.
Procedure
Create a Google Cloud service account key and save it as a JSON file by running the following command:

$ gcloud iam service-accounts keys create <key_filename>.json \
    --iam-account=<service_account_email_address>

Convert the JSON file to a single-line string by running the following command:

$ cat <key_file>.json | jq -c .
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Create a
peer-pods-secret.yaml
manifest file according to the following example:Copy to Clipboard Copied! Toggle word wrap Toggle overflow - 1
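A minimal sketch of the manifest, assuming the cloud-api-adaptor convention of a GCP_CREDENTIALS key for the Google Cloud credentials (verify the key name against your Operator version):

apiVersion: v1
kind: Secret
metadata:
  name: peer-pods-secret
  namespace: openshift-sandboxed-containers-operator
type: Opaque
stringData:
  GCP_CREDENTIALS: '<gc_service_account_key_json>' 1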
1 Replace <gc_service_account_key_json> with the single-line string you created from the Google Cloud service account key JSON file.
Create the secret by running the following command:
$ oc apply -f peer-pods-secret.yaml
5.3.4. Creating the peer pods config map
You must create the peer pods config map for OpenShift sandboxed containers.
Procedure
Log in to Google Cloud to set the following environment variables:
Get the project ID by running the following command:

$ GCP_PROJECT_ID=$(gcloud config get-value project)

Get the zone by running the following command:

$ GCP_ZONE=$(gcloud config get-value compute/zone)

Retrieve a list of network names by running the following command:

$ gcloud compute networks list --format="value(name)"

Specify the network by running the following command:

$ GCP_NETWORK=<network_name> 1

1 Replace <network_name> with the name of the network.
Create a peer-pods-cm.yaml manifest file according to the following example:
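A minimal sketch of the manifest, assuming the cloud-api-adaptor key names for Google Cloud; the machine type shown is only an example, and the TAGS key name is an assumption:

apiVersion: v1
kind: ConfigMap
metadata:
  name: peer-pods-cm
  namespace: openshift-sandboxed-containers-operator
data:
  CLOUD_PROVIDER: "gcp"
  GCP_PROJECT_ID: "<project_id>" 1
  GCP_ZONE: "<zone>" 2
  GCP_MACHINE_TYPE: "e2-medium" 3
  GCP_NETWORK: "<network_name>" 4
  PEERPODS_LIMIT_PER_NODE: "10" 5
  TAGS: "<key>:<value>,<key>:<value>" 6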
1 Specify the project ID that you want to use.
2 Specify the GCP_ZONE value that you retrieved. This zone runs the workload.
3 Specify the machine type that matches the requirements of your workload.
4 Specify the GCP_NETWORK value that you retrieved.
5 Specify the maximum number of peer pods that can be created per node. The default value is 10.
6 You can configure custom tags as key:value pairs for pod VM instances to track peer pod costs or to identify peer pods in different clusters.
Create the config map by running the following command:
$ oc apply -f peer-pods-cm.yaml
5.3.5. Creating the peer pod VM image
You must create a QCOW2 peer pod virtual machine (VM) image.
Prerequisites
- You have installed podman.
- You have access to a container registry.
Procedure
Clone the OpenShift sandboxed containers repository by running the following command:

$ git clone https://github.com/openshift/sandboxed-containers-operator.git

Navigate to sandboxed-containers-operator/config/peerpods/podvm/bootc by running the following command:

$ cd sandboxed-containers-operator/config/peerpods/podvm/bootc

Log in to registry.redhat.io by running the following command:

$ podman login registry.redhat.io

Note: You must log in to registry.redhat.io, because the podman build process must access the container image referenced in Containerfile.rhel, which is hosted on the registry.

Set the image path for your container registry by running the following command:

$ IMG="<container_registry_url>/<username>/podvm-bootc:latest"

Build the pod VM bootc image by running the following command:

$ podman build -t ${IMG} -f Containerfile.rhel .

Log in to your container registry by running the following command:

$ podman login <container_registry_url>

Push the image to your container registry by running the following command:

$ podman push ${IMG}

For testing and development, you can make the image public.

Verify the podvm-bootc image by running the following command:

$ podman images

Example output

REPOSITORY                             TAG      IMAGE ID       CREATED         SIZE
example.com/example_user/podvm-bootc   latest   88ddab975a07   2 seconds ago   1.82 GB
5.3.6. Creating the peer pod VM image config map
Create a config map for the pod virtual machine (VM) image.
Procedure
Create a config map manifest for the pod VM image named gc-podvm-image-cm.yaml with the following content:
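The schema of this config map is Operator-specific; the following is a sketch only, with hypothetical key names that point at the podvm-bootc image you pushed to your registry:

apiVersion: v1
kind: ConfigMap
metadata:
  name: gc-podvm-image-cm
  namespace: openshift-sandboxed-containers-operator
data:
  # Hypothetical keys; check the keys your Operator version expects.
  IMAGE_TYPE: pre-built
  PODVM_IMAGE_URI: <container_registry_url>/<username>/podvm-bootc:latest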
Create the config map by running the following command:

$ oc apply -f gc-podvm-image-cm.yaml
5.3.7. Customizing the Kata agent policy
The Kata agent policy is a security mechanism that controls agent API requests for pods running with the Kata runtime. Written in Rego and enforced by the Kata agent within the pod virtual machine (VM), this policy determines which operations are allowed or denied.
You can override the default policy with a custom one for specific use cases, such as development and testing where security is not a concern. For example, you might run in an environment where the control plane can be trusted. You can apply a custom policy in several ways:
- Embedding it in the pod VM image.
- Patching the peer pods config map.
- Adding an annotation to the workload pod YAML.
For production systems, the preferred method is to use initdata to override the Kata agent policy. The following procedure applies a custom policy to an individual pod by using the io.katacontainers.config.agent.policy annotation. The policy is provided in Base64-encoded Rego format. This approach overrides the default policy at pod creation without modifying the pod VM image.
A custom policy replaces the default policy entirely. To modify only specific APIs, include the full policy and adjust the relevant rules.
Procedure
Create a policy.rego file with your custom policy. The following example shows all configurable APIs, with exec and log enabled for demonstration:
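A sketch of such a policy, based on the upstream Kata agent policy API list (the exact set of request types can vary between releases):

package agent_policy

# Each rule allows (true) or denies (false) one agent API.
# exec (ExecProcessRequest) and log (ReadStreamRequest) are enabled
# for demonstration.
default AddARPNeighborsRequest := true
default AddSwapRequest := true
default CloseStdinRequest := false
default CopyFileRequest := true
default CreateContainerRequest := true
default CreateSandboxRequest := true
default DestroySandboxRequest := true
default ExecProcessRequest := true
default GetOOMEventRequest := true
default GuestDetailsRequest := true
default ListInterfacesRequest := false
default ListRoutesRequest := false
default MemHotplugByProbeRequest := false
default OnlineCPUMemRequest := true
default PauseContainerRequest := false
default PullImageRequest := true
default ReadStreamRequest := true
default RemoveContainerRequest := true
default RemoveStaleVirtiofsShareMountsRequest := true
default ReseedRandomDevRequest := false
default ResumeContainerRequest := false
default SetGuestDateTimeRequest := false
default SetPolicyRequest := false
default SignalProcessRequest := true
default StartContainerRequest := true
default StartTracingRequest := false
default StatsContainerRequest := true
default StopTracingRequest := false
default TtyWinResizeRequest := true
default UpdateContainerRequest := false
default UpdateEphemeralMountsRequest := false
default UpdateInterfaceRequest := true
default UpdateRoutesRequest := true
default WaitProcessRequest := true
default WriteStreamRequest := true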
This policy enables the exec (ExecProcessRequest) and log (ReadStreamRequest) APIs. Adjust the true or false values to customize the policy further based on your needs.

Convert the policy.rego file to a Base64-encoded string by running the following command:

$ base64 -w0 policy.rego

Save the output for use in the YAML file.
Add the Base64-encoded policy to a my-pod.yaml pod specification file:
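A sketch of such a pod specification; the image and command are illustrative placeholders:

apiVersion: v1
kind: Pod
metadata:
  name: my-pod
  annotations:
    io.katacontainers.config.agent.policy: <base64_encoded_policy>
spec:
  runtimeClassName: kata-remote
  containers:
  - name: my-container
    image: registry.access.redhat.com/ubi9/ubi
    command: ["sleep", "36000"]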
Apply the pod manifest by running the following command:

$ oc apply -f my-pod.yaml
5.3.8. Creating the KataConfig custom resource
You must create the KataConfig custom resource (CR) to install kata-remote as a runtime class on your worker nodes.

Creating the KataConfig CR triggers the OpenShift sandboxed containers Operator to do the following:

- Create a RuntimeClass CR named kata-remote with a default configuration. This enables users to configure workloads to use kata-remote as the runtime by referencing the CR in the RuntimeClassName field. This CR also specifies the resource overhead for the runtime.
OpenShift sandboxed containers installs kata-remote as a secondary, optional runtime on the cluster and not as the primary runtime.

Creating the KataConfig CR automatically reboots the worker nodes. The reboot can take from 10 to more than 60 minutes. The following factors might increase the reboot time:
- A larger OpenShift Container Platform deployment with a greater number of worker nodes.
- Activation of the BIOS and Diagnostics utility.
- Deployment on a hard disk drive rather than an SSD.
- Deployment on physical nodes such as bare metal, rather than on virtual nodes.
- A slow CPU and network.
Prerequisites
- You have access to the cluster as a user with the cluster-admin role.
Procedure
Create an example-kataconfig.yaml manifest file according to the following example:
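A minimal sketch; enablePeerPods and the pool selector follow the KataConfig fields described in this chapter:

apiVersion: kataconfiguration.openshift.io/v1
kind: KataConfig
metadata:
  name: example-kataconfig
spec:
  enablePeerPods: true
  logLevel: info
  # kataConfigPoolSelector:
  #   matchLabels:
  #     osc: 'true' 1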
1 Optional: If you have applied node labels to install kata-remote on specific nodes, specify the key and value, for example, osc: 'true'.
Create the KataConfig CR by running the following command:

$ oc apply -f example-kataconfig.yaml

The new KataConfig CR is created and installs kata-remote as a runtime class on the worker nodes.

Wait for the kata-remote installation to complete and the worker nodes to reboot before verifying the installation.

Monitor the installation progress by running the following command:
$ watch "oc describe kataconfig | sed -n /^Status:/,/^Events/p"
When the status of all workers under kataNodes is installed and the condition InProgress is False without a specified reason, kata-remote is installed on the cluster.

Verify the daemon set by running the following command:
$ oc get -n openshift-sandboxed-containers-operator ds/osc-caa-ds

Verify the runtime classes by running the following command:
$ oc get runtimeclass

Example output

NAME          HANDLER       AGE
kata          kata          152m
kata-remote   kata-remote   152m
5.3.9. Modifying the number of peer pod VMs per node
You can modify the limit of peer pod virtual machines (VMs) per node by editing the peerpodConfig custom resource (CR).
Procedure
Check the current limit by running the following command:
$ oc get peerpodconfig peerpodconfig-openshift -n openshift-sandboxed-containers-operator \
    -o jsonpath='{.spec.limit}{"\n"}'

Modify the limit attribute of the peerpodConfig CR by running the following command:

$ oc patch peerpodconfig peerpodconfig-openshift -n openshift-sandboxed-containers-operator \
    --type merge --patch '{"spec":{"limit":"<value>"}}' 1

1 Replace <value> with the limit you want to define.
Verifying the pod VM image
After kata-remote is installed on your cluster, the OpenShift sandboxed containers Operator creates a pod VM image, which is used to create peer pods. This process can take a long time because the image is created on the cloud instance. You can verify that the pod VM image was created successfully by checking the config map that you created for the cloud provider.
Procedure
Obtain the config map you created for the peer pods:
$ oc get configmap peer-pods-cm -n openshift-sandboxed-containers-operator -o yaml

Check the status stanza of the YAML file.

If the PODVM_IMAGE_NAME parameter is populated, the pod VM image was created successfully.
Troubleshooting
Retrieve the events log by running the following command:

$ oc get events -n openshift-sandboxed-containers-operator --field-selector involvedObject.name=osc-podvm-image-creation

Retrieve the job log by running the following command:

$ oc logs -n openshift-sandboxed-containers-operator jobs/osc-podvm-image-creation
If you cannot resolve the issue, submit a Red Hat Support case and attach the output of both logs.
5.3.10. Configuring workload objects
You must configure OpenShift sandboxed containers workload objects by setting kata-remote as the runtime class for the following pod-templated objects:
- Pod objects
- ReplicaSet objects
- ReplicationController objects
- StatefulSet objects
- Deployment objects
- DeploymentConfig objects
Do not deploy workloads in an Operator namespace. Create a dedicated namespace for these resources.
Prerequisites
- You have created the KataConfig custom resource (CR).
Procedure
Add spec.runtimeClassName: kata-remote to the manifest of each pod-templated workload object as in the following example:
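A sketch using a simple Pod object; the image is an illustrative placeholder. For templated controllers such as Deployment, set the field under spec.template.spec instead:

apiVersion: v1
kind: Pod
metadata:
  name: hello-openshift
spec:
  runtimeClassName: kata-remote
  containers:
  - name: hello-openshift
    image: quay.io/openshift/origin-hello-openshift
    ports:
    - containerPort: 8888

OpenShift Container Platform creates the workload object and begins scheduling it.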
Verification
- Inspect the spec.runtimeClassName field of a pod-templated object. If the value is kata-remote, then the workload is running on OpenShift sandboxed containers, using peer pods.