Chapter 2. Deploying OpenShift sandboxed containers on bare metal


You can deploy OpenShift sandboxed containers on bare metal.

You deploy OpenShift sandboxed containers by performing the following steps:

  1. Install the OpenShift sandboxed containers Operator on the OpenShift Container Platform cluster.
  2. Optional: Install the Local Storage Operator to configure a local block storage device.
  3. Optional: Install the Node Feature Discovery (NFD) Operator to configure node eligibility checks.
  4. Create the KataConfig custom resource.
  5. Optional: Modify the number of virtual machines running on each worker node.
  6. Optional: Modify the pod overhead.
  7. Configure your workload for OpenShift sandboxed containers.

2.1. Prerequisites

  • You have installed Red Hat OpenShift Container Platform 4.16 or later.
  • Your OpenShift Container Platform cluster has at least one worker node.
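
You can confirm both prerequisites from the command line. The following commands are a quick check only and are not part of the installation procedure:

    $ oc version
    $ oc get nodes -l node-role.kubernetes.io/worker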

2.2. Installing the OpenShift sandboxed containers Operator

You install the OpenShift sandboxed containers Operator by using the command line interface (CLI).

Prerequisites

  • You have access to the cluster as a user with the cluster-admin role.

Procedure

  1. Create an osc-namespace.yaml manifest file:

    apiVersion: v1
    kind: Namespace
    metadata:
      name: openshift-sandboxed-containers-operator
  2. Create the namespace by running the following command:

    $ oc apply -f osc-namespace.yaml
  3. Create an osc-operatorgroup.yaml manifest file:

    apiVersion: operators.coreos.com/v1
    kind: OperatorGroup
    metadata:
      name: sandboxed-containers-operator-group
      namespace: openshift-sandboxed-containers-operator
    spec:
      targetNamespaces:
      - openshift-sandboxed-containers-operator
  4. Create the operator group by running the following command:

    $ oc apply -f osc-operatorgroup.yaml
  5. Create an osc-subscription.yaml manifest file:

    apiVersion: operators.coreos.com/v1alpha1
    kind: Subscription
    metadata:
      name: sandboxed-containers-operator
      namespace: openshift-sandboxed-containers-operator
    spec:
      channel: stable
      installPlanApproval: Automatic
      name: sandboxed-containers-operator
      source: redhat-operators
      sourceNamespace: openshift-marketplace
      startingCSV: sandboxed-containers-operator.v1.10.1
  6. Create the subscription by running the following command:

    $ oc apply -f osc-subscription.yaml
  7. Verify that the Operator is correctly installed by running the following command:

    $ oc get csv -n openshift-sandboxed-containers-operator

    This command can take several minutes to complete.

  8. Watch the process by running the following command:

    $ watch oc get csv -n openshift-sandboxed-containers-operator

    Example output

    NAME                             DISPLAY                                    VERSION   REPLACES   PHASE
    openshift-sandboxed-containers   openshift-sandboxed-containers-operator   1.10.1    1.9.0      Succeeded
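
If the CSV does not reach the Succeeded phase, inspecting the Subscription and its install plan can help identify the cause. These are standard Operator Lifecycle Manager resources; the following commands are a troubleshooting sketch rather than a required step:

    $ oc get subscription sandboxed-containers-operator -n openshift-sandboxed-containers-operator -o yaml
    $ oc get installplan -n openshift-sandboxed-containers-operator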

2.3. Optional configurations

You can configure the following options after you install the OpenShift sandboxed containers Operator.

2.3.1. Provisioning local block volumes

You can use local block volumes with OpenShift sandboxed containers. You must first provision the local block volumes by using the Local Storage Operator (LSO). Then you must enable the nodes with the local block volumes to run OpenShift sandboxed containers workloads.

You can provision local block volumes for OpenShift sandboxed containers by using the Local Storage Operator (LSO). The local volume provisioner looks for any block volume devices at the paths specified in the defined resource.

Prerequisites

  • You have installed the Local Storage Operator.
  • You have a local disk that meets the following conditions:

    • It is attached to a node.
    • It is not mounted.
    • It does not contain partitions.

Procedure

  1. Create the local volume resource. This resource must define the nodes and paths to the local volumes.

    Note

    Do not use different storage class names for the same device. Doing so creates multiple persistent volumes (PVs).

    Example: Block

    apiVersion: "local.storage.openshift.io/v1"
    kind: "LocalVolume"
    metadata:
      name: "local-disks"
      namespace: "openshift-local-storage" 
    1
    
    spec:
      nodeSelector: 
    2
    
        nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
              - ip-10-0-136-143
              - ip-10-0-140-255
              - ip-10-0-144-180
      storageClassDevices:
        - storageClassName: "local-sc" 
    3
    
          forceWipeDevicesAndDestroyAllData: false 
    4
    
          volumeMode: Block
          devicePaths: 
    5
    
            - /path/to/device 
    6
    Copy to Clipboard Toggle word wrap

    1 The namespace where the Local Storage Operator is installed.
    2 Optional: A node selector containing a list of nodes where the local storage volumes are attached. This example uses the node hostnames, obtained from oc get node. If a value is not defined, the Local Storage Operator attempts to find matching disks on all available nodes.
    3 The name of the storage class to use when creating persistent volume objects.
    4 This setting defines whether to call wipefs, which removes partition table signatures (magic strings) so that the disk is ready for Local Storage Operator provisioning. No data other than signatures is erased. The default is false (wipefs is not invoked). Setting forceWipeDevicesAndDestroyAllData to true can be useful when previous data might remain on disks that need to be reused. In these scenarios, setting this field to true eliminates the need for administrators to erase the disks manually.
    5 The path containing a list of local storage devices to choose from. You must use this path when enabling a node with a local block device to run OpenShift sandboxed containers workloads.
    6 Replace this value with the file path to your device by-id, such as /dev/disk/by-id/wwn. PVs are created for these local disks when the provisioner is deployed successfully.
  2. Create the local volume resource in your OpenShift Container Platform cluster. Specify the file you just created:

    $ oc apply -f <local-volume>.yaml
  3. Verify that the provisioner was created and that the corresponding daemon sets were created:

    $ oc get all -n openshift-local-storage

    Example output

    NAME                                          READY   STATUS    RESTARTS   AGE
    pod/diskmaker-manager-9wzms                   1/1     Running   0          5m43s
    pod/diskmaker-manager-jgvjp                   1/1     Running   0          5m43s
    pod/diskmaker-manager-tbdsj                   1/1     Running   0          5m43s
    pod/local-storage-operator-7db4bd9f79-t6k87   1/1     Running   0          14m
    
    NAME                                     TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)             AGE
    service/local-storage-operator-metrics   ClusterIP   172.30.135.36   <none>        8383/TCP,8686/TCP   14m
    
    NAME                               DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
    daemonset.apps/diskmaker-manager   3         3         3       3            3           <none>          5m43s
    
    NAME                                     READY   UP-TO-DATE   AVAILABLE   AGE
    deployment.apps/local-storage-operator   1/1     1            1           14m
    
    NAME                                                DESIRED   CURRENT   READY   AGE
    replicaset.apps/local-storage-operator-7db4bd9f79   1         1         1       14m

    Note the desired and current number of daemon set processes. A desired count of 0 indicates that the label selectors were invalid.

  4. Verify that the persistent volumes were created:

    $ oc get pv

    Example output

    NAME                CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   REASON   AGE
    local-pv-1cec77cf   100Gi      RWO            Delete           Available           local-sc                88m
    local-pv-2ef7cd2a   100Gi      RWO            Delete           Available           local-sc                82m
    local-pv-3fa1c73    100Gi      RWO            Delete           Available           local-sc                48m

Important

Editing the LocalVolume object does not change existing persistent volumes because doing so might result in a destructive operation.
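
To consume one of these volumes from a workload, you typically create a persistent volume claim that references the storage class. The following manifest is an illustrative sketch only; the claim name, namespace, and requested size are placeholders:

    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: local-block-pvc        # placeholder name
      namespace: my-workloads      # placeholder namespace
    spec:
      accessModes:
        - ReadWriteOnce
      volumeMode: Block
      storageClassName: local-sc
      resources:
        requests:
          storage: 100Gi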

2.3.2. Enabling nodes to use a local block device

You can configure nodes with a local block device to run OpenShift sandboxed containers workloads at the paths specified in the defined volume resource.

Prerequisites

  • You provisioned a block device using the Local Storage Operator (LSO).

Procedure

  • Enable each node with a local block device to run OpenShift sandboxed containers workloads by running the following command:

    $ oc debug node/worker-0 -- chcon -vt container_file_t /host/path/to/device

    The /path/to/device must be the same path you defined when creating the local storage resource.

    Example output

    system_u:object_r:container_file_t:s0 /host/path/to/device
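
After the SELinux context is set, a sandboxed pod can attach the volume as a raw block device through a persistent volume claim. This is a minimal sketch only, assuming the placeholder local-block-pvc claim from the previous example and the kata runtime class created in Section 2.4:

    apiVersion: v1
    kind: Pod
    metadata:
      name: sandboxed-block-pod    # placeholder name
    spec:
      runtimeClassName: kata
      containers:
      - name: app
        image: registry.access.redhat.com/ubi9/ubi
        command: ["sleep", "infinity"]
        volumeDevices:
        - name: data
          devicePath: /dev/xvda    # device path presented inside the container; placeholder
      volumes:
      - name: data
        persistentVolumeClaim:
          claimName: local-block-pvc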

2.3.3. Creating the NodeFeatureDiscovery custom resource

You create a NodeFeatureDiscovery custom resource (CR) to define the configuration parameters that the Node Feature Discovery (NFD) Operator checks to determine that the worker nodes can support OpenShift sandboxed containers.

Note

To install the kata runtime on only selected worker nodes that you know are eligible, apply the feature.node.kubernetes.io/runtime.kata=true label to the selected nodes and set checkNodeEligibility: true in the KataConfig CR.

To install the kata runtime on all worker nodes, set checkNodeEligibility: false in the KataConfig CR.

In both these scenarios, you do not need to create the NodeFeatureDiscovery CR. You should only apply the feature.node.kubernetes.io/runtime.kata=true label manually if you are sure that the node is eligible to run OpenShift sandboxed containers.
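
For the manual approach, you can apply the label directly with a command of the following form, where <node_name> is a placeholder for a node that you have verified as eligible:

    $ oc label node <node_name> feature.node.kubernetes.io/runtime.kata=true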

The following procedure applies the feature.node.kubernetes.io/runtime.kata=true label to all eligible nodes and configures the KataConfig resource to check for node eligibility.

Prerequisites

  • You have installed the NFD Operator.

Procedure

  1. Create an nfd.yaml manifest file according to the following example:

    apiVersion: nfd.openshift.io/v1
    kind: NodeFeatureDiscovery
    metadata:
      name: nfd-kata
      namespace: openshift-nfd
    spec:
      workerConfig:
        configData: |
          sources:
            custom:
              - name: "feature.node.kubernetes.io/runtime.kata"
                matchOn:
                  - cpuId: ["SSE4", "VMX"]
                    loadedKMod: ["kvm", "kvm_intel"]
                  - cpuId: ["SSE4", "SVM"]
                    loadedKMod: ["kvm", "kvm_amd"]
    # ...
  2. Create the NodeFeatureDiscovery CR:

    $ oc create -f nfd.yaml

    The NodeFeatureDiscovery CR applies the feature.node.kubernetes.io/runtime.kata=true label to all qualifying worker nodes.

  3. Create a kata-config.yaml manifest file according to the following example:

    apiVersion: kataconfiguration.openshift.io/v1
    kind: KataConfig
    metadata:
      name: example-kataconfig
    spec:
      checkNodeEligibility: true
  4. Create the KataConfig CR:

    $ oc create -f kata-config.yaml

Verification

  • Verify that qualifying nodes in the cluster have the correct label applied:

    $ oc get nodes --selector='feature.node.kubernetes.io/runtime.kata=true'

    Example output

    NAME                           STATUS                     ROLES    AGE     VERSION
    compute-3.example.com          Ready                      worker   4h38m   v1.25.0
    compute-2.example.com          Ready                      worker   4h35m   v1.25.0

2.4. Creating the KataConfig custom resource

You must create the KataConfig custom resource (CR) to install kata as a runtime class on your worker nodes.

OpenShift sandboxed containers installs kata as a secondary, optional runtime on the cluster and not as the primary runtime.

Creating the KataConfig CR automatically reboots the worker nodes. The reboot can take from 10 to more than 60 minutes. The following factors can increase the reboot time:

  • A large OpenShift Container Platform deployment with a greater number of worker nodes.
  • Activation of the BIOS and Diagnostics utility.
  • Deployment on a hard disk drive rather than an SSD.
  • Deployment on physical nodes such as bare metal, rather than on virtual nodes.
  • A slow CPU and network.

Procedure

  1. Create an example-kataconfig.yaml manifest file according to the following example:

    apiVersion: kataconfiguration.openshift.io/v1
    kind: KataConfig
    metadata:
      name: example-kataconfig
    spec:
      checkNodeEligibility: false 1
      logLevel: info
    #  kataConfigPoolSelector:
    #    matchLabels:
    #      <label_key>: '<label_value>' 2
    1 Optional: Set `checkNodeEligibility` to true to run node eligibility checks if you have installed the Node Feature Discovery Operator.
    2 Optional: If you have applied node labels to install OpenShift sandboxed containers on specific nodes, specify the key and value.
  2. Create the KataConfig CR by running the following command:

    $ oc apply -f example-kataconfig.yaml

    The new KataConfig CR is created and installs kata as a runtime class on the worker nodes.

    Wait for the kata installation to complete and the worker nodes to reboot before verifying the installation.

  3. Monitor the installation progress by running the following command:

    $ watch "oc describe kataconfig | sed -n /^Status:/,/^Events/p"

    When the status of all workers under kataNodes is installed and the condition InProgress is False without a specified reason, kata is installed on the cluster.
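
After the installation completes, you can confirm that the kata runtime class is available on the cluster:

    $ oc get runtimeclass

The kata runtime class appears in the output.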

2.5. Modifying pod overhead

Pod overhead describes the amount of system resources that a pod on a node uses. You can modify the pod overhead by changing the spec.overhead field for a RuntimeClass custom resource. For example, if the configuration that you run for your containers consumes more than 350Mi of memory for the QEMU process and guest kernel data, you can alter the RuntimeClass overhead to suit your needs.

When performing any kind of file system I/O in the guest, file buffers are allocated in the guest kernel. The file buffers are also mapped in the QEMU process on the host, as well as in the virtiofsd process.

For example, if you use 300Mi of file buffer cache in the guest, both QEMU and virtiofsd appear to use 300Mi additional memory. However, the same memory is being used in all three cases. Therefore, the total memory usage is only 300Mi, mapped in three different places. This is correctly accounted for when reporting the memory utilization metrics.

Note

The default values are supported by Red Hat. Changing default overhead values is not supported and can result in technical issues.

Procedure

  1. Obtain the RuntimeClass object by running the following command:

    $ oc describe runtimeclass kata
  2. Update the overhead.podFixed.memory and overhead.podFixed.cpu values and save the file as runtimeclass.yaml:

    kind: RuntimeClass
    apiVersion: node.k8s.io/v1
    metadata:
      name: kata
    overhead:
      podFixed:
        memory: "500Mi"
        cpu: "500m"
  3. Apply the changes by running the following command:

    $ oc apply -f runtimeclass.yaml
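
You can confirm the new values by reading them back from the RuntimeClass object, for example:

    $ oc get runtimeclass kata -o jsonpath='{.overhead.podFixed}'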

2.6. Configuring workload objects

You configure your workload for OpenShift sandboxed containers by setting kata as the runtime class for the following pod-templated objects:

  • Pod objects
  • ReplicaSet objects
  • ReplicationController objects
  • StatefulSet objects
  • Deployment objects
  • DeploymentConfig objects

Important

Do not deploy workloads in an Operator namespace. Create a dedicated namespace for these resources.

Prerequisites

  • You have created the KataConfig custom resource (CR).

Procedure

  1. Add spec.runtimeClassName: kata to the manifest of each pod-templated workload object as in the following example:

    apiVersion: v1
    kind: <object>
    # ...
    spec:
      runtimeClassName: kata
    # ...
  2. Apply the changes to the workload object by running the following command:

    $ oc apply -f <object.yaml>

    OpenShift Container Platform creates the workload object and begins scheduling it.
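
For controllers such as a Deployment, the runtimeClassName field belongs in the pod template rather than at the top level of the object. The following manifest is an illustrative sketch with placeholder names:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: example-sandboxed-app    # placeholder name
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: example-sandboxed-app
      template:
        metadata:
          labels:
            app: example-sandboxed-app
        spec:
          runtimeClassName: kata
          containers:
          - name: app
            image: registry.access.redhat.com/ubi9/ubi
            command: ["sleep", "infinity"]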

Verification

  • Inspect the spec.runtimeClassName field of a pod-templated object. If the value is kata, then the workload is running on OpenShift sandboxed containers.