Chapter 2. Deploying OpenShift sandboxed containers on bare metal
You can deploy OpenShift sandboxed containers on bare metal.
You deploy OpenShift sandboxed containers by performing the following steps:
- Install the OpenShift sandboxed containers Operator on the OpenShift Container Platform cluster.
- Optional: Install the Local Storage Operator to configure a local block storage device.
- Optional: Install the Node Feature Discovery (NFD) Operator to configure node eligibility checks.
- Create the `KataConfig` custom resource.
- Optional: Modify the number of virtual machines running on each worker node.
- Optional: Modify the pod overhead.
- Configure your workload for OpenShift sandboxed containers.
2.1. Prerequisites
- You have installed the latest version of Red Hat OpenShift Container Platform.
- Your OpenShift Container Platform cluster has at least one worker node.
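As a quick check, you can confirm that the cluster has worker nodes; this is a minimal sketch using the standard worker role label:

```
$ oc get nodes -l node-role.kubernetes.io/worker
```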
2.2. Installing the OpenShift sandboxed containers Operator
You install the OpenShift sandboxed containers Operator by using the command line interface (CLI).
Prerequisites
- You have access to the cluster as a user with the `cluster-admin` role.
Procedure
Create an `osc-namespace.yaml` manifest file:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: openshift-sandboxed-containers-operator
```

Create the namespace by running the following command:

```
$ oc create -f osc-namespace.yaml
```

Create an `osc-operatorgroup.yaml` manifest file:

```yaml
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: sandboxed-containers-operator-group
  namespace: openshift-sandboxed-containers-operator
spec:
  targetNamespaces:
  - openshift-sandboxed-containers-operator
```

Create the operator group by running the following command:

```
$ oc create -f osc-operatorgroup.yaml
```

Create an `osc-subscription.yaml` manifest file:

```yaml
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: sandboxed-containers-operator
  namespace: openshift-sandboxed-containers-operator
spec:
  channel: stable
  installPlanApproval: Automatic
  name: sandboxed-containers-operator
  source: redhat-operators
  sourceNamespace: openshift-marketplace
  startingCSV: sandboxed-containers-operator.v1.10.3
```

Create the subscription by running the following command:

```
$ oc create -f osc-subscription.yaml
```

Verify that the Operator is correctly installed by running the following command:

```
$ oc get csv -n openshift-sandboxed-containers-operator
```

This command can take several minutes to complete.

Watch the process by running the following command:

```
$ watch oc get csv -n openshift-sandboxed-containers-operator
```

Example output

```
NAME                             DISPLAY                                   VERSION   REPLACES   PHASE
openshift-sandboxed-containers   openshift-sandboxed-containers-operator   1.10.3    1.9.0      Succeeded
```
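If the CSV does not reach the `Succeeded` phase, you can inspect the Subscription and InstallPlan resources for errors; a minimal sketch:

```
$ oc get subscription,installplan -n openshift-sandboxed-containers-operator
```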
2.3. Optional configurations
You can configure the following options after you install the OpenShift sandboxed containers Operator.
2.3.1. Provisioning local block volumes
You can use local block volumes with OpenShift sandboxed containers. You must first provision the local block volumes by using the Local Storage Operator (LSO), and then enable the nodes with the local block volumes to run OpenShift sandboxed containers workloads.
The local volume provisioner looks for block volume devices at the paths specified in the defined resource.
Prerequisites
- You have installed the Local Storage Operator.
- You have a local disk that meets the following conditions:
  - It is attached to a node.
  - It is not mounted.
  - It does not contain partitions.
Procedure
Create the local volume resource. This resource must define the nodes and paths to the local volumes.
Note: Do not use different storage class names for the same device. Doing so creates multiple persistent volumes (PVs).
Example: Block
apiVersion: "local.storage.openshift.io/v1" kind: "LocalVolume" metadata: name: "local-disks" namespace: "openshift-local-storage"1 spec: nodeSelector:2 nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/hostname operator: In values: - ip-10-0-136-143 - ip-10-0-140-255 - ip-10-0-144-180 storageClassDevices: - storageClassName: "local-sc"3 forceWipeDevicesAndDestroyAllData: false4 volumeMode: Block devicePaths:5 - /path/to/device6 - 1
- The namespace where the Local Storage Operator is installed.
- 2
- Optional: A node selector containing a list of nodes where the local storage volumes are attached. This example uses the node hostnames, obtained from
oc get node. If a value is not defined, then the Local Storage Operator will attempt to find matching disks on all available nodes. - 3
- The name of the storage class to use when creating persistent volume objects.
- 4
- This setting defines whether or not to call
wipefs, which removes partition table signatures (magic strings) making the disk ready to use for Local Storage Operator provisioning. No other data besides signatures is erased. The default is "false" (wipefsis not invoked). SettingforceWipeDevicesAndDestroyAllDatato "true" can be useful in scenarios where previous data can remain on disks that need to be re-used. In these scenarios, setting this field to true eliminates the need for administrators to erase the disks manually. - 5
- The path containing a list of local storage devices to choose from. You must use this path when enabling a node with a local block device to run OpenShift sandboxed containers workloads.
- 6
- Replace this value with the filepath to your
LocalVolumeresourceby-id, such as/dev/disk/by-id/wwn. PVs are created for these local disks when the provisioner is deployed successfully.
Create the local volume resource in your OpenShift Container Platform cluster. Specify the file you just created:
```
$ oc create -f <local-volume>.yaml
```

Verify that the provisioner was created and that the corresponding daemon sets were created:

```
$ oc get all -n openshift-local-storage
```

Example output

```
NAME                                          READY   STATUS    RESTARTS   AGE
pod/diskmaker-manager-9wzms                   1/1     Running   0          5m43s
pod/diskmaker-manager-jgvjp                   1/1     Running   0          5m43s
pod/diskmaker-manager-tbdsj                   1/1     Running   0          5m43s
pod/local-storage-operator-7db4bd9f79-t6k87   1/1     Running   0          14m

NAME                                     TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)             AGE
service/local-storage-operator-metrics   ClusterIP   172.30.135.36   <none>        8383/TCP,8686/TCP   14m

NAME                                DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
daemonset.apps/diskmaker-manager    3         3         3       3            3           <none>          5m43s

NAME                                     READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/local-storage-operator   1/1     1            1           14m

NAME                                                DESIRED   CURRENT   READY   AGE
replicaset.apps/local-storage-operator-7db4bd9f79   1         1         1       14m
```

Note the `desired` and `current` number of daemon set processes. A `desired` count of `0` indicates that the label selectors were invalid.

Verify that the persistent volumes were created:

```
$ oc get pv
```

Example output

```
NAME                CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   REASON   AGE
local-pv-1cec77cf   100Gi      RWO            Delete           Available           local-sc                88m
local-pv-2ef7cd2a   100Gi      RWO            Delete           Available           local-sc                82m
local-pv-3fa1c73    100Gi      RWO            Delete           Available           local-sc                48m
```
Editing the LocalVolume object does not change existing persistent volumes because doing so might result in a destructive operation.
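A workload can then claim one of these volumes through a persistent volume claim (PVC). The following is a minimal sketch, assuming the `local-sc` storage class from the example above; the claim name is illustrative:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-block-pvc        # illustrative name
spec:
  accessModes:
  - ReadWriteOnce
  volumeMode: Block              # matches the volumeMode of the LocalVolume resource
  storageClassName: local-sc     # the storage class defined in the LocalVolume resource
  resources:
    requests:
      storage: 100Gi
```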
2.3.2. Enabling nodes to use a local block device
You can configure nodes with a local block device to run OpenShift sandboxed containers workloads at the paths specified in the defined volume resource.
Prerequisites
- You provisioned a block device using the Local Storage Operator (LSO).
Procedure
Enable each node with a local block device to run OpenShift sandboxed containers workloads by running the following command:

```
$ oc debug node/worker-0 -- chcon -vt container_file_t /host/path/to/device
```

The `/path/to/device` must be the same path you defined when creating the local storage resource.

Example output

```
system_u:object_r:container_file_t:s0 /host/path/to/device
```
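To confirm the new SELinux context afterward, a minimal sketch, assuming the same placeholder node and path:

```
$ oc debug node/worker-0 -- ls -Z /host/path/to/device
```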
2.3.3. Creating a NodeFeatureDiscovery custom resource
You create a `NodeFeatureDiscovery` custom resource (CR) to define the configuration parameters that the Node Feature Discovery (NFD) Operator checks to determine that the worker nodes can support OpenShift sandboxed containers.
To install the kata runtime on only selected worker nodes that you know are eligible, apply the `feature.node.kubernetes.io/runtime.kata=true` label to the selected nodes and set `checkNodeEligibility: true` in the `KataConfig` CR.
To install the kata runtime on all worker nodes, set `checkNodeEligibility: false` in the `KataConfig` CR.
In both of these scenarios, you do not need to create the `NodeFeatureDiscovery` CR. Apply the `feature.node.kubernetes.io/runtime.kata=true` label manually only if you are sure that the node is eligible to run OpenShift sandboxed containers.
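For example, a minimal sketch of applying the label manually, where the node name `worker-0` is a placeholder:

```
$ oc label node worker-0 feature.node.kubernetes.io/runtime.kata=true
```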
The following procedure applies the feature.node.kubernetes.io/runtime.kata=true label to all eligible nodes and configures the KataConfig resource to check for node eligibility.
Prerequisites
- You have installed the NFD Operator.
Procedure
Create an `nfd.yaml` manifest file according to the following example:

```yaml
apiVersion: nfd.openshift.io/v1
kind: NodeFeatureDiscovery
metadata:
  name: nfd-kata
  namespace: openshift-nfd
spec:
  workerConfig:
    configData: |
      sources:
        custom:
          - name: "feature.node.kubernetes.io/runtime.kata"
            matchOn:
              - cpuId: ["SSE4", "VMX"]
                loadedKMod: ["kvm", "kvm_intel"]
              - cpuId: ["SSE4", "SVM"]
                loadedKMod: ["kvm", "kvm_amd"]
# ...
```

Create the `NodeFeatureDiscovery` CR:

```
$ oc create -f nfd.yaml
```

The `NodeFeatureDiscovery` CR applies the `feature.node.kubernetes.io/runtime.kata=true` label to all qualifying worker nodes.

Create a `kata-config.yaml` manifest file according to the following example:

```yaml
apiVersion: kataconfiguration.openshift.io/v1
kind: KataConfig
metadata:
  name: example-kataconfig
spec:
  checkNodeEligibility: true
```

Create the `KataConfig` CR:

```
$ oc create -f kata-config.yaml
```
Verification
Verify that qualifying nodes in the cluster have the correct label applied:
```
$ oc get nodes --selector='feature.node.kubernetes.io/runtime.kata=true'
```

Example output

```
NAME                    STATUS   ROLES    AGE     VERSION
compute-3.example.com   Ready    worker   4h38m   v1.25.0
compute-2.example.com   Ready    worker   4h35m   v1.25.0
```
2.4. Creating the KataConfig custom resource
You must create the `KataConfig` custom resource (CR) to install kata as a runtime class on your worker nodes.
OpenShift sandboxed containers installs kata as a secondary, optional runtime on the cluster, not as the primary runtime.
Creating the `KataConfig` CR automatically reboots the worker nodes. The reboot can take from 10 to more than 60 minutes. The following factors can increase the reboot time:
- A large OpenShift Container Platform deployment with a greater number of worker nodes.
- Activation of the BIOS and Diagnostics utility.
- Deployment on a hard disk drive rather than an SSD.
- Deployment on physical nodes such as bare metal, rather than on virtual nodes.
- A slow CPU and network.
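While the `KataConfig` CR is being applied, you can watch the worker nodes drain and reboot; a minimal sketch:

```
$ watch oc get nodes
```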
Procedure
Create an `example-kataconfig.yaml` manifest file according to the following example:

```yaml
apiVersion: kataconfiguration.openshift.io/v1
kind: KataConfig
metadata:
  name: example-kataconfig
spec:
  checkNodeEligibility: false 1
  logLevel: info
#  kataConfigPoolSelector:
#    matchLabels:
#      <label_key>: '<label_value>' 2
```

1. Set to `true` to run node eligibility checks, as described in "Creating a NodeFeatureDiscovery custom resource".
2. Optional: If you have applied node labels to install kata on specific nodes, uncomment these lines and specify the label key and value.

Create the `KataConfig` CR by running the following command:

```
$ oc create -f example-kataconfig.yaml
```

The new `KataConfig` CR is created and installs `kata` as a runtime class on the worker nodes.

Wait for the `kata` installation to complete and the worker nodes to reboot before verifying the installation.

Monitor the installation progress by running the following command:

```
$ watch "oc describe kataconfig | sed -n /^Status:/,/^Events/p"
```

When the status of all workers under `kataNodes` is `installed` and the condition `InProgress` is `False` without specifying a reason, `kata` is installed on the cluster.
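You can also confirm that the runtime class is available; a minimal sketch:

```
$ oc get runtimeclass kata
```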
2.5. Modifying pod overhead
Pod overhead describes the amount of system resources that a pod on a node uses. You can modify the pod overhead by changing the `spec.overhead` field for a `RuntimeClass` custom resource. For example, if the configuration that you run for your containers consumes more than 350Mi of memory for the QEMU process and guest kernel data, you can alter the `RuntimeClass` overhead to suit your needs.
When performing any kind of file system I/O in the guest, file buffers are allocated in the guest kernel. The file buffers are also mapped in the QEMU process on the host, as well as in the virtiofsd process.
For example, if you use 300Mi of file buffer cache in the guest, both QEMU and virtiofsd appear to use 300Mi additional memory. However, the same memory is being used in all three cases. Therefore, the total memory usage is only 300Mi, mapped in three different places. This is correctly accounted for when reporting the memory utilization metrics.
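The overhead that applies to a running pod is recorded in the pod spec. As a sketch, assuming a placeholder pod name, you can inspect it with:

```
$ oc get pod <pod_name> -o jsonpath='{.spec.overhead}'
```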
The default values are supported by Red Hat. Changing default overhead values is not supported and can result in technical issues.
Procedure
Obtain the `RuntimeClass` object by running the following command:

```
$ oc describe runtimeclass kata
```

Update the `overhead.podFixed.memory` and `cpu` values and save the file as `runtimeclass.yaml`:

```yaml
kind: RuntimeClass
apiVersion: node.k8s.io/v1
metadata:
  name: kata
overhead:
  podFixed:
    memory: "500Mi"
    cpu: "500m"
```

Apply the changes by running the following command:

```
$ oc apply -f runtimeclass.yaml
```
2.6. Configuring your workload for OpenShift sandboxed containers
You configure your workload for OpenShift sandboxed containers by setting `kata` as the runtime class for the following pod-templated objects:

- `Pod` objects
- `ReplicaSet` objects
- `ReplicationController` objects
- `StatefulSet` objects
- `Deployment` objects
- `DeploymentConfig` objects
Do not deploy workloads in an Operator namespace. Create a dedicated namespace for these resources.
Prerequisites
- You have created the `KataConfig` custom resource (CR).
Procedure
Add `spec.runtimeClassName: kata` to the manifest of each pod-templated workload object, as in the following example:

```yaml
apiVersion: v1
kind: <object>
# ...
spec:
  runtimeClassName: kata
# ...
```

Apply the changes to the workload object by running the following command:

```
$ oc apply -f <object.yaml>
```

OpenShift Container Platform creates the workload object and begins scheduling it.
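For pod-templated objects other than `Pod`, the runtime class is set in the pod template. The following is a minimal sketch of a `Deployment`, where the names, namespace, and image are placeholders:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-sandboxed           # placeholder name
  namespace: my-sandboxed-workloads # placeholder dedicated namespace
spec:
  replicas: 1
  selector:
    matchLabels:
      app: example-sandboxed
  template:
    metadata:
      labels:
        app: example-sandboxed
    spec:
      runtimeClassName: kata        # run the pod with the kata runtime class
      containers:
      - name: example
        image: registry.access.redhat.com/ubi9/ubi-minimal # placeholder image
        command: ["sleep", "infinity"]
```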
Verification
- Inspect the `spec.runtimeClassName` field of a pod-templated object. If the value is `kata`, then the workload is running on OpenShift sandboxed containers.
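For example, assuming a placeholder pod name, you can read the field directly:

```
$ oc get pod <pod_name> -o jsonpath='{.spec.runtimeClassName}'
```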