
Chapter 19. Provisioning real-time and low latency workloads


To achieve low latency and consistent response times for OpenShift Container Platform applications, use the Node Tuning Operator. This Operator implements automatic tuning to optimize your cluster for high-performance computing workloads.

You use the performance profile configuration to make these changes. With a performance profile, you can:

  • Update the kernel to kernel-rt.
  • Reserve CPUs for cluster and operating system housekeeping duties, including pod infra containers.
  • Isolate CPUs for application containers to run the workloads.
  • Disable unused CPUs to reduce power consumption.
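These changes are configured through a PerformanceProfile custom resource. The following minimal sketch is consistent with the examples later in this chapter (profile name dynamic-low-latency-profile, isolated CPUs 2-3 on a six-CPU node); the CPU ranges for your own hardware will differ:

```yaml
apiVersion: performance.openshift.io/v2
kind: PerformanceProfile
metadata:
  name: dynamic-low-latency-profile
spec:
  cpu:
    # Reserved CPUs run cluster and OS housekeeping, including pod
    # infra containers; isolated CPUs are dedicated to workloads.
    reserved: "0-1,4-5"
    isolated: "2-3"
  realTimeKernel:
    # Switch the node to the kernel-rt real-time kernel.
    enabled: true
  nodeSelector:
    node-role.kubernetes.io/worker-cnf: ""
```

The Node Tuning Operator generates a runtime class named performance-<profile_name> from this profile, which the pod examples in this chapter reference.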

Note

When writing your applications, follow the general recommendations described in RHEL for Real Time processes and threads.

19.1. Scheduling a low latency workload onto a compute node

To run low latency workloads, schedule them onto a compute node associated with a performance profile that configures real-time capabilities. This ensures that the node is tuned to meet the specific timing and performance requirements of your application.

Note

To schedule a workload on specific nodes, use label selectors in the Pod custom resource (CR). The label selectors must match the nodes that are attached to the machine config pool that was configured for low latency by the Node Tuning Operator.

Prerequisites

  • You have installed the OpenShift CLI (oc).
  • You have logged in as a user with cluster-admin privileges.
  • You have applied a performance profile in the cluster that tunes compute nodes for low latency workloads.

Procedure

  1. Create a Pod CR for the low latency workload and apply it in the cluster, for example:

    Example Pod spec configured to use real-time processing

    apiVersion: v1
    kind: Pod
    metadata:
      name: dynamic-low-latency-pod
      annotations:
        cpu-quota.crio.io: "disable"
        cpu-load-balancing.crio.io: "disable"
        irq-load-balancing.crio.io: "disable"
    spec:
      securityContext:
        runAsNonRoot: true
        seccompProfile:
          type: RuntimeDefault
      containers:
      - name: dynamic-low-latency-pod
        image: "registry.redhat.io/openshift4/cnf-tests-rhel8:v4.19"
        command: ["sleep", "10h"]
        resources:
          requests:
            cpu: 2
            memory: "200M"
          limits:
            cpu: 2
            memory: "200M"
        securityContext:
          allowPrivilegeEscalation: false
          capabilities:
            drop: [ALL]
      nodeSelector:
        node-role.kubernetes.io/worker-cnf: ""
      runtimeClassName: performance-dynamic-low-latency-profile
    # ...

    where

    metadata.annotations.cpu-quota.crio.io
    Disables the CPU completely fair scheduler (CFS) quota at pod run time.
    metadata.annotations.cpu-load-balancing.crio.io
    Disables CPU load balancing.
    metadata.annotations.irq-load-balancing.crio.io
    Opts the pod out of interrupt handling on the node.
    spec.nodeSelector.node-role.kubernetes.io/worker-cnf
    The nodeSelector label must match the label that you specify in the Node CR.
    spec.runtimeClassName
    runtimeClassName must match the name of the performance profile configured in the cluster.
  2. Specify the pod runtimeClassName in the form performance-<profile_name>, where <profile_name> is the name from the PerformanceProfile YAML. In the previous example, the name is performance-dynamic-low-latency-profile.
  3. Verify that the pod is running correctly. The status should be Running, and the pod should be scheduled on the correct cnf-worker node:

    $ oc get pod -o wide

    Expected output

    NAME                     READY   STATUS    RESTARTS   AGE     IP           NODE
    dynamic-low-latency-pod  1/1     Running   0          5h33m   10.131.0.10  cnf-worker.example.com

  4. Get the CPUs that the pod configured for IRQ dynamic load balancing runs on:

    $ oc exec -it dynamic-low-latency-pod -- /bin/bash -c "grep Cpus_allowed_list /proc/self/status | awk '{print \$2}'"

    Expected output

    2-3

Verification

Ensure the node configuration is applied correctly.

  1. Log in to the node to verify the configuration.

    $ oc debug node/<node-name>
  2. Verify that you can use the node file system:

    sh-4.4# chroot /host

    Expected output

    sh-4.4#

  3. Ensure the default system CPU affinity mask does not include the dynamic-low-latency-pod CPUs, for example, CPUs 2 and 3.

    sh-4.4# cat /proc/irq/default_smp_affinity

    Example output

    33

    The mask is hexadecimal: 33 is binary 110011, so the default IRQ affinity includes CPUs 0, 1, 4, and 5 and excludes the pod CPUs 2 and 3.
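You can cross-check the mask with shell arithmetic: on a six-CPU node where CPUs 2 and 3 are isolated for the pod, the expected default affinity mask is the bitwise OR of the reserved CPU bits (illustrative arithmetic, not an oc command):

```shell
# Build the IRQ affinity bitmask for reserved CPUs 0, 1, 4, and 5
# (CPUs 2 and 3 are isolated) and print it in hexadecimal.
mask=$(( (1 << 0) | (1 << 1) | (1 << 4) | (1 << 5) ))
printf '%x\n' "$mask"
# → 33
```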

  4. Ensure the system IRQs are not configured to run on the dynamic-low-latency-pod CPUs:

    sh-4.4# find /proc/irq/ -name smp_affinity_list -exec sh -c 'echo "$1: $(cat "$1")"' _ {} \;

    Example output

    /proc/irq/0/smp_affinity_list: 0-5
    /proc/irq/1/smp_affinity_list: 5
    /proc/irq/2/smp_affinity_list: 0-5
    /proc/irq/3/smp_affinity_list: 0-5
    /proc/irq/4/smp_affinity_list: 0
    /proc/irq/5/smp_affinity_list: 0-5
    /proc/irq/6/smp_affinity_list: 0-5
    /proc/irq/7/smp_affinity_list: 0-5
    /proc/irq/8/smp_affinity_list: 4
    /proc/irq/9/smp_affinity_list: 4
    /proc/irq/10/smp_affinity_list: 0-5
    /proc/irq/11/smp_affinity_list: 0
    /proc/irq/12/smp_affinity_list: 1
    /proc/irq/13/smp_affinity_list: 0-5
    /proc/irq/14/smp_affinity_list: 1
    /proc/irq/15/smp_affinity_list: 0
    /proc/irq/24/smp_affinity_list: 1
    /proc/irq/25/smp_affinity_list: 1
    /proc/irq/26/smp_affinity_list: 1
    /proc/irq/27/smp_affinity_list: 5
    /proc/irq/28/smp_affinity_list: 1
    /proc/irq/29/smp_affinity_list: 0
    /proc/irq/30/smp_affinity_list: 0-5

    Warning

    When you tune nodes for low latency, the usage of execution probes in conjunction with applications that require guaranteed CPUs can cause latency spikes. Use other probes, such as a properly configured set of network probes, as an alternative.
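For example, a readiness check can use a tcpSocket probe instead of an exec probe, so that the kubelet performs the check over the network and no command runs on the guaranteed CPUs. This is a sketch; the port and timing values are illustrative assumptions:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: dynamic-low-latency-pod
spec:
  containers:
  - name: dynamic-low-latency-pod
    # tcpSocket and httpGet probes are handled by the kubelet over the
    # network, so no process is executed inside the container.
    readinessProbe:
      tcpSocket:
        port: 8080          # illustrative port
      initialDelaySeconds: 5
      periodSeconds: 10
# ...
```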

19.2. Creating a pod with a guaranteed QoS class

You can create a pod with a quality of service (QoS) class of Guaranteed for high-performance workloads. Configuring a pod with a QoS class of Guaranteed ensures that the pod has priority access to the specified CPU and memory resources.

To create a pod with a QoS class of Guaranteed, you must apply the following specifications:

  • Set identical values for the memory limit and memory request fields for each container in the pod.
  • Set identical values for CPU limit and CPU request fields for each container in the pod.

In general, a pod with a QoS class of Guaranteed will not be evicted from a node. One exception is during resource contention caused by system daemons exceeding reserved resources. In this scenario, the kubelet might evict pods to preserve node stability, starting with the lowest priority pods.

Prerequisites

  • Access to the cluster as a user with the cluster-admin role.
  • You have installed the OpenShift CLI (oc).

Procedure

  1. Create a namespace for the pod by running the following command:

    $ oc create namespace qos-example
    • qos-example: Specifies the name of the example namespace.

      Example output

      namespace/qos-example created

  2. Create the Pod resource:

    1. Create a YAML file that defines the Pod resource:

      Example qos-example.yaml file

      apiVersion: v1
      kind: Pod
      metadata:
        name: qos-demo
        namespace: qos-example
      spec:
        securityContext:
          runAsNonRoot: true
          seccompProfile:
            type: RuntimeDefault
        containers:
        - name: qos-demo-ctr
          image: quay.io/openshifttest/hello-openshift:openshift
          resources:
            limits:
              memory: "200Mi"
              cpu: "1"
            requests:
              memory: "200Mi"
              cpu: "1"
          securityContext:
            allowPrivilegeEscalation: false
            capabilities:
              drop: [ALL]

      where:

      spec.containers.image
      Specifies a public image, such as the hello-openshift image.
      spec.containers.resources.limits.memory
      Specifies a memory limit of 200 MiB.
      spec.containers.resources.limits.cpu
      Specifies a CPU limit of 1 CPU.
      spec.containers.resources.requests.memory
      Specifies a memory request of 200 MiB.
      spec.containers.resources.requests.cpu
      Specifies a CPU request of 1 CPU.

      Note

      If you specify a memory limit for a container, but do not specify a memory request, OpenShift Container Platform automatically assigns a memory request that matches the limit. Similarly, if you specify a CPU limit for a container, but do not specify a CPU request, OpenShift Container Platform automatically assigns a CPU request that matches the limit.

    2. Create the Pod resource by running the following command:

      $ oc apply -f qos-example.yaml --namespace=qos-example

      Example output

      pod/qos-demo created

Verification

  • View the qosClass value for the pod by running the following command:

    $ oc get pod qos-demo --namespace=qos-example --output=yaml | grep qosClass

    Example output

        qosClass: Guaranteed

19.3. Disabling CPU load balancing in a Pod

To optimize performance, you can disable or enable CPU load balancing for your pods. This functionality is implemented at the CRI-O level, and CRI-O disables or enables CPU load balancing only when the following requirement is met:

  • The pod must use the performance-<profile-name> runtime class. You can get the proper name by looking at the status of the performance profile, as shown here:

    apiVersion: performance.openshift.io/v2
    kind: PerformanceProfile
    ...
    status:
      ...
      runtimeClass: performance-manual

The Node Tuning Operator is responsible for creating the high-performance runtime handler config snippet on the relevant nodes and for creating the high-performance runtime class in the cluster. The runtime class has the same content as the default runtime handler, except that it enables the CPU load balancing configuration functionality.

To disable the CPU load balancing for the pod, the Pod specification must include the following fields:

apiVersion: v1
kind: Pod
metadata:
  #...
  annotations:
    #...
    cpu-load-balancing.crio.io: "disable"
    #...
  #...
spec:
  #...
  runtimeClassName: performance-<profile_name>
  #...
Note

Only disable CPU load balancing when the CPU manager static policy is enabled and for pods with guaranteed QoS that use whole CPUs. Otherwise, disabling CPU load balancing can affect the performance of other containers in the cluster.

19.4. Disabling power saving mode for high priority pods

To protect high priority workloads when using power saving configurations on a node, apply performance settings at the pod level. This ensures that the configuration applies to all cores used by the pod, maintaining performance stability.

By disabling P-states and C-states at the pod level, you can configure high priority workloads for best performance and lowest latency.

Table 19.1. Configuration for high priority workloads

cpu-c-states.crio.io:

  Possible values: "enable", "disable", or "max_latency:<microseconds>"

  Enables or disables C-states for each CPU. Alternatively, you can specify a maximum latency in microseconds for the C-states. For example, enable C-states with a maximum latency of 10 microseconds with the setting cpu-c-states.crio.io: "max_latency:10". Set the value to "disable" to provide the best performance for a pod.

cpu-freq-governor.crio.io:

  Possible values: any supported cpufreq governor.

  Sets the cpufreq governor for each CPU. The "performance" governor is recommended for high priority workloads.

Prerequisites

  • You have configured power saving in the performance profile for the node where the high priority workload pods are scheduled.

Procedure

  1. Add the required annotations to your high priority workload pods. The annotations override the default settings.

    Example high priority workload annotation

    apiVersion: v1
    kind: Pod
    metadata:
      #...
      annotations:
        #...
        cpu-c-states.crio.io: "disable"
        cpu-freq-governor.crio.io: "performance"
        #...
      #...
    spec:
      #...
      runtimeClassName: performance-<profile_name>
      #...

  2. Restart the pods to apply the annotations.

19.5. Disabling CPU CFS quota

To prevent CPU throttling for latency-sensitive workloads, disable the CPU CFS quota. This configuration allows pods to use unallocated CPU resources on the node, ensuring consistent application performance.

Procedure

  • To eliminate CPU throttling for pinned pods, create a pod with the cpu-quota.crio.io: "disable" annotation. This annotation disables the CPU completely fair scheduler (CFS) quota when the pod runs.

    Example pod specification with cpu-quota.crio.io disabled

    apiVersion: v1
    kind: Pod
    metadata:
      annotations:
        cpu-quota.crio.io: "disable"
    spec:
      runtimeClassName: performance-<profile_name>
    # ...

    Note

    Only disable CPU CFS quota when the CPU manager static policy is enabled and for pods with guaranteed QoS that use whole CPUs. For example, pods that contain CPU-pinned containers. Otherwise, disabling CPU CFS quota can affect the performance of other containers in the cluster.

19.6. Disabling interrupt processing for CPUs where pinned containers are running

To achieve low latency for workloads, some containers require that the CPUs they are pinned to do not process device interrupts. You can use the irq-load-balancing.crio.io pod annotation to control whether device interrupts are processed on the CPUs where pinned containers run.

To disable interrupt processing for CPUs where containers belonging to individual pods are pinned, ensure that globallyDisableIrqLoadBalancing is set to false in the performance profile. In the pod specification, set the irq-load-balancing.crio.io pod annotation to disable, as demonstrated in the following example:

apiVersion: v1
kind: Pod
metadata:
  annotations:
    irq-load-balancing.crio.io: "disable"
spec:
  runtimeClassName: performance-<profile_name>
# ...