Chapter 16. Workload partitioning


In resource-constrained environments, you can use workload partitioning to isolate OpenShift Container Platform services, cluster management workloads, and infrastructure pods to run on a reserved set of CPUs.

The minimum number of reserved CPUs required for cluster management is four CPU Hyper-Threads (HTs). With workload partitioning, you annotate the set of cluster management pods and a set of typical add-on Operators for inclusion in the cluster management workload partition. These pods operate normally within the minimum-size CPU configuration. Additional Operators or workloads outside of the minimum set of cluster management pods require additional CPUs to be added to the workload partition.

Workload partitioning isolates user workloads from platform workloads using standard Kubernetes scheduling capabilities.
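Platform pods are opted into the management partition through annotations at the namespace and pod level. The following sketch is illustrative only: the namespace name, pod spec, and image are hypothetical, and you should verify the annotation keys and values against the workload partitioning documentation for your OpenShift Container Platform version.

    apiVersion: v1
    kind: Namespace
    metadata:
      name: example-platform-namespace  # hypothetical namespace
      annotations:
        # Allows pods in this namespace to be considered for the management partition
        workload.openshift.io/allowed: management
    ---
    apiVersion: v1
    kind: Pod
    metadata:
      name: example-platform-pod  # hypothetical pod
      namespace: example-platform-namespace
      annotations:
        # Requests that the pod CPU requests be accounted against the management partition
        target.workload.openshift.io/management: '{"effect": "PreferredDuringScheduling"}'
    spec:
      containers:
        - name: example
          image: registry.example.com/example:latest  # hypothetical image
          resources:
            requests:
              cpu: 100m
              memory: 100Mi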

The following changes are required for workload partitioning:

  1. In the install-config.yaml file, add the additional field: cpuPartitioningMode.

    apiVersion: v1
    baseDomain: devcluster.openshift.com
    cpuPartitioningMode: AllNodes 1
    compute:
      - architecture: amd64
        hyperthreading: Enabled
        name: worker
        platform: {}
        replicas: 3
    controlPlane:
      architecture: amd64
      hyperthreading: Enabled
      name: master
      platform: {}
      replicas: 3
    1 Sets up a cluster for CPU partitioning at install time. The default value is None.
    Note

    Workload partitioning can only be enabled during cluster installation. You cannot disable workload partitioning postinstallation.
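    After installation, you can confirm the cluster-wide setting by inspecting the Infrastructure custom resource. The following excerpt is a sketch that assumes the cpuPartitioningMode value chosen at install time is surfaced in the status.cpuPartitioning field; verify the exact field name against your cluster version.

    # Illustrative excerpt of the cluster Infrastructure CR (assumed status field)
    apiVersion: config.openshift.io/v1
    kind: Infrastructure
    metadata:
      name: cluster
    status:
      cpuPartitioning: AllNodes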

  2. In the performance profile, specify the isolated and reserved CPUs.

    Recommended performance profile configuration

    apiVersion: performance.openshift.io/v2
    kind: PerformanceProfile
    metadata:
      # if you change this name make sure the 'include' line in TunedPerformancePatch.yaml
      # matches this name: include=openshift-node-performance-${PerformanceProfile.metadata.name}
      # Also in file 'validatorCRs/informDuValidator.yaml':
      # name: 50-performance-${PerformanceProfile.metadata.name}
      name: openshift-node-performance-profile
      annotations:
        ran.openshift.io/reference-configuration: "ran-du.redhat.com"
    spec:
      additionalKernelArgs:
        - "rcupdate.rcu_normal_after_boot=0"
        - "efi=runtime"
        - "vfio_pci.enable_sriov=1"
        - "vfio_pci.disable_idle_d3=1"
        - "module_blacklist=irdma"
      cpu:
        isolated: $isolated
        reserved: $reserved
      hugepages:
        defaultHugepagesSize: $defaultHugepagesSize
        pages:
          - size: $size
            count: $count
            node: $node
      machineConfigPoolSelector:
        pools.operator.machineconfiguration.openshift.io/$mcp: ""
      nodeSelector:
        node-role.kubernetes.io/$mcp: ""
      numa:
        topologyPolicy: "restricted"
      # To use the standard (non-realtime) kernel, set enabled to false
      realTimeKernel:
        enabled: true
      workloadHints:
        # WorkloadHints defines the set of upper-level flags for different types of workloads.
        # See https://github.com/openshift/cluster-node-tuning-operator/blob/master/docs/performanceprofile/performance_profile.md#workloadhints
        # for detailed descriptions of each item.
        # The configuration below is set for a low latency, performance mode.
        realTime: true
        highPowerConsumption: false
        perPodPowerManagement: false

    Table 16.1. PerformanceProfile CR options for single-node OpenShift clusters
    PerformanceProfile CR field | Description

    metadata.name

    Ensure that name matches the following fields set in related GitOps ZTP custom resources (CRs):

    • include=openshift-node-performance-${PerformanceProfile.metadata.name} in TunedPerformancePatch.yaml
    • name: 50-performance-${PerformanceProfile.metadata.name} in validatorCRs/informDuValidator.yaml

    spec.additionalKernelArgs

    "efi=runtime" Configures UEFI secure boot for the cluster host.

    spec.cpu.isolated

    Set the isolated CPUs. Ensure all of the Hyper-Threading pairs match.

    Important

    The reserved and isolated CPU pools must not overlap and together must span all available cores. CPU cores that are not accounted for cause undefined behavior in the system.

    spec.cpu.reserved

    Set the reserved CPUs. When workload partitioning is enabled, system processes, kernel threads, and system container threads are restricted to these CPUs. All CPUs that are not isolated should be reserved.

    spec.hugepages.pages

    • Set the number of huge pages (count).
    • Set the huge pages size (size).
    • Set node to the NUMA node where the huge pages are allocated (node).

    spec.realTimeKernel

    Set enabled to true to use the realtime kernel.

    spec.workloadHints

    Use workloadHints to define the set of top-level flags for different types of workloads. The example configuration configures the cluster for low latency and high performance.

Workload partitioning introduces an extended management.workload.openshift.io/cores resource type for platform pods. The kubelet advertises this resource, and the CPU requests of pods allocated to the management pool are accounted for within it. When workload partitioning is enabled, the management.workload.openshift.io/cores resource allows the scheduler to correctly assign pods based on the cpushares capacity of the host, not just the default cpuset.
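
The extended resource appears in the node status alongside cpu and memory. The following excerpt is a sketch of what the capacity of a node with workload partitioning enabled might look like; the values are illustrative and the management.workload.openshift.io/cores figure reflects the cpushares-based accounting rather than a simple core count.

    # Illustrative excerpt of a node status with workload partitioning enabled
    status:
      capacity:
        cpu: "64"
        management.workload.openshift.io/cores: "115000"  # illustrative value
      allocatable:
        cpu: "64"
        management.workload.openshift.io/cores: "115000"  # illustrative value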

Additional resources

  • For the recommended workload partitioning configuration for single-node OpenShift clusters, see Workload partitioning.