Chapter 17. Workload partitioning
Workload partitioning is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
In resource-constrained environments, you can use workload partitioning to isolate OpenShift Container Platform services, cluster management workloads, and infrastructure pods to run on a reserved set of CPUs.
The minimum number of reserved CPUs required for cluster management is four CPU Hyper-Threads (HTs). With workload partitioning, you annotate the set of cluster management pods and a set of typical add-on Operators for inclusion in the cluster management workload partition. These pods operate normally within the minimum-size CPU configuration. Additional Operators or workloads outside of the set of minimum cluster management pods require additional CPUs to be added to the workload partition.
Workload partitioning isolates user workloads from platform workloads using standard Kubernetes scheduling capabilities.
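Platform pods opt in to the management partition through annotations on their namespace and pod specifications. The following is a minimal sketch of that annotation pattern; the namespace, pod, and image names are placeholders, and you should verify the exact annotation keys and values against the product documentation for your release:

apiVersion: v1
kind: Namespace
metadata:
  name: openshift-example-operator                     # placeholder namespace
  annotations:
    workload.openshift.io/allowed: management
---
apiVersion: v1
kind: Pod
metadata:
  name: example-operator-pod                           # placeholder pod
  namespace: openshift-example-operator
  annotations:
    target.workload.openshift.io/management: '{"effect": "PreferredDuringScheduling"}'
spec:
  containers:
  - name: operator
    image: registry.example.com/example-operator:latest   # placeholder image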
The following changes are required for workload partitioning:
In the install-config.yaml file, add the additional field cpuPartitioningMode:

apiVersion: v1
baseDomain: devcluster.openshift.com
cpuPartitioningMode: AllNodes 1
compute:
- architecture: amd64
  hyperthreading: Enabled
  name: worker
  platform: {}
  replicas: 3
controlPlane:
  architecture: amd64
  hyperthreading: Enabled
  name: master
  platform: {}
  replicas: 3

1  Sets up a cluster for CPU partitioning at install time. The default value is None.
Note: Workload partitioning can only be enabled during cluster installation. You cannot disable workload partitioning postinstallation.
In the performance profile, specify the isolated and reserved CPUs.

Recommended performance profile configuration:

apiVersion: performance.openshift.io/v2
kind: PerformanceProfile
metadata:
  name: openshift-node-performance-profile
spec:
  additionalKernelArgs:
    - "rcupdate.rcu_normal_after_boot=0"
    - "efi=runtime"
    - "module_blacklist=irdma"
  cpu:
    isolated: 2-51,54-103
    reserved: 0-1,52-53
  hugepages:
    defaultHugepagesSize: 1G
    pages:
      - count: 32
        size: 1G
        node: 0
  machineConfigPoolSelector:
    pools.operator.machineconfiguration.openshift.io/master: ""
  nodeSelector:
    node-role.kubernetes.io/master: ''
  numa:
    topologyPolicy: "restricted"
  realTimeKernel:
    enabled: true
  workloadHints:
    realTime: true
    highPowerConsumption: false
    perPodPowerManagement: false
Table 17.1. PerformanceProfile CR options for single-node OpenShift clusters

metadata.name
  Ensure that name matches the following fields set in related GitOps ZTP custom resources (CRs); see the example after this table:
  - include=openshift-node-performance-${PerformanceProfile.metadata.name} in TunedPerformancePatch.yaml
  - name: 50-performance-${PerformanceProfile.metadata.name} in validatorCRs/informDuValidator.yaml

spec.additionalKernelArgs
  "efi=runtime" configures UEFI secure boot for the cluster host.

spec.cpu.isolated
  Set the isolated CPUs. Ensure that all of the Hyper-Threading pairs match.
  Important: The reserved and isolated CPU pools must not overlap and together must span all available cores. CPU cores that are not accounted for cause undefined behavior in the system.

spec.cpu.reserved
  Set the reserved CPUs. When workload partitioning is enabled, system processes, kernel threads, and system container threads are restricted to these CPUs. All CPUs that are not isolated should be reserved.

spec.hugepages.pages
  - Set the number of huge pages (count).
  - Set the huge page size (size).
  - Set node to the NUMA node where the huge pages are allocated.

spec.realTimeKernel
  Set enabled to true to use the realtime kernel.

spec.workloadHints
  Use workloadHints to define the set of top-level flags for different types of workloads. The example configuration configures the cluster for low latency and high performance.
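For the example profile above, where metadata.name is openshift-node-performance-profile, substituting the name into the patterns from the metadata.name table row yields the following values. These lines are shown only to illustrate the matching requirement and are not complete CRs:

# In TunedPerformancePatch.yaml
include=openshift-node-performance-openshift-node-performance-profile

# In validatorCRs/informDuValidator.yaml
name: 50-performance-openshift-node-performance-profile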
Workload partitioning introduces an extended management.workload.openshift.io/cores resource type for platform pods. The kubelet advertises this resource, and the CPU requests of pods allocated to the management pool are accounted for within it. When workload partitioning is enabled, the management.workload.openshift.io/cores resource allows the scheduler to correctly assign pods based on the cpushares capacity of the host, not just the default cpuset.
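The following sketch shows where the extended resource surfaces in a node status when workload partitioning is enabled. The excerpt is illustrative only: the field layout follows the standard Node status, but the numeric values are hypothetical and depend on the CPU capacity of the host:

status:
  allocatable:
    cpu: "4"
    memory: 16Gi
    management.workload.openshift.io/cores: "4000"   # hypothetical value; reflects host CPU capacity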
Additional resources
- For the recommended workload partitioning configuration for single-node OpenShift clusters, see Workload partitioning.