Chapter 22. Workload partitioning
To prevent platform processes from interrupting your applications, configure workload partitioning. This isolates OpenShift Container Platform services and infrastructure pods to a reserved set of CPUs, ensuring that the remaining compute resources are available exclusively for your customer workloads.
The minimum number of reserved CPUs required for cluster management is four CPU Hyper-Threads (HTs).
When workload partitioning is enabled, a node admission webhook can prevent incorrectly configured nodes from joining the cluster. With the feature enabled, the machine config pools for control plane nodes and compute nodes are supplied with the configurations that nodes use. Adding new nodes to these pools ensures that the nodes are correctly configured before they join the cluster.
Currently, nodes must have uniform configurations per machine config pool to ensure that correct CPU affinity is set across all nodes within that pool. After admission, nodes within the cluster identify themselves as supporting a new resource type called management.workload.openshift.io/cores and accurately report their CPU capacity. You can enable workload partitioning only during cluster installation, by adding the cpuPartitioningMode field to the install-config.yaml file.
When workload partitioning is enabled, the management.workload.openshift.io/cores resource allows the scheduler to correctly assign pods based on the cpushares capacity of the host, not just the default cpuset. This ensures more precise allocation of resources for workload partitioning scenarios.
Workload partitioning ensures that CPU requests and limits specified in the pod’s configuration are respected. In OpenShift Container Platform 4.16 or later, accurate CPU usage limits are set for platform pods through CPU partitioning. Because workload partitioning uses the extended resource type management.workload.openshift.io/cores, Kubernetes requires the values for requests and limits to be equal. However, the annotations modified by workload partitioning correctly reflect the desired limits.
Extended resources cannot be overcommitted, so request and limit must be equal if both are present in a container spec.
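For illustration only, the following sketch shows roughly how a platform pod that workload partitioning manages might look. The pod name, namespace, image, annotation value, and resource values are placeholders, and the exact annotations and values that the platform applies can vary between releases.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: example-platform-pod          # hypothetical platform pod
  namespace: openshift-example        # placeholder namespace
  annotations:
    # Marks the pod as a platform management workload (illustrative value).
    target.workload.openshift.io/management: '{"effect": "PreferredDuringScheduling"}'
spec:
  containers:
  - name: manager
    image: registry.example.com/manager:latest   # placeholder image
    resources:
      requests:
        # Extended resource used by the scheduler when workload
        # partitioning is enabled. Extended resources cannot be
        # overcommitted, so requests and limits must be equal.
        management.workload.openshift.io/cores: "4"
        memory: 128Mi
      limits:
        management.workload.openshift.io/cores: "4"
        memory: 128Mi
```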
22.1. Enabling workload partitioning
To pin cluster management pods to a specified set of CPUs, enable workload partitioning. This configuration ensures that management pods operate within the reserved CPU set defined in your performance profile, preventing them from consuming resources intended for customer workloads.
Consider additional post-installation Operators that use workload partitioning when calculating how many reserved CPU cores to set aside for the platform.
Workload partitioning isolates user workloads from platform workloads using standard Kubernetes scheduling capabilities.
You can enable workload partitioning only during cluster installation. You cannot disable workload partitioning post-installation. However, you can change the CPU configuration for reserved and isolated CPUs post-installation.
The procedure demonstrates enabling workload partitioning cluster-wide.
Procedure
In the install-config.yaml file, add the additional field cpuPartitioningMode and set it to AllNodes, as shown in the sketch after this step.
cpuPartitioningMode: Specifies the cluster to set up for CPU partitioning at install time. The default value is None, which means that no CPU partitioning is enabled at install time.
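A minimal sketch of an install-config.yaml file with workload partitioning enabled follows. The base domain, cluster name, and replica counts are placeholders; only the cpuPartitioningMode field is specific to this procedure.

```yaml
apiVersion: v1
baseDomain: example.com          # placeholder base domain
metadata:
  name: example-cluster          # placeholder cluster name
# Enables CPU partitioning on all nodes at install time.
# The default value, None, leaves CPU partitioning disabled.
cpuPartitioningMode: AllNodes
compute:
- name: worker
  replicas: 3                    # placeholder replica count
controlPlane:
  name: master
  replicas: 3                    # placeholder replica count
```

After installation, you can confirm that a node reports the management.workload.openshift.io/cores resource by reviewing the capacity listed in the output of oc describe node for one of the cluster nodes.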
22.2. Performance profiles and workload partitioning
To enable workload partitioning, apply a performance profile. This configuration specifies the isolated and reserved CPUs, ensuring that customer workloads run on dedicated cores without interruption from platform processes.
An appropriately configured performance profile specifies the isolated and reserved CPUs. Create a performance profile by using the Performance Profile Creator (PPC) tool.
Sample performance profile configuration
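The exact values depend on your hardware and on the output of the PPC tool. The following is a minimal sketch only; the profile name, CPU ranges, and selectors are placeholders, and the fields relevant to workload partitioning are described in the table that follows.

```yaml
apiVersion: performance.openshift.io/v2
kind: PerformanceProfile
metadata:
  name: openshift-node-workload-partitioning-worker   # placeholder name
spec:
  cpu:
    # Isolated CPUs run customer workloads. Keep Hyper-Threading
    # sibling pairs together and do not overlap with the reserved set.
    isolated: "4-47"                                   # placeholder range
    # Reserved CPUs run platform pods, system processes, kernel
    # threads, and system container threads.
    reserved: "0-3"                                    # placeholder range
  machineConfigPoolSelector:
    pools.operator.machineconfiguration.openshift.io/worker: ""
  nodeSelector:
    node-role.kubernetes.io/worker: ""
```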
| PerformanceProfile CR field | Description |
|---|---|
| spec.cpu.isolated | Set the isolated CPUs. Ensure all of the Hyper-Threading pairs match. Important: The reserved and isolated CPU pools must not overlap and together must span all available cores. CPU cores that are not accounted for cause undefined behaviour in the system. |
| spec.cpu.reserved | Set the reserved CPUs. When workload partitioning is enabled, system processes, kernel threads, and system container threads are restricted to these CPUs. All CPUs that are not isolated should be reserved. |