Chapter 18. Tuning hosted control planes for low latency with the performance profile
Tune hosted control planes for low latency by applying a performance profile. With the performance profile, you can restrict CPUs for infrastructure and application containers and configure huge pages, Hyper-Threading, and CPU partitioning for latency-sensitive processes.
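These settings correspond to fields in the PerformanceProfile custom resource. The following is a minimal illustrative sketch only; the profile name, CPU sets, and hugepage count are assumptions, and the procedure in this chapter generates a profile that matches your actual hardware:

apiVersion: performance.openshift.io/v2
kind: PerformanceProfile
metadata:
  name: example-performance-profile    # assumed name, for illustration only
spec:
  cpu:
    reserved: "0-1"    # CPUs kept for infrastructure containers and housekeeping (assumed)
    isolated: "2-7"    # CPUs restricted to latency-sensitive application containers (assumed)
  hugepages:
    defaultHugepagesSize: 1G
    pages:
    - size: 1G
      count: 4         # assumed number of 1G huge pages
  numa:
    topologyPolicy: single-numa-node

The Performance Profile Creator fills in these fields for you based on the must-gather data and the arguments you pass, so you typically do not write this file by hand.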
18.1. Creating a performance profile for hosted control planes
You can create a cluster performance profile by using the Performance Profile Creator (PPC) tool. The PPC is a function of the Node Tuning Operator.
The PPC combines information about your cluster with user-supplied configurations to generate a performance profile that is appropriate to your hardware, topology, and use-case.
The following is a high-level workflow for creating and applying a performance profile in your cluster:
- Gather information about your cluster using the must-gather command.
- Use the PPC tool to create a performance profile.
- Apply the performance profile to your cluster.
18.1.1. Gathering data about your hosted control planes cluster for the PPC
The Performance Profile Creator (PPC) tool requires must-gather data. As a cluster administrator, run the must-gather command to capture information about your cluster.
Prerequisites
- You have cluster-admin role access to the management cluster.
- You installed the OpenShift CLI (oc).
Procedure
Export the management cluster kubeconfig file by running the following command:

$ export MGMT_KUBECONFIG=<path_to_mgmt_kubeconfig>

List all node pools across all namespaces by running the following command:
$ oc --kubeconfig="$MGMT_KUBECONFIG" get np -A

Example output
NAMESPACE   NAME                     CLUSTER       DESIRED NODES   CURRENT NODES   AUTOSCALING   AUTOREPAIR   VERSION   UPDATINGVERSION   UPDATINGCONFIG   MESSAGE
clusters    democluster-us-east-1a   democluster   1               1               False         False        4.17.0    False             True
- The output shows the namespace clusters in the management cluster where the NodePool resource is defined.
- The name of the NodePool resource, for example democluster-us-east-1a.
- The HostedCluster this NodePool belongs to, for example democluster.
On the management cluster, run the following command to list available secrets:
$ oc get secrets -n clusters

Extract the kubeconfig file for the hosted cluster by running the following command:

$ oc get secret <secret_name> -n <cluster_namespace> -o jsonpath='{.data.kubeconfig}' | base64 -d > hosted-cluster-kubeconfig

Example
$ oc get secret democluster-admin-kubeconfig -n clusters -o jsonpath='{.data.kubeconfig}' | base64 -d > hosted-cluster-kubeconfig
To create a must-gather bundle for the hosted cluster, open a separate terminal window and run the following commands:

Export the hosted cluster kubeconfig file:

$ export HC_KUBECONFIG=<path_to_hosted_cluster_kubeconfig>
Example

$ export HC_KUBECONFIG=~/hostedcpkube/hosted-cluster-kubeconfig
Navigate to the directory where you want to store the must-gather data.

Gather the troubleshooting data for your hosted cluster:

$ oc --kubeconfig="$HC_KUBECONFIG" adm must-gather
Create a compressed file from the must-gather directory that was just created in your working directory. For example, on a computer that uses a Linux operating system, run the following command:

$ tar -czvf must-gather.tar.gz must-gather.local.1203869488012141147
18.1.2. Running the Performance Profile Creator on a hosted cluster using Podman
As a cluster administrator, you can use Podman with the Performance Profile Creator (PPC) tool to create a performance profile.
For more information about PPC arguments, see "Performance Profile Creator arguments".
The PPC tool is designed to be hosted-cluster aware. When it detects a hosted cluster from the must-gather data, it automatically takes the following actions:
- Recognizes that there is no machine config pool (MCP).
- Uses node pools as the source of truth for compute node configurations instead of MCPs.
- Does not require you to specify the node-pool-name value explicitly unless you want to target a specific pool.
The PPC uses the must-gather data from your hosted cluster to create the performance profile. If you make any changes to your cluster, such as relabeling a node targeted for performance configuration, you must re-create the must-gather data before running PPC again.
Prerequisites
- Access to the cluster as a user with the cluster-admin role.
- A hosted cluster is installed.
- Installation of Podman and the OpenShift CLI (oc).
- Access to the Node Tuning Operator image.
- Access to the must-gather data for your cluster.
Procedure
On the hosted cluster, use Podman to authenticate to registry.redhat.io by running the following command:

$ podman login registry.redhat.io

Username: <user_name>
Password: <password>

Create a performance profile on the hosted cluster by running the PPC tool. The example uses sample PPC arguments and values, which are described in the following list; a sketch of the full command follows the list:
1. Mounts the local directory where the output of an oc adm must-gather command was created into the container.
2. Specifies two reserved CPUs.
3. Disables the real-time kernel.
4. Disables splitting of reserved CPUs across NUMA nodes.
5. Specifies the NUMA topology policy. If installing the NUMA Resources Operator, this must be set to single-numa-node.
6. Specifies minimal latency at the cost of increased power consumption.
7. Specifies one offlined CPU.
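For reference, a PPC invocation that uses the arguments described above might look like the following sketch. The Node Tuning Operator image name and tag, the mount path, and the output file name are assumptions; substitute the Node Tuning Operator image that matches your cluster version:

$ podman run --entrypoint performance-profile-creator \
    -v <path_to_must_gather>:/must-gather:z \
    registry.redhat.io/openshift4/ose-cluster-node-tuning-rhel9-operator:v4.17 \
    --must-gather-dir-path /must-gather \
    --reserved-cpu-count=2 \
    --rt-kernel=false \
    --split-reserved-cpus-across-numa=false \
    --topology-manager-policy=single-numa-node \
    --power-consumption-mode=ultra-low-latency \
    --offlined-cpu-count=1 \
    > my-hosted-cp-performance-profile

In this sketch, the -v option corresponds to callout 1, and the PPC flags that follow the image name correspond, in order, to callouts 2 through 7.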
Review the created YAML file by running the following command:
$ cat my-hosted-cp-performance-profile
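For a hosted cluster, the generated file is a ConfigMap object that wraps the PerformanceProfile, which is what you apply to the management cluster in the next section. The following is a minimal sketch of the shape of that file; the ConfigMap name, namespace, data key, and profile values shown here are assumptions, and the actual contents reflect your hardware and the arguments you passed to the PPC:

apiVersion: v1
kind: ConfigMap
metadata:
  name: my-hosted-cp-performance-profile    # assumed name; use the name in your generated file
  namespace: clusters
data:
  tuning: |
    apiVersion: performance.openshift.io/v2
    kind: PerformanceProfile
    metadata:
      name: performance
    spec:
      cpu:
        reserved: "0-1"      # two reserved CPUs, matching --reserved-cpu-count=2 (values assumed)
        isolated: "2-38"     # assumed isolated CPU range
        offlined: "39"       # one offlined CPU, matching --offlined-cpu-count=1 (value assumed)
      numa:
        topologyPolicy: single-numa-node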
18.1.3. Configuring low-latency tuning in a hosted cluster
To set low latency with the performance profile on the nodes in your hosted cluster, you can use the Node Tuning Operator. In hosted control planes, you configure low-latency tuning by creating config maps that contain Tuned objects and referencing those config maps in your node pools. In this case, the Tuned object is a PerformanceProfile object that defines the performance profile you want to apply to the nodes in a node pool.
Procedure
Export the management cluster kubeconfig file by running the following command:

$ export MGMT_KUBECONFIG=<path_to_mgmt_kubeconfig>

Create the ConfigMap object in the management cluster by running the following command:

$ oc --kubeconfig="$MGMT_KUBECONFIG" apply -f my-hosted-cp-performance-profile.yaml
Edit the NodePool object in the clusters namespace, adding the spec.tuningConfig field and the name of the created performance profile in that field, by running the following command:

$ oc edit np -n clusters
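For example, the edited NodePool object might include a stanza like the following minimal sketch. The node pool name comes from the earlier examples; the config map name is an assumption, so set it to the metadata.name of the ConfigMap object that you applied in the previous step:

apiVersion: hypershift.openshift.io/v1beta1
kind: NodePool
metadata:
  name: democluster-us-east-1a
  namespace: clusters
spec:
  tuningConfig:
  - name: my-hosted-cp-performance-profile    # assumed config map name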
Note: You can reference the same profile in multiple node pools. In hosted control planes, the Node Tuning Operator appends a hash of the node pool name and namespace to the name of the Tuned custom resources to distinguish them.

After you make the changes, the system detects that a configuration change is required and starts a rolling update of the nodes in that pool to apply the new configuration.
Verification
List all node pools across all namespaces by running the following command:
$ oc --kubeconfig="$MGMT_KUBECONFIG" get np -A

Example output
NAMESPACE   NAME                     CLUSTER       DESIRED NODES   CURRENT NODES   AUTOSCALING   AUTOREPAIR   VERSION   UPDATINGVERSION   UPDATINGCONFIG   MESSAGE
clusters    democluster-us-east-1a   democluster   1               1               False         False        4.17.0    False             True
Note: The UPDATINGCONFIG field indicates whether the node pool is in the process of updating its configuration. During this update, the UPDATINGCONFIG field in the node pool's status becomes True. The new configuration is considered fully applied only when the UPDATINGCONFIG field returns to False.
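For example, you can watch the node pool list until the UPDATINGCONFIG column returns to False; the -w flag streams updates until you interrupt the command:

$ oc --kubeconfig="$MGMT_KUBECONFIG" get np -A -w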
List all config maps in the clusters-democluster namespace by running the following command:

$ oc --kubeconfig="$MGMT_KUBECONFIG" get cm -n clusters-democluster
The output shows that a kubeletconfig, kubeletconfig-performance-democluster-us-east-1a, and a performance profile, performance-democluster-us-east-1a, have been created. The Node Tuning Operator syncs the Tuned objects into the hosted cluster. You can verify which Tuned objects are defined and which profiles are applied to each node, as shown in the sketch at the end of this procedure.
List available secrets on the management cluster by running the following command:
$ oc get secrets -n clusters

Extract the kubeconfig file for the hosted cluster by running the following command:

$ oc get secret <secret_name> -n clusters -o jsonpath='{.data.kubeconfig}' | base64 -d > hosted-cluster-kubeconfig
Example

$ oc get secret democluster-admin-kubeconfig -n clusters -o jsonpath='{.data.kubeconfig}' | base64 -d > hosted-cluster-kubeconfig
Export the hosted cluster kubeconfig by running the following command:

$ export HC_KUBECONFIG=<path_to_hosted-cluster-kubeconfig>
Verify that the kubeletconfig is mirrored in the hosted cluster by running the following command:

$ oc --kubeconfig="$HC_KUBECONFIG" get cm -n openshift-config-managed | grep kubelet
Example output
kubelet-serving-ca                                 1      79m
kubeletconfig-performance-democluster-us-east-1a   1      15m
Verify that the single-numa-node policy is set on the hosted cluster by running the following command:

$ oc --kubeconfig="$HC_KUBECONFIG" get cm kubeletconfig-performance-democluster-us-east-1a -o yaml -n openshift-config-managed | grep single
Example output
topologyManagerPolicy: single-numa-node
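As mentioned earlier in this verification, you can also check which Tuned objects the Node Tuning Operator synced into the hosted cluster and which tuning profile is applied to each node. The following commands are a sketch that assumes the operator's default openshift-cluster-node-tuning-operator namespace:

$ oc --kubeconfig="$HC_KUBECONFIG" get tuned.tuned.openshift.io -n openshift-cluster-node-tuning-operator

$ oc --kubeconfig="$HC_KUBECONFIG" get profiles.tuned.openshift.io -n openshift-cluster-node-tuning-operator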