Chapter 2. Recommended host practices
This topic provides recommended host practices for OpenShift Container Platform.
2.1. Recommended node host practices
The OpenShift Container Platform node configuration file contains important options. For example, two parameters control the maximum number of pods that can be scheduled to a node: podsPerCore and maxPods.
When both options are in use, the lower of the two values limits the number of pods on a node. Exceeding these values can result in:
- Increased CPU utilization.
- Slow pod scheduling.
- Potential out-of-memory scenarios, depending on the amount of memory in the node.
- Exhausting the pool of IP addresses.
- Resource overcommitting, leading to poor user application performance.
In Kubernetes, a pod that holds a single container actually uses two containers: the second container sets up networking before the actual container starts. Therefore, a system running 10 pods actually has 20 containers running.
podsPerCore sets the number of pods the node can run based on the number of processor cores on the node. For example, if podsPerCore is set to 10 on a node with 4 processor cores, the maximum number of pods allowed on the node is 40.
kubeletConfig:
  podsPerCore: 10
Setting podsPerCore to 0 disables this limit. The default is 0. podsPerCore cannot exceed maxPods.
maxPods sets the number of pods the node can run to a fixed value, regardless of the properties of the node.
kubeletConfig:
  maxPods: 250
2.2. Creating a KubeletConfig CRD to edit kubelet parameters
The kubelet configuration is currently serialized as an Ignition configuration, so it can be directly edited. However, there is also a new kubelet-config-controller added to the Machine Config Controller (MCC). This allows you to create a KubeletConfig custom resource (CR) to edit the kubelet parameters.
Procedure
Run:
$ oc get machineconfig

This provides a list of the available machine configuration objects you can select. By default, the two kubelet-related configs are 01-master-kubelet and 01-worker-kubelet.

To check the current value of max pods per node, run:
# oc describe node <node-ip> | grep Allocatable -A6

Look for value: pods: <value>.

For example:
# oc describe node ip-172-31-128-158.us-east-2.compute.internal | grep Allocatable -A6

Example output
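The values in the output depend on the node; the following is an illustrative sketch only, where the pods: line shows the current per-node limit:

Allocatable:
 attachable-volumes-aws-ebs:  25
 cpu:                         3500m
 hugepages-1Gi:               0
 hugepages-2Mi:               0
 memory:                      15341844Ki
 pods:                        250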
To set the max pods per node on the worker nodes, create a custom resource file that contains the kubelet configuration, for example change-maxPods-cr.yaml, as sketched after the next paragraph.

The rate at which the kubelet talks to the API server depends on queries per second (QPS) and burst values. The default values, 50 for kubeAPIQPS and 100 for kubeAPIBurst, are good enough if there are limited pods running on each node. Updating the kubelet QPS and burst rates is recommended if there are enough CPU and memory resources on the node.
Run:

$ oc label machineconfigpool worker custom-kubelet=large-pods

Run:
$ oc create -f change-maxPods-cr.yaml

Run:
$ oc get kubeletconfig

This should return set-max-pods.

Depending on the number of worker nodes in the cluster, wait for the worker nodes to be rebooted one by one. For a cluster with 3 worker nodes, this could take about 10 to 15 minutes.
Check for maxPods changing for the worker nodes:

$ oc describe node

Verify the change by running:
$ oc get kubeletconfigs set-max-pods -o yaml

This should show a status of True and type: Success.
By default, only one machine is allowed to be unavailable when applying the kubelet-related configuration to the available worker nodes. For a large cluster, it can take a long time for the configuration change to be reflected. At any time, you can adjust the number of machines that are updating to speed up the process.

Procedure
Run:
$ oc edit machineconfigpool worker

Set maxUnavailable to the desired value.

spec:
  maxUnavailable: <node_count>

Important: When setting the value, consider the number of worker nodes that can be unavailable without affecting the applications running on the cluster.
2.3. Control plane node sizing
The control plane node resource requirements depend on the number of nodes in the cluster. The following control plane node size recommendations are based on the results of control plane density focused testing. The control plane tests create the following objects across the cluster in each of the namespaces depending on the node counts:
- 12 image streams
- 3 build configurations
- 6 builds
- 1 deployment with 2 pod replicas mounting two secrets each
- 2 deployments with 1 pod replica mounting two secrets
- 3 services pointing to the previous deployments
- 3 routes pointing to the previous deployments
- 10 secrets, 2 of which are mounted by the previous deployments
- 10 config maps, 2 of which are mounted by the previous deployments
| Number of worker nodes | Cluster load (namespaces) | CPU cores | Memory (GB) |
|---|---|---|---|
| 25  | 500  | 4  | 16 |
| 100 | 1000 | 8  | 32 |
| 250 | 4000 | 16 | 96 |
On a cluster with three masters or control plane nodes, the CPU and memory usage spikes when one of the nodes is stopped, rebooted, or fails, because the remaining two nodes must handle the load in order to remain highly available. This is also expected during upgrades, because the masters are cordoned, drained, and rebooted serially to apply the operating system updates as well as the control plane Operator updates. To avoid cascading failures on large and dense clusters, keep the overall resource usage on the master nodes to at most half of all available capacity to handle the resource usage spikes. Increase the CPU and memory on the master nodes accordingly.
The node sizing varies depending on the number of nodes and object counts in the cluster. It also depends on whether the objects are actively being created on the cluster. During object creation, the control plane is more active in terms of resource usage compared to when the objects are in the running phase.
If you used an installer-provisioned infrastructure installation method, you cannot modify the control plane node size in a running OpenShift Container Platform 4.5 cluster. Instead, you must estimate your total node count and use the suggested control plane node size during installation.
The recommendations are based on the data points captured on OpenShift Container Platform clusters with OpenShiftSDN as the network plug-in.
In OpenShift Container Platform 4.5, half of a CPU core (500 millicore) is now reserved by the system by default compared to OpenShift Container Platform 3.11 and previous versions. The sizes are determined taking that into consideration.
2.4. Recommended etcd practices
For large and dense clusters, etcd can suffer from poor performance if the keyspace grows excessively large and exceeds the space quota. Periodic maintenance of etcd, including defragmentation, must be performed to free up space in the data store. It is highly recommended that you monitor Prometheus for etcd metrics and defragment when required, before etcd raises a cluster-wide alarm that puts the cluster into a maintenance mode that only accepts key reads and deletes. Some of the key metrics to monitor are etcd_server_quota_backend_bytes, which is the current quota limit; etcd_mvcc_db_total_size_in_use_in_bytes, which indicates the actual database usage after a history compaction; and etcd_debugging_mvcc_db_total_size_in_bytes, which shows the database size including free space waiting for defragmentation. Instructions on defragging etcd can be found in the Defragmenting etcd data section.
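As a sketch of how these metrics can be combined, a Prometheus expression along the following lines, built only from the metrics named above, reports how close each member's physical database size is to the quota as a percentage; the exact expression and any alerting threshold you choose are up to you:

(etcd_debugging_mvcc_db_total_size_in_bytes / etcd_server_quota_backend_bytes) * 100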
Etcd writes data to disk, so its performance strongly depends on disk performance. Etcd persists proposals on disk. Slow disks and disk activity from other processes can cause long fsync latencies, which cause etcd to miss heartbeats and fail to commit new proposals to disk on time, resulting in request timeouts and temporary leader loss. It is highly recommended to run etcd on machines backed by SSD/NVMe disks with low latency and high throughput.
Some of the key metrics to monitor on a deployed OpenShift Container Platform cluster are the p99 of etcd disk write ahead log (WAL) duration and the number of etcd leader changes. Use Prometheus to track these metrics: etcd_disk_wal_fsync_duration_seconds_bucket reports the etcd disk fsync duration, and etcd_server_leader_changes_seen_total reports the leader changes. To rule out a slow disk and confirm that the disk is reasonably fast, the 99th percentile of etcd_disk_wal_fsync_duration_seconds_bucket should be less than 10 ms.
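A minimal query sketch for that percentile, in the same style as the peer-latency query shown later in this section; the 5m rate window and the per-instance aggregation are illustrative choices:

histogram_quantile(0.99, sum by (instance, le) (rate(etcd_disk_wal_fsync_duration_seconds_bucket[5m])))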
Fio, an I/O benchmarking tool, can be used to validate the hardware for etcd before or after creating the OpenShift cluster. Run fio and analyze the results as follows. This assumes that a container runtime such as podman or docker is installed on the machine under test and that /var/lib/etcd, the path where etcd writes its data, exists.
Procedure
Run the following if using podman:
$ sudo podman run --volume /var/lib/etcd:/var/lib/etcd:Z quay.io/openshift-scale/etcd-perf
Alternatively, run the following if using docker:
$ sudo docker run --volume /var/lib/etcd:/var/lib/etcd:Z quay.io/openshift-scale/etcd-perf
The output reports whether the disk is fast enough to host etcd by checking whether the 99th percentile of the fsync metric captured from the run is less than 10 ms.
Etcd replicates the requests among all the members, so its performance strongly depends on network input/output (I/O) latency. High network latencies result in etcd heartbeats taking longer than the election timeout, which leads to leader elections that are disruptive to the cluster. A key metric to monitor on a deployed OpenShift Container Platform cluster is the 99th percentile of etcd network peer latency on each etcd cluster member. Use Prometheus to track the metric: histogram_quantile(0.99, rate(etcd_network_peer_round_trip_time_seconds_bucket[2m])) reports the round trip time for etcd to finish replicating the client requests between the members; it should be less than 50 ms.
2.5. Defragmenting etcd data
Manual defragmentation must be performed periodically to reclaim disk space after etcd history compaction and other events cause disk fragmentation.
History compaction is performed automatically every five minutes and leaves gaps in the back-end database. This fragmented space is available for use by etcd, but is not available to the host file system. You must defragment etcd to make this space available to the host file system.
Because etcd writes data to disk, its performance strongly depends on disk performance. Consider defragmenting etcd every month, twice a month, or as needed for your cluster. You can also monitor the etcd_db_total_size_in_bytes metric to determine whether defragmentation is necessary.
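As a rough sketch, and assuming both metrics carry matching label sets on your cluster, the difference between the on-disk size and the in-use size approximates the space that defragmentation could reclaim on each member:

etcd_db_total_size_in_bytes - etcd_mvcc_db_total_size_in_use_in_bytes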
Defragmenting etcd is a blocking action. The etcd member does not respond until defragmentation is complete. For this reason, wait at least one minute between defragmentation actions on each of the pods to allow the cluster to recover.
Follow this procedure to defragment etcd data on each etcd member.
Prerequisites
- You have access to the cluster as a user with the cluster-admin role.
Procedure
Determine which etcd member is the leader, because the leader should be defragmented last.
Get the list of etcd pods:
$ oc get pods -n openshift-etcd -o wide | grep etcd

Example output
etcd-ip-10-0-159-225.example.redhat.com   3/3   Running   0   175m   10.0.159.225   ip-10-0-159-225.example.redhat.com   <none>   <none>
etcd-ip-10-0-191-37.example.redhat.com    3/3   Running   0   173m   10.0.191.37    ip-10-0-191-37.example.redhat.com    <none>   <none>
etcd-ip-10-0-199-170.example.redhat.com   3/3   Running   0   176m   10.0.199.170   ip-10-0-199-170.example.redhat.com   <none>   <none>

Choose a pod and run the following command to determine which etcd member is the leader:
$ oc rsh -n openshift-etcd etcd-ip-10-0-159-225.us-west-1.compute.internal etcdctl endpoint status --cluster -w table

The IS LEADER column of the output shows which endpoint is the leader. In this example, the https://10.0.199.170:2379 endpoint is the leader. Matching this endpoint with the output of the previous step, the pod name of the leader is etcd-ip-10-0-199-170.example.redhat.com.
Defragment an etcd member.
Connect to the running etcd container, passing in the name of a pod that is not the leader:
$ oc rsh -n openshift-etcd etcd-ip-10-0-159-225.example.redhat.com

Unset the ETCDCTL_ENDPOINTS environment variable:

sh-4.4# unset ETCDCTL_ENDPOINTS
Defragment the etcd member:

sh-4.4# etcdctl --command-timeout=30s --endpoints=https://localhost:2379 defrag

Example output
Finished defragmenting etcd member[https://localhost:2379]

If a timeout error occurs, increase the value for --command-timeout until the command succeeds.

Verify that the database size was reduced:
sh-4.4# etcdctl endpoint status -w table --cluster

In this example, the database size for this etcd member is now 41 MB, as opposed to the starting size of 104 MB.
Repeat these steps to connect to each of the other etcd members and defragment them. Always defragment the leader last.
Wait at least one minute between defragmentation actions to allow the etcd pod to recover. Until the etcd pod recovers, the etcd member will not respond.
If any NOSPACE alarms were triggered due to the space quota being exceeded, clear them.

Check if there are any NOSPACE alarms:

sh-4.4# etcdctl alarm list

Example output
memberID:12345678912345678912 alarm:NOSPACE

Clear the alarms:

sh-4.4# etcdctl alarm disarm
2.6. OpenShift Container Platform infrastructure components
The following infrastructure workloads do not incur OpenShift Container Platform worker subscriptions:
- Kubernetes and OpenShift Container Platform control plane services that run on masters
- The default router
- The integrated container image registry
- The cluster metrics collection, or monitoring service, including components for monitoring user-defined projects
- Cluster aggregated logging
- Service brokers
- Red Hat Quay
- Red Hat OpenShift Container Storage
- Red Hat Advanced Cluster Management
Any node that runs any other container, pod, or component is a worker node that your subscription must cover.
2.7. Moving the monitoring solution
By default, the Prometheus Cluster Monitoring stack, which contains Prometheus, Grafana, and AlertManager, is deployed to provide cluster monitoring. It is managed by the Cluster Monitoring Operator. To move its components to different machines, you create and apply a custom config map.
Procedure
Save a ConfigMap definition as the cluster-monitoring-configmap.yaml file, as sketched below. Running this config map forces the components of the monitoring stack to redeploy to infrastructure nodes.
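A minimal sketch of such a definition, assuming the cluster-monitoring-config name in the openshift-monitoring namespace that the Cluster Monitoring Operator watches; the component keys shown are examples and can vary by release, so add one nodeSelector entry for each monitoring component you want to move:

apiVersion: v1
kind: ConfigMap
metadata:
  name: cluster-monitoring-config
  namespace: openshift-monitoring
data:
  config.yaml: |
    alertmanagerMain:
      nodeSelector:
        node-role.kubernetes.io/infra: ""
    prometheusK8s:
      nodeSelector:
        node-role.kubernetes.io/infra: ""
    prometheusOperator:
      nodeSelector:
        node-role.kubernetes.io/infra: ""
    grafana:
      nodeSelector:
        node-role.kubernetes.io/infra: ""
    k8sPrometheusAdapter:
      nodeSelector:
        node-role.kubernetes.io/infra: ""
    kubeStateMetrics:
      nodeSelector:
        node-role.kubernetes.io/infra: ""
    telemeterClient:
      nodeSelector:
        node-role.kubernetes.io/infra: ""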
Apply the new config map:
$ oc create -f cluster-monitoring-configmap.yaml

Watch the monitoring pods move to the new machines:
$ watch 'oc get pod -n openshift-monitoring -o wide'

If a component has not moved to the infra node, delete the pod with this component:

$ oc delete pod -n openshift-monitoring <pod>

The component from the deleted pod is re-created on the infra node.
2.8. Moving the default registry
You configure the registry Operator to deploy its pods to different nodes.
Prerequisites
- Configure additional machine sets in your OpenShift Container Platform cluster.
Procedure
View the config/instance object:

$ oc get configs.imageregistry.operator.openshift.io/cluster -o yaml

Edit the config/instance object:

$ oc edit configs.imageregistry.operator.openshift.io/cluster

Add the following lines of text to the spec section of the object:

nodeSelector:
  node-role.kubernetes.io/infra: ""

Verify the registry pod has been moved to the infrastructure node.
Run the following command to identify the node where the registry pod is located:
$ oc get pods -o wide -n openshift-image-registry

Confirm the node has the label you specified:
$ oc describe node <node_name>

Review the command output and confirm that node-role.kubernetes.io/infra is in the LABELS list.
2.9. Moving the router
You can deploy the router pod to a different machine set. By default, the pod is deployed to a worker node.
Prerequisites
- Configure additional machine sets in your OpenShift Container Platform cluster.
Procedure
View the IngressController custom resource for the router Operator:

$ oc get ingresscontroller default -n openshift-ingress-operator -o yaml

Edit the ingresscontroller resource and change the nodeSelector to use the infra label:

$ oc edit ingresscontroller default -n openshift-ingress-operator

Add the nodeSelector stanza that references the infra label to the spec section, as shown:

spec:
  nodePlacement:
    nodeSelector:
      matchLabels:
        node-role.kubernetes.io/infra: ""

Confirm that the router pod is running on the infra node.

View the list of router pods and note the node name of the running pod:
$ oc get pod -n openshift-ingress -o wide

Example output
NAME                              READY   STATUS        RESTARTS   AGE   IP           NODE                           NOMINATED NODE   READINESS GATES
router-default-86798b4b5d-bdlvd   1/1     Running       0          28s   10.130.2.4   ip-10-0-217-226.ec2.internal   <none>           <none>
router-default-955d875f4-255g8    0/1     Terminating   0          19h   10.129.2.4   ip-10-0-148-172.ec2.internal   <none>           <none>

In this example, the running pod is on the ip-10-0-217-226.ec2.internal node.

View the node status of the running pod:
$ oc get node <node_name>

Example output
NAME                           STATUS   ROLES          AGE   VERSION
ip-10-0-217-226.ec2.internal   Ready    infra,worker   17h   v1.18.3

Because the role list includes infra, the pod is running on the correct node.
2.10. Infrastructure node sizing
The infrastructure node resource requirements depend on the cluster age, nodes, and objects in the cluster, as these factors can lead to an increase in the number of metrics or time series in Prometheus. The following infrastructure node size recommendations are based on the results of cluster maximums and control plane density focused testing.
| Number of worker nodes | CPU cores | Memory (GB) |
|---|---|---|
| 25  | 4  | 32  |
| 100 | 8  | 64  |
| 250 | 32 | 192 |
| 500 | 32 | 192 |
These sizing recommendations are based on scale tests, which create a large number of objects across the cluster. These tests include reaching some of the cluster maximums. In the case of 250 and 500 node counts on an OpenShift Container Platform 4.5 cluster, these maximums are 10000 namespaces with 61000 pods, 10000 deployments, 181000 secrets, 400 config maps, and so on. Prometheus is a highly memory-intensive application; the resource usage depends on various factors, including the number of nodes, the number of objects, the Prometheus metrics scraping interval, the number of metrics or time series, and the age of the cluster. The disk size also depends on the retention period. You must take these factors into consideration and size the nodes accordingly.
The sizing recommendations are applicable only to the infrastructure components that are installed during cluster installation: Prometheus, Router, and Registry. Logging is a day-two operation, and the recommendations do not take it into account.
In OpenShift Container Platform 4.5, half of a CPU core (500 millicore) is now reserved by the system by default compared to OpenShift Container Platform 3.11 and previous versions. This influences the stated sizing recommendations.