Chapter 3. Ensuring reliable etcd performance and scalability
To ensure optimal performance with etcd, it’s important to understand the conditions that affect performance, including node scaling, leader election, log replication, tuning, latency, network jitter, peer round trip time, database size, and Kubernetes API transaction rates.
3.1. Leader election and log replication of etcd
etcd is a consistent, distributed key-value store that operates as a cluster of replicated nodes. Following the Raft algorithm, etcd operates by electing one node as the leader and the others as followers. The leader maintains the system’s current state and ensures that the followers are up-to-date.
The leader node is responsible for log replication. It handles incoming write transactions from the client and writes a Raft log entry that it then broadcasts to the followers.
When an etcd client, such as the kube-apiserver, connects to an etcd member and requests an action that requires a quorum, such as writing a value, and the etcd member is a follower, the follower returns a message indicating that the transaction must be sent to the leader.
When the etcd client requests an action that requires a quorum from the leader, the leader keeps the client connection open while it writes the local Raft log, broadcasts the log to the followers, and waits for the majority of the followers to acknowledge that they committed the log without failures. Only then does the leader send the acknowledgment to the etcd client and close the session. If failure notifications are received from the followers and the majority fails to reach consensus, the leader returns an error message to the client and closes the session.
3.2. Node scaling for etcd
In general, clusters must have 3 control plane nodes. However, if your cluster is installed on a bare metal platform, it can have up to 5 control plane nodes. If an existing bare-metal cluster has fewer than 5 control plane nodes, you can scale the cluster up as a postinstallation task.
For example, to scale from 3 to 4 control plane nodes after installation, you can add a host and install it as a control plane node. Then, the etcd Operator scales accordingly to account for the additional control plane node.
Scaling a cluster to 4 or 5 control plane nodes is available only on bare metal platforms.
For more information about how to scale control plane nodes by using the Assisted Installer, see "Adding hosts with the API" and "Replacing a control plane node in a healthy cluster".
While adding control plane nodes can increase reliability and availability, it can decrease throughput and increase latency, affecting performance.
The following table shows failure tolerance for clusters of different sizes:
Cluster size | Majority | Failure tolerance |
---|---|---|
1 node | 1 | 0 |
3 nodes | 2 | 1 |
4 nodes | 3 | 1 |
5 nodes | 3 | 2 |
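The values in this table follow from the Raft quorum arithmetic: for a cluster of n members, the majority (quorum) is floor(n/2) + 1 and the failure tolerance is n minus the majority. For example, with 5 nodes the majority is floor(5/2) + 1 = 3, so the cluster tolerates 5 - 3 = 2 node failures.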
For more information about recovering from quorum loss, see "Restoring to a previous cluster state".
3.3. Effects of disk latency on etcd
An etcd cluster is sensitive to disk latencies. To understand the disk latency that is experienced by etcd in your control plane environment, run the fio test suite.
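A minimal fio invocation that approximates the etcd write pattern (small sequential writes, each followed by an fdatasync) is sketched below; the target directory, file size, and block size are illustrative assumptions rather than the exact parameters of the packaged suite:

$ fio --rw=write --ioengine=sync --fdatasync=1 \
    --directory=<etcd_disk_mount_point> --size=22m --bs=2300 \
    --name=etcd-fsync-test

The fsync latency percentiles in the resulting fio report are the values to compare against the 20 ms threshold shown in the following examples.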
Make sure that the final report classifies the disk as appropriate for etcd, as shown in the following example:
...
99th percentile of fsync is 5865472 ns
99th percentile of the fsync is within the recommended threshold: - 20 ms, the disk can be used to host etcd
...
When a high latency disk is used, a message states that the disk is not recommended for etcd, as shown in the following example:
...
99th percentile of fsync is 15865472 ns
99th percentile of the fsync is greater than the recommended value which is 20 ms, faster disks are recommended to host etcd for better performance
...
In cluster deployments that span multiple data centers, using disks for etcd that do not meet the recommended latency increases the chance of service-affecting failures and dramatically reduces the network latency that the control plane can sustain.
3.4. Monitoring consensus latency for etcd
By using the etcdctl CLI, you can monitor the latency for reaching consensus as experienced by etcd. You must identify one of the etcd pods and then retrieve the endpoint health.
This procedure, which validates and monitors cluster health, can be run only on an active cluster.
Prerequisites
- During planning for cluster deployment, you completed the disk and network tests.
Procedure
Enter the following command:
# oc get pods -n openshift-etcd -l app=etcd
Example output
NAME      READY   STATUS    RESTARTS   AGE
etcd-m0   4/4     Running   4          8h
etcd-m1   4/4     Running   4          8h
etcd-m2   4/4     Running   4          8h
Enter the following command. To better understand the etcd latency for consensus, you can run this command on a precise watch cycle for a few minutes to observe that the numbers remain below the ~66 ms threshold. The closer the consensus time is to 100 ms, the more likely the cluster will experience service-affecting events and instability.
# oc exec -ti etcd-m0 -- etcdctl endpoint health -w table
Example output
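The endpoints and timings below are illustrative placeholders; the TOOK column is the value to watch, because it reflects how long each endpoint took to complete the health check:

+----------------------------+--------+-------------+-------+
|          ENDPOINT          | HEALTH |    TOOK     | ERROR |
+----------------------------+--------+-------------+-------+
| https://198.18.111.12:2379 |   true |  8.921305ms |       |
| https://198.18.111.13:2379 |   true |  9.272112ms |       |
| https://198.18.111.14:2379 |   true | 10.380913ms |       |
+----------------------------+--------+-------------+-------+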
Enter the following command:
# oc exec -ti etcd-m0 -- watch -dp -c etcdctl endpoint health -w table
3.5. Moving etcd to a different disk
You can move etcd from a shared disk to a separate disk to prevent or resolve performance issues.
The Machine Config Operator (MCO) is responsible for mounting a secondary disk for OpenShift Container Platform 4.20 container storage.
This encoded script only supports device names for the following device types:
- SCSI or SATA: /dev/sd*
- Virtual device: /dev/vd*
- NVMe: /dev/nvme*[0-9]*n*
Limitations
- When the new disk is attached to the cluster, the etcd database is part of the root mount. It is not part of the secondary disk or the intended disk when the primary node is recreated. As a result, the primary node will not create a separate /var/lib/etcd mount.
Prerequisites
- You have a backup of your cluster’s etcd data.
- You have installed the OpenShift CLI (oc).
- You have access to the cluster with cluster-admin privileges.
- Add additional disks before uploading the machine configuration.
- The MachineConfigPool must match metadata.labels[machineconfiguration.openshift.io/role]. This applies to a controller, worker, or a custom pool.
This procedure does not move parts of the root file system, such as /var/, to another disk or partition on an installed node.
This procedure is not supported when using control plane machine sets.
Procedure
Attach the new disk to the cluster and verify that the disk is detected in the node by running the lsblk command in a debug shell:

$ oc debug node/<node_name>
# lsblk
Note the device name of the new disk reported by the lsblk command.

Create the following script and name it etcd-find-secondary-device.sh:
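A minimal sketch of such a script follows. It assumes the goal is to find an unused block device that matches the glob, create an XFS filesystem on it, and label it var-lib-etcd so that a systemd mount unit can mount it at /var/lib/etcd; the exact script in the product documentation may differ:

#!/bin/bash
set -uo pipefail

# Sketch only: look for a block device that matches the glob and has no
# existing filesystem, format it as XFS, and label it so that a systemd
# mount unit can mount it at /var/lib/etcd.
for device in <device_type_glob>; do   # 1
  if ! /usr/sbin/blkid "${device}" &> /dev/null; then
    echo "secondary device found ${device}"
    mkfs.xfs -L var-lib-etcd -f "${device}"
    udevadm settle
    touch /etc/var-lib-etcd-mount
    exit 0
  fi
done

echo "Couldn't find a secondary block device" >&2
exit 77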
1. Replace <device_type_glob> with a shell glob for your block device type. For SCSI or SATA drives, use /dev/sd*; for virtual drives, use /dev/vd*; for NVMe drives, use /dev/nvme*[0-9]*n*.
Create a base64-encoded string from the etcd-find-secondary-device.sh script and note its contents:

$ base64 -w0 etcd-find-secondary-device.sh
Create a MachineConfig YAML file named etcd-mc.yml with contents such as the following:
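A sketch of what such a MachineConfig might contain is shown below. The Ignition version, file path, unit names, and unit contents are assumptions for illustration; a production configuration would also need to copy any existing etcd data onto the new mount and restore SELinux contexts, which this sketch omits:

apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  labels:
    machineconfiguration.openshift.io/role: master
  name: 98-var-lib-etcd
spec:
  config:
    ignition:
      version: 3.4.0
    storage:
      files:
        - path: /etc/find-secondary-device
          mode: 0755
          contents:
            source: data:text/plain;charset=utf-8;base64,<encoded_etcd_find_secondary_device_script>   # 1
    systemd:
      units:
        - name: find-secondary-device.service
          enabled: true
          contents: |
            [Unit]
            Description=Find secondary device for etcd
            DefaultDependencies=false
            After=systemd-udev-settle.service
            Before=local-fs-pre.target
            ConditionPathExists=!/etc/var-lib-etcd-mount

            [Service]
            RemainAfterExit=yes
            ExecStart=/etc/find-secondary-device
            RestartSec=30
            Restart=on-failure

            [Install]
            WantedBy=multi-user.target
        - name: var-lib-etcd.mount
          enabled: true
          contents: |
            [Unit]
            Before=local-fs.target

            [Mount]
            What=/dev/disk/by-label/var-lib-etcd
            Where=/var/lib/etcd
            Type=xfs
            TimeoutSec=120s

            [Install]
            RequiredBy=local-fs.target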
1. Replace <encoded_etcd_find_secondary_device_script> with the encoded script contents that you noted.
Apply the created MachineConfig YAML file:

$ oc create -f etcd-mc.yml
Verification steps
Run the grep /var/lib/etcd /proc/mounts command in a debug shell for the node to ensure that the disk is mounted:

$ oc debug node/<node_name>

# grep -w "/var/lib/etcd" /proc/mounts
Example output
/dev/sdb /var/lib/etcd xfs rw,seclabel,relatime,attr2,inode64,logbufs=8,logbsize=32k,noquota 0 0
3.6. Defragmenting etcd data
For large and dense clusters, etcd can suffer from poor performance if the keyspace grows too large and exceeds the space quota. Periodically maintain and defragment etcd to free up space in the data store. Monitor Prometheus for etcd metrics and defragment it when required; otherwise, etcd can raise a cluster-wide alarm that puts the cluster into a maintenance mode that accepts only key reads and deletes.
Monitor these key metrics:
- etcd_server_quota_backend_bytes, which is the current quota limit
- etcd_mvcc_db_total_size_in_use_in_bytes, which indicates the actual database usage after a history compaction
- etcd_mvcc_db_total_size_in_bytes, which shows the database size, including free space waiting for defragmentation
Defragment etcd data to reclaim disk space after events that cause disk fragmentation, such as etcd history compaction.
History compaction is performed automatically every five minutes and leaves gaps in the back-end database. This fragmented space is available for use by etcd, but is not available to the host file system. You must defragment etcd to make this space available to the host file system.
Defragmentation occurs automatically, but you can also trigger it manually.
Automatic defragmentation is good for most cases, because the etcd Operator uses cluster information to determine the most efficient operation for the user.
3.6.1. Automatic defragmentation
The etcd Operator automatically defragments disks. No manual intervention is needed.
Verify that the defragmentation process is successful by viewing one of these logs:
- etcd logs
- cluster-etcd-operator pod
- operator status error log
Automatic defragmentation can cause leader election failure in various OpenShift core components, such as the Kubernetes controller manager, which triggers a restart of the failing component. The restart is harmless and either triggers failover to the next running instance or the component resumes work again after the restart.
Example log output for successful defragmentation
etcd member has been defragmented: <member_name>, memberID: <member_id>
Example log output for unsuccessful defragmentation
failed defrag on member: <member_name>, memberID: <member_id>: <error_message>
3.6.2. Manual defragmentation
A Prometheus alert indicates when you need to use manual defragmentation. The alert is displayed in two cases:
- When etcd uses more than 50% of its available space for more than 10 minutes
- When etcd is actively using less than 50% of its total database size for more than 10 minutes
You can also determine whether defragmentation is needed by checking the etcd database size in MB that will be freed by defragmentation with the PromQL expression: (etcd_mvcc_db_total_size_in_bytes - etcd_mvcc_db_total_size_in_use_in_bytes)/1024/1024
Defragmenting etcd is a blocking action. The etcd member will not respond until defragmentation is complete. For this reason, wait at least one minute between defragmentation actions on each of the pods to allow the cluster to recover.
Follow this procedure to defragment etcd data on each etcd member.
Prerequisites
- You have access to the cluster as a user with the cluster-admin role.
Procedure
Determine which etcd member is the leader, because the leader should be defragmented last.
Get the list of etcd pods:
$ oc -n openshift-etcd get pods -l k8s-app=etcd -o wide
Example output
etcd-ip-10-0-159-225.example.redhat.com   3/3   Running   0   175m   10.0.159.225   ip-10-0-159-225.example.redhat.com   <none>   <none>
etcd-ip-10-0-191-37.example.redhat.com    3/3   Running   0   173m   10.0.191.37    ip-10-0-191-37.example.redhat.com    <none>   <none>
etcd-ip-10-0-199-170.example.redhat.com   3/3   Running   0   176m   10.0.199.170   ip-10-0-199-170.example.redhat.com   <none>   <none>
Choose a pod and run the following command to determine which etcd member is the leader:
$ oc rsh -n openshift-etcd etcd-ip-10-0-159-225.example.redhat.com etcdctl endpoint status --cluster -w table
Example output
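The member IDs, versions, and sizes in the following table are placeholders, with the endpoint and database size chosen to match the surrounding text:

+---------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
|         ENDPOINT          |        ID        | VERSION | DB SIZE | IS LEADER | IS LEARNER | RAFT TERM | RAFT INDEX | RAFT APPLIED INDEX | ERRORS |
+---------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
| https://10.0.159.225:2379 | b10b1b20076add2b |  3.5.x  |  104 MB |     false |      false |         7 |      91624 |              91624 |        |
|  https://10.0.191.37:2379 | 3de2b07250e2cff2 |  3.5.x  |  104 MB |     false |      false |         7 |      91624 |              91624 |        |
| https://10.0.199.170:2379 | 8bb8c3f0e5f5d1a7 |  3.5.x  |  104 MB |      true |      false |         7 |      91624 |              91624 |        |
+---------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+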
Based on the IS LEADER column of this output, the https://10.0.199.170:2379 endpoint is the leader. Matching this endpoint with the output of the previous step, the pod name of the leader is etcd-ip-10-0-199-170.example.redhat.com.
Defragment an etcd member.
Connect to the running etcd container, passing in the name of a pod that is not the leader:
$ oc rsh -n openshift-etcd etcd-ip-10-0-159-225.example.redhat.com
Unset the ETCDCTL_ENDPOINTS environment variable:

sh-4.4# unset ETCDCTL_ENDPOINTS
Defragment the etcd member:

sh-4.4# etcdctl --command-timeout=30s --endpoints=https://localhost:2379 defrag
Example output
Finished defragmenting etcd member[https://localhost:2379]
If a timeout error occurs, increase the value for --command-timeout until the command succeeds.

Verify that the database size was reduced:
sh-4.4# etcdctl endpoint status -w table --cluster
Example output
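Again the values are placeholders; note the reduced DB SIZE on the member that was just defragmented:

+---------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
|         ENDPOINT          |        ID        | VERSION | DB SIZE | IS LEADER | IS LEARNER | RAFT TERM | RAFT INDEX | RAFT APPLIED INDEX | ERRORS |
+---------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
| https://10.0.159.225:2379 | b10b1b20076add2b |  3.5.x  |   41 MB |     false |      false |         7 |      91624 |              91624 |        |
|  https://10.0.191.37:2379 | 3de2b07250e2cff2 |  3.5.x  |  104 MB |     false |      false |         7 |      91624 |              91624 |        |
| https://10.0.199.170:2379 | 8bb8c3f0e5f5d1a7 |  3.5.x  |  104 MB |      true |      false |         7 |      91624 |              91624 |        |
+---------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+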
This example shows that the database size for this etcd member is now 41 MB as opposed to the starting size of 104 MB.
Repeat these steps to connect to each of the other etcd members and defragment them. Always defragment the leader last.
Wait at least one minute between defragmentation actions to allow the etcd pod to recover. Until the etcd pod recovers, the etcd member will not respond.
If any NOSPACE alarms were triggered due to the space quota being exceeded, clear them.

Check if there are any NOSPACE alarms:
sh-4.4# etcdctl alarm list
Example output
memberID:12345678912345678912 alarm:NOSPACE
Clear the alarms:
sh-4.4# etcdctl alarm disarm
3.7. Setting tuning parameters for etcd
You can set the control plane hardware speed to "Standard", "Slower", or the default, which is "".
The default setting allows the system to decide which speed to use. This value enables upgrades from versions where this feature does not exist, as the system can select values from previous versions.
By selecting one of the other values, you are overriding the default. If you see many leader elections due to timeouts or missed heartbeats and your system is set to "" or "Standard", set the hardware speed to "Slower" to make the system more tolerant to the increased latency.
3.7.1. Changing hardware speed tolerance
To change the hardware speed tolerance for etcd, complete the following steps.
Procedure
Check to see what the current value is by entering the following command:
$ oc describe etcd/cluster | grep "Control Plane Hardware Speed"
Example output
Control Plane Hardware Speed: <VALUE>
Note: If the output is empty, the field has not been set and should be considered as the default ("").
Change the value by entering the following command. Replace <value> with one of the valid values: "", "Standard", or "Slower":
$ oc patch etcd/cluster --type=merge -p '{"spec": {"controlPlaneHardwareSpeed": "<value>"}}'
The following table indicates the heartbeat interval and leader election timeout for each profile. These values are subject to change.
Profile | ETCD_HEARTBEAT_INTERVAL (ms) | ETCD_LEADER_ELECTION_TIMEOUT (ms) |
---|---|---|
"" | Varies depending on platform | Varies depending on platform |
Standard | 100 | 1000 |
Slower | 500 | 2500 |
Review the output:
Example output
etcd.operator.openshift.io/cluster patched
If you enter any value besides the valid values, error output is displayed. For example, if you entered "Faster" as the value, the output is as follows:

Example output
The Etcd "cluster" is invalid: spec.controlPlaneHardwareSpeed: Unsupported value: "Faster": supported values: "", "Standard", "Slower"
Verify that the value was changed by entering the following command:
$ oc describe etcd/cluster | grep "Control Plane Hardware Speed"
Example output
Control Plane Hardware Speed: ""
Wait for etcd pods to roll out:
$ oc get pods -n openshift-etcd -w
Before you continue, wait until all masters show a status of 4/4 Running.
Enter the following command to review the values:
$ oc describe -n openshift-etcd pod/<ETCD_PODNAME> | grep -e HEARTBEAT_INTERVAL -e ELECTION_TIMEOUT
Note: These values might not have changed from the default.
3.8. OpenShift Container Platform timer tunables for etcd
OpenShift Container Platform maintains etcd timers with prescribed, validated values that are optimized for each platform provider. The default etcd timers with platform=none or platform=metal are as follows:
- name: ETCD_ELECTION_TIMEOUT
  value: "1000"
...
- name: ETCD_HEARTBEAT_INTERVAL
  value: "100"
From an etcd perspective, the two key values are election timeout and heartbeat interval:
- Heartbeat interval: The frequency with which the leader notifies followers that it is still the leader.
- Election timeout: How long a follower node will go without hearing a heartbeat before it attempts to become the leader itself.
These values do not provide the whole story for the control plane or even etcd. An etcd cluster is sensitive to disk latencies. Because etcd must persist proposals to its log, disk activity from other processes might cause long fsync latencies. The consequence is that etcd might miss heartbeats, causing request timeouts and temporary leader loss. During a leader loss and reelection, the Kubernetes API cannot process any requests, which causes a service-affecting event and cluster instability.
3.9. Determining the size of the etcd database and understanding its effects
The size of the etcd database has a direct impact on the time to complete the etcd defragmentation process. OpenShift Container Platform automatically runs the etcd defragmentation on one etcd member at a time when it detects at least 45% fragmentation. During the defragmentation process, the etcd member cannot process any requests. On small etcd databases, the defragmentation process happens in less than a second. With larger etcd databases, the disk latency directly impacts the defragmentation time, causing additional latency, as operations are blocked while defragmentation happens.
The size of the etcd database is a factor to consider when network partitions isolate a control plane node for a period and the control plane needs to resync after communication is re-established.
Minimal options exist for controlling the size of the etcd database, as it depends on the operators and applications in the system. When you consider the latency range under which the system will operate, account for the effects of synchronization or defragmentation per size of the etcd database.
The magnitude of the effects is specific to the deployment. The time to complete a defragmentation will cause degradation in the transaction rate, as the etcd member cannot accept updates during the defragmentation process. Similarly, the time for the etcd re-synchronization for large databases with high change rate affects the transaction rate and transaction latency on the system.
Consider the following two examples for the type of impacts to plan for.
- Example of the effect of etcd defragmentation based on database size
- Writing an etcd database of 1 GB to a slow 7200 RPM disk at 80 Mbit/s takes about 1 minute and 40 seconds. In such a scenario, the defragmentation process takes at least this long, if not longer, to complete.
- Example of the effect of database size on etcd synchronization
- If there is a change of 10% of the etcd database during the disconnection of one of the control plane nodes, the resync needs to transfer at least 100 MB. Transferring 100 MB over a 1 Gbps link takes 800 ms. On clusters with regular transactions with the Kubernetes API, the larger the etcd database size, the more network instabilities will cause control plane instabilities.
You can determine the size of an etcd database by using the OpenShift Container Platform console or by running commands with the etcdctl tool.
Procedure
- To find the database size in the OpenShift Container Platform console, go to the etcd dashboard to view a plot that reports the size of the etcd database.
To find the database size by using the etcdctl tool, you can enter two commands:
Enter the following command to list the pods:
# oc get pods -n openshift-etcd -l app=etcd
Example output
NAME      READY   STATUS    RESTARTS   AGE
etcd-m0   4/4     Running   4          22h
etcd-m1   4/4     Running   4          22h
etcd-m2   4/4     Running   4          22h
Enter the following command and view the database size in the output:
# oc exec -t etcd-m0 -- etcdctl endpoint status -w simple | cut -d, -f 1,3,4
Example output
https://198.18.111.12:2379, 3.5.6, 1.1 GB
https://198.18.111.13:2379, 3.5.6, 1.1 GB
https://198.18.111.14:2379, 3.5.6, 1.1 GB
3.10. Increasing the database size for etcd
You can set the disk quota in gibibytes (GiB) for each etcd instance. If you set a disk quota for your etcd instance, you can specify integer values from 8 to 32. The default value is 8. You can specify only increasing values.
You might want to increase the disk quota if you encounter a low space
alert. This alert indicates that the cluster is too large to fit in etcd despite automatic compaction and defragmentation. If you see this alert, you need to increase the disk quota immediately because after etcd runs out of space, writes fail.
Another scenario where you might want to increase the disk quota is if you encounter an excessive database growth
alert. This alert is a warning that the database might grow too large in the next four hours. In this scenario, consider increasing the disk quota so that you do not eventually encounter a low space
alert and possible write fails.
If you increase the disk quota, the disk space that you specify is not immediately reserved. Instead, etcd can grow to that size if needed. Ensure that etcd is running on a dedicated disk that is larger than the value that you specify for the disk quota.
For large etcd databases, the control plane nodes must have additional memory and storage. Because you must account for the API server cache, the minimum memory required is at least three times the configured size of the etcd database.
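For example, with the default disk quota of 8 GiB, this guideline implies at least 24 GiB of memory for etcd and the API server cache; with the maximum quota of 32 GiB, it implies at least 96 GiB.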
Increasing the database size for etcd is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
3.10.1. Changing the etcd database size
To change the database size for etcd, complete the following steps.
Procedure
Check the current value of the disk quota for each etcd instance by entering the following command:
$ oc describe etcd/cluster | grep "Backend Quota"
Example output
Backend Quota Gi B: <value>
Change the value of the disk quota by entering the following command:
$ oc patch etcd/cluster --type=merge -p '{"spec": {"backendQuotaGiB": <value>}}'
Example output
etcd.operator.openshift.io/cluster patched
Verification
Verify that the new value for the disk quota is set by entering the following command:
$ oc describe etcd/cluster | grep "Backend Quota"
The etcd Operator automatically rolls out the etcd instances with the new values.
Verify that the etcd pods are up and running by entering the following command:
$ oc get pods -n openshift-etcd
The output should show all etcd pods in a Running state.

Verify that the disk quota value is updated for the etcd pod by entering the following command:
$ oc describe -n openshift-etcd pod/<etcd_podname> | grep "ETCD_QUOTA_BACKEND_BYTES"
The value might not have changed from the default value of 8.

Example output
ETCD_QUOTA_BACKEND_BYTES: 8589934592
Note: While the value that you set is an integer in GiB, the value shown in the output is converted to bytes.
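For example, a quota of 8 GiB equals 8 × 1024 × 1024 × 1024 = 8589934592 bytes, which matches the ETCD_QUOTA_BACKEND_BYTES value shown in the previous output.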
3.10.2. Troubleshooting
If you encounter issues when you try to increase the database size for etcd, the following troubleshooting steps might help.
3.10.2.1. Value is too small
If the value that you specify is less than 8, you see the following error message:
$ oc patch etcd/cluster --type=merge -p '{"spec": {"backendQuotaGiB": 5}}'
Example error message
The Etcd "cluster" is invalid:
* spec.backendQuotaGiB: Invalid value: 5: spec.backendQuotaGiB in body should be greater than or equal to 8
* spec.backendQuotaGiB: Invalid value: "integer": etcd backendQuotaGiB may not be decreased
To resolve this issue, specify an integer between 8 and 32.
3.10.2.2. Value is too large
If the value that you specify is greater than 32, you see the following error message:
$ oc patch etcd/cluster --type=merge -p '{"spec": {"backendQuotaGiB": 64}}'
Example error message
The Etcd "cluster" is invalid: spec.backendQuotaGiB: Invalid value: 64: spec.backendQuotaGiB in body should be less than or equal to 32
To resolve this issue, specify an integer between 8 and 32.
3.10.2.3. Value is decreasing
After the value is set to a valid value between 8 and 32, you cannot decrease it. If you try to decrease the value, you see an error message.
Check to see the current value by entering the following command:
$ oc describe etcd/cluster | grep "Backend Quota"
Example output
Backend Quota Gi B: 10
Decrease the disk quota value by entering the following command:
$ oc patch etcd/cluster --type=merge -p '{"spec": {"backendQuotaGiB": 8}}'
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Example error message
The Etcd "cluster" is invalid: spec.backendQuotaGiB: Invalid value: "integer": etcd backendQuotaGiB may not be decreased
To resolve this issue, specify an integer greater than 10 (up to the maximum of 32).
3.11. Measuring network jitter between control plane nodes
The value of the heartbeat interval should be around the maximum of the average round-trip time (RTT) between members, normally around 1.5 times the round-trip time. With the OpenShift Container Platform default heartbeat interval of 100 ms, the recommended RTT between control plane nodes is less than approximately 33 ms with a maximum of less than 66 ms (66 ms multiplied by 1.5 equals 99 ms). For more information, see "Setting tuning parameters for etcd". Any network latency that is higher might cause service-affecting events and cluster instability.
The network latency is influenced by many factors, including but not limited to the following factors:
- The technology of the transport networks, such as copper, fiber, wireless, or satellite
- The number and quality of the network devices in the transport network
A good evaluation reference is the comparison of the network latency in the organization with the commercial latencies that are published by telecommunications providers, such as monthly IP latency statistics.
Consider network latency together with network jitter for more accurate calculations. Network jitter is the variance in network latency or, more specifically, the variation in the delay of received packets. Under ideal network conditions, the jitter is as close to zero as possible. Network jitter affects the network latency calculations for etcd because the actual network latency over time will be the RTT plus or minus the jitter. For example, a network with a maximum latency of 80 ms and jitter of 30 ms will experience latencies of 110 ms, which means etcd misses heartbeats, causing request timeouts and temporary leader loss. During a leader loss and reelection, the Kubernetes API cannot process any requests, which causes a service-affecting event and cluster instability.
It’s important to measure the network jitter among all control plane nodes. To do so, you can use the iPerf3 tool in UDP mode.
Prerequisite
- You built your own iPerf image. For more information, see the related Red Hat Knowledgebase articles.
Procedure
Connect to one of the control plane nodes and run the iPerf container as an iPerf server in host network mode. When you are running in server mode, the tool accepts TCP and UDP tests. Enter the following command, being careful to replace <iperf_image> with your iPerf image:

# podman run -ti --rm --net host <iperf_image> iperf3 -s
Connect to another control plane node and run the iPerf in UDP client mode by entering the following command:
# podman run -ti --rm --net host <iperf_image> iperf3 -u -c <node_iperf_server> -t 300
The default test runs for 10 seconds, and at the end, the client output shows the average jitter from the client perspective.
Start a debug session on the node by entering the following command:
# oc debug node/m1
Example output
Starting pod/m1-debug ...
To use host binaries, run `chroot /host`
Pod IP: 198.18.111.13
If you don't see a command prompt, try pressing enter.
Enter the following commands:
sh-4.4# chroot /host
sh-4.4# podman run -ti --rm --net host <iperf_image> iperf3 -u -c m0
Example output
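The values below are placeholders that show the shape of the iperf3 UDP client summary; the Jitter column is the value of interest:

[ ID] Interval           Transfer     Bitrate         Jitter    Lost/Total Datagrams
[  5]   0.00-10.00  sec  1.25 MBytes  1.05 Mbits/sec  0.000 ms  0/906 (0%)  sender
[  5]   0.00-10.00  sec  1.25 MBytes  1.05 Mbits/sec  0.128 ms  0/906 (0%)  receiver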
On the iPerf server, the output shows the jitter on every second interval. The average is shown at the end. For the purpose of this test, you want to identify the maximum jitter that is experienced during the test, ignoring the output of the first second as it might contain an invalid measurement. Enter the following command:
# oc debug node/m0
Example output
Starting pod/m0-debug ...
To use host binaries, run `chroot /host`
Pod IP: 198.18.111.12
If you don't see a command prompt, try pressing enter.
Enter the following commands:
sh-4.4# chroot /host
Copy to Clipboard Copied! Toggle word wrap Toggle overflow podman run -ti --rm --net host <iperf_image> iperf3 -s
sh-4.4# podman run -ti --rm --net host <iperf_image> iperf3 -s
- Add the calculated jitter as a penalty to the network latency. For example, if the network latency is 80 ms and the jitter is 30 ms, consider an effective network latency of 110 ms for the purposes of the control plane. In this example, that value goes above the 100 ms threshold, and the system will miss heartbeats.
When you calculate the network latency for etcd, use the effective network latency, which is given by the following equation:

RTT + jitter

You might be able to use the average jitter value to calculate the penalty, but the cluster can sporadically miss heartbeats if the etcd heartbeat timer is lower than:

RTT + max(jitter)

Instead, consider using the 99th percentile or maximum jitter value for a more resilient deployment:

Effective Network Latency = RTT + max(jitter)
3.12. How etcd peer round trip time affects performance
The etcd peer round trip time is an end-to-end test metric on how quickly something can be replicated among members. It shows the latency of etcd to finish replicating a client request among all the etcd members. The etcd peer round trip time is not the same thing as the network round trip time.
You can monitor various etcd metrics on dashboards in the OpenShift Container Platform console. In the console, click Observe → Dashboards, and from the Dashboards menu, select the etcd dashboard.
Near the end of the etcd dashboard, you can find a plot that summarizes the etcd peer round trip time.
These etcd metrics are collected by the OpenShift metrics system in Prometheus. You can access them from the CLI by following the Red Hat Knowledgebase solution, How to query from the command line Prometheus statistics.
# Get token to connect to Prometheus
SECRET=$(oc get secret -n openshift-user-workload-monitoring | grep prometheus-user-workload-token | head -n 1 | awk '{print $1 }')
export TOKEN=$(oc get secret $SECRET -n openshift-user-workload-monitoring -o json | jq -r '.data.token' | base64 -d)
export THANOS_QUERIER_HOST=$(oc get route thanos-querier -n openshift-monitoring -o json | jq -r '.spec.host')
Queries must be URL-encoded. The following example shows how to retrieve the metrics that are reporting the round trip time (in seconds) for etcd to finish replicating the client requests among the members:
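For illustration, such a query can be issued with curl against the Thanos querier route by using the variables exported above. The metric selection here, the 99th percentile of the etcd_network_peer_round_trip_time_seconds_bucket histogram over 5 minutes, is an assumption chosen to match the description, not necessarily the exact documented query:

curl -s -k -H "Authorization: Bearer $TOKEN" \
  "https://$THANOS_QUERIER_HOST/api/v1/query?query=histogram_quantile%280.99%2C%20rate%28etcd_network_peer_round_trip_time_seconds_bucket%5B5m%5D%29%29"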
The following metrics are also relevant to understanding etcd performance:
- etcd_disk_wal_fsync_duration_seconds_bucket: Reports the etcd WAL fsync duration.
- etcd_disk_backend_commit_duration_seconds_bucket: Reports the etcd backend commit latency duration.
- etcd_server_leader_changes_seen_total: Reports the leader changes.
3.13. Determining Kubernetes API transaction rate for your environment
When you are using stretched control planes, the Kubernetes API transaction rate depends on the characteristics of the particular deployment. Specifically, it depends on the following combined factors:
- The etcd disk latency
- The etcd round trip time
- The size of objects that are being written to the API
As a result, when you use stretched control planes, cluster administrators must test the environment to determine the sustained transaction rate that is possible for the environment. The kube-burner tool is useful for that purpose. The binary includes a wrapper for testing OpenShift clusters: kube-burner-ocp. You can use kube-burner-ocp to test cluster or node density. To test the control plane, kube-burner-ocp has three workload profiles: cluster-density, cluster-density-v2, and cluster-density-ms. Each workload profile creates a series of resources that are designed to load the control plane. For more information about each profile, see the kube-burner-ocp workload documentation.
Procedure
Enter a command to create and delete resources. The following example shows a command that creates and deletes resources within 20 minutes:
# kube-burner ocp cluster-density-ms --churn-duration 20m --churn-delay 0s --iterations 10 --timeout 30m
- The OpenShift Container Platform console provides a dashboard with all the relevant API performance information. During the run, observe the API performance dashboard in the console by clicking Observe → Dashboards, and from the Dashboards menu, click API Performance. On the dashboard, notice how the control plane responds during load and the 99th percentile transaction rate it can achieve for the execution of various verbs and request rates by read and write. Use this information and the knowledge of your organization's workload to determine the load that the organization can put on the clusters for the specific stretched control plane deployment.
Dashboards, and from the Dashboards menu, click API Performance. On the dashboard, notice how the control plane responds during load and the 99th percentile transaction rate it can achieve for the execution of various verbs and request rates by read and write. Use this information and the knowledge of your organization’s workload to determine the load that the organization can put in the clusters for the specific stretched control plane deployment.