Chapter 3. Performance considerations for etcd


To ensure optimal performance and scalability for etcd in OpenShift Container Platform, follow the practices described in this chapter.

3.1. Node scaling for etcd

In general, clusters must have 3 control plane nodes. However, if your cluster is installed on a bare metal platform, it can have up to 5 control plane nodes. If an existing bare-metal cluster has fewer than 5 control plane nodes, you can scale the cluster up as a postinstallation task.

For example, to scale from 3 to 4 control plane nodes after installation, you can add a host and install it as a control plane node. Then, the etcd Operator scales accordingly to account for the additional control plane node.

Scaling a cluster to 4 or 5 control plane nodes is available only on bare metal platforms.

For more information about how to scale control plane nodes by using the Assisted Installer, see "Adding hosts with the API" and "Replacing a control plane node in a healthy cluster".

The following table shows failure tolerance for clusters of different sizes:

Table 3.1. Failure tolerances by cluster size

Cluster size    Majority    Failure tolerance
1 node          1           0
3 nodes         2           1
4 nodes         3           1
5 nodes         3           2
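
The majority values in the table follow the standard quorum rule: for an n-member cluster, quorum requires floor(n/2) + 1 members, and the failure tolerance is n minus that majority. The following commands are a minimal sketch, not part of a documented procedure, for counting the control plane nodes and etcd member pods in your cluster; they assume the default node-role label and the k8s-app=etcd pod label used elsewhere in this chapter:

$ oc get nodes -l node-role.kubernetes.io/master --no-headers | wc -l
$ oc -n openshift-etcd get pods -l k8s-app=etcd --no-headers | wc -l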

For more information about recovering from quorum loss, see "Restoring to a previous cluster state".

3.2. Moving etcd to a different disk

You can move etcd from a shared disk to a separate disk to prevent or resolve performance issues.

The Machine Config Operator (MCO) is responsible for mounting a secondary disk for OpenShift Container Platform 4.19 container storage.

Note

This encoded script only supports device names for the following device types:

  • SCSI or SATA: /dev/sd*
  • Virtual device: /dev/vd*
  • NVMe: /dev/nvme*[0-9]*n*

Limitations

  • When the new disk is attached to the cluster, the etcd database remains part of the root mount. If the primary node is recreated, the database is not placed on the secondary or intended disk, and the recreated node does not create a separate /var/lib/etcd mount.

Prerequisites

  • You have a backup of your cluster’s etcd data.
  • You have installed the OpenShift CLI (oc).
  • You have access to the cluster with cluster-admin privileges.
  • You have added the additional disks before uploading the machine configuration.
  • The MachineConfigPool must match metadata.labels[machineconfiguration.openshift.io/role]. This applies to a controller, worker, or a custom pool.
Note

This procedure does not move parts of the root file system, such as /var/, to another disk or partition on an installed node.

Important

This procedure is not supported when using control plane machine sets.

Procedure

  1. Attach the new disk to the cluster and verify that the disk is detected in the node by running the lsblk command in a debug shell:

    $ oc debug node/<node_name>
    # lsblk

    Note the device name of the new disk reported by the lsblk command.
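
    Optionally, you can confirm that the new disk has no existing filesystem, because the script in the next step identifies the secondary device by checking for a blkid exit status of 2. This is a minimal sketch run from the same debug shell; /dev/sdb is a placeholder for the device name that you noted:

    # blkid /dev/sdb
    # echo $?

    An exit status of 2 indicates that blkid found no filesystem on the device.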

  2. Create the following script and name it etcd-find-secondary-device.sh:

    #!/bin/bash
    set -uo pipefail

    # Find an uninitialized secondary block device and format it for the etcd mount.
    for device in <device_type_glob>; do
      # blkid exits with status 2 when it cannot identify a filesystem on the
      # device, which indicates an unused secondary disk.
      /usr/sbin/blkid "${device}" &> /dev/null
      if [ $? == 2 ]; then
        echo "secondary device found ${device}"
        echo "creating filesystem for etcd mount"
        mkfs.xfs -L var-lib-etcd -f "${device}" &> /dev/null
        udevadm settle
        # Record that the device is prepared so the systemd unit does not run again.
        touch /etc/var-lib-etcd-mount
        exit
      fi
    done
    echo "Couldn't find secondary block device!" >&2
    exit 77
    Replace <device_type_glob> with a shell glob for your block device type. For SCSI or SATA drives, use /dev/sd*; for virtual drives, use /dev/vd*; for NVMe drives, use /dev/nvme*[0-9]*n*.
  3. Create a base64-encoded string from the etcd-find-secondary-device.sh script and note its contents:

    $ base64 -w0 etcd-find-secondary-device.sh
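
    Optionally, you can verify that the encoded string decodes back to the original script. This round-trip check is a sketch and is not part of the documented procedure:

    $ base64 -w0 etcd-find-secondary-device.sh | base64 -d | diff - etcd-find-secondary-device.sh && echo "encoding verified"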
  4. Create a MachineConfig YAML file named etcd-mc.yml with contents such as the following:

    apiVersion: machineconfiguration.openshift.io/v1
    kind: MachineConfig
    metadata:
      labels:
        machineconfiguration.openshift.io/role: master
      name: 98-var-lib-etcd
    spec:
      config:
        ignition:
          version: 3.5.0
        storage:
          files:
            - path: /etc/find-secondary-device
              mode: 0755
              contents:
                source: data:text/plain;charset=utf-8;base64,<encoded_etcd_find_secondary_device_script>
        systemd:
          units:
            - name: find-secondary-device.service
              enabled: true
              contents: |
                [Unit]
                Description=Find secondary device
                DefaultDependencies=false
                After=systemd-udev-settle.service
                Before=local-fs-pre.target
                ConditionPathExists=!/etc/var-lib-etcd-mount
    
                [Service]
                RemainAfterExit=yes
                ExecStart=/etc/find-secondary-device
    
                RestartForceExitStatus=77
    
                [Install]
                WantedBy=multi-user.target
            - name: var-lib-etcd.mount
              enabled: true
              contents: |
                [Unit]
                Before=local-fs.target
    
                [Mount]
                What=/dev/disk/by-label/var-lib-etcd
                Where=/var/lib/etcd
                Type=xfs
                TimeoutSec=120s
    
                [Install]
                RequiredBy=local-fs.target
            - name: sync-var-lib-etcd-to-etcd.service
              enabled: true
              contents: |
                [Unit]
                Description=Sync etcd data if new mount is empty
                DefaultDependencies=no
                After=var-lib-etcd.mount var.mount
                Before=crio.service
    
                [Service]
                Type=oneshot
                RemainAfterExit=yes
                ExecCondition=/usr/bin/test ! -d /var/lib/etcd/member
                ExecStart=/usr/sbin/setsebool -P rsync_full_access 1
                ExecStart=/bin/rsync -ar /sysroot/ostree/deploy/rhcos/var/lib/etcd/ /var/lib/etcd/
                ExecStart=/usr/sbin/semanage fcontext -a -t container_var_lib_t '/var/lib/etcd(/.*)?'
                ExecStart=/usr/sbin/setsebool -P rsync_full_access 0
                TimeoutSec=0
    
                [Install]
                WantedBy=multi-user.target graphical.target
            - name: restorecon-var-lib-etcd.service
              enabled: true
              contents: |
                [Unit]
                Description=Restore recursive SELinux security contexts
                DefaultDependencies=no
                After=var-lib-etcd.mount
                Before=crio.service
    
                [Service]
                Type=oneshot
                RemainAfterExit=yes
                ExecStart=/sbin/restorecon -R /var/lib/etcd/
                TimeoutSec=0
    
                [Install]
                WantedBy=multi-user.target graphical.target
    Replace <encoded_etcd_find_secondary_device_script> with the encoded script contents that you noted.
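
    Optionally, you can run a client-side dry run to confirm that the manifest parses before you apply it. This is a sketch; it validates only the YAML structure, not the embedded script:

    $ oc create -f etcd-mc.yml --dry-run=client -o yaml > /dev/null && echo "etcd-mc.yml parsed successfully"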
  5. Apply the created MachineConfig YAML file:

    $ oc create -f etcd-mc.yml
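
    The Machine Config Operator rolls the change out to the control plane nodes one at a time. The following commands are a sketch for confirming that the MachineConfig was created and for watching the rollout, assuming the master pool as in the example:

    $ oc get machineconfig 98-var-lib-etcd
    $ oc get machineconfigpool master -w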

Verification steps

  • Run the grep -w "/var/lib/etcd" /proc/mounts command in a debug shell for the node to ensure that the disk is mounted:

    $ oc debug node/<node_name>
    # grep -w "/var/lib/etcd" /proc/mounts

    Example output

    /dev/sdb /var/lib/etcd xfs rw,seclabel,relatime,attr2,inode64,logbufs=8,logbsize=32k,noquota 0 0
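
    As an additional optional check, you can confirm the size and label of the new mount from the same debug shell. This is a sketch, not part of the documented verification:

    # df -h /var/lib/etcd
    # blkid -L var-lib-etcd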

3.3. Defragmenting etcd data

For large and dense clusters, etcd can suffer from poor performance if the keyspace grows too large and exceeds the space quota. Periodically maintain and defragment etcd to free up space in the data store. Monitor Prometheus for etcd metrics and defragment it when required; otherwise, etcd can raise a cluster-wide alarm that puts the cluster into a maintenance mode that accepts only key reads and deletes.

Monitor these key metrics:

  • etcd_server_quota_backend_bytes, which is the current quota limit
  • etcd_mvcc_db_total_size_in_use_in_bytes, which indicates the actual database usage after a history compaction
  • etcd_mvcc_db_total_size_in_bytes, which shows the database size, including free space waiting for defragmentation
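
You can view these metrics in the OpenShift Container Platform web console or query them from the command line. The following is a minimal sketch that assumes the default thanos-querier route in the openshift-monitoring namespace, a token-based login, and an account that is permitted to query cluster metrics:

$ TOKEN=$(oc whoami -t)
$ HOST=$(oc get route thanos-querier -n openshift-monitoring -o jsonpath='{.spec.host}')
$ curl -sk -H "Authorization: Bearer ${TOKEN}" "https://${HOST}/api/v1/query" \
    --data-urlencode 'query=etcd_mvcc_db_total_size_in_bytes - etcd_mvcc_db_total_size_in_use_in_bytes'

The result is the space, in bytes, that defragmentation could return to the host file system for each member; this is the same calculation that the manual defragmentation section expresses in MB.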

Defragment etcd data to reclaim disk space after events that cause disk fragmentation, such as etcd history compaction.

History compaction is performed automatically every five minutes and leaves gaps in the back-end database. This fragmented space is available for use by etcd, but is not available to the host file system. You must defragment etcd to make this space available to the host file system.

Defragmentation occurs automatically, but you can also trigger it manually.

Note

Automatic defragmentation is good for most cases, because the etcd operator uses cluster information to determine the most efficient operation for the user.

3.3.1. Automatic defragmentation

The etcd Operator automatically defragments disks. No manual intervention is needed.

Verify that the defragmentation process is successful by viewing one of these logs:

  • etcd logs
  • cluster-etcd-operator pod
  • operator status error log
Warning

Automatic defragmentation can cause leader election failure in various OpenShift core components, such as the Kubernetes controller manager, which triggers a restart of the failing component. The restart is harmless and either triggers failover to the next running instance or the component resumes work again after the restart.

Example log output for successful defragmentation

etcd member has been defragmented: <member_name>, memberID: <member_id>

Example log output for unsuccessful defragmentation

failed defrag on member: <member_name>, memberID: <member_id>: <error_message>
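
To search for these messages, you can check the logs of the etcd Operator deployment. The following command is a sketch that assumes the default etcd-operator deployment in the openshift-etcd-operator namespace:

$ oc logs -n openshift-etcd-operator deployment/etcd-operator | grep -i defrag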

3.3.2. Manual defragmentation

A Prometheus alert indicates when you need to use manual defragmentation. The alert is displayed in two cases:

  • When etcd uses more than 50% of its available space for more than 10 minutes
  • When etcd is actively using less than 50% of its total database size for more than 10 minutes

You can also determine whether defragmentation is needed by checking how much of the etcd database size, in MB, defragmentation would free. Use the following PromQL expression:

(etcd_mvcc_db_total_size_in_bytes - etcd_mvcc_db_total_size_in_use_in_bytes)/1024/1024

Warning

Defragmenting etcd is a blocking action. The etcd member will not respond until defragmentation is complete. For this reason, wait at least one minute between defragmentation actions on each of the pods to allow the cluster to recover.

Follow this procedure to defragment etcd data on each etcd member.

Prerequisites

  • You have access to the cluster as a user with the cluster-admin role.

Procedure

  1. Determine which etcd member is the leader, because the leader should be defragmented last.

    1. Get the list of etcd pods:

      $ oc -n openshift-etcd get pods -l k8s-app=etcd -o wide

      Example output

      etcd-ip-10-0-159-225.example.redhat.com                3/3     Running     0          175m   10.0.159.225   ip-10-0-159-225.example.redhat.com   <none>           <none>
      etcd-ip-10-0-191-37.example.redhat.com                 3/3     Running     0          173m   10.0.191.37    ip-10-0-191-37.example.redhat.com    <none>           <none>
      etcd-ip-10-0-199-170.example.redhat.com                3/3     Running     0          176m   10.0.199.170   ip-10-0-199-170.example.redhat.com   <none>           <none>

    2. Choose a pod and run the following command to determine which etcd member is the leader:

      $ oc rsh -n openshift-etcd etcd-ip-10-0-159-225.example.redhat.com etcdctl endpoint status --cluster -w table

      Example output

      Defaulting container name to etcdctl.
      Use 'oc describe pod/etcd-ip-10-0-159-225.example.redhat.com -n openshift-etcd' to see all of the containers in this pod.
      +---------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
      |         ENDPOINT          |        ID        | VERSION | DB SIZE | IS LEADER | IS LEARNER | RAFT TERM | RAFT INDEX | RAFT APPLIED INDEX | ERRORS |
      +---------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
      |  https://10.0.191.37:2379 | 251cd44483d811c3 |   3.5.9 |  104 MB |     false |      false |         7 |      91624 |              91624 |        |
      | https://10.0.159.225:2379 | 264c7c58ecbdabee |   3.5.9 |  104 MB |     false |      false |         7 |      91624 |              91624 |        |
      | https://10.0.199.170:2379 | 9ac311f93915cc79 |   3.5.9 |  104 MB |      true |      false |         7 |      91624 |              91624 |        |
      +---------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+

      Based on the IS LEADER column of this output, the https://10.0.199.170:2379 endpoint is the leader. Matching this endpoint with the output of the previous step, the pod name of the leader is etcd-ip-10-0-199-170.example.redhat.com.
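
      Alternatively, you can extract the leader endpoint programmatically. This one-liner is a sketch that assumes jq is installed on your workstation and that the JSON field names match your etcdctl version:

      $ oc rsh -n openshift-etcd etcd-ip-10-0-159-225.example.redhat.com \
          etcdctl endpoint status --cluster -w json \
          | jq -r '.[] | select(.Status.leader == .Status.header.member_id) | .Endpoint'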

  2. Defragment an etcd member.

    1. Connect to the running etcd container, passing in the name of a pod that is not the leader:

      $ oc rsh -n openshift-etcd etcd-ip-10-0-159-225.example.redhat.com
    2. Unset the ETCDCTL_ENDPOINTS environment variable:

      sh-4.4# unset ETCDCTL_ENDPOINTS
    3. Defragment the etcd member:

      sh-4.4# etcdctl --command-timeout=30s --endpoints=https://localhost:2379 defrag

      Example output

      Finished defragmenting etcd member[https://localhost:2379]

      If a timeout error occurs, increase the value for --command-timeout until the command succeeds.

    4. Verify that the database size was reduced:

      sh-4.4# etcdctl endpoint status -w table --cluster

      Example output

      +---------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
      |         ENDPOINT          |        ID        | VERSION | DB SIZE | IS LEADER | IS LEARNER | RAFT TERM | RAFT INDEX | RAFT APPLIED INDEX | ERRORS |
      +---------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
      |  https://10.0.191.37:2379 | 251cd44483d811c3 |   3.5.9 |  104 MB |     false |      false |         7 |      91624 |              91624 |        |
      | https://10.0.159.225:2379 | 264c7c58ecbdabee |   3.5.9 |   41 MB |     false |      false |         7 |      91624 |              91624 |        |
      | https://10.0.199.170:2379 | 9ac311f93915cc79 |   3.5.9 |  104 MB |      true |      false |         7 |      91624 |              91624 |        |
      +---------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+

      This example shows that the database size for this etcd member is now 41 MB as opposed to the starting size of 104 MB.

    5. Repeat these steps to connect to each of the other etcd members and defragment them. Always defragment the leader last.

      Wait at least one minute between defragmentation actions to allow the etcd pod to recover. Until the etcd pod recovers, the etcd member will not respond.

  3. If any NOSPACE alarms were triggered due to the space quota being exceeded, clear them.

    1. Check if there are any NOSPACE alarms:

      sh-4.4# etcdctl alarm list

      Example output

      memberID:12345678912345678912 alarm:NOSPACE

    2. Clear the alarms:

      sh-4.4# etcdctl alarm disarm

3.4. Setting tuning parameters for etcd

You can set the control plane hardware speed to "Standard", "Slower", or the default, which is "".

The default setting allows the system to decide which speed to use. The default value also enables upgrades from versions where this feature does not exist, because the system can select values from the previous version.

By selecting one of the other values, you are overriding the default. If you see many leader elections due to timeouts or missed heartbeats and your system is set to "" or "Standard", set the hardware speed to "Slower" to make the system more tolerant to the increased latency.

3.4.1. Changing hardware speed tolerance

To change the hardware speed tolerance for etcd, complete the following steps.

Procedure

  1. Check the current value by entering the following command:

    $ oc describe etcd/cluster | grep "Control Plane Hardware Speed"

    Example output

    Control Plane Hardware Speed:  <VALUE>

    Note

    If the output is empty, the field has not been set and should be considered as the default ("").

  2. Change the value by entering the following command. Replace <value> with one of the valid values: "", "Standard", or "Slower":

    $ oc patch etcd/cluster --type=merge -p '{"spec": {"controlPlaneHardwareSpeed": "<value>"}}'

    The following table indicates the heartbeat interval and leader election timeout for each profile. These values are subject to change.

    Profile       ETCD_HEARTBEAT_INTERVAL (ms)    ETCD_LEADER_ELECTION_TIMEOUT (ms)
    ""            Varies depending on platform    Varies depending on platform
    Standard      100                             1000
    Slower        500                             2500
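
    For example, to apply the Slower profile when leader elections are timing out on slow hardware, the command from this step becomes:

    $ oc patch etcd/cluster --type=merge -p '{"spec": {"controlPlaneHardwareSpeed": "Slower"}}'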

  3. Review the output:

    Example output

    etcd.operator.openshift.io/cluster patched

    If you enter a value other than one of the valid values, an error is displayed. For example, if you enter "Faster" as the value, the output is as follows:

    Example output

    The Etcd "cluster" is invalid: spec.controlPlaneHardwareSpeed: Unsupported value: "Faster": supported values: "", "Standard", "Slower"

  4. Verify that the value was changed by entering the following command:

    $ oc describe etcd/cluster | grep "Control Plane Hardware Speed"

    Example output

    Control Plane Hardware Speed:  ""

  5. Wait for etcd pods to roll out:

    $ oc get pods -n openshift-etcd -w

    The following output shows the expected entries for master-0. Before you continue, wait until the etcd pod on every control plane node shows a status of 4/4 Running.

    Example output

    installer-9-ci-ln-qkgs94t-72292-9clnd-master-0           0/1     Pending             0          0s
    installer-9-ci-ln-qkgs94t-72292-9clnd-master-0           0/1     Pending             0          0s
    installer-9-ci-ln-qkgs94t-72292-9clnd-master-0           0/1     ContainerCreating   0          0s
    installer-9-ci-ln-qkgs94t-72292-9clnd-master-0           0/1     ContainerCreating   0          1s
    installer-9-ci-ln-qkgs94t-72292-9clnd-master-0           1/1     Running             0          2s
    installer-9-ci-ln-qkgs94t-72292-9clnd-master-0           0/1     Completed           0          34s
    installer-9-ci-ln-qkgs94t-72292-9clnd-master-0           0/1     Completed           0          36s
    installer-9-ci-ln-qkgs94t-72292-9clnd-master-0           0/1     Completed           0          36s
    etcd-guard-ci-ln-qkgs94t-72292-9clnd-master-0            0/1     Running             0          26m
    etcd-ci-ln-qkgs94t-72292-9clnd-master-0                  4/4     Terminating         0          11m
    etcd-ci-ln-qkgs94t-72292-9clnd-master-0                  4/4     Terminating         0          11m
    etcd-ci-ln-qkgs94t-72292-9clnd-master-0                  0/4     Pending             0          0s
    etcd-ci-ln-qkgs94t-72292-9clnd-master-0                  0/4     Init:1/3            0          1s
    etcd-ci-ln-qkgs94t-72292-9clnd-master-0                  0/4     Init:2/3            0          2s
    etcd-ci-ln-qkgs94t-72292-9clnd-master-0                  0/4     PodInitializing     0          3s
    etcd-ci-ln-qkgs94t-72292-9clnd-master-0                  3/4     Running             0          4s
    etcd-guard-ci-ln-qkgs94t-72292-9clnd-master-0            1/1     Running             0          26m
    etcd-ci-ln-qkgs94t-72292-9clnd-master-0                  3/4     Running             0          20s
    etcd-ci-ln-qkgs94t-72292-9clnd-master-0                  4/4     Running             0          20s

  6. Enter the following command to review the values:

    $ oc describe -n openshift-etcd pod/<ETCD_PODNAME> | grep -e HEARTBEAT_INTERVAL -e ELECTION_TIMEOUT
    Note

    These values might not have changed from the default.

3.5. Increasing the database size for etcd

You can set the disk quota in gibibytes (GiB) for each etcd instance. The disk quota accepts integer values from 8 to 32, and the default value is 8. You can only increase the value; you cannot decrease it.

You might want to increase the disk quota if you encounter a low space alert. This alert indicates that the cluster is too large to fit in etcd despite automatic compaction and defragmentation. If you see this alert, you need to increase the disk quota immediately because after etcd runs out of space, writes fail.

Another scenario where you might want to increase the disk quota is if you encounter an excessive database growth alert. This alert is a warning that the database might grow too large in the next four hours. In this scenario, consider increasing the disk quota so that you do not eventually encounter a low space alert and possible write failures.

If you increase the disk quota, the disk space that you specify is not immediately reserved. Instead, etcd can grow to that size if needed. Ensure that etcd is running on a dedicated disk that is larger than the value that you specify for the disk quota.

For large etcd databases, the control plane nodes must have additional memory and storage. Because you must account for the API server cache, the minimum memory required is at least three times the configured size of the etcd database. For example, if you set the database size to 32 GiB, plan for at least 96 GiB of memory on each control plane node.

Important

Increasing the database size for etcd is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.

3.5.1. Changing the etcd database size

To change the database size for etcd, complete the following steps.

Procedure

  1. Check the current value of the disk quota for each etcd instance by entering the following command:

    $ oc describe etcd/cluster | grep "Backend Quota"

    Example output

    Backend Quota Gi B: <value>

  2. Change the value of the disk quota by entering the following command:

    $ oc patch etcd/cluster --type=merge -p '{"spec": {"backendQuotaGiB": <value>}}'

    Example output

    etcd.operator.openshift.io/cluster patched
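
    For example, to raise the quota from the default of 8 GiB to 16 GiB, you might run the following command. Any integer from 8 to 32 is accepted, and the value can only increase:

    $ oc patch etcd/cluster --type=merge -p '{"spec": {"backendQuotaGiB": 16}}'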

Verification

  1. Verify that the new value for the disk quota is set by entering the following command:

    $ oc describe etcd/cluster | grep "Backend Quota"

    The etcd Operator automatically rolls out the etcd instances with the new values.

  2. Verify that the etcd pods are up and running by entering the following command:

    $ oc get pods -n openshift-etcd

    The following output shows the expected entries.

    Example output

    NAME                                                   READY   STATUS      RESTARTS   AGE
    etcd-ci-ln-b6kfsw2-72292-mzwbq-master-0                4/4     Running     0          39m
    etcd-ci-ln-b6kfsw2-72292-mzwbq-master-1                4/4     Running     0          37m
    etcd-ci-ln-b6kfsw2-72292-mzwbq-master-2                4/4     Running     0          41m
    etcd-guard-ci-ln-b6kfsw2-72292-mzwbq-master-0          1/1     Running     0          51m
    etcd-guard-ci-ln-b6kfsw2-72292-mzwbq-master-1          1/1     Running     0          49m
    etcd-guard-ci-ln-b6kfsw2-72292-mzwbq-master-2          1/1     Running     0          54m
    installer-5-ci-ln-b6kfsw2-72292-mzwbq-master-1         0/1     Completed   0          51m
    installer-7-ci-ln-b6kfsw2-72292-mzwbq-master-0         0/1     Completed   0          46m
    installer-7-ci-ln-b6kfsw2-72292-mzwbq-master-1         0/1     Completed   0          44m
    installer-7-ci-ln-b6kfsw2-72292-mzwbq-master-2         0/1     Completed   0          49m
    installer-8-ci-ln-b6kfsw2-72292-mzwbq-master-0         0/1     Completed   0          40m
    installer-8-ci-ln-b6kfsw2-72292-mzwbq-master-1         0/1     Completed   0          38m
    installer-8-ci-ln-b6kfsw2-72292-mzwbq-master-2         0/1     Completed   0          42m
    revision-pruner-7-ci-ln-b6kfsw2-72292-mzwbq-master-0   0/1     Completed   0          43m
    revision-pruner-7-ci-ln-b6kfsw2-72292-mzwbq-master-1   0/1     Completed   0          43m
    revision-pruner-7-ci-ln-b6kfsw2-72292-mzwbq-master-2   0/1     Completed   0          43m
    revision-pruner-8-ci-ln-b6kfsw2-72292-mzwbq-master-0   0/1     Completed   0          42m
    revision-pruner-8-ci-ln-b6kfsw2-72292-mzwbq-master-1   0/1     Completed   0          42m
    revision-pruner-8-ci-ln-b6kfsw2-72292-mzwbq-master-2   0/1     Completed   0          42m

  3. Verify that the disk quota value is updated for the etcd pod by entering the following command:

    $ oc describe -n openshift-etcd pod/<etcd_podname> | grep "ETCD_QUOTA_BACKEND_BYTES"

    The value might not have changed from the default value of 8.

    Example output

    ETCD_QUOTA_BACKEND_BYTES:                               8589934592

    Note

    While the value that you set is an integer in GiB, the value shown in the output is converted to bytes.
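
    For example, a quota of 8 GiB corresponds to 8 × 1024³ = 8589934592 bytes, which matches the value shown above. You can compute the expected byte value for any quota with a quick shell expansion; the following sketch shows a 16 GiB quota:

    $ echo $((16 * 1024 * 1024 * 1024))
    17179869184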

3.5.2. Troubleshooting

If you encounter issues when you try to increase the database size for etcd, the following troubleshooting steps might help.

3.5.2.1. Value is too small

If the value that you specify is less than 8, you see the following error message:

$ oc patch etcd/cluster --type=merge -p '{"spec": {"backendQuotaGiB": 5}}'

Example error message

The Etcd "cluster" is invalid:
* spec.backendQuotaGiB: Invalid value: 5: spec.backendQuotaGiB in body should be greater than or equal to 8
* spec.backendQuotaGiB: Invalid value: "integer": etcd backendQuotaGiB may not be decreased

To resolve this issue, specify an integer between 8 and 32.

3.5.2.2. Value is too large

If the value that you specify is greater than 32, you see the following error message:

$ oc patch etcd/cluster --type=merge -p '{"spec": {"backendQuotaGiB": 64}}'

Example error message

The Etcd "cluster" is invalid: spec.backendQuotaGiB: Invalid value: 64: spec.backendQuotaGiB in body should be less than or equal to 32

To resolve this issue, specify an integer between 8 and 32.

3.5.2.3. Value is decreasing

After you set the value to a valid integer between 8 and 32, you cannot decrease it. If you try to decrease the value, you see an error message.

  1. Check the current value by entering the following command:

    $ oc describe etcd/cluster | grep "Backend Quota"

    Example output

    Backend Quota Gi B: 10

  2. Decrease the disk quota value by entering the following command:

    $ oc patch etcd/cluster --type=merge -p '{"spec": {"backendQuotaGiB": 8}}'

    Example error message

    The Etcd "cluster" is invalid: spec.backendQuotaGiB: Invalid value: "integer": etcd backendQuotaGiB may not be decreased

  3. To resolve this issue, specify an integer that is greater than or equal to the current value of 10 and no greater than 32.