Chapter 7. Migrating etcd Data (v2 to v3)


7.1. Overview

While etcd was updated from etcd v2 to v3 in a previous release, OpenShift Container Platform continued using an etcd v2 data model and API for both new and upgraded clusters. Starting with OpenShift Container Platform 3.6, new installations began using the v3 data model as well, providing improved performance and scalability.

For existing clusters that upgraded to OpenShift Container Platform 3.6, however, the etcd data must be migrated from v2 to v3 as a post-upgrade step. This must be performed using openshift-ansible version 3.6.173.0.21 or later.

Until OpenShift Container Platform 3.6, it was possible to deploy a cluster with an embedded etcd. As of OpenShift Container Platform 3.7, this is no longer possible. See Migrating Embedded etcd to External etcd.

The etcd v2 to v3 data migration is performed as an offline migration, which means all etcd members and master services are stopped during the migration. Large clusters with up to 600 MiB of etcd data can expect a 10 to 15 minute outage of the API, web console, and controllers.

This migration process performs the following steps:

  1. Stop the master API and controller services.
  2. Perform an etcd backup on all etcd members.
  3. Perform a migration on the first etcd host.
  4. Remove etcd data from any remaining etcd hosts.
  5. Perform an etcd scaleup operation adding additional etcd hosts one by one.
  6. Re-introduce TTL information on specific keys.
  7. Reconfigure the masters for etcd v3 storage.
  8. Start the master API and controller services.

7.2. Before You Begin

You can only begin the etcd data migration process after upgrading to OpenShift Container Platform 3.6, as previous versions are not compatible with etcd v3 storage. Additionally, the upgrade to OpenShift Container Platform 3.6 reconfigures cluster DNS services to run on every node, rather than on the masters, which ensures that, even when master services are taken down, existing pods continue to function as expected.

Older deployments that use embedded etcd with the etcd v2 API must migrate to external etcd before migrating data. See Migrating Embedded etcd to External etcd.

7.3. Running the Automated Migration Playbook

Important

If the migration playbooks fail before the masters are reconfigured to support etcd v3 storage, you must roll back the migration process. Contact support for more assistance.

A migration playbook is provided to automate all aspects of the process; this is the preferred method for performing the migration. You must have access to your existing inventory file with both masters and etcd hosts defined in their separate groups.

  1. Pull the latest subscription data from Red Hat Subscription Manager (RHSM):

    # subscription-manager refresh
  2. To get the latest playbooks, manually disable the OpenShift Container Platform 3.6 channel and enable the 3.7 channel on the host you are running the migration from:

    # subscription-manager repos --disable="rhel-7-server-ose-3.6-rpms" \
        --enable="rhel-7-server-ose-3.7-rpms" \
        --enable="rhel-7-server-extras-rpms" \
        --enable="rhel-7-fast-datapath-rpms"
    # yum clean all
  3. The migration can only be performed using openshift-ansible version 3.6.173.0.21 or later. Ensure you have the latest version of the openshift-ansible packages installed:

    # yum upgrade openshift-ansible\*
  4. Run the migrate.yml playbook using your inventory file:

    # ansible-playbook [-i /path/to/inventory] \
        /usr/share/ansible/openshift-ansible/playbooks/byo/openshift-etcd/migrate.yml

7.4. Running the Migration Manually

The following procedure describes the steps required to successfully migrate the cluster (implemented as part of the Ansible etcd migration playbook).

  1. Create an etcd backup.
  2. Stop masters and wait for etcd convergence:

    1. Stop all master services:

      # systemctl stop atomic-openshift-master-api atomic-openshift-master-controllers
    2. Ensure that the etcd cluster is healthy by running the following command:

      # etcdctl --ca-file=/etc/etcd/master.etcd-ca.crt  \
      --cert-file=/etc/etcd/master.etcd-client.crt \
      --key-file=/etc/etcd/master.etcd-client.key \
      --endpoints https://etcd-test-1:2379,https://etcd-test-2:2379,https://etcd-test-3:2379 \
      cluster-health

      Before the migration can proceed, the etcd cluster must be healthy.

      The output for a healthy cluster is similar to the following:

      member 2a3d833935d9d076 is healthy: got healthy result from https://etcd-test-1:2379
      member a83a3258059fee18 is healthy: got healthy result from https://etcd-test-2:2379
      member 22a9f2ddf18fee5f is healthy: got healthy result from https://etcd-test-3:2379
      cluster is healthy
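      The health check can be scripted so that the migration only proceeds when the final line of output reports a healthy cluster. A minimal sketch, using the sample output shown above in place of a live etcdctl call:

```shell
# Sketch: gate the migration on the health check by inspecting the last
# line of `etcdctl cluster-health` output. A saved sample is used here;
# in practice, capture the real command's output instead.
health_output='member 2a3d833935d9d076 is healthy: got healthy result from https://etcd-test-1:2379
member a83a3258059fee18 is healthy: got healthy result from https://etcd-test-2:2379
member 22a9f2ddf18fee5f is healthy: got healthy result from https://etcd-test-3:2379
cluster is healthy'
last_line=$(printf '%s\n' "$health_output" | tail -n 1)
if [ "$last_line" = "cluster is healthy" ]; then
  echo "safe to proceed"
else
  echo "resolve cluster health before migrating"
fi
```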
    3. Ensure the raft index value of the etcd members does not vary by more than one. To check the raft indexes, run the following command:

      # ETCDCTL_API=3 etcdctl --cacert="/etc/etcd/master.etcd-ca.crt" \
      --cert="/etc/etcd/master.etcd-client.crt" \
      --key="/etc/etcd/master.etcd-client.key" \
      --endpoints <etcd-endpoint>,<etcd-endpoint>,<etcd-endpoint> \
      -w table endpoint status

      The --endpoints value is a comma-separated list of URIs of the hosts that comprise your etcd cluster. For example:

      --endpoints https://etcd-test-1:2379,https://etcd-test-2:2379,https://etcd-test-3:2379

      The output is similar to the following:

      +------------------+------------------+---------+---------+-----------+-----------+------------+
      |     ENDPOINT     |        ID        | VERSION | DB SIZE | IS LEADER | RAFT TERM | RAFT INDEX |
      +------------------+------------------+---------+---------+-----------+-----------+------------+
      | etcd-test-1:2379 | 2a3d833935d9d076 | 3.1.9   | 25 kB   | false     |       415 |        995 |
      | etcd-test-2:2379 | a83a3258059fee18 | 3.1.9   | 25 kB   | true      |       415 |        994 |
      | etcd-test-3:2379 | 22a9f2ddf18fee5f | 3.1.9   | 25 kB   | false     |       415 |        995 |
      +------------------+------------------+---------+---------+-----------+-----------+------------+

      Here, the indexes of 995, 994, and 995 are considered converged, and you can proceed with the migration. If the minimum and maximum raft indexes across all etcd members differ by more than one, wait a minute and try the command again.
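      The convergence test can be automated by extracting the RAFT INDEX column from saved `endpoint status` table output. A sketch, using the example rows from the table above:

```shell
# Sketch: check raft index convergence from saved `endpoint status` table
# output. The example rows above are used; field 8 of the '|'-separated
# table is RAFT INDEX.
status_table='| etcd-test-1:2379 | 2a3d833935d9d076 | 3.1.9   | 25 kB   | false     |       415 |        995 |
| etcd-test-2:2379 | a83a3258059fee18 | 3.1.9   | 25 kB   | true      |       415 |        994 |
| etcd-test-3:2379 | 22a9f2ddf18fee5f | 3.1.9   | 25 kB   | false     |       415 |        995 |'
indexes=$(printf '%s\n' "$status_table" | awk -F'|' '{gsub(/ /, "", $8); print $8}' | sort -n)
min=$(printf '%s\n' "$indexes" | head -n 1)
max=$(printf '%s\n' "$indexes" | tail -n 1)
if [ $((max - min)) -le 1 ]; then
  echo "raft indexes converged"
else
  echo "wait and re-check"
fi
```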

  3. Migrate and scale up etcd:

    Warning

    The migration should not be run repeatedly, as new v2 data can overwrite v3 data that has already migrated.

    1. Stop etcd on all etcd hosts:

      # systemctl stop etcd
    2. Run the following command (with the etcd daemon stopped) on your first etcd host to perform the migration:

      # ETCDCTL_API=3 etcdctl migrate --data-dir=/var/lib/etcd

      The --data-dir target can be in a different location depending on the deployment. For example, embedded etcd operates over the /var/lib/origin/openshift.local.etcd directory, and etcd run as a system container operates over the /var/lib/etcd/etcd.etcd directory.
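      One way to pick the correct --data-dir is to probe for the member/ subdirectory that an initialized etcd data directory contains. The sketch below is illustrative only (it is not part of the migration tooling) and uses a scratch root standing in for the real filesystem; drop the $root prefix to probe a live host:

```shell
# Sketch: determine which of the data-directory layouts described above is
# in use by looking for a member/ subdirectory. A scratch root stands in
# for the real filesystem here.
root=$(mktemp -d)
mkdir -p "$root/var/lib/etcd/member"   # simulate a standard external etcd host
data_dir=""
for d in "$root/var/lib/etcd/etcd.etcd" \
         "$root/var/lib/origin/openshift.local.etcd" \
         "$root/var/lib/etcd"; do
  if [ -d "$d/member" ]; then
    data_dir="$d"
    break
  fi
done
echo "${data_dir#"$root"}"
```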

      When complete, the migration responds with the following message if successful:

      finished transforming keys

      If there is no v2 data, it responds with:

      no v2 keys to migrate
    3. On each remaining etcd host, move the existing member directory to a backup location:

      $ mv /var/lib/etcd/member /var/lib/etcd/member.old
    4. Create a new cluster on the first host:

      # echo "ETCD_FORCE_NEW_CLUSTER=true" >> /etc/etcd/etcd.conf
      # systemctl start etcd
      # sed -i '/ETCD_FORCE_NEW_CLUSTER=true/d' /etc/etcd/etcd.conf
      # systemctl restart etcd
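      ETCD_FORCE_NEW_CLUSTER must only be in effect for the first start; leaving it set would force a new single-member cluster on every restart. The sketch below replays the append-and-remove sequence on a scratch copy of etcd.conf so the net effect of the sed command is visible without touching a live host:

```shell
# Sketch: the append-then-remove sequence above, demonstrated on a scratch
# file standing in for /etc/etcd/etcd.conf.
conf=$(mktemp)
echo 'ETCD_NAME=etcd-test-1' > "$conf"            # stand-in config content
echo "ETCD_FORCE_NEW_CLUSTER=true" >> "$conf"     # set for the first start
sed -i '/ETCD_FORCE_NEW_CLUSTER=true/d' "$conf"   # removed before the restart
if grep -q 'ETCD_FORCE_NEW_CLUSTER' "$conf"; then
  echo "flag still present"
else
  echo "flag removed; safe to restart etcd"
fi
```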
    5. Scale up additional etcd hosts by following the Adding Additional etcd Members documentation.
    6. When the etcdctl migrate command is run without the --no-ttl option, TTL keys are migrated as well. Given that the TTL keys in v2 data are replaced with leases in v3 data, you must attach leases to all migrated TTL keys (with the etcd daemon running).

      After your etcd cluster is back online with all members, re-introduce the TTL information by running the following on the first master:

      $ oc adm migrate etcd-ttl --etcd-address=https://<ip_address>:2379 \
          --cacert=/etc/etcd/master.etcd-ca.crt \
          --cert=/etc/etcd/master.etcd-client.crt \
          --key=/etc/etcd/master.etcd-client.key \
          --ttl-keys-prefix '/kubernetes.io/events' \
          --lease-duration 1h
      $ oc adm migrate etcd-ttl --etcd-address=https://<ip_address>:2379 \
          --cacert=/etc/etcd/master.etcd-ca.crt \
          --cert=/etc/etcd/master.etcd-client.crt \
          --key=/etc/etcd/master.etcd-client.key \
          --ttl-keys-prefix '/kubernetes.io/masterleases' \
          --lease-duration 10s
      $ oc adm migrate etcd-ttl --etcd-address=https://<ip_address>:2379 \
          --cacert=/etc/etcd/master.etcd-ca.crt \
          --cert=/etc/etcd/master.etcd-client.crt \
          --key=/etc/etcd/master.etcd-client.key \
          --ttl-keys-prefix '/openshift.io/oauth/accesstokens' \
          --lease-duration 86400s
      $ oc adm migrate etcd-ttl --etcd-address=https://<ip_address>:2379 \
          --cacert=/etc/etcd/master.etcd-ca.crt \
          --cert=/etc/etcd/master.etcd-client.crt \
          --key=/etc/etcd/master.etcd-client.key \
          --ttl-keys-prefix '/openshift.io/oauth/authorizetokens' \
          --lease-duration 500s
      $ oc adm migrate etcd-ttl --etcd-address=https://<ip_address>:2379 \
          --cacert=/etc/etcd/master.etcd-ca.crt \
          --cert=/etc/etcd/master.etcd-client.crt \
          --key=/etc/etcd/master.etcd-client.key \
          --ttl-keys-prefix '/openshift.io/leases/controllers' \
          --lease-duration 10s
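      The five invocations above differ only in key prefix and lease duration, so they can be generated from a prefix/duration table. In this sketch the commands are printed for review rather than executed; <ip_address> is a placeholder, exactly as in the commands above:

```shell
# Sketch: generate the five etcd-ttl commands from a prefix/duration table.
# The commands are printed, not run.
ttl_cmds=$(
  while read -r prefix duration; do
    printf 'oc adm migrate etcd-ttl --etcd-address=https://<ip_address>:2379 --cacert=/etc/etcd/master.etcd-ca.crt --cert=/etc/etcd/master.etcd-client.crt --key=/etc/etcd/master.etcd-client.key --ttl-keys-prefix %s --lease-duration %s\n' "$prefix" "$duration"
  done <<'EOF'
/kubernetes.io/events 1h
/kubernetes.io/masterleases 10s
/openshift.io/oauth/accesstokens 86400s
/openshift.io/oauth/authorizetokens 500s
/openshift.io/leases/controllers 10s
EOF
)
printf '%s\n' "$ttl_cmds"
```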
  4. Reconfigure the master:

    1. After the migration is complete, update the master configuration file, by default /etc/origin/master/master-config.yaml, so that the master daemons can use the new storage back end:

      kubernetesMasterConfig:
        apiServerArguments:
          storage-backend:
          - etcd3
          storage-media-type:
          - application/vnd.kubernetes.protobuf
    2. To restart your services, run:

      # systemctl restart atomic-openshift-master-api atomic-openshift-master-controllers

7.5. Recovering from Migration Issues

If you discover problems after the migration has completed, you may wish to restore from a backup:

  1. Stop the master services:

    # systemctl stop atomic-openshift-master-api atomic-openshift-master-controllers
  2. Remove the storage-backend and storage-media-type keys from the kubernetesMasterConfig.apiServerArguments section in the master configuration file on each master:

    kubernetesMasterConfig:
      apiServerArguments:
       ...
  3. Restore from backups that were taken prior to the migration, located in a timestamped directory under /var/lib/etcd, such as:

    /var/lib/etcd/openshift-backup-pre-migration20170825135732

    Use the procedure described in Restoring etcd.
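    When several backups exist, the timestamp suffix in the directory name makes it easy to select the most recent one. A sketch, using scratch directories that mirror the naming shown above (the second timestamp is invented for illustration):

```shell
# Sketch: select the most recent pre-migration backup by its timestamp
# suffix. Scratch directories stand in for /var/lib/etcd; the second
# timestamp is a hypothetical example.
root=$(mktemp -d)
mkdir -p "$root/openshift-backup-pre-migration20170825135732" \
         "$root/openshift-backup-pre-migration20170826090000"
latest=$(ls -d "$root"/openshift-backup-pre-migration* | sort | tail -n 1)
echo "${latest##*/}"
```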

  4. To restart the master services, run:

    # systemctl restart atomic-openshift-master-api atomic-openshift-master-controllers