Chapter 7. Migrating etcd Data (v2 to v3)
7.1. Overview
While etcd was updated from etcd v2 to v3 in a previous release, OpenShift Container Platform continued using an etcd v2 data model and API for both new and upgraded clusters. Starting with OpenShift Container Platform 3.6, new installations began using the v3 data model as well, providing improved performance and scalability.
For existing clusters that upgraded to OpenShift Container Platform 3.6, however, the etcd data must be migrated from v2 to v3 as a post-upgrade step. This must be performed using openshift-ansible version 3.6.173.0.21 or later.
Until OpenShift Container Platform 3.6, it was possible to deploy a cluster with an embedded etcd. As of OpenShift Container Platform 3.7, this is no longer possible. See Migrating Embedded etcd to External etcd.
The etcd v2 to v3 data migration is performed as an offline migration, which means all etcd members and master services are stopped during the migration. Large clusters with up to 600 MiB of etcd data can expect a 10 to 15 minute outage of the API, web console, and controllers.
This migration process performs the following steps:
- Stop the master API and controller services.
- Perform an etcd backup on all etcd members.
- Perform a migration on the first etcd host.
- Remove etcd data from any remaining etcd hosts.
- Perform an etcd scaleup operation adding additional etcd hosts one by one.
- Re-introduce TTL information on specific keys.
- Reconfigure the masters for etcd v3 storage.
- Start the master API and controller services.
7.2. Before You Begin
You can only begin the etcd data migration process after upgrading to OpenShift Container Platform 3.6, as previous versions are not compatible with etcd v3 storage. Additionally, the upgrade to OpenShift Container Platform 3.6 reconfigures cluster DNS services to run on every node, rather than on the masters, which ensures that, even when master services are taken down, existing pods continue to function as expected.
Older deployments that use embedded etcd with the etcd v2 API must be migrated to external etcd before migrating the data. See Migrating Embedded etcd to External etcd.
7.3. Running the Automated Migration Playbook
If the migration playbooks fail before the masters are reconfigured to support etcd v3 storage, you must roll back the migration process. Contact support for more assistance.
A migration playbook is provided to automate all aspects of the process; this is the preferred method for performing the migration. You must have access to your existing inventory file with both masters and etcd hosts defined in their separate groups.
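For reference, a minimal inventory sketch with masters and etcd hosts defined in separate groups (host names are placeholders; a real inventory also carries the cluster variables used during installation):

[OSEv3:children]
masters
nodes
etcd

[masters]
master1.example.com

[etcd]
etcd-test-1
etcd-test-2
etcd-test-3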
Pull the latest subscription data from Red Hat Subscription Manager (RHSM):
# subscription-manager refresh
To get the latest playbooks, manually disable the OpenShift Container Platform 3.6 channel and enable the 3.7 channel on the host you are running the migration from:
# subscription-manager repos --disable="rhel-7-server-ose-3.6-rpms" \
    --enable="rhel-7-server-ose-3.7-rpms" \
    --enable="rhel-7-server-extras-rpms" \
    --enable="rhel-7-fast-datapath-rpms"
# yum clean all
The migration can only be performed using openshift-ansible version 3.6.173.0.21 or later. Ensure you have the latest version of the openshift-ansible packages installed:
# yum upgrade openshift-ansible\*
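To confirm that the installed packages meet that minimum version, a quick check is:
# rpm -q openshift-ansible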
Run the migrate.yml playbook using your inventory file:
# ansible-playbook [-i /path/to/inventory] \
    /usr/share/ansible/openshift-ansible/playbooks/byo/openshift-etcd/migrate.yml
7.4. Running the Migration Manually
The following procedure describes the steps required to successfully migrate the cluster (implemented as part of the Ansible etcd migration playbook).
- Create an etcd backup.
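As an illustration, a minimal backup with the etcd v2 API looks similar to the following; the data directory shown is an assumption for a standard external etcd host, and the full, supported procedure is described in the Backup and Restore documentation:
# etcdctl backup \
    --data-dir /var/lib/etcd \
    --backup-dir /var/lib/etcd/openshift-backup-pre-migration$(date +%Y%m%d%H%M%S)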
Stop masters and wait for etcd convergence:
Stop all master services:
# systemctl stop atomic-openshift-master-api atomic-openshift-master-controllers
Ensure that the etcd cluster is healthy by running the following command:
# etcdctl --ca-file=/etc/etcd/master.etcd-ca.crt \
    --cert-file=/etc/etcd/master.etcd-client.crt \
    --key-file=/etc/etcd/master.etcd-client.key \
    --endpoints https://etcd-test-1:2379,https://etcd-test-2:2379,https://etcd-test-3:2379 \
    cluster-health
Before the migration can proceed, the etcd cluster must be healthy.
The output for a healthy cluster is similar to the following:
member 2a3d833935d9d076 is healthy: got healthy result from https://etcd-test-1:2379
member a83a3258059fee18 is healthy: got healthy result from https://etcd-test-2:2379
member 22a9f2ddf18fee5f is healthy: got healthy result from https://etcd-test-3:2379
cluster is healthy
Ensure the raft index value of the etcd members does not vary by more than one. To check the raft indexes, run the following command:
# ETCDCTL_API=3 etcdctl --cacert="/etc/etcd/master.etcd-ca.crt" \
    --cert="/etc/etcd/master.etcd-client.crt" \
    --key="/etc/etcd/master.etcd-client.key" \
    --endpoints <etcd-endpoint>,<etcd-endpoint>,<etcd-endpoint> \ 1
    -w table endpoint status
1 A list of comma-separated URIs of the hosts that comprise your etcd cluster. For example:
--endpoints https://etcd-test-1:2379,https://etcd-test-2:2379,https://etcd-test-3:2379
The output is similar to the following:
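An illustrative reconstruction, reusing the endpoints and member IDs from the health check above (version, DB size, and raft term values are placeholders):
+--------------------------+------------------+---------+---------+-----------+-----------+------------+
|         ENDPOINT         |        ID        | VERSION | DB SIZE | IS LEADER | RAFT TERM | RAFT INDEX |
+--------------------------+------------------+---------+---------+-----------+-----------+------------+
| https://etcd-test-1:2379 | 2a3d833935d9d076 | 3.1.9   | 25 MB   | false     | 415       | 995        |
| https://etcd-test-2:2379 | a83a3258059fee18 | 3.1.9   | 25 MB   | true      | 415       | 994        |
| https://etcd-test-3:2379 | 22a9f2ddf18fee5f | 3.1.9   | 25 MB   | false     | 415       | 995        |
+--------------------------+------------------+---------+---------+-----------+-----------+------------+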
Here, the indexes of 995, 994, and 995 are considered converged and you can proceed with the migration. If the minimum and maximum raft indexes across all etcd members differ by more than one, wait a minute and try the command again.
Migrate and scale up etcd:
Warning: The migration should not be run repeatedly, as new v2 data can overwrite v3 data that has already been migrated.
Stop etcd on all etcd hosts:
# systemctl stop etcd
Run the following command (with the etcd daemon stopped) on your first etcd host to perform the migration:
# ETCDCTL_API=3 etcdctl migrate --data-dir=/var/lib/etcd
The --data-dir target can be in a different location depending on the deployment. For example, embedded etcd operates over the /var/lib/origin/openshift.local.etcd directory, and etcd run as a system container operates over the /var/lib/etcd/etcd.etcd directory.
When complete, the migration responds with the following message if successful:
finished transforming keys
If there is no v2 data, it responds with:
no v2 keys to migrate
On each remaining etcd host, move the existing member directory to a backup location:
# mv /var/lib/etcd/member /var/lib/etcd/member.old
Create a new cluster on the first host:
echo "ETCD_FORCE_NEW_CLUSTER=true" >> /etc/etcd/etcd.conf systemctl start etcd sed -i '/ETCD_FORCE_NEW_CLUSTER=true/d' /etc/etcd/etcd.conf systemctl restart etcd
# echo "ETCD_FORCE_NEW_CLUSTER=true" >> /etc/etcd/etcd.conf # systemctl start etcd # sed -i '/ETCD_FORCE_NEW_CLUSTER=true/d' /etc/etcd/etcd.conf # systemctl restart etcd
Copy to Clipboard Copied! Toggle word wrap Toggle overflow - Scale up additional etcd hosts by following the Adding Additional etcd Members documentation.
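For illustration only, re-registering one of the remaining hosts with the v2 API looks similar to the following, run from the first etcd host; the host names and certificate paths are the examples used above, and the re-added member still needs its certificates and /etc/etcd/etcd.conf updated and the service started as described in that documentation:
# etcdctl --ca-file=/etc/etcd/master.etcd-ca.crt \
    --cert-file=/etc/etcd/master.etcd-client.crt \
    --key-file=/etc/etcd/master.etcd-client.key \
    --endpoints https://etcd-test-1:2379 \
    member add etcd-test-2 https://etcd-test-2:2380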
When the etcdctl migrate command is run without the --no-ttl option, TTL keys are migrated as well. Given that the TTL keys in v2 data are replaced with leases in v3 data, you must attach leases to all migrated TTL keys (with the etcd daemon running). After your etcd cluster is back online with all members, re-introduce the TTL information by running the following on the first master:
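A sketch of this step using the oc adm migrate etcd-ttl command; the etcd address, key prefixes, and lease durations below are assumed values to adapt to your cluster:
# oc adm migrate etcd-ttl --etcd-address=https://<etcd-host>:2379 \
    --cacert=/etc/etcd/master.etcd-ca.crt \
    --cert=/etc/etcd/master.etcd-client.crt \
    --key=/etc/etcd/master.etcd-client.key \
    --ttl-keys-prefix '/kubernetes.io/events' \
    --lease-duration 1h
# oc adm migrate etcd-ttl --etcd-address=https://<etcd-host>:2379 \
    --cacert=/etc/etcd/master.etcd-ca.crt \
    --cert=/etc/etcd/master.etcd-client.crt \
    --key=/etc/etcd/master.etcd-client.key \
    --ttl-keys-prefix '/kubernetes.io/masterleases' \
    --lease-duration 10s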
Reconfigure the master:
After the migration is complete, update the master configuration file, by default /etc/origin/master/master-config.yaml, so the master daemons can use the new storage back end:
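The relevant section ends up similar to the following sketch (only the storage-backend and storage-media-type entries are added; leave any existing apiServerArguments in place):

kubernetesMasterConfig:
  apiServerArguments:
    storage-backend:
    - etcd3
    storage-media-type:
    - application/vnd.kubernetes.protobuf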
To restart your services, run:
# systemctl restart atomic-openshift-master-api atomic-openshift-master-controllers
7.5. Recovering from Migration Issues
If you discover problems after the migration has completed, you may wish to restore from a backup:
Stop the master services:
# systemctl stop atomic-openshift-master-api atomic-openshift-master-controllers
Remove the storage-backend and storage-media-type keys from the kubernetesMasterConfig.apiServerArguments section in the master configuration file on each master:
kubernetesMasterConfig:
  apiServerArguments:
    ...
Restore from backups that were taken prior to the migration, located in a timestamped directory under /var/lib/etcd, such as:
/var/lib/etcd/openshift-backup-pre-migration20170825135732
Use the procedure described in Restoring etcd.
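For a single etcd member, a rough sketch of the restore looks like the following, assuming the example backup directory above contains the member data; for the complete and multi-member procedure, follow Restoring etcd:
# systemctl stop etcd
# mv /var/lib/etcd/member /var/lib/etcd/member.migrated
# cp -Rp /var/lib/etcd/openshift-backup-pre-migration20170825135732/member /var/lib/etcd/
# echo "ETCD_FORCE_NEW_CLUSTER=true" >> /etc/etcd/etcd.conf
# systemctl start etcd
# sed -i '/ETCD_FORCE_NEW_CLUSTER=true/d' /etc/etcd/etcd.conf
# systemctl restart etcd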
Restart the master services:
# systemctl restart atomic-openshift-master-api atomic-openshift-master-controllers