3.4. Migrating your applications
You can migrate your applications by using the Migration Toolkit for Containers (MTC) web console or on the command line.
3.4.1. Prerequisites
The Migration Toolkit for Containers (MTC) has the following prerequisites:
- You must be logged in as a user with cluster-admin privileges on all clusters.
- The MTC version must be the same on all clusters.
Clusters:
- The source cluster must be upgraded to the latest MTC z-stream release.
- The cluster on which the migration-controller pod is running must have unrestricted network access to the other clusters.
- The clusters must have unrestricted network access to each other.
- The clusters must have unrestricted network access to the replication repository.
- The clusters must be able to communicate using OpenShift routes on port 443.
- The clusters must have no critical conditions.
- The clusters must be in a ready state.
Volume migration:
- The persistent volumes (PVs) must be valid.
- The PVs must be bound to persistent volume claims.
- If you copy the PVs by using the move method, the clusters must have unrestricted network access to the remote volume.
If you copy the PVs by using the snapshot copy method, the following prerequisites apply:
- The cloud provider must support snapshots.
- The volumes must have the same cloud provider.
- The volumes must be located in the same geographic region.
- The volumes must have the same storage class.
- If you perform a direct volume migration in a proxy environment, you must configure a Stunnel TCP proxy.
- If you perform a direct image migration, you must expose the internal registry of the source cluster to external traffic.
3.4.1.1. Creating a CA certificate bundle file
If you use a self-signed certificate to secure a cluster or a replication repository for the Migration Toolkit for Containers (MTC), certificate verification might fail with the following error message: Certificate signed by unknown authority.
You can create a custom CA certificate bundle file and upload it in the MTC web console when you add a cluster or a replication repository.
Procedure
Download a CA certificate from a remote endpoint and save it as a CA bundle file:
$ echo -n | openssl s_client -connect <host_FQDN>:<port> \
  | sed -ne '/-BEGIN CERTIFICATE-/,/-END CERTIFICATE-/p' > <ca_bundle.cert>
3.4.1.2. Configuring a proxy for direct volume migration
If you are performing direct volume migration from a source cluster behind a proxy, you must configure a Stunnel proxy in the MigrationController custom resource (CR). Stunnel creates a transparent tunnel between the source and target clusters for the TCP connection without changing the certificates.
Direct volume migration supports only one proxy. The source cluster cannot access the route of the target cluster if the target cluster is also behind a proxy.
Prerequisites
- You must be logged in as a user with cluster-admin privileges on all clusters.
Procedure
- Log in to the cluster on which the MigrationController pod runs.
- Get the MigrationController CR manifest:
$ oc get migrationcontroller <migration_controller> -n openshift-migration
- Add the stunnel_tcp_proxy parameter:
- 1
- Specify the Stunnel proxy: http://<user_name>:<password>@<ip_address>:<port>.
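A minimal sketch of the resulting manifest, assuming the stunnel_tcp_proxy parameter sits directly under spec like other MigrationController settings (the proxy URL is a placeholder):

```yaml
apiVersion: migration.openshift.io/v1alpha1
kind: MigrationController
metadata:
  name: migration-controller
  namespace: openshift-migration
spec:
  # 1 - Stunnel proxy for direct volume migration (placeholder credentials)
  stunnel_tcp_proxy: http://<user_name>:<password>@<ip_address>:<port>
```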
- Save the manifest as migration-controller.yaml.
- Apply the updated manifest:
$ oc replace -f migration-controller.yaml -n openshift-migration
3.4.1.3. Writing an Ansible playbook for a migration hook
You can write an Ansible playbook to use as a migration hook. The hook is added to a migration plan by using the MTC web console or by specifying values for the spec.hooks parameters in the MigPlan custom resource (CR) manifest.
The Ansible playbook is mounted onto a hook container as a config map. The hook container runs as a job, using the cluster, service account, and namespace specified in the MigPlan CR. The hook container uses a specified service account token so that the tasks do not require authentication before they run in the cluster.
3.4.1.3.1. Ansible modules
You can use the Ansible shell module to run oc commands.
Example shell module
- hosts: localhost
  gather_facts: false
  tasks:
  - name: get pod name
    shell: oc get po --all-namespaces
You can use kubernetes.core modules, such as k8s_info, to interact with Kubernetes resources.
Example k8s_info module
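A sketch of a k8s_info task, assuming the kubernetes.core collection is available in the hook image; the namespace and label selector are illustrative:

```yaml
- hosts: localhost
  gather_facts: false
  tasks:
  - name: Get the migration-controller pod
    kubernetes.core.k8s_info:
      api_version: v1
      kind: Pod
      namespace: openshift-migration
      label_selectors:
      - app=migration-controller
    register: pods
  - name: Print the first pod name
    debug:
      msg: "{{ pods.resources[0].metadata.name }}"
```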
You can use the fail module to produce a non-zero exit status in cases where a non-zero exit status would not normally be produced, ensuring that the success or failure of a hook is detected. Hooks run as jobs, and the success or failure status of a hook is based on the exit status of the job container.
Example fail module
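A sketch of a fail task; the condition variable is illustrative:

```yaml
- hosts: localhost
  gather_facts: false
  tasks:
  - name: Set a condition that would normally not fail the play
    set_fact:
      migration_failed: true
  - name: Force a non-zero exit status so the hook job is marked failed
    fail:
      msg: "Cause a failure"
    when: migration_failed
```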
3.4.1.3.2. Environment variables
The MigPlan CR name and migration namespaces are passed as environment variables to the hook container. These variables are accessed by using the lookup plug-in.
Example environment variables
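A sketch that reads the variables with the env lookup; the variable names MIGRATION_PLAN_NAME and MIGRATION_NAMESPACES are assumptions and may differ by MTC version:

```yaml
- hosts: localhost
  gather_facts: false
  tasks:
  - set_fact:
      namespaces: "{{ (lookup('env', 'MIGRATION_NAMESPACES')).split(',') }}"
  - debug:
      msg: "{{ item }}"
    with_items: "{{ namespaces }}"
  - debug:
      msg: "{{ lookup('env', 'MIGRATION_PLAN_NAME') }}"
```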
3.4.1.4. Additional resources
3.4.2. Migrating your applications by using the MTC web console
You can configure clusters and a replication repository by using the MTC web console. Then, you can create and run a migration plan.
3.4.2.1. Launching the MTC web console
You can launch the Migration Toolkit for Containers (MTC) web console in a browser.
Prerequisites
- The MTC web console must have network access to the OpenShift Container Platform web console.
- The MTC web console must have network access to the OAuth authorization server.
Procedure
- Log in to the OpenShift Container Platform cluster on which you have installed MTC.
Obtain the MTC web console URL by entering the following command:
$ oc get -n openshift-migration route/migration -o go-template='https://{{ .spec.host }}'
The output resembles the following: https://migration-openshift-migration.apps.cluster.openshift.com
- Launch a browser and navigate to the MTC web console.
Note: If you try to access the MTC web console immediately after installing the Migration Toolkit for Containers Operator, the console might not load because the Operator is still configuring the cluster. Wait a few minutes and retry.
- If you are using self-signed CA certificates, you will be prompted to accept the CA certificate of the source cluster API server. The web page guides you through the process of accepting the remaining certificates.
- Log in with your OpenShift Container Platform username and password.
3.4.2.2. Adding a cluster to the MTC web console
You can add a cluster to the Migration Toolkit for Containers (MTC) web console.
Prerequisites
If you are using Azure snapshots to copy data:
- You must specify the Azure resource group name for the cluster.
- The clusters must be in the same Azure resource group.
- The clusters must be in the same geographic location.
Procedure
- Log in to the cluster.
- Obtain the migration-controller service account token:
$ oc sa get-token migration-controller -n openshift-migration
The output resembles the following:
eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJtaWciLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlY3JldC5uYW1lIjoibWlnLXRva2VuLWs4dDJyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6Im1pZyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImE1YjFiYWMwLWMxYmYtMTFlOS05Y2NiLTAyOWRmODYwYjMwOCIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDptaWc6bWlnIn0.xqeeAINK7UXpdRqAtOj70qhBJPeMwmgLomV9iFxr5RoqUgKchZRG2J2rkqmPm6vr7K-cm7ibD1IBpdQJCcVDuoHYsFgV4mp9vgOfn9osSDp2TGikwNz4Az95e81xnjVUmzh-NjDsEpw71DH92iHV_xt2sTwtzftS49LpPW2LjrV0evtNBP_t_RfskdArt5VSv25eORl7zScqfe1CiMkcVbf2UqACQjo3LbkpfN26HAioO2oH0ECPiRzT0Xyh-KwFutJLS9Xgghyw-LD9kPKcE_xbbJ9Y4Rqajh7WdPYuB0Jd9DPVrslmzK-F6cgHHYoZEv0SvLQi-PO0rpDrcjOEQQ
Copy to Clipboard Copied! Toggle word wrap Toggle overflow - In the MTC web console, click Clusters.
- Click Add cluster.
Fill in the following fields:
- Cluster name: The cluster name can contain lower-case letters (a-z) and numbers (0-9). It must not contain spaces or international characters.
- URL: Specify the API server URL, for example, https://<www.example.com>:8443.
- Service account token: Paste the migration-controller service account token.
- Exposed route host to image registry: If you are using direct image migration, specify the exposed route to the image registry of the source cluster, for example, www.example.apps.cluster.com. You can specify a port. The default port is 5000.
- Azure cluster: You must select this option if you use Azure snapshots to copy your data.
- Azure resource group: This field is displayed if Azure cluster is selected. Specify the Azure resource group.
- Require SSL verification: Optional: Select this option to verify SSL connections to the cluster.
- CA bundle file: This field is displayed if Require SSL verification is selected. If you created a custom CA certificate bundle file for self-signed certificates, click Browse, select the CA bundle file, and upload it.
- Click Add cluster.
The cluster appears in the Clusters list.
3.4.2.3. Adding a replication repository to the MTC web console
You can add an object storage bucket as a replication repository to the Migration Toolkit for Containers (MTC) web console.
Prerequisites
- You must configure an object storage bucket for migrating the data.
Procedure
- In the MTC web console, click Replication repositories.
- Click Add repository.
Select a Storage provider type and fill in the following fields:
AWS for AWS S3, MCG, and generic S3 providers:
- Replication repository name: Specify the replication repository name in the MTC web console.
- S3 bucket name: Specify the name of the S3 bucket you created.
- S3 bucket region: Specify the S3 bucket region. Required for AWS S3. Optional for other S3 providers.
- S3 endpoint: Specify the URL of the S3 service, not the bucket, for example, https://<s3-storage.apps.cluster.com>. Required for a generic S3 provider. You must use the https:// prefix.
- S3 provider access key: Specify the <AWS_ACCESS_KEY_ID> for AWS or the S3 provider access key for MCG.
- S3 provider secret access key: Specify the <AWS_SECRET_ACCESS_KEY> for AWS or the S3 provider secret access key for MCG.
- Require SSL verification: Clear this check box if you are using a generic S3 provider.
- If you use a custom CA bundle, click Browse and browse to the Base64-encoded CA bundle file.
GCP:
- Replication repository name: Specify the replication repository name in the MTC web console.
- GCP bucket name: Specify the name of the GCP bucket.
- GCP credential JSON blob: Specify the string in the credentials-velero file.
Azure:
- Replication repository name: Specify the replication repository name in the MTC web console.
- Azure resource group: Specify the resource group of the Azure Blob storage.
- Azure storage account name: Specify the Azure Blob storage account name.
- Azure credentials - INI file contents: Specify the string in the credentials-velero file.
- Click Add repository and wait for connection validation.
Click Close.
The new repository appears in the Replication repositories list.
3.4.2.4. Creating a migration plan in the MTC web console
You can create a migration plan in the Migration Toolkit for Containers (MTC) web console.
Prerequisites
- You must be logged in as a user with cluster-admin privileges on all clusters.
- You must ensure that the same MTC version is installed on all clusters.
- You must add the clusters and the replication repository to the MTC web console.
- If you want to use the move data copy method to migrate a persistent volume (PV), the source and target clusters must have uninterrupted network access to the remote volume.
- If you want to use direct image migration, the MigCluster custom resource manifest of the source cluster must specify the exposed route of the internal image registry.
Procedure
- In the MTC web console, click Migration plans.
- Click Add migration plan.
Enter the Plan name and click Next.
The migration plan name must not exceed 253 lower-case alphanumeric characters (a-z, 0-9) and must not contain spaces or underscores (_).
- Select a Source cluster, a Target cluster, and a Repository, and click Next.
- On the Namespaces page, select the projects to be migrated and click Next.
On the Persistent volumes page, click a Migration type for each PV:
- The Copy option copies the data from the PV of a source cluster to the replication repository and then restores the data on a newly created PV, with similar characteristics, in the target cluster.
- The Move option unmounts a remote volume, for example, NFS, from the source cluster, creates a PV resource on the target cluster pointing to the remote volume, and then mounts the remote volume on the target cluster. Applications running on the target cluster use the same remote volume that the source cluster was using.
- Click Next.
On the Copy options page, select a Copy method for each PV:
- Snapshot copy backs up and restores data using the cloud provider’s snapshot functionality. It is significantly faster than Filesystem copy.
Filesystem copy backs up the files on the source cluster and restores them on the target cluster.
The file system copy method is required for direct volume migration.
- You can select Verify copy to verify data migrated with Filesystem copy. Data is verified by generating a checksum for each source file and checking the checksum after restoration. Data verification significantly reduces performance.
Select a Target storage class.
If you selected Filesystem copy, you can change the target storage class.
- Click Next.
On the Migration options page, the Direct image migration option is selected if you specified an exposed image registry route for the source cluster. The Direct PV migration option is selected if you are migrating data with Filesystem copy.
The direct migration options copy images and files directly from the source cluster to the target cluster. This option is much faster than copying images and files from the source cluster to the replication repository and then from the replication repository to the target cluster.
- Click Next.
Optional: On the Hooks page, click Add Hook to add a hook to the migration plan.
A hook runs custom code. You can add up to four hooks to a single migration plan. Each hook runs during a different migration step.
- Enter the name of the hook to display in the web console.
- If the hook is an Ansible playbook, select Ansible playbook and click Browse to upload the playbook or paste the contents of the playbook in the field.
- Optional: Specify an Ansible runtime image if you are not using the default hook image.
If the hook is not an Ansible playbook, select Custom container image and specify the image name and path.
A custom container image can include Ansible playbooks.
- Select Source cluster or Target cluster.
- Enter the Service account name and the Service account namespace.
Select the migration step for the hook:
- preBackup: Before the application workload is backed up on the source cluster
- postBackup: After the application workload is backed up on the source cluster
- preRestore: Before the application workload is restored on the target cluster
- postRestore: After the application workload is restored on the target cluster
- Click Add.
Click Finish.
The migration plan is displayed in the Migration plans list.
3.4.2.5. Running a migration plan in the MTC web console
You can stage or migrate applications and data with the migration plan you created in the Migration Toolkit for Containers (MTC) web console.
During migration, MTC sets the reclaim policy of migrated persistent volumes (PVs) to Retain on the target cluster.
The Backup custom resource contains a PVOriginalReclaimPolicy annotation that indicates the original reclaim policy. You can manually restore the reclaim policy of the migrated PVs.
Prerequisites
The MTC web console must contain the following:
- Source cluster in a Ready state
- Target cluster in a Ready state
- Replication repository
- Valid migration plan
Procedure
- Log in to the source cluster.
Delete old images:
$ oc adm prune images
- Log in to the MTC web console and click Migration plans.
- Click the Options menu next to a migration plan and select Stage to copy data from the source cluster to the target cluster without stopping the application.
You can run Stage multiple times to reduce the actual migration time.
- When you are ready to migrate the application workload, click the Options menu beside a migration plan and select Migrate.
- Optional: In the Migrate window, you can select Do not stop applications on the source cluster during migration.
- Click Migrate.
When the migration is complete, verify that the application migrated successfully in the OpenShift Container Platform web console:
- Click Home → Projects.
- Click the migrated project to view its status.
- In the Routes section, click Location to verify that the application is functioning, if applicable.
- Click Workloads → Pods to verify that the pods are running in the migrated namespace.
- Click Storage → Persistent volumes to verify that the migrated persistent volume is correctly provisioned.
3.4.3. Migrating your applications from the command line
You can migrate your applications on the command line by using the MTC custom resources (CRs).
You can migrate applications from a local cluster to a remote cluster, from a remote cluster to a local cluster, and between remote clusters.
MTC terminology
The following terms are relevant for configuring clusters:
- host cluster:
  - The migration-controller pod runs on the host cluster.
  - A host cluster does not require an exposed secure registry route for direct image migration.
- Local cluster: The local cluster is often the same as the host cluster but this is not a requirement.
- Remote cluster:
  - A remote cluster must have an exposed secure registry route for direct image migration.
  - A remote cluster must have a Secret CR containing the migration-controller service account token.
The following terms are relevant for performing a migration:
- Source cluster: Cluster from which the applications are migrated.
- Destination cluster: Cluster to which the applications are migrated.
3.4.3.1. Migrating your applications with the MTC API
You can migrate your applications on the command line with the Migration Toolkit for Containers (MTC) API.
You can migrate applications from a local cluster to a remote cluster, from a remote cluster to a local cluster, and between remote clusters.
This procedure describes how to perform indirect migration and direct migration:
- Indirect migration: Images, volumes, and Kubernetes objects are copied from the source cluster to the replication repository and then from the replication repository to the destination cluster.
- Direct migration: Images or volumes are copied directly from the source cluster to the destination cluster. Direct image migration and direct volume migration have significant performance benefits.
You create the following custom resources (CRs) to perform a migration:
- MigCluster CR: Defines a host, local, or remote cluster. The migration-controller pod runs on the host cluster.
- Secret CR: Contains credentials for a remote cluster or storage.
- MigStorage CR: Defines a replication repository. Different storage providers require different parameters in the MigStorage CR manifest.
- MigPlan CR: Defines a migration plan.
- MigMigration CR: Performs a migration defined in an associated MigPlan CR. You can create multiple MigMigration CRs for a single MigPlan CR for the following purposes:
  - To perform stage migrations, which copy most of the data without stopping the application, before running a migration. Stage migrations improve the performance of the migration.
  - To cancel a migration in progress.
  - To roll back a completed migration.
Prerequisites
- You must have cluster-admin privileges for all clusters.
- You must install the OpenShift Container Platform CLI (oc).
- You must install the Migration Toolkit for Containers Operator on all clusters.
- The version of the installed Migration Toolkit for Containers Operator must be the same on all clusters.
- You must configure an object storage as a replication repository.
- If you are using direct image migration, you must expose a secure registry route on all remote clusters.
- If you are using direct volume migration, the source cluster must not have an HTTP proxy configured.
Procedure
- Create a MigCluster CR manifest for the host cluster called host-cluster.yaml.
- Create a MigCluster CR for the host cluster:
$ oc create -f host-cluster.yaml -n openshift-migration
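A minimal host-cluster.yaml sketch, assuming the migration.openshift.io/v1alpha1 API group and the isHostCluster flag described in the MigCluster reference section:

```yaml
apiVersion: migration.openshift.io/v1alpha1
kind: MigCluster
metadata:
  name: host
  namespace: openshift-migration
spec:
  isHostCluster: true
```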
- Create a Secret CR manifest for each remote cluster called cluster-secret.yaml.
- 1
- Specify the base64-encoded migration-controller service account (SA) token of the remote cluster.
You can obtain the SA token by running the following command:
$ oc sa get-token migration-controller -n openshift-migration | base64 -w 0
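A cluster-secret.yaml sketch; the openshift-config namespace and the saToken key are assumptions based on common MTC configurations:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: <cluster_secret>
  namespace: openshift-config
type: Opaque
data:
  saToken: <sa_token> # 1 - base64-encoded migration-controller SA token
```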
- Create a Secret CR for each remote cluster:
$ oc create -f cluster-secret.yaml
- Create a MigCluster CR manifest for each remote cluster called remote-cluster.yaml.
- 1
- Optional: Specify the exposed registry route, for example, docker-registry-default.apps.example.com, if you are using direct image migration.
- 2
- SSL verification is enabled if false. CA certificates are not required or checked if true.
- 3
- Specify the Secret CR of the remote cluster.
- 4
- Specify the URL of the remote cluster.
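A remote-cluster.yaml sketch keyed to the callouts above; field names are recalled from the MigCluster API and may differ by version:

```yaml
apiVersion: migration.openshift.io/v1alpha1
kind: MigCluster
metadata:
  name: <remote_cluster>
  namespace: openshift-migration
spec:
  exposedRegistryPath: <exposed_registry_route> # 1
  insecure: false                               # 2
  isHostCluster: false
  serviceAccountSecretRef:
    name: <cluster_secret>                      # 3
    namespace: openshift-config
  url: <remote_cluster_url>                     # 4
```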
- Create a MigCluster CR for each remote cluster:
$ oc create -f remote-cluster.yaml -n openshift-migration
- Verify that all clusters are in a Ready state:
$ oc describe cluster <cluster_name>
- Create a Secret CR manifest for the replication repository called storage-secret.yaml.
AWS credentials are base64-encoded by default. If you are using another storage provider, you must encode your credentials by running the following command with each key:
$ echo -n "<key>" | base64 -w 0
- 1
- Specify the key ID or the secret key. Both keys must be base64-encoded.
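A storage-secret.yaml sketch for AWS; the key names follow the aws-access-key-id/aws-secret-access-key convention and are assumptions:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: <storage_secret>
  namespace: openshift-config
type: Opaque
data:
  aws-access-key-id: <base64_encoded_key_id>         # 1
  aws-secret-access-key: <base64_encoded_secret_key> # 1
```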
- Create the Secret CR for the replication repository:
$ oc create -f storage-secret.yaml
- Create a MigStorage CR manifest for the replication repository called migstorage.yaml.
- 1
- Specify the bucket name.
- 2
- Specify the Secret CR of the object storage. You must ensure that the credentials stored in the Secret CR of the object storage are correct.
- 3
- Specify the storage provider.
- 4
- Optional: If you are copying data by using snapshots, specify the Secret CR of the object storage. You must ensure that the credentials stored in the Secret CR of the object storage are correct.
- 5
- Optional: If you are copying data by using snapshots, specify the storage provider.
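A migstorage.yaml sketch for AWS keyed to the callouts above; field names are recalled from the MigStorage API and may differ by version:

```yaml
apiVersion: migration.openshift.io/v1alpha1
kind: MigStorage
metadata:
  name: <migstorage_name>
  namespace: openshift-migration
spec:
  backupStorageProvider: aws          # 3
  backupStorageConfig:
    awsBucketName: <bucket_name>      # 1
    credsSecretRef:
      name: <storage_secret>          # 2
      namespace: openshift-config
  volumeSnapshotProvider: aws         # 5
  volumeSnapshotConfig:
    credsSecretRef:
      name: <storage_secret>          # 4
      namespace: openshift-config
```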
- Create the MigStorage CR:
$ oc create -f migstorage.yaml -n openshift-migration
- Verify that the MigStorage CR is in a Ready state:
$ oc describe migstorage <migstorage_name>
- Create a MigPlan CR manifest called migplan.yaml.
- Create the MigPlan CR:
$ oc create -f migplan.yaml -n openshift-migration
- View the MigPlan instance to verify that it is in a Ready state:
$ oc describe migplan <migplan_name> -n openshift-migration
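A migplan.yaml sketch; the cluster, storage, and namespace values are placeholders:

```yaml
apiVersion: migration.openshift.io/v1alpha1
kind: MigPlan
metadata:
  name: <migplan_name>
  namespace: openshift-migration
spec:
  srcMigClusterRef:
    name: <remote_cluster>
    namespace: openshift-migration
  destMigClusterRef:
    name: host
    namespace: openshift-migration
  migStorageRef:
    name: <migstorage_name>
    namespace: openshift-migration
  namespaces:
  - <application_namespace>
```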
- Create a MigMigration CR manifest called migmigration.yaml.
- Create the MigMigration CR to start the migration defined in the MigPlan CR:
$ oc create -f migmigration.yaml -n openshift-migration
- Verify the progress of the migration by watching the MigMigration CR:
$ oc get migmigration <migmigration_name> -n openshift-migration -w
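A migmigration.yaml sketch; stage and quiescePods are shown with typical values for a final migration:

```yaml
apiVersion: migration.openshift.io/v1alpha1
kind: MigMigration
metadata:
  name: <migmigration_name>
  namespace: openshift-migration
spec:
  migPlanRef:
    name: <migplan_name>
    namespace: openshift-migration
  stage: false       # full migration, not a stage migration
  quiescePods: true  # scale application pods to 0 after backup
```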
3.4.3.2. MTC custom resource manifests
Migration Toolkit for Containers (MTC) uses the following custom resource (CR) manifests to create CRs for migrating applications.
3.4.3.2.1. DirectImageMigration
The DirectImageMigration CR copies images directly from the source cluster to the destination cluster.
3.4.3.2.2. DirectImageStreamMigration
The DirectImageStreamMigration CR copies image stream references directly from the source cluster to the destination cluster.
3.4.3.2.3. DirectVolumeMigration
The DirectVolumeMigration CR copies persistent volumes (PVs) directly from the source cluster to the destination cluster.
- 1
- Namespaces are created for the PVs on the destination cluster if true.
- 2
- The DirectVolumeMigrationProgress CRs are deleted after migration if true. The default value is false so that DirectVolumeMigrationProgress CRs are retained for troubleshooting.
- 3
- Update the cluster name if the destination cluster is not the host cluster.
- 4
- Specify one or more PVCs to be migrated with direct volume migration.
- 5
- Specify the namespace of each PVC.
- 6
- Specify the MigCluster CR name of the source cluster.
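A DirectVolumeMigration sketch keyed to the callouts above; field names are recalled from the API and may differ by version:

```yaml
apiVersion: migration.openshift.io/v1alpha1
kind: DirectVolumeMigration
metadata:
  name: <direct_volume_migration>
  namespace: openshift-migration
spec:
  createDestinationNamespaces: false # 1
  deleteProgressReportingCRs: false  # 2
  destMigClusterRef:
    name: <destination_cluster>      # 3
    namespace: openshift-migration
  persistentVolumeClaims:
  - name: <pvc>                      # 4
    namespace: <pvc_namespace>       # 5
  srcMigClusterRef:
    name: <source_cluster>           # 6
    namespace: openshift-migration
```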
3.4.3.2.4. DirectVolumeMigrationProgress
The DirectVolumeMigrationProgress CR shows the progress of the DirectVolumeMigration CR.
3.4.3.2.5. MigAnalytic
The MigAnalytic CR collects the number of images, Kubernetes resources, and the PV capacity from an associated MigPlan CR.
- 1
- Specify the MigPlan CR name associated with the MigAnalytic CR.
- 2
- Specify the MigPlan CR name associated with the MigAnalytic CR.
- 3
- Optional: The number of images is returned if true.
- 4
- Optional: Returns the number, kind, and API version of the Kubernetes resources if true.
- 5
- Optional: Returns the PV capacity if true.
- 6
- Returns a list of image names if true. Default is false so that the output is not excessively long.
- 7
- Optional: Specify the maximum number of image names to return if listImages is true.
- 8
- Specify the MigPlan CR name associated with the MigAnalytic CR.
3.4.3.2.6. MigCluster
The MigCluster CR defines a host, local, or remote cluster.
- 1
- Optional: Update the cluster name if the migration-controller pod is not running on this cluster.
- 2
- The migration-controller pod runs on this cluster if true.
- 3
- Optional: If the storage provider is Microsoft Azure, specify the resource group.
- 4
- Optional: If you created a certificate bundle for self-signed CA certificates and if the insecure parameter value is false, specify the base64-encoded certificate bundle.
- 5
- SSL verification is enabled if false.
- 6
- The cluster is validated if true.
- 7
- The restic pods are restarted on the source cluster after the stage pods are created if true.
- 8
- Optional: If you are using direct image migration, specify the exposed registry path of a remote cluster.
- 9
- Specify the URL of the remote cluster.
- 10
- Specify the name of the Secret CR for the remote cluster.
3.4.3.2.7. MigHook
The MigHook CR defines an Ansible playbook or a custom image that runs tasks at a specified stage of the migration.
- 1
- Optional: A unique hash is appended to the value for this parameter so that each migration hook has a unique name. You do not need to specify the value of the name parameter.
- 2
- Specify the migration hook name, unless you specify the value of the generateName parameter.
- 3
- Optional: Specify the maximum number of seconds that a hook can run. The default value is 1800.
- 4
- The hook is a custom image if true. The custom image can include Ansible or it can be written in a different programming language.
- 5
- Specify the custom image, for example, quay.io/konveyor/hook-runner:latest. Required if custom is true.
- 6
- Specify the entire base64-encoded Ansible playbook. Required if custom is false.
- 7
- Specify source or destination as the cluster on which the hook will run.
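A MigHook sketch keyed to the callouts above (use either generateName or name, not both); field names are recalled from the API and may differ by version:

```yaml
apiVersion: migration.openshift.io/v1alpha1
kind: MigHook
metadata:
  generateName: <hook_name_prefix>-          # 1
  namespace: openshift-migration
spec:
  activeDeadlineSeconds: 1800                # 3
  custom: false                              # 4
  image: quay.io/konveyor/hook-runner:latest # 5
  playbook: <base64_encoded_playbook>        # 6
  targetCluster: source                      # 7
```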
3.4.3.2.8. MigMigration
The MigMigration CR runs an associated MigPlan CR.
You can create multiple MigMigration CRs associated with the same MigPlan CR for the following scenarios:
- You can run multiple stage or incremental migrations to copy data without stopping the pods on the source cluster. Running stage migrations improves the performance of the actual migration.
- You can cancel a migration in progress.
- You can roll back a migration.
1. A migration in progress is canceled if true.
2. A completed migration is rolled back if true.
3. Data is copied incrementally and the pods on the source cluster are not stopped if true.
4. The pods on the source cluster are scaled to 0 after the Backup stage of a migration if true.
5. The labels and annotations applied during the migration are retained if true.
6. The status of the migrated pods on the destination cluster is checked and the names of pods that are not in a Running state are returned if true.
7. migPlanRef.name: Specify the name of the associated MigPlan CR.
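As a sketch of how these parameters fit together in a MigMigration manifest (the names in angle brackets are placeholders, and the values shown are examples only):

```yaml
apiVersion: migration.openshift.io/v1alpha1
kind: MigMigration
metadata:
  name: <migmigration>
  namespace: openshift-migration
spec:
  canceled: false           # 1: set to true to cancel a migration in progress
  rollback: false           # 2: set to true to roll back a completed migration
  stage: false              # 3: true copies data incrementally without stopping source pods
  quiescePods: true         # 4: true scales source pods to 0 after the Backup stage
  keepAnnotations: true     # 5: true retains labels and annotations applied during migration
  verify: false             # 6: true checks migrated pod status on the destination cluster
  migPlanRef:               # 7: reference to the associated MigPlan CR
    name: <migplan>
    namespace: openshift-migration
```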
3.4.3.2.9. MigPlan
The MigPlan CR defines the parameters of a migration plan. It contains a group of applications that are migrated with the same parameters.
1. The migration has completed if true. You cannot create another MigMigration CR for this MigPlan CR.
2. Specify the name of the source cluster MigCluster CR.
3. Specify the name of the destination cluster MigCluster CR.
4. Optional: You can specify up to four migration hooks.
5. Optional: Specify the namespace in which the hook runs.
6. Optional: Specify the migration phase during which a hook runs. One hook can be assigned to one phase. The expected values are PreBackup, PostBackup, PreRestore, and PostRestore.
7. Optional: Specify the name of the MigHook CR.
8. Optional: Specify the namespace of the MigHook CR.
9. Optional: Specify a service account with cluster-admin privileges.
10. Direct image migration is disabled if true. Images are copied from the source cluster to the replication repository and from the replication repository to the destination cluster.
11. Direct volume migration is disabled if true. PVs are copied from the source cluster to the replication repository and from the replication repository to the destination cluster.
12. Specify the name of the MigStorage CR.
13. Specify one or more namespaces.
14. The MigPlan CR is validated if true.
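Reconstructed from the callouts above, a MigPlan manifest might look like the following sketch. The angle-bracket names are placeholders and the field layout is an illustration, not a definitive schema:

```yaml
apiVersion: migration.openshift.io/v1alpha1
kind: MigPlan
metadata:
  name: <migplan>
  namespace: openshift-migration
spec:
  closed: false                          # 1: true means the migration has completed
  srcMigClusterRef:                      # 2: source cluster MigCluster CR
    name: <source_cluster>
    namespace: openshift-migration
  destMigClusterRef:                     # 3: destination cluster MigCluster CR
    name: <destination_cluster>
    namespace: openshift-migration
  hooks:                                 # 4: up to four migration hooks
    - executionNamespace: <namespace>    # 5: namespace in which the hook runs
      phase: PreBackup                   # 6: PreBackup, PostBackup, PreRestore, or PostRestore
      reference:
        name: <mighook>                  # 7: name of the MigHook CR
        namespace: <hook_namespace>      # 8: namespace of the MigHook CR
      serviceAccount: <service_account>  # 9: service account with cluster-admin privileges
  indirectImageMigration: true           # 10: true disables direct image migration
  indirectVolumeMigration: false         # 11: true disables direct volume migration
  migStorageRef:                         # 12: MigStorage CR for the replication repository
    name: <migstorage>
    namespace: openshift-migration
  namespaces:                            # 13: one or more namespaces to migrate
    - <source_namespace>
  refresh: true                          # 14: true validates the MigPlan CR
```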
3.4.3.2.10. MigStorage
The MigStorage CR describes the object storage for the replication repository. You can configure Amazon Web Services, Microsoft Azure, Google Cloud Storage, and generic S3-compatible cloud storage, for example, Minio or NooBaa. Different providers require different parameters.
1. Specify the storage provider.
2. Optional: If you are using the snapshot copy method, specify the storage provider.
3. If you are using AWS, specify the bucket name.
4. If you are using AWS, specify the bucket region, for example, us-east-1.
5. Specify the name of the Secret CR that you created for the MigStorage CR.
6. Optional: If you are using the AWS Key Management Service, specify the unique identifier of the key.
7. Optional: If you granted public access to the AWS bucket, specify the bucket URL.
8. Optional: Specify the AWS signature version for authenticating requests to the bucket, for example, 4.
9. Optional: If you are using the snapshot copy method, specify the geographical region of the clusters.
10. Optional: If you are using the snapshot copy method, specify the name of the Secret CR that you created for the MigStorage CR.
11. The cluster is validated if true.
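The callouts above can be assembled into an AWS-flavored MigStorage sketch like the one below. The angle-bracket values are placeholders and the exact field names are an assumption based on the AWS example the callouts describe:

```yaml
apiVersion: migration.openshift.io/v1alpha1
kind: MigStorage
metadata:
  name: <migstorage>
  namespace: openshift-migration
spec:
  backupStorageProvider: aws        # 1: storage provider
  volumeSnapshotProvider: aws       # 2: snapshot copy method provider
  backupStorageConfig:
    awsBucketName: <bucket>         # 3: AWS bucket name
    awsRegion: us-east-1            # 4: AWS bucket region
    credsSecretRef:                 # 5: Secret CR created for the MigStorage CR
      name: <storage_secret>
      namespace: openshift-config
    awsKmsKeyId: <key_id>           # 6: AWS Key Management Service key
    awsPublicUrl: <bucket_url>      # 7: URL of a publicly accessible bucket
    awsSignatureVersion: "4"        # 8: AWS signature version
  volumeSnapshotConfig:
    awsRegion: <region>             # 9: geographical region of the clusters
    credsSecretRef:                 # 10: Secret CR for the snapshot copy method
      name: <snapshot_secret>
      namespace: openshift-config
  refresh: false                    # 11: true validates the cluster
```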
3.4.3.3. Additional resources
3.4.4. Configuring a migration plan
You can increase the number of objects to be migrated or exclude resources from the migration.
3.4.4.1. Increasing limits for large migrations
You can increase the limits on migration objects and container resources for large migrations with the Migration Toolkit for Containers (MTC).
You must test these changes before you perform a migration in a production environment.
Procedure
Edit the MigrationController custom resource (CR) manifest:

$ oc edit migrationcontroller -n openshift-migration

Update the following parameters:
1. Specifies the number of CPUs available to the MigrationController CR.
2. Specifies the amount of memory available to the MigrationController CR.
3. Specifies the number of CPU units available for MigrationController CR requests. 100m represents 0.1 CPU units (100 * 1e-3).
4. Specifies the amount of memory available for MigrationController CR requests.
5. Specifies the number of persistent volumes that can be migrated.
6. Specifies the number of pods that can be migrated.
7. Specifies the number of namespaces that can be migrated.
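Reconstructed from the callouts above, the updated spec section of the MigrationController CR might look like the following sketch. The parameter names follow the MTC naming convention and the values are examples only, not recommended limits:

```yaml
apiVersion: migration.openshift.io/v1alpha1
kind: MigrationController
metadata:
  name: migration-controller
  namespace: openshift-migration
spec:
  mig_controller_limits_cpu: "1"         # 1: CPUs available to the MigrationController CR
  mig_controller_limits_memory: 10Gi     # 2: memory available to the MigrationController CR
  mig_controller_requests_cpu: 100m      # 3: CPU units requested (100m = 0.1 CPU)
  mig_controller_requests_memory: 350Mi  # 4: memory requested
  mig_pv_limit: 100                      # 5: persistent volumes that can be migrated
  mig_pod_limit: 100                     # 6: pods that can be migrated
  mig_namespace_limit: 10                # 7: namespaces that can be migrated
```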
Create a migration plan that uses the updated parameters to verify the changes.
If your migration plan exceeds the MigrationController CR limits, the MTC console displays a warning message when you save the migration plan.
3.4.4.2. Excluding resources from a migration plan
You can exclude resources, for example, image streams, persistent volumes (PVs), or subscriptions, from a Migration Toolkit for Containers (MTC) migration plan in order to reduce the resource load for migration or to migrate images or PVs with a different tool.
By default, the MTC excludes service catalog resources and Operator Lifecycle Manager (OLM) resources from migration. These resources are part of the service catalog API group and the OLM API group, neither of which is supported for migration at this time.
Procedure
Edit the MigrationController custom resource manifest:

$ oc edit migrationcontroller <migration_controller> -n openshift-migration

Update the spec section by adding a parameter to exclude specific resources, or by adding a resource to the excluded_resources parameter if it does not have its own exclusion parameter:
1. Add disable_image_migration: true to exclude image streams from the migration. Do not edit the excluded_resources parameter. imagestreams is added to excluded_resources when the MigrationController pod restarts.
2. Add disable_pv_migration: true to exclude PVs from the migration plan. Do not edit the excluded_resources parameter. persistentvolumes and persistentvolumeclaims are added to excluded_resources when the MigrationController pod restarts. Disabling PV migration also disables PV discovery when you create the migration plan.
3. You can add OpenShift Container Platform resources to the excluded_resources list. Do not delete the default excluded resources. These resources are problematic to migrate and must be excluded.
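The spec update described above might look like the following sketch; the default excluded_resources entries shown are taken from the example output later in this procedure, and the additional resource is a placeholder:

```yaml
apiVersion: migration.openshift.io/v1alpha1
kind: MigrationController
metadata:
  name: migration-controller
  namespace: openshift-migration
spec:
  disable_image_migration: true   # 1: excludes image streams from the migration
  disable_pv_migration: true      # 2: excludes PVs and PVCs from the migration plan
  excluded_resources:             # 3: do not delete the default entries
  - imagetags
  - templateinstances
  - clusterserviceversions
  - packagemanifests
  - subscriptions
  - servicebrokers
  - servicebindings
  - serviceclasses
  - serviceinstances
  - serviceplans
  - <additional_resource>         # placeholder for a resource you add
```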
Wait two minutes for the MigrationController pod to restart so that the changes are applied.

Verify that the resource is excluded:

$ oc get deployment -n openshift-migration migration-controller -o yaml | grep EXCLUDED_RESOURCES -A1

The output contains the excluded resources:
Example output
- name: EXCLUDED_RESOURCES
  value: imagetags,templateinstances,clusterserviceversions,packagemanifests,subscriptions,servicebrokers,servicebindings,serviceclasses,serviceinstances,serviceplans,imagestreams,persistentvolumes,persistentvolumeclaims