Chapter 10. Advanced migration options
You can automate your migrations and modify the MigPlan and MigrationController custom resources in order to perform large-scale migrations and to improve performance.
10.1. Terminology
| Term | Definition |
|---|---|
| Source cluster | Cluster from which the applications are migrated. |
| Destination cluster[1] | Cluster to which the applications are migrated. |
| Replication repository | Object storage used for copying images, volumes, and Kubernetes objects during indirect migration or for Kubernetes objects during direct volume migration or direct image migration. The replication repository must be accessible to all clusters. |
| Host cluster | Cluster on which the migration-controller pod runs and the web console is running. The host cluster is usually the destination cluster but this is not required. The host cluster does not require an exposed registry route for direct image migration. |
| Remote cluster | A remote cluster is usually the source cluster but this is not required. A remote cluster requires a Secret custom resource that contains the migration-controller service account token. A remote cluster requires an exposed secure registry route for direct image migration. |
| Indirect migration | Images, volumes, and Kubernetes objects are copied from the source cluster to the replication repository and then from the replication repository to the destination cluster. |
| Direct volume migration | Persistent volumes are copied directly from the source cluster to the destination cluster. |
| Direct image migration | Images are copied directly from the source cluster to the destination cluster. |
| Stage migration | Data is copied to the destination cluster without stopping the application. Running a stage migration multiple times reduces the duration of the cutover migration. |
| Cutover migration | The application is stopped on the source cluster and its resources are migrated to the destination cluster. |
| State migration | Application state is migrated by copying specific persistent volume claims to the destination cluster. |
| Rollback migration | Rollback migration rolls back a completed migration. |
1 Called the target cluster in the MTC web console.
10.2. Migrating applications by using the command line
You can migrate applications with the MTC API by using the command-line interface (CLI) in order to automate the migration.
10.2.1. Migration prerequisites
- You must be logged in as a user with cluster-admin privileges on all clusters.
Direct image migration
- You must ensure that the secure OpenShift image registry of the source cluster is exposed.
- You must create a route to the exposed registry.
Direct volume migration
- If your clusters use proxies, you must configure a Stunnel TCP proxy.
Clusters
- The source cluster must be upgraded to the latest MTC z-stream release.
- The MTC version must be the same on all clusters.
Network
- The clusters must have unrestricted network access to each other and to the replication repository.
- If you copy the persistent volumes with move, the clusters must have unrestricted network access to the remote volumes.
- You must enable the following ports on an OpenShift Container Platform 4 cluster:
  - 6443 (API server)
  - 443 (routes)
  - 53 (DNS)
- You must enable port 443 on the replication repository if you are using TLS.
Persistent volumes (PVs)
- The PVs must be valid.
- The PVs must be bound to persistent volume claims.
If you use snapshots to copy the PVs, the following additional prerequisites apply:
- The cloud provider must support snapshots.
- The PVs must have the same cloud provider.
- The PVs must be located in the same geographic region.
- The PVs must have the same storage class.
10.2.2. Creating a registry route for direct image migration
For direct image migration, you must create a route to the exposed OpenShift image registry on all remote clusters.
Prerequisites
The OpenShift image registry must be exposed to external traffic on all remote clusters.
The OpenShift Container Platform 4 registry is exposed by default.
Procedure
To create a route to an OpenShift Container Platform 4 registry, run the following command:
$ oc create route passthrough --service=image-registry -n openshift-image-registry
10.2.3. Proxy configuration
For OpenShift Container Platform 4.1 and earlier versions, you must configure proxies in the MigrationController custom resource manifest because these versions do not support a cluster-wide proxy object.
For OpenShift Container Platform 4.2 to 4.14, the MTC inherits the cluster-wide proxy settings. You can change the proxy parameters if you want to override the cluster-wide proxy settings.
10.2.3.1. Direct volume migration
Direct Volume Migration (DVM) was introduced in MTC 1.4.2. DVM supports only one proxy. The source cluster cannot access the route of the target cluster if the target cluster is also behind a proxy.
If you want to perform a DVM from a source cluster behind a proxy, you must configure a TCP proxy that works at the transport layer and forwards the SSL connections transparently without decrypting and re-encrypting them with their own SSL certificates. A Stunnel proxy is an example of such a proxy.
10.2.3.1.1. TCP proxy setup for DVM
You can set up a direct connection between the source and the target cluster through a TCP proxy and configure the stunnel_tcp_proxy variable in the MigrationController CR to use the proxy:
apiVersion: migration.openshift.io/v1alpha1
kind: MigrationController
metadata:
  name: migration-controller
  namespace: openshift-migration
spec:
  [...]
  stunnel_tcp_proxy: http://username:password@ip:port
Direct volume migration (DVM) supports only basic authentication for the proxy. Moreover, DVM works only from behind proxies that can tunnel a TCP connection transparently. HTTP/HTTPS proxies in man-in-the-middle mode do not work. The existing cluster-wide proxies might not support this behavior. As a result, the proxy settings for DVM are intentionally kept different from the usual proxy configuration in MTC.
10.2.3.1.2. Why use a TCP proxy instead of an HTTP/HTTPS proxy?
You can enable DVM by running Rsync between the source and the target cluster over an OpenShift route. Traffic is encrypted using Stunnel, a TCP proxy. The Stunnel running on the source cluster initiates a TLS connection with the target Stunnel and transfers data over an encrypted channel.
Cluster-wide HTTP/HTTPS proxies in OpenShift are usually configured in man-in-the-middle mode where they negotiate their own TLS session with the outside servers. However, this does not work with Stunnel. Stunnel requires that its TLS session be untouched by the proxy, essentially making the proxy a transparent tunnel which simply forwards the TCP connection as-is. Therefore, you must use a TCP proxy.
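For illustration only, a client-side Stunnel service that tunnels through an intermediate proxy might be configured as follows. All hostnames and ports here are hypothetical, and in practice the migration controller generates the Stunnel configuration; this sketch only shows why a CONNECT-style tunnel leaves the TLS session intact:

```ini
; Minimal client-side Stunnel sketch -- hostnames and ports are hypothetical.
client = yes

[rsync]
; Local port that Rsync connects to in cleartext.
accept = 127.0.0.1:2222
; Intermediate TCP proxy that supports CONNECT tunneling.
connect = proxy.example.com:3128
; Ask the proxy to open a raw tunnel to the remote Stunnel route;
; the TLS session passes through the proxy untouched.
protocol = connect
protocolHost = dvm-route.apps.target.example.com:443
```

Because the proxy only forwards bytes after the CONNECT handshake, the Stunnel instances on both clusters still negotiate TLS end to end.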
10.2.3.1.3. Known issue
Migration fails with error Upgrade request required

The migration controller uses the SPDY protocol to execute commands within remote pods. If the remote cluster is behind a proxy or a firewall that does not support the SPDY protocol, the migration controller fails to execute remote commands. The migration fails with the error message Upgrade request required.

In addition to supporting the SPDY protocol, the proxy or firewall also must pass the Upgrade HTTP header to the API server. If the Upgrade header is blocked by the proxy or firewall, the migration fails with the error message Upgrade request required. Workaround: Use a proxy that supports the SPDY protocol and forwards the Upgrade HTTP header.
10.2.3.2. Tuning network policies for migrations
OpenShift supports restricting traffic to or from pods using NetworkPolicy or EgressFirewalls based on the network plugin used by the cluster. If any of the source namespaces involved in a migration use such mechanisms to restrict network traffic to pods, the restrictions might inadvertently stop traffic to Rsync pods during migration.
Rsync pods running on both the source and the target clusters must connect to each other over an OpenShift Route. Existing NetworkPolicy or EgressNetworkPolicy objects can be configured to automatically exempt Rsync pods from these traffic restrictions.
10.2.3.2.1. NetworkPolicy configuration
10.2.3.2.1.1. Egress traffic from Rsync pods
You can use the unique labels of Rsync pods to allow egress traffic to pass from them, if the NetworkPolicy configuration in one or more of the involved namespaces blocks such traffic:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-all-egress-from-rsync-pods
spec:
  podSelector:
    matchLabels:
      owner: directvolumemigration
      app: directvolumemigration-rsync-transfer
  egress:
  - {}
  policyTypes:
  - Egress
10.2.3.2.1.2. Ingress traffic to Rsync pods
You can use the unique labels of Rsync pods to allow ingress traffic to pass to them, if the NetworkPolicy configuration in one or more of the involved namespaces blocks such traffic:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-all-ingress-to-rsync-pods
spec:
  podSelector:
    matchLabels:
      owner: directvolumemigration
      app: directvolumemigration-rsync-transfer
  ingress:
  - {}
  policyTypes:
  - Ingress
10.2.3.2.2. EgressNetworkPolicy configuration
The EgressNetworkPolicy object, or Egress Firewall, is an OpenShift construct designed to block egress traffic leaving the cluster.

Unlike the NetworkPolicy object, the Egress Firewall works at a project level because it applies to all pods in the project. Therefore, the unique labels of Rsync pods do not exempt only Rsync pods from the restrictions.
Based on which cluster the Egress Firewall is present in, you can add the CIDR range of the other cluster to allow egress traffic between the two:
apiVersion: network.openshift.io/v1
kind: EgressNetworkPolicy
metadata:
  name: test-egress-policy
  namespace: <namespace>
spec:
  egress:
  - to:
      cidrSelector: <cidr_of_source_or_target_cluster>
    type: Allow
10.2.3.2.3. Choosing alternate endpoints for data transfer
By default, DVM uses an OpenShift Container Platform route as an endpoint to transfer PV data to destination clusters. You can choose another type of supported endpoint, if cluster topologies allow.
For each cluster, you can configure an endpoint by setting the rsync_endpoint_type variable in the MigrationController CR:
apiVersion: migration.openshift.io/v1alpha1
kind: MigrationController
metadata:
  name: migration-controller
  namespace: openshift-migration
spec:
  [...]
  rsync_endpoint_type: [NodePort|ClusterIP|Route]
10.2.3.2.4. Configuring supplemental groups for Rsync pods
When your PVCs use shared storage, you can configure access to that storage by adding supplemental groups to the Rsync pod definitions so that the pods allow access:
| Variable | Type | Default | Description |
|---|---|---|---|
| src_supplemental_groups | string | Not set | Comma-separated list of supplemental groups for source Rsync pods |
| target_supplemental_groups | string | Not set | Comma-separated list of supplemental groups for target Rsync pods |
Example usage
The MigrationController CR can be updated to set values for these supplemental groups:

spec:
  src_supplemental_groups: "1000,2000"
  target_supplemental_groups: "2000,3000"
10.2.3.3. Configuring proxies
Prerequisites
- You must be logged in as a user with cluster-admin privileges on all clusters.
Procedure
Get the MigrationController CR manifest:

$ oc get migrationcontroller <migration_controller> -n openshift-migration

Update the proxy parameters:

apiVersion: migration.openshift.io/v1alpha1
kind: MigrationController
metadata:
  name: <migration_controller>
  namespace: openshift-migration
...
spec:
  stunnel_tcp_proxy: http://<username>:<password>@<ip>:<port> 1
  noProxy: example.com 2

1 Stunnel proxy URL for direct volume migration.
2 Comma-separated list of destination domain names, domains, IP addresses, or other network CIDRs to exclude proxying.

Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com, but not y.com. Use * to bypass the proxy for all destinations. If you scale up workers that are not included in the network defined by the networking.machineNetwork[].cidr field from the installation configuration, you must add them to this list to prevent connection issues.

This field is ignored if neither the httpProxy nor the httpsProxy field is set.

Save the manifest as migration-controller.yaml.

Apply the updated manifest:

$ oc replace -f migration-controller.yaml -n openshift-migration
10.2.4. Migrating an application by using the MTC API
You can migrate an application from the command line by using the Migration Toolkit for Containers (MTC) API.
Procedure
Create a MigCluster CR manifest for the host cluster:

$ cat << EOF | oc apply -f -
apiVersion: migration.openshift.io/v1alpha1
kind: MigCluster
metadata:
  name: <host_cluster>
  namespace: openshift-migration
spec:
  isHostCluster: true
EOF

Create a Secret object manifest for each remote cluster:

$ cat << EOF | oc apply -f -
apiVersion: v1
kind: Secret
metadata:
  name: <cluster_secret>
  namespace: openshift-config
type: Opaque
data:
  saToken: <sa_token> 1
EOF

1 Specify the base64-encoded migration-controller service account (SA) token of the remote cluster. You can obtain the token by running the following command:

$ oc sa get-token migration-controller -n openshift-migration | base64 -w 0

Create a MigCluster CR manifest for each remote cluster:

$ cat << EOF | oc apply -f -
apiVersion: migration.openshift.io/v1alpha1
kind: MigCluster
metadata:
  name: <remote_cluster> 1
  namespace: openshift-migration
spec:
  exposedRegistryPath: <exposed_registry_route> 2
  insecure: false 3
  isHostCluster: false
  serviceAccountSecretRef:
    name: <remote_cluster_secret> 4
    namespace: openshift-config
  url: <remote_cluster_url> 5
EOF

1 Specify the Cluster CR of the remote cluster.
2 Optional: For direct image migration, specify the exposed registry route.
3 SSL verification is enabled if false. CA certificates are not required or checked if true.
4 Specify the Secret object of the remote cluster.
5 Specify the URL of the remote cluster.

Verify that all clusters are in a Ready state:

$ oc describe MigCluster <cluster>

Create a Secret object manifest for the replication repository:

$ cat << EOF | oc apply -f -
apiVersion: v1
kind: Secret
metadata:
  namespace: openshift-config
  name: <migstorage_creds>
type: Opaque
data:
  aws-access-key-id: <key_id_base64> 1
  aws-secret-access-key: <secret_key_base64> 2
EOF

AWS credentials are base64-encoded by default. For other storage providers, you must encode your credentials by running the following command with each key:

$ echo -n "<key>" | base64 -w 0

1 2 Specify the key ID or the secret key. Both keys must be base64-encoded.

Create a MigStorage CR manifest for the replication repository:

$ cat << EOF | oc apply -f -
apiVersion: migration.openshift.io/v1alpha1
kind: MigStorage
metadata:
  name: <migstorage>
  namespace: openshift-migration
spec:
  backupStorageConfig:
    awsBucketName: <bucket> 1
    credsSecretRef:
      name: <storage_secret> 2
      namespace: openshift-config
  backupStorageProvider: <storage_provider> 3
  volumeSnapshotConfig:
    credsSecretRef:
      name: <storage_secret> 4
      namespace: openshift-config
  volumeSnapshotProvider: <storage_provider> 5
EOF

1 Specify the bucket name.
2 Specify the Secrets CR of the object storage. You must ensure that the credentials stored in the Secrets CR of the object storage are correct.
3 Specify the storage provider.
4 Optional: If you are copying data by using snapshots, specify the Secrets CR of the object storage. You must ensure that the credentials stored in the Secrets CR of the object storage are correct.
5 Optional: If you are copying data by using snapshots, specify the storage provider.

Verify that the MigStorage CR is in a Ready state:

$ oc describe migstorage <migstorage>

Create a MigPlan CR manifest:

$ cat << EOF | oc apply -f -
apiVersion: migration.openshift.io/v1alpha1
kind: MigPlan
metadata:
  name: <migplan>
  namespace: openshift-migration
spec:
  destMigClusterRef:
    name: <host_cluster>
    namespace: openshift-migration
  indirectImageMigration: true 1
  indirectVolumeMigration: true 2
  migStorageRef:
    name: <migstorage> 3
    namespace: openshift-migration
  namespaces:
  - <source_namespace_1> 4
  - <source_namespace_2>
  - <source_namespace_3>:<destination_namespace> 5
  srcMigClusterRef:
    name: <remote_cluster> 6
    namespace: openshift-migration
EOF

1 Direct image migration is enabled if false.
2 Direct volume migration is enabled if false.
3 Specify the name of the MigStorage CR instance.
4 Specify one or more source namespaces. By default, the destination namespace has the same name.
5 Specify a destination namespace if it is different from the source namespace.
6 Specify the name of the source cluster MigCluster instance.

Verify that the MigPlan instance is in a Ready state:

$ oc describe migplan <migplan> -n openshift-migration

Create a MigMigration CR manifest to start the migration defined in the MigPlan instance:

$ cat << EOF | oc apply -f -
apiVersion: migration.openshift.io/v1alpha1
kind: MigMigration
metadata:
  name: <migmigration>
  namespace: openshift-migration
spec:
  migPlanRef:
    name: <migplan> 1
    namespace: openshift-migration
  quiescePods: true 2
  stage: false 3
  rollback: false 4
EOF

1 Specify the name of the MigPlan CR instance.
2 The pods on the source cluster are stopped before migration if true.
3 A stage migration, which copies most of the data without stopping the application, is performed if true.
4 A completed migration is rolled back if true.

Verify the migration by watching the MigMigration CR progress:

$ oc watch migmigration <migmigration> -n openshift-migration

The output resembles the following:
Example output
Name:         c8b034c0-6567-11eb-9a4f-0bc004db0fbc
Namespace:    openshift-migration
Labels:       migration.openshift.io/migplan-name=django
Annotations:  openshift.io/touch: e99f9083-6567-11eb-8420-0a580a81020c
API Version:  migration.openshift.io/v1alpha1
Kind:         MigMigration
...
Spec:
  Mig Plan Ref:
    Name:       migplan
    Namespace:  openshift-migration
  Stage:        false
Status:
  Conditions:
    Category:              Advisory
    Last Transition Time:  2021-02-02T15:04:09Z
    Message:               Step: 19/47
    Reason:                InitialBackupCreated
    Status:                True
    Type:                  Running
    Category:              Required
    Last Transition Time:  2021-02-02T15:03:19Z
    Message:               The migration is ready.
    Status:                True
    Type:                  Ready
    Category:              Required
    Durable:               true
    Last Transition Time:  2021-02-02T15:04:05Z
    Message:               The migration registries are healthy.
    Status:                True
    Type:                  RegistriesHealthy
  Itinerary:               Final
  Observed Digest:         7fae9d21f15979c71ddc7dd075cb97061895caac5b936d92fae967019ab616d5
  Phase:                   InitialBackupCreated
  Pipeline:
    Completed:             2021-02-02T15:04:07Z
    Message:               Completed
    Name:                  Prepare
    Started:               2021-02-02T15:03:18Z
    Message:               Waiting for initial Velero backup to complete.
    Name:                  Backup
    Phase:                 InitialBackupCreated
    Progress:
      Backup openshift-migration/c8b034c0-6567-11eb-9a4f-0bc004db0fbc-wpc44: 0 out of estimated total of 0 objects backed up (5s)
    Started:               2021-02-02T15:04:07Z
    Message:               Not started
    Name:                  StageBackup
    Message:               Not started
    Name:                  StageRestore
    Message:               Not started
    Name:                  DirectImage
    Message:               Not started
    Name:                  DirectVolume
    Message:               Not started
    Name:                  Restore
    Message:               Not started
    Name:                  Cleanup
  Start Timestamp:         2021-02-02T15:03:18Z
Events:
  Type    Reason   Age                 From                     Message
  ----    ------   ----                ----                     -------
  Normal  Running  57s                 migmigration_controller  Step: 2/47
  Normal  Running  57s                 migmigration_controller  Step: 3/47
  Normal  Running  57s (x3 over 57s)   migmigration_controller  Step: 4/47
  Normal  Running  54s                 migmigration_controller  Step: 5/47
  Normal  Running  54s                 migmigration_controller  Step: 6/47
  Normal  Running  52s (x2 over 53s)   migmigration_controller  Step: 7/47
  Normal  Running  51s (x2 over 51s)   migmigration_controller  Step: 8/47
  Normal  Ready    50s (x12 over 57s)  migmigration_controller  The migration is ready.
  Normal  Running  50s                 migmigration_controller  Step: 9/47
  Normal  Running  50s                 migmigration_controller  Step: 10/47
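The credential-encoding step in the procedure above can be checked locally; encoding with `echo -n` (no trailing newline) and decoding must round-trip exactly. The key below is a documentation placeholder, not a real credential:

```shell
# Hypothetical placeholder credential -- not a real key.
KEY="AKIAIOSFODNN7EXAMPLE"

# Encode without a trailing newline, as required for Secret data fields.
ENCODED=$(echo -n "$KEY" | base64 -w 0)
echo "$ENCODED"

# Decoding restores the original value exactly.
echo -n "$ENCODED" | base64 -d
```

If `echo` (without `-n`) were used instead, the encoded value would include a newline and the service account token or storage key would be rejected.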
10.2.5. State migration
You can perform repeatable, state-only migrations by using Migration Toolkit for Containers (MTC) to migrate persistent volume claims (PVCs) that constitute an application’s state. You migrate specified PVCs by excluding other PVCs from the migration plan. You can map the PVCs to ensure that the source and the target PVCs are synchronized. Persistent volume (PV) data is copied to the target cluster. The PV references are not moved, and the application pods continue to run on the source cluster.
State migration is specifically designed to be used in conjunction with external CD mechanisms, such as OpenShift GitOps. You can migrate application manifests by using GitOps while migrating the state by using MTC.
If you have a CI/CD pipeline, you can migrate stateless components by deploying them on the target cluster. Then you can migrate stateful components by using MTC.
You can perform a state migration between clusters or within the same cluster.
State migration migrates only the components that constitute an application’s state. If you want to migrate an entire namespace, use stage or cutover migration.
Prerequisites
- The state of the application on the source cluster is persisted in PersistentVolumes provisioned through PersistentVolumeClaims.
- The manifests of the application are available in a central repository that is accessible from both the source and the target clusters.
Procedure
Migrate persistent volume data from the source to the target cluster.
You can perform this step as many times as needed. The source application continues running.
Quiesce the source application.
You can do this by setting the replicas of workload resources to 0, either directly on the source cluster or by updating the manifests in GitHub and re-syncing the Argo CD application.

Clone application manifests to the target cluster.
You can use Argo CD to clone the application manifests to the target cluster.
Migrate the remaining volume data from the source to the target cluster.
Migrate any new data created by the application during the state migration process by performing a final data migration.
- If the cloned application is in a quiesced state, unquiesce it.
- Switch the DNS record to the target cluster to re-direct user traffic to the migrated application.
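The quiescing step above can be sketched as a manifest change committed to the manifest repository. The workload name, namespace, and image are hypothetical:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-app          # hypothetical workload
  namespace: example-ns
spec:
  replicas: 0                # scale to zero to quiesce the application on the source cluster
  selector:
    matchLabels:
      app: example-app
  template:
    metadata:
      labels:
        app: example-app
    spec:
      containers:
      - name: app
        image: quay.io/example/app:latest   # hypothetical image
```

Re-syncing the Argo CD application after this commit scales the source workload down without deleting its PVCs, so the final data migration sees a consistent volume.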
MTC 1.6 cannot quiesce applications automatically when performing state migration. It can only migrate PV data. Therefore, you must use your CD mechanisms for quiescing or unquiescing applications.
MTC 1.7 introduces explicit Stage and Cutover flows. You can use staging to perform initial data transfers as many times as needed. Then you can perform a cutover, in which the source applications are quiesced automatically.
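Under MTC 1.7, the stage and cutover flows described above can be sketched as two MigMigration CRs that reference the same plan and differ only in the stage and quiescePods settings. The CR and plan names here are hypothetical:

```yaml
# Repeatable stage migration: copies data without stopping the application.
apiVersion: migration.openshift.io/v1alpha1
kind: MigMigration
metadata:
  name: state-stage-1               # hypothetical name
  namespace: openshift-migration
spec:
  migPlanRef:
    name: state-migplan             # hypothetical MigPlan name
    namespace: openshift-migration
  stage: true
  quiescePods: false
---
# Final cutover: the source application is quiesced before the last transfer.
apiVersion: migration.openshift.io/v1alpha1
kind: MigMigration
metadata:
  name: state-cutover
  namespace: openshift-migration
spec:
  migPlanRef:
    name: state-migplan
    namespace: openshift-migration
  stage: false
  quiescePods: true
```

You can apply the stage CR repeatedly to shrink the data delta, then apply the cutover CR once.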
Additional resources
- See Excluding PVCs from migration to select PVCs for state migration.
- See Mapping PVCs to migrate source PV data to provisioned PVCs on the destination cluster.
- See Migrating Kubernetes objects to migrate the Kubernetes objects that constitute an application’s state.
10.3. Migration hooks
You can add up to four migration hooks to a single migration plan, with each hook running at a different phase of the migration. Migration hooks perform tasks such as customizing application quiescence, manually migrating unsupported data types, and updating applications after migration.
A migration hook runs on a source or a target cluster at one of the following migration steps:
- PreBackup: Before resources are backed up on the source cluster.
- PostBackup: After resources are backed up on the source cluster.
- PreRestore: Before resources are restored on the target cluster.
- PostRestore: After resources are restored on the target cluster.
You can create a hook by creating an Ansible playbook that runs with the default Ansible image or with a custom hook container.
Ansible playbook
The Ansible playbook is mounted on a hook container as a config map. The hook container runs as a job, using the cluster, service account, and namespace specified in the MigPlan custom resource.

The default Ansible runtime image is registry.redhat.io/rhmtc/openshift-migration-hook-runner-rhel7:1.8. This image is based on the Ansible Runner image and includes python-openshift for Ansible Kubernetes resources and an updated oc binary.
Custom hook container
You can use a custom hook container instead of the default Ansible image.
10.3.1. Writing an Ansible playbook for a migration hook
You can write an Ansible playbook to use as a migration hook. The hook is added to a migration plan by using the MTC web console or by specifying values for the spec.hooks parameters in the MigPlan custom resource manifest.

The Ansible playbook is mounted onto a hook container as a config map. The hook container runs as a job, using the cluster, service account, and namespace specified in the MigPlan CR.
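As a sketch (the hook name is hypothetical, and the exact field layout should be checked against your MTC version), a spec.hooks entry in the MigPlan CR might look like the following:

```yaml
spec:
  hooks:
  - executionNamespace: openshift-migration   # namespace in which the hook job runs
    phase: PreBackup                          # PreBackup, PostBackup, PreRestore, or PostRestore
    reference:
      name: prebackup-hook                    # hypothetical hook CR name
      namespace: openshift-migration
    serviceAccount: migration-controller      # service account that the hook job uses
```

Each of the four phases can carry at most one hook, which is how a single plan reaches the limit of four hooks.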
10.3.1.1. Ansible modules
You can use the Ansible shell module to run oc commands.
Example shell module
- hosts: localhost
  gather_facts: false
  tasks:
  - name: get pod name
    shell: oc get po --all-namespaces
You can use kubernetes.core modules, such as k8s_info, to interact with Kubernetes resources.
Example k8s_info module
- hosts: localhost
  gather_facts: false
  tasks:
  - name: Get pod
    k8s_info:
      kind: pods
      api: v1
      namespace: openshift-migration
      name: "{{ lookup( 'env', 'HOSTNAME') }}"
    register: pods
  - name: Print pod name
    debug:
      msg: "{{ pods.resources[0].metadata.name }}"
You can use the fail module to produce a non-zero exit status in cases where a non-zero exit status would not otherwise be produced, ensuring that the success or failure of a hook is detected. Hooks run as jobs, and the success or failure status of a hook is based on the exit status of the job container.
Example fail module
- hosts: localhost
  gather_facts: false
  tasks:
  - name: Set a boolean
    set_fact:
      do_fail: true
  - name: "fail"
    fail:
      msg: "Cause a failure"
    when: do_fail
10.3.1.2. Environment variables
The MigPlan CR name and the migration namespaces are passed as environment variables to the hook container. These variables are accessed by using the lookup plugin.
Example environment variables
- hosts: localhost
  gather_facts: false
  tasks:
  - set_fact:
      namespaces: "{{ (lookup( 'env', 'MIGRATION_NAMESPACES')).split(',') }}"
  - debug:
      msg: "{{ item }}"
    with_items: "{{ namespaces }}"
  - debug:
      msg: "{{ lookup( 'env', 'MIGRATION_PLAN_NAME') }}"
10.4. Migration plan options
You can exclude, edit, and map components in the MigPlan custom resource (CR).
10.4.1. Excluding resources
You can exclude resources, for example, image streams, persistent volumes (PVs), or subscriptions, from a Migration Toolkit for Containers (MTC) migration plan to reduce the resource load for migration or to migrate images or PVs with a different tool.
By default, the MTC excludes service catalog resources and Operator Lifecycle Manager (OLM) resources from migration. These resources are part of the service catalog API group and the OLM API group, neither of which is supported for migration at this time.
Procedure
Edit the MigrationController custom resource manifest:

$ oc edit migrationcontroller <migration_controller> -n openshift-migration

Update the spec section by adding parameters to exclude specific resources. For those resources that do not have their own exclusion parameters, add the additional_excluded_resources parameter:

apiVersion: migration.openshift.io/v1alpha1
kind: MigrationController
metadata:
  name: migration-controller
  namespace: openshift-migration
spec:
  disable_image_migration: true 1
  disable_pv_migration: true 2
  additional_excluded_resources: 3
  - resource1
  - resource2
  ...

1 Add disable_image_migration: true to exclude image streams from the migration. imagestreams is added to the excluded_resources list in main.yml when the MigrationController pod restarts.
2 Add disable_pv_migration: true to exclude PVs from the migration plan. persistentvolumes and persistentvolumeclaims are added to the excluded_resources list in main.yml when the MigrationController pod restarts. Disabling PV migration also disables PV discovery when you create the migration plan.
3 You can add OpenShift Container Platform resources that you want to exclude to the additional_excluded_resources list.
- Wait two minutes for the MigrationController pod to restart so that the changes are applied.
MigrationController Verify that the resource is excluded:
$ oc get deployment -n openshift-migration migration-controller -o yaml | grep EXCLUDED_RESOURCES -A1

The output contains the excluded resources:
Example output
name: EXCLUDED_RESOURCES
value: resource1,resource2,imagetags,templateinstances,clusterserviceversions,packagemanifests,subscriptions,servicebrokers,servicebindings,serviceclasses,serviceinstances,serviceplans,imagestreams,persistentvolumes,persistentvolumeclaims
10.4.2. Mapping namespaces
If you map namespaces in the MigPlan custom resource (CR), you must ensure that the namespaces are not duplicated on the source or the destination clusters because the UID and GID ranges of the namespaces are copied during migration.
Two source namespaces mapped to the same destination namespace
spec:
  namespaces:
  - namespace_2
  - namespace_1:namespace_2
If you want the source namespace to be mapped to a namespace of the same name, you do not need to create a mapping. By default, a source namespace and a target namespace have the same name.
Incorrect namespace mapping
spec:
  namespaces:
  - namespace_1:namespace_1
Correct namespace reference
spec:
  namespaces:
  - namespace_1
10.4.3. Excluding persistent volume claims
You select persistent volume claims (PVCs) for state migration by excluding the PVCs that you do not want to migrate. You exclude PVCs by setting the spec.persistentVolumes.pvc.selection.action parameter of the MigPlan custom resource (CR) after the persistent volumes (PVs) have been discovered.
Prerequisites
- The MigPlan CR is in a Ready state.
Procedure
Add the spec.persistentVolumes.pvc.selection.action parameter to the MigPlan CR and set it to skip:

apiVersion: migration.openshift.io/v1alpha1
kind: MigPlan
metadata:
  name: <migplan>
  namespace: openshift-migration
spec:
...
  persistentVolumes:
  - capacity: 10Gi
    name: <pv_name>
    pvc:
    ...
    selection:
      action: skip
10.4.4. Mapping persistent volume claims
You can migrate persistent volume (PV) data from the source cluster to persistent volume claims (PVCs) that are already provisioned in the destination cluster by mapping the PVCs in the MigPlan CR. This mapping ensures that the destination PVCs of migrated applications are synchronized with the source PVCs.

You map PVCs by updating the spec.persistentVolumes.pvc.name parameter in the MigPlan CR after the PVs have been discovered.
Prerequisites
- The MigPlan CR is in a Ready state.
Procedure
Update the spec.persistentVolumes.pvc.name parameter in the MigPlan CR:

apiVersion: migration.openshift.io/v1alpha1
kind: MigPlan
metadata:
  name: <migplan>
  namespace: openshift-migration
spec:
...
  persistentVolumes:
  - capacity: 10Gi
    name: <pv_name>
    pvc:
      name: <source_pvc>:<destination_pvc> 1

1 Specify the PVC on the source cluster and the PVC on the destination cluster. If the destination PVC does not exist, it will be created. You can use this mapping to change the PVC name during migration.
10.4.5. Editing persistent volume attributes
After you create a MigPlan CR, the MigrationController CR discovers the persistent volumes (PVs). The spec.persistentVolumes block and the status.destStorageClasses block are added to the MigPlan CR.

You can edit the values in the spec.persistentVolumes.selection block. If you change values outside the spec.persistentVolumes.selection block, the values are overwritten when the MigPlan CR is reconciled by the MigrationController CR.
The default value for the spec.persistentVolumes.selection.storageClass parameter is determined by the following logic:

- If the source cluster PV is Gluster or NFS, the default is either cephfs, for accessMode: ReadWriteMany, or cephrbd, for accessMode: ReadWriteOnce.
- If the PV is neither Gluster nor NFS, or if cephfs or cephrbd are not available, the default is a storage class for the same provisioner.
- If a storage class for the same provisioner is not available, the default is the default storage class of the destination cluster.
You can change the storageClass value to the value of any name parameter in the status.destStorageClasses block of the MigPlan CR.

If the storageClass value is empty, the PV will have no storage class after migration.
Prerequisites
- The MigPlan CR is in a Ready state.
Procedure
Edit the spec.persistentVolumes.selection values in the MigPlan CR:

apiVersion: migration.openshift.io/v1alpha1
kind: MigPlan
metadata:
  name: <migplan>
  namespace: openshift-migration
spec:
  persistentVolumes:
  - capacity: 10Gi
    name: pvc-095a6559-b27f-11eb-b27f-021bddcaf6e4
    proposedCapacity: 10Gi
    pvc:
      accessModes:
      - ReadWriteMany
      hasReference: true
      name: mysql
      namespace: mysql-persistent
    selection:
      action: <copy> 1
      copyMethod: <filesystem> 2
      verify: true 3
      storageClass: <gp2> 4
      accessMode: <ReadWriteMany> 5
    storageClass: cephfs

1 Allowed values are move, copy, and skip. If only one action is supported, the default value is the supported action. If multiple actions are supported, the default value is copy.
2 Allowed values are snapshot and filesystem. Default value is filesystem.
3 The verify parameter is displayed if you select the verification option for file system copy in the MTC web console. You can set it to false.
4 You can change the default value to the value of any name parameter in the status.destStorageClasses block of the MigPlan CR. If no value is specified, the PV will have no storage class after migration.
5 Allowed values are ReadWriteOnce and ReadWriteMany. If this value is not specified, the default is the access mode of the source cluster PVC. You can only edit the access mode in the MigPlan CR. You cannot edit it by using the MTC web console.
10.4.6. Converting storage classes in the MTC web console
You can convert the storage class of a persistent volume (PV) by migrating it within the same cluster. To do so, you must create and run a migration plan in the Migration Toolkit for Containers (MTC) web console.
Prerequisites
- You must be logged in as a user with cluster-admin privileges on the cluster on which MTC is running.
- You must add the cluster to the MTC web console.
Procedure
- In the left-side navigation pane of the OpenShift Container Platform web console, click Projects.
In the list of projects, click your project.
The Project details page opens.
- Click the DeploymentConfig name. Note the name of its running pod.
- Open the YAML tab of the project. Find the PVs and note the names of their corresponding persistent volume claims (PVCs).
- In the MTC web console, click Migration plans.
- Click Add migration plan.
Enter the Plan name.
The migration plan name must contain 3 to 63 lower-case alphanumeric characters (a-z, 0-9) and must not contain spaces or underscores (_).
- From the Migration type menu, select Storage class conversion.
- From the Source cluster list, select the desired cluster for storage class conversion.
Click Next.
The Namespaces page opens.
- Select the required project.
Click Next.
The Persistent volumes page opens. The page displays the PVs in the project, all selected by default.
- For each PV, select the desired target storage class.
Click Next.
The wizard validates the new migration plan and shows that it is ready.
Click Close.
The new plan appears on the Migration plans page.
To start the conversion, click the options menu of the new plan.
Under Migrations, two options are displayed, Stage and Cutover.
Note: Cutover migration updates PVC references in the applications.
Stage migration does not update PVC references in the applications.
Select the desired option.
Depending on which option you selected, the Stage migration or Cutover migration notification appears.
Click Migrate.
Depending on which option you selected, the Stage started or Cutover started message appears.
To see the status of the current migration, click the number in the Migrations column.
The Migrations page opens.
To see more details on the current migration and monitor its progress, select the migration from the Type column.
The Migration details page opens. When the migration progresses to the DirectVolume step and the status of the step becomes Running Rsync Pods to migrate Persistent Volume data, you can click View details and see the detailed status of the copies.
- In the breadcrumb bar, click Stage or Cutover and wait for all steps to complete.
Open the PersistentVolumeClaims tab of the OpenShift Container Platform web console.
You can see new PVCs with the names of the initial PVCs but ending in new, which are using the target storage class.
- In the left-side navigation pane, click Pods. See that the pod of your project is running again.
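The plan-name rule stated in the procedure above can be checked before you open the wizard. A minimal sketch in shell, where the helper name valid_plan_name and the sample names are hypothetical, and hyphens are assumed to be allowed because MTC plan names follow Kubernetes naming conventions:

```shell
# Check a proposed migration plan name: 3 to 63 characters,
# lower-case alphanumerics (hyphens assumed allowed, Kubernetes-style),
# no spaces or underscores, and no leading or trailing hyphen.
valid_plan_name() {
  printf '%s' "$1" | grep -Eq '^[a-z0-9]([a-z0-9-]{1,61}[a-z0-9])?$' \
    && [ "${#1}" -ge 3 ] && [ "${#1}" -le 63 ]
}

valid_plan_name "storage-conversion-plan" && echo "ok"
valid_plan_name "Bad_Name" || echo "rejected"
```

If the name fails this check, the MTC web console rejects it when you try to save the plan.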
Additional resources
- For details about the move and copy actions, see MTC workflow.
- For details about the skip action, see Excluding PVCs from migration.
- For details about the file system and snapshot copy methods, see About data copy methods.
10.4.7. Performing a state migration of Kubernetes objects by using the MTC API
After you migrate all the PV data, you can use the Migration Toolkit for Containers (MTC) API to perform a one-time state migration of Kubernetes objects that constitute an application.
You do this by configuring the MigPlan custom resource (CR) to provide a list of Kubernetes resources, optionally filtered by a label selector, and then performing the migration by creating a MigMigration CR that references the MigPlan CR.

Note: Selecting Kubernetes resources is an API-only feature. You must update the MigPlan CR and create a MigMigration CR for it by using the CLI.

Note: After migration, the closed parameter of the MigPlan CR is set to true. You cannot create another MigMigration CR for this MigPlan CR.
You add Kubernetes objects to the MigPlan CR by using one of the following options:
- Adding the Kubernetes objects to the includedResources section. When the includedResources field is specified in the MigPlan CR, the plan takes a list of group-kind as input. Only resources present in the list are included in the migration.
- Adding the optional labelSelector parameter to filter the includedResources in the MigPlan. When this field is specified, only resources matching the label selector are included in the migration. For example, you can filter a list of Secret and ConfigMap resources by using the label app: frontend as a filter.
Procedure
Update the MigPlan CR to include Kubernetes resources and, optionally, to filter the included resources by adding the labelSelector parameter:

To update the MigPlan CR to include Kubernetes resources:

apiVersion: migration.openshift.io/v1alpha1
kind: MigPlan
metadata:
  name: <migplan>
  namespace: openshift-migration
spec:
  includedResources:
  - kind: <kind> 1
    group: ""
  - kind: <kind>
    group: ""
- 1
- Specify the Kubernetes object, for example, Secret or ConfigMap.
Optional: To filter the included resources by adding the labelSelector parameter:

apiVersion: migration.openshift.io/v1alpha1
kind: MigPlan
metadata:
  name: <migplan>
  namespace: openshift-migration
spec:
  includedResources:
  - kind: <kind> 1
    group: ""
  - kind: <kind>
    group: ""
  ...
  labelSelector:
    matchLabels:
      <label> 2
- 2
- Specify the label of the resources to migrate, for example, app: frontend.
Create a MigMigration CR to migrate the selected Kubernetes resources. Verify that the correct MigPlan is referenced in migPlanRef:

apiVersion: migration.openshift.io/v1alpha1
kind: MigMigration
metadata:
  generateName: <migplan>
  namespace: openshift-migration
spec:
  migPlanRef:
    name: <migplan>
    namespace: openshift-migration
  stage: false
10.5. Migration controller options
You can edit migration plan limits, enable persistent volume resizing, or enable cached Kubernetes clients in the MigrationController custom resource (CR) for large migrations and improved performance.
10.5.1. Increasing limits for large migrations
You can increase the limits on migration objects and container resources for large migrations with the Migration Toolkit for Containers (MTC).
You must test these changes before you perform a migration in a production environment.
Procedure
Edit the MigrationController custom resource (CR) manifest:

$ oc edit migrationcontroller -n openshift-migration

Update the following parameters:

...
mig_controller_limits_cpu: "1" 1
mig_controller_limits_memory: "10Gi" 2
...
mig_controller_requests_cpu: "100m" 3
mig_controller_requests_memory: "350Mi" 4
...
mig_pv_limit: 100 5
mig_pod_limit: 100 6
mig_namespace_limit: 10 7
...
- 1
- Specifies the number of CPUs available to the MigrationController CR.
- 2
- Specifies the amount of memory available to the MigrationController CR.
- 3
- Specifies the number of CPU units available for MigrationController CR requests. 100m represents 0.1 CPU units (100 * 1e-3).
- 4
- Specifies the amount of memory available for MigrationController CR requests.
- 5
- Specifies the number of persistent volumes that can be migrated.
- 6
- Specifies the number of pods that can be migrated.
- 7
- Specifies the number of namespaces that can be migrated.
Create a migration plan that uses the updated parameters to verify the changes.
If your migration plan exceeds the MigrationController CR limits, the MTC console displays a warning message when you save the migration plan.
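The millicore notation used by mig_controller_requests_cpu above converts to whole CPU units as described in callout 3. A small sketch of that arithmetic in shell, where the helper name to_cpu is hypothetical:

```shell
# Convert a Kubernetes CPU quantity such as "100m" to CPU units:
# 100m = 100 * 1e-3 = 0.1 CPU. Plain integers pass through unchanged.
to_cpu() {
  case "$1" in
    *m) printf '%s\n' "${1%m}" | awk '{ printf "%g\n", $1 / 1000 }' ;;
    *)  printf '%s\n' "$1" ;;
  esac
}

to_cpu 100m   # prints 0.1
to_cpu 1      # prints 1
```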
10.5.2. Enabling persistent volume resizing for direct volume migration
You can enable persistent volume (PV) resizing for direct volume migration to avoid running out of disk space on the destination cluster.
When the disk usage of a PV reaches a configured level, the MigrationController custom resource (CR) compares the requested storage capacity of a persistent volume claim (PVC) to its actual provisioned capacity. Then, it calculates the space required on the destination cluster.

A pv_resizing_threshold parameter determines when PV resizing is used. The default threshold is 3%. This means that PV resizing occurs when the disk usage of a PV is more than 97%.
PVC capacity is calculated according to the following criteria:
- If the requested storage capacity (spec.resources.requests.storage) of the PVC is not equal to its actual provisioned capacity (status.capacity.storage), the greater value is used.
spec.resources.requests.storage), the greater value is used.status.capacity.storage - If a PV is provisioned through a PVC and then subsequently changed so that its PV and PVC capacities no longer match, the greater value is used.
Prerequisites
- The PVCs must be attached to one or more running pods so that the MigrationController CR can execute commands.
Procedure
- Log in to the host cluster.
Enable PV resizing by patching the MigrationController CR:

$ oc patch migrationcontroller migration-controller -p '{"spec":{"enable_dvm_pv_resizing":true}}' \ 1
  --type='merge' -n openshift-migration
- 1
- Set the value to false to disable PV resizing.
Optional: Update the pv_resizing_threshold parameter to increase the threshold:

$ oc patch migrationcontroller migration-controller -p '{"spec":{"pv_resizing_threshold":41}}' \ 1
  --type='merge' -n openshift-migration
- 1
- The default value is 3.
When the threshold is exceeded, the following status message is displayed in the MigPlan CR status:

status:
  conditions:
  ...
  - category: Warn
    durable: true
    lastTransitionTime: "2021-06-17T08:57:01Z"
    message: 'Capacity of the following volumes will be automatically adjusted to avoid
      disk capacity issues in the target cluster: [pvc-b800eb7b-cf3b-11eb-a3f7-0eae3e0555f3]'
    reason: Done
    status: "False"
    type: PvCapacityAdjustmentRequired

Note: For AWS gp2 storage, this message does not appear unless the pv_resizing_threshold is 42% or greater because of the way gp2 calculates volume usage and size. (BZ#1973148)
10.5.3. Enabling cached Kubernetes clients
You can enable cached Kubernetes clients in the MigrationController custom resource (CR) for improved performance during migration.

Delegated tasks, for example, Rsync backup for direct volume migration or Velero backup and restore, however, do not show improved performance with cached clients.

Cached clients require extra memory because the MigrationController CR caches the resources required for interacting with MigCluster CRs.

You can increase the memory limits and requests of the MigrationController CR if OOMKilled errors occur after you enable cached clients.
Procedure
Enable cached clients by running the following command:
$ oc -n openshift-migration patch migrationcontroller migration-controller --type=json --patch \
  '[{ "op": "replace", "path": "/spec/mig_controller_enable_cache", "value": true}]'

Optional: Increase the MigrationController CR memory limits by running the following command:

$ oc -n openshift-migration patch migrationcontroller migration-controller --type=json --patch \
  '[{ "op": "replace", "path": "/spec/mig_controller_limits_memory", "value": <10Gi>}]'

Optional: Increase the MigrationController CR memory requests by running the following command:

$ oc -n openshift-migration patch migrationcontroller migration-controller --type=json --patch \
  '[{ "op": "replace", "path": "/spec/mig_controller_requests_memory", "value": <350Mi>}]'
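The JSON patch document passed with --patch above can be generated locally before you run oc. A sketch with a hypothetical helper, cache_patch, that parameterizes the value so the same command shape can enable or disable the cache:

```shell
# Build the JSON patch used to toggle cached clients on the
# MigrationController CR. Pass true to enable, false to disable.
cache_patch() {
  printf '[{ "op": "replace", "path": "/spec/mig_controller_enable_cache", "value": %s}]\n' "$1"
}

cache_patch true
```

The generated string can then be passed directly, for example: oc -n openshift-migration patch migrationcontroller migration-controller --type=json --patch "$(cache_patch false)".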