Chapter 3. Installing the Migration Toolkit for Containers
You can install the Migration Toolkit for Containers (MTC) on OpenShift Container Platform 4.
To install MTC on OpenShift Container Platform 3, see Installing the legacy Migration Toolkit for Containers Operator on OpenShift Container Platform 3.
By default, the MTC web console and the Migration Controller pod run on the target cluster. You can configure the Migration Controller custom resource manifest to run the MTC web console and the Migration Controller pod on a remote cluster.
After you have installed MTC, you must configure object storage to use as a replication repository.
To uninstall MTC, see Uninstalling MTC and deleting resources.
3.1. Compatibility guidelines
You must install the Migration Toolkit for Containers (MTC) Operator that is compatible with your OpenShift Container Platform version.
Definitions
- legacy platform: OpenShift Container Platform 4.5 and earlier.
- modern platform: OpenShift Container Platform 4.6 and later.
- legacy operator: the MTC Operator designed for legacy platforms.
- modern operator: the MTC Operator designed for modern platforms.
- control cluster: the cluster that runs the MTC controller and GUI.
- remote cluster: a source or destination cluster for a migration that runs Velero. The control cluster communicates with remote clusters through the Velero API to drive migrations.
You must use the compatible MTC version for migrating your OpenShift Container Platform clusters. For the migration to succeed, both your source cluster and the destination cluster must use the same version of MTC.
MTC 1.7 supports migrations from OpenShift Container Platform 3.11 to 4.8.
MTC 1.8 only supports migrations from OpenShift Container Platform 4.9 and later.
| Details | OpenShift Container Platform 3.11 | OpenShift Container Platform 4.0 to 4.5 | OpenShift Container Platform 4.6 to 4.8 | OpenShift Container Platform 4.9 or later |
|---|---|---|---|---|
| Stable MTC version | MTC v.1.7.z | MTC v.1.7.z | MTC v.1.7.z | MTC v.1.8.z |
| Installation | Legacy MTC v.1.7.z operator: Install manually with the operator.yml file. [IMPORTANT] This cluster cannot be the control cluster. | Legacy MTC v.1.7.z operator: Install manually with the operator.yml file. [IMPORTANT] This cluster cannot be the control cluster. | Install with OLM, release channel release-v1.7 | Install with OLM, release channel release-v1.8 |
Edge cases exist in which network restrictions prevent modern clusters from connecting to other clusters involved in the migration. For example, when migrating from an OpenShift Container Platform 3.11 cluster on premises to a modern OpenShift Container Platform cluster in the cloud, the modern cluster might not be able to connect to the OpenShift Container Platform 3.11 cluster.
With MTC v.1.7.z, if one of the remote clusters is unable to communicate with the control cluster because of network restrictions, use the crane tunnel-api command.
With the stable MTC release, although you should always designate the most modern cluster as the control cluster, in this specific case it is possible to designate the legacy cluster as the control cluster and push workloads to the remote cluster.
3.2. Installing the legacy Migration Toolkit for Containers Operator on OpenShift Container Platform 4.2 to 4.5
You can install the legacy Migration Toolkit for Containers Operator manually on OpenShift Container Platform versions 4.2 to 4.5.
Prerequisites
- You must be logged in as a user with cluster-admin privileges on all clusters.
- You must have access to registry.redhat.io.
- You must have podman installed.
Procedure
- Log in to registry.redhat.io with your Red Hat Customer Portal credentials:

  $ podman login registry.redhat.io

- Download the operator.yml file by entering the following command:

  $ podman cp $(podman create registry.redhat.io/rhmtc/openshift-migration-legacy-rhel8-operator:v1.7):/operator.yml ./

- Download the controller.yml file by entering the following command:

  $ podman cp $(podman create registry.redhat.io/rhmtc/openshift-migration-legacy-rhel8-operator:v1.7):/controller.yml ./

- Log in to your OpenShift Container Platform source cluster.
- Verify that the cluster can authenticate with registry.redhat.io:

  $ oc run test --image registry.redhat.io/ubi8 --command sleep infinity

- Create the Migration Toolkit for Containers Operator object:

  $ oc create -f operator.yml

  You can ignore Error from server (AlreadyExists) messages. They are caused by the Migration Toolkit for Containers Operator creating resources for earlier versions of OpenShift Container Platform 4 that are provided in later releases.

- Create the MigrationController object:

  $ oc create -f controller.yml

- Verify that the MTC pods are running:

  $ oc get pods -n openshift-migration
3.3. Installing the Migration Toolkit for Containers Operator on OpenShift Container Platform 4.11
You install the Migration Toolkit for Containers Operator on OpenShift Container Platform 4.11 by using the Operator Lifecycle Manager.
Prerequisites
- You must be logged in as a user with cluster-admin privileges on all clusters.
Procedure
- In the OpenShift Container Platform web console, click Operators → OperatorHub.
- Use the Filter by keyword field to find the Migration Toolkit for Containers Operator.
- Select the Migration Toolkit for Containers Operator and click Install.
- Click Install.

  On the Installed Operators page, the Migration Toolkit for Containers Operator appears in the openshift-migration project with the status Succeeded.
- Click Migration Toolkit for Containers Operator.
- Under Provided APIs, locate the Migration Controller tile, and click Create Instance.
- Click Create.
- Click Workloads → Pods to verify that the MTC pods are running.
3.4. Proxy configuration
For OpenShift Container Platform 4.1 and earlier versions, you must configure proxies in the MigrationController custom resource (CR) manifest after you install the Migration Toolkit for Containers Operator because these versions do not support a cluster-wide proxy object.
For OpenShift Container Platform 4.2 to 4.11, the Migration Toolkit for Containers (MTC) inherits the cluster-wide proxy settings. You can change the proxy parameters if you want to override the cluster-wide proxy settings.
3.4.1. Direct volume migration
Direct Volume Migration (DVM) was introduced in MTC 1.4.2. DVM supports only one proxy. The source cluster cannot access the route of the target cluster if the target cluster is also behind a proxy.
If you want to perform a DVM from a source cluster behind a proxy, you must configure a TCP proxy that works at the transport layer and forwards the SSL connections transparently without decrypting and re-encrypting them with its own SSL certificates. A Stunnel proxy is an example of such a proxy.
3.4.1.1. TCP proxy setup for DVM
You can set up a direct connection between the source and the target cluster through a TCP proxy and configure the stunnel_tcp_proxy variable in the MigrationController CR to use the proxy:
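A minimal sketch of such a MigrationController manifest; the placeholder credentials and the http://<username>:<password>@<proxy_ip>:<proxy_port> URL format are assumptions to adapt to your proxy:

apiVersion: migration.openshift.io/v1alpha1
kind: MigrationController
metadata:
  name: migration-controller
  namespace: openshift-migration
spec:
  # [...] other MigrationController settings
  stunnel_tcp_proxy: http://<username>:<password>@<proxy_ip>:<proxy_port>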
					
Direct volume migration (DVM) supports only basic authentication for the proxy. Moreover, DVM works only from behind proxies that can tunnel a TCP connection transparently. HTTP/HTTPS proxies in man-in-the-middle mode do not work. The existing cluster-wide proxies might not support this behavior. As a result, the proxy settings for DVM are intentionally kept different from the usual proxy configuration in MTC.
3.4.1.2. Why use a TCP proxy instead of an HTTP/HTTPS proxy?
You can enable DVM by running Rsync between the source and the target cluster over an OpenShift route. Traffic is encrypted using Stunnel, a TCP proxy. The Stunnel running on the source cluster initiates a TLS connection with the target Stunnel and transfers data over an encrypted channel.
Cluster-wide HTTP/HTTPS proxies in OpenShift are usually configured in man-in-the-middle mode where they negotiate their own TLS session with the outside servers. However, this does not work with Stunnel. Stunnel requires that its TLS session be untouched by the proxy, essentially making the proxy a transparent tunnel which simply forwards the TCP connection as-is. Therefore, you must use a TCP proxy.
3.4.1.3. Known issue
Migration fails with error Upgrade request required
The migration controller uses the SPDY protocol to execute commands within remote pods. If the remote cluster is behind a proxy or a firewall that does not support the SPDY protocol, the migration controller fails to execute remote commands. The migration fails with the error message Upgrade request required. Workaround: Use a proxy that supports the SPDY protocol.
In addition to supporting the SPDY protocol, the proxy or firewall also must pass the Upgrade HTTP header to the API server. The client uses this header to open a websocket connection with the API server. If the Upgrade header is blocked by the proxy or firewall, the migration fails with the error message Upgrade request required. Workaround: Ensure that the proxy forwards the Upgrade header.
3.4.2. Tuning network policies for migrations
OpenShift supports restricting traffic to or from pods using NetworkPolicy or EgressFirewalls based on the network plugin used by the cluster. If any of the source namespaces involved in a migration use such mechanisms to restrict network traffic to pods, the restrictions might inadvertently stop traffic to Rsync pods during migration.
Rsync pods running on both the source and the target clusters must connect to each other over an OpenShift Route. Existing NetworkPolicy or EgressNetworkPolicy objects can be configured to automatically exempt Rsync pods from these traffic restrictions.
3.4.2.1. NetworkPolicy configuration
3.4.2.1.1. Egress traffic from Rsync pods
You can use the unique labels of Rsync pods to allow egress traffic to pass from them if the NetworkPolicy configuration in the source or destination namespaces blocks this type of traffic. The following policy allows all egress traffic from Rsync pods in the namespace:
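A minimal sketch of such a policy; the pod labels shown (owner: directvolumemigration, app: directvolumemigration-rsync-transfer) are the labels typically applied to DVM Rsync transfer pods and are an assumption to verify against your MTC version:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-all-egress-from-rsync-pods
spec:
  podSelector:
    matchLabels:
      owner: directvolumemigration
      app: directvolumemigration-rsync-transfer
  egress:
  - {}
  policyTypes:
  - Egress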
						
3.4.2.1.2. Ingress traffic to Rsync pods
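Ingress traffic to the Rsync transfer pods must likewise be allowed if a NetworkPolicy in the source or destination namespace restricts ingress, because the pods on both clusters connect to each other over an OpenShift route. A minimal sketch, mirroring the egress example above and using the same assumed pod labels:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-all-ingress-to-rsync-pods
spec:
  podSelector:
    matchLabels:
      owner: directvolumemigration
      app: directvolumemigration-rsync-transfer
  ingress:
  - {}
  policyTypes:
  - Ingress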
3.4.2.2. EgressNetworkPolicy configuration
The EgressNetworkPolicy object or Egress Firewalls are OpenShift constructs designed to block egress traffic leaving the cluster.
Unlike the NetworkPolicy object, the Egress Firewall works at a project level because it applies to all pods in the namespace. Therefore, the unique labels of Rsync pods do not exempt only Rsync pods from the restrictions. However, you can add the CIDR ranges of the source or target cluster to the Allow rule of the policy so that a direct connection can be set up between the two clusters.
Based on which cluster the Egress Firewall is present in, you can add the CIDR range of the other cluster to allow egress traffic between the two:
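A minimal sketch of such a rule; the policy name, namespace, and CIDR value are placeholders, and the Allow rule follows the description above:

apiVersion: network.openshift.io/v1
kind: EgressNetworkPolicy
metadata:
  name: allow-other-cluster-egress
  namespace: <namespace>
spec:
  egress:
  - type: Allow
    to:
      cidrSelector: <cidr_of_source_or_target_cluster>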
3.4.2.3. Choosing alternate endpoints for data transfer
By default, DVM uses an OpenShift Container Platform route as an endpoint to transfer PV data to destination clusters. You can choose another type of supported endpoint, if cluster topologies allow.
For each cluster, you can configure an endpoint by setting the rsync_endpoint_type variable on the appropriate destination cluster in your MigrationController CR:
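A minimal sketch; the allowed values shown for rsync_endpoint_type (NodePort, ClusterIP, Route) are an assumption to verify against your MTC version:

apiVersion: migration.openshift.io/v1alpha1
kind: MigrationController
metadata:
  name: migration-controller
  namespace: openshift-migration
spec:
  # [...] other MigrationController settings
  rsync_endpoint_type: [NodePort|ClusterIP|Route]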
					
3.4.2.4. Configuring supplemental groups for Rsync pods
When your PVCs use shared storage, you can configure access to that storage by adding supplemental groups to the Rsync pod definitions so that the pods are allowed access:
| Variable | Type | Default | Description |
|---|---|---|---|
| src_supplemental_groups | string | Not set | Comma-separated list of supplemental groups for source Rsync pods |
| target_supplemental_groups | string | Not set | Comma-separated list of supplemental groups for target Rsync pods |
Example usage
The MigrationController CR can be updated to set values for these supplemental groups:

spec:
  src_supplemental_groups: "1000,2000"
  target_supplemental_groups: "2000,3000"

3.4.3. Configuring proxies
Prerequisites
- You must be logged in as a user with cluster-admin privileges on all clusters.

Procedure
- Get the MigrationController CR manifest:

  $ oc get migrationcontroller <migration_controller> -n openshift-migration

- Update the proxy parameters in the spec section of the manifest. See the example manifest after this procedure.

  For the noProxy value, preface a domain with . to match subdomains only. For example, .y.com matches x.y.com, but not y.com. Use * to bypass the proxy for all destinations. If you scale up workers that are not included in the network defined by the networking.machineNetwork[].cidr field from the installation configuration, you must add them to this list to prevent connection issues.

  The noProxy field is ignored if neither the httpProxy nor the httpsProxy field is set.

- Save the manifest as migration-controller.yaml.
- Apply the updated manifest:

  $ oc replace -f migration-controller.yaml -n openshift-migration
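A minimal sketch of a MigrationController manifest with the proxy parameters described above; the placeholder values and the particular combination of fields shown are assumptions to adapt to your environment:

apiVersion: migration.openshift.io/v1alpha1
kind: MigrationController
metadata:
  name: <migration_controller>
  namespace: openshift-migration
spec:
  # [...] other MigrationController settings
  httpProxy: http://<proxy_ip>:<proxy_port>
  httpsProxy: https://<proxy_ip>:<proxy_port>
  noProxy: example.com
  stunnel_tcp_proxy: http://<username>:<password>@<proxy_ip>:<proxy_port>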
For more information, see Configuring the cluster-wide proxy.
3.4.4. Running Rsync as either root or non-root
This section applies only when you are working with the OpenShift API, not the web console.
OpenShift environments have the PodSecurityAdmission controller enabled by default. This controller requires cluster administrators to enforce Pod Security Standards by means of namespace labels. All workloads in the cluster are expected to run one of the following Pod Security Standard levels: Privileged, Baseline, or Restricted. Every cluster has its own default policy set.
To guarantee successful data transfer in all environments, Migration Toolkit for Containers (MTC) 1.7.5 introduced changes in Rsync pods, including running Rsync pods as a non-root user by default. This ensures that data transfer is possible even for workloads that do not necessarily require higher privileges. This change was made because it is best to run workloads with the lowest level of privileges possible.
3.4.4.1. Manually overriding default non-root operation for data transfer
Although running Rsync pods as a non-root user works in most cases, data transfer might fail when you run workloads as the root user on the source side. MTC provides two ways to manually override the default non-root operation for data transfer:
- Configure all migrations to run Rsync pods as root on the destination cluster.
- Run an Rsync pod as root on the destination cluster per migration.
In both cases, you must set the following labels on the source side of any namespaces that are running workloads with higher privileges prior to migration: enforce, audit, and warn.
To learn more about Pod Security Admission and setting values for labels, see Controlling pod security admission synchronization.
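For example, you might label a source namespace before migration with a command like the following; the privileged level shown is an assumption and should match the level that the workloads in the namespace actually require:

$ oc label --overwrite namespace <source_namespace> \
    pod-security.kubernetes.io/enforce=privileged \
    pod-security.kubernetes.io/audit=privileged \
    pod-security.kubernetes.io/warn=privileged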
3.4.4.2. Configuring the MigrationController CR as root or non-root for all migrations
By default, Rsync runs as non-root.
On the destination cluster, you can configure the MigrationController CR to run Rsync as root.
Procedure
- Configure the MigrationController CR as shown in the sketch after this procedure.

  This configuration will apply to all future migrations.
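A minimal sketch of the MigrationController CR for running Rsync as root; the migration_rsync_privileged parameter name is an assumption to verify against your MTC version (setting it to false keeps the non-root default):

apiVersion: migration.openshift.io/v1alpha1
kind: MigrationController
metadata:
  name: migration-controller
  namespace: openshift-migration
spec:
  # [...] other MigrationController settings
  migration_rsync_privileged: true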
3.4.4.3. Configuring the MigMigration CR as root or non-root per migration
On the destination cluster, you can configure the MigMigration CR to run Rsync as root or non-root, with the following non-root options:
- As a specific user ID (UID)
- As a specific group ID (GID)
Procedure
- To run Rsync as root, configure the MigMigration CR according to the first sketch after this list.
- To run Rsync as a specific user ID (UID) or as a specific group ID (GID), configure the MigMigration CR according to the second sketch after this list.
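Minimal sketches of both variants; the runAsRoot, runAsUser, and runAsGroup field names are assumptions to verify against your MTC version, and the name, UID, and GID values are placeholders.

To run Rsync as root:

apiVersion: migration.openshift.io/v1alpha1
kind: MigMigration
metadata:
  name: <migmigration>
  namespace: openshift-migration
spec:
  # [...] other MigMigration settings
  runAsRoot: true

To run Rsync as a specific UID or GID:

apiVersion: migration.openshift.io/v1alpha1
kind: MigMigration
metadata:
  name: <migmigration>
  namespace: openshift-migration
spec:
  # [...] other MigMigration settings
  runAsUser: <user_id>
  runAsGroup: <group_id>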
3.5. Configuring a replication repository
You must configure object storage to use as a replication repository. The Migration Toolkit for Containers (MTC) copies data from the source cluster to the replication repository, and then from the replication repository to the target cluster.
MTC supports the file system and snapshot data copy methods for migrating data from the source cluster to the target cluster. Select a method that is suited for your environment and is supported by your storage provider.
MTC supports the following storage providers:
- Multicloud Object Gateway
- Amazon Web Services S3
- Google Cloud Platform
- Microsoft Azure Blob
- Generic S3 object storage, for example, Minio or Ceph S3
3.5.1. Prerequisites
- All clusters must have uninterrupted network access to the replication repository.
- If you use a proxy server with an internally hosted replication repository, you must ensure that the proxy allows access to the replication repository.
3.5.2. Retrieving Multicloud Object Gateway credentials
You must retrieve the Multicloud Object Gateway (MCG) credentials and S3 endpoint to configure MCG as a replication repository for the Migration Toolkit for Containers (MTC). You also need the MCG credentials to create a Secret custom resource (CR) for the OpenShift API for Data Protection (OADP).
MCG is a component of OpenShift Data Foundation.
Prerequisites
- You must deploy OpenShift Data Foundation by using the appropriate OpenShift Data Foundation deployment guide.
Procedure
- Obtain the S3 endpoint, AWS_ACCESS_KEY_ID, and AWS_SECRET_ACCESS_KEY by running the describe command on the NooBaa custom resource, as shown in the sketch after this procedure.

  You use these credentials to add MCG as a replication repository.
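A minimal sketch of the describe command, assuming OpenShift Data Foundation is deployed in the default openshift-storage namespace:

$ oc describe noobaa -n openshift-storage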
3.5.3. Configuring Amazon Web Services
You configure Amazon Web Services (AWS) S3 object storage as a replication repository for the Migration Toolkit for Containers (MTC).
Prerequisites
- You must have the AWS CLI installed.
- The AWS S3 storage bucket must be accessible to the source and target clusters.
- If you are using the snapshot copy method:
  - You must have access to EC2 Elastic Block Storage (EBS).
  - The source and target clusters must be in the same region.
  - The source and target clusters must have the same storage class.
  - The storage class must be compatible with snapshots.
Procedure
- Set the BUCKET variable:

  $ BUCKET=<your_bucket>

- Set the REGION variable:

  $ REGION=<your_region>

- Create an AWS S3 bucket:

  $ aws s3api create-bucket \
      --bucket $BUCKET \
      --region $REGION \
      --create-bucket-configuration LocationConstraint=$REGION

  us-east-1 does not support a LocationConstraint. If your region is us-east-1, omit --create-bucket-configuration LocationConstraint=$REGION.

- Create an IAM user:

  $ aws iam create-user --user-name velero

  If you want to use Velero to back up multiple clusters with multiple S3 buckets, create a unique user name for each cluster.

- Create a velero-policy.json file. See the sketch after this procedure.
- Attach the policies to give the velero user the minimum necessary permissions:

  $ aws iam put-user-policy \
      --user-name velero \
      --policy-name velero \
      --policy-document file://velero-policy.json

- Create an access key for the velero user:

  $ aws iam create-access-key --user-name velero

  Record the AWS_SECRET_ACCESS_KEY and the AWS_ACCESS_KEY_ID from the command output. You use the credentials to add AWS as a replication repository.
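A sketch of a velero-policy.json file, adapted from the standard Velero AWS IAM policy; the exact actions your MTC release requires might differ, so review the list before applying it:

$ cat > velero-policy.json <<EOF
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "ec2:DescribeVolumes",
                "ec2:DescribeSnapshots",
                "ec2:CreateTags",
                "ec2:CreateVolume",
                "ec2:CreateSnapshot",
                "ec2:DeleteSnapshot"
            ],
            "Resource": "*"
        },
        {
            "Effect": "Allow",
            "Action": [
                "s3:GetObject",
                "s3:DeleteObject",
                "s3:PutObject",
                "s3:AbortMultipartUpload",
                "s3:ListMultipartUploadParts"
            ],
            "Resource": [
                "arn:aws:s3:::${BUCKET}/*"
            ]
        },
        {
            "Effect": "Allow",
            "Action": [
                "s3:ListBucket",
                "s3:GetBucketLocation",
                "s3:ListBucketMultipartUploads"
            ],
            "Resource": [
                "arn:aws:s3:::${BUCKET}"
            ]
        }
    ]
}
EOF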
3.5.4. Configuring Google Cloud Platform
You configure a Google Cloud Platform (GCP) storage bucket as a replication repository for the Migration Toolkit for Containers (MTC).
Prerequisites
- You must have the gcloud and gsutil CLI tools installed. See the Google cloud documentation for details.
- The GCP storage bucket must be accessible to the source and target clusters.
- If you are using the snapshot copy method:
  - The source and target clusters must be in the same region.
  - The source and target clusters must have the same storage class.
  - The storage class must be compatible with snapshots.
Procedure
- Log in to GCP:

  $ gcloud auth login

- Set the BUCKET variable, specifying your bucket name:

  $ BUCKET=<bucket>

- Create the storage bucket:

  $ gsutil mb gs://$BUCKET/

- Set the PROJECT_ID variable to your active project:

  $ PROJECT_ID=$(gcloud config get-value project)

- Create a service account:

  $ gcloud iam service-accounts create velero \
      --display-name "Velero service account"

- List your service accounts:

  $ gcloud iam service-accounts list

- Set the SERVICE_ACCOUNT_EMAIL variable to match its email value:

  $ SERVICE_ACCOUNT_EMAIL=$(gcloud iam service-accounts list \
      --filter="displayName:Velero service account" \
      --format 'value(email)')

- Attach the policies to give the velero service account the minimum necessary permissions by defining the ROLE_PERMISSIONS variable. See the sketch after this procedure.
- Create the velero.server custom role:

  $ gcloud iam roles create velero.server \
      --project $PROJECT_ID \
      --title "Velero Server" \
      --permissions "$(IFS=","; echo "${ROLE_PERMISSIONS[*]}")"

- Add IAM policy binding to the project:

  $ gcloud projects add-iam-policy-binding $PROJECT_ID \
      --member serviceAccount:$SERVICE_ACCOUNT_EMAIL \
      --role projects/$PROJECT_ID/roles/velero.server

- Update the IAM service account:

  $ gsutil iam ch serviceAccount:$SERVICE_ACCOUNT_EMAIL:objectAdmin gs://${BUCKET}

- Save the IAM service account keys to the credentials-velero file in the current directory:

  $ gcloud iam service-accounts keys create credentials-velero \
      --iam-account $SERVICE_ACCOUNT_EMAIL

  You use the credentials-velero file to add GCP as a replication repository.
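A sketch of the ROLE_PERMISSIONS definition used by the steps above, adapted from the upstream Velero GCP plugin documentation; the exact permission list your MTC release requires might differ:

ROLE_PERMISSIONS=(
    compute.disks.get
    compute.disks.create
    compute.disks.createSnapshot
    compute.snapshots.get
    compute.snapshots.create
    compute.snapshots.useReadOnly
    compute.snapshots.delete
    compute.zones.get
    storage.objects.create
    storage.objects.delete
    storage.objects.get
    storage.objects.list
)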
3.5.5. Configuring Microsoft Azure
You configure a Microsoft Azure Blob storage container as a replication repository for the Migration Toolkit for Containers (MTC).
Prerequisites
- You must have the Azure CLI installed.
- The Azure Blob storage container must be accessible to the source and target clusters.
- If you are using the snapshot copy method:
  - The source and target clusters must be in the same region.
  - The source and target clusters must have the same storage class.
  - The storage class must be compatible with snapshots.
Procedure
- Log in to Azure:

  $ az login

- Set the AZURE_RESOURCE_GROUP variable:

  $ AZURE_RESOURCE_GROUP=Velero_Backups

- Create an Azure resource group, specifying your location:

  $ az group create -n $AZURE_RESOURCE_GROUP --location CentralUS

- Set the AZURE_STORAGE_ACCOUNT_ID variable:

  $ AZURE_STORAGE_ACCOUNT_ID="velero$(uuidgen | cut -d '-' -f5 | tr '[A-Z]' '[a-z]')"

- Create an Azure storage account. See the first sketch after this procedure.
- Set the BLOB_CONTAINER variable:

  $ BLOB_CONTAINER=velero

- Create an Azure Blob storage container:

  $ az storage container create \
      -n $BLOB_CONTAINER \
      --public-access off \
      --account-name $AZURE_STORAGE_ACCOUNT_ID

- Create a service principal and credentials for velero. See the second sketch after this procedure.
- Save the service principal credentials in the credentials-velero file. See the third sketch after this procedure.

  You use the credentials-velero file to add Azure as a replication repository.
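Sketches for the storage account, service principal, and credentials file steps above, adapted from the upstream Velero documentation for Azure; the SKU, kind, access tier, and Contributor role are assumptions to adjust for your environment, and newer Azure CLI versions might additionally require --scopes with --role.

Create the storage account:

$ az storage account create \
    --name $AZURE_STORAGE_ACCOUNT_ID \
    --resource-group $AZURE_RESOURCE_GROUP \
    --sku Standard_GRS \
    --encryption-services blob \
    --https-only true \
    --kind BlobStorage \
    --access-tier Hot

Create a service principal for velero and capture its credentials:

$ AZURE_SUBSCRIPTION_ID=$(az account list --query '[?isDefault].id' -o tsv)
$ AZURE_TENANT_ID=$(az account list --query '[?isDefault].tenantId' -o tsv)
$ AZURE_CLIENT_SECRET=$(az ad sp create-for-rbac --name "velero" --role "Contributor" --query 'password' -o tsv)
$ AZURE_CLIENT_ID=$(az ad sp list --display-name "velero" --query '[0].appId' -o tsv)

Save the credentials in the credentials-velero file:

$ cat << EOF > ./credentials-velero
AZURE_SUBSCRIPTION_ID=${AZURE_SUBSCRIPTION_ID}
AZURE_TENANT_ID=${AZURE_TENANT_ID}
AZURE_CLIENT_ID=${AZURE_CLIENT_ID}
AZURE_CLIENT_SECRET=${AZURE_CLIENT_SECRET}
AZURE_RESOURCE_GROUP=${AZURE_RESOURCE_GROUP}
AZURE_CLOUD_NAME=AzurePublicCloud
EOF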
3.6. Uninstalling MTC and deleting resources
You can uninstall the Migration Toolkit for Containers (MTC) and delete its resources to clean up the cluster.
Deleting the velero CRDs removes Velero from the cluster.
Prerequisites
- You must be logged in as a user with cluster-admin privileges.
Procedure
- Delete the MigrationController custom resource (CR) on all clusters:

  $ oc delete migrationcontroller <migration_controller>

- Uninstall the Migration Toolkit for Containers Operator on OpenShift Container Platform 4 by using the Operator Lifecycle Manager.
- Delete cluster-scoped resources on all clusters by running the following commands:
  - migration custom resource definitions (CRDs):

    $ oc delete $(oc get crds -o name | grep 'migration.openshift.io')

  - velero CRDs:

    $ oc delete $(oc get crds -o name | grep 'velero')

  - migration cluster roles:

    $ oc delete $(oc get clusterroles -o name | grep 'migration.openshift.io')

  - migration-operator cluster role:

    $ oc delete clusterrole migration-operator

  - velero cluster roles:

    $ oc delete $(oc get clusterroles -o name | grep 'velero')

  - migration cluster role bindings:

    $ oc delete $(oc get clusterrolebindings -o name | grep 'migration.openshift.io')

  - migration-operator cluster role bindings:

    $ oc delete clusterrolebindings migration-operator

  - velero cluster role bindings:

    $ oc delete $(oc get clusterrolebindings -o name | grep 'velero')