Chapter 3. Installing the Migration Toolkit for Containers


You can install the Migration Toolkit for Containers (MTC) on OpenShift Container Platform 4.

Note

By default, the MTC web console and the Migration Controller pod run on the target cluster. You can configure the Migration Controller custom resource manifest to run the MTC web console and the Migration Controller pod on a remote cluster.

After you install MTC, you must configure object storage to use as a replication repository.

To uninstall MTC, see Uninstalling MTC and deleting resources.

3.1. Compatibility guidelines

You must install the Migration Toolkit for Containers (MTC) Operator that is compatible with your OpenShift Container Platform version.

Definitions

legacy platform
OpenShift Container Platform 4.5 and earlier.
modern platform
OpenShift Container Platform 4.6 and later.
legacy operator
The MTC Operator designed for legacy platforms.
modern operator
The MTC Operator designed for modern platforms.
control cluster
The cluster that runs the MTC controller and GUI.
remote cluster
A source or destination cluster for a migration that runs Velero. The control cluster communicates with remote clusters through the Velero API to drive migrations.
Table 3.1. MTC compatibility: Migrating from a legacy platform

Stable MTC version

  • OpenShift Container Platform 4.5 or earlier: MTC 1.7.z. Legacy 1.7 Operator: install manually with the operator.yml file.

    Important

    This cluster cannot be the control cluster.

  • OpenShift Container Platform 4.6 or later: MTC 1.7.z. Install with OLM, release channel release-v1.7.

Note

Edge cases exist in which network restrictions prevent modern clusters from connecting to other clusters involved in the migration. For example, when you migrate from an on-premises OpenShift Container Platform 3.11 cluster to a modern OpenShift Container Platform cluster in the cloud, the modern cluster might not be able to connect to the OpenShift Container Platform 3.11 cluster.

With MTC 1.7, if one of the remote clusters is unable to communicate with the control cluster because of network restrictions, use the crane tunnel-api command.

With the stable MTC release, although you should always designate the most modern cluster as the control cluster, in this specific case it is possible to designate the legacy cluster as the control cluster and push workloads to the remote cluster.

3.2. Installing the legacy Migration Toolkit for Containers Operator on OpenShift Container Platform 4.2 to 4.5

You can install the legacy Migration Toolkit for Containers Operator manually on OpenShift Container Platform versions 4.2 to 4.5.

Prerequisites

  • You must be logged in as a user with cluster-admin privileges on all clusters.
  • You must have access to registry.redhat.io.
  • You must have podman installed.

Procedure

  1. Log in to registry.redhat.io with your Red Hat Customer Portal credentials:

    $ sudo podman login registry.redhat.io
  2. Download the operator.yml file by entering the following command:

    $ sudo podman cp $(sudo podman create \
      registry.redhat.io/rhmtc/openshift-migration-legacy-rhel8-operator:v1.7):/operator.yml ./
  3. Download the controller.yml file by entering the following command:

    $ sudo podman cp $(sudo podman create \
      registry.redhat.io/rhmtc/openshift-migration-legacy-rhel8-operator:v1.7):/controller.yml ./
  4. Log in to your source cluster.
  5. Verify that the cluster can authenticate with registry.redhat.io:

    $ oc run test --image registry.redhat.io/ubi8 --command sleep infinity
  6. Create the Migration Toolkit for Containers Operator object:

    $ oc create -f operator.yml

    Example output

    namespace/openshift-migration created
    rolebinding.rbac.authorization.k8s.io/system:deployers created
    serviceaccount/migration-operator created
    customresourcedefinition.apiextensions.k8s.io/migrationcontrollers.migration.openshift.io created
    role.rbac.authorization.k8s.io/migration-operator created
    rolebinding.rbac.authorization.k8s.io/migration-operator created
    clusterrolebinding.rbac.authorization.k8s.io/migration-operator created
    deployment.apps/migration-operator created
    Error from server (AlreadyExists): error when creating "./operator.yml":
    rolebindings.rbac.authorization.k8s.io "system:image-builders" already exists 1
    Error from server (AlreadyExists): error when creating "./operator.yml":
    rolebindings.rbac.authorization.k8s.io "system:image-pullers" already exists

    1
    You can ignore Error from server (AlreadyExists) messages. They are caused by the Migration Toolkit for Containers Operator creating resources for earlier versions of OpenShift Container Platform 4 that are provided in later releases.
  7. Create the MigrationController object:

    $ oc create -f controller.yml
  8. Verify that the MTC pods are running:

    $ oc get pods -n openshift-migration
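
    You can also wait for all pods in the namespace to become ready instead of inspecting the list manually. The following command is an optional check; the timeout value is only a suggestion:

    $ oc wait --for=condition=Ready pods --all -n openshift-migration --timeout=300s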

3.3. Installing the Migration Toolkit for Containers Operator on OpenShift Container Platform 4.8

You install the Migration Toolkit for Containers Operator on OpenShift Container Platform 4.8 by using the Operator Lifecycle Manager.

Prerequisites

  • You must be logged in as a user with cluster-admin privileges on all clusters.

Procedure

  1. In the OpenShift Container Platform web console, click Operators → OperatorHub.
  2. Use the Filter by keyword field to find the Migration Toolkit for Containers Operator.
  3. Select the Migration Toolkit for Containers Operator and click Install.
  4. Click Install.

    On the Installed Operators page, the Migration Toolkit for Containers Operator appears in the openshift-migration project with the status Succeeded.

  5. Click Migration Toolkit for Containers Operator.
  6. Under Provided APIs, locate the Migration Controller tile, and click Create Instance.
  7. Click Create.
  8. Click Workloads → Pods to verify that the MTC pods are running.
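
If you prefer to confirm the installation from the command line, you can check the Subscription and ClusterServiceVersion in the openshift-migration namespace. This is an optional check; the exact ClusterServiceVersion name varies by release:

$ oc get subscription,csv -n openshift-migration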

3.4. Proxy configuration

For OpenShift Container Platform 4.1 and earlier versions, you must configure proxies in the MigrationController custom resource (CR) manifest after you install the Migration Toolkit for Containers Operator because these versions do not support a cluster-wide proxy object.

For OpenShift Container Platform 4.2 to 4.8, the Migration Toolkit for Containers (MTC) inherits the cluster-wide proxy settings. You can change the proxy parameters if you want to override the cluster-wide proxy settings.

3.4.1. Direct volume migration

Direct Volume Migration (DVM) was introduced in MTC 1.4.2. DVM supports only one proxy. The source cluster cannot access the route of the target cluster if the target cluster is also behind a proxy.

If you want to perform a DVM from a source cluster behind a proxy, you must configure a TCP proxy that works at the transport layer and forwards the SSL connections transparently, without decrypting and re-encrypting them with its own SSL certificates. A Stunnel proxy is an example of such a proxy.

3.4.1.1. TCP proxy setup for DVM

You can set up a direct connection between the source and the target cluster through a TCP proxy and configure the stunnel_tcp_proxy variable in the MigrationController CR to use the proxy:

apiVersion: migration.openshift.io/v1alpha1
kind: MigrationController
metadata:
  name: migration-controller
  namespace: openshift-migration
spec:
  [...]
  stunnel_tcp_proxy: http://username:password@ip:port

Direct volume migration (DVM) supports only basic authentication for the proxy. Moreover, DVM works only from behind proxies that can tunnel a TCP connection transparently. HTTP/HTTPS proxies in man-in-the-middle mode do not work. The existing cluster-wide proxies might not support this behavior. As a result, the proxy settings for DVM are intentionally kept different from the usual proxy configuration in MTC.

3.4.1.2. Why use a TCP proxy instead of an HTTP/HTTPS proxy?

You can enable DVM by running Rsync between the source and the target cluster over an OpenShift route. Traffic is encrypted using Stunnel, a TCP proxy. The Stunnel running on the source cluster initiates a TLS connection with the target Stunnel and transfers data over an encrypted channel.

Cluster-wide HTTP/HTTPS proxies in OpenShift are usually configured in man-in-the-middle mode where they negotiate their own TLS session with the outside servers. However, this does not work with Stunnel. Stunnel requires that its TLS session be untouched by the proxy, essentially making the proxy a transparent tunnel which simply forwards the TCP connection as-is. Therefore, you must use a TCP proxy.
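
To check whether a candidate proxy tunnels TLS connections without terminating them, you can inspect the certificate that is presented through the proxy. The following command is a sketch; the proxy host, proxy port, and route hostname are placeholders, and the -proxy option requires OpenSSL 1.1.0 or later:

$ openssl s_client -proxy <proxy_host>:<proxy_port> \
    -connect <target_route_hostname>:443 -showcerts </dev/null

If the certificate returned belongs to the proxy instead of the target route, the proxy is intercepting TLS and cannot be used for direct volume migration.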

3.4.1.3. Known issue

Migration fails with error Upgrade request required

The migration controller uses the SPDY protocol to execute commands within remote pods. If the remote cluster is behind a proxy or a firewall that does not support the SPDY protocol, the migration controller fails to execute remote commands. The migration fails with the error message Upgrade request required. Workaround: Use a proxy that supports the SPDY protocol.

In addition to supporting the SPDY protocol, the proxy or firewall also must pass the Upgrade HTTP header to the API server. The client uses this header to open a websocket connection with the API server. If the Upgrade header is blocked by the proxy or firewall, the migration fails with the error message Upgrade request required. Workaround: Ensure that the proxy forwards the Upgrade header.
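
A quick way to test whether the intermediary permits the SPDY upgrade is to execute a command in a pod on the remote cluster through the proxy. The pod and namespace names in the following example are placeholders:

$ oc exec -n openshift-migration <pod_name> -- date

If this command fails with Upgrade request required, the proxy or firewall is stripping the Upgrade header or blocking SPDY.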

3.4.2. Tuning network policies for migrations

OpenShift supports restricting traffic to or from pods by using NetworkPolicy objects or egress firewalls, depending on the network plugin used by the cluster. If any of the source namespaces involved in a migration use such mechanisms to restrict network traffic to pods, the restrictions might inadvertently block traffic to the Rsync pods during migration.

Rsync pods running on both the source and the target clusters must connect to each other over an OpenShift Route. Existing NetworkPolicy or EgressNetworkPolicy objects can be configured to automatically exempt Rsync pods from these traffic restrictions.

3.4.2.1. NetworkPolicy configuration

3.4.2.1.1. Egress traffic from Rsync pods

You can use the unique labels of Rsync pods to allow egress traffic to pass from them if the NetworkPolicy configuration in the source or destination namespaces blocks this type of traffic. The following policy allows all egress traffic from Rsync pods in the namespace:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-all-egress-from-rsync-pods
spec:
  podSelector:
    matchLabels:
      owner: directvolumemigration
      app: directvolumemigration-rsync-transfer
  egress:
  - {}
  policyTypes:
  - Egress

3.4.2.1.2. Ingress traffic to Rsync pods

Similarly, the following policy allows all ingress traffic to the Rsync pods if the NetworkPolicy configuration in the source or destination namespaces blocks this type of traffic:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-all-ingress-to-rsync-pods
spec:
  podSelector:
    matchLabels:
      owner: directvolumemigration
      app: directvolumemigration-rsync-transfer
  ingress:
  - {}
  policyTypes:
  - Ingress
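
To confirm that the podSelector in these policies matches the Rsync transfer pods while a migration is running, you can list the pods by the same labels. The namespace is a placeholder:

$ oc get pods -n <namespace> \
    -l owner=directvolumemigration,app=directvolumemigration-rsync-transfer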

3.4.2.2. EgressNetworkPolicy configuration

The EgressNetworkPolicy object or Egress Firewalls are OpenShift constructs designed to block egress traffic leaving the cluster.

Unlike the NetworkPolicy object, the Egress Firewall works at a project level because it applies to all pods in the namespace. Therefore, the unique labels of Rsync pods cannot be used to exempt only the Rsync pods from the restrictions. However, you can add the CIDR ranges of the source or target cluster to the Allow rule of the policy so that a direct connection can be set up between the two clusters.

Based on which cluster the Egress Firewall is present in, you can add the CIDR range of the other cluster to allow egress traffic between the two:

apiVersion: network.openshift.io/v1
kind: EgressNetworkPolicy
metadata:
  name: test-egress-policy
  namespace: <namespace>
spec:
  egress:
  - to:
      cidrSelector: <cidr_of_source_or_target_cluster>
    type: Allow

3.4.2.3. Configuring supplemental groups for Rsync pods

If your PVCs use shared storage, you can configure access to that storage by adding supplemental groups to the Rsync pod definitions so that the pods can access the storage:

Table 3.2. Supplementary groups for Rsync pods

  • src_supplemental_groups (string, not set by default): Comma-separated list of supplemental groups for source Rsync pods.

  • target_supplemental_groups (string, not set by default): Comma-separated list of supplemental groups for target Rsync pods.

Example usage

The MigrationController CR can be updated to set values for these supplemental groups:

spec:
  src_supplemental_groups: "1000,2000"
  target_supplemental_groups: "2000,3000"
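
For example, you can apply these values to a running MigrationController CR with a merge patch instead of editing the manifest. This sketch assumes the default CR name migration-controller and uses the example group IDs above:

$ oc patch migrationcontroller migration-controller -n openshift-migration \
    --type merge \
    -p '{"spec":{"src_supplemental_groups":"1000,2000","target_supplemental_groups":"2000,3000"}}'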

3.4.3. Configuring proxies

Prerequisites

  • You must be logged in as a user with cluster-admin privileges on all clusters.

Procedure

  1. Get the MigrationController CR manifest:

    $ oc get migrationcontroller <migration_controller> -n openshift-migration
  2. Update the proxy parameters:

    apiVersion: migration.openshift.io/v1alpha1
    kind: MigrationController
    metadata:
      name: <migration_controller>
      namespace: openshift-migration
    ...
    spec:
      stunnel_tcp_proxy: http://<username>:<password>@<ip>:<port> 1
      noProxy: example.com 2
    1
    Stunnel proxy URL for direct volume migration.
    2
    Comma-separated list of destination domain names, domains, IP addresses, or other network CIDRs to exclude from proxying.

    Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com, but not y.com. Use * to bypass the proxy for all destinations. If you scale up workers that are not included in the network defined by the networking.machineNetwork[].cidr field from the installation configuration, you must add them to this list to prevent connection issues.

    This field is ignored if neither the httpProxy nor the httpsProxy field is set.

  3. Save the manifest as migration-controller.yaml.
  4. Apply the updated manifest:

    $ oc replace -f migration-controller.yaml -n openshift-migration
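
Optionally, verify that the parameters were applied. The CR name is a placeholder:

$ oc get migrationcontroller <migration_controller> -n openshift-migration \
    -o jsonpath='{.spec.stunnel_tcp_proxy}{"\n"}'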

For more information, see Configuring the cluster-wide proxy.

3.5. Configuring a replication repository

You must configure object storage to use as a replication repository. The Migration Toolkit for Containers (MTC) copies data from the source cluster to the replication repository, and then from the replication repository to the target cluster.

MTC supports the file system and snapshot data copy methods for migrating data from the source cluster to the target cluster. Select a method that is suited for your environment and is supported by your storage provider.

MTC supports the following storage providers:

  • Multi-Cloud Object Gateway (MCG)
  • Amazon Web Services (AWS) S3
  • Google Cloud Platform (GCP)
  • Microsoft Azure Blob
  • Generic S3 object storage, for example, Minio or Ceph S3

3.5.1. Prerequisites

  • All clusters must have uninterrupted network access to the replication repository.
  • If you use a proxy server with an internally hosted replication repository, you must ensure that the proxy allows access to the replication repository.

3.5.2. Configuring Multi-Cloud Object Gateway

You can install the OpenShift Container Storage Operator and configure a Multi-Cloud Object Gateway (MCG) storage bucket as a replication repository for the Migration Toolkit for Containers (MTC).

3.5.2.1. Installing the OpenShift Container Storage Operator

You can install the OpenShift Container Storage Operator from OperatorHub.

Procedure

  1. In the OpenShift Container Platform web console, click Operators → OperatorHub.
  2. Use Filter by keyword (in this case, OCS) to find the OpenShift Container Storage Operator.
  3. Select the OpenShift Container Storage Operator and click Install.
  4. Select an Update Channel, Installation Mode, and Approval Strategy.
  5. Click Install.

    On the Installed Operators page, the OpenShift Container Storage Operator appears in the openshift-storage project with the status Succeeded.

3.5.2.2. Creating the Multi-Cloud Object Gateway storage bucket

You can create the custom resources (CRs) for the Multi-Cloud Object Gateway (MCG) storage bucket.

Procedure

  1. Log in to the OpenShift Container Platform cluster:

    $ oc login -u <username>
  2. Create the NooBaa CR configuration file, noobaa.yml, with the following content:

    apiVersion: noobaa.io/v1alpha1
    kind: NooBaa
    metadata:
      name: <noobaa>
      namespace: openshift-storage
    spec:
     dbResources:
       requests:
         cpu: 0.5 1
         memory: 1Gi
     coreResources:
       requests:
         cpu: 0.5 2
         memory: 1Gi
    1 2
    For a very small cluster, you can change the value to 0.1.
  3. Create the NooBaa object:

    $ oc create -f noobaa.yml
  4. Create the BackingStore CR configuration file, bs.yml, with the following content:

    apiVersion: noobaa.io/v1alpha1
    kind: BackingStore
    metadata:
      finalizers:
      - noobaa.io/finalizer
      labels:
        app: noobaa
      name: <mcg_backing_store>
      namespace: openshift-storage
    spec:
      pvPool:
        numVolumes: 3 1
        resources:
          requests:
            storage: <volume_size> 2
        storageClass: <storage_class> 3
      type: pv-pool
    1
    Specify the number of volumes in the persistent volume pool.
    2
    Specify the size of the volumes, for example, 50Gi.
    3
    Specify the storage class, for example, gp2.
  5. Create the BackingStore object:

    $ oc create -f bs.yml
  6. Create the BucketClass CR configuration file, bc.yml, with the following content:

    apiVersion: noobaa.io/v1alpha1
    kind: BucketClass
    metadata:
      labels:
        app: noobaa
      name: <mcg_bucket_class>
      namespace: openshift-storage
    spec:
      placementPolicy:
        tiers:
        - backingStores:
          - <mcg_backing_store>
          placement: Spread
  7. Create the BucketClass object:

    $ oc create -f bc.yml
  8. Create the ObjectBucketClaim CR configuration file, obc.yml, with the following content:

    apiVersion: objectbucket.io/v1alpha1
    kind: ObjectBucketClaim
    metadata:
      name: <bucket>
      namespace: openshift-storage
    spec:
      bucketName: <bucket> 1
      storageClassName: <storage_class>
      additionalConfig:
        bucketclass: <mcg_bucket_class>
    1
    Record the bucket name for adding the replication repository to the MTC web console.
  9. Create the ObjectBucketClaim object:

    $ oc create -f obc.yml
  10. Watch the resource creation process to verify that the ObjectBucketClaim status is Bound:

    $ watch -n 30 'oc get -n openshift-storage objectbucketclaim migstorage -o yaml'

    This process can take five to ten minutes.

  11. Obtain and record the following values, which are required when you add the replication repository to the MTC web console:

    • S3 endpoint:

      $ oc get route -n openshift-storage s3
    • S3 provider access key:

      $ oc get secret -n openshift-storage migstorage \
        -o go-template='{{ .data.AWS_ACCESS_KEY_ID }}' | base64 --decode
    • S3 provider secret access key:

      $ oc get secret -n openshift-storage migstorage \
        -o go-template='{{ .data.AWS_SECRET_ACCESS_KEY }}' | base64 --decode
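
The route command above returns the route object; the value that you paste into the MTC web console is the full endpoint URL. For example, assuming the s3 route terminates TLS, you can print it with:

$ echo "https://$(oc get route -n openshift-storage s3 -o jsonpath='{.spec.host}')"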

3.5.3. Configuring Amazon Web Services S3

You can configure an Amazon Web Services (AWS) S3 storage bucket as a replication repository for the Migration Toolkit for Containers (MTC).

Prerequisites

  • The AWS S3 storage bucket must be accessible to the source and target clusters.
  • You must have the AWS CLI installed.
  • If you are using the snapshot copy method:

    • You must have access to EC2 Elastic Block Storage (EBS).
    • The source and target clusters must be in the same region.
    • The source and target clusters must have the same storage class.
    • The storage class must be compatible with snapshots.

Procedure

  1. Create an AWS S3 bucket:

    $ aws s3api create-bucket \
        --bucket <bucket> \ 1
        --region <bucket_region> 2
    1
    Specify your S3 bucket name.
    2
    Specify your S3 bucket region, for example, us-east-1.
  2. Create the IAM user velero:

    $ aws iam create-user --user-name velero
  3. Create an EC2 EBS snapshot policy:

    $ cat > velero-ec2-snapshot-policy.json <<EOF
    {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Action": [
                    "ec2:DescribeVolumes",
                    "ec2:DescribeSnapshots",
                    "ec2:CreateTags",
                    "ec2:CreateVolume",
                    "ec2:CreateSnapshot",
                    "ec2:DeleteSnapshot"
                ],
                "Resource": "*"
            }
        ]
    }
    EOF
  4. Create an AWS S3 access policy for one or for all S3 buckets:

    $ cat > velero-s3-policy.json <<EOF
    {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Action": [
                    "s3:GetObject",
                    "s3:DeleteObject",
                    "s3:PutObject",
                    "s3:AbortMultipartUpload",
                    "s3:ListMultipartUploadParts"
                ],
                "Resource": [
                    "arn:aws:s3:::<bucket>/*" 1
                ]
            },
            {
                "Effect": "Allow",
                "Action": [
                    "s3:ListBucket",
                    "s3:GetBucketLocation",
                    "s3:ListBucketMultipartUploads"
                ],
                "Resource": [
                    "arn:aws:s3:::<bucket>" 2
                ]
            }
        ]
    }
    EOF
    1 2
    To grant access to a single S3 bucket, specify the bucket name. To grant access to all AWS S3 buckets, specify * instead of a bucket name as in the following example:

    Example output

    "Resource": [
        "arn:aws:s3:::*"

  5. Attach the EC2 EBS policy to velero:

    $ aws iam put-user-policy \
      --user-name velero \
      --policy-name velero-ebs \
      --policy-document file://velero-ec2-snapshot-policy.json
  6. Attach the AWS S3 policy to velero:

    $ aws iam put-user-policy \
      --user-name velero \
      --policy-name velero-s3 \
      --policy-document file://velero-s3-policy.json
  7. Create an access key for velero:

    $ aws iam create-access-key --user-name velero
    {
      "AccessKey": {
            "UserName": "velero",
            "Status": "Active",
            "CreateDate": "2017-07-31T22:24:41.576Z",
            "SecretAccessKey": <AWS_SECRET_ACCESS_KEY>, 1
            "AccessKeyId": <AWS_ACCESS_KEY_ID> 2
        }
    }

    Record the AWS_SECRET_ACCESS_KEY and the AWS_ACCESS_KEY_ID. You use the credentials to add AWS as a replication repository.
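
    If you prefer to capture the keys directly rather than copying them from the JSON output, you can use a JMESPath query. Run this instead of the previous command, not in addition to it, because each invocation creates a new access key:

    $ aws iam create-access-key --user-name velero \
        --query 'AccessKey.[AccessKeyId,SecretAccessKey]' --output text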

3.5.4. Configuring Google Cloud Platform

You can configure a Google Cloud Platform (GCP) storage bucket as a replication repository for the Migration Toolkit for Containers (MTC).

Prerequisites

  • The GCP storage bucket must be accessible to the source and target clusters.
  • You must have gsutil installed.
  • If you are using the snapshot copy method:

    • The source and target clusters must be in the same region.
    • The source and target clusters must have the same storage class.
    • The storage class must be compatible with snapshots.

Procedure

  1. Log in to gsutil:

    $ gsutil init

    Example output

    Welcome! This command will take you through the configuration of gcloud.
    
    Your current configuration has been set to: [default]
    
    To continue, you must login. Would you like to login (Y/n)?

  2. Set the BUCKET variable:

    $ BUCKET=<bucket> 1
    1
    Specify your bucket name.
  3. Create a storage bucket:

    $ gsutil mb gs://$BUCKET/
  4. Set the PROJECT_ID variable to your active project:

    $ PROJECT_ID=`gcloud config get-value project`
  5. Create a velero IAM service account:

    $ gcloud iam service-accounts create velero \
        --display-name "Velero Storage"
  6. Create the SERVICE_ACCOUNT_EMAIL variable:

    $ SERVICE_ACCOUNT_EMAIL=`gcloud iam service-accounts list \
      --filter="displayName:Velero Storage" \
      --format 'value(email)'`
  7. Create the ROLE_PERMISSIONS variable:

    $ ROLE_PERMISSIONS=(
        compute.disks.get
        compute.disks.create
        compute.disks.createSnapshot
        compute.snapshots.get
        compute.snapshots.create
        compute.snapshots.useReadOnly
        compute.snapshots.delete
        compute.zones.get
    )
  8. Create the velero.server custom role:

    $ gcloud iam roles create velero.server \
        --project $PROJECT_ID \
        --title "Velero Server" \
        --permissions "$(IFS=","; echo "${ROLE_PERMISSIONS[*]}")"
  9. Add IAM policy binding to the project:

    $ gcloud projects add-iam-policy-binding $PROJECT_ID \
        --member serviceAccount:$SERVICE_ACCOUNT_EMAIL \
        --role projects/$PROJECT_ID/roles/velero.server
  10. Update the IAM service account:

    $ gsutil iam ch serviceAccount:$SERVICE_ACCOUNT_EMAIL:objectAdmin gs://${BUCKET}
  11. Save the IAM service account keys to the credentials-velero file in the current directory:

    $ gcloud iam service-accounts keys create credentials-velero \
      --iam-account $SERVICE_ACCOUNT_EMAIL

    You use the credentials-velero file to add GCP as a replication repository.
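
    Optionally, confirm that the key was created for the service account:

    $ gcloud iam service-accounts keys list --iam-account $SERVICE_ACCOUNT_EMAIL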

3.5.5. Configuring Microsoft Azure Blob

You can configure a Microsoft Azure Blob storage container as a replication repository for the Migration Toolkit for Containers (MTC).

Prerequisites

  • You must have an Azure storage account.
  • You must have the Azure CLI installed.
  • The Azure Blob storage container must be accessible to the source and target clusters.
  • If you are using the snapshot copy method:

    • The source and target clusters must be in the same region.
    • The source and target clusters must have the same storage class.
    • The storage class must be compatible with snapshots.

Procedure

  1. Set the AZURE_RESOURCE_GROUP variable:

    $ AZURE_RESOURCE_GROUP=Velero_Backups
  2. Create an Azure resource group:

    $ az group create -n $AZURE_RESOURCE_GROUP --location <CentralUS> 1
    1
    Specify your location.
  3. Set the AZURE_STORAGE_ACCOUNT_ID variable:

    $ AZURE_STORAGE_ACCOUNT_ID=velerobackups
  4. Create an Azure storage account:

    $ az storage account create \
      --name $AZURE_STORAGE_ACCOUNT_ID \
      --resource-group $AZURE_RESOURCE_GROUP \
      --sku Standard_GRS \
      --encryption-services blob \
      --https-only true \
      --kind BlobStorage \
      --access-tier Hot
  5. Set the BLOB_CONTAINER variable:

    $ BLOB_CONTAINER=velero
  6. Create an Azure Blob storage container:

    $ az storage container create \
      -n $BLOB_CONTAINER \
      --public-access off \
      --account-name $AZURE_STORAGE_ACCOUNT_ID
  7. Create a service principal and credentials for velero:

    $ AZURE_SUBSCRIPTION_ID=`az account list --query '[?isDefault].id' -o tsv` \
      AZURE_TENANT_ID=`az account list --query '[?isDefault].tenantId' -o tsv` \
      AZURE_CLIENT_SECRET=`az ad sp create-for-rbac --name "velero" \
      --role "Contributor" --query 'password' -o tsv` \
      AZURE_CLIENT_ID=`az ad sp list --display-name "velero" \
      --query '[0].appId' -o tsv`
  8. Save the service principal credentials in the credentials-velero file:

    $ cat << EOF > ./credentials-velero
    AZURE_SUBSCRIPTION_ID=${AZURE_SUBSCRIPTION_ID}
    AZURE_TENANT_ID=${AZURE_TENANT_ID}
    AZURE_CLIENT_ID=${AZURE_CLIENT_ID}
    AZURE_CLIENT_SECRET=${AZURE_CLIENT_SECRET}
    AZURE_RESOURCE_GROUP=${AZURE_RESOURCE_GROUP}
    AZURE_CLOUD_NAME=AzurePublicCloud
    EOF

    You use the credentials-velero file to add Azure as a replication repository.
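
    Optionally, confirm that the service principal exists and that the credentials file contains the expected values. This is only a sanity check:

    $ az ad sp list --display-name velero --query '[].appId' -o tsv
    $ cat ./credentials-velero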

3.6. Uninstalling MTC and deleting resources

You can uninstall the Migration Toolkit for Containers (MTC) and delete its resources to clean up the cluster.

Note

Deleting the velero CRDs removes Velero from the cluster.

Prerequisites

  • You must be logged in as a user with cluster-admin privileges.

Procedure

  1. Delete the MigrationController custom resource (CR) on all clusters:

    $ oc delete migrationcontroller <migration_controller>
  2. Uninstall the Migration Toolkit for Containers Operator on OpenShift Container Platform 4 by using the Operator Lifecycle Manager.
  3. Delete cluster-scoped resources on all clusters by running the following commands:

    • migration custom resource definitions (CRDs):

      $ oc delete $(oc get crds -o name | grep 'migration.openshift.io')
    • velero CRDs:

      $ oc delete $(oc get crds -o name | grep 'velero')
    • migration cluster roles:

      $ oc delete $(oc get clusterroles -o name | grep 'migration.openshift.io')
    • migration-operator cluster role:

      $ oc delete clusterrole migration-operator
    • velero cluster roles:

      $ oc delete $(oc get clusterroles -o name | grep 'velero')
    • migration cluster role bindings:

      $ oc delete $(oc get clusterrolebindings -o name | grep 'migration.openshift.io')
    • migration-operator cluster role bindings:

      $ oc delete clusterrolebindings migration-operator
    • velero cluster role bindings:

      $ oc delete $(oc get clusterrolebindings -o name | grep 'velero')
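
After you run these commands, you can confirm that no MTC or Velero cluster-scoped resources remain. An empty result indicates that the cleanup succeeded:

$ oc get crds,clusterroles,clusterrolebindings -o name | grep -E 'migration.openshift.io|velero'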