This documentation is for a release that is no longer maintained
See documentation for the latest supported version 3 or the latest supported version 4.
Chapter 4. Installing the Migration Toolkit for Containers
You can install the Migration Toolkit for Containers (MTC) on OpenShift Container Platform 3 and on OpenShift Container Platform 4.5 clusters.
You must install the same MTC version on all clusters.
By default, the MTC web console and the Migration Controller pod run on the target cluster. You can configure the Migration Controller custom resource manifest to run the MTC web console and the Migration Controller pod on a source cluster or on a remote cluster.
After you have installed MTC, you must configure object storage to use as a replication repository.
You can install the MTC Operator on OpenShift Container Platform 4 by using the OpenShift Container Platform web console.
Prerequisites
- You must be logged in as a user with cluster-admin privileges on all clusters.
Procedure
- In the OpenShift Container Platform web console, click Operators → OperatorHub.
- Use the Filter by keyword field to find the Migration Toolkit for Containers Operator.
- Select the Migration Toolkit for Containers Operator and click Install.

  Note: Do not change the subscription approval option to Automatic. The Migration Toolkit for Containers version must be the same on the source and the target clusters.

- Click Install.

  On the Installed Operators page, the Migration Toolkit for Containers Operator appears in the openshift-migration project with the status Succeeded.
- Click Migration Toolkit for Containers Operator.
- Under Provided APIs, locate the Migration Controller tile, and click Create Instance.
- If you do not want to run the MTC web console and the Migration Controller pod on the cluster, update the relevant parameters in the migration-controller custom resource manifest. The manifest includes one parameter that is required only for OpenShift Container Platform 4.1.
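For reference, a sketch of a migration-controller manifest follows; the parameter names are assumptions drawn from the MigrationController API and should be verified against your installed MTC version:

```yaml
apiVersion: migration.openshift.io/v1alpha1
kind: MigrationController
metadata:
  name: migration-controller
  namespace: openshift-migration
spec:
  cluster_name: host
  migration_velero: true
  # Assumed parameters: set to false so the web console and
  # Migration Controller pod do not run on this cluster
  migration_controller: false
  migration_ui: false
  # Assumed to be the parameter that is required only for
  # OpenShift Container Platform 4.1
  deprecated_cors_configuration: true
```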
- Click Create.
- Click Workloads → Pods to verify that the MTC pods are running.
You can install the Migration Toolkit for Containers Operator manually on OpenShift Container Platform 3.7, 3.9, 3.10, or 3.11.
You must install the same MTC version on the OpenShift Container Platform 3 and 4 clusters.
To ensure that you have the latest version on the OpenShift Container Platform 3 cluster, download the operator.yml and controller-3.yml files when you are ready to create and run the migration plan.
Prerequisites
- You must be logged in as a user with cluster-admin privileges on all clusters.
- You must have access to registry.redhat.io.
- You must have podman installed.
- The cluster on which you are installing MTC must be OpenShift Container Platform 3.7, 3.9, 3.10, or 3.11.
- You must create an image stream secret and copy it to each node in the cluster.
Procedure
- Log in to registry.redhat.io with your Red Hat Customer Portal credentials:

  $ sudo podman login registry.redhat.io

- Download the operator.yml file:

  $ sudo podman cp $(sudo podman create \
      registry.redhat.io/rhmtc/openshift-migration-rhel7-operator:v1.4):/operator.yml ./

- Download the controller-3.yml file:

  $ sudo podman cp $(sudo podman create \
      registry.redhat.io/rhmtc/openshift-migration-rhel7-operator:v1.4):/controller-3.yml ./

- Log in to your OpenShift Container Platform 3 cluster.
- Verify that the cluster can authenticate with registry.redhat.io:

  $ oc run test --image registry.redhat.io/ubi8 --command sleep infinity

- Create the Migration Toolkit for Containers Operator object:

  $ oc create -f operator.yml

  You can ignore Error from server (AlreadyExists) messages. They are caused by the Migration Toolkit for Containers Operator creating resources for earlier versions of OpenShift Container Platform 3 that are provided in later releases.

- Create the MigrationController object:

  $ oc create -f controller-3.yml

- Verify that the MTC pods are running:

  $ oc get pods -n openshift-migration
4.3. Configuring a replication repository
You must configure object storage to use as a replication repository. The Migration Toolkit for Containers (MTC) copies data from the source cluster to the replication repository, and then from the replication repository to the target cluster.
MTC supports the file system and snapshot data copy methods for migrating data from the source cluster to the target cluster. You can select a method that is suited for your environment and is supported by your storage provider.
All clusters must have uninterrupted network access to the replication repository.
If you use a proxy server with an internally hosted replication repository, you must ensure that the proxy allows access to the replication repository.
The following storage providers are supported:
- Multi-Cloud Object Gateway (MCG)
- Amazon Web Services (AWS) S3
- Google Cloud Platform (GCP)
- Microsoft Azure Blob
- Generic S3 object storage, for example, Minio or Ceph S3
4.3.1. Configuring Multi-Cloud Object Gateway
You can install the OpenShift Container Storage Operator and configure a Multi-Cloud Object Gateway (MCG) storage bucket as a replication repository for the Migration Toolkit for Containers (MTC).
4.3.1.1. Installing the OpenShift Container Storage Operator
You can install the OpenShift Container Storage Operator from OperatorHub.
Procedure
- In the OpenShift Container Platform web console, click Operators → OperatorHub.
- Use Filter by keyword (in this case, OCS) to find the OpenShift Container Storage Operator.
- Select the OpenShift Container Storage Operator and click Install.
- Select an Update Channel, Installation Mode, and Approval Strategy.
- Click Install.

  On the Installed Operators page, the OpenShift Container Storage Operator appears in the openshift-storage project with the status Succeeded.
4.3.1.2. Creating the Multi-Cloud Object Gateway storage bucket
You can create the Multi-Cloud Object Gateway (MCG) storage bucket’s custom resources (CRs).
Procedure
- Log in to the OpenShift Container Platform cluster:

  $ oc login

- Create the NooBaa CR configuration file, noobaa.yml.
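A minimal noobaa.yml sketch follows; the resource requests are illustrative values that you should size for your cluster:

```yaml
apiVersion: noobaa.io/v1alpha1
kind: NooBaa
metadata:
  name: noobaa
  namespace: openshift-storage
spec:
  dbResources:
    requests:
      cpu: 0.5
      memory: 1Gi
  coreResources:
    requests:
      cpu: 0.5
      memory: 1Gi
```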
- Create the NooBaa object:

  $ oc create -f noobaa.yml

- Create the BackingStore CR configuration file, bs.yml.
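A bs.yml sketch follows; the name, volume count, volume size, and storage class are assumptions to adapt to your cluster:

```yaml
apiVersion: noobaa.io/v1alpha1
kind: BackingStore
metadata:
  name: mcg-pv-pool-bs
  namespace: openshift-storage
spec:
  type: pv-pool
  pvPool:
    numVolumes: 3
    resources:
      requests:
        storage: 50Gi
    storageClass: gp2
```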
- Create the BackingStore object:

  $ oc create -f bs.yml

- Create the BucketClass CR configuration file, bc.yml.
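A bc.yml sketch follows; the names are assumptions, and the backing store reference must match the name in your BackingStore CR:

```yaml
apiVersion: noobaa.io/v1alpha1
kind: BucketClass
metadata:
  name: mcg-pv-pool-bc
  namespace: openshift-storage
spec:
  placementPolicy:
    tiers:
    - backingStores:
      - mcg-pv-pool-bs
      placement: Spread
```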
- Create the BucketClass object:

  $ oc create -f bc.yml

- Create the ObjectBucketClaim CR configuration file, obc.yml. Record the bucket name in the manifest for adding the replication repository to the MTC web console.
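An obc.yml sketch follows; the claim name migstorage matches the verification command later in this procedure, and the bucket class reference is an assumption that must match your BucketClass CR:

```yaml
apiVersion: objectbucket.io/v1alpha1
kind: ObjectBucketClaim
metadata:
  name: migstorage
  namespace: openshift-storage
spec:
  bucketName: migstorage
  storageClassName: openshift-storage.noobaa.io
  additionalConfig:
    bucketclass: mcg-pv-pool-bc
```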
- Create the ObjectBucketClaim object:

  $ oc create -f obc.yml

- Watch the resource creation process to verify that the ObjectBucketClaim status is Bound:

  $ watch -n 30 'oc get -n openshift-storage objectbucketclaim migstorage -o yaml'

  This process can take five to ten minutes.

- Obtain and record the following values, which are required when you add the replication repository to the MTC web console:

  S3 endpoint:

  $ oc get route -n openshift-storage s3

  S3 provider access key:

  $ oc get secret -n openshift-storage migstorage \
      -o go-template='{{ .data.AWS_ACCESS_KEY_ID }}' | base64 --decode

  S3 provider secret access key:

  $ oc get secret -n openshift-storage migstorage \
      -o go-template='{{ .data.AWS_SECRET_ACCESS_KEY }}' | base64 --decode
4.3.2. Configuring Amazon Web Services S3
You can configure an Amazon Web Services (AWS) S3 storage bucket as a replication repository for the Migration Toolkit for Containers (MTC).
Prerequisites
- The AWS S3 storage bucket must be accessible to the source and target clusters.
- You must have the AWS CLI installed.
If you are using the snapshot copy method:
- You must have access to EC2 Elastic Block Storage (EBS).
- The source and target clusters must be in the same region.
- The source and target clusters must have the same storage class.
- The storage class must be compatible with snapshots.
Procedure
- Create an AWS S3 bucket:

  $ aws s3api create-bucket \
      --bucket <bucket> \
      --region <bucket_region>

  Specify your bucket name and bucket region.

- Create the IAM user velero:

  $ aws iam create-user --user-name velero
- Create an EC2 EBS snapshot policy file, velero-ec2-snapshot-policy.json:
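A velero-ec2-snapshot-policy.json sketch follows, based on the snapshot permissions the Velero AWS plugin documents; verify the action list against your Velero version:

```json
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "ec2:DescribeVolumes",
                "ec2:DescribeSnapshots",
                "ec2:CreateTags",
                "ec2:CreateVolume",
                "ec2:CreateSnapshot",
                "ec2:DeleteSnapshot"
            ],
            "Resource": "*"
        }
    ]
}
```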
- Create an AWS S3 access policy file, velero-s3-policy.json, for one or for all S3 buckets:
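A velero-s3-policy.json sketch for a single bucket follows, based on the S3 permissions the Velero AWS plugin documents; replace <bucket> with your bucket name, or use arn:aws:s3:::* to cover all S3 buckets:

```json
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "s3:GetObject",
                "s3:DeleteObject",
                "s3:PutObject",
                "s3:AbortMultipartUpload",
                "s3:ListMultipartUploadParts"
            ],
            "Resource": [
                "arn:aws:s3:::<bucket>/*"
            ]
        },
        {
            "Effect": "Allow",
            "Action": [
                "s3:ListBucket",
                "s3:GetBucketLocation",
                "s3:ListBucketMultipartUploads"
            ],
            "Resource": [
                "arn:aws:s3:::<bucket>"
            ]
        }
    ]
}
```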
  For all S3 buckets, use the following Resource value:

  "Resource": [
      "arn:aws:s3:::*"
  ]
- Attach the EC2 EBS policy to velero:

  $ aws iam put-user-policy \
      --user-name velero \
      --policy-name velero-ebs \
      --policy-document file://velero-ec2-snapshot-policy.json

- Attach the AWS S3 policy to velero:

  $ aws iam put-user-policy \
      --user-name velero \
      --policy-name velero-s3 \
      --policy-document file://velero-s3-policy.json

- Create an access key for velero:

  $ aws iam create-access-key --user-name velero
4.3.3. Configuring Google Cloud Platform
You can configure a Google Cloud Platform (GCP) storage bucket as a replication repository for the Migration Toolkit for Containers (MTC).
Prerequisites
- The GCP storage bucket must be accessible to the source and target clusters.
- You must have gsutil installed.

If you are using the snapshot copy method:
- The source and target clusters must be in the same region.
- The source and target clusters must have the same storage class.
- The storage class must be compatible with snapshots.
Procedure
- Log in to gsutil:

  $ gsutil init

  Example output:

  Welcome! This command will take you through the configuration of gcloud.
  Your current configuration has been set to: [default]
  To continue, you must login. Would you like to login (Y/n)?

- Set the BUCKET variable:

  $ BUCKET=<bucket>

  Specify your bucket name.

- Create a storage bucket:

  $ gsutil mb gs://$BUCKET/

- Set the PROJECT_ID variable to your active project:

  $ PROJECT_ID=`gcloud config get-value project`

- Create a velero IAM service account:

  $ gcloud iam service-accounts create velero \
      --display-name "Velero Storage"

- Create the SERVICE_ACCOUNT_EMAIL variable:

  $ SERVICE_ACCOUNT_EMAIL=`gcloud iam service-accounts list \
      --filter="displayName:Velero Storage" \
      --format 'value(email)'`

- Create the ROLE_PERMISSIONS variable.
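A sketch of the ROLE_PERMISSIONS definition follows; the permission list is an assumption drawn from the Velero GCP plugin documentation and should be verified against your Velero version:

```shell
# Bash array of GCP permissions granted to the custom Velero role
ROLE_PERMISSIONS=(
    compute.disks.get
    compute.disks.create
    compute.disks.createSnapshot
    compute.snapshots.get
    compute.snapshots.create
    compute.snapshots.useReadOnly
    compute.snapshots.delete
    compute.zones.get
)
```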
- Create the velero.server custom role:

  $ gcloud iam roles create velero.server \
      --project $PROJECT_ID \
      --title "Velero Server" \
      --permissions "$(IFS=","; echo "${ROLE_PERMISSIONS[*]}")"

- Add the IAM policy binding to the project:

  $ gcloud projects add-iam-policy-binding $PROJECT_ID \
      --member serviceAccount:$SERVICE_ACCOUNT_EMAIL \
      --role projects/$PROJECT_ID/roles/velero.server

- Update the IAM service account:

  $ gsutil iam ch serviceAccount:$SERVICE_ACCOUNT_EMAIL:objectAdmin gs://${BUCKET}

- Save the IAM service account keys to the credentials-velero file in the current directory:

  $ gcloud iam service-accounts keys create credentials-velero \
      --iam-account $SERVICE_ACCOUNT_EMAIL
4.3.4. Configuring Microsoft Azure Blob
You can configure a Microsoft Azure Blob storage container as a replication repository for the Migration Toolkit for Containers (MTC).
Prerequisites
- You must have an Azure storage account.
- You must have the Azure CLI installed.
- The Azure Blob storage container must be accessible to the source and target clusters.
If you are using the snapshot copy method:
- The source and target clusters must be in the same region.
- The source and target clusters must have the same storage class.
- The storage class must be compatible with snapshots.
Procedure
- Set the AZURE_RESOURCE_GROUP variable:

  $ AZURE_RESOURCE_GROUP=Velero_Backups

- Create an Azure resource group:

  $ az group create -n $AZURE_RESOURCE_GROUP --location CentralUS

  Specify your location.

- Set the AZURE_STORAGE_ACCOUNT_ID variable:

  $ AZURE_STORAGE_ACCOUNT_ID=velerobackups

- Create an Azure storage account:

  $ az storage account create \
      --name $AZURE_STORAGE_ACCOUNT_ID \
      --resource-group $AZURE_RESOURCE_GROUP \
      --sku Standard_GRS \
      --encryption-services blob \
      --https-only true \
      --kind BlobStorage \
      --access-tier Hot

- Set the BLOB_CONTAINER variable:

  $ BLOB_CONTAINER=velero

- Create an Azure Blob storage container:

  $ az storage container create \
      -n $BLOB_CONTAINER \
      --public-access off \
      --account-name $AZURE_STORAGE_ACCOUNT_ID

- Create a service principal and credentials for velero:

  $ AZURE_SUBSCRIPTION_ID=`az account list --query '[?isDefault].id' -o tsv` \
    AZURE_TENANT_ID=`az account list --query '[?isDefault].tenantId' -o tsv` \
    AZURE_CLIENT_SECRET=`az ad sp create-for-rbac --name "velero" --role "Contributor" --query 'password' -o tsv` \
    AZURE_CLIENT_ID=`az ad sp list --display-name "velero" --query '[0].appId' -o tsv`

- Save the service principal credentials in the credentials-velero file.
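The file layout below is a sketch based on the credentials format the Velero Azure plugin documents; AZURE_CLOUD_NAME=AzurePublicCloud is an assumption to adjust for sovereign clouds:

```shell
# Write the service principal credentials, set in the previous steps,
# to the credentials-velero file in the current directory.
cat << EOF > ./credentials-velero
AZURE_SUBSCRIPTION_ID=${AZURE_SUBSCRIPTION_ID}
AZURE_TENANT_ID=${AZURE_TENANT_ID}
AZURE_CLIENT_ID=${AZURE_CLIENT_ID}
AZURE_CLIENT_SECRET=${AZURE_CLIENT_SECRET}
AZURE_RESOURCE_GROUP=${AZURE_RESOURCE_GROUP}
AZURE_CLOUD_NAME=AzurePublicCloud
EOF
```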