Chapter 4. Installing the Migration Toolkit for Containers in a restricted network environment
You can install the Migration Toolkit for Containers (MTC) on OpenShift Container Platform 4 in a restricted network environment by performing the following procedures:
- Create a mirrored Operator catalog.
This process creates a mapping.txt file, which contains the mapping between the registry.redhat.io image and your mirror registry image. The mapping.txt file is required for installing the legacy Migration Toolkit for Containers Operator on an OpenShift Container Platform 4.2 to 4.5 source cluster.
- Install the Migration Toolkit for Containers Operator on the OpenShift Container Platform 4.8 target cluster by using Operator Lifecycle Manager.
By default, the MTC web console and the MigrationController pod run on the target cluster. You can configure the MigrationController custom resource manifest to run the MTC web console and the MigrationController pod on a remote cluster.
- Install the Migration Toolkit for Containers Operator on the source cluster:
- OpenShift Container Platform 4.6 or later: Install the Migration Toolkit for Containers Operator by using Operator Lifecycle Manager.
- OpenShift Container Platform 4.2 to 4.5: Install the legacy Migration Toolkit for Containers Operator from the command line interface.
- Configure object storage to use as a replication repository.
To install MTC on OpenShift Container Platform 3, see Installing the legacy Migration Toolkit for Containers Operator on OpenShift Container Platform 3.
To uninstall MTC, see Uninstalling MTC and deleting resources.
4.1. Compatibility guidelines
You must install the Migration Toolkit for Containers (MTC) Operator that is compatible with your OpenShift Container Platform version.
Definitions
- legacy platform
- OpenShift Container Platform 4.5 and earlier.
- modern platform
- OpenShift Container Platform 4.6 and later.
- legacy operator
- The MTC Operator designed for legacy platforms.
- modern operator
- The MTC Operator designed for modern platforms.
- control cluster
- The cluster that runs the MTC controller and GUI.
- remote cluster
- A source or destination cluster for a migration that runs Velero. The control cluster communicates with remote clusters via the Velero API to drive migrations.
|  | OpenShift Container Platform 4.5 or earlier | OpenShift Container Platform 4.6 or later |
|---|---|---|
| Stable MTC version | MTC 1.7.z. Legacy 1.7 operator: install manually with the operator.yml file. Important: This cluster cannot be the control cluster. | MTC 1.7.z. Install with OLM, release channel release-v1.7. |
Edge cases exist in which network restrictions prevent modern clusters from connecting to other clusters involved in the migration. For example, when migrating from an OpenShift Container Platform 3.11 cluster on premises to a modern OpenShift Container Platform cluster in the cloud, the modern cluster might not be able to connect to the OpenShift Container Platform 3.11 cluster.
With MTC 1.7, if one of the remote clusters is unable to communicate with the control cluster because of network restrictions, use the crane tunnel-api command.
With the stable MTC release, although you should always designate the most modern cluster as the control cluster, in this specific case it is possible to designate the legacy cluster as the control cluster and push workloads to the remote cluster.
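As an illustrative sketch, the crane tunnel-api invocation takes the kubeconfig contexts of the two clusters; the exact flags depend on your crane version, and the context and namespace names below are placeholders:

```
$ crane tunnel-api [--namespace <namespace>] \
      --destination-context <destination-cluster> \
      --source-context <source-cluster>
```

The command sets up a tunnel so that the control cluster can reach the remote cluster's API even when a direct connection is blocked.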
4.2. Installing the Migration Toolkit for Containers Operator on OpenShift Container Platform 4.8
You install the Migration Toolkit for Containers Operator on OpenShift Container Platform 4.8 by using the Operator Lifecycle Manager.
Prerequisites
- You must be logged in as a user with cluster-admin privileges on all clusters.
- You must create an Operator catalog from a mirror image in a local registry.
Procedure
- In the OpenShift Container Platform web console, click Operators → OperatorHub.
- Use the Filter by keyword field to find the Migration Toolkit for Containers Operator.
- Select the Migration Toolkit for Containers Operator and click Install.
Click Install.
On the Installed Operators page, the Migration Toolkit for Containers Operator appears in the openshift-migration project with the status Succeeded.
- Click Migration Toolkit for Containers Operator.
- Under Provided APIs, locate the Migration Controller tile, and click Create Instance.
- Click Create.
- Click Workloads → Pods to verify that the MTC pods are running.
4.3. Installing the legacy Migration Toolkit for Containers Operator on OpenShift Container Platform 4.2 to 4.5
You can install the legacy Migration Toolkit for Containers Operator manually on OpenShift Container Platform versions 4.2 to 4.5.
Prerequisites
- You must be logged in as a user with cluster-admin privileges on all clusters.
- You must have access to registry.redhat.io.
- You must have podman installed.
- You must have a Linux workstation with network access in order to download files from registry.redhat.io.
- You must create a mirror image of the Operator catalog.
- You must install the Migration Toolkit for Containers Operator from the mirrored Operator catalog on OpenShift Container Platform 4.8.
Procedure
- Log in to registry.redhat.io with your Red Hat Customer Portal credentials:
  $ sudo podman login registry.redhat.io
- Download the operator.yml file by entering the following command:
  $ sudo podman cp $(sudo podman create \
    registry.redhat.io/rhmtc/openshift-migration-legacy-rhel8-operator:v1.7):/operator.yml ./
- Download the controller.yml file by entering the following command:
  $ sudo podman cp $(sudo podman create \
    registry.redhat.io/rhmtc/openshift-migration-legacy-rhel8-operator:v1.7):/controller.yml ./
- Obtain the Operator image mapping by running the following command:
  $ grep openshift-migration-legacy-rhel8-operator ./mapping.txt | grep rhmtc
  The mapping.txt file was created when you mirrored the Operator catalog. The output shows the mapping between the registry.redhat.io image and your mirror registry image.
  Example output
  registry.redhat.io/rhmtc/openshift-migration-legacy-rhel8-operator@sha256:468a6126f73b1ee12085ca53a312d1f96ef5a2ca03442bcb63724af5e2614e8a=<registry.apps.example.com>/rhmtc/openshift-migration-legacy-rhel8-operator
- Update the image values for the ansible and operator containers and the REGISTRY value in the operator.yml file:

  containers:
  - name: ansible
    image: <registry.apps.example.com>/rhmtc/openshift-migration-legacy-rhel8-operator@sha256:<468a6126f73b1ee12085ca53a312d1f96ef5a2ca03442bcb63724af5e2614e8a>
  ...
  - name: operator
    image: <registry.apps.example.com>/rhmtc/openshift-migration-legacy-rhel8-operator@sha256:<468a6126f73b1ee12085ca53a312d1f96ef5a2ca03442bcb63724af5e2614e8a>
  ...
    env:
    - name: REGISTRY
      value: <registry.apps.example.com>
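If you script the operator.yml update, each mapping.txt line splits on the = sign into the source digest reference and the mirror reference. A minimal shell sketch, using a sample line with a shortened, hypothetical digest and registry host:

```shell
# A mapping.txt-style line: <source image>=<mirror image> (values are placeholders)
line='registry.redhat.io/rhmtc/openshift-migration-legacy-rhel8-operator@sha256:468a61=registry.apps.example.com/rhmtc/openshift-migration-legacy-rhel8-operator'

src="${line%%=*}"     # everything before the first '=': the registry.redhat.io digest reference
mirror="${line#*=}"   # everything after the first '=': the mirror registry reference

echo "$mirror"
```

The mirror value is what you substitute into the image fields of operator.yml.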
Create the Migration Toolkit for Containers Operator object:
$ oc create -f operator.yml
Example output
namespace/openshift-migration created
rolebinding.rbac.authorization.k8s.io/system:deployers created
serviceaccount/migration-operator created
customresourcedefinition.apiextensions.k8s.io/migrationcontrollers.migration.openshift.io created
role.rbac.authorization.k8s.io/migration-operator created
rolebinding.rbac.authorization.k8s.io/migration-operator created
clusterrolebinding.rbac.authorization.k8s.io/migration-operator created
deployment.apps/migration-operator created
Error from server (AlreadyExists): error when creating "./operator.yml": rolebindings.rbac.authorization.k8s.io "system:image-builders" already exists 1
Error from server (AlreadyExists): error when creating "./operator.yml": rolebindings.rbac.authorization.k8s.io "system:image-pullers" already exists
1 You can ignore Error from server (AlreadyExists) messages. They are caused by the Migration Toolkit for Containers Operator creating resources for earlier versions of OpenShift Container Platform 4 that are provided in later releases.
- Create the MigrationController object:
  $ oc create -f controller.yml
- Verify that the MTC pods are running:
$ oc get pods -n openshift-migration
4.4. Proxy configuration
For OpenShift Container Platform 4.1 and earlier versions, you must configure proxies in the MigrationController custom resource (CR) manifest because these versions do not support a cluster-wide proxy object.
For OpenShift Container Platform 4.2 to 4.8, the Migration Toolkit for Containers (MTC) inherits the cluster-wide proxy settings. You can change the proxy parameters if you want to override the cluster-wide proxy settings.
4.4.1. Direct volume migration
Direct Volume Migration (DVM) was introduced in MTC 1.4.2. DVM supports only one proxy. The source cluster cannot access the route of the target cluster if the target cluster is also behind a proxy.
If you want to perform a DVM from a source cluster behind a proxy, you must configure a TCP proxy that works at the transport layer and forwards the SSL connections transparently without decrypting and re-encrypting them with their own SSL certificates. A Stunnel proxy is an example of such a proxy.
4.4.1.1. TCP proxy setup for DVM
You can set up a direct connection between the source and the target cluster through a TCP proxy and configure the stunnel_tcp_proxy variable in the MigrationController CR to use the proxy:
apiVersion: migration.openshift.io/v1alpha1
kind: MigrationController
metadata:
  name: migration-controller
  namespace: openshift-migration
spec:
  [...]
  stunnel_tcp_proxy: http://username:password@ip:port
Direct volume migration (DVM) supports only basic authentication for the proxy. Moreover, DVM works only from behind proxies that can tunnel a TCP connection transparently. HTTP/HTTPS proxies in man-in-the-middle mode do not work. The existing cluster-wide proxies might not support this behavior. As a result, the proxy settings for DVM are intentionally kept different from the usual proxy configuration in MTC.
4.4.1.2. Why use a TCP proxy instead of an HTTP/HTTPS proxy?
You can enable DVM by running Rsync between the source and the target cluster over an OpenShift route. Traffic is encrypted using Stunnel, a TCP proxy. The Stunnel running on the source cluster initiates a TLS connection with the target Stunnel and transfers data over an encrypted channel.
Cluster-wide HTTP/HTTPS proxies in OpenShift are usually configured in man-in-the-middle mode where they negotiate their own TLS session with the outside servers. However, this does not work with Stunnel. Stunnel requires that its TLS session be untouched by the proxy, essentially making the proxy a transparent tunnel which simply forwards the TCP connection as-is. Therefore, you must use a TCP proxy.
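For illustration, a client-side stunnel service section that forwards a TCP connection through a CONNECT-capable proxy without terminating TLS might look like the following. This is a sketch only, not the configuration that MTC generates, and all hostnames and ports are hypothetical:

```ini
; Hypothetical stunnel client section: the proxy only tunnels bytes;
; the TLS session terminates at the remote stunnel, not at the proxy.
[rsync-tunnel]
client = yes
accept = 127.0.0.1:1443                           ; local listener for Rsync traffic
connect = tcp-proxy.example.com:3128              ; the intermediate TCP proxy
protocol = connect                                ; ask the proxy for a raw tunnel
protocolHost = target-route.apps.example.com:443  ; final destination behind the proxy
```

The key point is that the proxy never sees a decrypted stream; it forwards the TCP connection as-is after the CONNECT handshake.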
4.4.1.3. Known issue
Migration fails with error Upgrade request required
The migration controller uses the SPDY protocol to execute commands within remote pods. If the remote cluster is behind a proxy or a firewall that does not support the SPDY protocol, the migration controller fails to execute remote commands and the migration fails with the error message Upgrade request required.

Workaround: Use a proxy that supports the SPDY protocol.

In addition to supporting the SPDY protocol, the proxy or firewall also must pass the Upgrade HTTP header to the API server. If the Upgrade header is blocked by the proxy or firewall, the migration fails with the error message Upgrade request required.

Workaround: Ensure that the proxy forwards the Upgrade header.
4.4.2. Tuning network policies for migrations
OpenShift supports restricting traffic to or from pods using NetworkPolicy or EgressFirewalls based on the network plugin used by the cluster. If any of the source namespaces involved in a migration use such mechanisms to restrict network traffic to pods, the restrictions might inadvertently stop traffic to Rsync pods during migration.
Rsync pods running on both the source and the target clusters must connect to each other over an OpenShift Route. Existing NetworkPolicy or EgressNetworkPolicy objects can be configured to automatically exempt Rsync pods from these traffic restrictions.
4.4.2.1. NetworkPolicy configuration
4.4.2.1.1. Egress traffic from Rsync pods
You can use the unique labels of Rsync pods to allow egress traffic to pass from them if the NetworkPolicy configuration in a source or destination namespace blocks this type of traffic:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-all-egress-from-rsync-pods
spec:
  podSelector:
    matchLabels:
      owner: directvolumemigration
      app: directvolumemigration-rsync-transfer
  egress:
  - {}
  policyTypes:
  - Egress
4.4.2.1.2. Ingress traffic to Rsync pods
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-all-ingress-to-rsync-pods
spec:
  podSelector:
    matchLabels:
      owner: directvolumemigration
      app: directvolumemigration-rsync-transfer
  ingress:
  - {}
  policyTypes:
  - Ingress
4.4.2.2. EgressNetworkPolicy configuration
The EgressNetworkPolicy object, or Egress Firewall, is an OpenShift construct designed to block egress traffic leaving the cluster.

Unlike the NetworkPolicy object, the Egress Firewall works at a project level because it applies to all pods in the namespace. Therefore, the unique labels of Rsync pods do not exempt only Rsync pods from the restrictions.

Based on which cluster the Egress Firewall is present in, you can add the CIDR range of the other cluster to allow egress traffic between the two:
apiVersion: network.openshift.io/v1
kind: EgressNetworkPolicy
metadata:
  name: test-egress-policy
  namespace: <namespace>
spec:
  egress:
  - to:
      cidrSelector: <cidr_of_source_or_target_cluster>
    type: Allow
4.4.2.3. Configuring supplemental groups for Rsync pods
When your PVCs use shared storage, you can configure access to that storage by adding supplemental groups to the Rsync pod definitions so that the pods can access the storage:
| Variable | Type | Default | Description |
|---|---|---|---|
| src_supplemental_groups | string | Not set | Comma-separated list of supplemental groups for source Rsync pods |
| target_supplemental_groups | string | Not set | Comma-separated list of supplemental groups for target Rsync pods |
Example usage
The MigrationController CR can be updated to set values for these two supplemental groups:

spec:
  src_supplemental_groups: "1000,2000"
  target_supplemental_groups: "2000,3000"
4.4.3. Configuring proxies
Prerequisites
- You must be logged in as a user with cluster-admin privileges on all clusters.
Procedure
- Get the MigrationController CR manifest:
  $ oc get migrationcontroller <migration_controller> -n openshift-migration
- Update the proxy parameters:

  apiVersion: migration.openshift.io/v1alpha1
  kind: MigrationController
  metadata:
    name: <migration_controller>
    namespace: openshift-migration
  ...
  spec:
    stunnel_tcp_proxy: http://<username>:<password>@<ip>:<port>
    noProxy: example.com

  For the noProxy field: preface a domain with . to match subdomains only. For example, .y.com matches x.y.com, but not y.com. Use * to bypass the proxy for all destinations. If you scale up workers that are not included in the network defined by the networking.machineNetwork[].cidr field from the installation configuration, you must add them to this list to prevent connection issues. This field is ignored if neither the httpProxy nor the httpsProxy field is set.
Save the manifest as .
migration-controller.yaml Apply the updated manifest:
$ oc replace -f migration-controller.yaml -n openshift-migration
For more information, see Configuring the cluster-wide proxy.
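The subdomain-matching rule for noProxy entries can be modeled in a few lines of shell. This is an illustrative sketch of the rule described above, not MTC's implementation:

```shell
# Model of the noProxy matching rule: a leading '.' matches subdomains only,
# '*' bypasses the proxy for every destination, anything else matches exactly.
bypasses_proxy() {
  host=$1
  entry=$2
  case $entry in
    '*')
      return 0 ;;                      # '*' bypasses the proxy for all destinations
    .*)
      case $host in
        *"$entry") return 0 ;;         # '.y.com' matches x.y.com ...
      esac
      return 1 ;;                      # ... but not y.com itself
    *)
      [ "$host" = "$entry" ] ;;        # plain entries match exactly
  esac
}

bypasses_proxy x.y.com .y.com && echo "x.y.com bypasses the proxy"
bypasses_proxy y.com   .y.com || echo "y.com does not bypass the proxy"
```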
4.5. Configuring a replication repository
You must configure object storage to use as a replication repository. The Migration Toolkit for Containers (MTC) copies data from the source cluster to the replication repository, and then from the replication repository to the target cluster. Multi-Cloud Object Gateway (MCG) is the only supported option for a restricted network environment.
MTC supports the file system and snapshot data copy methods for migrating data from the source cluster to the target cluster. You can select a method that is suited for your environment and is supported by your storage provider.
4.5.1. Prerequisites
- All clusters must have uninterrupted network access to the replication repository.
- If you use a proxy server with an internally hosted replication repository, you must ensure that the proxy allows access to the replication repository.
4.5.2. Configuring Multi-Cloud Object Gateway
You can install the OpenShift Container Storage Operator and configure a Multi-Cloud Object Gateway (MCG) storage bucket as a replication repository for the Migration Toolkit for Containers (MTC).
4.5.2.1. Installing the OpenShift Container Storage Operator
You can install the OpenShift Container Storage Operator from OperatorHub.
Procedure
- In the OpenShift Container Platform web console, click Operators → OperatorHub.
- Use Filter by keyword (in this case, OCS) to find the OpenShift Container Storage Operator.
- Select the OpenShift Container Storage Operator and click Install.
- Select an Update Channel, Installation Mode, and Approval Strategy.
Click Install.
On the Installed Operators page, the OpenShift Container Storage Operator appears in the openshift-storage project with the status Succeeded.
4.5.2.2. Creating the Multi-Cloud Object Gateway storage bucket
You can create the custom resources (CRs) for the Multi-Cloud Object Gateway (MCG) storage bucket.
Procedure
- Log in to the OpenShift Container Platform cluster:
  $ oc login -u <username>
- Create the NooBaa CR configuration file, noobaa.yml, with the following content:

  apiVersion: noobaa.io/v1alpha1
  kind: NooBaa
  metadata:
    name: <noobaa>
    namespace: openshift-storage
  spec:
    dbResources:
      requests:
        cpu: 0.5
        memory: 1Gi
    coreResources:
      requests:
        cpu: 0.5
        memory: 1Gi

- Create the NooBaa object:
  $ oc create -f noobaa.yml
- Create the BackingStore CR configuration file, bs.yml, with the following content:

  apiVersion: noobaa.io/v1alpha1
  kind: BackingStore
  metadata:
    finalizers:
    - noobaa.io/finalizer
    labels:
      app: noobaa
    name: <mcg_backing_store>
    namespace: openshift-storage
  spec:
    pvPool:
      numVolumes: 3
      resources:
        requests:
          storage: <volume_size>
      storageClass: <storage_class>
    type: pv-pool

- Create the BackingStore object:
  $ oc create -f bs.yml
- Create the BucketClass CR configuration file, bc.yml, with the following content:

  apiVersion: noobaa.io/v1alpha1
  kind: BucketClass
  metadata:
    labels:
      app: noobaa
    name: <mcg_bucket_class>
    namespace: openshift-storage
  spec:
    placementPolicy:
      tiers:
      - backingStores:
        - <mcg_backing_store>
        placement: Spread

- Create the BucketClass object:
  $ oc create -f bc.yml
- Create the ObjectBucketClaim CR configuration file, obc.yml, with the following content:

  apiVersion: objectbucket.io/v1alpha1
  kind: ObjectBucketClaim
  metadata:
    name: <bucket>
    namespace: openshift-storage
  spec:
    bucketName: <bucket> 1
    storageClassName: <storage_class>
    additionalConfig:
      bucketclass: <mcg_bucket_class>

  1 Record the bucket name for adding the replication repository to the MTC web console.

- Create the ObjectBucketClaim object:
  $ oc create -f obc.yml
- Watch the resource creation process to verify that the ObjectBucketClaim status is Bound:
  $ watch -n 30 'oc get -n openshift-storage objectbucketclaim migstorage -o yaml'
  This process can take five to ten minutes.
Obtain and record the following values, which are required when you add the replication repository to the MTC web console:
- S3 endpoint:
  $ oc get route -n openshift-storage s3
- S3 provider access key:
  $ oc get secret -n openshift-storage migstorage \
    -o go-template='{{ .data.AWS_ACCESS_KEY_ID }}' | base64 --decode
- S3 provider secret access key:
  $ oc get secret -n openshift-storage migstorage \
    -o go-template='{{ .data.AWS_SECRET_ACCESS_KEY }}' | base64 --decode
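The secret values are stored base64-encoded in the Secret object, which is why the commands above pipe through base64 --decode. The round trip can be checked locally with a made-up key (the value below is a well-known placeholder, not a real credential):

```shell
# Encode and decode a hypothetical access key to show the round trip
encoded=$(printf 'AKIAIOSFODNN7EXAMPLE' | base64)
decoded=$(printf '%s' "$encoded" | base64 --decode)
echo "$decoded"
```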
4.6. Uninstalling MTC and deleting resources
You can uninstall the Migration Toolkit for Containers (MTC) and delete its resources to clean up the cluster.
Deleting the velero CRDs removes Velero from the cluster.
Prerequisites
- You must be logged in as a user with cluster-admin privileges.
Procedure
- Delete the MigrationController custom resource (CR) on all clusters:
  $ oc delete migrationcontroller <migration_controller>
- Uninstall the Migration Toolkit for Containers Operator on OpenShift Container Platform 4 by using the Operator Lifecycle Manager.
- Delete cluster-scoped resources on all clusters by running the following commands:
  - migration custom resource definitions (CRDs):
    $ oc delete $(oc get crds -o name | grep 'migration.openshift.io')
  - velero CRDs:
    $ oc delete $(oc get crds -o name | grep 'velero')
  - migration cluster roles:
    $ oc delete $(oc get clusterroles -o name | grep 'migration.openshift.io')
  - migration-operator cluster role:
    $ oc delete clusterrole migration-operator
  - velero cluster roles:
    $ oc delete $(oc get clusterroles -o name | grep 'velero')
  - migration cluster role bindings:
    $ oc delete $(oc get clusterrolebindings -o name | grep 'migration.openshift.io')
  - migration-operator cluster role bindings:
    $ oc delete clusterrolebindings migration-operator
  - velero cluster role bindings:
    $ oc delete $(oc get clusterrolebindings -o name | grep 'velero')