Installing and using the Migration Toolkit for Virtualization
Migrating from VMware vSphere or Red Hat Virtualization to Red Hat OpenShift Virtualization
Abstract
Making open source more inclusive
Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright’s message.
Chapter 1. About the Migration Toolkit for Virtualization
You can migrate virtual machines from VMware vSphere or Red Hat Virtualization to OpenShift Virtualization with the Migration Toolkit for Virtualization (MTV).
Chapter 2. Prerequisites
Review the following prerequisites to ensure that your environment is prepared for migration.
2.1. Software compatibility guidelines
You must install compatible software versions.
| Migration Toolkit for Virtualization | OpenShift Container Platform | OpenShift Virtualization | VMware vSphere | Red Hat Virtualization |
| --- | --- | --- | --- | --- |
| 2.1 | 4.8 | 4.8.1 | 6.5 or later | 4.3 or later |
2.2. Storage support and default modes
MTV uses the following default volume and access modes for supported storage.
If the OpenShift Virtualization storage does not support dynamic provisioning, MTV applies the following default settings:

- Filesystem volume mode: Filesystem volume mode is slower than Block volume mode.
- ReadWriteOnce access mode: ReadWriteOnce access mode does not support live virtual machine migration.
| Provisioner | Volume mode | Access mode |
| --- | --- | --- |
| kubernetes.io/aws-ebs | Block | ReadWriteOnce |
| kubernetes.io/azure-disk | Block | ReadWriteOnce |
| kubernetes.io/azure-file | Filesystem | ReadWriteMany |
| kubernetes.io/cinder | Block | ReadWriteOnce |
| kubernetes.io/gce-pd | Block | ReadWriteOnce |
| kubernetes.io/hostpath-provisioner | Filesystem | ReadWriteOnce |
| manila.csi.openstack.org | Filesystem | ReadWriteMany |
| openshift-storage.cephfs.csi.ceph.com | Filesystem | ReadWriteMany |
| openshift-storage.rbd.csi.ceph.com | Block | ReadWriteOnce |
| kubernetes.io/rbd | Block | ReadWriteOnce |
| kubernetes.io/vsphere-volume | Block | ReadWriteOnce |
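For illustration, a disk provisioned through a provisioner whose defaults are Block volume mode and ReadWriteOnce access mode results in a claim along the lines of the following CDI DataVolume sketch. This is not the exact object that MTV creates; the name, namespace, size, and storage class are placeholders.

apiVersion: cdi.kubevirt.io/v1beta1
kind: DataVolume
metadata:
  name: <vm_name>-disk-0
  namespace: <target_namespace>
spec:
  source:
    blank: {}                # placeholder source; MTV populates the disk during migration
  pvc:
    storageClassName: <storage_class>
    volumeMode: Block        # default volume mode for this provisioner
    accessModes:
      - ReadWriteOnce        # default access mode; does not support live VM migration
    resources:
      requests:
        storage: 20Gi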
2.3. Network prerequisites
The following prerequisites apply to all migrations:
- IP addresses, VLANs, and other network configuration settings must not be changed before or after migration. The MAC addresses of the virtual machines are preserved during migration.
- The network connections between the source environment, the OpenShift Virtualization cluster, and the replication repository must be reliable and uninterrupted.
- If you are mapping more than one source and destination network, you must create a network attachment definition for each additional destination network (an example is shown after this list).
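A minimal network attachment definition for an additional destination network might look like the following sketch. The bridge CNI type and the bridge name (br1) are assumptions; adjust them to match the node network configuration.

apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: <destination_network>
  namespace: <target_namespace>
spec:
  config: |
    {
      "cniVersion": "0.3.1",
      "name": "<destination_network>",
      "type": "bridge",
      "bridge": "br1",
      "ipam": {}
    }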
2.3.1. Ports
The firewalls must enable traffic over the following ports:
Network ports required for migrating from VMware vSphere:

| Port | Protocol | Source | Destination | Purpose |
| --- | --- | --- | --- | --- |
| 443 | TCP | OpenShift nodes | VMware vCenter | VMware provider inventory. Disk transfer authentication. |
| 443 | TCP | OpenShift nodes | VMware ESXi hosts | Disk transfer authentication |
| 902 | TCP | OpenShift nodes | VMware ESXi hosts | Disk transfer data copy |

Network ports required for migrating from Red Hat Virtualization:

| Port | Protocol | Source | Destination | Purpose |
| --- | --- | --- | --- | --- |
| 443 | TCP | OpenShift nodes | RHV Engine | RHV provider inventory. Disk transfer authentication. |
| 443 | TCP | OpenShift nodes | RHV hosts | Disk transfer authentication |
| 54322 | TCP | OpenShift nodes | RHV hosts | Disk transfer data copy |
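To confirm that these ports are reachable from the cluster before you start a migration, you can run a quick check from a node debug shell. This is only a sketch: it assumes that curl and bash are available on the node, and the host names are placeholders.

$ oc debug node/<node_name> -- chroot /host curl -skI https://<vcenter_host>:443
$ oc debug node/<node_name> -- chroot /host bash -c 'echo > /dev/tcp/<esxi_host>/902 && echo "port 902 reachable"'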
2.4. Source virtual machine prerequisites
The following prerequisites apply to all migrations:
- ISO/CDROM disks must be unmounted.
- Each NIC must contain one IPv4 and/or one IPv6 address.
- The VM name must contain only lowercase letters (a-z), numbers (0-9), or hyphens (-), up to a maximum of 253 characters. The first and last characters must be alphanumeric. The name must not contain uppercase letters, spaces, periods (.), or special characters. A simple pattern check is shown after this list.
- The VM name must not duplicate the name of a VM in the OpenShift Virtualization environment.
- The VM operating system must be certified and supported for use as a guest operating system with OpenShift Virtualization and for conversion to KVM with virt-v2v.
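The naming rules above can be checked before migration with a simple pattern match. The regular expression below is derived from the rules in this list, not from MTV itself.

$ echo "<vm_name>" | grep -E '^[a-z0-9]([a-z0-9-]{0,251}[a-z0-9])?$' || echo "rename the VM before migration"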
2.5. Red Hat Virtualization prerequisites
The following prerequisites apply to Red Hat Virtualization migrations:
- You must have the CA certificate of the Manager. You can obtain the CA certificate by navigating to https://<www.example.com>/ovirt-engine/services/pki-resource?resource=ca-certificate&format=X509-PEM-CA in a browser.
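Equivalently, you can download the certificate from the command line, for example:

$ curl -k -o ca.pem 'https://<www.example.com>/ovirt-engine/services/pki-resource?resource=ca-certificate&format=X509-PEM-CA'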
2.6. VMware prerequisites
The following prerequisites apply to VMware migrations:
- You must install VMware Tools on all source virtual machines (VMs).
- If you are running a warm migration, you must enable changed block tracking (CBT) on the VMs and on the VM disks (see the example after this list).
- You must create a VMware Virtual Disk Development Kit (VDDK) image.
- You must obtain the SHA-1 fingerprint of the vCenter host.
- If you are migrating more than 10 VMs from an ESXi host in the same migration plan, you must increase the NFC service memory of the host.
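One way to satisfy the CBT prerequisite is to set the relevant advanced settings with govc, as sketched below. This assumes that govc is installed and that the VM is powered off when the settings are applied; the disk key (scsi0:0) is an example.

$ govc vm.change -vm <vm_name> -e ctkEnabled=true
$ govc vm.change -vm <vm_name> -e scsi0:0.ctkEnabled=true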
2.6.1. Creating a VDDK image
The Migration Toolkit for Virtualization (MTV) uses the VMware Virtual Disk Development Kit (VDDK) SDK to transfer virtual disks from VMware vSphere.
You must download the VMware Virtual Disk Development Kit (VDDK), build a VDDK image, and push the VDDK image to your image registry. Later, you will add the VDDK image to the HyperConverged
custom resource (CR).
Storing the VDDK image in a public registry might violate the VMware license terms.
Prerequisites
- OpenShift Container Platform image registry or a secure external registry.
- podman installed.
- If you are using an external registry, OpenShift Virtualization must be able to access it.
Procedure
Create and navigate to a temporary directory:
$ mkdir /tmp/<dir_name> && cd /tmp/<dir_name>
- In a browser, navigate to the VMware VDDK download page.
- Select the latest VDDK version and click Download.
- Save the VDDK archive file in the temporary directory.
Extract the VDDK archive:
$ tar -xzf VMware-vix-disklib-<version>.x86_64.tar.gz
Create a Dockerfile:
$ cat > Dockerfile <<EOF
FROM registry.access.redhat.com/ubi8/ubi-minimal
COPY vmware-vix-disklib-distrib /vmware-vix-disklib-distrib
RUN mkdir -p /opt
ENTRYPOINT ["cp", "-r", "/vmware-vix-disklib-distrib", "/opt"]
EOF
Build the VDDK image:
$ podman build . -t <registry_route_or_server_path>/vddk:<tag>
Push the VDDK image to the registry:
$ podman push <registry_route_or_server_path>/vddk:<tag>
- Ensure that the image is accessible to your OpenShift Virtualization environment.
2.6.2. Obtaining the SHA-1 fingerprint of a vCenter host
You must obtain the SHA-1 fingerprint of a vCenter host in order to create a Secret
CR.
Procedure
Run the following command:
$ openssl s_client \
    -connect <www.example.com>:443 \
    < /dev/null 2>/dev/null \
    | openssl x509 -fingerprint -noout -in /dev/stdin \
    | cut -d '=' -f 2

Specify the vCenter name in the -connect option.

Example output:

01:23:45:67:89:AB:CD:EF:01:23:45:67:89:AB:CD:EF:01:23:45:67
2.6.3. Increasing the NFC service memory of an ESXi host
If you are migrating more than 10 VMs from an ESXi host in the same migration plan, you must increase the NFC service memory of the host. Otherwise, the migration will fail because the NFC service memory is limited to 10 parallel connections.
Procedure
- Log in to the ESXi host as root.
Change the value of maxMemory to 1000000000 in /etc/vmware/hostd/config.xml:
...
<nfcsvc>
  <path>libnfcsvc.so</path>
  <enabled>true</enabled>
  <maxMemory>1000000000</maxMemory>
  <maxStreamMemory>10485760</maxStreamMemory>
</nfcsvc>
...
Restart hostd:
# /etc/init.d/hostd restart
You do not need to reboot the host.
Chapter 3. Installing the MTV Operator
You can install the MTV Operator by using the OpenShift Container Platform web console or the command line interface (CLI).
3.1. Installing the MTV Operator by using the OpenShift Container Platform web console
You can install the MTV Operator by using the OpenShift Container Platform web console.
Prerequisites
- OpenShift Container Platform 4.8 installed.
- OpenShift Virtualization Operator installed.
- You must be logged in as a user with cluster-admin permissions.
Procedure
- In the OpenShift Container Platform web console, click Operators → OperatorHub.
- Use the Filter by keyword field to search for mtv-operator.
- Click the MTV Operator and then click Install.
- On the Install Operator page, click Install.
- Click Operators → Installed Operators to verify that the MTV Operator appears in the openshift-mtv project with the status Succeeded.
- Click the MTV Operator.
- Under Provided APIs, locate the ForkliftController, and click Create Instance.
- Click Create.
- Click Workloads → Pods to verify that the MTV pods are running.
Obtaining the MTV web console URL
You can obtain the MTV web console URL by using the OpenShift Container Platform web console.
Prerequisites
- You must have the OpenShift Virtualization Operator installed.
- You must have the MTV Operator installed.
- You must be logged in as a user with cluster-admin privileges.
Procedure
- Log in to the OpenShift Container Platform web console.
- Click Networking → Routes.
- Select the openshift-mtv project in the Project: list.
- Click the URL for the forklift-ui service to open the login page for the MTV web console.
3.2. Installing the MTV Operator from the command line interface
You can install the MTV Operator from the command line interface (CLI).
Prerequisites
- OpenShift Container Platform 4.8 installed.
- OpenShift Virtualization Operator installed.
- You must be logged in as a user with cluster-admin permissions.
Procedure
Create the openshift-mtv project:
$ cat << EOF | oc apply -f -
apiVersion: project.openshift.io/v1
kind: Project
metadata:
  name: openshift-mtv
EOF
Create an OperatorGroup CR called migration:
$ cat << EOF | oc apply -f -
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: migration
  namespace: openshift-mtv
spec:
  targetNamespaces:
    - openshift-mtv
EOF
Create a Subscription CR for the Operator:
$ cat << EOF | oc apply -f -
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: mtv-operator
  namespace: openshift-mtv
spec:
  channel: release-v2.1.0
  installPlanApproval: Automatic
  name: mtv-operator
  source: redhat-operators
  sourceNamespace: openshift-marketplace
  startingCSV: "mtv-operator.2.1.0"
EOF
Create a ForkliftController CR:
$ cat << EOF | oc apply -f -
apiVersion: forklift.konveyor.io/v1beta1
kind: ForkliftController
metadata:
  name: forklift-controller
  namespace: openshift-mtv
spec:
  olm_managed: true
EOF
Verify that the MTV pods are running:
$ oc get pods -n openshift-mtv
Example output:
NAME                                   READY   STATUS    RESTARTS   AGE
forklift-controller-788bdb4c69-mw268   2/2     Running   0          2m
forklift-operator-6bf45b8d8-qps9v      1/1     Running   0          5m
forklift-ui-7cdf96d8f6-xnw5n           1/1     Running   0          2m
Obtaining the MTV web console URL
You can obtain the MTV web console URL from the command line.
Prerequisites
- You must have the OpenShift Virtualization Operator installed.
- You must have the MTV Operator installed.
- You must be logged in as a user with cluster-admin privileges.
Procedure
Obtain the MTV web console URL:
$ oc get route virt -n openshift-mtv \
    -o custom-columns=:.spec.host
Example output:
https://virt-openshift-mtv.apps.cluster.openshift.com.
- Launch a browser and navigate to the MTV web console.
Chapter 4. Migrating virtual machines by using the MTV web console
You can migrate virtual machines (VMs) to OpenShift Virtualization by using the MTV web console.
You must ensure that all prerequisites are met.
4.1. Adding providers
You can add providers by using the MTV web console.
4.1.1. Adding a VMware source provider
You can add a VMware source provider by using the MTV web console.
Prerequisites
- vCenter SHA-1 fingerprint.
- VMware Virtual Disk Development Kit (VDDK) image in a secure registry that is accessible to all clusters.
Procedure
Add the VDDK image to the HyperConverged CR:
$ cat << EOF | oc apply -f -
apiVersion: hco.kubevirt.io/v1beta1
kind: HyperConverged
metadata:
  name: kubevirt-hyperconverged
  namespace: openshift-cnv
spec:
  vddkInitImage: <registry_route_or_server_path>/vddk:<tag> # 1
EOF
1 Specify the VDDK image that you created.
- In the MTV web console, click Providers.
- Click Add provider.
- Select VMware from the Type list.
Fill in the following fields:
- Name: Name to display in the list of providers
- Hostname or IP address: vCenter host name or IP address
- Username: vCenter admin user, for example, administrator@vsphere.local
- Password: vCenter admin password
- SHA-1 fingerprint: vCenter SHA-1 fingerprint
Click Add to add and save the provider.
The source provider appears in the list of providers.
4.1.2. Adding a Red Hat Virtualization source provider
You can add a Red Hat Virtualization source provider by using the MTV web console.
Prerequisites
- CA certificate of the Manager.
Procedure
- In the MTV web console, click Providers.
- Click Add provider.
- Select Red Hat Virtualization from the Type list.
Fill in the following fields:
- Name: Name to display in the list of providers
- Hostname or IP address: Manager host name or IP address
- Username: Manager user
- Password: Manager password
- CA certificate: CA certificate of the Manager
Click Add to add and save the provider.
The source provider appears in the list of providers.
4.1.2.1. Selecting a migration network for a source provider
You can select a migration network in the MTV web console for a source provider to reduce risk to the source environment and to improve performance.
Using the default network for migration can result in poor performance because the network might not have sufficient bandwidth. This situation can have a negative effect on the source platform because the disk transfer operation might saturate the network.
Prerequisites
- The migration network must have sufficient throughput, with a minimum speed of 10 Gbps, for disk transfer.
- The migration network must be accessible to the OpenShift Virtualization nodes through the default gateway.
  Note: The source virtual disks are copied by a pod that is connected to the pod network of the target namespace.
- The migration network must have jumbo frames enabled.
Procedure
- In the MTV web console, click Providers.
- Click the Red Hat Virtualization or VMware tab.
- Click the host number in the Hosts column beside a provider to view a list of hosts.
- Select one or more hosts and click Select migration network.
Select a Network.
You can clear the selection by selecting the default network.
If your source provider is VMware, complete the following fields:
- ESXi host admin username: Specify the ESXi host admin user, for example, root.
- ESXi host admin password: Specify the ESXi host admin password.
If your source provider is Red Hat Virtualization, complete the following fields:
- Username: Specify the Manager user.
- Password: Specify the Manager password.
- Click Save.
Verify that the status of each host is Ready.
If a host status is not Ready, the host might be unreachable on the migration network or the credentials might be incorrect. You can modify the host configuration and save the changes.
4.1.3. Adding an OpenShift Virtualization provider
You can add an OpenShift Virtualization provider to the MTV web console in addition to the default OpenShift Virtualization provider, which is the provider where you installed MTV.
Prerequisites
- You must have an OpenShift Virtualization service account token with cluster-admin privileges (see the sketch after this list).
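One way to obtain such a token is sketched below. The service account name (mtv-target) is an example, and granting cluster-admin is broad, so adjust it to your security requirements; oc sa get-token applies to OpenShift Container Platform 4.8.

$ oc create serviceaccount mtv-target -n openshift-mtv
$ oc adm policy add-cluster-role-to-user cluster-admin -z mtv-target -n openshift-mtv
$ oc sa get-token mtv-target -n openshift-mtv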
Procedure
- In the MTV web console, click Providers.
- Click Add provider.
- Select OpenShift Virtualization from the Type list.
Complete the following fields:
- Cluster name: Specify the cluster name to display in the list of target providers.
- URL: Specify the API endpoint of the cluster.
- Service account token: Specify the cluster-admin service account token.
- Click Check connection to verify the credentials.
Click Add.
The provider appears in the list of providers.
4.1.3.1. Selecting a migration network for an OpenShift Virtualization provider
You can select a default migration network for an OpenShift Virtualization provider in the MTV web console to improve performance. The default migration network is used to transfer disks to the namespaces in which it is configured.
If you do not select a migration network, the default migration network is the pod
network, which might not be optimal for disk transfer.
You can override the default migration network of the provider by selecting a different network when you create a migration plan.
Procedure
- In the MTV web console, click Providers.
- Click the OpenShift Virtualization tab.
- Select a provider and click Select migration network.
- Select a network from the list of available networks and click Select.
- Click the network number in the Networks column beside the provider to verify that the selected network is the default migration network.
4.2. Creating a network mapping
You can create one or more network mappings by using the MTV web console to map source networks to OpenShift Virtualization networks.
Prerequisites
- Source and target providers added to the web console.
- If you map more than one source and target network, each additional OpenShift Virtualization network requires its own network attachment definition.
Procedure
- Click Mappings.
- Click the Network tab and then click Create mapping.
Complete the following fields:
- Name: Enter a name to display in the network mappings list.
- Source provider: Select a source provider.
- Target provider: Select a target provider.
- Source networks: Select a source network.
- Target namespaces/networks: Select a target network.
- Optional: Click Add to create additional network mappings or to map multiple source networks to a single target network.
- If you create an additional network mapping, select the network attachment definition as the target network.
Click Create.
The network mapping is displayed on the Network mappings screen.
4.3. Creating a storage mapping
You can create a storage mapping by using the MTV web console to map source data stores to OpenShift Virtualization storage classes.
Prerequisites
- Source and target providers added to the web console.
- Local and shared persistent storage that support VM migration.
Procedure
- Click Mappings.
- Click the Storage tab and then click Create mapping.
- Enter the Name of the storage mapping.
- Select a Source provider and a Target provider.
- If your source provider is VMware, select a Source datastore and a Target storage class.
- If your source provider is Red Hat Virtualization, select a Source storage domain and a Target storage class.
- Optional: Click Add to create additional storage mappings or to map multiple source data stores or storage domains to a single storage class.
Click Create.
The mapping is displayed on the Storage mappings page.
4.4. Creating a migration plan
You can create a migration plan by using the MTV web console.
A migration plan allows you to group virtual machines to be migrated together or with the same migration parameters, for example, a percentage of the members of a cluster or a complete application.
You can configure a hook to run an Ansible playbook or custom container image during a specified stage of the migration plan.
Prerequisites
- If MTV is not installed on the target cluster, you must add a target provider on the Providers page of the web console.
Procedure
- In the web console, click Migration plans and then click Create migration plan.
Complete the following fields:
- Plan name: Enter a migration plan name to display in the migration plan list.
- Plan description: Optional: Brief description of the migration plan.
- Source provider: Select a source provider.
- Target provider: Select a target provider.
- Target namespace: You can type to search for an existing target namespace or create a new namespace.
You can change the migration transfer network for this plan by clicking Select a different network, selecting a network from the list, and clicking Select.
If you defined a migration transfer network for the OpenShift Virtualization provider and if the network is in the target namespace, that network is the default network for all migration plans. Otherwise, the pod network is used.
- Click Next.
- Select options to filter the list of source VMs and click Next.
- Select the VMs to migrate and then click Next.
Select an existing network mapping or create a new network mapping.
To create a new network mapping:
- Select a target network for each source network.
- Optional: Select Save mapping to use again and enter a network mapping name.
- Click Next.
Select an existing storage mapping or create a new storage mapping.
To create a new storage mapping:
- Select a target storage class for each VMware data store or Red Hat Virtualization storage domain.
- Optional: Select Save mapping to use again and enter a storage mapping name.
- Click Next.
Select a migration type and click Next.
- Cold migration: The source VMs are stopped while the data is copied.
- Warm migration: The source VMs run while the data is copied incrementally. Later, you will run the cutover, which stops the VMs and copies the remaining VM data and metadata. Warm migration is not supported for Red Hat Virtualization.
Optional: You can create a migration hook to run an Ansible playbook before or after migration:
- Click Add hook.
- Select the step when the hook will run.
Select a hook definition:
- Ansible playbook: Browse to the Ansible playbook or paste it into the field.
- Custom container image: If you do not want to use the default hook-runner image, enter the image path: <registry_path>/<image_name>:<tag>.
  Note: The registry must be accessible to your OpenShift Container Platform cluster.
- Click Next.
Review your migration plan and click Finish.
The migration plan is saved in the migration plan list.
- Click the Options menu of the migration plan and select View details to verify the migration plan details.
4.5. Running a migration plan
You can run a migration plan and view its progress in the MTV web console.
Prerequisites
- Valid migration plan.
Procedure
Click Migration plans.
The Migration plans list displays the source and target providers, the number of virtual machines (VMs) being migrated, and the status of the plan.
Click Start beside a migration plan to start the migration.
Warm migration only:
- The precopy stage starts.
- Click Cutover to complete the migration.
Expand a migration plan to view the migration details.
The migration details screen displays the migration start and end time, the amount of data copied, and a progress pipeline for each VM being migrated.
- Expand a VM to view the migration steps, elapsed time of each step, and its state.
4.6. Canceling a migration
You can cancel the migration of some or all virtual machines (VMs) while a migration plan is in progress by using the MTV web console.
Procedure
- Click Migration Plans.
- Click the name of a running migration plan to view the migration details.
- Select one or more VMs and click Cancel.
Click Yes, cancel to confirm the cancellation.
In the Migration details by VM list, the status of the canceled VMs is Canceled. The unmigrated and the migrated virtual machines are not affected.
You can restart a canceled migration by clicking Restart beside the migration plan on the Migration plans page.
Chapter 5. Migrating virtual machines from the command line interface
You can migrate virtual machines (VMs) to OpenShift Virtualization from the command line (CLI).
You must ensure that all prerequisites are met.
5.1. Migrating virtual machines
You migrate virtual machines (VMs) from the command line (CLI) by creating MTV custom resources (CRs).
You must specify a name for cluster-scoped CRs.
You must specify both a name and a namespace for namespace-scoped CRs.
Prerequisites
- You must be logged in as a user with cluster-admin privileges.
- VMware only: You must have a VMware Virtual Disk Development Kit (VDDK) image in a secure registry that is accessible to all clusters.
Procedure
VMware only: Add the VDDK image to the HyperConverged CR:
$ cat << EOF | oc apply -f -
apiVersion: hco.kubevirt.io/v1beta1
kind: HyperConverged
metadata:
  name: kubevirt-hyperconverged
  namespace: openshift-cnv
spec:
  vddkInitImage: <registry_route_or_server_path>/vddk:<tag> # 1
EOF
1 Specify the VDDK image that you created.
Create a Secret CR manifest for the source provider credentials:
$ cat << EOF | oc apply -f -
apiVersion: v1
kind: Secret
metadata:
  name: <secret>
  namespace: openshift-mtv
type: Opaque
stringData:
  user: <user> # 1
  password: <password> # 2
  cacert: <RHV_ca_certificate> # 3
  thumbprint: <vcenter_fingerprint> # 4
EOF
1 Specify the base64-encoded vCenter admin user or the RHV Manager user.
2 Specify the base64-encoded password.
3 RHV only: Specify the base64-encoded CA certificate of the Manager. You can retrieve it at https://<www.example.com>/ovirt-engine/services/pki-resource?resource=ca-certificate&format=X509-PEM-CA.
4 VMware only: Specify the vCenter SHA-1 fingerprint.
Create a Provider CR manifest for the source provider:
$ cat << EOF | oc apply -f -
apiVersion: forklift.konveyor.io/v1beta1
kind: Provider
metadata:
  name: <provider>
  namespace: openshift-mtv
spec:
  type: <provider_type>
  url: <api_end_point>
  secret:
    name: <secret>
    namespace: openshift-mtv
EOF
VMware only: Create a Host CR manifest:
$ cat << EOF | oc apply -f -
apiVersion: forklift.konveyor.io/v1beta1
kind: Host
metadata:
  name: <vmware_host>
  namespace: openshift-mtv
spec:
  provider:
    namespace: openshift-mtv
    name: <source_provider>
  id: <source_host_mor>
  ipAddress: <source_network_ip>
EOF
Create a NetworkMap CR manifest to map the source and destination networks:
$ cat << EOF | oc apply -f -
apiVersion: forklift.konveyor.io/v1beta1
kind: NetworkMap
metadata:
  name: <network_map>
  namespace: openshift-mtv
spec:
  map:
    - destination:
        name: <pod>
        namespace: openshift-mtv
        type: pod # 1
      source: # 2
        id: <source_network_id> # 3
        name: <source_network_name>
    - destination:
        name: <network_attachment_definition> # 4
        namespace: <network_attachment_definition_namespace> # 5
        type: multus
      source:
        id: <source_network_id>
        name: <source_network_name>
  provider:
    source:
      name: <source_provider>
      namespace: openshift-mtv
    destination:
      name: <destination_cluster>
      namespace: openshift-mtv
EOF
1 Allowed values are pod and multus.
2 You can use either the id or the name parameter to specify the source network.
3 Specify the VMware network MOR or RHV network UUID.
4 Specify a network attachment definition for each additional OpenShift Virtualization network.
5 Specify the namespace of the OpenShift Virtualization network attachment definition.
Create a StorageMap CR manifest to map source and destination storage:
$ cat << EOF | oc apply -f -
apiVersion: forklift.konveyor.io/v1beta1
kind: StorageMap
metadata:
  name: <storage_map>
  namespace: openshift-mtv
spec:
  map:
    - destination:
        storageClass: <storage_class>
        accessMode: <access_mode>
      source:
        id: <source_datastore>
    - destination:
        storageClass: <storage_class>
        accessMode: <access_mode>
      source:
        id: <source_datastore>
  provider:
    source:
      name: <source_provider>
      namespace: openshift-mtv
    destination:
      name: <destination_cluster>
      namespace: openshift-mtv
EOF
Copy to Clipboard Copied! Optional: Create a
Hook
CR manifest to run custom code on a VM during the phase specified in thePlan
CR:$ cat << EOF | oc apply -f - apiVersion: forklift.konveyor.io/v1beta1 kind: Hook metadata: name: <hook> namespace: openshift-mtv spec: image: quay.io/konveyor/hook-runner playbook: | LS0tCi0gbmFtZTogTWFpbgogIGhvc3RzOiBsb2NhbGhvc3QKICB0YXNrczoKICAtIG5hbWU6IExv YWQgUGxhbgogICAgaW5jbHVkZV92YXJzOgogICAgICBmaWxlOiAiL3RtcC9ob29rL3BsYW4ueW1s IgogICAgICBuYW1lOiBwbGFuCiAgLSBuYW1lOiBMb2FkIFdvcmtsb2FkCiAgICBpbmNsdWRlX3Zh cnM6CiAgICAgIGZpbGU6ICIvdG1wL2hvb2svd29ya2xvYWQueW1sIgogICAgICBuYW1lOiB3b3Jr bG9hZAoK EOF
$ cat << EOF | oc apply -f - apiVersion: forklift.konveyor.io/v1beta1 kind: Hook metadata: name: <hook> namespace: openshift-mtv spec: image: quay.io/konveyor/hook-runner
1 playbook: |
2 LS0tCi0gbmFtZTogTWFpbgogIGhvc3RzOiBsb2NhbGhvc3QKICB0YXNrczoKICAtIG5hbWU6IExv YWQgUGxhbgogICAgaW5jbHVkZV92YXJzOgogICAgICBmaWxlOiAiL3RtcC9ob29rL3BsYW4ueW1s IgogICAgICBuYW1lOiBwbGFuCiAgLSBuYW1lOiBMb2FkIFdvcmtsb2FkCiAgICBpbmNsdWRlX3Zh cnM6CiAgICAgIGZpbGU6ICIvdG1wL2hvb2svd29ya2xvYWQueW1sIgogICAgICBuYW1lOiB3b3Jr bG9hZAoK EOF
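For reference, the base64 string in the playbook field above decodes to a small Ansible playbook along the following lines; any playbook that you supply must be base64-encoded in the same way:

---
- name: Main
  hosts: localhost
  tasks:
  - name: Load Plan
    include_vars:
      file: "/tmp/hook/plan.yml"
      name: plan
  - name: Load Workload
    include_vars:
      file: "/tmp/hook/workload.yml"
      name: workload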
Create a Plan CR manifest for the migration:
$ cat << EOF | oc apply -f -
apiVersion: forklift.konveyor.io/v1beta1
kind: Plan
metadata:
  name: <plan> # 1
  namespace: openshift-mtv
spec:
  warm: true # 2
  provider:
    source:
      name: <source_provider>
      namespace: openshift-mtv
    destination:
      name: <destination_cluster>
      namespace: openshift-mtv
  map:
    network: # 3
      name: <network_map> # 4
      namespace: openshift-mtv
    storage:
      name: <storage_map> # 5
      namespace: openshift-mtv
  targetNamespace: openshift-mtv
  vms: # 6
    - id: <source_vm> # 7
    - name: <source_vm>
      hooks: # 8
        - hook:
            namespace: openshift-mtv
            name: <hook> # 9
          step: <step> # 10
EOF
1 Specify the name of the Plan CR.
2 VMware only: Specify whether the migration is warm or cold. If you specify a warm migration without specifying a value for the cutover parameter in the Migration CR manifest, only the precopy stage will run. Warm migration is not supported for RHV.
3 You can add multiple network mappings.
4 Specify the name of the NetworkMap CR.
5 Specify the name of the StorageMap CR.
6 You can use either the id or the name parameter to specify the source VMs.
7 Specify the VMware VM MOR or RHV VM UUID.
8 Optional: You can specify up to two hooks for a VM. Each hook must run during a separate migration step.
9 Specify the name of the Hook CR.
10 Allowed values are PreHook, before the migration plan starts, or PostHook, after the migration is complete.
Optional, for VMware only: To change the time interval between the CBT snapshots for warm migration, patch the vm-import-controller-config config map:
$ oc patch configmap/vm-import-controller-config -n openshift-cnv \
    -p '{"data": {"warmImport.intervalMinutes": "<interval>"}}' # 1
1 Specify the time interval in minutes. The default value is 60.
Create a Migration CR manifest to run the Plan CR:
$ cat << EOF | oc apply -f -
apiVersion: forklift.konveyor.io/v1beta1
kind: Migration
metadata:
  name: <migration> # 1
  namespace: openshift-mtv
spec:
  plan:
    name: <plan> # 2
    namespace: openshift-mtv
  cutover: <cutover_time> # 3
EOF
1 Specify the name of the Migration CR.
2 Specify the name of the Plan CR that you are running. The Migration CR creates a VirtualMachineImport CR for each VM that is migrated.
3 Optional: Specify a cutover time according to the ISO 8601 format with the UTC time offset, for example, 2021-04-04T01:23:45.678+09:00.
You can associate multiple Migration CRs with a single Plan CR. If a migration does not complete, you can create a new Migration CR, without changing the Plan CR, to migrate the remaining VMs.
View the VirtualMachineImport pods to monitor the progress of the migration:
$ oc get pods -n openshift-mtv
5.2. Canceling a migration
You can cancel an entire migration or individual virtual machines (VMs) while a migration is in progress from the command line interface (CLI).
Canceling an entire migration
Delete the Migration CR:
$ oc delete migration <migration> -n openshift-mtv
Canceling the migration of individual VMs
Add the individual VMs to the Migration CR manifest:
$ cat << EOF | oc apply -f -
apiVersion: forklift.konveyor.io/v1beta1
kind: Migration
metadata:
  name: <migration>
  namespace: openshift-mtv
...
spec:
  cancel:
    - id: vm-102
    - id: vm-203
    - name: rhel8-vm
EOF
View the VirtualMachineImport pods to monitor the progress of the remaining VMs:
$ oc get pods -n openshift-mtv
Chapter 6. Upgrading the Migration Toolkit for Virtualization
You can upgrade the MTV Operator by using the OpenShift Container Platform web console to install the new version.
You must upgrade to the next release without skipping a release, for example, from 2.0 to 2.1 or from 2.1 to 2.2.
See Upgrading installed Operators in the OpenShift Container Platform documentation.
Chapter 7. Uninstalling the Migration Toolkit for Virtualization
You can uninstall the Migration Toolkit for Virtualization (MTV) by using the OpenShift Container Platform web console or the command line interface (CLI).
7.1. Uninstalling MTV by using the OpenShift Container Platform web console
You can uninstall Migration Toolkit for Virtualization (MTV) by using the OpenShift Container Platform web console to delete the openshift-mtv
project and custom resource definitions (CRDs).
Prerequisites
- You must be logged in as a user with cluster-admin privileges.
Procedure
- Click Home → Projects.
- Locate the openshift-mtv project.
- On the right side of the project, select Delete Project from the Options menu.
- In the Delete Project pane, enter the project name and click Delete.
- Click Administration → CustomResourceDefinitions.
-
Enter
forklift
in the Search field to locate the CRDs in theforklift.konveyor.io
group. -
On the right side of each CRD, select Delete CustomResourceDefinition from the Options menu
.
7.2. Uninstalling MTV from the command line interface
You can uninstall Migration Toolkit for Virtualization (MTV) from the command line interface (CLI) by deleting the openshift-mtv
project and the forklift.konveyor.io
custom resource definitions (CRDs).
Prerequisites
- You must be logged in as a user with cluster-admin privileges.
Procedure
Delete the project:
$ oc delete project openshift-mtv
Delete the CRDs:
$ oc get crd -o name | grep 'forklift' | xargs oc delete
Delete the OAuthClient:
$ oc delete oauthclient/forklift-ui
Chapter 8. Troubleshooting
This section provides information for troubleshooting common migration issues.
8.1. Architecture
This section describes MTV custom resources, services, and workflows.
8.1.1. MTV custom resources and services
The Migration Toolkit for Virtualization (MTV) is provided as an OpenShift Container Platform Operator. It creates and manages the following custom resources (CRs) and services.
MTV custom resources
- Provider CR stores attributes that enable MTV to connect to and interact with the source and target providers.
- NetworkMapping CR maps the networks of the source and target providers.
- StorageMapping CR maps the storage of the source and target providers.
- Provisioner CR stores the configuration of the storage provisioners, such as supported volume and access modes.
- Plan CR contains a list of VMs with the same migration parameters and associated network and storage mappings.
- Migration CR runs a migration plan. Only one Migration CR per migration plan can run at a given time. You can create multiple Migration CRs for a single Plan CR.
MTV services
- Provider Inventory service:
  - Connects to the source and target providers.
  - Maintains a local inventory for mappings and plans.
  - Stores VM configurations.
  - Runs the Validation service if a VM configuration change is detected.
- Validation service checks the suitability of a VM for migration by applying rules.
  The Validation service is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
  For more information about the support scope of Red Hat Technology Preview features, see https://access.redhat.com/support/offerings/techpreview/.
- User Interface service:
  - Enables you to create and configure MTV CRs.
  - Displays the status of the CRs and the progress of a migration.
- Migration Controller service orchestrates migrations.
  When you create a migration plan, the Migration Controller service validates the plan and adds a status label. If the plan fails validation, the plan status is Not ready and the plan cannot be used to perform a migration. If the plan passes validation, the plan status is Ready and it can be used to perform a migration. After a successful migration, the Migration Controller changes the plan status to Completed.
- Virtual Machine Import Controller, Kubevirt Controller, and Containerized Data Import (CDI) Controller services handle most technical operations.
8.1.2. High-level migration workflow
The high-level workflow shows the migration process from the point of view of the user.
Figure 8.1. High-level workflow
The workflow describes the following steps:
- You create a source provider, a target provider, a network mapping, and a storage mapping.
You create a migration plan that includes the following resources:
- Source provider
- Target provider
- Network mapping
- Storage mapping
- One or more VMs
-
You run a migration plan by creating a
Migration
CR that references the migration plan. If a migration is incomplete, you can run a migration plan multiple times until all VMs are migrated. -
For each VM in the migration plan, the Migration Controller creates a
VirtualMachineImport
CR and monitors its status. When all VMs have been migrated, the Migration Controller sets the status of the migration plan toCompleted
. The power state of a source VM is maintained after migration.
8.1.3. Detailed migration workflow
You can use the detailed migration workflow to troubleshoot a failed migration.
Figure 8.2. Detailed OpenShift Virtualization migration workflow
The workflow describes the following steps:
-
When you run a migration plan, the Migration Controller creates a
VirtualMachineImport
custom resource (CR) for each source virtual machine (VM). -
The Virtual Machine Import Controller validates the
VirtualMachineImport
CR and generates aVirtualMachine
CR. The Virtual Machine Import Controller retrieves the VM configuration, including network, storage, and metadata, linked in the
VirtualMachineImport
CR.For each VM disk:
-
The Virtual Machine Import Controller creates a
DataVolume
CR as a wrapper for a Persistent Volume Claim (PVC) and annotations. -
The Containerized Data Importer (CDI) Controller creates a PVC. The Persistent Volume (PV) is dynamically provisioned by the
StorageClass
provisioner. -
The CDI Controller creates an
Importer
pod. For a VMware provider, the
Importer
pod connects to the VM disk by using the VMware Virtual Disk Development Kit (VDDK) SDK and streams the VM disk to the PV.After the VM disks are transferred:
The Virtual Machine Import Controller creates a
Conversion
pod with the PVCs attached to it.The
Conversion
pod runsvirt-v2v
, which installs and configures device drivers on the PVCs of the target VM.-
The Virtual Machine Import Controller creates a
VirtualMachineInstance
CR. When the target VM is powered on, the KubeVirt Controller creates a VM pod.
The VM pod runs
QEMU-KVM
with the PVCs attached as VM disks.
8.2. Error messages
This section describes error messages and how to resolve them.
warm import retry limit reached
The warm import retry limit reached
error message is displayed during a warm migration if a VMware virtual machine (VM) has reached the maximum number (28) of changed block tracking (CBT) snapshots during the precopy stage. You must delete some of the CBT snapshots from the VM and restart the migration plan.
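If the source provider is VMware vSphere, you can list and remove the accumulated snapshots with govc, for example as sketched below. This assumes that govc is installed; verify which snapshots belong to the precopy stage before deleting them.

$ govc snapshot.tree -vm <vm_name>
$ govc snapshot.remove -vm <vm_name> '<snapshot_name>'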
8.3. Using the must-gather tool
You can collect logs and information about MTV custom resources (CRs) by using the must-gather
tool. You must attach a must-gather
data file to all customer cases.
You can gather data for a specific namespace, migration plan, or virtual machine (VM) by using the filtering options.
If you specify a non-existent resource in the filtered must-gather
command, no archive file is created.
Prerequisites
-
You must be logged in to the OpenShift Virtualization cluster as a user with the
cluster-admin
role. -
You must have the OpenShift Container Platform CLI (
oc
) installed.
Collecting logs and CR information
-
Navigate to the directory where you want to store the
must-gather
data. Run the
oc adm must-gather
command:oc adm must-gather --image=registry.redhat.io/migration-toolkit-virtualization/mtv-must-gather-rhel8:2.1.0
$ oc adm must-gather --image=registry.redhat.io/migration-toolkit-virtualization/mtv-must-gather-rhel8:2.1.0
Copy to Clipboard Copied! The data is saved as
/must-gather/must-gather.tar.gz
. You can upload this file to a support case on the Red Hat Customer Portal.Optional: Run the
oc adm must-gather
command with the following options to gather filtered data:Namespace:
$ oc adm must-gather --image=registry.redhat.io/migration-toolkit-virtualization/mtv-must-gather-rhel8:2.1.0 \ -- NS=<namespace> /usr/bin/targeted
Migration plan:
$ oc adm must-gather --image=registry.redhat.io/migration-toolkit-virtualization/mtv-must-gather-rhel8:2.1.0 \ -- PLAN=<migration_plan> /usr/bin/targeted
Virtual machine:
$ oc adm must-gather --image=registry.redhat.io/migration-toolkit-virtualization/mtv-must-gather-rhel8:2.1.0 \ -- VM=<vm_id> NS=<namespace> /usr/bin/targeted
1 Specify the VM ID as it appears in the Plan CR.