Chapter 5. Provider-specific requirements for migration
Review the specific software requirements per source provider.
5.1. Red Hat Virtualization prerequisites
The following prerequisites apply to Red Hat Virtualization migrations:
- To create a source provider, you must have at least the UserRole and ReadOnlyAdmin roles assigned to you. These are the minimum required permissions; any other administrator or superuser permissions also work.

  You must keep the UserRole and ReadOnlyAdmin roles until the virtual machines of the source provider have been migrated. Otherwise, the migration fails.
- To migrate virtual machines, you must have one of the following:
  - RHV admin permissions. These permissions allow you to migrate any virtual machine in the system.
  - DiskCreator and UserVmManager permissions on every virtual machine you want to migrate.
- You must use a compatible version of Red Hat Virtualization.
- You must have the Manager CA certificate, unless it was replaced by a third-party certificate, in which case you specify the Manager Apache CA certificate.

  You can obtain the Manager CA certificate by navigating to https://<engine_host>/ovirt-engine/services/pki-resource?resource=ca-certificate&format=X509-PEM-CA in a browser.
- If you are migrating a virtual machine with a direct logical unit number (LUN) disk, ensure that the nodes in the OpenShift Virtualization destination cluster that the VM is expected to run on can access the backend storage.
- Unlike disk images that are copied from a source provider to a target provider, LUNs are detached, but not removed, from virtual machines in the source provider and then attached to the virtual machines (VMs) that are created in the target provider.
- LUNs are not removed from the source provider during the migration in case fallback to the source provider is required. However, before re-attaching the LUNs to VMs in the source provider, ensure that the LUNs are not being used by VMs in the target environment at the same time, because simultaneous use might lead to data corruption.
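As a hedged sketch, after you save the Manager CA certificate (for example, as ca.pem), you can confirm that it parses as an X.509 PEM certificate and check its validity dates with openssl. The self-signed certificate generated here is only a throwaway stand-in for the real downloaded file, and the engine host name is illustrative:

```shell
# Generate a throwaway self-signed certificate as a stand-in for the
# Manager CA certificate downloaded from the engine (illustrative only).
openssl req -x509 -newkey rsa:2048 -nodes -keyout /tmp/key.pem \
  -out /tmp/ca.pem -days 1 -subj "/CN=engine.example.com" 2>/dev/null

# Confirm the file is a valid X.509 PEM and inspect subject and expiry.
openssl x509 -in /tmp/ca.pem -noout -subject -enddate
```

If the second command fails, the downloaded file is not a usable PEM certificate and the provider creation is likely to fail as well.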
5.2. OpenStack prerequisites
To migrate from OpenStack to OpenShift Virtualization, verify you have a compatible OpenStack version and configure authentication. You can use token authentication, application credentials, or standard username and password credentials with MTV.
You can use these methods to migrate VMs from OpenStack source providers by using the command-line interface (CLI) the same way you migrate other VMs, except for how you prepare the Secret manifest.
5.2.1. Using token authentication with an OpenStack source provider
You can use token authentication, instead of username and password authentication, when you create an OpenStack source provider.
MTV supports both of the following types of token authentication:
- Token with user ID
- Token with user name
For each type of token authentication, you need to use data from OpenStack to create a Secret manifest.
Prerequisites
You have an OpenStack account.
Procedure
- In the dashboard of the OpenStack web console, click Project > API Access.
- Expand Download OpenStack RC file and click OpenStack RC file.

  The file that is downloaded, referred to here as <openstack_rc_file>, includes the following fields used for token authentication:

  OS_AUTH_URL
  OS_PROJECT_ID
  OS_PROJECT_NAME
  OS_DOMAIN_NAME
  OS_USERNAME

- To get the data needed for token authentication, run the following command:

  $ openstack token issue

  The output, referred to here as <openstack_token_output>, includes the token, userID, and projectID that you need for authentication using a token with user ID.
- Create a Secret manifest similar to the following:

  For authentication using a token with user ID:

  cat << EOF | oc apply -f -
  apiVersion: v1
  kind: Secret
  metadata:
    name: openstack-secret-tokenid
    namespace: openshift-mtv
    labels:
      createdForProviderType: openstack
  type: Opaque
  stringData:
    authType: token
    token: <token_from_openstack_token_output>
    projectID: <projectID_from_openstack_token_output>
    userID: <userID_from_openstack_token_output>
    url: <OS_AUTH_URL_from_openstack_rc_file>
  EOF

  For authentication using a token with user name:

  cat << EOF | oc apply -f -
  apiVersion: v1
  kind: Secret
  metadata:
    name: openstack-secret-tokenname
    namespace: openshift-mtv
    labels:
      createdForProviderType: openstack
  type: Opaque
  stringData:
    authType: token
    token: <token_from_openstack_token_output>
    domainName: <OS_DOMAIN_NAME_from_openstack_rc_file>
    projectName: <OS_PROJECT_NAME_from_openstack_rc_file>
    username: <OS_USERNAME_from_openstack_rc_file>
    url: <OS_AUTH_URL_from_openstack_rc_file>
  EOF
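As a hedged sketch, you can render the token-with-user-ID Secret to a file first so that you can review it before piping it to oc apply. The stringData values below are placeholders, not real credentials, and the simple grep check only verifies that the expected keys are present:

```shell
# Render the Secret manifest to a file for review (placeholder values).
cat > /tmp/openstack-secret-tokenid.yaml <<'EOF'
apiVersion: v1
kind: Secret
metadata:
  name: openstack-secret-tokenid
  namespace: openshift-mtv
  labels:
    createdForProviderType: openstack
type: Opaque
stringData:
  authType: token
  token: placeholder-token
  projectID: placeholder-project-id
  userID: placeholder-user-id
  url: https://openstack.example.com:5000/v3
EOF

# Sanity-check that every key used for token-with-user-ID auth is present.
for key in authType token projectID userID url; do
  grep -q "^  ${key}:" /tmp/openstack-secret-tokenid.yaml || echo "missing: ${key}"
done
echo "manifest ready: /tmp/openstack-secret-tokenid.yaml"
```

When the values are filled in from <openstack_token_output> and <openstack_rc_file>, apply the file with oc apply -f /tmp/openstack-secret-tokenid.yaml.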
5.2.2. Using application credential authentication with an OpenStack source provider
You can use application credential authentication, instead of username and password authentication, when you create an OpenStack source provider.
MTV supports both of the following types of application credential authentication:
- Application credential ID
- Application credential name
For each type of application credential authentication, you need to use data from OpenStack to create a Secret manifest.
Prerequisites
You have an OpenStack account.
Procedure
- In the dashboard of the OpenStack web console, click Project > API Access.
- Expand Download OpenStack RC file and click OpenStack RC file.

  The file that is downloaded, referred to here as <openstack_rc_file>, includes the following fields used for application credential authentication:

  OS_AUTH_URL
  OS_PROJECT_ID
  OS_PROJECT_NAME
  OS_DOMAIN_NAME
  OS_USERNAME

- To get the data needed for application credential authentication, run the following command:

  $ openstack application credential create --role member --role reader --secret redhat forklift

  The output, referred to here as <openstack_credential_output>, includes:
  - The id and secret that you need for authentication using an application credential ID
  - The name and secret that you need for authentication using an application credential name
- Create a Secret manifest similar to the following:

  For authentication using the application credential ID:

  cat << EOF | oc apply -f -
  apiVersion: v1
  kind: Secret
  metadata:
    name: openstack-secret-appid
    namespace: openshift-mtv
    labels:
      createdForProviderType: openstack
  type: Opaque
  stringData:
    authType: applicationcredential
    applicationCredentialID: <id_from_openstack_credential_output>
    applicationCredentialSecret: <secret_from_openstack_credential_output>
    url: <OS_AUTH_URL_from_openstack_rc_file>
  EOF

  For authentication using the application credential name:

  cat << EOF | oc apply -f -
  apiVersion: v1
  kind: Secret
  metadata:
    name: openstack-secret-appname
    namespace: openshift-mtv
    labels:
      createdForProviderType: openstack
  type: Opaque
  stringData:
    authType: applicationcredential
    applicationCredentialName: <name_from_openstack_credential_output>
    applicationCredentialSecret: <secret_from_openstack_credential_output>
    domainName: <OS_DOMAIN_NAME_from_openstack_rc_file>
    username: <OS_USERNAME_from_openstack_rc_file>
    url: <OS_AUTH_URL_from_openstack_rc_file>
  EOF
5.3. VMware prerequisites
It is strongly recommended to create a VDDK image to accelerate migrations. For more information, see Creating a VDDK image.
Virtual machine (VM) migrations do not work without VDDK when a VM is backed by VMware vSAN.
Migration Toolkit for Virtualization (MTV) cannot migrate VMware vSphere 6 and VMware vSphere 7 VMs to a FIPS-compliant OpenShift Virtualization cluster.
The following prerequisites apply to VMware migrations:
- You must use a compatible version of VMware vSphere.
- You must be logged in as a user with at least the minimal set of VMware privileges.
- To access the virtual machine using a pre-migration hook, VMware Tools must be installed on the source virtual machine.
- The VM operating system must be certified and supported for use as a guest operating system with OpenShift Virtualization and for conversion to KVM with virt-v2v.
- If you are running a warm migration, you must enable changed block tracking (CBT) on the VMs and on the VM disks.
- If you are migrating more than 10 VMs from an ESXi host in the same migration plan, you must increase the Network File Copy (NFC) service memory of the host.
- It is strongly recommended to disable hibernation because Migration Toolkit for Virtualization (MTV) does not support migrating hibernated VMs.
- The target namespace must have network connectivity to the VMware source environment. NetworkPolicies that block egress connections from the target namespace prevent migration from succeeding.
- A default MigController instance must be present. This instance is required for the Storage Migration option in the Red Hat OpenShift web console in certain OpenShift versions. For steps to create the default MigController instance, see Storage Migration Option Missing for Virtual Machines in OpenShift 4.20 Despite Availability in 4.18.
For virtual machines (VMs) running Microsoft Windows, Volume Shadow Copy Service (VSS) inside the guest VM is used to quiesce the file system and applications. When performing a warm migration of a Microsoft Windows virtual machine from VMware, you must start VSS on the Windows guest operating system in order for the snapshot and Quiesce guest file system to succeed.
If you do not start VSS on the Windows guest operating system, the snapshot creation during the warm migration fails with the following error: An error occurred while taking a snapshot: Failed to restart the virtual machine.
If you set the VSS service to Manual and start a snapshot creation with Quiesce guest file system = yes, the VMware Snapshot provider service requests VSS in the background to start the shadow copy.
In case of a power outage, data might be lost for a VM with disabled hibernation. However, if hibernation is not disabled, migration will fail.
5.3.1. VMware privileges
The following minimal set of VMware privileges is required to migrate virtual machines to OpenShift Virtualization with the Migration Toolkit for Virtualization (MTV).
| Privilege | Description |
|---|---|
| Virtual machine.Interaction privileges: | |
| Virtual machine.Interaction.Power Off | Allows powering off a powered-on virtual machine. This operation powers down the guest operating system. |
| Virtual machine.Interaction.Power On | Allows powering on a powered-off virtual machine and resuming a suspended virtual machine. |
| Virtual machine.Interaction.Guest operating system management by VIX API | Allows managing a virtual machine by the VMware Virtual Infrastructure eXtension (VIX) API. |
| Virtual machine.Provisioning privileges: | Note: All Virtual machine.Provisioning privileges are required. |
| Virtual machine.Provisioning.Allow disk access | Allows opening a disk on a virtual machine for random read and write access. Used mostly for remote disk mounting. |
| Virtual machine.Provisioning.Allow file access | Allows operations on files associated with a virtual machine, including VMX, disks, logs, and NVRAM. |
| Virtual machine.Provisioning.Allow read-only disk access | Allows opening a disk on a virtual machine for random read access. Used mostly for remote disk mounting. |
| Virtual machine.Provisioning.Allow virtual machine download | Allows read operations on files associated with a virtual machine, including VMX, disks, logs, and NVRAM. |
| Virtual machine.Provisioning.Allow virtual machine files upload | Allows write operations on files associated with a virtual machine, including VMX, disks, logs, and NVRAM. |
| Virtual machine.Provisioning.Clone template | Allows cloning of a template. |
| Virtual machine.Provisioning.Clone virtual machine | Allows cloning of an existing virtual machine and allocation of resources. |
| Virtual machine.Provisioning.Create template from virtual machine | Allows creation of a new template from a virtual machine. |
| Virtual machine.Provisioning.Customize guest | Allows customization of a virtual machine’s guest operating system without moving the virtual machine. |
| Virtual machine.Provisioning.Deploy template | Allows deployment of a virtual machine from a template. |
| Virtual machine.Provisioning.Mark as template | Allows marking an existing powered-off virtual machine as a template. |
| Virtual machine.Provisioning.Mark as virtual machine | Allows marking an existing template as a virtual machine. |
| Virtual machine.Provisioning.Modify customization specification | Allows creation, modification, or deletion of customization specifications. |
| Virtual machine.Provisioning.Promote disks | Allows promote operations on a virtual machine’s disks. |
| Virtual machine.Provisioning.Read customization specifications | Allows reading a customization specification. |
| Virtual machine.Snapshot management privileges: | |
| Virtual machine.Snapshot management.Create snapshot | Allows creation of a snapshot from the virtual machine’s current state. |
| Virtual machine.Snapshot management.Remove snapshot | Allows removal of a snapshot from the snapshot history. |
| Datastore privileges: | |
| Datastore.Browse datastore | Allows exploring the contents of a datastore. |
| Datastore.Low level file operations | Allows performing low-level file operations - read, write, delete, and rename - in a datastore. |
| Sessions privileges: | |
| Sessions.Validate session | Allows verification of the validity of a session. |
| Cryptographic operations privileges: | |
| Cryptographic operations.Decrypt | Allows decryption of an encrypted virtual machine. |
| Cryptographic operations.Direct Access | Allows access to encrypted resources. |
Create a role in VMware with the permissions described in the preceding table and then apply this role to the Inventory section, as described in Creating a VMware role to apply MTV permissions.
5.3.2. Creating a VMware role to grant MTV privileges
You can create a role in VMware to grant privileges for Migration Toolkit for Virtualization (MTV) and then grant those privileges to users with that role.
The procedure that follows explains how to do this in general. For detailed instructions, see VMware documentation.
Procedure
- In the vCenter Server UI, create a role that includes the set of privileges described in the table in VMware prerequisites.
- In the vSphere inventory UI, grant privileges for users with this role to the appropriate vSphere logical objects at one of the following levels:
  - At the user or group level: Assign privileges to the appropriate logical objects in the data center and use the Propagate to child objects option.
  - At the object level: Apply the same role individually to all the relevant vSphere logical objects involved in the migration, for example, hosts, vSphere clusters, data centers, or networks.
5.3.3. Creating a VDDK image
It is strongly recommended that you use Migration Toolkit for Virtualization (MTV) with the VMware Virtual Disk Development Kit (VDDK) SDK when transferring virtual disks from VMware vSphere.
Creating a VDDK image, although optional, is highly recommended. Using MTV without VDDK can result in significantly lower migration speeds.
To make use of this feature, you download the VDDK, build a VDDK image, and push the VDDK image to your image registry.
The VDDK package contains symbolic links; therefore, you must create the VDDK image on a file system that preserves symbolic links (symlinks).
Storing the VDDK image in a public registry might violate the VMware license terms.
Prerequisites
- A Red Hat OpenShift image registry.
- You have podman installed.
- You are working on a file system that preserves symbolic links (symlinks).
- If you are using an external registry, OpenShift Virtualization must be able to access it.
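Because the VDDK archive relies on symlinks, it can help to confirm up front that your working file system preserves them. This is a hedged sketch: the library file name and directory below are throwaway stand-ins, not real VDDK files.

```shell
# Create a stand-in library file and a symlink to it, mimicking the
# layout inside the extracted VDDK archive (illustrative names only).
mkdir -p /tmp/vddk-fs-check
touch /tmp/vddk-fs-check/libvixDiskLib.so.8.0.1
ln -sf libvixDiskLib.so.8.0.1 /tmp/vddk-fs-check/libvixDiskLib.so

# -L is true only if the path is still a symlink after creation.
if [ -L /tmp/vddk-fs-check/libvixDiskLib.so ]; then
  echo "symlinks preserved"
else
  echo "symlinks NOT preserved - extract the VDDK archive elsewhere"
fi
```

If the check fails, extract the VDDK archive on a different file system before building the image.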
Procedure
- Create and navigate to a temporary directory:

  $ mkdir /tmp/<dir_name> && cd /tmp/<dir_name>

- In a browser, navigate to the VMware VDDK version 8 download page.
- Select version 8.0.1 and click Download.

  Note: To migrate to OpenShift Virtualization 4.12, download VDDK version 7.0.3.2 from the VMware VDDK version 7 download page.

- Save the VDDK archive file in the temporary directory.
- Extract the VDDK archive:

  $ tar -xzf VMware-vix-disklib-<version>.x86_64.tar.gz

- Create a Dockerfile:

  $ cat > Dockerfile <<EOF
  FROM registry.access.redhat.com/ubi8/ubi-minimal
  USER 1001
  COPY vmware-vix-disklib-distrib /vmware-vix-disklib-distrib
  RUN mkdir -p /opt
  ENTRYPOINT ["cp", "-r", "/vmware-vix-disklib-distrib", "/opt"]
  EOF

- Build the VDDK image:

  $ podman build . -t <registry_route_or_server_path>/vddk:<tag>

- Push the VDDK image to the registry:

  $ podman push <registry_route_or_server_path>/vddk:<tag>

- Ensure that the image is accessible to your OpenShift Virtualization environment.
5.3.4. Increasing the NFC service memory of an ESXi host
If you are migrating more than 10 VMs from an ESXi host in the same migration plan, you must increase the Network File Copy (NFC) service memory of the host. Otherwise, the migration fails because the NFC service memory is limited to 10 parallel connections.
Procedure
- Log in to the ESXi host as root.
- Change the value of maxMemory to 1000000000 in /etc/vmware/hostd/config.xml:

  ...
  <nfcsvc>
    <path>libnfcsvc.so</path>
    <enabled>true</enabled>
    <maxMemory>1000000000</maxMemory>
    <maxStreamMemory>10485760</maxStreamMemory>
  </nfcsvc>
  ...

- Restart hostd:

  # /etc/init.d/hostd restart

  You do not need to reboot the host.
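The maxMemory edit above can also be scripted. This is a hedged sketch: /tmp/config.xml stands in for /etc/vmware/hostd/config.xml so that the command can be tried safely, and the sed pattern assumes the element appears on one line as shown in the file excerpt.

```shell
# Create a stand-in for /etc/vmware/hostd/config.xml with the stock value.
cat > /tmp/config.xml <<'EOF'
<nfcsvc>
  <path>libnfcsvc.so</path>
  <enabled>true</enabled>
  <maxMemory>16777216</maxMemory>
  <maxStreamMemory>10485760</maxStreamMemory>
</nfcsvc>
EOF

# Replace whatever numeric value is present with 1000000000.
sed -i 's|<maxMemory>[0-9]*</maxMemory>|<maxMemory>1000000000</maxMemory>|' /tmp/config.xml

# Show the updated element.
grep maxMemory /tmp/config.xml
```

On a real host, point sed at /etc/vmware/hostd/config.xml (after backing it up) and then restart hostd as described above.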
5.3.5. VDDK validator containers need requests and limits
If you have the cluster or project resource quotas set, you must ensure that you have a sufficient quota for the MTV pods to perform the migration.
The defaults, which you can override in the ForkliftController custom resource (CR), are listed as follows. If necessary, you can adjust these defaults.
These settings are highly dependent on your environment. If many migrations run at once and the quotas are too low for them, the migrations can fail. The required quota is also correlated with the MAX_VM_INFLIGHT setting, which determines how many VMs or disks are migrated at once.
The following defaults can be overridden in the ForkliftController CR:
Defaults that affect both cold and warm migrations:

Cold migration is likely to be more resource intensive because it performs the disk copy. For warm migration, you could potentially reduce the requests.

- virt_v2v_container_limits_cpu: 4000m
- virt_v2v_container_limits_memory: 8Gi
- virt_v2v_container_requests_cpu: 1000m
- virt_v2v_container_requests_memory: 1Gi

Note: Cold and warm migration using virt-v2v can be resource-intensive. For more details, see Compute power and RAM.

Defaults that affect any migrations with hooks:

- hooks_container_limits_cpu: 1000m
- hooks_container_limits_memory: 1Gi
- hooks_container_requests_cpu: 100m
- hooks_container_requests_memory: 150Mi

Defaults that affect any OVA migrations:

- ova_container_limits_cpu: 1000m
- ova_container_limits_memory: 1Gi
- ova_container_requests_cpu: 100m
- ova_container_requests_memory: 150Mi
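As a hedged sketch of how such an override might look, the fragment below places the fields from the lists above directly under spec in the ForkliftController CR. The exact nesting and the resource name and namespace may vary by MTV release and installation; verify them against your installed CR before applying.

```yaml
apiVersion: forklift.konveyor.io/v1beta1
kind: ForkliftController
metadata:
  name: forklift-controller     # name assumed; check your installation
  namespace: openshift-mtv
spec:
  # Example override: lower virt-v2v requests, e.g. for warm migrations.
  virt_v2v_container_limits_cpu: "4000m"
  virt_v2v_container_limits_memory: "8Gi"
  virt_v2v_container_requests_cpu: "500m"
  virt_v2v_container_requests_memory: "1Gi"
```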
5.4. Open Virtual Appliance (OVA) prerequisites
Open Virtual Appliance (OVA) migrations to OpenShift Virtualization require VMware vSphere files in NFS-shared directories. OVA files can be compressed Open Virtualization Format (OVF) packages with .ova extensions or extracted packages. Migration Toolkit for Virtualization (MTV) scans root and first-level subfolders for compressed packages, and up to second-level subfolders for extracted packages.
Migration of OVA files that were not created by VMware vSphere but are compatible with vSphere might succeed. However, migration of such files is not supported by MTV. MTV supports only OVA files created by VMware vSphere. Moreover, converting a vendor OVA may invalidate vendor support agreements.
To ensure stability and vendor support, always prioritize importing the vendor’s native QCOW2 image by using either the OpenShift Virtualization "Upload Image" or the OpenShift Virtualization "Import from URL" workflow rather than using the Migration Toolkit for Virtualization (MTV) OVA path.
Prerequisites
- The NFS share is writable by the QEMU group (GID 107) if you plan to use the web upload feature for OVA files.
- The OVA files are in one or more folders under an NFS shared directory, in one of the following structures:
  - In one or more compressed OVF packages that hold all the VM information.

    The filename of each compressed package must have the .ova extension. Several compressed packages can be stored in the same folder.

    When this structure is used, MTV scans the root folder and the first-level subfolders for compressed packages.

    For example, if the NFS share is /nfs, then:
    - The folder /nfs is scanned.
    - The folder /nfs/subfolder1 is scanned.
    - However, /nfs/subfolder1/subfolder2 is not scanned.
  - In extracted OVF packages.

    When this structure is used, MTV scans the root folder, first-level subfolders, and second-level subfolders for extracted OVF packages.

    However, there can be only one .ovf file in a folder. Otherwise, the migration fails.

    For example, if the NFS share is /nfs, then:
    - The OVF file /nfs/vm.ovf is scanned.
    - The OVF file /nfs/subfolder1/vm.ovf is scanned.
    - The OVF file /nfs/subfolder1/subfolder2/vm.ovf is scanned.
    - However, the OVF file /nfs/subfolder1/subfolder2/subfolder3/vm.ovf is not scanned.
- If you plan to upload OVA files using the web browser, ensure that each .ova file has a unique filename.
You can optionally configure OVA file upload by web browser to upload OVA files directly to the NFS share. For more information, see Configuring OVA file upload by web browser.
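The scan depths described above can be illustrated with find. This is only a sketch: MTV's scanner is internal, and the -maxdepth values below merely mirror the documented behavior against a throwaway directory tree.

```shell
# Build a sample tree standing in for the NFS share /nfs.
mkdir -p /tmp/nfs/subfolder1/subfolder2/subfolder3
touch /tmp/nfs/vm.ovf \
      /tmp/nfs/subfolder1/vm1.ova \
      /tmp/nfs/subfolder1/subfolder2/vm.ovf \
      /tmp/nfs/subfolder1/subfolder2/subfolder3/vm.ovf

# Compressed packages (.ova): root folder and first-level subfolders only.
find /tmp/nfs -maxdepth 2 -name '*.ova'

# Extracted packages (.ovf): root folder down to second-level subfolders.
# Note: /nfs/subfolder1/subfolder2/subfolder3/vm.ovf is NOT listed.
find /tmp/nfs -maxdepth 3 -name '*.ovf'
```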
5.5. OpenShift Virtualization prerequisites
To migrate between OpenShift Virtualization clusters, verify that both clusters have matching Migration Toolkit for Virtualization (MTV) versions and the source uses OpenShift Virtualization 4.16 or later. You can migrate forward to newer OpenShift Virtualization versions if both are compatible with your MTV version.
It is strongly recommended to migrate only between clusters with the same version of OpenShift Virtualization, although migration from an earlier version of OpenShift Virtualization to a later one is supported.
5.5.1. OpenShift Virtualization live migration prerequisites
In addition to the regular OpenShift Virtualization prerequisites, live migration requires the following:
- Migration Toolkit for Virtualization (MTV) 2.10.0 or later installed. MTV treats all OpenShift Virtualization migrations run on MTV 2.9 or earlier as cold migrations, even if they are configured as live migrations.
- OpenShift Virtualization 4.20.0 or later installed on both source and target clusters.
- DecentralizedLiveMigration is listed in the featureGates section of the KubeVirt resource YAML of both clusters. You must have cluster-admin privileges to set this field.
- Connectivity between the clusters must be established, including connectivity for state transfer. Technologies such as Submariner can be used for this purpose.
- The target cluster has VirtualMachineInstanceTypes and VirtualMachinePreferences that match those used by the VMs on the source cluster.
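As a hedged sketch, the feature gate is typically listed under spec.configuration.developerConfiguration.featureGates in the KubeVirt resource. The resource name and namespace below are illustrative and depend on how OpenShift Virtualization was installed; on operator-managed clusters this setting may need to be applied through the managing operator rather than edited directly.

```yaml
apiVersion: kubevirt.io/v1
kind: KubeVirt
metadata:
  name: kubevirt-kubevirt-hyperconverged   # name varies by installation
  namespace: openshift-cnv                 # namespace varies by installation
spec:
  configuration:
    developerConfiguration:
      featureGates:
        - DecentralizedLiveMigration
```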
5.6. Software compatibility guidelines
You must install compatible software versions. The table that follows lists the relevant software versions for this version of Migration Toolkit for Virtualization (MTV).
| Migration Toolkit for Virtualization | Red Hat OpenShift | OpenShift Virtualization | VMware vSphere | Red Hat Virtualization | OpenStack |
|---|---|---|---|---|---|
| 2.11 | 4.21, 4.20, 4.19 | 4.21, 4.20, 4.19 | 6.5 or later | 4.4 SP1 or later | 16.1 or later |
Migration from Red Hat Virtualization 4.3
MTV was tested only with Red Hat Virtualization 4.4 SP1. Migration from Red Hat Virtualization (RHV) 4.3 has not been tested with MTV 2.11. While not supported, basic migrations from RHV 4.3 are expected to work, and migrations from RHV 4.3.11 were tested with MTV 2.3, so they might work in practice in many environments using MTV 2.11.
It is recommended that you upgrade Red Hat Virtualization Manager to the supported version listed in the preceding table before migrating to OpenShift Virtualization.
5.6.1. OpenShift Operator Life Cycles
For more information about the software maintenance Life Cycle classifications for Operators shipped by Red Hat for use with OpenShift Container Platform, see OpenShift Operator Life Cycles.