Installing and using the Migration Toolkit for Virtualization
Migrating from VMware vSphere or Red Hat Virtualization to Red Hat OpenShift Virtualization
Making open source more inclusive
Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright’s message.
Chapter 1. About the Migration Toolkit for Virtualization
You can use the Migration Toolkit for Virtualization (MTV) to migrate virtual machines from the following source providers to OpenShift Virtualization destination providers:
- VMware vSphere
- Red Hat Virtualization (RHV)
- OpenStack
- Open Virtual Appliances (OVAs) that were created by VMware vSphere
- Remote OpenShift Virtualization clusters
Chapter 2. MTV cold migration and warm migration introduction
Cold migration is when a powered-off virtual machine (VM) is migrated to a separate host. There is no need for common shared storage.
Warm migration is when a powered-on VM is migrated to a separate host. The state of the source VM is cloned to the destination host.
Warm migration precopy stage
- Create an initial snapshot of the running VM's disks.
- Copy the first snapshot to the target. This is a full-disk transfer with the largest amount of data to copy, so it takes the most time to complete.
- Copy deltas: copy only the data that has changed since the last snapshot was taken. This takes less time to complete.
- Create a new snapshot.
- Copy the delta between the previous snapshot and the new snapshot.
- Schedule the next snapshot. The interval is configurable; by default, it is one hour after the last snapshot finished.
- An arbitrary number of deltas can be copied.
Warm migration cutover stage
- At the scheduled time, the warm migration is finalized.
- Shut down the source VM.
- Copy the final snapshot delta to the target.
The migration then continues in the same way as a cold migration:
- Guest conversion
- Starting target VM (optional)
2.1. Migration speed comparison
- The observed speeds for the warm migration single disk transfer and disk conversion are approximately the same as for the cold migration.
- The benefit of warm migration is that the transfer of the snapshot is happening in the background while the VM is powered on.
- By default, a snapshot is taken every 60 minutes. If VMs change substantially, more data needs to be transferred than in a cold migration, when the VM is powered off.
- The cutover time, meaning the shutdown of the VM and the last snapshot transfer, depends on how much the VM has changed since the last snapshot.
2.2. About cold and warm migration
MTV supports cold migration from:
- VMware vSphere
- Red Hat Virtualization (RHV)
- OpenStack
- Remote OpenShift Virtualization clusters
MTV supports warm migration from VMware vSphere and from RHV.
2.2.1. Cold migration
Cold migration is the default migration type. The source virtual machines are shut down while the data is copied.
VMware only: In cold migrations, when a package manager cannot be used during the migration, MTV does not install the qemu-guest-agent daemon on the migrated VMs. This has some impact on the functionality of the migrated VMs, but overall, they are still expected to function.
To enable MTV to automatically install qemu-guest-agent on the migrated VMs, ensure that your package manager can install the daemon during the first boot of the VM after migration.
If that is not possible, use your preferred automated or manual procedure to install qemu-guest-agent manually.
2.2.2. Warm migration
Most of the data is copied during the precopy stage while the source virtual machines (VMs) are running.
Then the VMs are shut down and the remaining data is copied during the cutover stage.
Precopy stage
The VMs are not shut down during the precopy stage.
The VM disks are copied incrementally by using changed block tracking (CBT) snapshots. The snapshots are created at one-hour intervals by default. You can change the snapshot interval by updating the forklift-controller deployment.
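For example, the following is a minimal sketch of changing the interval through the ForkliftController custom resource (CR), as described in Configuring the MTV Operator. The label name controller_precopy_interval is an assumption based on that section's settings table, and the 30-minute value is illustrative:

```yaml
apiVersion: forklift.konveyor.io/v1beta1
kind: ForkliftController
metadata:
  name: forklift-controller
  namespace: openshift-mtv
spec:
  # Assumed label name; see "Configuring the MTV Operator".
  # Minutes between precopy snapshots (default: 60).
  controller_precopy_interval: 30
```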
You must enable CBT for each source VM and each VM disk.
A VM can support up to 28 CBT snapshots. If the source VM has too many CBT snapshots and the Migration Controller service is not able to create a new snapshot, warm migration might fail. The Migration Controller service deletes each snapshot when the snapshot is no longer required.
The precopy stage runs until the cutover stage is started manually or is scheduled to start.
Cutover stage
The VMs are shut down during the cutover stage and the remaining data is migrated. Data stored in RAM is not migrated.
You can start the cutover stage manually by using the MTV console or you can schedule a cutover time in the Migration manifest.
2.2.3. Advantages and disadvantages of cold and warm migrations
The table that follows offers a more detailed description of the advantages and disadvantages of cold migration and warm migration. It assumes that you have installed Red Hat Enterprise Linux (RHEL) 9 on the Red Hat OpenShift platform on which you installed MTV:
| | Cold migration | Warm migration |
|---|---|---|
| Duration | Correlates to the amount of data on the disks. Each block is copied once. | Correlates to the amount of data on the disks and VM utilization. Blocks may be copied multiple times. |
| Fail fast | Convert and then transfer. Each VM is converted to be compatible with OpenShift and, if the conversion is successful, the VM is transferred. If a VM cannot be converted, the migration fails immediately. | Transfer and then convert. For each VM, MTV creates a snapshot and transfers it to Red Hat OpenShift. When you start the cutover, MTV creates the last snapshot, transfers it, and then converts the VM. |
| Data transferred | Approximate sum of all disks | Approximate sum of all disks and VM utilization |
| VM downtime | High: The VMs are shut down, and the disks are transferred. | Low: Disks are transferred in the background. The VMs are shut down during the cutover stage, and the remaining data is migrated. Data stored in RAM is not migrated. |
| Parallelism | Disks are transferred sequentially for each VM. For remote migration, disks are transferred in parallel. [a] | Disks are transferred in parallel by different pods. |
| Connection use | Keeps the connection to the source only during the disk transfer. | Keeps the connection to the source during the disk transfer, but the connection is released between snapshots. |
| Tools | MTV only. | MTV and Containerized Data Importer (CDI), a persistent storage management add-on, from OpenShift Virtualization. |

[a] Remote migration: the target environment does not have MTV installed. Migration to a remote environment uses CDI.
The preceding table describes the situation for VMs that are running because the main benefit of warm migration is the reduced downtime, and there is no reason to initiate warm migration for VMs that are down. However, performing warm migration for VMs that are down is not the same as cold migration, even when MTV uses virt-v2v and RHEL 9. For VMs that are down, MTV transfers the disks using CDI, unlike in cold migration.
When importing from VMware, there are additional factors that impact the migration speed, such as limits related to ESXi, vSphere, or VDDK.
2.2.3.1. Conclusions
Based on the preceding information, we can draw the following conclusions about cold migration vs. warm migration:
- The shortest downtime of VMs can be achieved by using warm migration.
- The shortest duration for VMs with a large amount of data on a single disk can be achieved by using cold migration.
- The shortest duration for VMs with a large amount of data that is spread evenly across multiple disks can be achieved by using warm migration.
Chapter 3. Prerequisites
Review the following prerequisites to ensure that your environment is prepared for migration.
3.1. Software requirements
Migration Toolkit for Virtualization (MTV) has software requirements for all providers as well as specific software requirements per provider.
3.1.1. Software requirements for all providers
You must install compatible versions of Red Hat OpenShift and OpenShift Virtualization.
3.2. Storage support and default modes
Migration Toolkit for Virtualization (MTV) uses the following default volume and access modes for supported storage.
| Provisioner | Volume mode | Access mode |
|---|---|---|
| kubernetes.io/aws-ebs | Block | ReadWriteOnce |
| kubernetes.io/azure-disk | Block | ReadWriteOnce |
| kubernetes.io/azure-file | Filesystem | ReadWriteMany |
| kubernetes.io/cinder | Block | ReadWriteOnce |
| kubernetes.io/gce-pd | Block | ReadWriteOnce |
| kubernetes.io/hostpath-provisioner | Filesystem | ReadWriteOnce |
| manila.csi.openstack.org | Filesystem | ReadWriteMany |
| openshift-storage.cephfs.csi.ceph.com | Filesystem | ReadWriteMany |
| openshift-storage.rbd.csi.ceph.com | Block | ReadWriteOnce |
| kubernetes.io/rbd | Block | ReadWriteOnce |
| kubernetes.io/vsphere-volume | Block | ReadWriteOnce |
If the OpenShift Virtualization storage does not support dynamic provisioning, you must apply the following settings:
- Filesystem volume mode: Filesystem volume mode is slower than Block volume mode.
- ReadWriteOnce access mode: ReadWriteOnce access mode does not support live virtual machine migration.
See Enabling a statically-provisioned storage class for details on editing the storage profile.
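A minimal sketch of such a storage profile edit, assuming the CDI StorageProfile API; the storage class name is a placeholder:

```yaml
apiVersion: cdi.kubevirt.io/v1beta1
kind: StorageProfile
metadata:
  name: <storage_class_name>    # must match the storage class to configure
spec:
  claimPropertySets:
    - accessModes:
        - ReadWriteOnce         # does not support live VM migration
      volumeMode: Filesystem    # slower than Block volume mode
```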
If your migration uses block storage and persistent volumes created with an EXT4 file system, increase the file system overhead in CDI to be more than 10%. The default overhead that is assumed by CDI does not completely include the reserved place for the root partition. If you do not increase the file system overhead in CDI by this amount, your migration might fail.
When you migrate from OpenStack, or when you run a cold migration from Red Hat Virtualization to the Red Hat OpenShift cluster that MTV is deployed on, the migration allocates persistent volumes without CDI. In these cases, you might need to adjust the file system overhead.
If the configured file system overhead, which has a default value of 10%, is too low, the disk transfer will fail due to lack of space. In such a case, you would want to increase the file system overhead.
In some cases, however, you might want to decrease the file system overhead to reduce storage consumption.
You can change the file system overhead by changing the value of controller_filesystem_overhead in the spec portion of the ForkliftController CR, as described in Configuring the MTV Operator.
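For example, a minimal sketch that raises the overhead from the default 10% to 15%:

```yaml
apiVersion: forklift.konveyor.io/v1beta1
kind: ForkliftController
metadata:
  name: forklift-controller
  namespace: openshift-mtv
spec:
  # Percentage of persistent volume space reserved as file system
  # overhead when the storage class is filesystem (default: 10)
  controller_filesystem_overhead: 15
```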
3.3. Network prerequisites
The following prerequisites apply to all migrations:
- IP addresses, VLANs, and other network configuration settings must not be changed before or during migration. The MAC addresses of the virtual machines are preserved during migration.
- The network connections between the source environment, the OpenShift Virtualization cluster, and the replication repository must be reliable and uninterrupted.
- If you are mapping more than one source and destination network, you must create a network attachment definition for each additional destination network.
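For example, a minimal sketch of a network attachment definition for an additional destination network. The metadata names, bridge name, and IP range are placeholders, and the Linux bridge plus whereabouts IPAM combination is an assumption; use whatever CNI configuration matches your environment:

```yaml
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: migration-network        # placeholder name
  namespace: <target_namespace>  # must exist in the destination namespace
spec:
  config: |
    {
      "cniVersion": "0.3.1",
      "name": "migration-network",
      "type": "bridge",
      "bridge": "br1",
      "ipam": {
        "type": "whereabouts",
        "range": "192.168.5.0/24"
      }
    }
```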
3.3.1. Ports
The firewalls must enable traffic over the following ports:
Ports required for migrating from VMware vSphere:

| Port | Protocol | Source | Destination | Purpose |
|---|---|---|---|---|
| 443 | TCP | OpenShift nodes | VMware vCenter | VMware provider inventory Disk transfer authentication |
| 443 | TCP | OpenShift nodes | VMware ESXi hosts | Disk transfer authentication |
| 902 | TCP | OpenShift nodes | VMware ESXi hosts | Disk transfer data copy |
Ports required for migrating from Red Hat Virtualization:

| Port | Protocol | Source | Destination | Purpose |
|---|---|---|---|---|
| 443 | TCP | OpenShift nodes | RHV Engine | RHV provider inventory Disk transfer authentication |
| 443 | TCP | OpenShift nodes | RHV hosts | Disk transfer authentication |
| 54322 | TCP | OpenShift nodes | RHV hosts | Disk transfer data copy |
3.4. Source virtual machine prerequisites
The following prerequisites apply to all migrations:
- ISO images and CD-ROMs are unmounted.
- Each NIC contains an IPv4 address, an IPv6 address, or both.
- The operating system of each VM is certified and supported as a guest operating system for conversions.
You can check that the operating system is supported by referring to the table in Converting virtual machines from other hypervisors to KVM with virt-v2v. See the columns of the table that refer to RHEL 8 hosts and RHEL 9 hosts:
- Conversions with MTV 2.6.z run on RHEL 8, so refer to the RHEL 8 hosts column.
- Conversions with MTV 2.7.z run on RHEL 9, so refer to the RHEL 9 hosts column.
- The name of a VM must not contain a period (.). Migration Toolkit for Virtualization (MTV) changes any period in a VM name to a dash (-).
- The name of a VM must not be the same as the name of any other VM in the OpenShift Virtualization environment.
Warning: MTV has limited support for the migration of VMs with dual-boot operating systems. For a dual-boot VM, MTV tries to convert the first boot disk that it finds. Alternatively, you can specify the root device in the MTV UI.
Note: Migration Toolkit for Virtualization automatically assigns a new name to a VM that does not comply with the rules.
Migration Toolkit for Virtualization makes the following changes when it automatically generates a new VM name:
- Excluded characters are removed.
- Uppercase letters are switched to lowercase letters.
- Any underscore (_) is changed to a dash (-).
This feature allows a migration to proceed smoothly even if someone enters a VM name that does not follow the rules.
VMs with Secure Boot enabled might not be migrated automatically
Virtual machines (VMs) with Secure Boot enabled currently might not be migrated automatically. This is because Secure Boot, a security standard developed by members of the PC industry to ensure that a device boots using only software that is trusted by the Original Equipment Manufacturer (OEM), would prevent the VMs from booting on the destination provider.
Workaround: The current workaround is to disable Secure Boot on the destination. For more details, see Disabling Secure Boot. (MTV-1548)
Windows VMs which are using Measured Boot cannot be migrated
Microsoft Windows virtual machines (VMs) that use the Measured Boot feature cannot be migrated. Measured Boot is a mechanism that prevents any kind of device change by checking each start-up component, including the firmware, all the way to the boot driver.
The alternative to migration is to re-create the Windows VM directly on OpenShift Virtualization.
3.5. Red Hat Virtualization prerequisites
The following prerequisites apply to Red Hat Virtualization migrations:
- To create a source provider, you must have at least the UserRole and ReadOnlyAdmin roles assigned to you. These are the minimum required permissions; however, any other administrator or superuser permissions also work.
  You must keep the UserRole and ReadOnlyAdmin roles until the virtual machines of the source provider have been migrated. Otherwise, the migration fails.
To migrate virtual machines:
You must have one of the following:
- RHV admin permissions. These permissions allow you to migrate any virtual machine in the system.
- DiskCreator and UserVmManager permissions on every virtual machine that you want to migrate.
- You must use a compatible version of Red Hat Virtualization.
You must have the Manager CA certificate, unless it was replaced by a third-party certificate, in which case, specify the Manager Apache CA certificate.
You can obtain the Manager CA certificate by navigating to https://<engine_host>/ovirt-engine/services/pki-resource?resource=ca-certificate&format=X509-PEM-CA in a browser.
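You can also fetch the certificate from the command line; for example, a sketch using curl, where -k skips TLS verification of the not-yet-trusted Engine certificate:

$ curl -k -o manager-ca.pem 'https://<engine_host>/ovirt-engine/services/pki-resource?resource=ca-certificate&format=X509-PEM-CA'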
- If you are migrating a virtual machine with a direct LUN disk, ensure that the nodes in the OpenShift Virtualization destination cluster that the VM is expected to run on can access the backend storage.
- Unlike disk images that are copied from a source provider to a target provider, LUNs are detached, but not removed, from virtual machines in the source provider and then attached to the virtual machines (VMs) that are created in the target provider.
- LUNs are not removed from the source provider during the migration in case fallback to the source provider is required. However, before re-attaching the LUNs to VMs in the source provider, ensure that the LUNs are not used by VMs in the target environment at the same time, because concurrent use might lead to data corruption.
3.6. OpenStack prerequisites
The following prerequisites apply to OpenStack migrations:
- You must use a compatible version of OpenStack.
3.6.1. Additional authentication methods for migrations with OpenStack source providers
MTV versions 2.6 and later support the following authentication methods for migrations with OpenStack source providers in addition to the standard username and password credential set:
- Token authentication
- Application credential authentication
You can use these methods to migrate virtual machines with OpenStack source providers using the command-line interface (CLI) the same way you migrate other virtual machines, except for how you prepare the Secret manifest.
3.6.1.1. Using token authentication with an OpenStack source provider
You can use token authentication, instead of username and password authentication, when you create an OpenStack source provider.
MTV supports both of the following types of token authentication:
- Token with user ID
- Token with user name
For each type of token authentication, you need to use data from OpenStack to create a Secret manifest.
Prerequisites
- You have an OpenStack account.
Procedure
- In the dashboard of the OpenStack web console, click Project > API Access.
- Expand Download OpenStack RC file and click OpenStack RC file.
  The file that is downloaded, referred to here as <openstack_rc_file>, includes the following fields used for token authentication:
  OS_AUTH_URL
  OS_PROJECT_ID
  OS_PROJECT_NAME
  OS_DOMAIN_NAME
  OS_USERNAME
- To get the data needed for token authentication, run the following command:
  $ openstack token issue
  The output, referred to here as <openstack_token_output>, includes the token, userID, and projectID that you need for authentication using a token with user ID.
- Create a Secret manifest for authentication using a token with user ID or using a token with user name, similar to the sketches that follow.
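The following are minimal sketches of the two Secret manifests. The stringData key names and the createdForProviderType label are assumptions modeled on the RC-file fields and token output above; verify them against the MTV provider documentation for your version before use.

For authentication using a token with user ID:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: openstack-token-userid-secret   # placeholder name
  namespace: openshift-mtv
  labels:
    createdForProviderType: openstack   # assumed label
type: Opaque
stringData:
  authType: token                       # assumed key
  token: <token_from_openstack_token_output>
  userID: <userID_from_openstack_token_output>
  projectID: <projectID_from_openstack_token_output>
  url: <OS_AUTH_URL_from_openstack_rc_file>
```

For authentication using a token with user name:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: openstack-token-username-secret   # placeholder name
  namespace: openshift-mtv
  labels:
    createdForProviderType: openstack      # assumed label
type: Opaque
stringData:
  authType: token                          # assumed key
  token: <token_from_openstack_token_output>
  username: <OS_USERNAME_from_openstack_rc_file>
  projectName: <OS_PROJECT_NAME_from_openstack_rc_file>
  domainName: <OS_DOMAIN_NAME_from_openstack_rc_file>
  url: <OS_AUTH_URL_from_openstack_rc_file>
```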
3.6.1.2. Using application credential authentication with an OpenStack source provider
You can use application credential authentication, instead of username and password authentication, when you create an OpenStack source provider.
MTV supports both of the following types of application credential authentication:
- Application credential ID
- Application credential name
For each type of application credential authentication, you need to use data from OpenStack to create a Secret manifest.
Prerequisites
You have an OpenStack account.
Procedure
- In the dashboard of the OpenStack web console, click Project > API Access.
- Expand Download OpenStack RC file and click OpenStack RC file.
  The file that is downloaded, referred to here as <openstack_rc_file>, includes the following fields used for application credential authentication:
  OS_AUTH_URL
  OS_PROJECT_ID
  OS_PROJECT_NAME
  OS_DOMAIN_NAME
  OS_USERNAME
- To get the data needed for application credential authentication, run the following command:
  $ openstack application credential create --role member --role reader --secret redhat forklift
  The output, referred to here as <openstack_credential_output>, includes:
  - The id and secret that you need for authentication using an application credential ID
  - The name and secret that you need for authentication using an application credential name
- Create a Secret manifest for authentication using the application credential ID or the application credential name, similar to the sketches that follow.
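The following are minimal sketches of the two Secret manifests. As with token authentication, the stringData key names and the createdForProviderType label are assumptions modeled on the fields above; verify them against the MTV provider documentation for your version before use.

For authentication using the application credential ID:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: openstack-appcred-id-secret     # placeholder name
  namespace: openshift-mtv
  labels:
    createdForProviderType: openstack   # assumed label
type: Opaque
stringData:
  authType: applicationcredential       # assumed key
  applicationCredentialID: <id_from_openstack_credential_output>
  applicationCredentialSecret: <secret_from_openstack_credential_output>
  url: <OS_AUTH_URL_from_openstack_rc_file>
```

For authentication using the application credential name:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: openstack-appcred-name-secret   # placeholder name
  namespace: openshift-mtv
  labels:
    createdForProviderType: openstack   # assumed label
type: Opaque
stringData:
  authType: applicationcredential       # assumed key
  applicationCredentialName: <name_from_openstack_credential_output>
  applicationCredentialSecret: <secret_from_openstack_credential_output>
  username: <OS_USERNAME_from_openstack_rc_file>
  domainName: <OS_DOMAIN_NAME_from_openstack_rc_file>
  url: <OS_AUTH_URL_from_openstack_rc_file>
```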
3.7. VMware prerequisites
It is strongly recommended to create a VDDK image to accelerate migrations. For more information, see Creating a VDDK image.
Virtual machine (VM) migrations do not work without VDDK when a VM is backed by VMware vSAN.
The following prerequisites apply to VMware migrations:
- You must use a compatible version of VMware vSphere.
- You must be logged in as a user with at least the minimal set of VMware privileges.
- To access the virtual machine using a pre-migration hook, VMware Tools must be installed on the source virtual machine.
- The VM operating system must be certified and supported for use as a guest operating system with OpenShift Virtualization and for conversion to KVM with virt-v2v.
- If you are running a warm migration, you must enable changed block tracking (CBT) on the VMs and on the VM disks.
- If you are migrating more than 10 VMs from an ESXi host in the same migration plan, you must increase the NFC service memory of the host.
- It is strongly recommended to disable hibernation because Migration Toolkit for Virtualization (MTV) does not support migrating hibernated VMs.
In the event of a power outage, data might be lost for a VM with disabled hibernation. However, if hibernation is not disabled, the migration will fail.
Neither MTV nor OpenShift Virtualization supports conversion of Btrfs file systems when migrating VMs from VMware.
VMware privileges
The following minimal set of VMware privileges is required to migrate virtual machines to OpenShift Virtualization with the Migration Toolkit for Virtualization (MTV).
| Privilege | Description |
|---|---|
| Virtual machine.Interaction privileges: | |
| Virtual machine.Interaction.Power Off | Allows powering off a powered-on virtual machine. This operation powers down the guest operating system. |
| Virtual machine.Interaction.Power On | Allows powering on a powered-off virtual machine and resuming a suspended virtual machine. |
| Virtual machine.Interaction.Guest operating system management by VIX API | Allows managing a virtual machine by the VMware VIX API. |
| Virtual machine.Provisioning privileges: Note: All Virtual machine.Provisioning privileges are required. | |
| Virtual machine.Provisioning.Allow disk access | Allows opening a disk on a virtual machine for random read and write access. Used mostly for remote disk mounting. |
| Virtual machine.Provisioning.Allow file access | Allows operations on files associated with a virtual machine, including VMX, disks, logs, and NVRAM. |
| Virtual machine.Provisioning.Allow read-only disk access | Allows opening a disk on a virtual machine for random read access. Used mostly for remote disk mounting. |
| Virtual machine.Provisioning.Allow virtual machine download | Allows read operations on files associated with a virtual machine, including VMX, disks, logs, and NVRAM. |
| Virtual machine.Provisioning.Allow virtual machine files upload | Allows write operations on files associated with a virtual machine, including VMX, disks, logs, and NVRAM. |
| Virtual machine.Provisioning.Clone template | Allows cloning of a template. |
| Virtual machine.Provisioning.Clone virtual machine | Allows cloning of an existing virtual machine and allocation of resources. |
| Virtual machine.Provisioning.Create template from virtual machine | Allows creation of a new template from a virtual machine. |
| Virtual machine.Provisioning.Customize guest | Allows customization of a virtual machine's guest operating system without moving the virtual machine. |
| Virtual machine.Provisioning.Deploy template | Allows deployment of a virtual machine from a template. |
| Virtual machine.Provisioning.Mark as template | Allows marking an existing powered-off virtual machine as a template. |
| Virtual machine.Provisioning.Mark as virtual machine | Allows marking an existing template as a virtual machine. |
| Virtual machine.Provisioning.Modify customization specification | Allows creation, modification, or deletion of customization specifications. |
| Virtual machine.Provisioning.Promote disks | Allows promote operations on a virtual machine's disks. |
| Virtual machine.Provisioning.Read customization specifications | Allows reading a customization specification. |
| Virtual machine.Snapshot management privileges: | |
| Virtual machine.Snapshot management.Create snapshot | Allows creation of a snapshot from the virtual machine's current state. |
| Virtual machine.Snapshot management.Remove snapshot | Allows removal of a snapshot from the snapshot history. |
| Datastore privileges: | |
| Datastore.Browse datastore | Allows exploring the contents of a datastore. |
| Datastore.Low level file operations | Allows performing low-level file operations - read, write, delete, and rename - in a datastore. |
| Sessions privileges: | |
| Sessions.Validate session | Allows verification of the validity of a session. |
| Cryptographic operations privileges: | |
| Cryptographic operations.Decrypt | Allows decryption of an encrypted virtual machine. |
| Cryptographic operations.Direct Access | Allows access to encrypted resources. |
3.7.1. Creating a VDDK image
It is strongly recommended that you use Migration Toolkit for Virtualization (MTV) with the VMware Virtual Disk Development Kit (VDDK) SDK when transferring virtual disks from VMware vSphere.
Creating a VDDK image, although optional, is highly recommended. Using MTV without VDDK is not recommended and could result in significantly lower migration speeds.
To make use of this feature, you download the VMware Virtual Disk Development Kit (VDDK), build a VDDK image, and push the VDDK image to your image registry.
The VDDK package contains symbolic links, therefore, the procedure of creating a VDDK image must be performed on a file system that preserves symbolic links (symlinks).
Storing the VDDK image in a public registry might violate the VMware license terms.
Prerequisites
- Red Hat OpenShift image registry.
- podman installed.
- If you are using an external registry, OpenShift Virtualization must be able to access it.
Procedure
- Create and navigate to a temporary directory:
$ mkdir /tmp/<dir_name> && cd /tmp/<dir_name>
- In a browser, navigate to the VMware VDDK version 8 download page.
- Select version 8.0.1 and click Download.
In order to migrate to OpenShift Virtualization 4.12, download VDDK version 7.0.3.2 from the VMware VDDK version 7 download page.
- Save the VDDK archive file in the temporary directory.
Extract the VDDK archive:
$ tar -xzf VMware-vix-disklib-<version>.x86_64.tar.gz
- Create a Dockerfile:
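The original Dockerfile listing was lost in conversion; the following is a minimal sketch modeled on the upstream Forklift example. The base image is an assumption and can be any image available to your cluster:

```dockerfile
FROM registry.access.redhat.com/ubi8/ubi-minimal
USER 1001
# Copy the extracted VDDK distribution into the image
COPY vmware-vix-disklib-distrib /vmware-vix-disklib-distrib
RUN mkdir -p /opt
# At run time, copy the VDDK libraries to /opt for the conversion pod
ENTRYPOINT ["cp", "-r", "/vmware-vix-disklib-distrib", "/opt"]
```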
- Build the VDDK image:
$ podman build . -t <registry_route_or_server_path>/vddk:<tag>
- Push the VDDK image to the registry:
$ podman push <registry_route_or_server_path>/vddk:<tag>
- Ensure that the image is accessible to your OpenShift Virtualization environment.
3.7.2. Increasing the NFC service memory of an ESXi host
If you are migrating more than 10 VMs from an ESXi host in the same migration plan, you must increase the NFC service memory of the host. Otherwise, the migration will fail because the NFC service memory is limited to 10 parallel connections.
Procedure
- Log in to the ESXi host as root.
- Change the value of maxMemory to 1000000000 in /etc/vmware/hostd/config.xml:
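The original config.xml excerpt was lost in conversion; the following is a sketch of the relevant nfcsvc section after the change. Surrounding elements vary by ESXi version, and the maxStreamMemory value shown is an assumption based on typical defaults:

```xml
<config>
  <nfcsvc>
    <path>libnfcsvc.so</path>
    <enabled>true</enabled>
    <!-- Raised from the default to allow more parallel NFC connections -->
    <maxMemory>1000000000</maxMemory>
    <maxStreamMemory>10485760</maxStreamMemory>
  </nfcsvc>
</config>
```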
- Restart hostd:
# /etc/init.d/hostd restart
You do not need to reboot the host.
3.7.3. VDDK validator containers need requests and limits
If you have the cluster or project resource quotas set, you must ensure that you have a sufficient quota for the MTV pods to perform the migration.
You can see the defaults, which you can override in the ForkliftController custom resource (CR), listed as follows. If necessary, you can adjust these defaults.
These settings are highly dependent on your environment. If many migrations happen at once and the quotas are too low for them, the migrations can fail. This can also be correlated to the MAX_VM_INFLIGHT setting, which determines how many VMs or disks are migrated at once.
Defaults that can be overridden in the ForkliftController CR:
The following settings affect both cold and warm migrations. Cold migration is likely to be more resource-intensive because it performs the disk copy; for warm migration, you could potentially reduce the requests.
- virt_v2v_container_limits_cpu: 4000m
- virt_v2v_container_limits_memory: 8Gi
- virt_v2v_container_requests_cpu: 1000m
- virt_v2v_container_requests_memory: 1Gi

Note: Cold and warm migration using virt-v2v can be resource-intensive. For more details, see Compute power and RAM.
This affects any migrations with hooks:
- hooks_container_limits_cpu: 1000m
- hooks_container_limits_memory: 1Gi
- hooks_container_requests_cpu: 100m
- hooks_container_requests_memory: 150Mi
This affects any OVA migrations:
- ova_container_limits_cpu: 1000m
- ova_container_limits_memory: 1Gi
- ova_container_requests_cpu: 100m
- ova_container_requests_memory: 150Mi
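For example, a minimal sketch of overriding a few of these defaults in the ForkliftController CR; the values shown are illustrative only:

```yaml
apiVersion: forklift.konveyor.io/v1beta1
kind: ForkliftController
metadata:
  name: forklift-controller
  namespace: openshift-mtv
spec:
  # Reduced virt-v2v requests, for example for warm migrations (illustrative)
  virt_v2v_container_requests_cpu: 500m
  virt_v2v_container_requests_memory: 512Mi
  # Raised hook container memory limit (illustrative)
  hooks_container_limits_memory: 2Gi
```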
3.8. Open Virtual Appliance (OVA) prerequisites
The following prerequisites apply to Open Virtual Appliance (OVA) file migrations:
- All OVA files are created by VMware vSphere.
Migration of OVA files that were not created by VMware vSphere but are compatible with vSphere might succeed. However, migration of such files is not supported by MTV. MTV supports only OVA files created by VMware vSphere.
The OVA files are in one or more folders under an NFS shared directory in one of the following structures:
- In one or more compressed Open Virtualization Format (OVF) packages that hold all the VM information.
  The filename of each compressed package must have the .ova extension. Several compressed packages can be stored in the same folder.
  When this structure is used, MTV scans the root folder and the first-level subfolders for compressed packages.
  For example, if the NFS share is /nfs, then:
  The folder /nfs is scanned.
  The folder /nfs/subfolder1 is scanned.
  But /nfs/subfolder1/subfolder2 is not scanned.
- In extracted OVF packages.
  When this structure is used, MTV scans the root folder, first-level subfolders, and second-level subfolders for extracted OVF packages. However, there can be only one .ovf file in a folder. Otherwise, the migration will fail.
  For example, if the NFS share is /nfs, then:
  The OVF file /nfs/vm.ovf is scanned.
  The OVF file /nfs/subfolder1/vm.ovf is scanned.
  The OVF file /nfs/subfolder1/subfolder2/vm.ovf is scanned.
  But the OVF file /nfs/subfolder1/subfolder2/subfolder3/vm.ovf is not scanned.
3.9. Software compatibility guidelines
You must install compatible software versions.
| Migration Toolkit for Virtualization | Red Hat OpenShift | OpenShift Virtualization | VMware vSphere | Red Hat Virtualization | OpenStack |
|---|---|---|---|---|---|
| 2.7 | 4.17, 4.16, 4.15 | 4.17, 4.16, 4.15 | 6.5 or later | 4.4 SP1 or later | 16.1 or later |
MTV was tested only with Red Hat Virtualization (RHV) 4.4 SP1. Migration from RHV 4.3 has not been tested with MTV 2.7. While not supported, basic migrations from RHV 4.3 are expected to work; migrations from RHV 4.3.11 were tested with MTV 2.3 and may work in practice in many environments using MTV 2.7. In any case, it is advised to upgrade Red Hat Virtualization Manager (RHVM) to the supported version listed above before migrating to OpenShift Virtualization.
3.9.1. OpenShift Operator Life Cycles
For more information about the software maintenance Life Cycle classifications for Operators shipped by Red Hat for use with OpenShift Container Platform, see OpenShift Operator Life Cycles.
Chapter 4. Installing and configuring the MTV Operator
You can install the MTV Operator by using the Red Hat OpenShift web console or the command-line interface (CLI).
In Migration Toolkit for Virtualization (MTV) version 2.4 and later, the MTV Operator includes the MTV plugin for the Red Hat OpenShift web console.
After you install the MTV Operator by using either the Red Hat OpenShift web console or the CLI, you can configure the Operator.
4.1. Installing the MTV Operator by using the Red Hat OpenShift web console
You can install the MTV Operator by using the Red Hat OpenShift web console.
Prerequisites
- Red Hat OpenShift 4.17 or later installed.
- OpenShift Virtualization Operator installed on an OpenShift migration target cluster.
- You must be logged in as a user with cluster-admin permissions.
Procedure
- In the Red Hat OpenShift web console, click Operators → OperatorHub.
- Use the Filter by keyword field to search for mtv-operator.
- Click Migration Toolkit for Virtualization Operator and then click Install.
- Click Create ForkliftController when the button becomes active.
Click Create.
Your ForkliftController appears in the list that is displayed.
- Click Workloads → Pods to verify that the MTV pods are running.
Click Operators → Installed Operators to verify that Migration Toolkit for Virtualization Operator appears in the openshift-mtv project with the status Succeeded.
When the plugin is ready, you are prompted to reload the page. The Migration menu item is automatically added to the navigation bar, displayed on the left of the Red Hat OpenShift web console.
4.2. Installing the MTV Operator by using the command-line interface
You can install the MTV Operator by using the command-line interface (CLI).
Prerequisites
- Red Hat OpenShift 4.17 or later installed.
- OpenShift Virtualization Operator installed on an OpenShift migration target cluster.
- You must be logged in as a user with cluster-admin permissions.
Procedure
- Create the openshift-mtv project.
- Create an OperatorGroup CR called migration.
- Create a Subscription CR for the Operator.
- Create a ForkliftController CR.
The manifests for these four steps are sketched below.
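The original manifest listings were lost in conversion; the following minimal sketches are modeled on the upstream Forklift examples. The channel and catalog source names in the Subscription are assumptions that may differ in your Operator catalog:

```yaml
apiVersion: project.openshift.io/v1
kind: Project
metadata:
  name: openshift-mtv
---
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: migration
  namespace: openshift-mtv
spec:
  targetNamespaces:
    - openshift-mtv
---
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: mtv-operator
  namespace: openshift-mtv
spec:
  channel: release-v2.7            # assumed channel name
  installPlanApproval: Automatic
  name: mtv-operator
  source: redhat-operators         # assumed catalog source
  sourceNamespace: openshift-marketplace
---
apiVersion: forklift.konveyor.io/v1beta1
kind: ForkliftController
metadata:
  name: forklift-controller
  namespace: openshift-mtv
spec: {}                           # add settings from "Configuring the MTV Operator" as needed
```

Save the manifests to a file and apply them with $ oc apply -f <filename>.yaml.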
- Verify that the MTV pods are running:
$ oc get pods -n openshift-mtv
4.3. Configuring the MTV Operator
You can configure all of the following settings of the MTV Operator by modifying the ForkliftController CR, or in the Settings section of the Overview page, unless otherwise indicated.
- Maximum number of virtual machines (VMs) or disks per plan that Migration Toolkit for Virtualization (MTV) can migrate simultaneously.
- How long must gather reports are retained before being automatically deleted.
- CPU limit allocated to the main controller container.
- Memory limit allocated to the main controller container.
- Interval at which a new snapshot is requested before initiating a warm migration.
- Frequency with which the system checks the status of snapshot creation or removal during a warm migration.
- Percentage of space in persistent volumes allocated as file system overhead when the storageclass is filesystem (ForkliftController CR only).
- Fixed amount of additional space allocated in persistent block volumes. This setting is applicable for any storageclass that is block-based (ForkliftController CR only).
- Configuration map of operating systems to preferences for vSphere source providers (ForkliftController CR only).
- Configuration map of operating systems to preferences for Red Hat Virtualization (RHV) source providers (ForkliftController CR only).
The procedure for configuring these settings by using the user interface is presented in Configuring MTV settings. The procedure for configuring these settings by modifying the ForkliftController CR follows.
Procedure
Change a parameter's value in the spec portion of the ForkliftController CR by adding the label and value as follows:

spec:
  label: value

Labels that you can configure by using the CLI are shown in the table that follows, along with a description of each label and its default value.
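For example, a sketch of applying a single label from the CLI with a merge patch; the command assumes the ForkliftController is named forklift-controller in the openshift-mtv namespace:

$ oc patch forkliftcontroller forklift-controller -n openshift-mtv --type merge -p '{"spec": {"controller_max_vm_inflight": 30}}'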
| Label | Description | Default value |
|---|---|---|
| controller_max_vm_inflight | The maximum number of VMs or disks that can be migrated simultaneously. The meaning varies with the source provider, as described in Configuring the controller_max_vm_inflight label. | 20 |
| must_gather_api_cleanup_max_age | The duration in hours for retaining must gather reports before they are automatically deleted. | -1 (disabled) |
| controller_container_vcpu_limit | The CPU limit allocated to the main controller container. | 500m |
| controller_container_memory_limit | The memory limit allocated to the main controller container. | 800Mi |
| controller_precopy_interval | The interval in minutes at which a new snapshot is requested before initiating a warm migration. | 60 |
| controller_snapshot_status_check_rate_seconds | The frequency in seconds with which the system checks the status of snapshot creation or removal during a warm migration. | 10 |
| controller_filesystem_overhead | Percentage of space in persistent volumes allocated as file system overhead when the storageclass is filesystem. | 10 |
| controller_block_overhead | Fixed amount of additional space allocated in persistent block volumes. This setting is applicable for any storageclass that is block-based. | 0 |
| vsphere_osmap_cm | Configuration map for vSphere source providers. This configuration map maps the operating system of the incoming VM to an OpenShift Virtualization preference name. It needs to be in the namespace where the MTV Operator is deployed. To see the list of preferences in your OpenShift Virtualization environment, open the OpenShift web console and click Virtualization → Preferences. You can add values to the configuration map when this label has the default value. | vsphere-osmap |
| ovirt_osmap_cm | Configuration map for RHV source providers. This configuration map maps the operating system of the incoming VM to an OpenShift Virtualization preference name. It needs to be in the namespace where the MTV Operator is deployed. To see the list of preferences in your OpenShift Virtualization environment, open the OpenShift web console and click Virtualization → Preferences. You can add values to the configuration map when this label has the default value. | ovirt-osmap |
4.3.1. Configuring the controller_max_vm_inflight label
The meaning of the controller_max_vm_inflight label, which is shown in the UI as Max concurrent virtual machine migrations, varies by the source provider of the migration.
For all migrations except OVA or VMware migrations, the label specifies the maximum number of disks that Migration Toolkit for Virtualization (MTV) can transfer simultaneously. In these migrations, MTV migrates the disks in parallel. This means that if the combined number of disks that you want to migrate is greater than the value of the setting, additional disks must wait until the queue is free, without regard for whether a VM has finished migrating.
For example, if the value of the label is 15, and VM A has 5 disks, VM B has 5 disks, and VM C has 6 disks; all the disks except for the 16th disk start migrating at the same time. Once any of them has migrated, the 16th disk can be migrated, even though not all the disks on VM A and the disks on VM B have finished migrating.
For OVA migrations, the label specifies the maximum number of VMs that MTV can migrate simultaneously, meaning that the disks of additional VMs must wait until at least one VM has been completely migrated.
For example, if the value of the label is 2, and VM A has 5 disks, VM B has 5 disks, and VM C has 6 disks, all the disks on VM C must wait to migrate until all the disks on either VM A or VM B finish migrating.
For VMware migrations, the label has the following meanings:
Cold migration:
- To local OpenShift Virtualization: VMs for each ESXi host that can migrate simultaneously.
- To remote OpenShift Virtualization: Disks for each ESXi host that can migrate simultaneously.
- Warm migration: Disks for each ESXi host that can migrate simultaneously.
Chapter 5. Migrating virtual machines by using the Red Hat OpenShift web console
Use the MTV user interface to migrate virtual machines (VMs). It is located in the Virtualization section of the Red Hat OpenShift web console.
5.1. The MTV user interface
The Migration Toolkit for Virtualization (MTV) user interface is integrated into the OpenShift web console.
In the left-hand panel, you can choose a page related to a component of the migration process, for example, Providers for virtualization, or, if you are an administrator, you can choose Overview, which contains information about migrations and lets you configure MTV settings.
Figure 5.1. MTV extension interface
In pages related to components, you can click on the Projects list, which is in the upper-left portion of the page, and see which projects (namespaces) you are allowed to work with.
- If you are an administrator, you can see all projects.
- If you are a non-administrator, you can see only the projects that you have permissions to work with.
5.1.1. The MTV Overview page
The Migration Toolkit for Virtualization (MTV) Overview page displays system-wide information about migrations and a list of Settings you can change.
If you have Administrator privileges, you can access the Overview page by clicking Migration → Overview in the Red Hat OpenShift web console.
The Overview page has 3 tabs:
- Overview
- YAML
- Metrics
5.1.1.1. Overview tab
The Overview tab lets you see:
- Operator: The namespace on which the MTV Operator is deployed and the status of the Operator
- Pods: The name, status, and creation time of each pod that was deployed by the MTV Operator
- Conditions: Status of the MTV Operator:
  - Failure: Last failure. False indicates no failure since deployment.
  - Running: Whether the Operator is currently running and waiting for the next reconciliation.
  - Successful: Last successful reconciliation.
5.1.1.2. YAML tab
The YAML tab shows the ForkliftController custom resource that defines the operation of the MTV Operator. You can modify the custom resource from this tab.
5.1.1.3. Metrics tab
The Metrics tab lets you see:
Migrations: The number of migrations performed using MTV:
- Total
- Running
- Failed
- Succeeded
- Canceled
Virtual Machine Migrations: The number of VMs migrated using MTV:
- Total
- Running
- Failed
- Succeeded
- Canceled
Since a single migration might involve many virtual machines, the number of migrations performed using MTV might vary significantly from the number of virtual machines that have been migrated using MTV.
- Chart showing the number of running, failed, and succeeded migrations performed using MTV for each of the last 7 days
- Chart showing the number of running, failed, and succeeded virtual machine migrations performed using MTV for each of the last 7 days
5.1.2. Configuring MTV settings
If you have Administrator privileges, you can access the Overview page and change the following settings in it:
| Setting | Description | Default value |
|---|---|---|
| Max concurrent virtual machine migrations | The maximum number of concurrent VM migrations. The meaning varies with the source provider, as described in Configuring the controller_max_vm_inflight label. | 20 |
| Must gather cleanup after (hours) | The duration for retaining must gather reports before they are automatically deleted. | Disabled |
| Controller main container CPU limit | The CPU limit allocated to the main controller container. | 500 m |
| Controller main container Memory limit | The memory limit allocated to the main controller container. | 800 Mi |
| Precopy interval (minutes) | The interval at which a new snapshot is requested before initiating a warm migration. | 60 |
| Snapshot polling interval (seconds) | The frequency with which the system checks the status of snapshot creation or removal during a warm migration. | 10 |
Procedure
- In the Red Hat OpenShift web console, click Migration → Overview. The Settings list is on the right-hand side of the page.
- In the Settings list, click the Edit icon of the setting you want to change.
- Choose a setting from the list.
- Click Save.
5.2. Migrating virtual machines using the MTV user interface
Use the MTV user interface to migrate VMs from the following providers:
- VMware vSphere
- Red Hat Virtualization (RHV)
- OpenStack
- Open Virtual Appliances (OVAs) that were created by VMware vSphere
- OpenShift Virtualization clusters
For all migrations, you specify the source provider, the destination provider, and the migration plan. The specific procedures vary per provider.
You must ensure that all prerequisites are met.
VMware only: You must have the minimal set of VMware privileges.
VMware only: Creating a VMware Virtual Disk Development Kit (VDDK) image will increase migration speed.
Chapter 6. Migrating virtual machines from VMware vSphere
6.1. Adding a VMware vSphere source provider
You can migrate VMware vSphere VMs from VMware vCenter or from a VMware ESX/ESXi server. In MTV versions 2.6 and later, you can migrate directly from an ESX/ESXi server, without going through vCenter, by specifying the SDK endpoint of the ESX/ESXi server.
EMS enforcement is disabled for migrations with VMware vSphere source providers in order to enable migrations from versions of vSphere that are supported by Migration Toolkit for Virtualization but do not comply with the 2023 FIPS requirements. Therefore, users should consider whether migrations from vSphere source providers risk their compliance with FIPS. Supported versions of vSphere are specified in Software compatibility guidelines.
If you input any value of maximum transmission unit (MTU) besides the default value in your migration network, you must also input the same value in the OpenShift transfer network that you use. For more information about the OpenShift transfer network, see Creating a migration plan.
Prerequisites
- It is strongly recommended to create a VMware Virtual Disk Development Kit (VDDK) image in a secure registry that is accessible to all clusters. A VDDK image accelerates migration and reduces the risk of a plan failing. If you are not using VDDK and a plan fails, retry with VDDK installed. For more information, see Creating a VDDK image.
Virtual machine (VM) migrations do not work without VDDK when a VM is backed by VMware vSAN.
Procedure
- In the Red Hat OpenShift web console, click Migration → Providers for virtualization.
- Click Create Provider.
- Click vSphere.
Specify the following fields:
Provider details
- Provider resource name: Name of the source provider.
- Endpoint type: Select the vSphere provider endpoint type. Options: vCenter or ESXi. You can migrate virtual machines from vCenter, an ESX/ESXi server that is not managed by vCenter, or from an ESX/ESXi server that is managed by vCenter but does not go through vCenter.
- URL: URL of the SDK endpoint of the vCenter on which the source VM is mounted. Ensure that the URL includes the sdk path, usually /sdk. For example, https://vCenter-host-example.com/sdk. If a certificate for FQDN is specified, the value of this field needs to match the FQDN in the certificate.
- VDDK init image: VDDKInitImage path. It is strongly recommended to create a VDDK init image to accelerate migrations. For more information, see Creating a VDDK image.
Provider credentials
- Username: vCenter user or ESXi user. For example, user@vsphere.local.
- Password: vCenter user password or ESXi user password.
Choose one of the following options for validating CA certificates:
- Use a custom CA certificate: Migrate after validating a custom CA certificate.
- Use the system CA certificate: Migrate after validating the system CA certificate.
- Skip certificate validation: Migrate without validating a CA certificate.
  - To use a custom CA certificate, leave the Skip certificate validation switch toggled to the left, and either drag the CA certificate to the text box or browse for it and click Select.
  - To use the system CA certificate, leave the Skip certificate validation switch toggled to the left, and leave the CA certificate text box empty.
- To skip certificate validation, toggle the Skip certificate validation switch to the right.
Optional: Ask MTV to fetch a custom CA certificate from the provider’s API endpoint URL.
- Click Fetch certificate from URL. The Verify certificate window opens.
If the details are correct, select the I trust the authenticity of this certificate checkbox, and then, click Confirm. If not, click Cancel, and then, enter the correct certificate information manually.
Once confirmed, the CA certificate will be used to validate subsequent communication with the API endpoint.
Click Create provider to add and save the provider.
The provider appears in the list of providers.
Note: It might take a few minutes for the provider to have the status Ready.
- Optional: Add access to the UI of the provider:
On the Providers page, click the provider.
The Provider details page opens.
- Click the Edit icon under External UI web link.
Enter the link and click Save.
Note: If you do not enter a link, MTV attempts to calculate the correct link.
- If MTV succeeds, the hyperlink of the field points to the calculated link.
- If MTV does not succeed, the field remains empty.
6.2. Selecting a migration network for a VMware source provider
You can select a migration network in the Red Hat OpenShift web console for a source provider to reduce risk to the source environment and to improve performance.
Using the default network for migration can result in poor performance because the network might not have sufficient bandwidth. This situation can have a negative effect on the source platform because the disk transfer operation might saturate the network.
You can also control the network from which disks are transferred from a host by using the Network File Copy (NFC) service in vSphere.
If you input any value of maximum transmission unit (MTU) besides the default value in your migration network, you must also input the same value in the OpenShift transfer network that you use. For more information about the OpenShift transfer network, see Creating a migration plan.
Prerequisites
- The migration network must have sufficient throughput, with a minimum speed of 10 Gbps, for disk transfer.
The migration network must be accessible to the OpenShift Virtualization nodes through the default gateway.
Note: The source virtual disks are copied by a pod that is connected to the pod network of the target namespace.
- The migration network should have jumbo frames enabled.
Procedure
- In the Red Hat OpenShift web console, click Migration → Providers for virtualization.
- Click the host number in the Hosts column beside a provider to view a list of hosts.
- Select one or more hosts and click Select migration network.
Specify the following fields:
- Network: Network name
- ESXi host admin username: For example, root.
- ESXi host admin password: Password.
- Click Save.
Verify that the status of each host is Ready.
If a host status is not Ready, the host might be unreachable on the migration network or the credentials might be incorrect. You can modify the host configuration and save the changes.
6.3. Adding an OpenShift Virtualization destination provider
You can use a Red Hat OpenShift Virtualization provider as both a source provider and destination provider.
Specifically, the host cluster that is automatically added as an OpenShift Virtualization provider can be used as both a source provider and a destination provider.
You can also add another OpenShift Virtualization destination provider to the Red Hat OpenShift web console in addition to the default OpenShift Virtualization destination provider, which is the cluster where you installed MTV.
You can migrate VMs from the cluster that MTV is deployed on to another cluster, or from a remote cluster to the cluster that MTV is deployed on.
Prerequisites
- You must have an OpenShift Virtualization service account token with cluster-admin privileges.
Procedure
- In the Red Hat OpenShift web console, click Migration → Providers for virtualization.
- Click Create Provider.
- Click OpenShift Virtualization.
Specify the following fields:
- Provider resource name: Name of the source provider
- URL: URL of the endpoint of the API server
- Service account bearer token: Token for a service account with cluster-admin privileges.
  If both URL and Service account bearer token are left blank, the local OpenShift cluster is used.
Choose one of the following options for validating CA certificates:
- Use a custom CA certificate: Migrate after validating a custom CA certificate.
- Use the system CA certificate: Migrate after validating the system CA certificate.
- Skip certificate validation: Migrate without validating a CA certificate.
  - To use a custom CA certificate, leave the Skip certificate validation switch toggled to the left, and either drag the CA certificate to the text box or browse for it and click Select.
- To use the system CA certificate, leave the Skip certificate validation switch toggled to the left, and leave the CA certificate text box empty.
- To skip certificate validation, toggle the Skip certificate validation switch to the right.
Optional: Ask MTV to fetch a custom CA certificate from the provider’s API endpoint URL.
- Click Fetch certificate from URL. The Verify certificate window opens.
If the details are correct, select the I trust the authenticity of this certificate checkbox, and then, click Confirm. If not, click Cancel, and then, enter the correct certificate information manually.
Once confirmed, the CA certificate will be used to validate subsequent communication with the API endpoint.
Click Create provider to add and save the provider.
The provider appears in the list of providers.
6.4. Selecting a migration network for an OpenShift Virtualization provider
You can select a default migration network for an OpenShift Virtualization provider in the Red Hat OpenShift web console to improve performance. The default migration network is used to transfer disks to the namespaces in which it is configured.
If you do not select a migration network, the default migration network is the pod network, which might not be optimal for disk transfer.
You can override the default migration network of the provider by selecting a different network when you create a migration plan.
Procedure
- In the Red Hat OpenShift web console, click Migration → Providers for virtualization.
Click the OpenShift Virtualization provider whose migration network you want to change.
The Provider details page opens.
- Click the Networks tab.
- Click Set default transfer network.
- Select a default transfer network from the list and click Save.
6.5. Creating a migration plan
Use the Red Hat OpenShift web console to create a migration plan. Specify the source provider, the virtual machines (VMs) you want to migrate, and other plan details.
Do not include virtual machines with guest-initiated storage connections, such as Internet Small Computer Systems Interface (iSCSI) connections or Network File System (NFS) mounts. These require either additional planning before migration or reconfiguration after migration.
This prevents concurrent disk access to the storage the guest points to.
A plan cannot contain more than 500 VMs or 500 disks.
Procedure
In the Red Hat OpenShift web console, click Plans for virtualization and then click Create Plan.
The Create migration plan wizard opens to the Select source provider interface.
Select the source provider of the VMs you want to migrate.
The Select virtual machines interface opens.
Select the VMs you want to migrate and click Next.
The Create migration plan pane opens. It displays the source provider’s name and suggestions for a target provider and namespace, a network map, and a storage map.
- Enter the Plan name.
- To change the Target provider, the Target namespace, or elements of the Network map or the Storage map, select an item from the relevant list.
- To add either a Network map or a Storage map, click the + sign and add a mapping.
Click Create migration plan.
MTV validates the migration plan, and the Plan details page opens, indicating whether the plan is ready for use or contains an error.
The details of the plan are listed, and you can edit the items you filled in on the previous page. If you make any changes, MTV validates the plan again.
Check the following items in the Settings section of the page:
- Warm migration: By default, all migrations are cold migrations. For a warm migration, click the Edit icon and select Warm migration.
Transfer Network: The network used to transfer the VMs to OpenShift Virtualization. By default, this is the default transfer network of the provider. Verify that the transfer network is in the selected target namespace. To edit the transfer network, click the Edit icon, choose a different transfer network from the list in the window that opens, and click Save.
You can configure an OpenShift network in the OpenShift web console by clicking Networking > NetworkAttachmentDefinitions.
To learn more about the different types of networks OpenShift supports, see Additional Networks in OpenShift Container Platform.
If you want to adjust the maximum transmission unit (MTU) of the OpenShift transfer network, you must also change the MTU of the VMware migration network. For more information, see Selecting a migration network for a VMware source provider.
- Target namespace: Destination namespace to be used by all the migrated VMs. By default, this is the current or active namespace. To edit the namespace, click the Edit icon, choose a different target namespace from the list in the window that opens, and click Save.
Preserve static IPs: By default, virtual network interface controllers (vNICs) change during the migration process. As a result, vNICs that are configured with a static IP linked to the interface name in the guest VM lose their IP. To avoid this, click the Edit icon next to Preserve static IPs and toggle the Whether to preserve the static IPs switch in the window that opens. Then click Save.
MTV then issues a warning message about any VMs for which vNIC properties are missing. To retrieve any missing vNIC properties, run those VMs in vSphere so that the vNIC properties are reported to MTV.
- Disk decryption passphrases: For disks encrypted using Linux Unified Key Setup (LUKS). To enter a list of decryption passphrases for LUKS-encrypted devices, in the Settings section, click the Edit icon next to Disk decryption passphrases, enter the passphrases, and then click Save. You do not need to enter the passphrases in a specific order. For each LUKS-encrypted device, MTV tries each passphrase until one unlocks the device.
Root device: Applies to multi-boot VM migrations only. By default, MTV uses the first bootable device detected as the root device.
To specify a different root device, in the Settings section, click the Edit icon next to Root device and choose a device from the list of commonly-used options, or enter a device in the text box.
MTV uses the following format for the disk location: /dev/sd<disk_identifier><disk_partition>. For example, if the second disk is the root device and the operating system is on the disk’s second partition, the format is /dev/sdb2. After you enter the boot device, click Save. If the conversion fails because the boot device provided is incorrect, you can find the correct information by checking the conversion pod logs.
When you migrate a VMware 7 VM to an OpenShift 4.13+ platform that uses CentOS 7.9, the names of the network interfaces change, and the static IP configuration for the VM no longer works.
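For reference, the plan that the wizard creates is stored as a Plan custom resource. The following is a minimal sketch of such a resource, assuming MTV is installed in the openshift-mtv namespace; all names and the VM reference are placeholders:

    apiVersion: forklift.konveyor.io/v1beta1
    kind: Plan
    metadata:
      name: my-migration-plan            # example name
      namespace: openshift-mtv
    spec:
      warm: false                        # true for a warm migration
      targetNamespace: my-vms            # destination namespace for the migrated VMs
      provider:
        source:
          name: vsphere-provider         # example source provider
          namespace: openshift-mtv
        destination:
          name: host                     # the local cluster provider
          namespace: openshift-mtv
      map:
        network:
          name: my-network-map           # example network map
          namespace: openshift-mtv
        storage:
          name: my-storage-map           # example storage map
          namespace: openshift-mtv
      vms:
        - name: my-vm                    # VMs can also be referenced by id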
6.6. Running a migration plan
You can run a migration plan and view its progress in the Red Hat OpenShift web console.
Prerequisites
- Valid migration plan.
Procedure
In the Red Hat OpenShift web console, click Migration → Plans for virtualization.
The Plans list displays the source and target providers, the number of virtual machines (VMs) being migrated, the status, the date that the migration started, and the description of each plan.
- Click Start beside a migration plan to start the migration.
Click Start in the confirmation window that opens.
The plan’s Status changes to Running, and the migration’s progress is displayed.
Warm migration only:
- The precopy stage starts.
- Click Cutover to complete the migration.
Optional: Click the links in the migration’s Status to see its overall status and the status of each VM:
- The link on the left indicates whether the migration failed, succeeded, or is ongoing. It also reports the number of VMs whose migration succeeded, failed, or was canceled.
The link on the right opens the Virtual Machines tab of the Plan Details page. For each VM, the tab displays the following data:
- The name of the VM
- The start and end times of the migration
- The amount of data copied
A progress pipeline for the VM’s migration
Warning: To avoid data corruption, vMotion, including svMotion, and relocation must be disabled for VMs that are being imported.
Optional: To view your migration’s logs, either as it is running or after it is completed, perform the following actions:
- Click the Virtual Machines tab.
Click the arrow (>) to the left of the virtual machine whose migration progress you want to check.
The VM’s details are displayed.
In the Pods section, in the Pod links column, click the Logs link.
The Logs tab opens.
Note: Logs are not always available. The following are common reasons for logs not being available:
- The migration is from OpenShift Virtualization to OpenShift Virtualization. In this case, virt-v2v is not involved, so no pod is required.
- No pod was created.
- The pod was deleted.
- The migration failed before running the pod.
- To see the raw logs, click the Raw link.
- To download the logs, click the Download link.
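If you prefer to follow progress from the command line, the same information is available through the MTV custom resources. A sketch, assuming MTV is installed in the openshift-mtv namespace; the pod and namespace names are placeholders:

    # List plans and their readiness and execution status.
    oc get plan -n openshift-mtv

    # Inspect the migrations started from those plans, including per-VM progress.
    oc get migration -n openshift-mtv -o yaml

    # Follow the logs of a conversion pod in the target namespace.
    oc logs -f <conversion-pod-name> -n <target-namespace>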
6.7. Migration plan options
On the Plans for virtualization page of the Red Hat OpenShift web console, you can click the Options menu beside a migration plan to access the following options:
Edit Plan: Edit the details of a migration plan. If the plan is running or has completed successfully, you cannot edit the following options:
- All properties on the Settings section of the Plan details page. For example, warm or cold migration, target namespace, and preserved static IPs.
- The plan’s mapping on the Mappings tab.
- The hooks listed on the Hooks tab.
- Start migration: Active only if relevant.
- Restart migration: Restart a migration that was interrupted. Before choosing this option, make sure there are no error messages. If there are, you need to edit the plan.
Cutover: Warm migrations only. Active only if relevant. Clicking Cutover opens the Cutover window, which supports the following options (a command-line alternative is sketched after this list):
- Set cutover: Set the date and time for a cutover.
- Remove cutover: Cancel a scheduled cutover. Active only if relevant.
Duplicate Plan: Create a new migration plan with the same virtual machines (VMs), parameters, mappings, and hooks as an existing plan. You can use this feature for the following tasks:
- Migrate VMs to a different namespace.
- Edit an archived migration plan.
- Edit a migration plan with a different status, for example, failed, canceled, running, critical, or ready.
Archive Plan: Delete the logs, history, and metadata of a migration plan. The plan cannot be edited or restarted. It can only be viewed, duplicated, or deleted.
Note: Archive Plan is irreversible. However, you can duplicate an archived plan.
Delete Plan: Permanently remove a migration plan. You cannot delete a running migration plan.
Note: Delete Plan is irreversible.
Deleting a migration plan does not remove temporary resources. To remove temporary resources, archive the plan before deleting it.
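As noted for the Cutover option above, you can also schedule a cutover for a warm migration from the command line by setting the cutover timestamp on the running Migration resource. A sketch, assuming MTV is installed in the openshift-mtv namespace and a Migration resource named my-migration exists:

    # Schedule the cutover for a specific time (ISO 8601 format).
    oc patch migration my-migration -n openshift-mtv \
      --type merge -p '{"spec":{"cutover":"2024-06-01T02:00:00Z"}}'

    # Cancel a scheduled cutover by clearing the field.
    oc patch migration my-migration -n openshift-mtv \
      --type merge -p '{"spec":{"cutover":null}}'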
6.8. Canceling a migration
You can cancel the migration of some or all virtual machines (VMs) while a migration plan is in progress by using the Red Hat OpenShift web console.
Procedure
- In the Red Hat OpenShift web console, click Plans for virtualization.
- Click the name of a running migration plan to view the migration details.
- Select one or more VMs and click Cancel.
Click Yes, cancel to confirm the cancellation.
In the Migration details by VM list, the status of the canceled VMs is Canceled. The unmigrated and the migrated virtual machines are not affected.
You can restart a canceled migration by clicking Restart beside the migration plan on the Migration plans page.
Chapter 7. Migrating virtual machines from Red Hat Virtualization
7.1. Adding a Red Hat Virtualization source provider
You can add a Red Hat Virtualization source provider by using the Red Hat OpenShift web console.
Prerequisites
- Manager CA certificate, unless it was replaced by a third-party certificate, in which case specify the Manager Apache CA certificate
Procedure
- In the Red Hat OpenShift web console, click Migration → Providers for virtualization.
- Click Create Provider.
- Click Red Hat Virtualization.
Specify the following fields:
- Provider resource name: Name of the source provider.
- URL: URL of the API endpoint of the Red Hat Virtualization Manager (RHVM) on which the source VM is mounted. Ensure that the URL includes the path leading to the RHVM API server, usually /ovirt-engine/api. For example, https://rhv-host-example.com/ovirt-engine/api.
- Username: Username.
- Password: Password.
Choose one of the following options for validating CA certificates:
- Use a custom CA certificate: Migrate after validating a custom CA certificate.
- Use the system CA certificate: Migrate after validating the system CA certificate.
- Skip certificate validation: Migrate without validating a CA certificate.
- To use a custom CA certificate, leave the Skip certificate validation switch toggled to the left, and either drag the CA certificate to the text box or browse for it and click Select.
- To use the system CA certificate, leave the Skip certificate validation switch toggled to the left, and leave the CA certificate text box empty.
- To skip certificate validation, toggle the Skip certificate validation switch to the right.
Optional: Ask MTV to fetch a custom CA certificate from the provider’s API endpoint URL.
- Click Fetch certificate from URL. The Verify certificate window opens.
If the details are correct, select the I trust the authenticity of this certificate checkbox, and then click Confirm. If not, click Cancel, and then enter the correct certificate information manually.
Once confirmed, the CA certificate is used to validate subsequent communication with the API endpoint.
Click Create provider to add and save the provider.
The provider appears in the list of providers.
Optional: Add access to the UI of the provider:
On the Providers page, click the provider.
The Provider details page opens.
- Click the Edit icon under External UI web link.
Enter the link and click Save.
Note: If you do not enter a link, MTV attempts to calculate the correct link.
- If MTV succeeds, the hyperlink of the field points to the calculated link.
- If MTV does not succeed, the field remains empty.
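The credentials and CA certificate that you enter in the form are stored in a Secret that the provider references. A sketch of an equivalent Secret, with placeholder values:

    apiVersion: v1
    kind: Secret
    metadata:
      name: rhv-credentials              # example name
      namespace: openshift-mtv
      labels:
        createdForProviderType: ovirt
    type: Opaque
    stringData:
      user: admin@internal               # RHVM username
      password: <password>               # RHVM password
      cacert: |                          # Manager CA certificate
        -----BEGIN CERTIFICATE-----
        <certificate-body>
        -----END CERTIFICATE-----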
7.2. Adding an OpenShift Virtualization destination provider
You can use a Red Hat OpenShift Virtualization provider as both a source provider and destination provider.
Specifically, the host cluster that is automatically added as an OpenShift Virtualization provider can be used as both a source provider and a destination provider.
You can also add another OpenShift Virtualization destination provider to the Red Hat OpenShift web console in addition to the default OpenShift Virtualization destination provider, which is the cluster where you installed MTV.
You can migrate VMs from the cluster that MTV is deployed on to another cluster, or from a remote cluster to the cluster that MTV is deployed on.
Prerequisites
- You must have an OpenShift Virtualization service account token with cluster-admin privileges.
Procedure
- In the Red Hat OpenShift web console, click Migration → Providers for virtualization.
- Click Create Provider.
- Click OpenShift Virtualization.
Specify the following fields:
- Provider resource name: Name of the provider
- URL: URL of the endpoint of the API server
- Service account bearer token: Token for a service account with cluster-admin privileges. If both URL and Service account bearer token are left blank, the local OpenShift cluster is used.
Choose one of the following options for validating CA certificates:
- Use a custom CA certificate: Migrate after validating a custom CA certificate.
- Use the system CA certificate: Migrate after validating the system CA certificate.
- Skip certificate validation: Migrate without validating a CA certificate.
- To use a custom CA certificate, leave the Skip certificate validation switch toggled to the left, and either drag the CA certificate to the text box or browse for it and click Select.
- To use the system CA certificate, leave the Skip certificate validation switch toggled to the left, and leave the CA certificate text box empty.
- To skip certificate validation, toggle the Skip certificate validation switch to the right.
Optional: Ask MTV to fetch a custom CA certificate from the provider’s API endpoint URL.
- Click Fetch certificate from URL. The Verify certificate window opens.
If the details are correct, select the I trust the authenticity of this certificate checkbox, and then click Confirm. If not, click Cancel, and then enter the correct certificate information manually.
Once confirmed, the CA certificate is used to validate subsequent communication with the API endpoint.
Click Create provider to add and save the provider.
The provider appears in the list of providers.
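Behind the form, the destination is represented by a Provider custom resource of type openshift. A minimal sketch, with placeholder names and URL; if the URL and secret are omitted, the resource refers to the local cluster:

    apiVersion: forklift.konveyor.io/v1beta1
    kind: Provider
    metadata:
      name: remote-cluster                      # example name
      namespace: openshift-mtv
    spec:
      type: openshift
      url: https://api.remote.example.com:6443  # API server endpoint
      secret:
        name: remote-cluster-credentials        # Secret holding the bearer token
        namespace: openshift-mtv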
7.3. Selecting a migration network for an OpenShift Virtualization provider
You can select a default migration network for an OpenShift Virtualization provider in the Red Hat OpenShift web console to improve performance. The default migration network is used to transfer disks to the namespaces in which it is configured.
If you do not select a migration network, the default migration network is the pod network, which might not be optimal for disk transfer.
You can override the default migration network of the provider by selecting a different network when you create a migration plan.
Procedure
- In the Red Hat OpenShift web console, click Migration > Providers for virtualization.
Click the OpenShift Virtualization provider whose migration network you want to change.
When the Provider details page opens:
- Click the Networks tab.
- Click Set default transfer network.
- Select a default transfer network from the list and click Save.
7.4. Creating a migration plan
Use the Red Hat OpenShift web console to create a migration plan. Specify the source provider, the virtual machines (VMs) you want to migrate, and other plan details.
Do not include virtual machines with guest-initiated storage connections, such as Internet Small Computer Systems Interface (iSCSI) connections or Network File System (NFS) mounts. These require either additional planning before migration or reconfiguration after migration.
Excluding these VMs prevents concurrent disk access to the storage that the guest points to.
A plan cannot contain more than 500 VMs or 500 disks.
Procedure
In the Red Hat OpenShift web console, click Plans for virtualization and then click Create Plan.
The Create migration plan wizard opens to the Select source provider interface.
Select the source provider of the VMs you want to migrate.
The Select virtual machines interface opens.
Select the VMs you want to migrate and click Next.
The Create migration plan pane opens. It displays the source provider’s name and suggestions for a target provider and namespace, a network map, and a storage map.
- Enter the Plan name.
- To change the Target provider, the Target namespace, or elements of the Network map or the Storage map, select an item from the relevant list.
- To add either a Network map or a Storage map, click the + sign and add a mapping.
Click Create migration plan.
MTV validates the migration plan, and the Plan details page opens, indicating whether the plan is ready for use or contains an error.
The details of the plan are listed, and you can edit the items you filled in on the previous page. If you make any changes, MTV validates the plan again.
Check the following items in the Settings section of the page:
- Warm migration: By default, all migrations are cold migrations. For a warm migration, click the Edit icon and select Warm migration.
Transfer Network: The network used to transfer the VMs to OpenShift Virtualization. By default, this is the default transfer network of the provider. Verify that the transfer network is in the selected target namespace. To edit the transfer network, click the Edit icon, choose a different transfer network from the list in the window that opens, and click Save.
You can configure an OpenShift network in the OpenShift web console by clicking Networking > NetworkAttachmentDefinitions.
To learn more about the different types of networks OpenShift supports, see Additional Networks in OpenShift Container Platform.
If you want to adjust the maximum transmission unit (MTU) of the OpenShift transfer network, you must also change the MTU of the VMware migration network. For more information, see Selecting a migration network for a VMware source provider.
- Target namespace: Destination namespace to be used by all the migrated VMs. By default, this is the current or active namespace. To edit the namespace, click the Edit icon, choose a different target namespace from the list in the window that opens, and click Save.
Preserving the CPU model of VMs that are migrated from RHV: Generally, the CPU model (type) for RHV VMs is set at the cluster level, but it can be set at the VM level, which is called a custom CPU model. By default, MTV sets the CPU model on the destination cluster as follows:
- MTV preserves custom CPU settings for VMs that have them.
For VMs without custom CPU settings, MTV does not set the CPU model. Instead, the CPU model is later set by OpenShift Virtualization.
To preserve the cluster-level CPU model of your RHV VMs, in the Settings section, click the Edit icon next to Preserve CPU model. Toggle the Whether to preserve the CPU model switch, and then click Save.
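In the Plan custom resource, this switch corresponds to a boolean field. The following fragment is only a sketch; the field name preserveClusterCpuModel is an assumption that you should verify against the Plan API of your MTV version:

    spec:
      preserveClusterCpuModel: true   # assumed field name; preserves the RHV cluster-level CPU model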
7.5. Running a migration plan
You can run a migration plan and view its progress in the Red Hat OpenShift web console.
Prerequisites
- Valid migration plan.
Procedure
In the Red Hat OpenShift web console, click Migration → Plans for virtualization.
The Plans list displays the source and target providers, the number of virtual machines (VMs) being migrated, the status, the date that the migration started, and the description of each plan.
- Click Start beside a migration plan to start the migration.
Click Start in the confirmation window that opens.
The plan’s Status changes to Running, and the migration’s progress is displayed.
Warm migration only:
- The precopy stage starts.
- Click Cutover to complete the migration.
Optional: Click the links in the migration’s Status to see its overall status and the status of each VM:
- The link on the left indicates whether the migration failed, succeeded, or is ongoing. It also reports the number of VMs whose migration succeeded, failed, or was canceled.
The link on the right opens the Virtual Machines tab of the Plan Details page. For each VM, the tab displays the following data:
- The name of the VM
- The start and end times of the migration
- The amount of data copied
A progress pipeline for the VM’s migration
Warning: To avoid data corruption, vMotion, including svMotion, and relocation must be disabled for VMs that are being imported.
Optional: To view your migration’s logs, either as it is running or after it is completed, perform the following actions:
- Click the Virtual Machines tab.
Click the arrow (>) to the left of the virtual machine whose migration progress you want to check.
The VM’s details are displayed.
In the Pods section, in the Pod links column, click the Logs link.
The Logs tab opens.
Note: Logs are not always available. The following are common reasons for logs not being available:
- The migration is from OpenShift Virtualization to OpenShift Virtualization. In this case, virt-v2v is not involved, so no pod is required.
- No pod was created.
- The pod was deleted.
- The migration failed before running the pod.
- To see the raw logs, click the Raw link.
- To download the logs, click the Download link.
7.6. Migration plan options
On the Plans for virtualization page of the Red Hat OpenShift web console, you can click the Options menu beside a migration plan to access the following options:
Edit Plan: Edit the details of a migration plan. If the plan is running or has completed successfully, you cannot edit the following options:
- All properties on the Settings section of the Plan details page. For example, warm or cold migration, target namespace, and preserved static IPs.
- The plan’s mapping on the Mappings tab.
- The hooks listed on the Hooks tab.
- Start migration: Active only if relevant.
- Restart migration: Restart a migration that was interrupted. Before choosing this option, make sure there are no error messages. If there are, you need to edit the plan.
Cutover: Warm migrations only. Active only if relevant. Clicking Cutover opens the Cutover window, which supports the following options:
- Set cutover: Set the date and time for a cutover.
- Remove cutover: Cancel a scheduled cutover. Active only if relevant.
Duplicate Plan: Create a new migration plan with the same virtual machines (VMs), parameters, mappings, and hooks as an existing plan. You can use this feature for the following tasks:
- Migrate VMs to a different namespace.
- Edit an archived migration plan.
- Edit a migration plan with a different status, for example, failed, canceled, running, critical, or ready.
Archive Plan: Delete the logs, history, and metadata of a migration plan. The plan cannot be edited or restarted. It can only be viewed, duplicated, or deleted.
Note: Archive Plan is irreversible. However, you can duplicate an archived plan.
Delete Plan: Permanently remove a migration plan. You cannot delete a running migration plan.
Note: Delete Plan is irreversible.
Deleting a migration plan does not remove temporary resources. To remove temporary resources, archive the plan before deleting it.
7.7. Canceling a migration
You can cancel the migration of some or all virtual machines (VMs) while a migration plan is in progress by using the Red Hat OpenShift web console.
Procedure
- In the Red Hat OpenShift web console, click Plans for virtualization.
- Click the name of a running migration plan to view the migration details.
- Select one or more VMs and click Cancel.
Click Yes, cancel to confirm the cancellation.
In the Migration details by VM list, the status of the canceled VMs is Canceled. The unmigrated and the migrated virtual machines are not affected.
You can restart a canceled migration by clicking Restart beside the migration plan on the Migration plans page.
Chapter 8. Migrating virtual machines from OpenStack
8.1. Adding an OpenStack source provider
You can add an OpenStack source provider by using the Red Hat OpenShift web console.
When you migrate an image-based VM from an OpenStack provider, a snapshot is created for the image that is attached to the source VM, and the data from the snapshot is copied over to the target VM. This means that the target VM has the same state as the source VM at the time the snapshot was created.
Procedure
- In the Red Hat OpenShift web console, click Migration → Providers for virtualization.
- Click Create Provider.
- Click OpenStack.
Specify the following fields:
- Provider resource name: Name of the source provider.
- URL: URL of the OpenStack Identity (Keystone) endpoint. For example, http://controller:5000/v3.
- Authentication type: Choose one of the following methods of authentication and supply the information related to your choice. For example, if you choose Application credential ID as the authentication type, the Application credential ID and the Application credential secret fields become active, and you need to supply the ID and the secret.
Application credential ID
- Application credential ID: OpenStack application credential ID
- Application credential secret: OpenStack application credential secret
Application credential name
- Application credential name: OpenStack application credential name
- Application credential secret: OpenStack application credential secret
- Username: OpenStack username
- Domain: OpenStack domain name
Token with user ID
- Token: OpenStack token
- User ID: OpenStack user ID
- Project ID: OpenStack project ID
Token with user name
- Token: OpenStack token
- Username: OpenStack username
- Project: OpenStack project
- Domain name: OpenStack domain name
Password
- Username: OpenStack username
- Password: OpenStack password
- Project: OpenStack project
- Domain: OpenStack domain name
Choose one of the following options for validating CA certificates:
- Use a custom CA certificate: Migrate after validating a custom CA certificate.
- Use the system CA certificate: Migrate after validating the system CA certificate.
- Skip certificate validation: Migrate without validating a CA certificate.
- To use a custom CA certificate, leave the Skip certificate validation switch toggled to the left, and either drag the CA certificate to the text box or browse for it and click Select.
- To use the system CA certificate, leave the Skip certificate validation switch toggled to the left, and leave the CA certificate text box empty.
- To skip certificate validation, toggle the Skip certificate validation switch to the right.
Optional: Ask MTV to fetch a custom CA certificate from the provider’s API endpoint URL.
- Click Fetch certificate from URL. The Verify certificate window opens.
If the details are correct, select the I trust the authenticity of this certificate checkbox, and then click Confirm. If not, click Cancel, and then enter the correct certificate information manually.
Once confirmed, the CA certificate is used to validate subsequent communication with the API endpoint.
Click Create provider to add and save the provider.
The provider appears in the list of providers.
Optional: Add access to the UI of the provider:
On the Providers page, click the provider.
The Provider details page opens.
- Click the Edit icon under External UI web link.
Enter the link and click Save.
Note: If you do not enter a link, MTV attempts to calculate the correct link.
- If MTV succeeds, the hyperlink of the field points to the calculated link.
- If MTV does not succeed, the field remains empty.
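As with the other providers, the authentication details are stored in a Secret that the provider references. A sketch for the Password authentication type; the key names follow the MTV provider secret conventions but should be verified against your version, and all values are placeholders:

    apiVersion: v1
    kind: Secret
    metadata:
      name: openstack-credentials        # example name
      namespace: openshift-mtv
    type: Opaque
    stringData:
      authType: password
      username: admin                    # OpenStack username
      password: <password>               # OpenStack password
      projectName: admin                 # OpenStack project
      domainName: Default                # OpenStack domain name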
8.2. Adding an OpenShift Virtualization destination provider
You can use a Red Hat OpenShift Virtualization provider as both a source provider and destination provider.
Specifically, the host cluster that is automatically added as an OpenShift Virtualization provider can be used as both a source provider and a destination provider.
You can also add another OpenShift Virtualization destination provider to the Red Hat OpenShift web console in addition to the default OpenShift Virtualization destination provider, which is the cluster where you installed MTV.
You can migrate VMs from the cluster that MTV is deployed on to another cluster, or from a remote cluster to the cluster that MTV is deployed on.
Prerequisites
- You must have an OpenShift Virtualization service account token with cluster-admin privileges.
Procedure
- In the Red Hat OpenShift web console, click Migration → Providers for virtualization.
- Click Create Provider.
- Click OpenShift Virtualization.
Specify the following fields:
- Provider resource name: Name of the provider
- URL: URL of the endpoint of the API server
- Service account bearer token: Token for a service account with cluster-admin privileges. If both URL and Service account bearer token are left blank, the local OpenShift cluster is used.
Choose one of the following options for validating CA certificates:
- Use a custom CA certificate: Migrate after validating a custom CA certificate.
- Use the system CA certificate: Migrate after validating the system CA certificate.
- Skip certificate validation: Migrate without validating a CA certificate.
- To use a custom CA certificate, leave the Skip certificate validation switch toggled to the left, and either drag the CA certificate to the text box or browse for it and click Select.
- To use the system CA certificate, leave the Skip certificate validation switch toggled to the left, and leave the CA certificate text box empty.
- To skip certificate validation, toggle the Skip certificate validation switch to the right.
Optional: Ask MTV to fetch a custom CA certificate from the provider’s API endpoint URL.
- Click Fetch certificate from URL. The Verify certificate window opens.
If the details are correct, select the I trust the authenticity of this certificate checkbox, and then click Confirm. If not, click Cancel, and then enter the correct certificate information manually.
Once confirmed, the CA certificate is used to validate subsequent communication with the API endpoint.
Click Create provider to add and save the provider.
The provider appears in the list of providers.
8.3. Selecting a migration network for an OpenShift Virtualization provider
You can select a default migration network for an OpenShift Virtualization provider in the Red Hat OpenShift web console to improve performance. The default migration network is used to transfer disks to the namespaces in which it is configured.
If you do not select a migration network, the default migration network is the pod network, which might not be optimal for disk transfer.
You can override the default migration network of the provider by selecting a different network when you create a migration plan.
Procedure
- In the Red Hat OpenShift web console, click Migration > Providers for virtualization.
Click the OpenShift Virtualization provider whose migration network you want to change.
When the Provider details page opens:
- Click the Networks tab.
- Click Set default transfer network.
- Select a default transfer network from the list and click Save.
8.4. Creating a migration plan
Use the Red Hat OpenShift web console to create a migration plan. Specify the source provider, the virtual machines (VMs) you want to migrate, and other plan details.
Do not include virtual machines with guest-initiated storage connections, such as Internet Small Computer Systems Interface (iSCSI) connections or Network File System (NFS) mounts. These require either additional planning before migration or reconfiguration after migration.
Excluding these VMs prevents concurrent disk access to the storage that the guest points to.
A plan cannot contain more than 500 VMs or 500 disks.
Procedure
In the Red Hat OpenShift web console, click Plans for virtualization and then click Create Plan.
The Create migration plan wizard opens to the Select source provider interface.
Select the source provider of the VMs you want to migrate.
The Select virtual machines interface opens.
Select the VMs you want to migrate and click Next.
The Create migration plan pane opens. It displays the source provider’s name and suggestions for a target provider and namespace, a network map, and a storage map.
- Enter the Plan name.
- To change the Target provider, the Target namespace, or elements of the Network map or the Storage map, select an item from the relevant list.
- To add either a Network map or a Storage map, click the + sign and add a mapping.
Click Create migration plan.
MTV validates the migration plan, and the Plan details page opens, indicating whether the plan is ready for use or contains an error.
The details of the plan are listed, and you can edit the items you filled in on the previous page. If you make any changes, MTV validates the plan again.
Check the following items in the Settings section of the page:
Transfer Network: The network used to transfer the VMs to OpenShift Virtualization. By default, this is the default transfer network of the provider. Verify that the transfer network is in the selected target namespace. To edit the transfer network, click the Edit icon, choose a different transfer network from the list in the window that opens, and click Save.
You can configure an OpenShift network in the OpenShift web console by clicking Networking > NetworkAttachmentDefinitions.
To learn more about the different types of networks OpenShift supports, see Additional Networks in OpenShift Container Platform.
If you want to adjust the maximum transmission unit (MTU) of the OpenShift transfer network, you must also change the MTU of the VMware migration network. For more information, see Selecting a migration network for a VMware source provider.
- Target namespace: Destination namespace to be used by all the migrated VMs. By default, this is the current or active namespace. To edit the namespace, click the Edit icon, choose a different target namespace from the list in the window that opens, and click Save.
If your plan is valid, you can do one of the following:
- Run the plan now by clicking Start migration.
- Run the plan later by selecting it on the Plans for virtualization page and following the procedure in Running a migration plan.
8.5. Running a migration plan
You can run a migration plan and view its progress in the Red Hat OpenShift web console.
Prerequisites
- Valid migration plan.
Procedure
In the Red Hat OpenShift web console, click Migration → Plans for virtualization.
The Plans list displays the source and target providers, the number of virtual machines (VMs) being migrated, the status, the date that the migration started, and the description of each plan.
- Click Start beside a migration plan to start the migration.
Click Start in the confirmation window that opens.
The plan’s Status changes to Running, and the migration’s progress is displayed.
Warm migration only:
- The precopy stage starts.
- Click Cutover to complete the migration.
Optional: Click the links in the migration’s Status to see its overall status and the status of each VM:
- The link on the left indicates whether the migration failed, succeeded, or is ongoing. It also reports the number of VMs whose migration succeeded, failed, or was canceled.
The link on the right opens the Virtual Machines tab of the Plan Details page. For each VM, the tab displays the following data:
- The name of the VM
- The start and end times of the migration
- The amount of data copied
A progress pipeline for the VM’s migration
Warning: To avoid data corruption, vMotion, including svMotion, and relocation must be disabled for VMs that are being imported.
Optional: To view your migration’s logs, either as it is running or after it is completed, perform the following actions:
- Click the Virtual Machines tab.
Click the arrow (>) to the left of the virtual machine whose migration progress you want to check.
The VM’s details are displayed.
In the Pods section, in the Pod links column, click the Logs link.
The Logs tab opens.
Note: Logs are not always available. The following are common reasons for logs not being available:
- The migration is from OpenShift Virtualization to OpenShift Virtualization. In this case, virt-v2v is not involved, so no pod is required.
- No pod was created.
- The pod was deleted.
- The migration failed before running the pod.
- To see the raw logs, click the Raw link.
- To download the logs, click the Download link.
8.6. Migration plan options
On the Plans for virtualization page of the Red Hat OpenShift web console, you can click the Options menu beside a migration plan to access the following options:
Edit Plan: Edit the details of a migration plan. If the plan is running or has completed successfully, you cannot edit the following options:
- All properties on the Settings section of the Plan details page. For example, warm or cold migration, target namespace, and preserved static IPs.
- The plan’s mapping on the Mappings tab.
- The hooks listed on the Hooks tab.
- Start migration: Active only if relevant.
- Restart migration: Restart a migration that was interrupted. Before choosing this option, make sure there are no error messages. If there are, you need to edit the plan.
Cutover: Warm migrations only. Active only if relevant. Clicking Cutover opens the Cutover window, which supports the following options:
- Set cutover: Set the date and time for a cutover.
- Remove cutover: Cancel a scheduled cutover. Active only if relevant.
Duplicate Plan: Create a new migration plan with the same virtual machines (VMs), parameters, mappings, and hooks as an existing plan. You can use this feature for the following tasks:
- Migrate VMs to a different namespace.
- Edit an archived migration plan.
- Edit a migration plan with a different status, for example, failed, canceled, running, critical, or ready.
Archive Plan: Delete the logs, history, and metadata of a migration plan. The plan cannot be edited or restarted. It can only be viewed, duplicated, or deleted.
Note: Archive Plan is irreversible. However, you can duplicate an archived plan.
Delete Plan: Permanently remove a migration plan. You cannot delete a running migration plan.
Note: Delete Plan is irreversible.
Deleting a migration plan does not remove temporary resources. To remove temporary resources, archive the plan before deleting it.
8.7. Canceling a migration
You can cancel the migration of some or all virtual machines (VMs) while a migration plan is in progress by using the Red Hat OpenShift web console.
Procedure
- In the Red Hat OpenShift web console, click Plans for virtualization.
- Click the name of a running migration plan to view the migration details.
- Select one or more VMs and click Cancel.
Click Yes, cancel to confirm the cancellation.
In the Migration details by VM list, the status of the canceled VMs is Canceled. The unmigrated and the migrated virtual machines are not affected.
You can restart a canceled migration by clicking Restart beside the migration plan on the Migration plans page.
Chapter 9. Migrating virtual machines from OVA
9.1. Adding an Open Virtual Appliance (OVA) source provider
You can add Open Virtual Appliance (OVA) files that were created by VMware vSphere as a source provider by using the Red Hat OpenShift web console.
Procedure
- In the Red Hat OpenShift web console, click Migration → Providers for virtualization.
- Click Create Provider.
- Click Open Virtual Appliance (OVA).
Specify the following fields:
- Provider resource name: Name of the source provider
- URL: URL of the NFS file share that serves the OVA
Click Create provider to add and save the provider.
The provider appears in the list of providers.
Note: An error message might appear stating that an error has occurred. You can ignore this message.
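The OVA provider maps to a Provider custom resource of type ova whose URL points at the NFS share. A minimal sketch with placeholder values:

    apiVersion: forklift.konveyor.io/v1beta1
    kind: Provider
    metadata:
      name: ova-provider                   # example name
      namespace: openshift-mtv
    spec:
      type: ova
      url: nfs.example.com:/exports/ovas   # NFS share that serves the OVA files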
9.2. Adding an OpenShift Virtualization destination provider
You can use a Red Hat OpenShift Virtualization provider as both a source provider and destination provider.
Specifically, the host cluster that is automatically added as an OpenShift Virtualization provider can be used as both a source provider and a destination provider.
You can also add another OpenShift Virtualization destination provider to the Red Hat OpenShift web console in addition to the default OpenShift Virtualization destination provider, which is the cluster where you installed MTV.
You can migrate VMs from the cluster that MTV is deployed on to another cluster, or from a remote cluster to the cluster that MTV is deployed on.
Prerequisites
- You must have an OpenShift Virtualization service account token with cluster-admin privileges.
Procedure
- In the Red Hat OpenShift web console, click Migration → Providers for virtualization.
- Click Create Provider.
- Click OpenShift Virtualization.
Specify the following fields:
- Provider resource name: Name of the provider
- URL: URL of the endpoint of the API server
- Service account bearer token: Token for a service account with cluster-admin privileges. If both URL and Service account bearer token are left blank, the local OpenShift cluster is used.
Choose one of the following options for validating CA certificates:
- Use a custom CA certificate: Migrate after validating a custom CA certificate.
- Use the system CA certificate: Migrate after validating the system CA certificate.
- Skip certificate validation: Migrate without validating a CA certificate.
- To use a custom CA certificate, leave the Skip certificate validation switch toggled to the left, and either drag the CA certificate to the text box or browse for it and click Select.
- To use the system CA certificate, leave the Skip certificate validation switch toggled to the left, and leave the CA certificate text box empty.
- To skip certificate validation, toggle the Skip certificate validation switch to the right.
Optional: Ask MTV to fetch a custom CA certificate from the provider’s API endpoint URL.
- Click Fetch certificate from URL. The Verify certificate window opens.
If the details are correct, select the I trust the authenticity of this certificate checkbox, and then click Confirm. If not, click Cancel, and then enter the correct certificate information manually.
Once confirmed, the CA certificate is used to validate subsequent communication with the API endpoint.
Click Create provider to add and save the provider.
The provider appears in the list of providers.
9.3. Selecting a migration network for an OpenShift Virtualization provider
You can select a default migration network for an OpenShift Virtualization provider in the Red Hat OpenShift web console to improve performance. The default migration network is used to transfer disks to the namespaces in which it is configured.
If you do not select a migration network, the default migration network is the pod network, which might not be optimal for disk transfer.
You can override the default migration network of the provider by selecting a different network when you create a migration plan.
Procedure
- In the Red Hat OpenShift web console, click Migration > Providers for virtualization.
Click the OpenShift Virtualization provider whose migration network you want to change.
When the Provider details page opens:
- Click the Networks tab.
- Click Set default transfer network.
- Select a default transfer network from the list and click Save.
9.4. Creating a migration plan
Use the Red Hat OpenShift web console to create a migration plan. Specify the source provider, the virtual machines (VMs) you want to migrate, and other plan details.
Do not include virtual machines with guest-initiated storage connections, such as Internet Small Computer Systems Interface (iSCSI) connections or Network File System (NFS) mounts. These require either additional planning before migration or reconfiguration after migration.
Excluding these VMs prevents concurrent disk access to the storage that the guest points to.
A plan cannot contain more than 500 VMs or 500 disks.
Procedure
In the Red Hat OpenShift web console, click Plans for virtualization and then click Create Plan.
The Create migration plan wizard opens to the Select source provider interface.
Select the source provider of the VMs you want to migrate.
The Select virtual machines interface opens.
Select the VMs you want to migrate and click Next.
The Create migration plan pane opens. It displays the source provider’s name and suggestions for a target provider and namespace, a network map, and a storage map.
- Enter the Plan name.
- To change the Target provider, the Target namespace, or elements of the Network map or the Storage map, select an item from the relevant list.
- To add either a Network map or a Storage map, click the + sign and add a mapping.
Click Create migration plan.
MTV validates the migration plan, and the Plan details page opens, indicating whether the plan is ready for use or contains an error.
The details of the plan are listed, and you can edit the items you filled in on the previous page. If you make any changes, MTV validates the plan again.
Check the following items in the Settings section of the page:
Transfer Network: The network used to transfer the VMs to OpenShift Virtualization. By default, this is the default transfer network of the provider. Verify that the transfer network is in the selected target namespace. To edit the transfer network, click the Edit icon, choose a different transfer network from the list in the window that opens, and click Save.
You can configure an OpenShift network in the OpenShift web console by clicking Networking > NetworkAttachmentDefinitions.
To learn more about the different types of networks OpenShift supports, see Additional Networks in OpenShift Container Platform.
If you want to adjust the maximum transmission unit (MTU) of the OpenShift transfer network, you must also change the MTU of the VMware migration network. For more information, see Selecting a migration network for a VMware source provider.
- Target namespace: Destination namespace to be used by all the migrated VMs. By default, this is the current or active namespace. To edit the namespace, click the Edit icon, choose a different target namespace from the list in the window that opens, and click Save.
If your plan is valid, you can do one of the following:
- Run the plan now by clicking Start migration.
- Run the plan later by selecting it on the Plans for virtualization page and following the procedure in Running a migration plan.
9.5. Running a migration plan
You can run a migration plan and view its progress in the Red Hat OpenShift web console.
Prerequisites
- Valid migration plan.
Procedure
In the Red Hat OpenShift web console, click Migration → Plans for virtualization.
The Plans list displays the source and target providers, the number of virtual machines (VMs) being migrated, the status, the date that the migration started, and the description of each plan.
- Click Start beside a migration plan to start the migration.
Click Start in the confirmation window that opens.
The plan’s Status changes to Running, and the migration’s progress is displayed.
Warm migration only:
- The precopy stage starts.
- Click Cutover to complete the migration.
Optional: Click the links in the migration’s Status to see its overall status and the status of each VM:
- The link on the left indicates whether the migration failed, succeeded, or is ongoing. It also reports the number of VMs whose migration succeeded, failed, or was canceled.
The link on the right opens the Virtual Machines tab of the Plan Details page. For each VM, the tab displays the following data:
- The name of the VM
- The start and end times of the migration
- The amount of data copied
A progress pipeline for the VM’s migration
Warning: To avoid data corruption, vMotion, including svMotion, and relocation must be disabled for VMs that are being imported.
Optional: To view your migration’s logs, either as it is running or after it is completed, perform the following actions:
- Click the Virtual Machines tab.
Click the arrow (>) to the left of the virtual machine whose migration progress you want to check.
The VM’s details are displayed.
In the Pods section, in the Pod links column, click the Logs link.
The Logs tab opens.
Note: Logs are not always available. The following are common reasons for logs not being available:
- The migration is from OpenShift Virtualization to OpenShift Virtualization. In this case, virt-v2v is not involved, so no pod is required.
- No pod was created.
- The pod was deleted.
- The migration failed before running the pod.
- To see the raw logs, click the Raw link.
- To download the logs, click the Download link.
9.6. Migration plan options
On the Plans for virtualization page of the Red Hat OpenShift web console, you can click the Options menu beside a migration plan to access the following options:
Edit Plan: Edit the details of a migration plan. If the plan is running or has completed successfully, you cannot edit the following options:
- All properties on the Settings section of the Plan details page. For example, warm or cold migration, target namespace, and preserved static IPs.
- The plan’s mapping on the Mappings tab.
- The hooks listed on the Hooks tab.
- Start migration: Active only if relevant.
- Restart migration: Restart a migration that was interrupted. Before choosing this option, make sure there are no error messages. If there are, you need to edit the plan.
Cutover: Warm migrations only. Active only if relevant. Clicking Cutover opens the Cutover window, which supports the following options:
- Set cutover: Set the date and time for a cutover.
- Remove cutover: Cancel a scheduled cutover. Active only if relevant.
Duplicate Plan: Create a new migration plan with the same virtual machines (VMs), parameters, mappings, and hooks as an existing plan. You can use this feature for the following tasks:
- Migrate VMs to a different namespace.
- Edit an archived migration plan.
- Edit a migration plan with a different status, for example, failed, canceled, running, critical, or ready.
Archive Plan: Delete the logs, history, and metadata of a migration plan. The plan cannot be edited or restarted. It can only be viewed, duplicated, or deleted.
Note: Archive Plan is irreversible. However, you can duplicate an archived plan.
Delete Plan: Permanently remove a migration plan. You cannot delete a running migration plan.
Note: Delete Plan is irreversible.
Deleting a migration plan does not remove temporary resources. To remove temporary resources, archive the plan before deleting it.
9.7. Canceling a migration
You can cancel the migration of some or all virtual machines (VMs) while a migration plan is in progress by using the Red Hat OpenShift web console.
Procedure
- In the Red Hat OpenShift web console, click Plans for virtualization.
- Click the name of a running migration plan to view the migration details.
- Select one or more VMs and click Cancel.
Click Yes, cancel to confirm the cancellation.
In the Migration details by VM list, the status of the canceled VMs is Canceled. The unmigrated and the migrated virtual machines are not affected.
You can restart a canceled migration by clicking Restart beside the migration plan on the Migration plans page.
Chapter 10. Migrating virtual machines from OpenShift Virtualization
10.1. Adding a Red Hat OpenShift Virtualization source provider
You can use a Red Hat OpenShift Virtualization provider as both a source provider and destination provider.
Specifically, the host cluster that is automatically added as an OpenShift Virtualization provider can be used as both a source provider and a destination provider.
You can migrate VMs from the cluster that MTV is deployed on to another cluster, or from a remote cluster to the cluster that MTV is deployed on.
The Red Hat OpenShift cluster version of the source provider must be 4.13 or later.
Procedure
- In the Red Hat OpenShift web console, click Migration → Providers for virtualization.
- Click Create Provider.
- Click OpenShift Virtualization.
Specify the following fields:
- Provider resource name: Name of the source provider
- URL: URL of the endpoint of the API server
- Service account bearer token: Token for a service account with cluster-admin privileges. If both URL and Service account bearer token are left blank, the local OpenShift cluster is used.
Choose one of the following options for validating CA certificates:
- Use a custom CA certificate: Migrate after validating a custom CA certificate.
- Use the system CA certificate: Migrate after validating the system CA certificate.
- Skip certificate validation: Migrate without validating a CA certificate.
- To use a custom CA certificate, leave the Skip certificate validation switch toggled to the left, and either drag the CA certificate to the text box or browse for it and click Select.
- To use the system CA certificate, leave the Skip certificate validation switch toggled to the left, and leave the CA certificate text box empty.
- To skip certificate validation, toggle the Skip certificate validation switch to the right.
Optional: Ask MTV to fetch a custom CA certificate from the provider’s API endpoint URL.
- Click Fetch certificate from URL. The Verify certificate window opens.
If the details are correct, select the I trust the authenticity of this certificate checkbox, and then click Confirm. If not, click Cancel, and then enter the correct certificate information manually.
Once confirmed, the CA certificate is used to validate subsequent communication with the API endpoint.
Click Create provider to add and save the provider.
The provider appears in the list of providers.
Optional: Add access to the UI of the provider:
On the Providers page, click the provider.
The Provider details page opens.
- Click the Edit icon under External UI web link.
Enter the link and click Save.
Note: If you do not enter a link, MTV attempts to calculate the correct link.
- If MTV succeeds, the hyperlink of the field points to the calculated link.
- If MTV does not succeed, the field remains empty.
10.2. Adding an OpenShift Virtualization destination provider
You can use a Red Hat OpenShift Virtualization provider as both a source provider and destination provider.
Specifically, the host cluster that is automatically added as an OpenShift Virtualization provider can be used as both a source provider and a destination provider.
You can also add another OpenShift Virtualization destination provider to the Red Hat OpenShift web console in addition to the default OpenShift Virtualization destination provider, which is the cluster where you installed MTV.
You can migrate VMs from the cluster that MTV is deployed on to another cluster, or from a remote cluster to the cluster that MTV is deployed on.
Prerequisites
- You must have an OpenShift Virtualization service account token with cluster-admin privileges.
Procedure
- In the Red Hat OpenShift web console, click Migration → Providers for virtualization.
- Click Create Provider.
- Click OpenShift Virtualization.
Specify the following fields:
- Provider resource name: Name of the provider
- URL: URL of the endpoint of the API server
- Service account bearer token: Token for a service account with cluster-admin privileges. If both URL and Service account bearer token are left blank, the local OpenShift cluster is used.
Choose one of the following options for validating CA certificates:
- Use a custom CA certificate: Migrate after validating a custom CA certificate.
- Use the system CA certificate: Migrate after validating the system CA certificate.
- Skip certificate validation: Migrate without validating a CA certificate.
- To use a custom CA certificate, leave the Skip certificate validation switch toggled to the left, and either drag the CA certificate to the text box or browse for it and click Select.
- To use the system CA certificate, leave the Skip certificate validation switch toggled to the left, and leave the CA certificate text box empty.
- To skip certificate validation, toggle the Skip certificate validation switch to the right.
Optional: Ask MTV to fetch a custom CA certificate from the provider’s API endpoint URL.
- Click Fetch certificate from URL. The Verify certificate window opens.
If the details are correct, select the I trust the authenticity of this certificate checkbox, and then click Confirm. If not, click Cancel, and then enter the correct certificate information manually.
After you confirm the certificate, the CA certificate is used to validate subsequent communication with the API endpoint.
Click Create provider to add and save the provider.
The provider appears in the list of providers.
10.3. Selecting a migration network for an OpenShift Virtualization provider
You can select a default migration network for an OpenShift Virtualization provider in the Red Hat OpenShift web console to improve performance. The default migration network is used to transfer disks to the namespaces in which it is configured.
If you do not select a migration network, the default migration network is the pod network, which might not be optimal for disk transfer.
You can override the default migration network of the provider by selecting a different network when you create a migration plan.
Procedure
- In the Red Hat OpenShift web console, click Migration → Providers for virtualization.
Click the OpenShift Virtualization provider whose migration network you want to change.
When the Provider details page opens:
- Click the Networks tab.
- Click Set default transfer network.
- Select a default transfer network from the list and click Save.
10.4. Creating a migration plan
Use the Red Hat OpenShift web console to create a migration plan. Specify the source provider, the virtual machines (VMs) you want to migrate, and other plan details.
Do not include virtual machines with guest-initiated storage connections, such as Internet Small Computer Systems Interface (iSCSI) connections or Network File System (NFS) mounts. These require either additional planning before migration or reconfiguration after migration.
This prevents concurrent disk access to the storage the guest points to.
A plan cannot contain more than 500 VMs or 500 disks.
Procedure
In the Red Hat OpenShift web console, click Plans for virtualization and then click Create Plan.
The Create migration plan wizard opens to the Select source provider interface.
Select the source provider of the VMs you want to migrate.
The Select virtual machines interface opens.
Select the VMs you want to migrate and click Next.
The Create migration plan pane opens. It displays the source provider’s name and suggestions for a target provider and namespace, a network map, and a storage map.
- Enter the Plan name.
- To change the Target provider, the Target namespace, or elements of the Network map or the Storage map, select an item from the relevant list.
- To add either a Network map or a Storage map, click the + sign and add a mapping.
Click Create migration plan.
MTV validates the migration plan, and the Plan details page opens, indicating whether the plan is ready for use or contains an error.
The details of the plan are listed, and you can edit the items you filled in on the previous page. If you make any changes, MTV validates the plan again.
Check the following items in the Settings section of the page:
Transfer Network: The network used to transfer the VMs to OpenShift Virtualization. By default, this is the default transfer network of the provider. Verify that the transfer network is in the selected target namespace. To edit the transfer network, click the Edit icon, choose a different transfer network from the list in the window that opens, and click Save.
You can configure an OpenShift network in the OpenShift web console by clicking Networking > NetworkAttachmentDefinitions.
To learn more about the different types of networks OpenShift supports, see Additional Networks in OpenShift Container Platform.
If you want to adjust the maximum transmission unit (MTU) of the OpenShift transfer network, you must also change the MTU of the VMware migration network. For more information see Selecting a migration network for a VMware source provider.
- Target namespace: Destination namespace to be used by all the migrated VMs. By default, this is the current or active namespace. To edit the namespace, click the Edit icon, choose a different target namespace from the list in the window that opens, and click Save.
If your plan is valid, you can do one of the following:
- Run the plan now by clicking Start migration.
- Run the plan later by selecting it on the Plans for virtualization page and following the procedure in Running a migration plan.
10.5. Running a migration plan
You can run a migration plan and view its progress in the Red Hat OpenShift web console.
Prerequisites
- A valid migration plan.
Procedure
In the Red Hat OpenShift web console, click Migration → Plans for virtualization.
The Plans list displays the source and target providers, the number of virtual machines (VMs) being migrated, the status, the date that the migration started, and the description of each plan.
- Click Start beside a migration plan to start the migration.
Click Start in the confirmation window that opens.
The plan’s Status changes to Running, and the migration’s progress is displayed.
Warm migration only:
- The precopy stage starts.
- Click Cutover to complete the migration.
Optional: Click the links in the migration’s Status to see its overall status and the status of each VM:
- The link on the left indicates whether the migration failed, succeeded, or is ongoing. It also reports the number of VMs whose migration succeeded, failed, or was canceled.
The link on the right opens the Virtual Machines tab of the Plan Details page. For each VM, the tab displays the following data:
- The name of the VM
- The start and end times of the migration
- The amount of data copied
A progress pipeline for the VM’s migration
Warning: vMotion, including svMotion, and relocation must be disabled for VMs that are being imported to avoid data corruption.
Optional: To view your migration’s logs, either as it is running or after it is completed, perform the following actions:
- Click the Virtual Machines tab.
Click the arrow (>) to the left of the virtual machine whose migration progress you want to check.
The VM’s details are displayed.
In the Pods section, in the Pod links column, click the Logs link.
The Logs tab opens.
Note: Logs are not always available. The following are common reasons for logs not being available:
- The migration is from OpenShift Virtualization to OpenShift Virtualization. In this case, virt-v2v is not involved, so no pod is required.
- No pod was created.
- The pod was deleted.
- The migration failed before running the pod.
- To see the raw logs, click the Raw link.
- To download the logs, click the Download link.
10.6. Migration plan options
On the Plans for virtualization page of the Red Hat OpenShift web console, you can click the Options menu beside a migration plan to access the following options:
Edit Plan: Edit the details of a migration plan. If the plan is running or has completed successfully, you cannot edit the following options:
- All properties in the Settings section of the Plan details page, for example, warm or cold migration, target namespace, and preserved static IPs.
- The plan’s mapping on the Mappings tab.
- The hooks listed on the Hooks tab.
- Start migration: Active only if relevant.
- Restart migration: Restart a migration that was interrupted. Before choosing this option, make sure there are no error messages. If there are, you need to edit the plan.
Cutover: Warm migrations only. Active only if relevant. Clicking Cutover opens the Cutover window, which supports the following options:
- Set cutover: Set the date and time for a cutover.
- Remove cutover: Cancel a scheduled cutover. Active only if relevant.
Duplicate Plan: Create a new migration plan with the same virtual machines (VMs), parameters, mappings, and hooks as an existing plan. You can use this feature for the following tasks:
- Migrate VMs to a different namespace.
- Edit an archived migration plan.
- Edit a migration plan with a different status, for example, failed, canceled, running, critical, or ready.
Archive Plan: Delete the logs, history, and metadata of a migration plan. The plan cannot be edited or restarted. It can only be viewed, duplicated, or deleted.
Note: Archive Plan is irreversible. However, you can duplicate an archived plan.
Delete Plan: Permanently remove a migration plan. You cannot delete a running migration plan.
Note: Delete Plan is irreversible.
Deleting a migration plan does not remove temporary resources. To remove temporary resources, archive the plan before deleting it.
10.7. Canceling a migration
You can cancel the migration of some or all virtual machines (VMs) while a migration plan is in progress by using the Red Hat OpenShift web console.
Procedure
- In the Red Hat OpenShift web console, click Plans for virtualization.
- Click the name of a running migration plan to view the migration details.
- Select one or more VMs and click Cancel.
Click Yes, cancel to confirm the cancellation.
In the Migration details by VM list, the status of the canceled VMs is Canceled. The unmigrated and the migrated virtual machines are not affected.
You can restart a canceled migration by clicking Restart beside the migration plan on the Plans for virtualization page.
Chapter 11. Migrating virtual machines from the command line
You can migrate virtual machines to OpenShift Virtualization from the command line.
You must ensure that all prerequisites are met.
11.1. Permissions needed by non-administrators to work with migration plan components
If you are an administrator, you can work with all components of migration plans (for example, providers, network mappings, and migration plans).
By default, non-administrators have limited ability to work with migration plans and their components. As an administrator, you can modify their roles to allow them full access to all components, or you can give them limited permissions.
For example, administrators can assign non-administrators one or more of the following cluster roles for migration plans:
| Role | Description |
|---|---|
| plans.forklift.konveyor.io-v1beta1-view | Can view migration plans but cannot create, delete, or modify them |
| plans.forklift.konveyor.io-v1beta1-edit | Can create, delete, or modify (all parts of plan.spec) migration plans |
| plans.forklift.konveyor.io-v1beta1-admin | All edit permissions, plus the ability to modify all parts of migration plans |
Note that pre-defined cluster roles include a resource (for example, plans), an API group (for example, forklift.konveyor.io-v1beta1), and an action (for example, view or edit).
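For example, an administrator might bind one of these cluster roles to a user in a specific namespace. A sketch, assuming the view role name follows the resource, API group, and action pattern described above:
$ oc create rolebinding <rolebinding_name> \
  --clusterrole=plans.forklift.konveyor.io-v1beta1-view \
  --user=<user> -n <namespace>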
As a more comprehensive example, you can grant non-administrators the following set of permissions per namespace:
- Create and modify storage maps, network maps, and migration plans for the namespaces they have access to
- Attach providers created by administrators to storage maps, network maps, and migration plans
- Not be able to create providers or to change system settings
| Actions | API group | Resource |
|---|---|---|
| get, list, watch, create, update, patch, delete | forklift.konveyor.io | plans |
| get, list, watch, create, update, patch, delete | forklift.konveyor.io | migrations |
| get, list, watch, create, update, patch, delete | forklift.konveyor.io | hooks |
| get, list, watch | forklift.konveyor.io | providers |
| get, list, watch, create, update, patch, delete | forklift.konveyor.io | networkmaps |
| get, list, watch, create, update, patch, delete | forklift.konveyor.io | storagemaps |
| get, list, watch | forklift.konveyor.io | forkliftcontrollers |
| create, patch, delete | Empty string | secrets |
To create migration plans, non-administrators must have the create permissions that are part of the edit roles for network maps and for storage maps, even when using a template for a network map or a storage map.
11.2. Migrating virtual machines
You migrate virtual machines (VMs) using the command-line interface (CLI) by creating MTV custom resources (CRs). The CRs and the migration procedure vary by source provider.
You must specify a name for cluster-scoped CRs.
You must specify both a name and a namespace for namespace-scoped CRs.
To migrate to or from an OpenShift cluster that is different from the one the migration plan is defined on, you must have an OpenShift Virtualization service account token with cluster-admin privileges.
11.3. Migrating from a VMware vSphere source provider
You can migrate from a VMware vSphere source provider by using the command-line interface (CLI).
Procedure
Create a Secret manifest for the source provider credentials:
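A minimal sketch of such a manifest, with placeholder values in angle brackets and the numbered callouts shown as comments:
apiVersion: v1
kind: Secret
metadata:
  name: <secret>
  namespace: <namespace>
  ownerReferences: # 1
    - apiVersion: forklift.konveyor.io/v1beta1
      kind: Provider
      name: <provider_name>
      uid: <provider_uid>
  labels:
    createdForProviderType: vsphere
    createdForResourceType: providers
type: Opaque
stringData:
  user: <user> # 2
  password: <password> # 3
  insecureSkipVerify: "false" # 4
  cacert: | # 5
    <ca_certificate>
  url: https://<vCenter_host>/sdk # 6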
1. The ownerReferences section is optional.
2. Specify the vCenter user or the ESX/ESXi user.
3. Specify the password of the vCenter user or the ESX/ESXi user.
4. Specify "true" to skip certificate verification, or "false" to verify the certificate. Defaults to "false" if not specified. If you skip certificate verification, the migration proceeds without requiring the certificate, but the transferred data is sent over an insecure connection and potentially sensitive data could be exposed.
5. When this field is not set and skip certificate verification is disabled, MTV attempts to use the system CA.
6. Specify the API endpoint URL of the vCenter or the ESX/ESXi, for example, https://<vCenter_host>/sdk.
Create a Provider manifest for the source provider:
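A minimal sketch of such a manifest, with placeholder values:
apiVersion: forklift.konveyor.io/v1beta1
kind: Provider
metadata:
  name: <provider>
  namespace: <namespace>
spec:
  type: vsphere
  url: https://<vCenter_host>/sdk # 1
  settings:
    vddkInitImage: <registry_path>/vddk:<tag> # 2
    sdkEndpoint: vcenter # 3
  secret:
    name: <secret> # 4
    namespace: <namespace>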
1. Specify the URL of the API endpoint, for example, https://<vCenter_host>/sdk.
2. Optional, but strongly recommended: create a VDDK image to accelerate migrations and specify it here. Follow the OpenShift documentation to specify the VDDK image you created.
3. Options: vcenter or esxi.
4. Specify the name of the provider Secret CR.
Create a Host manifest:
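A minimal sketch of such a manifest, with placeholder values:
apiVersion: forklift.konveyor.io/v1beta1
kind: Host
metadata:
  name: <vmware_host>
  namespace: <namespace>
spec:
  provider:
    name: <provider_name> # 1
    namespace: <namespace>
  id: <vmware_host_moref> # 2
  ipAddress: <migration_network_ip> # 3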
1. Specify the name of the VMware vSphere Provider CR.
2. Specify the Managed Object Reference (moRef) of the VMware vSphere host. To retrieve the moRef, see Retrieving a VMware vSphere moRef.
3. Specify the IP address of the VMware vSphere migration network.
Create a NetworkMap manifest to map the source and destination networks:
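A minimal sketch of such a manifest, showing one pod-network mapping and one multus mapping:
apiVersion: forklift.konveyor.io/v1beta1
kind: NetworkMap
metadata:
  name: <network_map>
  namespace: <namespace>
spec:
  map:
    - destination:
        type: pod # 1
      source:
        id: <source_network_moref> # 2
    - destination:
        type: multus
        name: <network_attachment_definition> # 3
        namespace: <network_attachment_definition_namespace> # 4
      source:
        name: <source_network_name>
  provider:
    source:
      name: <source_provider>
      namespace: <namespace>
    destination:
      name: <destination_provider>
      namespace: <namespace>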
1. Allowed values are pod and multus.
2. You can use either the id or the name parameter to specify the source network. For id, specify the VMware vSphere network Managed Object Reference (moRef). To retrieve the moRef, see Retrieving a VMware vSphere moRef.
3. Specify a network attachment definition for each additional OpenShift Virtualization network.
4. Required only when type is multus. Specify the namespace of the OpenShift Virtualization network attachment definition.
Create a StorageMap manifest to map source and destination storage:
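A minimal sketch of such a manifest, with placeholder values:
apiVersion: forklift.konveyor.io/v1beta1
kind: StorageMap
metadata:
  name: <storage_map>
  namespace: <namespace>
spec:
  map:
    - destination:
        storageClass: <storage_class>
        accessMode: ReadWriteMany # 1
      source:
        id: <source_datastore_moref> # 2
  provider:
    source:
      name: <source_provider>
      namespace: <namespace>
    destination:
      name: <destination_provider>
      namespace: <namespace>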
1. Allowed values are ReadWriteOnce and ReadWriteMany.
2. Specify the VMware vSphere datastore moRef, for example, f2737930-b567-451a-9ceb-2887f6207009. To retrieve the moRef, see Retrieving a VMware vSphere moRef.
Optional: Create a Hook manifest to run custom code on a VM during the phase specified in the Plan CR:
Note: You can use the default hook-runner image or specify a custom image. If you specify a custom image, you do not have to specify a playbook.
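A minimal Hook manifest sketch, assuming the default hook-runner image and a Base64-encoded playbook:
apiVersion: forklift.konveyor.io/v1beta1
kind: Hook
metadata:
  name: <hook>
  namespace: <namespace>
spec:
  image: quay.io/konveyor/hook-runner
  playbook: |
    <base64_encoded_ansible_playbook>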
Create a Plan manifest for the migration:
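A minimal sketch of such a manifest, with placeholder values and the numbered callouts shown as comments:
apiVersion: forklift.konveyor.io/v1beta1
kind: Plan
metadata:
  name: <plan> # 1
  namespace: <namespace>
spec:
  warm: false # 2
  provider:
    source:
      name: <source_provider>
      namespace: <namespace>
    destination:
      name: <destination_provider>
      namespace: <namespace>
  map: # 3
    network: # 4
      name: <network_map> # 5
      namespace: <namespace>
    storage: # 6
      name: <storage_map> # 7
      namespace: <namespace>
  targetNamespace: <target_namespace>
  preserveStaticIPs: true # 8
  vms: # 9
    - id: <source_vm_moref> # 10
    - name: <source_vm_name>
      hooks: # 11
        - hook:
            name: <hook> # 12
            namespace: <namespace>
          step: PostHook # 13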
1. Specify the name of the Plan CR.
2. Specify whether the migration is warm (true) or cold (false). If you specify a warm migration without specifying a value for the cutover parameter in the Migration manifest, only the precopy stage runs.
3. Specify only one network map and one storage map per plan.
4. Specify a network mapping even if the VMs to be migrated are not assigned to a network. The mapping can be empty in this case.
5. Specify the name of the NetworkMap CR.
6. Specify a storage mapping even if the VMs to be migrated are not assigned disk images. The mapping can be empty in this case.
7. Specify the name of the StorageMap CR.
8. By default, virtual network interface controllers (vNICs) change during the migration process. As a result, vNICs that are configured with a static IP linked to the interface name in the guest VM lose their IP. To avoid this, set preserveStaticIPs to true. MTV issues a warning message about any VMs for which vNIC properties are missing. To retrieve any missing vNIC properties, run those VMs in vSphere so that the vNIC properties are reported to MTV.
9. You can use either the id or the name parameter to specify the source VMs.
10. Specify the VMware vSphere VM moRef. To retrieve the moRef, see Retrieving a VMware vSphere moRef.
11. Optional: You can specify up to two hooks for a VM. Each hook must run during a separate migration step.
12. Specify the name of the Hook CR.
13. Allowed values are PreHook, before the migration plan starts, or PostHook, after the migration is complete.
Important: When you migrate a VMware 7 VM to an OpenShift 4.13+ platform that uses CentOS 7.9, the names of the network interfaces change, and the static IP configuration for the VM no longer works.
Create a Migration manifest to run the Plan CR:
Note: If you specify a cutover time, use the ISO 8601 format with the UTC time offset, for example, 2024-04-04T01:23:45.678+09:00.
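A minimal sketch of such a manifest, with placeholder names:
apiVersion: forklift.konveyor.io/v1beta1
kind: Migration
metadata:
  name: <migration>
  namespace: <namespace>
spec:
  plan:
    name: <plan>
    namespace: <namespace>
  cutover: 2024-04-04T01:23:45.678+09:00 # warm migration only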
There is a known issue in which the forklift-controller consistently fails to reconcile a migration plan and returns an HTTP 500 error. This issue occurs when you specify user permissions only on the virtual machine (VM).
In MTV, you must add permissions at the data center level, including the storage, networks, and switches that the VM uses, and then propagate the permissions to the child elements.
If you do not want to add this level of permissions, you must manually add the permissions to each required object on the VM host.
11.3.1. Retrieving a VMware vSphere moRef
When you migrate VMs with a VMware vSphere source provider using Migration Toolkit for Virtualization (MTV) from the command line, you need to know the managed object reference (moRef) of certain entities in vSphere, such as datastores, networks, and VMs.
You can retrieve the moRef of one or more vSphere entities from the Inventory service. You can then use each moRef as a reference for retrieving the moRef of another entity.
Procedure
Retrieve the routes for the project:
$ oc get route -n openshift-mtv
Retrieve the Inventory service route:
$ oc get route <inventory_service> -n openshift-mtv
Retrieve the access token:
$ TOKEN=$(oc whoami -t)
Retrieve the moRef of a VMware vSphere provider:
$ curl -H "Authorization: Bearer $TOKEN" https://<inventory_service_route>/providers/vsphere -k
Retrieve the datastores of a VMware vSphere source provider:
$ curl -H "Authorization: Bearer $TOKEN" https://<inventory_service_route>/providers/vsphere/<provider_id>/datastores/ -k
Example output:
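A trimmed sketch of the JSON the Inventory service returns, showing only the id and name fields (real records contain additional attributes):
[
  {
    "id": "datastore-11",
    "name": "v2v_general_porpuse_ISCSI_DC"
  },
  {
    "id": "datastore-730",
    "name": "f01-h27-640-SSD_2"
  }
]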
In this example, the moRef of the datastore v2v_general_porpuse_ISCSI_DC is datastore-11 and the moRef of the datastore f01-h27-640-SSD_2 is datastore-730.
11.3.2. Canceling a migration from the command-line interface
You can use the command-line interface (CLI) to cancel either an entire migration or the migration of specific virtual machines (VMs) while a migration is in progress.
Canceling an entire migration
Delete the Migration CR:
$ oc delete migration <migration> -n <namespace>
1. Specify the name of the Migration CR.
Canceling the migration of specific VMs
Add the specific VMs to the spec.cancel block of the Migration manifest. Example YAML for canceling the migrations of two VMs:
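A minimal sketch of such a manifest; the VM identifiers are placeholders:
apiVersion: forklift.konveyor.io/v1beta1
kind: Migration
metadata:
  name: <migration>
  namespace: <namespace>
spec:
  cancel:
    - id: vm-102 # 1
    - id: vm-203
    - name: rhel8-vm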
1. You can specify a VM by using the id key or the name key.
The value of the id key is the managed object reference for a VMware VM, or the VM UUID for a RHV VM.
Retrieve the Migration CR to monitor the progress of the remaining VMs:
$ oc get migration/<migration> -n <namespace> -o yaml
11.4. Migrating from a Red Hat Virtualization source provider
You can migrate from a Red Hat Virtualization (RHV) source provider by using the command-line interface (CLI).
Prerequisites
If you are migrating a virtual machine with a direct LUN disk, ensure that the nodes in the OpenShift Virtualization destination cluster that the VM is expected to run on can access the backend storage.
- Unlike disk images that are copied from a source provider to a target provider, LUNs are detached, but not removed, from virtual machines in the source provider and then attached to the virtual machines (VMs) that are created in the target provider.
- LUNs are not removed from the source provider during the migration, in case fallback to the source provider is required. However, before re-attaching the LUNs to VMs in the source provider, ensure that the LUNs are not used by VMs in the target environment at the same time, because concurrent use might lead to data corruption.
Procedure
Create a Secret manifest for the source provider credentials:
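A minimal sketch of such a manifest, with placeholder values and the numbered callouts shown as comments:
apiVersion: v1
kind: Secret
metadata:
  name: <secret>
  namespace: <namespace>
  ownerReferences: # 1
    - apiVersion: forklift.konveyor.io/v1beta1
      kind: Provider
      name: <provider_name>
      uid: <provider_uid>
  labels:
    createdForProviderType: ovirt
    createdForResourceType: providers
type: Opaque
stringData:
  user: <user> # 2
  password: <password> # 3
  insecureSkipVerify: "false" # 4
  cacert: | # 5
    <manager_ca_certificate>
  url: https://<engine_host>/ovirt-engine/api # 6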
1. The ownerReferences section is optional.
2. Specify the RHV Manager user.
3. Specify the user password.
4. Specify "true" to skip certificate verification, or "false" to verify the certificate. Defaults to "false" if not specified. If you skip certificate verification, the migration proceeds without requiring the certificate, but the transferred data is sent over an insecure connection and potentially sensitive data could be exposed.
5. Enter the Manager CA certificate, unless it was replaced by a third-party certificate, in which case, enter the Manager Apache CA certificate. You can retrieve the Manager CA certificate at https://<engine_host>/ovirt-engine/services/pki-resource?resource=ca-certificate&format=X509-PEM-CA.
6. Specify the API endpoint URL, for example, https://<engine_host>/ovirt-engine/api.
Create a Provider manifest for the source provider:
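A minimal sketch of such a manifest, with placeholder values:
apiVersion: forklift.konveyor.io/v1beta1
kind: Provider
metadata:
  name: <provider>
  namespace: <namespace>
spec:
  type: ovirt
  url: https://<engine_host>/ovirt-engine/api
  secret:
    name: <secret>
    namespace: <namespace>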
Create a NetworkMap manifest to map the source and destination networks:
1. Allowed values are pod and multus.
2. You can use either the id or the name parameter to specify the source network. For id, specify the RHV network Universal Unique ID (UUID).
3. Specify a network attachment definition for each additional OpenShift Virtualization network.
4. Required only when type is multus. Specify the namespace of the OpenShift Virtualization network attachment definition.
Create a StorageMap manifest to map source and destination storage:
Optional: Create a Hook manifest to run custom code on a VM during the phase specified in the Plan CR:
Note: You can use the default hook-runner image or specify a custom image. If you specify a custom image, you do not have to specify a playbook.
Create a Plan manifest for the migration:
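A minimal sketch of such a manifest, with placeholder values and the numbered callouts shown as comments:
apiVersion: forklift.konveyor.io/v1beta1
kind: Plan
metadata:
  name: <plan> # 1
  namespace: <namespace>
spec:
  preserveClusterCpuModel: true # 2
  warm: false # 3
  provider:
    source:
      name: <source_provider>
      namespace: <namespace>
    destination:
      name: <destination_provider>
      namespace: <namespace>
  map: # 4
    network: # 5
      name: <network_map> # 6
      namespace: <namespace>
    storage: # 7
      name: <storage_map> # 8
      namespace: <namespace>
  targetNamespace: <target_namespace>
  vms: # 9
    - id: <source_vm_uuid> # 10
      hooks: # 11
        - hook:
            name: <hook> # 12
            namespace: <namespace>
          step: PreHook # 13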
1. Specify the name of the Plan CR.
2. See the note below.
3. Specify whether the migration is warm or cold. If you specify a warm migration without specifying a value for the cutover parameter in the Migration manifest, only the precopy stage runs.
4. Specify only one network map and one storage map per plan.
5. Specify a network mapping even if the VMs to be migrated are not assigned to a network. The mapping can be empty in this case.
6. Specify the name of the NetworkMap CR.
7. Specify a storage mapping even if the VMs to be migrated are not assigned disk images. The mapping can be empty in this case.
8. Specify the name of the StorageMap CR.
9. You can use either the id or the name parameter to specify the source VMs.
10. Specify the RHV VM UUID.
11. Optional: You can specify up to two hooks for a VM. Each hook must run during a separate migration step.
12. Specify the name of the Hook CR.
13. Allowed values are PreHook, before the migration plan starts, or PostHook, after the migration is complete.
If the migrated machine is set with a custom CPU model, it will be set with that CPU model in the destination cluster, regardless of the setting of
preserveClusterCpuModel. If the migrated machine is not set with a custom CPU model:
-
If
preserveClusterCpuModelis set to 'true`, MTV checks the CPU model of the VM when it runs in RHV, based on the cluster’s configuration, and then sets the migrated VM with that CPU model. -
If
preserveClusterCpuModelis set to 'false`, MTV does not set a CPU type and the VM is set with the default CPU model of the destination cluster.
-
If
Create a Migration manifest to run the Plan CR:
Note: If you specify a cutover time, use the ISO 8601 format with the UTC time offset, for example, 2024-04-04T01:23:45.678+09:00.
11.4.1. Canceling a migration from the command-line interface
You can use the command-line interface (CLI) to cancel either an entire migration or the migration of specific virtual machines (VMs) while a migration is in progress.
Canceling an entire migration
Delete the Migration CR:
$ oc delete migration <migration> -n <namespace>
1. Specify the name of the Migration CR.
Canceling the migration of specific VMs
Add the specific VMs to the spec.cancel block of the Migration manifest. Example YAML for canceling the migrations of two VMs:
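A minimal sketch of such a manifest; the VM identifiers are placeholders:
apiVersion: forklift.konveyor.io/v1beta1
kind: Migration
metadata:
  name: <migration>
  namespace: <namespace>
spec:
  cancel:
    - id: vm-102 # 1
    - id: vm-203
    - name: rhel8-vm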
1. You can specify a VM by using the id key or the name key.
The value of the id key is the managed object reference for a VMware VM, or the VM UUID for a RHV VM.
Retrieve the Migration CR to monitor the progress of the remaining VMs:
$ oc get migration/<migration> -n <namespace> -o yaml
11.5. Migrating from an OpenStack source provider
You can migrate from an OpenStack source provider by using the command-line interface (CLI).
Procedure
Create a Secret manifest for the source provider credentials:
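A minimal sketch of such a manifest, with placeholder values; depending on your OpenStack identity setup, additional fields such as the domain, project, and region may also be required:
apiVersion: v1
kind: Secret
metadata:
  name: <secret>
  namespace: <namespace>
  ownerReferences: # 1
    - apiVersion: forklift.konveyor.io/v1beta1
      kind: Provider
      name: <provider_name>
      uid: <provider_uid>
  labels:
    createdForProviderType: openstack
    createdForResourceType: providers
type: Opaque
stringData:
  username: <user> # 2
  password: <password> # 3
  insecureSkipVerify: "false" # 4
  cacert: | # 5
    <ca_certificate>
  url: https://<identity_service>/v3 # 6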
1. The ownerReferences section is optional.
2. Specify the OpenStack user.
3. Specify the OpenStack user password.
4. Specify "true" to skip certificate verification, or "false" to verify the certificate. Defaults to "false" if not specified. If you skip certificate verification, the migration proceeds without requiring the certificate, but the transferred data is sent over an insecure connection and potentially sensitive data could be exposed.
5. When this field is not set and skip certificate verification is disabled, MTV attempts to use the system CA.
6. Specify the API endpoint URL, for example, https://<identity_service>/v3.
Create a Provider manifest for the source provider:
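A minimal sketch of such a manifest, with placeholder values:
apiVersion: forklift.konveyor.io/v1beta1
kind: Provider
metadata:
  name: <provider>
  namespace: <namespace>
spec:
  type: openstack
  url: https://<identity_service>/v3
  secret:
    name: <secret>
    namespace: <namespace>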
Create a NetworkMap manifest to map the source and destination networks:
1. Allowed values are pod and multus.
2. You can use either the id or the name parameter to specify the source network. For id, specify the OpenStack network UUID.
3. Specify a network attachment definition for each additional OpenShift Virtualization network.
4. Required only when type is multus. Specify the namespace of the OpenShift Virtualization network attachment definition.
Create a StorageMap manifest to map source and destination storage:
Optional: Create a Hook manifest to run custom code on a VM during the phase specified in the Plan CR:
Note: You can use the default hook-runner image or specify a custom image. If you specify a custom image, you do not have to specify a playbook.
Create a Plan manifest for the migration:
1. Specify the name of the Plan CR.
2. Specify only one network map and one storage map per plan.
3. Specify a network mapping, even if the VMs to be migrated are not assigned to a network. The mapping can be empty in this case.
4. Specify the name of the NetworkMap CR.
5. Specify a storage mapping, even if the VMs to be migrated are not assigned disk images. The mapping can be empty in this case.
6. Specify the name of the StorageMap CR.
7. You can use either the id or the name parameter to specify the source VMs.
8. Specify the OpenStack VM UUID.
9. Optional: You can specify up to two hooks for a VM. Each hook must run during a separate migration step.
10. Specify the name of the Hook CR.
11. Allowed values are PreHook, before the migration plan starts, or PostHook, after the migration is complete.
Create a Migration manifest to run the Plan CR:
Note: If you specify a cutover time, use the ISO 8601 format with the UTC time offset, for example, 2024-04-04T01:23:45.678+09:00.
11.5.1. Canceling a migration from the command-line interface
You can use the command-line interface (CLI) to cancel either an entire migration or the migration of specific virtual machines (VMs) while a migration is in progress.
Canceling an entire migration
Delete the Migration CR:
$ oc delete migration <migration> -n <namespace>
1. Specify the name of the Migration CR.
Canceling the migration of specific VMs
Add the specific VMs to the spec.cancel block of the Migration manifest. Example YAML for canceling the migrations of two VMs:
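A minimal sketch of such a manifest; the VM identifiers are placeholders:
apiVersion: forklift.konveyor.io/v1beta1
kind: Migration
metadata:
  name: <migration>
  namespace: <namespace>
spec:
  cancel:
    - id: vm-102 # 1
    - id: vm-203
    - name: rhel8-vm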
1. You can specify a VM by using the id key or the name key.
The value of the id key is the managed object reference for a VMware VM, or the VM UUID for a RHV VM.
Retrieve the Migration CR to monitor the progress of the remaining VMs:
$ oc get migration/<migration> -n <namespace> -o yaml
11.6. Migrating from an Open Virtual Appliance (OVA) source provider
You can migrate from Open Virtual Appliance (OVA) files that were created by VMware vSphere as a source provider by using the command-line interface (CLI).
Procedure
Create a Secret manifest for the source provider credentials:
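A minimal sketch of such a manifest, assuming the OVA files are stored on an NFS share:
apiVersion: v1
kind: Secret
metadata:
  name: <secret>
  namespace: <namespace>
  labels:
    createdForProviderType: ova
    createdForResourceType: providers
type: Opaque
stringData:
  url: <nfs_server>:/<nfs_share_path> # NFS share that contains the OVA files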
Create a Provider manifest for the source provider:
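A minimal sketch of such a manifest, again assuming an NFS share as the OVA source:
apiVersion: forklift.konveyor.io/v1beta1
kind: Provider
metadata:
  name: <provider>
  namespace: <namespace>
spec:
  type: ova
  url: <nfs_server>:/<nfs_share_path>
  secret:
    name: <secret>
    namespace: <namespace>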
Create a NetworkMap manifest to map the source and destination networks:
1. Allowed values are pod and multus.
2. Specify the OVA network Universal Unique ID (UUID).
3. Specify a network attachment definition for each additional OpenShift Virtualization network.
4. Required only when type is multus. Specify the namespace of the OpenShift Virtualization network attachment definition.
Create a StorageMap manifest to map source and destination storage:
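A minimal sketch of such a manifest, using the single dummy storage described in the callouts below:
apiVersion: forklift.konveyor.io/v1beta1
kind: StorageMap
metadata:
  name: <storage_map>
  namespace: <namespace>
spec:
  map:
    - destination:
        storageClass: <storage_class>
        accessMode: ReadWriteOnce # 1
      source:
        name: Dummy storage for source provider <provider_name> # 2
  provider:
    source:
      name: <source_provider>
      namespace: <namespace>
    destination:
      name: <destination_provider>
      namespace: <namespace>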
1. Allowed values are ReadWriteOnce and ReadWriteMany.
2. For OVA, the StorageMap can map only a single storage, which all the disks from the OVA are associated with, to a storage class at the destination. For this reason, the storage is referred to in the UI as "Dummy storage for source provider <provider_name>". In the YAML, write the phrase as it appears above, without the quotation marks and replacing <provider_name> with the actual name of the provider.
Optional: Create a Hook manifest to run custom code on a VM during the phase specified in the Plan CR:
Note: You can use the default hook-runner image or specify a custom image. If you specify a custom image, you do not have to specify a playbook.
Create a Plan manifest for the migration:
1. Specify the name of the Plan CR.
2. Specify only one network map and one storage map per plan.
3. Specify a network mapping, even if the VMs to be migrated are not assigned to a network. The mapping can be empty in this case.
4. Specify the name of the NetworkMap CR.
5. Specify a storage mapping, even if the VMs to be migrated are not assigned disk images. The mapping can be empty in this case.
6. Specify the name of the StorageMap CR.
7. You can use either the id or the name parameter to specify the source VMs.
8. Specify the OVA VM UUID.
9. Optional: You can specify up to two hooks for a VM. Each hook must run during a separate migration step.
10. Specify the name of the Hook CR.
11. Allowed values are PreHook, before the migration plan starts, or PostHook, after the migration is complete.
Create a Migration manifest to run the Plan CR:
Note: If you specify a cutover time, use the ISO 8601 format with the UTC time offset, for example, 2024-04-04T01:23:45.678+09:00.
11.6.1. Canceling a migration from the command-line interface
You can use the command-line interface (CLI) to cancel either an entire migration or the migration of specific virtual machines (VMs) while a migration is in progress.
Canceling an entire migration
Delete the Migration CR:
$ oc delete migration <migration> -n <namespace>
1. Specify the name of the Migration CR.
Canceling the migration of specific VMs
Add the specific VMs to the spec.cancel block of the Migration manifest. Example YAML for canceling the migrations of two VMs:
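A minimal sketch of such a manifest; the VM identifiers are placeholders:
apiVersion: forklift.konveyor.io/v1beta1
kind: Migration
metadata:
  name: <migration>
  namespace: <namespace>
spec:
  cancel:
    - id: vm-102 # 1
    - id: vm-203
    - name: rhel8-vm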
1. You can specify a VM by using the id key or the name key.
The value of the id key is the managed object reference for a VMware VM, or the VM UUID for a RHV VM.
Retrieve the Migration CR to monitor the progress of the remaining VMs:
$ oc get migration/<migration> -n <namespace> -o yaml
11.7. Migrating from a Red Hat OpenShift Virtualization source provider
You can use a Red Hat OpenShift Virtualization provider as either a source provider or as a destination provider. You can migrate from an OpenShift Virtualization source provider by using the command-line interface (CLI).
Procedure
Create a Secret manifest for the source provider credentials:
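A minimal sketch of such a manifest, with placeholder values and the numbered callouts shown as comments:
apiVersion: v1
kind: Secret
metadata:
  name: <secret>
  namespace: <namespace>
  ownerReferences: # 1
    - apiVersion: forklift.konveyor.io/v1beta1
      kind: Provider
      name: <provider_name>
      uid: <provider_uid>
  labels:
    createdForProviderType: openshift
    createdForResourceType: providers
type: Opaque
stringData:
  token: <service_account_token> # 2
  password: <password> # 3
  insecureSkipVerify: "false" # 4
  cacert: | # 5
    <ca_certificate>
  url: https://<api_server_host>:<port> # 6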
1. The ownerReferences section is optional.
2. Specify a token for a service account with cluster-admin privileges. If both token and url are left blank, the local OpenShift cluster is used.
3. Specify the user password.
4. Specify "true" to skip certificate verification, or "false" to verify the certificate. Defaults to "false" if not specified. If you skip certificate verification, the migration proceeds without requiring the certificate, but the transferred data is sent over an insecure connection and potentially sensitive data could be exposed.
5. When this field is not set and skip certificate verification is disabled, MTV attempts to use the system CA.
6. Specify the URL of the endpoint of the API server.
Create a Provider manifest for the source provider:
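A minimal sketch of such a manifest, with placeholder values:
apiVersion: forklift.konveyor.io/v1beta1
kind: Provider
metadata:
  name: <provider>
  namespace: <namespace>
spec:
  type: openshift
  url: https://<api_server_host>:<port>
  secret:
    name: <secret>
    namespace: <namespace>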
Create a NetworkMap manifest to map the source and destination networks:
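A minimal sketch of such a manifest, with placeholder values:
apiVersion: forklift.konveyor.io/v1beta1
kind: NetworkMap
metadata:
  name: <network_map>
  namespace: <namespace>
spec:
  map:
    - destination:
        type: pod # 1
      source:
        type: pod
    - destination:
        type: multus
        name: <network_attachment_definition> # 2
        namespace: <network_attachment_definition_namespace> # 3
      source:
        name: <network_namespace>/<network_name>
  provider:
    source:
      name: <source_provider>
      namespace: <namespace>
    destination:
      name: <destination_provider>
      namespace: <namespace>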
1. Allowed values are pod and multus.
2. Specify a network attachment definition for each additional OpenShift Virtualization network. Specify the namespace either by using the namespace property or with a name built as follows: <network_namespace>/<network_name>.
3. Required only when type is multus. Specify the namespace of the OpenShift Virtualization network attachment definition.
Create a StorageMap manifest to map source and destination storage:
1. Allowed values are ReadWriteOnce and ReadWriteMany.
Optional: Create a Hook manifest to run custom code on a VM during the phase specified in the Plan CR:
Note: You can use the default hook-runner image or specify a custom image. If you specify a custom image, you do not have to specify a playbook.
Create a Plan manifest for the migration:
1. Specify the name of the Plan CR.
2. Specify only one network map and one storage map per plan.
3. Specify a network mapping, even if the VMs to be migrated are not assigned to a network. The mapping can be empty in this case.
4. Specify the name of the NetworkMap CR.
5. Specify a storage mapping, even if the VMs to be migrated are not assigned disk images. The mapping can be empty in this case.
6. Specify the name of the StorageMap CR.
7. Optional: You can specify up to two hooks for a VM. Each hook must run during a separate migration step.
8. Specify the name of the Hook CR.
9. Allowed values are PreHook, before the migration plan starts, or PostHook, after the migration is complete.
Create a Migration manifest to run the Plan CR:
Note: If you specify a cutover time, use the ISO 8601 format with the UTC time offset, for example, 2024-04-04T01:23:45.678+09:00.
11.7.1. Canceling a migration from the command-line interface
You can use the command-line interface (CLI) to cancel either an entire migration or the migration of specific virtual machines (VMs) while a migration is in progress.
Canceling an entire migration
Delete the Migration CR:
$ oc delete migration <migration> -n <namespace>
1. Specify the name of the Migration CR.
Canceling the migration of specific VMs
Add the specific VMs to the spec.cancel block of the Migration manifest. Example YAML for canceling the migrations of two VMs:
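A minimal sketch of such a manifest; the VM identifiers are placeholders:
apiVersion: forklift.konveyor.io/v1beta1
kind: Migration
metadata:
  name: <migration>
  namespace: <namespace>
spec:
  cancel:
    - id: vm-102 # 1
    - id: vm-203
    - name: rhel8-vm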
1. You can specify a VM by using the id key or the name key.
The value of the id key is the managed object reference for a VMware VM, or the VM UUID for a RHV VM.
Retrieve the Migration CR to monitor the progress of the remaining VMs:
$ oc get migration/<migration> -n <namespace> -o yaml
Chapter 12. Advanced migration options
12.1. Changing precopy intervals for warm migration
You can change the snapshot interval by patching the ForkliftController custom resource (CR).
Procedure
Patch the ForkliftController CR:
$ oc patch forkliftcontroller/<forklift-controller> -n openshift-mtv -p '{"spec": {"controller_precopy_interval": <60>}}' --type=merge
1. Specify the precopy interval in minutes. The default value is 60.
You do not need to restart the forklift-controller pod.
12.2. Creating custom rules for the Validation service
The Validation service uses Open Policy Agent (OPA) policy rules to check the suitability of each virtual machine (VM) for migration. The Validation service generates a list of concerns for each VM, which are stored in the Provider Inventory service as VM attributes. The web console displays the concerns for each VM in the provider inventory.
You can create custom rules to extend the default ruleset of the Validation service. For example, you can create a rule that checks whether a VM has multiple disks.
12.2.1. About Rego files
Validation rules are written in Rego, the Open Policy Agent (OPA) native query language. The rules are stored as .rego files in the /usr/share/opa/policies/io/konveyor/forklift/<provider> directory of the Validation pod.
Each validation rule is defined in a separate .rego file and tests for a specific condition. If the condition evaluates as true, the rule adds a {"category", "label", "assessment"} hash to the concerns. The concerns content is added to the concerns key in the inventory record of the VM. The web console displays the content of the concerns key for each VM in the provider inventory.
The following .rego file example checks for distributed resource scheduling enabled in the cluster of a VMware VM:
drs_enabled.rego example
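A sketch of what this rule can look like, assuming the inventory exposes a drsEnabled attribute on the host's cluster (the label and assessment strings are illustrative):
package io.konveyor.forklift.vmware

has_drs_enabled {
    input.host.cluster.drsEnabled
}

concerns[flag] {
    has_drs_enabled
    flag := {
        "category": "Information",
        "label": "VM running in a DRS-enabled cluster",
        "assessment": "Distributed resource scheduling is not currently supported by OpenShift Virtualization. The VM can be migrated but it will not have this feature in the target environment."
    }
}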
12.2.2. Checking the default validation rules
Before you create a custom rule, you must check the default rules of the Validation service to ensure that you do not create a rule that redefines an existing default value.
Example: If a default rule contains the line default valid_input = false and you create a custom rule that contains the line default valid_input = true, the Validation service will not start.
Procedure
Connect to the terminal of the Validation pod:
$ oc rsh <validation_pod>
Go to the OPA policies directory for your provider:
$ cd /usr/share/opa/policies/io/konveyor/forklift/<provider>
1. Specify vmware or ovirt.
Search for the default policies:
$ grep -R "default" *
12.2.3. Creating a validation rule
You create a validation rule by applying a config map custom resource (CR) containing the rule to the Validation service.
- If you create a rule with the same name as an existing rule, the Validation service performs an OR operation with the rules.
- If you create a rule that contradicts a default rule, the Validation service will not start.
Validation rule example
Validation rules are based on virtual machine (VM) attributes collected by the Provider Inventory service.
For example, the VMware API uses this path to check whether a VMware VM has NUMA node affinity configured: MOR:VirtualMachine.config.extraConfig["numa.nodeAffinity"].
The Provider Inventory service simplifies this configuration and returns a testable attribute with a list value:
"numaNodeAffinity": [
"0",
"1"
],
"numaNodeAffinity": [
"0",
"1"
],
You create a Rego query, based on this attribute, and add it to the forklift-validation-config config map:
`count(input.numaNodeAffinity) != 0`
Procedure
Create a config map CR according to the following example:
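A sketch of such a config map, using an illustrative rule that flags VMs with more than one disk (the rule file name, label, and assessment strings are examples):
apiVersion: v1
kind: ConfigMap
metadata:
  name: forklift-validation-config
  namespace: openshift-mtv
data:
  vmware_multiple_disks.rego: |-
    package io.konveyor.forklift.vmware # 1

    has_multiple_disks { # 2
      count(input.disks) > 1
    }

    concerns[flag] {
      has_multiple_disks # 3
      flag := {
        "category": "Information", # 4
        "label": "Multiple disks detected",
        "assessment": "Multiple disks detected on this VM."
      }
    }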
1. Specify the provider package name. Allowed values are io.konveyor.forklift.vmware for VMware and io.konveyor.forklift.ovirt for Red Hat Virtualization.
2. Specify the concerns name and Rego query.
3. Specify the concerns name and flag parameter values.
4. Allowed values are Critical, Warning, and Information.
Stop the Validation pod by scaling the forklift-controller deployment to 0:
$ oc scale -n openshift-mtv --replicas=0 deployment/forklift-controller
Start the Validation pod by scaling the forklift-controller deployment to 1:
$ oc scale -n openshift-mtv --replicas=1 deployment/forklift-controller
Check the Validation pod log to verify that the pod started:
$ oc logs -f <validation_pod>
If the custom rule conflicts with a default rule, the Validation pod will not start.
Remove the source provider:
$ oc delete provider <provider> -n openshift-mtv
Add the source provider again to apply the new rule.
You must update the rules version after creating a custom rule so that the Inventory service detects the changes and validates the VMs.
12.2.4. Updating the inventory rules version
You must update the inventory rules version each time you update the rules so that the Provider Inventory service detects the changes and triggers the Validation service.
The rules version is recorded in a rules_version.rego file for each provider.
Procedure
Retrieve the current rules version:
$ GET https://forklift-validation/v1/data/io/konveyor/forklift/<provider>/rules_version
Example output:
{ "result": { "rules_version": 5 } }
Connect to the terminal of the Validation pod:
$ oc rsh <validation_pod>
Update the rules version in the /usr/share/opa/policies/io/konveyor/forklift/<provider>/rules_version.rego file.
Log out of the Validation pod terminal.
Verify the updated rules version:
$ GET https://forklift-validation/v1/data/io/konveyor/forklift/<provider>/rules_version
Example output:
{ "result": { "rules_version": 6 } }
12.3. Retrieving the Inventory service JSON
You retrieve the Inventory service JSON by sending an Inventory service query to a virtual machine (VM). The output contains an "input" key, which contains the inventory attributes that are queried by the Validation service rules.
You can create a validation rule based on any attribute in the "input" key, for example, input.snapshot.kind.
Procedure
Retrieve the routes for the project:
$ oc get route -n openshift-mtv
Retrieve the Inventory service route:
$ oc get route <inventory_service> -n openshift-mtv
Retrieve the access token:
$ TOKEN=$(oc whoami -t)
Trigger an HTTP GET request (for example, using curl):
$ curl -H "Authorization: Bearer $TOKEN" https://<inventory_service_route>/providers -k
Retrieve the UUID of a provider:
$ curl -H "Authorization: Bearer $TOKEN" https://<inventory_service_route>/providers/<provider> -k
Retrieve the VMs of a provider:
$ curl -H "Authorization: Bearer $TOKEN" https://<inventory_service_route>/providers/<provider>/<UUID>/vms -k
Retrieve the details of a VM:
$ curl -H "Authorization: Bearer $TOKEN" https://<inventory_service_route>/providers/<provider>/<UUID>/workloads/<vm> -k
12.4. Adding hooks to an MTV migration plan
You can add hooks to a Migration Toolkit for Virtualization (MTV) migration plan to perform automated operations on a VM, either before or after you migrate it.
12.4.1. About hooks for MTV migration plans
You can add hooks to Migration Toolkit for Virtualization (MTV) migration plans using either the MTV CLI or the MTV user interface, which is located in the Red Hat OpenShift web console.
- Pre-migration hooks are hooks that perform operations on a VM that is located on a provider. This prepares the VM for migration.
- Post-migration hooks are hooks that perform operations on a VM that has migrated to OpenShift Virtualization.
12.4.1.1. Default hook image
The default hook image for an MTV hook is quay.io/konveyor/hook-runner. The image is based on the Ansible Runner image with the addition of python-openshift to provide Ansible Kubernetes resources and a recent oc binary.
12.4.1.2. Hook execution
An Ansible playbook that is provided as part of a migration hook is mounted into the hook container as a ConfigMap. The hook container is run as a job on the desired cluster in the openshift-mtv namespace using the ServiceAccount you choose.
When you add a hook, you must specify the namespace where the Hook CR is located, the name of the hook, and whether the hook is a pre-migration hook or a post-migration hook.
In order for a hook to run on a VM, the VM must be started and available using SSH.
The illustration that follows shows the general process of using a migration hook. For specific procedures, see Adding a migration hook to a migration plan using the Red Hat OpenShift web console and Adding a migration hook to a migration plan using the CLI.
Figure 12.1. Adding a hook to a migration plan
Process:
1. Input your Ansible hook and credentials:
   - Input an Ansible hook image to the MTV controller, using either the UI or the CLI:
     - In the UI, specify the ansible-runner and enter the playbook.yml that contains the hook.
     - In the CLI, input the hook image, which specifies the playbook that runs the hook.
   - If you need additional data to run the playbook inside the pod, such as SSH data, create a Secret that contains credentials for the VM. The Secret is not mounted to the pod, but is called by the playbook.
     Note: This Secret is not the same as the Secret CR that contains the credentials of your source provider.
2. The MTV controller creates the ConfigMap, which contains:
   - workload.yml, which contains information about the VMs.
   - playbook.yml, the raw string playbook you want to execute.
   - plan.yml, which is the Plan CR.
   The ConfigMap contains the name of the VM and instructs the playbook what to do.
3. The MTV controller creates a job that starts the user-specified image and mounts the ConfigMap to the container.
4. The Ansible hook imports the Secret that the user previously entered.
The job runs a pre-migration hook or a post-migration hook as follows:
- For a pre-migration hook, the job logs into the VMs on the source provider using SSH and runs the hook.
- For a post-migration hook, the job logs into the VMs on OpenShift Virtualization using SSH and runs the hook.
12.4.2. Adding a migration hook to a migration plan using the Red Hat OpenShift web console
You can add a migration hook to an existing migration plan using the Red Hat OpenShift web console. Note that you need to run one command in the Migration Toolkit for Virtualization (MTV) CLI.
For example, you can create a hook to install the cloud-init service on a VM and write a file before migration.
You can run one pre-migration hook, one post-migration hook, or one of each per migration plan.
Prerequisites
- Migration plan
- Migration hook file, whose contents you copy and paste into the web console
- File containing the Secret for the source provider
- Red Hat OpenShift service account called by the hook, with at least write access for the namespace you are working in
- SSH access for VMs you want to migrate with the public key installed on the VMs
- VMs running on Microsoft Server only: Remote Execution enabled
Additional resources
For instructions for creating a service account, see Understanding and creating service accounts.
Procedure
- In the Red Hat OpenShift web console, click Migration → Plans for virtualization and then click the migration plan you want to add the hook to.
- Click Hooks.
For a pre-migration hook, perform the following steps:
- In the Pre migration hook section, toggle the Enable hook switch to Enable pre migration hook.
- Enter the Hook runner image. If you are specifying the spec.playbook, you need to use an image that has an ansible-runner.
- Paste your hook as a YAML file in the Ansible playbook text box.
For a post-migration hook, perform the following steps:
- In the Post migration hook section, toggle the Enable hook switch to Enable post migration hook.
- Enter the Hook runner image. If you are specifying the spec.playbook, you need to use an image that has an ansible-runner.
- Paste your hook as a YAML file in the Ansible playbook text box.
- At the top of the tab, click Update hooks.
In a terminal, enter the following command to associate each hook with your Red Hat OpenShift service account:
$ oc -n openshift-mtv patch hook <name_of_hook> \
  -p '{"spec":{"serviceAccount":"<service_account>"}}' --type merge
The example migration hook that follows ensures that the VM can be accessed using SSH, creates an SSH key, and runs two tasks: stopping the Maria database and generating a text file.
Example migration hook
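A sketch of such a playbook; the ssh-credentials Secret name, the workload.vm.ipaddress variable, and the use of the k8s_info module (from the kubernetes.core collection) are assumptions for illustration:
- name: Main
  hosts: localhost
  connection: local
  tasks:
    - name: Load VM data from the mounted ConfigMap
      include_vars:
        file: workload.yml
        name: workload

    - name: Read the SSH credentials Secret
      k8s_info:
        api_version: v1
        kind: Secret
        name: ssh-credentials
        namespace: openshift-mtv
      register: ssh_credentials

    - name: Ensure the SSH directory exists
      file:
        path: ~/.ssh
        state: directory
        mode: "0750"

    - name: Create the SSH private key file from the Secret
      copy:
        dest: ~/.ssh/id_rsa
        content: "{{ ssh_credentials.resources[0].data.key | b64decode }}"
        mode: "0600"

    - name: Stop the Maria database on the VM
      command: >
        ssh -o StrictHostKeyChecking=no
        root@{{ workload.vm.ipaddress }} "systemctl stop mariadb"

    - name: Generate a text file on the VM
      command: >
        ssh -o StrictHostKeyChecking=no
        root@{{ workload.vm.ipaddress }} "touch /premigration.txt"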
12.4.3. Adding a migration hook to a migration plan using the CLI
You can use a Hook CR to add a pre-migration hook or a post-migration hook to an existing migration plan using the Migration Toolkit for Virtualization (MTV) CLI.
For example, you can create a Hook CR to install the cloud-init service on a VM and write a file before migration.
You can run one pre-migration hook, one post-migration hook, or one of each per migration plan. Each hook needs its own Hook CR, but a Plan CR contains data for all the hooks it uses.
You can retrieve additional information stored in a secret or in a ConfigMap by using a k8s module.
Prerequisites
- Migration plan
- Migration hook image or the playbook containing the hook image
- File containing the Secret for the source provider
- Red Hat OpenShift service account called by the hook and that has at least write access for the namespace you are working in
- SSH access for VMs you want to migrate with the public key installed on the VMs
- VMs running on Microsoft Server only: Remote Execution enabled
Additional resources
For instructions for creating a service account, see Understanding and creating service accounts.
Procedure
If needed, create a Secret with an SSH private key for the VM.
- Choose an existing key or generate a key pair.
- Install the public key on the VM.
- Base64-encode the private key and add it to the Secret.
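A sketch of such a Secret follows; the name and the key field are assumptions, and any names that your playbook expects can be used:

apiVersion: v1
kind: Secret
metadata:
  name: ssh-credentials        # assumption: choose any name
  namespace: openshift-mtv
data:
  key: <base64_encoded_private_key>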
Encode your playbook by concatenating a file and piping it for Base64 encoding, for example:
$ cat playbook.yml | base64 -w0

Create a Hook CR:
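A sketch of the Hook CR follows; the image shown is assumed to be the default hook-runner image, and spec.playbook takes the Base64 string produced by the previous command:

apiVersion: forklift.konveyor.io/v1beta1
kind: Hook
metadata:
  name: <hook>
  namespace: openshift-mtv
spec:
  image: quay.io/konveyor/hook-runner    # assumption: default hook-runner image
  playbook: <base64_encoded_playbook>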
Note: You can use the default hook-runner image or specify a custom image. If you specify a custom image, you do not have to specify a playbook.
Note: To decode an attached playbook, retrieve the resource with custom output and pipe it to base64. For example:
$ oc get -n konveyor-forklift hook playbook -o \
  go-template='{{ .spec.playbook }}' | base64 -d

In the Plan CR of the migration, for each VM, add the following section to the end of the CR:
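The section has roughly the following shape (a sketch; the VM ID and hook name are placeholders):

  vms:
    - id: <vm_id>
      hooks:
        - hook:
            namespace: openshift-mtv
            name: <hook>
          step: PreHook    # 1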
1. Options are PreHook, to run the hook before the migration, and PostHook, to run the hook after the migration.
In order for a PreHook to run on a VM, the VM must be started and available via SSH.
The example migration hook ensures that the VM can be accessed using SSH, creates an SSH key, and runs two tasks: stopping the MariaDB database and generating a text file. A sketch of such a playbook is shown in the previous section.
Chapter 13. Upgrading the Migration Toolkit for Virtualization
You can upgrade the MTV Operator by using the Red Hat OpenShift web console to install the new version.
Procedure
- In the Red Hat OpenShift web console, click Operators → Installed Operators → Migration Toolkit for Virtualization Operator → Subscription.
Change the update channel to the correct release.
See Changing update channel in the Red Hat OpenShift documentation.
Confirm that Upgrade status changes from Up to date to Upgrade available. If it does not, restart the CatalogSource pod:
- Note the catalog source, for example, redhat-operators.
- From the command line, retrieve the catalog source pod:

  $ oc get pod -n openshift-marketplace | grep <catalog_source>

- Delete the pod:

  $ oc delete pod -n openshift-marketplace <catalog_source_pod>

  Upgrade status changes from Up to date to Upgrade available.
If you set Update approval on the Subscriptions tab to Automatic, the upgrade starts automatically.
If you set Update approval on the Subscriptions tab to Manual, approve the upgrade.
See Manually approving a pending upgrade in the Red Hat OpenShift documentation.
- If you are upgrading from MTV 2.2 and have defined VMware source providers, edit the VMware provider by adding a VDDK init image. Otherwise, the update will change the state of any VMware providers to Critical. For more information, see Adding a VMware source provider.
- If you mapped to NFS on the Red Hat OpenShift destination provider in MTV 2.2, edit the AccessModes and VolumeMode parameters in the NFS storage profile. Otherwise, the upgrade will invalidate the NFS mapping. For more information, see Customizing the storage profile.
Chapter 14. Uninstalling the Migration Toolkit for Virtualization
You can uninstall the Migration Toolkit for Virtualization (MTV) by using the Red Hat OpenShift web console or the command-line interface (CLI).
14.1. Uninstalling MTV by using the Red Hat OpenShift web console
You can uninstall Migration Toolkit for Virtualization (MTV) by using the Red Hat OpenShift web console.
Prerequisites
- You must be logged in as a user with cluster-admin privileges.
Procedure
- In the Red Hat OpenShift web console, click Operators > Installed Operators.
Click Migration Toolkit for Virtualization Operator.
The Operator Details page opens in the Details tab.
- Click the ForkliftController tab.
Click Actions and select Delete ForkliftController.
A confirmation window opens.
Click Delete.
The controller is removed.
Open the Details tab.
The Create ForkliftController button appears instead of the controller you deleted. There is no need to click it.
On the upper-right side of the page, click Actions and select Uninstall Operator.
A confirmation window opens, displaying any operand instances.
To delete all instances, select the Delete all operand instances for this operator checkbox. By default, the checkbox is cleared.
Important: If your Operator configured off-cluster resources, these will continue to run and will require manual cleanup.
Click Uninstall.
The Installed Operators page opens, and the Migration Toolkit for Virtualization Operator is removed from the list of installed Operators.
- Click Home > Overview.
In the Status section of the page, click Dynamic Plugins.
The Dynamic Plugins popup opens, listing forklift-console-plugin as a failed plugin. If the forklift-console-plugin does not appear as a failed plugin, refresh the web console.
Click forklift-console-plugin.
The ConsolePlugin details page opens in the Details tab.
On the upper right-hand side of the page, click Actions and select Delete ConsolePlugin from the list.
A confirmation window opens.
Click Delete.
The plugin is removed from the list of Dynamic plugins on the Overview page. If the plugin still appears, refresh the Overview page.
14.2. Uninstalling MTV from the command line
You can uninstall Migration Toolkit for Virtualization (MTV) from the command line.
This action does not remove resources managed by the MTV Operator, including custom resource definitions (CRDs) and custom resources (CRs). To remove these after uninstalling the MTV Operator, you might need to manually delete the MTV Operator CRDs.
Prerequisites
- You must be logged in as a user with cluster-admin privileges.
Procedure
Delete the ForkliftController by running the following command:

  $ oc delete ForkliftController --all -n openshift-mtv

Delete the subscription to the MTV Operator by running the following command:

  $ oc get subscription -o name | grep 'mtv-operator' | xargs oc delete

Delete the ClusterServiceVersion for the MTV Operator by running the following command:

  $ oc get clusterserviceversion -o name | grep 'mtv-operator' | xargs oc delete

Delete the console plugin CR by running the following command:

  $ oc delete ConsolePlugin forklift-console-plugin

Optional: Delete the custom resource definitions (CRDs) by running the following command:

  $ oc get crd -o name | grep 'forklift.konveyor.io' | xargs oc delete

Optional: Perform cleanup by deleting the MTV project by running the following command:

  $ oc delete project openshift-mtv
Chapter 15. MTV performance recommendations
The purpose of this section is to share recommendations for efficient and effective migration of virtual machines (VMs) using Migration Toolkit for Virtualization (MTV), based on findings observed through testing.
The data provided here was collected from testing in Red Hat Labs and is provided for reference only.
Overall, these numbers should be considered to show the best-case scenarios.
The observed performance of migration can differ from these results and depends on several factors.
15.1. Ensure fast storage and network speeds
Ensure fast storage and network speeds, both for VMware and Red Hat OpenShift (OCP) environments.
To perform fast migrations, VMware must have fast read access to datastores. Networking between VMware ESXi hosts should be fast; ensure a 10 GbE network connection and avoid network bottlenecks.
- Extend the VMware network to the OCP Workers Interface network environment.
- It is important to ensure that the VMware network offers high throughput (10 Gigabit Ethernet) and rapid networking to guarantee that the reception rates align with the read rate of the ESXi datastore.
- Be aware that the migration process uses significant network bandwidth and that the migration network is utilized. If other services utilize that network, it may have an impact on those services and their migration rates.
- For example, 200 to 325 MiB/s was the average network transfer rate from the vmnic for each ESXi host associated with transferring data to the OCP interface.
15.2. Ensure fast datastore read speeds for efficient and performant migrations
Datastore read rates impact the total transfer times, so it is essential to ensure fast reads are possible from the ESXi datastore to the ESXi host.
Example in numbers: 200 to 300 MiB/s was the average read rate for both vSphere and ESXi endpoints for a single ESXi server. When multiple ESXi servers are used, higher datastore read rates are possible.
15.3. Endpoint types
MTV 2.6 allows for the following vSphere provider options:
- ESXi endpoint (inventory and disk transfers from ESXi), introduced in MTV 2.6
- vCenter Server endpoint; no networks for the ESXi host (inventory and disk transfers from vCenter)
- vCenter endpoint and ESXi networks are available (inventory from vCenter, disk transfers from ESXi).
When transferring many VMs that are registered to multiple ESXi hosts, using the vCenter endpoint and ESXi network is suggested.
As of vSphere 7.0, ESXi hosts can label which network to use for NBD transport. This is accomplished by tagging the desired virtual network interface card (NIC) with the vSphereBackupNFC label. When this is done, MTV can utilize the ESXi interface for network transfer to OpenShift, as long as the worker and ESXi host interfaces are reachable. This is especially useful when migration users do not have access to the ESXi credentials but would like to control which ESXi interface is used for migration.
For more details, see (MTV-1230).
You can use the following ESXi command, which designates interface vmk2 for NBD backup:
esxcli network ip interface tag add -t vSphereBackupNFC -i vmk2
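To verify the tag afterward, the matching get subcommand can be used; this assumes your ESXi version provides it:

esxcli network ip interface tag get -i vmk2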
15.4. Set the ESXi host BIOS profile and ESXi Host Power Management for High Performance
Where possible, ensure that hosts used to perform migrations are set with BIOS profiles related to maximum performance. For hosts that use Host Power Management controlled within vSphere, check that High Performance is set.
Testing showed that when transferring more than 10 VMs with both BIOS and host power management set accordingly, migrations had an increase of 15 MiB/s in the average datastore read rate.
15.5. Avoid additional network load on VMware networks
You can reduce the network load on VMware networks by selecting the migration network when using the ESXi endpoint.
By incorporating a virtualization provider, MTV enables the selection of a specific network, which is accessible on the ESXi hosts, for the purpose of migrating virtual machines to OCP. Selecting this migration network from the ESXi host in the MTV UI will ensure that the transfer is performed using the selected network as an ESXi endpoint.
It is imperative to ensure that the network selected has connectivity to the OCP interface, has adequate bandwidth for migrations, and that the network interface is not saturated.
In environments with fast networks, such as 10GbE networks, migration network impacts can be expected to match the rate of ESXi datastore reads.
15.6. Control maximum concurrent disk migrations per ESXi host
Set the MAX_VM_INFLIGHT MTV variable to control the maximum number of concurrent VM transfers allowed for the ESXi host.
MTV allows for concurrency to be controlled using this variable; by default, it is set to 20.
When setting MAX_VM_INFLIGHT, consider the maximum number of concurrent VM transfers required per ESXi host. It is also important to consider the type of migration to be run concurrently. Warm migrations are migrations of a running VM that is migrated over a scheduled time.
Warm migrations use snapshots to compare and migrate only the differences between previous snapshots of the disk. The migration of the differences between snapshots happens over specific intervals before a final cut-over of the running VM to OpenShift occurs.
In MTV 2.6, MAX_VM_INFLIGHT reserves one transfer slot per VM, regardless of current migration activity for a specific snapshot or the number of disks that belong to a single VM. The total set by MAX_VM_INFLIGHT indicates how many concurrent VM transfers per ESXi host are allowed.
Examples
- MAX_VM_INFLIGHT = 20 and 2 ESXi hosts defined in the provider mean that each host can transfer 20 VMs.
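One way to change the value is to patch the ForkliftController CR; this sketch assumes the default instance name forklift-controller and the controller_max_vm_inflight spec field used by recent MTV releases:

$ oc patch forkliftcontroller/forklift-controller -n openshift-mtv \
  --type merge -p '{"spec": {"controller_max_vm_inflight": 20}}'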
15.7. Migrations are completed faster when migrating multiple VMs concurrently
When multiple VMs from a specific ESXi host are to be migrated, starting concurrent migrations for multiple VMs leads to faster migration times.
Testing demonstrated that migrating 10 VMs concurrently (each with a 50 GiB disk containing 35 GiB of data) from a single host is significantly faster than migrating the same number of VMs sequentially, one after another.
It is possible to increase concurrent migration to more than 10 virtual machines from a single host, but it does not show a significant improvement.
Examples
- 1 single-disk VM took 6 minutes, with a migration rate of 100 MiB/s
- 10 single-disk VMs took 22 minutes, with a migration rate of 272 MiB/s
- 20 single-disk VMs took 42 minutes, with a migration rate of 284 MiB/s
From the aforementioned examples, it is evident that the migration of 10 virtual machines simultaneously is three times faster than the migration of identical virtual machines in a sequential manner.
The migration rate was almost the same when moving 10 or 20 virtual machines simultaneously.
15.8. Migrations complete faster using multiple hosts
Using multiple hosts with registered VMs equally distributed among the ESXi hosts used for migrations leads to faster migration times.
Testing showed that when transferring more than 10 single-disk VMs, each containing 35 GiB of data out of a 50 GiB disk, using additional hosts can reduce migration time.
Examples
- 80 single disk VMs, containing 35 GiB of data each, using a single host took 2 hours and 43 minutes, with a migration rate of 294 MiB/s.
- 80 single disk VMs, containing 35 GiB of data each, using 8 ESXi hosts took 41 minutes, with a migration rate of 1,173 MiB/s.
From the aforementioned examples, it is evident that migrating 80 VMs from 8 ESXi hosts, 10 from each host, concurrently is four times faster than running the same VMs from a single ESXi host.
Migrating a larger number of VMs from more than 8 ESXi hosts concurrently could potentially show increased performance. However, it was not tested and therefore not recommended.
15.9. Multiple migration plans compared to a single large migration plan
The maximum number of disks that can be referenced by a single migration plan is 500. For more details, see (MTV-1203).
When attempting to migrate many VMs in a single migration plan, it can take some time for all migrations to start. By breaking up one migration plan into several migration plans, it is possible to start them at the same time.
Comparing migrations of:
- 500 VMs using 8 ESXi hosts in 1 plan, max_vm_inflight=100, took 5 hours and 10 minutes.
- 800 VMs using 8 ESXi hosts with 8 plans, max_vm_inflight=100, took 57 minutes.
Testing showed that by breaking one single large plan into multiple moderately sized plans, for example, 100 VMs per plan, the total migration time can be reduced.
15.10. Maximum values tested for cold migrations
- Maximum number of ESXi hosts tested: 8
- Maximum number of VMs in a single migration plan: 500
- Maximum number of VMs migrated in a single test: 5000
- Maximum number of migration plans performed concurrently: 40
- Maximum single disk size migrated: 6 TB disk, which contained 3 TB of data
- Maximum number of disks on a single VM migrated: 50
- Highest observed single datastore read rate from a single ESXi server: 312 MiB/second
- Highest observed multi-datastore read rate using eight ESXi servers and two datastores: 1,242 MiB/second
- Highest observed virtual NIC transfer rate to an OpenShift worker: 327 MiB/second
- Maximum migration transfer rate of a single disk: 162 MiB/s (rate observed when transferring a nonconcurrent migration of 1.5 TB of utilized data)
- Maximum cold migration transfer rate of multiple VMs (single disk) from a single ESXi host: 294 MiB/s (concurrent migration of 30 VMs, 35/50 GiB used, from a single ESXi host)
- Maximum cold migration transfer rate of multiple VMs (single disk) from multiple ESXi hosts: 1,173 MiB/s (concurrent migration of 80 VMs, 35/50 GiB used, from 8 ESXi servers, 10 VMs from each ESXi host)
15.11. Warm migration recommendations
The following recommendations are specific to warm migrations:
15.11.1. Migrate up to 400 disks in parallel
Testing involved migrating 200 VMs in parallel, with 2 disks each, using 8 ESXi hosts, for a total of 400 disks. No tests were run on migration plans migrating over 400 disks in parallel, so it is not recommended to migrate more than this number of disks in parallel.
15.11.2. Migrate up to 200 disks in parallel for the fastest rate
Testing was successfully performed on parallel disk migrations with 200, 300, and 400 disks. There was a decrease in the precopy migration rate, approximately 25%, between the tests migrating 200 disks and those migrating 300 and 400 disks.
Therefore, it is recommended to perform parallel disk migrations in groups of 200 or fewer, instead of 300 to 400 disks, unless a decline of 25% in precopy speed does not affect your cutover planning.
15.11.3. When possible, set cutover time to be immediately after a migration plan starts
To reduce the overall time of warm migrations, it is recommended to set the cutover to occur immediately after the migration plan is started. This causes MTV to run only one precopy per VM. This recommendation is valid, no matter how many VMs are in the migration plan.
15.11.4. Increase precopy intervals between snapshots
If you are creating many migration plans with a single VM and have enough time between the migration start and the cutover, increase the value of the controller_precopy_interval parameter to between 120 and 240 minutes, inclusive. The longer setting will reduce the total number of snapshots and disk transfers per VM before the cutover.
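As a sketch, assuming the parameter is exposed on the ForkliftController CR under spec and the default instance name forklift-controller:

$ oc patch forkliftcontroller/forklift-controller -n openshift-mtv \
  --type merge -p '{"spec": {"controller_precopy_interval": 120}}'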
15.12. Maximum values tested for warm migrations
- Maximum number of ESXi hosts tested: 8
- Maximum number of worker nodes: 12
- Maximum number of VMs in a single migration plan: 200
- Maximum number of total parallel disk transfers: 400, with 200 VMs, 6 ESXi hosts, and a transfer rate of 667 MB/s
- Maximum number of disks on a single VM migrated: 3
- Maximum number of parallel disk transfers per ESXi host: 68
- Maximum transfer rate observed of a single disk with no concurrent migrations: 76.5 MB/s
- Maximum transfer rate observed of multiple disks from a single ESXi host: 253 MB/s (concurrent migration of 10 VMs, 1 disk each, 35/50 GiB used per disk)
- Total transfer rate observed of multiple disks (210) from 8 ESXi hosts: 802 MB/s (concurrent migration of 70 VMs, 3 disks each, 35/50 GiB used per disk)
15.13. Increasing asynchronous I/O (AIO) sizes and buffer counts for NBD transport mode
This section describes how to change NBD transport NFC parameters for increased migration performance when using the Migration Toolkit for Virtualization (MTV).
Using AIO buffering is only suitable for Cold Migration use cases.
- Disable AIO settings before initializing Warm Migration. For more details, see Disabling AIO Buffer Configuration.
15.13.1. Key findings
The best migration performance was achieved by migrating using multiple VMs (10) on a single ESXi host with the following values:
- VixDiskLib.nfcAio.Session.BufSizeIn64KB=16
- vixDiskLib.nfcAio.Session.BufCount=4
The following improvements were noted when using AIO buffer (Asynchronous Buffer Counts) settings:
- Migration time was reduced by 31.1%, from 0:24:32 to 0:16:54.
- Read rate was increased from 347.83 MB/s to 504.93 MB/s.
- There was no significant improvement observed when using AIO buffer settings with a single VM.
- There was no significant improvement observed when using AIO buffer settings with multiple VMs from multiple hosts.
15.13.2. Enabling AIO buffer configuration
Validating Controller Pod support for AIO values
Ensure that the forklift-controller pod in the openshift-mtv namespace supports the AIO buffer values.
Since the pod name suffix is dynamic, check the pod name first by running the following command:

  $ oc get pods -n openshift-mtv | grep forklift-controller | awk '{print $1}'

The example output is as follows:

  forklift-controller-667f57c8f8-qllnx

Check the environment variables of the pod by running:

  $ oc get pod forklift-controller-667f57c8f8-qllnx -n openshift-mtv -o yaml

Check for the following lines in the output:

  ...
  - name: VIRT_V2V_EXTRA_ARGS
  - name: VIRT_V2V_EXTRA_CONF_CONFIG_MAP
  ...
Editing ForkliftController Configuration
In the openshift-mtv namespace, edit the ForkliftController object to include the AIO buffer values by running the following command:

  $ oc edit forkliftcontroller -n openshift-mtv

Add the following under the spec section:

  virt_v2v_extra_args: "--vddk-config /mnt/extra-v2v-conf/input.conf"
  virt_v2v_extra_conf_config_map: "perf"
Creating a ConfigMap named perf
Create the required ConfigMap using the following command:
$ oc -n openshift-mtv create cm perf
Preparing the ConfigMap content
Convert the desired buffer configuration values to Base64. For example, for 16/4:
$ echo -e "VixDiskLib.nfcAio.Session.BufSizeIn64KB=16\nvixDiskLib.nfcAio.Session.BufCount=4" | base64

The output will be similar to the following:
Vml4RGlza0xpYi5uZmNBaW8uU2Vzc2lvbi5CdWZTaXplSW42NEtCPTE2CnZpeERpc2tMaWIubmZjQWlvLlNlc3Npb24uQnVmQ291bnQ9NAo=
Editing the ConfigMap
Update the perf ConfigMap with the Base64 string under the binaryData section, for example:
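A sketch of the resulting ConfigMap follows; the key name input.conf is an assumption chosen to match the --vddk-config path set earlier:

apiVersion: v1
kind: ConfigMap
metadata:
  name: perf
  namespace: openshift-mtv
binaryData:
  input.conf: Vml4RGlza0xpYi5uZmNBaW8uU2Vzc2lvbi5CdWZTaXplSW42NEtCPTE2CnZpeERpc2tMaWIubmZjQWlvLlNlc3Npb24uQnVmQ291bnQ9NAo=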
Restarting the Forklift Controller Pod
- Restart the forklift-controller pod to apply the new configuration (see the example command after this list).
- Ensure the VIRT_V2V_EXTRA_ARGS environment variable reflects the updated settings.
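One way to restart the pod is to delete it and let its Deployment recreate it, reusing the pod-name lookup from earlier:

$ oc -n openshift-mtv delete pod $(oc get pods -n openshift-mtv | grep forklift-controller | awk '{print $1}')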
Verifying migration logs
Run a migration plan and check the logs of the migration pod. Confirm that the AIO buffer settings are passed as parameters, particularly the --vddk-config value. For example:

  exec: /usr/bin/virt-v2v … --vddk-config /mnt/extra-v2v-conf/input.conf

Note: The above log line was captured when using debug_level = 4.
Inspecting ConfigMap content in the migration pod
Log in to the migration pod and verify the buffer settings using the following command:
$ cat /mnt/extra-v2v-conf/input.conf

The example output is as follows:
VixDiskLib.nfcAio.Session.BufSizeIn64KB=16
vixDiskLib.nfcAio.Session.BufCount=4
Enabling Debugging (optional)
To enable debug logs, convert the configuration to Base64, including a high log level:
$ echo -e "VixDiskLib.nfcAio.Session.BufSizeIn64KB=16\nvixDiskLib.nfcAio.Session.BufCount=4\nVixDiskLib.nfc.LogLevel=4" | base64

Note: Adding a high log level will degrade performance and is for debugging purposes only.
15.13.3. Disabling AIO Buffer Configuration
To disable the AIO buffer configuration, complete the following steps:
Edit the ForkliftController Object: Remove the previously added lines from the spec section in the ForkliftController object:
  $ oc edit forkliftcontroller -n openshift-mtv

Remove the following lines:

  virt_v2v_extra_args: "--vddk-config /mnt/extra-v2v-conf/input.conf"
  virt_v2v_extra_conf_config_map: "perf"

Delete the ConfigMap: Remove the perf ConfigMap that was created earlier:

  $ oc delete cm perf -n openshift-mtv

- Restart the Forklift Controller Pod (optional).
If needed, ensure the changes take effect by restarting the forklift-controller pod.
15.13.4. Key requirements for AIO Buffer (Asynchronous Buffer Counts) support
VDDK and vSphere Versions
Support is based upon tests performed using the following versions:
- vSphere: 7.0.3
- VDDK: 7.0.3
- For other VDDK and vSphere versions, check the AIO buffer support in the official VMware documentation.
Chapter 16. Troubleshooting
This section provides information for troubleshooting common migration issues.
16.1. Error messages
This section describes error messages and how to resolve them.
warm import retry limit reached
The warm import retry limit reached error message is displayed during a warm migration if a VMware virtual machine (VM) has reached the maximum number (28) of changed block tracking (CBT) snapshots during the precopy stage.
To resolve this problem, delete some of the CBT snapshots from the VM and restart the migration plan.
Unable to resize disk image to required size
The Unable to resize disk image to required size error message is displayed when migration fails because a virtual machine on the target provider uses persistent volumes with an EXT4 file system on block storage. The problem occurs because the default overhead that is assumed by CDI does not completely include the reserved space for the root partition.
To resolve this problem, increase the file system overhead in CDI to more than 10%.
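For example, a sketch of raising the global filesystem overhead to 13% by patching the CDI CR; the instance name cdi and the exact value are assumptions that depend on your cluster and storage:

$ oc patch cdi cdi --type merge \
  -p '{"spec": {"config": {"filesystemOverhead": {"global": "0.13"}}}}'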
16.2. Using the must-gather tool
You can collect logs and information about MTV custom resources (CRs) by using the must-gather tool. You must attach a must-gather data file to all customer cases.
You can gather data for a specific namespace, migration plan, or virtual machine (VM) by using the filtering options.
If you specify a non-existent resource in the filtered must-gather command, no archive file is created.
Prerequisites
- You must be logged in to the OpenShift Virtualization cluster as a user with the cluster-admin role.
- You must have the Red Hat OpenShift CLI (oc) installed.
Collecting logs and CR information
- Navigate to the directory where you want to store the must-gather data.
- Run the oc adm must-gather command:

  $ oc adm must-gather --image=registry.redhat.io/migration-toolkit-virtualization/mtv-must-gather-rhel8:2.7.12

  The data is saved as /must-gather/must-gather.tar.gz. You can upload this file to a support case on the Red Hat Customer Portal.
- Optional: Run the oc adm must-gather command with the following options to gather filtered data:

  Namespace:

  $ oc adm must-gather --image=registry.redhat.io/migration-toolkit-virtualization/mtv-must-gather-rhel8:2.7.12 \
    -- NS=<namespace> /usr/bin/targeted

  Migration plan:

  $ oc adm must-gather --image=registry.redhat.io/migration-toolkit-virtualization/mtv-must-gather-rhel8:2.7.12 \
    -- PLAN=<migration_plan> /usr/bin/targeted

  Virtual machine:

  $ oc adm must-gather --image=registry.redhat.io/migration-toolkit-virtualization/mtv-must-gather-rhel8:2.7.12 \
    -- VM=<vm_id> NS=<namespace> /usr/bin/targeted  1

  1. Specify the VM ID as it appears in the Plan CR.
16.3. Architecture
This section describes MTV custom resources, services, and workflows.
16.3.1. MTV custom resources and services
The Migration Toolkit for Virtualization (MTV) is provided as a Red Hat OpenShift Operator. It creates and manages the following custom resources (CRs) and services.
MTV custom resources
- Provider CR stores attributes that enable MTV to connect to and interact with the source and target providers.
- NetworkMapping CR maps the networks of the source and target providers.
- StorageMapping CR maps the storage of the source and target providers.
- Plan CR contains a list of VMs with the same migration parameters and associated network and storage mappings.
- Migration CR runs a migration plan. Only one Migration CR per migration plan can run at a given time. You can create multiple Migration CRs for a single Plan CR.
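For illustration, a minimal sketch of how these CRs reference each other in a Plan CR; all names are placeholders:

apiVersion: forklift.konveyor.io/v1beta1
kind: Plan
metadata:
  name: <plan>
  namespace: openshift-mtv
spec:
  provider:
    source:
      name: <source_provider>
      namespace: openshift-mtv
    destination:
      name: <destination_provider>
      namespace: openshift-mtv
  map:
    network:
      name: <network_map>
      namespace: openshift-mtv
    storage:
      name: <storage_map>
      namespace: openshift-mtv
  vms:
    - name: <vm_name>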
MTV services
- The Inventory service performs the following actions:
  - Connects to the source and target providers.
  - Maintains a local inventory for mappings and plans.
  - Stores VM configurations.
  - Runs the Validation service if a VM configuration change is detected.
- The Validation service checks the suitability of a VM for migration by applying rules.
- The Migration Controller service orchestrates migrations. When you create a migration plan, the Migration Controller service validates the plan and adds a status label. If the plan fails validation, the plan status is Not ready and the plan cannot be used to perform a migration. If the plan passes validation, the plan status is Ready and it can be used to perform a migration. After a successful migration, the Migration Controller service changes the plan status to Completed.
- The Populator Controller service orchestrates disk transfers using Volume Populators.
- The KubeVirt Controller and Containerized Data Importer (CDI) Controller services handle most technical operations.
16.3.2. High-level migration workflow
The high-level workflow shows the migration process from the point of view of the user:
- You create a source provider, a target provider, a network mapping, and a storage mapping.
- You create a Plan custom resource (CR) that includes the following resources:
  - Source provider
  - Target provider, if MTV is not installed on the target cluster
  - Network mapping
  - Storage mapping
  - One or more virtual machines (VMs)
- You run a migration plan by creating a Migration CR that references the Plan CR. If you cannot migrate all the VMs for any reason, you can create multiple Migration CRs for the same Plan CR until all VMs are migrated.
- For each VM in the Plan CR, the Migration Controller service records the VM migration progress in the Migration CR.
- Once the data transfer for each VM in the Plan CR completes, the Migration Controller service creates a VirtualMachine CR.
- When all VMs have been migrated, the Migration Controller service updates the status of the Plan CR to Completed. The power state of each source VM is maintained after migration.
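For illustration, a minimal sketch of a Migration CR that references a Plan CR; names are placeholders:

apiVersion: forklift.konveyor.io/v1beta1
kind: Migration
metadata:
  name: <migration>
  namespace: openshift-mtv
spec:
  plan:
    name: <plan>
    namespace: openshift-mtv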
16.3.3. Detailed migration workflow
You can use the detailed migration workflow to troubleshoot a failed migration.
The workflow describes the following steps:
Warm Migration or migration to a remote OpenShift cluster:
- When you create the Migration custom resource (CR) to run a migration plan, the Migration Controller service creates a DataVolume CR for each source VM disk.
- For each VM disk:
  - The Containerized Data Importer (CDI) Controller service creates a persistent volume claim (PVC) based on the parameters specified in the DataVolume CR.
  - If the StorageClass has a dynamic provisioner, the persistent volume (PV) is dynamically provisioned by the StorageClass provisioner.
  - The CDI Controller service creates an importer pod.
  - The importer pod streams the VM disk to the PV.
- After the VM disks are transferred:
  - The Migration Controller service creates a conversion pod with the PVCs attached to it when importing from VMware.
  - The conversion pod runs virt-v2v, which installs and configures device drivers on the PVCs of the target VM.
  - The Migration Controller service creates a VirtualMachine CR for each source virtual machine (VM), connected to the PVCs.
  - If the VM ran on the source environment, the Migration Controller powers on the VM, and the KubeVirt Controller service creates a virt-launcher pod and a VirtualMachineInstance CR. The virt-launcher pod runs QEMU-KVM with the PVCs attached as VM disks.
Cold migration from RHV or OpenStack to the local OpenShift cluster:
- When you create a Migration custom resource (CR) to run a migration plan, the Migration Controller service creates, for each source VM disk, a PersistentVolumeClaim CR and an OvirtVolumePopulator CR when the source is RHV, or an OpenstackVolumePopulator CR when the source is OpenStack.
- For each VM disk:
  - The Populator Controller service creates a temporary persistent volume claim (PVC).
  - If the StorageClass has a dynamic provisioner, the persistent volume (PV) is dynamically provisioned by the StorageClass provisioner.
  - The Migration Controller service creates a dummy pod to bind all PVCs. The name of the pod contains pvcinit.
  - The Populator Controller service creates a populator pod.
  - The populator pod transfers the disk data to the PV.
- After the VM disks are transferred:
  - The temporary PVC is deleted, and the initial PVC points to the PV with the data.
  - The Migration Controller service creates a VirtualMachine CR for each source virtual machine (VM), connected to the PVCs.
  - If the VM ran on the source environment, the Migration Controller powers on the VM, and the KubeVirt Controller service creates a virt-launcher pod and a VirtualMachineInstance CR. The virt-launcher pod runs QEMU-KVM with the PVCs attached as VM disks.
Cold migration from VMWare to the local OpenShift cluster:
- When you create a Migration custom resource (CR) to run a migration plan, the Migration Controller service creates a DataVolume CR for each source VM disk.
- For each VM disk:
  - The Containerized Data Importer (CDI) Controller service creates a blank persistent volume claim (PVC) based on the parameters specified in the DataVolume CR.
  - If the StorageClass has a dynamic provisioner, the persistent volume (PV) is dynamically provisioned by the StorageClass provisioner.
- For all VM disks:
  - The Migration Controller service creates a dummy pod to bind all PVCs. The name of the pod contains pvcinit.
  - The Migration Controller service creates a conversion pod for all PVCs.
  - The conversion pod runs virt-v2v, which converts the VM to the KVM hypervisor and transfers the disks' data to their corresponding PVs.
- After the VM disks are transferred:
  - The Migration Controller service creates a VirtualMachine CR for each source virtual machine (VM), connected to the PVCs.
  - If the VM ran on the source environment, the Migration Controller powers on the VM, and the KubeVirt Controller service creates a virt-launcher pod and a VirtualMachineInstance CR. The virt-launcher pod runs QEMU-KVM with the PVCs attached as VM disks.
16.3.4. How MTV uses the virt-v2v tool
The Migration Toolkit for Virtualization (MTV) uses the virt-v2v tool to convert the disk image of a VM into a format compatible with OpenShift Virtualization. The tool makes migrations easier because it automatically performs the tasks needed to make your VMs work with OpenShift Virtualization, such as enabling paravirtualized VirtIO drivers in the converted virtual machine, if possible, and installing the QEMU guest agent.
virt-v2v is included in Red Hat Enterprise Linux (RHEL) versions 7 and later.
16.3.4.1. Main functions of virt-v2v in MTV migrations
During migration, MTV uses virt-v2v to collect metadata about VMs, make necessary changes to VM disks, and copy the disks containing the VMs to OpenShift Virtualization.
virt-v2v makes the following changes to VM disks to prepare them for migration:
Additions:
- Injection of VirtIO drivers, for example, network or disk drivers.
- Preparation of hypervisor-specific tools or agents, for example, a QEMU guest agent installation.
- Modification of boot configuration, for example, updated bootloader or boot entries.
Removals:
- Unnecessary or former hypervisor-specific files, for example, VMware tools or VirtualBox additions.
- Old network driver configurations, for example, removing VMware-specific NIC drivers.
- Configuration settings that are incompatible with the target system, for example, old boot settings.
If you are migrating from VMware or from OVA files, virt-v2v also sets their IP addresses either during the migration or during the first reboot of the VMs after migration.
You can also run pre-defined Ansible hooks before or after a migration using MTV. For more information, see Adding hooks to an MTV migration plan.
These hooks do not necessarily use virt-v2v.
16.3.4.2. Customizing, removing, and installing files
MTV uses virt-v2v to perform additional guest customizations during the conversion, such as the following actions:
- Customization to preserve IP addresses
- Customization to preserve drive letters
For RHEL-based guests, virt-v2v attempts to install the guest agent from the Red Hat registry. If the migration is run in a detached environment, the installer will fail, and you must use hooks, or other automation, to install the guest agent.
For more information, see the virt-v2v man reference pages.
16.3.4.3. Permissions and virt-v2v
virt-v2v does not require permissions or access credentials for the guest operating system itself because virt-v2v is not run against a running VM, but only against the disks of a VM.
16.4. Logs and custom resources
You can download logs and custom resource (CR) information for troubleshooting. For more information, see the detailed migration workflow.
16.4.1. Collected logs and custom resource information
You can download logs and custom resource (CR) yaml files for the following targets by using the Red Hat OpenShift web console or the command-line interface (CLI):
- Migration plan: Web console or CLI.
- Virtual machine: Web console or CLI.
- Namespace: CLI only.
The must-gather tool collects the following logs and CR files in an archive file:
CRs:
- DataVolume CR: Represents a disk mounted on a migrated VM.
- VirtualMachine CR: Represents a migrated VM.
- Plan CR: Defines the VMs and storage and network mapping.
- Job CR: Optional: Represents a pre-migration hook, a post-migration hook, or both.
Logs:
- importer pod: Disk-to-data-volume conversion log. The importer pod naming convention is importer-<migration_plan>-<vm_id><5_char_id>, for example, importer-mig-plan-ed90dfc6-9a17-4a8btnfh, where ed90dfc6-9a17-4a8 is a truncated RHV VM ID and btnfh is the generated 5-character ID.
- conversion pod: VM conversion log. The conversion pod runs virt-v2v, which installs and configures device drivers on the PVCs of the VM. The conversion pod naming convention is <migration_plan>-<vm_id><5_char_id>.
- virt-launcher pod: VM launcher log. When a migrated VM is powered on, the virt-launcher pod runs QEMU-KVM with the PVCs attached as VM disks.
- forklift-controller pod: The log is filtered for the migration plan, virtual machine, or namespace specified by the must-gather command.
- forklift-must-gather-api pod: The log is filtered for the migration plan, virtual machine, or namespace specified by the must-gather command.
- hook-job pod: The log is filtered for hook jobs. The hook-job naming convention is <migration_plan>-<vm_id><5_char_id>, for example, plan2j-vm-3696-posthook-4mx85 or plan2j-vm-3696-prehook-mwqnl.
Note: Empty or excluded log files are not included in the must-gather archive file.
Example must-gather archive structure for a VMware migration plan
16.4.2. Downloading logs and custom resource information from the web console
You can download logs and information about custom resources (CRs) for a completed, failed, or canceled migration plan or for migrated virtual machines (VMs) from the Red Hat OpenShift web console.
Procedure
- In the Red Hat OpenShift web console, click Migration → Plans for virtualization.
- Click Get logs beside a migration plan name.
In the Get logs window, click Get logs.
The logs are collected. A Log collection complete message is displayed.
- Click Download logs to download the archive file.
- To download logs for a migrated VM, click a migration plan name and then click Get logs beside the VM.
16.4.3. Accessing logs and custom resource information from the command line
You can access logs and information about custom resources (CRs) from the command line by using the must-gather tool. You must attach a must-gather data file to all customer cases.
You can gather data for a specific namespace, a completed, failed, or canceled migration plan, or a migrated virtual machine (VM) by using the filtering options.
If you specify a non-existent resource in the filtered must-gather command, no archive file is created.
Prerequisites
- You must be logged in to the OpenShift Virtualization cluster as a user with the cluster-admin role.
- You must have the Red Hat OpenShift CLI (oc) installed.
Procedure
- Navigate to the directory where you want to store the must-gather data.
- Run the oc adm must-gather command:

  $ oc adm must-gather --image=registry.redhat.io/migration-toolkit-virtualization/mtv-must-gather-rhel8:2.7.12

  The data is saved as /must-gather/must-gather.tar.gz. You can upload this file to a support case on the Red Hat Customer Portal.
- Optional: Run the oc adm must-gather command with the following options to gather filtered data:

  Namespace:

  $ oc adm must-gather --image=registry.redhat.io/migration-toolkit-virtualization/mtv-must-gather-rhel8:2.7.12 \
    -- NS=<namespace> /usr/bin/targeted

  Migration plan:

  $ oc adm must-gather --image=registry.redhat.io/migration-toolkit-virtualization/mtv-must-gather-rhel8:2.7.12 \
    -- PLAN=<migration_plan> /usr/bin/targeted

  Virtual machine:

  $ oc adm must-gather --image=registry.redhat.io/migration-toolkit-virtualization/mtv-must-gather-rhel8:2.7.12 \
    -- VM=<vm_name> NS=<namespace> /usr/bin/targeted  1

  1. You must specify the VM name, not the VM ID, as it appears in the Plan CR.
Chapter 17. Telemetry
17.1. Telemetry
Red Hat uses telemetry to collect anonymous usage data from Migration Toolkit for Virtualization (MTV) installations to help us improve the usability and efficiency of MTV.
MTV collects the following data:
- Migration plan status: The number of migrations. Includes those that failed, succeeded, or were canceled.
- Provider: The number of migrations per provider. Includes Red Hat Virtualization, vSphere, OpenStack, OVA, and OpenShift Virtualization providers.
- Mode: The number of migrations by mode. Includes cold and warm migrations.
- Target: The number of migrations by target. Includes local and remote migrations.
- Plan ID: The ID number of the migration plan. The number is assigned by MTV.
Metrics are calculated every 10 seconds and are reported per week, per month, and per year.
Chapter 18. Additional information
18.1. MTV performance addendum
The data provided here was collected from testing in Red Hat Labs and is provided for reference only.
Overall, these numbers should be considered to show the best-case scenarios.
The observed performance of migration can differ from these results and depends on several factors.
18.1.1. ESXi performance
Single ESXi performance
Test migration using the same ESXi host.
In each iteration, the total VMs are increased, to display the impact of concurrent migration on the duration.
The results show that migration time is linear when increasing the total VMs (50 GiB disk, Utilization 70%).
The optimal number of VMs per ESXi is 10.
| Test Case Description | MTV | VDDK | max_vm inflight | Migration Type | Total Duration |
|---|---|---|---|---|---|
| cold migration, 10 VMs, Single ESXi, Private Network [a] | 2.6 | 7.0.3 | 100 | cold | 0:21:39 |
| cold migration, 20 VMs, Single ESXi, Private Network | 2.6 | 7.0.3 | 100 | cold | 0:41:16 |
| cold migration, 30 VMs, Single ESXi, Private Network | 2.6 | 7.0.3 | 100 | cold | 1:00:59 |
| cold migration, 40 VMs, Single ESXi, Private Network | 2.6 | 7.0.3 | 100 | cold | 1:23:02 |
| cold migration, 50 VMs, Single ESXi, Private Network | 2.6 | 7.0.3 | 100 | cold | 1:46:24 |
| cold migration, 80 VMs, Single ESXi, Private Network | 2.6 | 7.0.3 | 100 | cold | 2:42:49 |
| cold migration, 100 VMs, Single ESXi, Private Network | 2.6 | 7.0.3 | 100 | cold | 3:25:15 |
[a] Private Network refers to a non-Management network.
Multi ESXi hosts and single data store
In each iteration, the number of ESXi hosts was increased, to show that increasing the number of ESXi hosts improves the migration time (50 GiB disk, Utilization 70%).
| Test Case Description | MTV | VDDK | Max_vm inflight | Migration Type | Total Duration |
|---|---|---|---|---|---|
| cold migration, 100 VMs, Single ESXi, Private Network [a] | 2.6 | 7.0.3 | 100 | cold | 3:25:15 |
| cold migration, 100 VMs, 4 ESXi hosts (25 VMs per host), Private Network | 2.6 | 7.0.3 | 100 | cold | 1:22:27 |
| cold migration, 100 VMs, 5 ESXi hosts (20 VMs per host), Private Network, 1 DataStore | 2.6 | 7.0.3 | 100 | cold | 1:04:57 |
[a] Private Network refers to a non-Management network.
18.1.2. Different migration network performance
In each iteration, the Migration Network was changed, using the provider, to find the fastest network for migration.
The results show that there is no degradation using management networks compared to non-management networks when all interfaces and network speeds are the same.
| Test Case Description | MTV | VDDK | max_vm inflight | Migration Type | Total Duration |
|---|---|---|---|---|---|
| cold migration, 10 VMs, Single ESXi, MGMT Network | 2.6 | 7.0.3 | 100 | cold | 0:21:30 |
| cold migration, 10 VMs, Single ESXi, Private Network [a] | 2.6 | 7.0.3 | 20 | cold | 0:21:20 |
| cold migration, 10 VMs, Single ESXi, Default Network | 2.6.2 | 7.0.3 | 20 | cold | 0:21:30 |
[a] Private Network refers to a non-Management network.